Informatics
  • Article
  • Open Access

22 January 2026

Digital Skills and Employer Transparency: Two Key Drivers Reinforcing Positive AI Attitudes and Perception Among Europeans

1 Department of Psychology “Renzo Canestrari”, Alma Mater Studiorum Università di Bologna, 47521 Cesena, Italy
2 Department of Psychology, Università di Chieti-Pescara ‘G. d’Annunzio’, Via dei Vestini, 31, 66013 Chieti, Italy
* Authors to whom correspondence should be addressed.

Abstract

Using 2024 Eurobarometer survey data from 26,415 workers in 27 EU countries, this study examines how digital skills and employer transparency shape workers’ attitudes toward and perception of artificial intelligence (AI). Drawing on information systems and behavioral theories, regression analyses reveal that digital skills strongly predict an augmentation-dominant attitude. Workers with higher digital skills view AI as complementary rather than threatening, with the augmentation attitude mediating 56% of the skills–perception relationship. In addition, employer transparency attenuates the translation of a replacement attitude into a negative perception of AI in the workplace. Organizations and policymakers should prioritize digital upskilling and enforce workplace AI transparency requirements to foster positive attitudes and perceptions, recognizing that skills development and organizational communication are equally vital for the successful integration of AI in the workplace.

1. Introduction

A European survey conducted in 2024 indicates that Europe remains in an early stage of integrating AI into the workplace. Compared with 2023, the use of AI in European Union (EU) enterprises increased by only 5.45 percentage points, roughly from 8% to 13.5% [1]. This is explained by the fact that, compared with other leading nations such as the US and China [2], the EU is committed to cautious and responsible AI integration, guided by a risk-based approach [3]. For instance, although the US has taken several steps toward AI regulation, its focus is now shifting to accelerating and promoting the use of AI. A recent executive order building on the National AI Initiative Act of 2020 aims at removing barriers, deregulating, and accelerating adoption. As a consequence, safety-oriented orders such as Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence are being reevaluated [4]. Another recent executive order aims to accelerate data center infrastructure [5]. This marks a reversal of the US’s earlier emphasis on stricter control. China’s regulations, the “Interim Measures for the Management of Generative Artificial Intelligence Services,” entered into force in 2023; China’s approach is thus closer to the EU’s than to the US’s. These interim measures draw on pre-existing legislation, notably the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law (PIPL). Overall, the measures focus on AI content labeling, reinforcing adherence to existing AI laws, mandating legal sourcing of training data, and aligning AI services with society’s current value system [6]. The EU’s commitment, in turn, is evident in numerous initiatives: the EU is among the first jurisdictions to pass comprehensive AI legislation, the 2024 EU AI Act.
The EU is also reinforcing the General Data Protection Regulation (GDPR) and platform–work rules, while the Digital Europe Programme (DEP) 2021–2027 focuses on strengthening Europe’s digital capacities. The European Digital Innovation Hubs, covered in DEP 2021–2023, continue to support SMEs in their efforts to harness the benefits of AI. At the same time, DEP 2025–2027 invests substantially (EUR 1.3 billion) in digital skills, AI, and cybersecurity [7]. This parallels European public opinion: 84% believe AI integration requires careful and strict management, over 80% favor robust worker-privacy rules and regulation, and 77% support employee involvement in AI deployment [8].
In attempts to regulate AI, the Organisation for Economic Co-operation and Development’s (OECD) AI Principles were the first intergovernmental standard on AI, offering recommendations as well as a definition of AI that has become the de facto standard across regulatory jurisdictions. Today, the EU, the Council of Europe, the US, the United Nations, and other jurisdictions use the OECD’s definition of AI and its lifecycle in their legislative and regulatory frameworks [9]. The EU AI Act, aligned with the OECD’s revised definition [10], enforces a risk-based compliance framework that classifies AI systems and places restrictions based on the type and level of risk they pose. Practices such as social scoring, real-time remote biometric identification, manipulating people’s decisions or exploiting their vulnerabilities, and inferring emotions in workplaces are categorized as “unacceptable risk” and are prohibited. The majority of the obligations are imposed on AI technologies classified as “high-risk AI systems” (i.e., posing high risk to people’s health, safety, or fundamental rights) and their providers. The Act mandates strict pre-market conformity assessments, rigorous data governance, human oversight, and continuous risk management systems. Furthermore, general-purpose AI (GPAI) models face tiered restrictions: all providers are required to supply technical documentation and instructions for use to the Commission as well as to downstream providers, while complying with copyright law. In addition, GPAI models with systemic risk must comply with further restrictions. The goal of the Act is to promote human-centered, trustworthy AI while ensuring Europeans’ protection [11].
Although the EU has taken numerous steps toward appropriately regulating and ensuring informed integration of AI into workplaces, it has yet to directly address the core challenge of job displacement [12] or the fear of replacement posed by AI [8]. This fear shapes workers’ attitudes and perceptions surrounding AI. In the AI literature, perceptions and attitudes are seldom studied as distinct constructs; research into AI perceptions typically investigates attitudes instead [13]. Attitudes are individuals’ evaluative opinions with some degree of favor or disfavor [14]; perceptions are sense-making judgments of individuals’ environment and related stimuli, indirectly influenced by attitudes [15,16]. Conflating these constructs thus obscures how attitudes shape subsequent perceptions, implying that distinguishing evaluative attitudes from impact perceptions is necessary to clarify how attitudes form individuals’ perception of AI.
Second, while previous studies identify digital skills as central to technology acceptance in the workplace [17,18,19], the specific mechanisms through which skills influence AI attitudes and perception remain unclear. Specifically, whether attitudes mediate the effect of technical competence on AI receptivity requires systematic investigation. This is particularly valuable because it allows us to further narrow down, isolate, and elucidate the role of the primary lens, i.e., individuals’ attitudes, in the larger scheme of AI’s integration into the workplace. Such insights can not only inform organizational interventions but also guide future research to explore attitudes in depth and breadth. Similarly, although transparency is widely advocated in responsible AI frameworks [3,20], empirical evidence for its effects on workers’ attitudes and perceptions is limited.
This study addresses these gaps, first, by developing a dual-attitude framework that distinguishes between AI augmentation attitude (viewing AI as complementing human work) and AI replacement attitude (viewing AI as replacing the workforce). Second, it explores the impact of digital skills on AI reception while disentangling attitudes from perception. Third, it examines the role of employer transparency, i.e., whether employees are informed of AI use in their workplace, in the translation of AI replacement attitude into perception. Moreover, we study to what extent digital skills benefit from employer digital support, i.e., training and tools provided by employers to employees to work effectively with AI. Our theoretical approach integrates several key insights. Primarily, from Technology Acceptance Models [21,22], we understand the mechanisms through which digital skills influence attitudes. Drawing on behavioral theories [23,24,25,26], we note attitudes’ influence on individuals’ perception of AI use and shed light on the role of employer transparency. The research models are depicted in Figure 1 and Figure 2.
Figure 1. Research Model 1.
Figure 2. Research Model 2.
We analyze data from the 2024 Eurobarometer survey, commissioned by the European Commission, which covers 26,415 workers across 27 EU member states. We conduct regression analyses to estimate relationships among employer digital support, digital skills, employer transparency, attitudes, and perception (Figure 1 and Figure 2). The paper proceeds with a comprehensive literature review establishing our theoretical framework, followed by the methodology and results sections, and closes by synthesizing our key insights and their implications for both theory and practice.

1.1. Digital Skills

The role of digital skills is arguably among the most crucial for the implementation and real-world use of AI in workplaces [12,18,27,28,29,30], and thus their development is equally critical. The DEP work plan for 2025–2027, focusing on advancing digital skills, seeks to enhance cooperation between EU member states and stakeholders in digital skills and jobs, with a mission to close the skills gap. The program plans to launch four digital skills academies aiming to bolster the EU’s technological sovereignty and innovation capacity through specialized education and training in key digital areas, including AI [7]. In fact, a reorganization of technology-driven skills is expected in the AI-powered world, prompting exploration of which skills are relevant and necessary [27]. A consensus on operationalizing, or a standard definition of, digital skills is therefore crucial; in the absence of a globally accepted standard definition, there is instead a patchwork of frameworks across the globe [31,32,33,34,35,36].
Nevertheless, we define the digital skills used in our study in line with the DigComp 2.2 framework of digital competence [36] and van Deursen and van Dijk’s types of digital skills [37,38]. Our scale assesses individuals’ skills in using digital technology, including AI, across four domains: daily life, job performance, future job adaptability, and digital and online learning [8], and thereby encompasses DigComp’s broad definition of digital competence: “confident, critical and responsible use of, and engagement with, digital technologies for learning, work, and participation in society”. Furthermore, the digital competence areas composing Dimension 1, which engender the 21 competences composing Dimension 2, further help us ground our scale in the DigComp 2.2 framework [36]. In terms of DigComp’s competence areas, all four of our domains inherently reflect “problem-solving” and “information and data literacy,” with “communication and collaboration” particularly prominent in the daily life and work domains, and “digital content creation” most salient in the work domain. The aspect of “safety,” while not directly measured, is implicitly relevant to the daily life and work domains.
In parallel, the four domains coherently map onto the types of digital skills distinguished by van Deursen and van Dijk [37,38]. Specifically, “operational skills” (basic technical use of hardware and software for everyday tasks) are reflected in the daily life item, “formal skills” (mastery of task-specific applications and software) align with the job-performance item, “strategic skills” (planning and adapting digital tools for complex, novel tasks) correspond to the future job adaptability item, and “information skills” (navigating, filtering, and evaluating educational content online) underpin the digital and online learning item.

1.2. Employer Digital Support

In a recent US-based survey, nearly half of employees agreed that formal training from their organizations would aid generative AI integration and increase their likelihood of using AI tools daily, whereas more than a fifth reported receiving minimal to no support [2]. This contrasts with other nations leading in AI adoption [39,40], where employees received more support [2]. More direct statistics on training or tools provided by employers to employees in our context, i.e., the EU, are reflected in the Eurobarometer survey 101.4 (2025): a total of 68% of Europeans working for an employer or manager reported that they were provided with the necessary tools and training to work effectively with AI [8].
Digitalization is not only a transformation requiring change management but also a demand driving a broad range of workplace transformations toward the future of work. It represents a core work demand that first requires resources to be addressed and then skills to be met. Drawing upon Job Demands–Resources (JD–R) theory, training delivered through job crafting interventions, which allow individuals to work on goals such as increasing specific job resources, can be effective. Skill variety is recognized as an important job resource, especially in a high-demand work environment [41]. Thus, digital skills, as a relatively new skill set within skill variety, can benefit from training. Furthermore, self-efficacy theory proposes that mastery experiences significantly shape one’s self-efficacy, i.e., an individual’s belief in their capacity to execute behaviors necessary to produce specific performance attainments [23]. Building on that foundation, mastering how to work effectively with AI via training and tools can partly assist, if not shape, individuals’ (digital) skills in working efficiently with AI.
Training is a cornerstone of skill development and is proven to amplify AI integration [42] and foster innovation and adaptability by advancing employees’ skills [43]. AI training and development initiatives equip employees with skills to adopt and effectively use AI [44]. A review article by Morandini et al. recommended implementing comprehensive training and development programs, particularly in developing specific skills required for AI adoption and effective use [45]. Taken together, training and tools provided by the employer, hereafter referred to as employer digital support, should positively influence employees’ digital skills. Thus, we hypothesize the following:
Hypothesis 1 (H1).
Employer digital support is positively associated with employees’ digital skills.

1.3. Attitudes Toward AI

Attitudes are mental tendencies to evaluate an entity along an evaluative continuum with some degree of favor or disfavor; attitudes sum up liking or opinions toward an entity [14]. Regarding AI acceptance specifically, two broad and distinct attitudes have emerged. The AI augmentation view regards AI as complementing or augmenting the workforce and work processes. In contrast, the AI replacement view reflects the trepidation among people that AI will replace the workforce.
One of the first systematic literature reviews of human–AI augmentation in the workplace revealed that AI can yield both benefits and drawbacks, and that the human–AI relationship is complex and multifaceted [46]. For instance, AI can relieve humans of basic and repetitive tasks, allowing workers more resources to focus on more complex, multiple responsibilities [47,48,49,50], ultimately improving decision-making, efficiency, and productivity [46]. On the other hand, in the presence of AI, employees across industries report substantial fear of being replaced by AI [51,52,53,54].
Most importantly, there is a lack of clear, direct assessment of people’s attitudes toward AI acceptance. While some studies measure people’s attitudes toward AI [55,56,57], they are not operationalized to measure whether people accept or reject AI in the workplace. It is also unclear whether their accepting belief outweighs their rejecting belief, or vice versa. Here, we investigate people’s attitudes toward AI along this dichotomy. Moreover, by calculating their difference, we analyze whether an individual’s augmentation attitude outweighs their replacement attitude (and vice versa), and how differently these attitudes are predicted by the level of digital skills.
A growing literature indicates that individuals with stronger digital skills tend to hold more positive attitudes toward AI [58,59,60], whereas those with weaker skills exhibit more negative expectations [61]. Studies report that employees with higher occupational and digital self-efficacy experience lower AI anxiety and job insecurity [57,62]. In light of Management Information System (MIS) attitude research and the Technology Acceptance Model (TAM) by Davis [21], we note that attitude toward technology is a function of perceived ease of use. The effort to develop skills, or the ease of acquiring skills for technology use, is one of six sub-constructs of perceived ease of use, specifically designed to “directly tap ease of learning”. Similarly, the Unified Theory of Acceptance and Use of Technology (UTAUT) notes that attitude toward using technology stems from effort expectancy, which encompasses “learning to operate the system” as well as “ease to become skillful at using the system” [22]. Thus, individuals with higher digital skills should require less effort to become skillful in using or operating AI, which in turn shapes their attitude toward AI accordingly.
On these premises, we propose that individuals’ digital skills systematically predict their AI attitudes. Individuals with higher digital skills are more likely to endorse an augmentation (favoring) attitude over a replacement (disfavoring) attitude. Thus, we hypothesize the following:
Hypothesis 2 (H2).
Digital skills are positively associated with the augmentation–replacement attitude balance. Specifically, individuals with higher digital skills will report a more favorable attitude (i.e., a net augmentation-dominant attitude), whereas those with lower digital skills will report a more unfavorable attitude (i.e., a net replacement-dominant attitude).

1.4. AI Individual Perception

Individuals’ perception involves making sense of the environment and related stimuli; this process is influenced by past experiences, expectations, motives, and attitudes [16]. Attitudes encapsulate positive and negative feelings and beliefs [63]; affective emotions and cognitive beliefs are two integral components of attitudes [25]. By virtue of automatic activation, the attitude indirectly biases perceptions of the qualities of the object [15]. Furthermore, Lord et al.’s (1979) study on confirmation bias [26] highlighted that people with strong opinions process empirical information in a biased manner, and the biased perception and judgment formation depend on the state of beliefs [64].
The inverse association, from perception to attitude toward technology, has also been studied and found to be profound [65]. Perception of AI’s capabilities was positively related to affective and cognitive attitudes toward AI among employees [66]. Another study attempting to explain attitudes toward AI and how beliefs influence perceptions noted positive perceptions and negative attitudes, but the association between attitudes and perception itself was not studied [67].
Furthermore, individuals’ perceptions should also be guided by their level of digital skills. Digitally skilled individuals should positively perceive AI in the workplace compared to individuals with lower levels of digital skills. Studies note that individuals’ perceptions can provide insight into their level of AI literacy [13]. Moreover, familiarity with AI is seen to influence perceptions [68], and evidence suggests a positive association between the perceived impact of AI use and perception of digital literacy skills [69].
On these premises, we expect that digital skills influence AI perception, and individuals’ attitude toward AI guides how positively or negatively they perceive the use of AI in the workplace. Consequently, attitude balance should also mediate the relationship between digital skills and individuals’ AI perception. Thus, we hypothesize the following:
Hypothesis 3 (H3).
Digital skills are positively associated with AI perception. Specifically, digitally skilled individuals should report a positive perception of AI’s use in the workplace.
Hypothesis 4 (H4).
Augmentation–replacement attitude balance is positively associated with AI perception. Specifically, a favoring, augmentation-dominant attitude should predict a positive, high AI perception.
Hypothesis 5 (H5).
Augmentation–replacement attitude balance will mediate the positive effect of digital skills on AI perception. Specifically, higher digital skills will exhibit a stronger net favor of augmentation (rather than replacement), which in turn leads to a more positive perception of AI use.

1.5. Employer Transparency

AI-related information and decisions often remain a private, top-down conversation among executives. There is a general ambivalence toward AI, where the perception that AI-related change demands top-down formal approaches can obscure important new information, and the vast “unknown” makes it prudent to await directives from above [70]. Uncertainty Reduction Theory, one of the first theories of interpersonal communication, defines uncertainty as a function of the number and likelihood of alternatives that may occur; this uncertainty requires coping strategies to reduce the anxiety and unpredictability it causes [24]. In this light, a new AI tool, or a lack of information on AI deployment status from the employer, can indeed exacerbate (but, for informed employees, also curb) the negative affective state of employees, an integral part of attitude [25]. Van de Poel’s ethical framework for evaluating the acceptability of experimental technologies [71,72] highlights the requirement for human subjects to be informed about new technologies, along with their risks and benefits, and to be involved in approving their deployment.
Moreover, leaders must use effective communication strategies to prepare workers for AI, explaining AI’s implementation and its effects on their jobs [73]. Glikson and Woolley [20] suggested that workers’ cognitive trust, a component of attitude [25], in workplace AI is influenced by its transparency. Employees are keen to learn more about AI, its possibilities, and its impact on work [74]. Workers report that the communication quality of AI guidelines is important for effective adoption of AI [75]. Furthermore, drawing on self-efficacy theory, Bandura [23] notes that the effect of outcome expectancy (i.e., the belief that a particular behavior leads to certain outcomes) on behavior change is of small magnitude and significance, but it does reduce phobic behavior. Further, “informing/telling people what to expect” does raise outcome expectations. Outcome expectancy is prominent in shaping cognitive views of behavior [23], an important component of attitude [25].
Thus, the replacement attitude, enveloping employees’ negative affect and cognitive views, should vary depending on whether employees are informed of AI use in their workplace, i.e., employer transparency. Furthermore, given transparency’s power to curb negative attitudes, employer transparency should attenuate the translation of the replacement attitude into a negative perception of AI. Based on this, we hypothesize the following:
Hypothesis 6 (H6).
The negative relationship between AI replacement attitude and AI perception is positively moderated by employer transparency. Specifically, among uninformed employees, the negative effect of AI replacement attitude on AI perception will be stronger than among informed employees.

2. Materials and Methods

2.1. Data and Participants

Data for this study come from the 2024 Standard Eurobarometer 104.1, commissioned by the European Commission, a cross-national survey of the populations of the 27 EU member states. The survey employed a multi-stage, random probability sampling design stratified by region and density (metropolitan, urban, and rural). Following this, households within these Primary Sampling Units (PSUs) were chosen via systematic random route methods. From each household, one respondent aged 15 and over was selected using a Kish grid to achieve nationally representative samples of EU residents. Up to two recalls were made to minimize non-response, and no more than one interview per household was conducted [8]. Trained interviewers conducted face-to-face, computer-assisted personal interviews (CAPI) between April 2024 and May 2024 with a sample size of 1000 respondents per country, resulting in 26,415 total respondents. Interviews followed a standardized questionnaire translated and back-translated in each national language to ensure conceptual equivalence. Out of all the member states, the Netherlands had the highest response rate (81.6%), and Italy had the lowest (28.5%). Post-fieldwork, country-level weights were therefore calculated to correct for unequal selection probabilities and non-response, and to align the sample with official population benchmarks on age, gender, region, and education. Microdata and documentation were obtained from the GESIS Eurobarometer Data Service under an academic-use license. All respondents provided informed consent in accordance with the ethical standards of the Eurobarometer consortium and the European Commission’s Directorate General for Communication. This rigorous, probability-based design and weighting protocol ensures that our analyses rest on a robust, high-quality dataset, fully suitable for examining pan-European attitudes toward AI in the workplace.

2.2. Measures

Out of 349 variables in the dataset, 5 variables were selected for our study, along with demographics such as age, gender, and education level. Most variables are retained in their original form, albeit with varying levels of recoding; one measure (AI attitudes) is derived from a single variable and recoded into two distinct variables. Furthermore, we note that although multi-item scales are preferred for psychometric robustness, we use three single-item variables. Single-item global measures have been shown to perform on par with multi-item scales when the construct is concrete and familiar to respondents [76,77]. For instance, AI perception directly captures the unambiguous, domain-general perception we study, has very high face validity, and correlated r = 0.57 (p < 0.001) with an 8-item domain-specific AI-for-management perception scale, demonstrating adequate concurrent validity for a broad, global AI perception scale. Similarly, the other two single-item scales contain global, face-valid items, but more importantly, they capture the unidimensional constructs we study, i.e., to what extent employees received training/tools and whether they were informed about AI’s use in their workplace. They are distinct from digital skills and attitudes per se, where the constructs are inherently multidimensional. Lastly, although participants’ extent of familiarity with AI was not measured directly, both AI perception and attitude explicitly assess individuals’ reflections on AI. Leveraging this, “do not know” responses (ranging from only 4% to 9%) were omitted from the analyses in our best attempt to control for familiarity with AI influencing our study.
Digital skills are measured with a 4-item scale, on a 4-point Likert scale, i.e., 1 (totally disagree) to 4 (totally agree). It assesses individuals’ skills in using digital technologies, including AI, across four domains: daily life, job performance, future job adaptability, and digital and online learning. For example, “to what extent do you agree … regarding your skills in the use of the most recent digital technologies, including AI… in your daily life”. The internal consistency measured by Cronbach’s alpha (α) is 0.88.
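As a rough illustration of how the reported internal consistency can be computed, the sketch below implements Cronbach’s α for a four-item scale in plain Python. The responses and variable names are invented toy data, not the Eurobarometer microdata.

```python
# Sketch: Cronbach's alpha for a k-item scale (toy data, not the survey data).
from statistics import variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's responses (equal n)."""
    k = len(items)
    item_vars = sum(variance(it) for it in items)      # sum of item variances
    totals = [sum(resp) for resp in zip(*items)]       # per-respondent sum score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Four 1-4 Likert items (daily life, job, adaptability, online learning) - toy
daily    = [4, 3, 4, 2, 3, 4, 1, 3]
job      = [4, 3, 3, 2, 3, 4, 2, 3]
adapt    = [3, 3, 4, 1, 3, 4, 1, 2]
learning = [4, 2, 4, 2, 2, 4, 1, 3]
alpha = cronbach_alpha([daily, job, adapt, learning])
```

With these highly intercorrelated toy items, α comes out above 0.9, mirroring how strongly covarying items inflate the total-score variance relative to the summed item variances.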
Participants’ perceptions of employer digital support were measured with a single-item, 4-point Likert scale (1 = strongly disagree; 4 = strongly agree): “To what extent do you agree or disagree that your employer provides you with the necessary tools or training to work effectively with the most recent digital technologies, including AI?”
Individuals’ perception of AI was assessed by a single-item, face-valid measure: “How positively or negatively do you perceive the use of robots and AI in the workplace?” Responses were recorded on a 4-point Likert scale (1 = very negatively; 4 = very positively). Due to the possibility of a convergent validity test, we correlated AI perception with an 8-item scale: individuals’ perception of AI managing workplace activities, such as hiring and monitoring. Aligning with our expectations [78], the test resulted in a moderate correlation, r = 0.57 (p < 0.001).
For the AI augmentation–replacement attitude balance index, we combine a four-item AI augmentation attitude scale (e.g., “Robots and AI help people do their jobs”) with a two-item AI replacement attitude scale (“More jobs will disappear…”, “Robots and AI steal people’s jobs”). Both scales are recorded on a 4-point Likert scale (1 = strongly disagree; 4 = strongly agree). The Cronbach’s α for the augmentation attitude scale is 0.80, and the replacement attitude scale is 0.74. Each scale is mean-centered, and their difference (augmentation–replacement) yields a continuous balance score reflecting augmentation versus replacement views about AI in the workplace. Specifically, a score > 0 represents an augmentation-dominant attitude, a score < 0 represents a replacement-dominant attitude, and a score = 0 represents a perfectly balanced attitude.
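A minimal sketch of this balance-index construction, using invented responses and assumed variable names (not the Eurobarometer codebook):

```python
# Sketch: augmentation-replacement attitude balance index (toy data).
from statistics import mean

def balance_scores(aug_items, rep_items):
    """aug_items: n rows of 4 augmentation items; rep_items: n rows of 2
    replacement items (both 1-4 Likert). Returns mean-centered augmentation
    minus mean-centered replacement for each respondent."""
    aug = [mean(row) for row in aug_items]         # per-person scale means
    rep = [mean(row) for row in rep_items]
    aug_c = [a - mean(aug) for a in aug]           # mean-center each scale
    rep_c = [r - mean(rep) for r in rep]
    return [a - r for a, r in zip(aug_c, rep_c)]   # >0: augmentation-dominant

aug = [[4, 4, 3, 4], [2, 2, 1, 2], [3, 3, 3, 3]]   # toy augmentation responses
rep = [[1, 2], [4, 4], [2, 3]]                     # toy replacement responses
scores = balance_scores(aug, rep)
```

Because both scales are mean-centered before differencing, the balance scores sum to zero across the sample; a respondent’s sign then indicates which attitude dominates relative to the sample average.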
To assess employer transparency, a total of 7 responses (e.g., “employer informed … without further details” or “yes, and you have been given access to personal data”) were recoded into dichotomous responses (0 = uninformed; 1 = informed). The face-valid item is as follows: “Did your employer inform you about the use of digital technologies, including Artificial Intelligence, to manage activities in your workplace?”
In demographics, gender and the recoded variable of age were utilized. To measure the level of education, 9 variables were recoded as one, where value 1 represents no education and value 9 represents doctoral or equivalent.

2.3. Data Analysis

The weights were applied in all analyses. Furthermore, after listwise deletion of missing values, the final analytical sample for H1 to H5 ranges from N = 3638 to N = 11,424, and H6 contains N = 10,529.
First, an exploratory factor analysis (EFA) was conducted to examine the underlying three-factor structure, which guided a subsequent confirmatory factor analysis (CFA) to validate the model. To test convergent validity and reliability, Cronbach’s α, composite reliability (CR), and average variance extracted (AVE) were calculated. Prior to the regression analyses, the necessary correlations and descriptive statistics were extracted. During preliminary analysis, a noteworthy pattern emerged; to address it, tertiles of the attitude balance were extracted, and the replacement attitude was recoded dichotomously for a cross-tabulation informing an additional inquiry of two mediations. All hypotheses were tested using linear regression, with the PROCESS macro’s Model 1 for the moderation analysis and Model 4 for the mediation analyses. A variety of assumption checks were performed. Residual normality and homoscedasticity were examined; where violations were significant, HC3 (heteroscedasticity-consistent) standard errors were reported. Multicollinearity was assessed using VIF and tolerance for predictors and interaction terms. Lastly, previous research has noted that age, generational gap, level of education, and gender (though seldom) affect public receptivity to AI and automation [79,80,81,82]. Accordingly, to mitigate potential confounding effects, all regression analyses controlled for age, gender, and education level. All analyses were performed using SPSS V. 25 and PROCESS macro v4.2.
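To make the mediation logic concrete, the following sketch reproduces the core algebra behind a simple mediation in the spirit of PROCESS Model 4: the indirect effect is the a-path (skills → attitude balance) times the b-path (balance → perception, controlling for skills). The data are toy values and the tiny OLS solver is our own illustration, not the PROCESS macro.

```python
# Sketch: indirect effect a*b in a simple mediation (toy data, not PROCESS).

def ols(xs_cols, y):
    """OLS coefficients [intercept, b1, b2, ...] via normal equations."""
    n = len(y)
    X = [[1.0] + [col[i] for col in xs_cols] for i in range(n)]
    k = len(X[0])
    # Build X'X and X'y, then solve by Gaussian elimination.
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                      # forward elimination
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

skills  = [1, 2, 3, 4, 5, 6]                       # X: digital skills (toy)
balance = [1, 3, 2, 5, 4, 6]                       # M: attitude balance (toy)
percep  = [2.2, 5.9, 4.1, 10.2, 7.8, 12.1]         # Y: AI perception (toy)

a = ols([skills], balance)[1]                      # a-path: X -> M
b = ols([balance, skills], percep)[1]              # b-path: M -> Y given X
indirect = a * b                                   # mediated effect of X on Y
```

In the full PROCESS workflow, the indirect effect’s significance is judged with bootstrapped confidence intervals rather than this point estimate alone.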

3. Results

3.1. Reliability and Validity

The measurement models for digital skills, augmentation attitude, and replacement attitude were evaluated with EFA and CFA, along with AVE and CR. The EFA using principal axis factoring with oblimin rotation resulted in an excellent KMO (0.833) and a significant Bartlett’s test (χ2 = 59,215.90, p < 0.001), indicating suitability for factor analysis. The pattern matrix revealed strong loadings for all items on their intended factors, with no substantial cross-loadings. The subsequent CFA confirmed the model, which demonstrated good fit: χ2(33) = 6309.76, p < 0.001; CFI = 0.982; TLI = 0.975; RMSEA = 0.085 (90% CI [0.083, 0.087]); SRMR = 0.038. All standardized loadings were strong and statistically significant (range: 0.67–0.88; 0.82 for the constrained two-item factor). Factor correlations were theoretically consistent in the expected directions and ranged from low to moderate (−0.01 to 0.57), supporting discriminant validity. The calculated CR and AVE were above recommended thresholds (AVE ≥ 0.50; CR ≥ 0.70) for all constructs (AVE = 0.55–0.73; CR = 0.80–0.92), supporting convergent validity and internal consistency.
Note: For full technical details, please refer to Appendix A.
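The AVE and CR figures above follow from the standard formulas: AVE is the mean squared standardized loading, and CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal sketch, using the Digital Skills loadings from Table A4 and assuming uncorrelated error terms:

```python
import numpy as np

def ave_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from standardized CFA loadings, assuming uncorrelated error terms."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2                  # item error variances
    ave = float(np.mean(lam ** 2))           # mean squared loading
    cr = float(lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum()))
    return ave, cr

# Digital Skills loadings reported in Table A4
ave, cr = ave_cr([0.86, 0.84, 0.84, 0.88])
print(round(ave, 2), round(cr, 2))  # 0.73 0.92
```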

3.2. Descriptives and Preliminary Analysis

Descriptive statistics and correlations are reported in Table 1. Regarding the correlations underlying H1 to H5, digital skills were positively and strongly correlated with augmentation attitude (r = 0.50, p < 0.001) and employer support (r = 0.52, p < 0.001). Although of negligible magnitude, the direction of the relationship was confirmed by the correlation between digital skills and replacement attitude (r = −0.03, p < 0.01). AI perception was positively correlated with digital skills (r = 0.44, p < 0.001) and augmentation attitude (r = 0.65, p < 0.001), and negatively related to replacement attitude (r = −0.16, p < 0.001). Regarding the correlations concerning H6, employer transparency was positively correlated with AI perception (r = 0.25, p < 0.001) and, although marginally, negatively related to replacement attitude (r = −0.08, p < 0.001).
Table 1. Descriptives and correlations.

3.3. Assumption Checks

In the linear regression for H1, residual diagnostic plots confirmed approximate normality and homoscedasticity. In the mediation analysis, the a-path regression exhibited normally distributed, homoscedastic residuals. In the outcome regression (Y on X + M), residual plots and White’s test indicated heteroscedasticity (F = 33.62, p < 0.001), so we used HC3 standard errors and bootstrapped (5000-sample) confidence intervals (see Appendix A for assumption diagnostics). Multicollinearity diagnostics yielded VIFs of 1.002–1.288 and tolerances of 0.776–0.998, all well within acceptable limits. Concerning the linear moderation, residual plots supported normality and homoscedasticity; VIFs for the predictor, moderator, and interaction term ranged from 2.38 to 4.03 (tolerances 0.25–0.42), all below conventional cut-offs.
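A VIF is obtained by regressing each predictor on the remaining predictors; tolerance is its reciprocal. A minimal numpy sketch (the simulated predictors are purely illustrative, not the study variables):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X:
    regress that column on the others and compute 1 / (1 - R^2)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.delete(X, j, axis=1)
        Z1 = np.column_stack([np.ones(len(X)), Z])      # add intercept
        beta = np.linalg.lstsq(Z1, y, rcond=None)[0]
        resid = y - Z1 @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))                    # tolerance = 1 / VIF
    return out

# Illustrative simulated predictors
rng = np.random.default_rng(0)
independent = rng.normal(size=(500, 2))                 # VIFs near 1
correlated = np.column_stack([independent[:, 0],
                              independent[:, 0] + 0.1 * independent[:, 1]])
```

Near-duplicate columns (as in `correlated`) inflate the VIF far above the conventional cut-off of 5.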

3.4. Regression Model Testing

3.4.1. Research Model 1

This model involves a linear (OLS) regression and a mediation analysis covering hypotheses 1 through 5. All regression results were adjusted for age, gender, and education. For H1, we regressed digital skills on the extent of employer digital support. The regression coefficient was β = 0.42 (p < 0.001), supporting the hypothesis and implying that individuals’ skills benefit from training and tools provided by the employer. The mediation analysis assessing H2, H3, H4, and H5 confirmed all the hypotheses. Digital skills had a positive total effect on AI perception (β = 0.42, p < 0.001), indicating that higher skills relate to a more positively formed perception of AI among workers. Digital skills positively predicted augmentation–replacement attitude balance (β = 0.43, p < 0.001), supporting H2, and in turn, augmentation–replacement attitude balance positively predicted AI perception (β = 0.47, p < 0.001), supporting H5. The indirect effect of digital skills on perception through attitude balance was significant; effect = 0.20, BootSE = 0.0, 95% BootCI [0.186, 0.224]. Because the direct effect remained significant (c’ = 0.22, p < 0.001), the mediation is partial rather than full and supports H5. Attitude balance accounts for 48% of the total effect of digital skills on perception (indirect effect/total effect = 0.2045/0.4243). This implies that digital skills help build a favorable attitude among individuals, which in turn shapes a positive perception of AI’s use in the workplace. Table 2 summarizes these results.
Table 2. Research Model 1.
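For readers unfamiliar with the percentile-bootstrap test of indirect effects, the following is a minimal numpy sketch of the logic (a toy analogue of PROCESS Model 4, without the covariates, survey weights, or HC3 corrections applied in the actual analysis; the data are simulated, not the Eurobarometer sample):

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=1000, seed=42):
    """Percentile-bootstrap 95% CI for the indirect effect a*b in a simple
    X -> M -> Y mediation (toy analogue of PROCESS Model 4)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                       # resample with replacement
        # a-path: M regressed on X
        Xa = np.column_stack([np.ones(n), x[idx]])
        a = np.linalg.lstsq(Xa, m[idx], rcond=None)[0][1]
        # b-path: Y regressed on M, controlling for X
        Xb = np.column_stack([np.ones(n), x[idx], m[idx]])
        b = np.linalg.lstsq(Xb, y[idx], rcond=None)[0][2]
        effects.append(a * b)
    return np.percentile(effects, [2.5, 97.5])

# Simulated data with a true indirect effect of 0.5 * 0.5 = 0.25
rng = np.random.default_rng(1)
x = rng.normal(size=500)
m = 0.5 * x + rng.normal(size=500) * 0.5
y = 0.5 * m + 0.2 * x + rng.normal(size=500) * 0.5
lo, hi = boot_indirect(x, m, y)
# A CI excluding zero indicates a significant indirect effect
```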
Here, based on the crosstabulation reported in Table 3, we tested two additional mediation models; the rationale for this additional inquiry is addressed in the discussion. In a model testing replacement attitude as a mediator of the effect of digital skills on AI perception, the indirect effect (b = 0.007, BootSE = 0.001, 95% CI [0.004, 0.010]) and the total effect (b = 0.40, SE = 0.008, 95% CI [0.383, 0.415]) were both significant. The indirect effect accounted for 1.8% of the total effect (indirect effect/total effect = 0.0073/0.4062), indicating a minimal mediating role of negligible practical significance. This signifies that the effect of digital skills on AI perception is only minutely transmitted through replacement attitude. In a model testing augmentation attitude as a mediator of the effect of digital skills on AI perception, the indirect effect (b = 0.23, BootSE = 0.007, 95% CI [0.219, 0.249]) and the total effect (b = 0.42, SE = 0.009, 95% CI [0.400, 0.436]) were both significant. The indirect effect accounted for 56% of the total effect (indirect effect/total effect = 0.2346/0.4182), indicating a substantial mediating role of augmentation attitude compared with the practically negligible mediation by replacement attitude. This demonstrates that the effect of digital skills on AI perception is largely transmitted through an augmentation attitude. Table 4 summarizes these results.
Table 3. Additional inquiry crosstabulation.
Table 4. Additional inquiry mediations.

3.4.2. Research Model 2

Hypothesis 6 examined whether employer transparency moderated the impact of replacement attitude on individual perception, while controlling for age, gender, and education. In the linear moderation analysis, replacement attitude had a negative effect on AI perception (β = −0.26, p < 0.001), whereas employer transparency had a strong positive effect (β = 0.43, p < 0.001). This signifies that replacement attitude predicts a negative perception of AI among individuals, whereas being informed about AI use in the workplace predicts a positive perception. The interaction was significant (β = 0.14, p < 0.001): employer transparency attenuated the negative relationship between replacement attitude and AI perception, confirming H6. Specifically, for uninformed respondents, the attitude–perception slope was β = −0.26 (p < 0.001), whereas for informed respondents it was only β = −0.12 (p < 0.001). This implies that being informed about AI use lowers both the level of replacement attitude among individuals and its translation into a negative perception. This interaction is illustrated in Figure 3.
Figure 3. Moderation effect of employer transparency in the relationship between AI perception and AI replacement attitude.
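The reported simple slopes follow arithmetically from the interaction model: with a binary moderator, the slope at W = 0 is b1 and the slope at W = 1 is b1 + b3. A minimal check using the standardized coefficients reported above:

```python
# Simple slopes for a binary moderator: slope at W = 0 is b1,
# slope at W = 1 is b1 + b3 (coefficients from Section 3.4.2).
b1 = -0.26   # replacement attitude -> AI perception (uninformed, W = 0)
b3 = 0.14    # replacement attitude x employer transparency interaction

slope_uninformed = b1
slope_informed = b1 + b3
print(round(slope_uninformed, 2), round(slope_informed, 2))  # -0.26 -0.12
```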

4. Discussion

This study utilized a 2024 cross-national survey of the populations of the 27 EU member states, exploring the public’s views on digitalization and AI. We primarily aimed to understand Europeans’ most recent attitudes toward, and perceptions of, AI via two novel, unexplored pathways. First, we employed a dichotomous operationalization of attitudes toward AI, distinguishing between AI augmentation attitude and AI replacement attitude. Second, we established a justified distinction between individuals’ perception of AI and their attitude, and the direction of their relationship, i.e., attitudes shape individuals’ perception of AI use in the workplace [15,16]. In particular, we studied how employer transparency and workers’ digital skills influenced their attitudes and perceptions, while controlling for age, gender, and education. In total, we tested six hypotheses.
As hypothesized (H1), employer digital support (i.e., training/tools received) had a very strong, positive effect on employees’ digital skills. This buttresses our theoretical pivot to JD-R and self-efficacy theory, suggesting that training can increase skill variety, particularly where digital skills constitute a relatively new skill set, and that mastery through training can strengthen individuals’ belief in their capacity [23,41]. It also validates scholars’ recommendations to implement training for effective AI integration via skill building [45]. The findings for H2 confirmed that individuals with stronger digital skills tend to hold a more favorable, augmentation attitude toward AI over a disfavoring, replacement attitude: digitally skilled people believe AI will augment the workforce rather than replace it. This is consistent with MIS attitude research, the technology acceptance model (TAM) [21], and UTAUT [22], which suggest that attitude is a function of perceived ease of use and effort expectancy, i.e., the effort required to become skillful in using or operating the system. Individuals with higher digital skills require less of this effort, which in turn systematically shapes their attitude toward technology. This finding supports multiple studies examining digital competence’s positive influence on favorable attitudes toward AI use [59,60]. It also extends support to previous studies confirming the protective influence of digital competence against job replacement insecurity [28,61,62].
Furthermore, digital skills were strongly associated with AI perception (H4), implying that digital skills also influence how positively or negatively workers perceive AI’s use in the workplace. This supports numerous contemporary studies noting the positive influence of digital skills on workers’ perception of AI [30,65,69]; in fact, our study addresses their limitations with a larger, more representative sample and influential control variables [83,84]. Moreover, as hypothesized, individuals’ attitudes guided their perception: augmentation–replacement balance was strongly and positively associated with AI perception (H4). More importantly, confirming one of the core inquiries of this study (H5), the mediation analysis revealed that the positive effect of digital skills on perception was substantially mediated by attitudes (48%). This directly addresses the research gap in the AI literature and corroborates the theoretical framework developed in our study. Specifically, individuals’ perception of an entity and its qualities is systematically influenced by their attitudes [15,16], and due to confirmation bias, the biased perception depends on the state of beliefs (an integral component of attitude) [25,64].
However, it is necessary to note that the crosstabulation (Table 3) shows moderate-to-high replacement attitude levels even among individuals with low (96.3%) and medium (65.6%) augmentation-dominant attitude, reflecting cautious optimism in support of the findings of Dong et al. [52]. Therefore, in an additional inquiry, we investigated whether augmentation attitude alone mediates the relationship between digital skills and perception more profoundly than replacement attitude. Evidently, augmentation attitude explains 56% of the relationship, compared with only 1.8% explained by replacement attitude (Table 4). Additionally, the correlation of digital skills with augmentation attitude is stronger than that with replacement attitude by r = 0.47. This is crucially important and may be another novel insight of this study, in contrast to the numerous studies alarmingly cautioning against replacement fear. It suggests that replacement attitude is a separate and distinct, yet crucially important, parameter that perhaps plays a key role in driving the cautious, responsible adoption of AI [40], rather than AI use in the workplace. In the discrete landscape of AI use, then, developing digital skills and, as a result, a positive attitude about AI augmenting (and not replacing) helps individuals perceive AI use more positively, and may be more important, as prioritized by the EU in its policies and rigorous public programs [7].
AI integration decisions remaining private among employers, giving rise to the vast “unknown” [70], are another source of unrest among employees via uncertainty [24], which transparency can alleviate [71,72]. In our study, employer transparency evidently moderated a key relationship, indicating that simply being informed about planned AI use lowers the anxious, disfavoring attitude and its translation into a negative perception. This validates our theoretical rationale that a lack of communication on AI engenders uncertainty that exacerbates the negative affective state of employees [24]. The findings also support the view that transparency influences workers’ cognitive trust [20] and that informing employees helps raise their outcome expectations, shaping their cognitive views [23]; affective and cognitive components are two of the three components of attitude [25]. Precisely, transparency attenuated the impact of replacement attitude on perception. Among uninformed workers, replacement attitude strongly depressed perception, whereas among informed employees this effect was much weaker and levels of replacement attitude were overall low (Figure 3). By implying that being informed about AI plans moderates the extent to which replacement attitude is translated into pessimistic perception, the findings support calls for transparent AI communication [73,74].

4.1. Implications and Recommendations

Digital skills emerged as a pivotal driver of AI acceptance: higher digital skills engender a favorable, augmentation attitude toward AI and a positive perception of AI’s integration into the workplace. Aligned with technology-acceptance theories (TAM, UTAUT), this suggests that skill-building is one of the most viable paths to improving AI receptivity. Digital skills allow workers to view AI as augmenting, rather than replacing, the workforce, implying that digital skills serve as both protection and preparation for AI integration in the workplace. Furthermore, the findings underscored the importance of providing AI-related tools and training to employees in elevating digital skills. Thus, in practice, governments and firms should invest significantly in AI-related training and digital literacy programs. By equipping the workforce with tools and skills to use AI, organizations can create a workforce that views AI as a collaborator rather than a threat.
Second, the replacement attitude persists even among augmentation-dominant respondents, representing cautious optimism. Moreover, our study shows that the positive impact of skills on perception operates mainly through instilling an augmentation mindset, as the augmentation attitude mediated over half of the skills→perception effect; the mediation via replacement attitude was statistically significant but practically negligible. This sheds light on two distinct roles played by two distinct attitudes: replacement attitude is imperative in AI adoption, whereas augmentation attitude, developed through digital skills, drives positive perception of AI use in the workplace. The replacement attitude essentially establishes the ground upon which pathways of AI adoption rely, and drives the steps needed to ensure responsible, harmless integration of AI in the workplace. Accordingly, the steps to follow include policies and programs (such as those initiated by the EU) that focus on digital skills development and thus foster a positive, augmentation attitude. Future research should therefore examine workers’ replacement attitude more systematically and thoroughly, with the aim of uncovering the key as well as implicit concerns of workers regarding AI deployment that are nurturing the replacement attitude. Such insights can profoundly inform adoption frameworks that both mitigate job-loss risk and emphasize AI’s capacity to augment human capabilities. In addition, AI promotional communications should emphasize framing AI as a supportive tool under responsible AI policies.
Third, and perhaps most noteworthy, is the power of transparency revealed in our findings. Being informed about AI plans greatly weakens the negative impact of replacement attitude on AI perception. The finding reaffirms that transparent AI communication effectively raises outcome expectancy, an influential belief that is prominent in shaping attitudes and their translation. Thus, organizations, along with regulators, should prioritize clear and proactive AI communication in order to reduce employees’ negative outlook toward AI. Given the large mitigating effect observed, this could even be formalized in corporate AI governance guidelines or labor policies.
In summary, the study suggests two avenues for practice, i.e., building skills and exercising transparency. First, boost digital skills across the workforce, since skills strongly drive positive AI attitude and perception. Second, increase institutional transparency concerning AI adoption, as being informed consistently lowers the negative reception of AI. In regard to research, scholars should further investigate the cautious optimism phenomenon observed in our study, examining why a favorable attitude co-exists with a disfavoring view.

4.2. Limitations and Future Research

While this research offers novel insights into the drivers of AI attitudes and perception, several limitations merit acknowledgement. First, all data derive from a single Eurobarometer wave, preventing any causal inferences. Furthermore, our study’s research design is not strictly experimental, even though it employs randomization and rigorous post-collection protocols. This potentially paves the way for several biases. For instance, all variables were self-reported in the same survey, allowing for common-method bias. Response bias may be especially relevant because our study contains measures of digital skills and employer support, where objective measurement might represent a more accurate assessment. Furthermore, for behavioral constructs such as attitudes and perception, self-reported measures may lack accuracy, and extensively validated measures may be better suited. Thus, future longitudinal and experimental studies are recommended to adequately address these concerns.
Generalizability beyond Europe is another constraint of our study. Although the Eurobarometer offers a large, weighted, representative sample of the EU-27, cultural, regulatory, and labor-market differences limit direct extensions to non-EU contexts (e.g., the U.S., Asia). Another noteworthy limitation is the lack of cross-industry effects of AI integration. This is crucial because some job roles are more at risk of automation and replacement than others. Although Dong et al. [52] deemed capturing these nuances nearly impossible, future research should pursue insights into the cross-industry effects of AI, examining how differently and uniquely AI affects workplaces.
Lastly, the most crucial limitation of the study lies in its use of single-item measures. Nevertheless, all three single-item measures were face-valid, global measures capturing the unambiguous, unidimensional nature of the constructs we study. Furthermore, the dichotomization of employer transparency’s seven responses (where four responses are dedicated to ‘yes’, one to ‘no’, and two comprise ‘missing/absent’ values) was intentional and aimed at increasing robustness. The motivation lies in the non-linearity, randomness, and specificity of the four ‘yes’ responses: the scale does not increase in intensity like an ordinary scale, implying that it does not measure ‘the degree of transparency’. For instance, one response reads, “yes, and you have been given access to your personal data”, while another reads, “yes, and you have been given access to the results of the automated analysis carried out”. Thus, we retain the unambiguous, robust, binary knowledge that we required, i.e., whether employees were informed about AI use or not. Notably, the scale is dichotomized in the original Eurobarometer survey as well. Moreover, where possible, the domain-general AI perception measure was validated against the longer, domain-specific scale. However, these measures may still under-capture the constructs’ conceptual complexity, do not allow estimation of internal consistency (e.g., Cronbach’s alpha), and are more susceptible to random measurement error. Furthermore, estimated relationships may be attenuated, and results may not support precise construct-level calibration. Thus, findings should be interpreted as reflecting general, global perception, not fine-grained dimensions of perception. As a result, future research may benefit from multi-item batteries of these variables and should also explore the attitudinal distinction made in our study for further validation.

5. Conclusions

This study explored Europeans’ digital skills, attitudes, and perceptions surrounding AI in the workplace. First, it investigated employer digital support for employees’ digital skills. Second, it uncovered the role of AI attitudes (augmentation vs. replacement) in influencing individuals’ perception of AI. Third, it examined employer transparency’s integral function in the translation of replacement attitude into negative perception. This research utilized the Eurobarometer survey of over 26,000 EU workers, carried out by the European Commission in 2024. The results revealed that employer-provided training and tools significantly boost digital skills, which in turn foster an augmentation-dominant attitude and reduce replacement attitude. Augmentation attitude strongly mediated the effect of digital skills on positive AI perception, whereas replacement attitude had only a minimal mediating impact. Moreover, employer transparency buffered the negative effect of replacement attitude on perception. This study lays the foundation for formal research inquiries into two broad, distinct attitudes individuals hold toward AI’s deployment in the workplace. In addition, this study’s contribution is particularly valuable in addressing the research gap and guiding future researchers to be mindful of the distinction between attitude toward and perception of AI in the workplace, as they are conceptually disparate. Furthermore, by highlighting the importance of upskilling, it substantiates the motivation of the Digital Europe Programme (DEP), i.e., to tackle the digital skills gap. It also substantiates the EU’s digital transformation goal of establishing four academies to close the digital skills gap. Moreover, the EU AI Act demands a great degree of transparency about AI deployment from deployers to the European AI Office.
However, beyond governmental transparency, our study highlights the value of employee-level transparency, i.e., transparency from employers to employees regarding AI’s integration in the European workplaces. Lastly, this study calls for future research into how replacement attitude shapes cautious AI adoption.

Author Contributions

Conceptualization: D.B.; methodology: D.B. and C.B.; software: D.B.; validation: D.B., C.B. and S.Z.; formal analysis: D.B.; investigation: D.B. and S.Z.; resources: D.B.; data curation: D.B.; writing—original draft preparation: D.B.; writing—review and editing: D.B., C.B. and S.Z.; visualization: D.B.; supervision: D.B., C.B. and S.Z.; project administration: D.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the ethical standards of the Eurobarometer consortium and the European Commission’s Directorate General for Communication.

Data Availability Statement

All data and analyses are based on publicly available datasets, ensuring full transparency and replicability of findings. Open dataset publicly available at https://search.gesis.org/research_data/ZA8844 (accessed on 29 May 2025). Appendix D provides complete information necessary to reproduce all analyses reported in this study.

Acknowledgments

During the preparation of this manuscript/study, the author(s) used SPSS V. 25 and PROCESS macro v4.2. for the purposes of data analysis. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Psychometric Properties and Statistical Analysis

Appendix A.1. Assumption Testing

Appendix A.1.1. Data Preparation and Screening

Prior to conducting the main analyses, several statistical assumptions were examined to ensure the appropriateness of the analytical techniques employed. The following section presents the results of assumption testing procedures using residual analysis, diagnostic plots, and statistical tests.

Appendix A.1.2. Normality Assessment

Univariate and Multivariate Normality
Normality of individual variables and their joint distribution was assessed using multiple approaches, including visual inspection of histograms, Q-Q plots, and statistical tests. Based on the diagnostic procedures conducted, the following results were obtained:
  • Visual Inspection: Normal P-P plots of regression standardized residuals showed points closely following the diagonal line, indicating approximately normal distribution of residuals across all models.
  • Histogram Analysis: Histograms of standardized residuals displayed bell-shaped distributions with acceptable symmetry.
  • Residual Distribution: Standardized residuals were generally well-distributed around zero with no severe departures from normality.
Note: Specific statistical test results for skewness, kurtosis, Shapiro–Wilk, and Kolmogorov–Smirnov tests are available upon request but not reported here for brevity.

Appendix A.1.3. Residual Analysis and Regression Diagnostics

Linear Regression: Employer Digital Support with Digital Skills
Visual Diagnostic Assessment:
  • Normal P-P Plot of Regression Standardized Residuals: Points closely follow the diagonal line, indicating approximately normal distribution of residuals.
  • Histogram of Standardized Residuals: Bell-shaped distribution with acceptable symmetry.
  • Scatterplot of Standardized Residuals vs. Standardized Predicted Values: Random scatter around zero with no clear pattern, indicating homoscedasticity and linearity.
Interpretation: Visual inspection of residual plots confirmed that assumptions of normality, homoscedasticity, and linearity were adequately met for this regression model.
Mediation Analysis: Digital Skills→AI Augmentation–Replacement Balance→AI Individual Perception
Path a: Digital Skills→Augmentation–Replacement Balance
Residual Diagnostics:
  • Normal P-P Plot: Residuals approximate a normal distribution with minor deviations at the extremes, which is typical and acceptable.
  • Scatterplot Analysis: Standardized residuals show random distribution around the zero line.
  • Pattern Assessment: No evidence of systematic heteroscedasticity or non-linearity detected.
Paths b + c’: AI Perception on Digital Skills and Augmentation–Replacement Balance
White’s Test for Heteroscedasticity:
Table A1. White’s test results: mediation model.
Source        Sum of Squares    df      Mean Square    F         Sig.
Regression    26.459            2       13.229         33.615    <0.001
Residual      1420.573          3610    0.394
Total         1447.032          3612
Note: Dependent Variable: RSID2 (residuals squared). Predictors: (constant), PRED2, unstandardized predicted value.
Interpretation: Scatterplot indicated non-linearity, and the significant White’s test result suggests the presence of heteroscedasticity. This was addressed through the use of robust standard errors in the final analysis to ensure valid statistical inference.
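The auxiliary regression behind White’s test can be sketched as follows; this simplified version regresses the squared residuals on the fitted values and their square, mirroring the predictors listed in Table A1 (the residuals here are simulated and purely illustrative):

```python
import numpy as np

def white_f(resid, fitted):
    """Simplified White's test: F statistic from regressing the squared
    residuals on the fitted values and their square."""
    e2 = np.asarray(resid) ** 2
    X = np.column_stack([np.ones(len(e2)), fitted, np.asarray(fitted) ** 2])
    beta = np.linalg.lstsq(X, e2, rcond=None)[0]
    pred = X @ beta
    ssr = np.sum((pred - e2.mean()) ** 2)        # regression sum of squares
    sse = np.sum((e2 - pred) ** 2)               # residual sum of squares
    df1, df2 = X.shape[1] - 1, len(e2) - X.shape[1]
    return (ssr / df1) / (sse / df2)

# Heteroscedastic residuals (spread grows with the fitted value) yield a
# large F; homoscedastic residuals yield F near 1.
rng = np.random.default_rng(0)
fitted = np.linspace(-2.0, 2.0, 2000)
f_het = white_f(rng.normal(size=2000) * (1.0 + fitted ** 2), fitted)
f_hom = white_f(rng.normal(size=2000), fitted)
```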
Moderation Analysis: AI Replacement × Employer Informed→AI Perception
Residual Pattern Analysis:
  • Standardized Residuals vs. Standardized Predicted Values: Generally random distribution with acceptable scatter pattern.
  • Normal P-P Plot: Acceptable approximation to normality with minor deviations at tails, which is common in large samples.
  • Homoscedasticity Assessment: Visual inspection suggests relatively constant variance across fitted values.
Interpretation: No severe violations of regression assumptions were detected. Minor deviations from perfect normality are acceptable given the sample size and robust estimation methods employed.

Appendix A.1.4. Multicollinearity Diagnostics

Variance Inflation Factor (VIF) Analysis
Multicollinearity among predictor variables was assessed using VIF and tolerance values across different analytical models.
Table A2. Multicollinearity assessment across models.
Model               Predictor                                  VIF      Tolerance
Mediation Model     Digital Skills                             1.288    0.776
                    Augmentation–Replacement Balance           1.002    0.998
Moderation Model    AI Replacement Attitude                    2.735    0.421
                    Employer-Informed Status                   3.069    0.326
                    Interaction Term (Replacement × Info)      4.027    0.248
Note: Predictors were mean-centered prior to forming interaction terms in moderation models to minimize multicollinearity concerns.
Assessment Criteria and Interpretation:
  • VIF < 5.0: All values meet this criterion, indicating acceptable levels of multicollinearity.
  • Tolerance > 0.2: All predictors exceed this threshold, confirming adequate independence.
  • Mean-Centering Effect: Successfully reduced potential multicollinearity in interaction terms.
  • Highest VIF: The interaction term (4.027) represents the highest VIF but remains within acceptable limits.

Appendix A.1.5. Independence of Observations

Assessment Method: Independence of observations was evaluated through examination of residual patterns and temporal/spatial clustering analysis.
Findings: No evidence of systematic dependence or clustering was observed in the residual plots. The random distribution of residuals across predicted values supports the independence assumption.

Appendix A.1.6. Outlier Detection and Influence Diagnostics

Visual Assessment:
  • Standardized residual plots were examined for extreme values.
  • Cook’s distance was calculated to identify potentially influential observations.
  • Cases with extreme residuals were retained as they appeared to represent legitimate response patterns rather than data entry errors.
Treatment: Robust estimation techniques were employed to minimize the impact of any extreme observations while preserving the integrity of the complete dataset.

Appendix A.2. Exploratory Factor Analysis (EFA)

Appendix A.2.1. Methodology

Exploratory Factor Analysis was conducted to identify the underlying factor structure of the measurement instruments. Principal axis factoring with oblimin rotation was employed to extract factors, allowing for correlated factors as is common in social science research.

Appendix A.2.2. Sample Adequacy Tests

The Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was 0.833, exceeding the recommended threshold of 0.6, indicating that the data were suitable for factor analysis. Bartlett’s test of sphericity was statistically significant (p < 0.001), confirming that the correlation matrix was not an identity matrix and that factor analysis was appropriate.

Appendix A.2.3. EFA Results

The analysis revealed a three-factor solution that explained the variance in the observed variables. Factor loadings below 0.30 were suppressed for clarity and interpretability.
Table A3. Exploratory factor analysis pattern matrix.
| Item | Factor 1 (Digital Skills) | Factor 2 (AI Augmentation Attitude) | Factor 3 (AI Replacement Attitude) |
| Digital skill in daily life (qb2_1) | 0.83 | | |
| Digital skill for job tasks (qb2_2) | 0.83 | | |
| Digital skill for future work (qb2_3) | 0.82 | | |
| Benefit from digital learning (qb2_4) | 0.82 | | |
| AI increases task pace (qb6_2) | | 0.59 | |
| AI makes accurate decisions (qb6_4) | | 0.72 | |
| AI does boring/repetitive jobs (qb6_6) | | 0.68 | |
| AI helps people at work/home (qb6_8) | | 0.57 | |
| AI steals jobs (qb6_1) | | | 0.82 |
| More jobs disappear than are created (qb6_5) | | | 0.72 |
Note: Principal axis factoring with oblimin rotation; loadings < 0.30 suppressed for clarity. KMO = 0.833, Bartlett’s test p < 0.001.

Appendix A.3. Confirmatory Factor Analysis (CFA)

Appendix A.3.1. Model Specification

Following the EFA results, a three-factor confirmatory factor analysis model was specified to validate the factor structure. The model included three latent constructs: Digital Skills, AI Augmentation Attitude, and AI Replacement Attitude.

Appendix A.3.2. Standardized Factor Loadings

Table A4. Confirmatory factor analysis: standardized factor loadings.
| Factor and Item | Standardized Loading |
| Factor 1: Digital Skills | |
| Digital skill in daily life (qb2_1) | 0.86 |
| Digital skill for job tasks (qb2_2) | 0.84 |
| Digital skill for future work (qb2_3) | 0.84 |
| Benefit from digital learning (qb2_4) | 0.88 |
| Factor 2: AI Augmentation Attitude | |
| AI increases task pace (qb6_2) | 0.81 |
| AI makes accurate decisions (qb6_4) | 0.76 |
| AI does boring/repetitive jobs (qb6_6) | 0.71 |
| AI helps people at work/home (qb6_8) | 0.67 |
| Factor 3: AI Replacement Attitude | |
| AI steals jobs (qb6_1) | 0.82 * |
| More jobs disappear than are created (qb6_5) | 0.82 * |
Note: * = constrained loading for model identification purposes.

Appendix A.4. Inter-Construct Correlations

Appendix A.4.1. Factor Correlation Matrix

The correlations between the three latent constructs provide insight into the relationships between digital skills, AI augmentation attitudes, and AI replacement attitudes.
Table A5. Factor correlation matrix.
| | Digital Skills | AI Augmentation Attitude | AI Replacement Attitude |
| Digital Skills | — | 0.57 ** | −0.01 |
| AI Augmentation Attitude | 0.57 ** | — | 0.10 * |
| AI Replacement Attitude | −0.01 | 0.10 * | — |
Note: ** p < 0.001; * p < 0.01.

Appendix A.4.2. Interpretation

The correlation analysis reveals the following:
  • A moderate positive correlation (r = 0.57) between Digital Skills and AI Augmentation Attitude suggests that individuals with higher digital skills tend to have more positive attitudes toward AI augmentation.
  • A negligible correlation (r = −0.01) between Digital Skills and AI Replacement Attitude indicates that digital skill levels do not significantly relate to attitudes about AI job replacement.
  • A weak positive correlation (r = 0.10) between AI Augmentation Attitude and AI Replacement Attitude suggests a slight tendency for those who have positive attitudes toward AI augmentation to also acknowledge potential replacement concerns.

Appendix A.5. Reliability and Validity Assessment

Appendix A.5.1. Internal Consistency and Convergent Validity

Table A6. Composite reliability (CR) and average variance extracted (AVE).
| Factor | Number of Items | Composite Reliability (CR) | Average Variance Extracted (AVE) |
| Digital Skills | 4 | 0.92 | 0.73 |
| AI Augmentation Attitude | 4 | 0.83 | 0.55 |
| AI Replacement Attitude | 2 | 0.80 | 0.67 |

Appendix A.5.2. Reliability and Validity Criteria

Composite Reliability (CR): All constructs exceeded the recommended threshold of 0.70, with values ranging from 0.80 to 0.92, indicating good to excellent internal consistency.
Average Variance Extracted (AVE): All three constructs exceeded the recommended threshold of 0.50, indicating adequate convergent validity; Digital Skills and AI Replacement Attitude exceeded it comfortably, while the AI Augmentation Attitude construct (AVE = 0.55) was only marginally above the threshold.
Discriminant Validity: The square root of each construct’s AVE should exceed its correlations with other constructs. This criterion was satisfied for all constructs, confirming discriminant validity.
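The Fornell–Larcker check described here can be verified directly from the reported values in Tables A5 and A6. The sketch below encodes those values (construct names are abbreviated for illustration only):

```python
import math

# AVE values from Table A6 and inter-construct correlations from Table A5
ave = {"DigitalSkills": 0.73, "Augmentation": 0.55, "Replacement": 0.67}
corr = {
    ("DigitalSkills", "Augmentation"): 0.57,
    ("DigitalSkills", "Replacement"): -0.01,
    ("Augmentation", "Replacement"): 0.10,
}

def discriminant_ok(ave, corr):
    """Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed
    the absolute correlation with every other construct."""
    for (a, b), r in corr.items():
        if math.sqrt(ave[a]) <= abs(r) or math.sqrt(ave[b]) <= abs(r):
            return False
    return True

print(discriminant_ok(ave, corr))  # True: criterion satisfied for all pairs
```

The tightest comparison is sqrt(0.55) ≈ 0.74 against the 0.57 correlation between Digital Skills and AI Augmentation Attitude, which still satisfies the criterion.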

Appendix A.5.3. Mathematical Formulations

The reliability and validity indices were calculated using the following formulas:
Composite Reliability (CR):
CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]
Average Variance Extracted (AVE):
AVE = Σ(λ²) / k
where
  • λ = standardized factor loading
  • k = number of items per construct
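These formulas can be checked numerically against the appendix tables: applying them to the standardized loadings reported in Table A4 reproduces the CR and AVE values in Table A6.

```python
def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    error = sum(1 - l**2 for l in loadings)
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings):
    """AVE = sum(lambda^2) / k."""
    return sum(l**2 for l in loadings) / len(loadings)

# Standardized loadings from Table A4
digital_skills = [0.86, 0.84, 0.84, 0.88]
augmentation = [0.81, 0.76, 0.71, 0.67]
replacement = [0.82, 0.82]

for name, lam in [("Digital Skills", digital_skills),
                  ("AI Augmentation Attitude", augmentation),
                  ("AI Replacement Attitude", replacement)]:
    print(f"{name}: CR = {composite_reliability(lam):.2f}, "
          f"AVE = {average_variance_extracted(lam):.2f}")
# Reproduces Table A6: CR = 0.92 / 0.83 / 0.80 and AVE = 0.73 / 0.55 / 0.67
```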

Appendix A.6. Conclusions

Appendix A.6.1. Summary of Assumption Testing

The comprehensive assumption testing revealed the following:
  • Normality: Visual inspection of residual plots (P-P plots and histograms) indicated acceptable approximation to normal distribution across all models.
  • Homoscedasticity: Homoscedasticity was generally satisfied across models, with heteroscedasticity detected in one model and addressed through robust standard errors.
  • Linearity: Linearity was confirmed through residual plot analysis, showing random distribution around zero with no systematic pattern.
  • Independence: Independence was supported by visual inspection of residual patterns with no evidence of systematic dependence.
  • Multicollinearity: All VIF values remained within acceptable limits (< 5.0), with mean-centering successfully addressing interaction term concerns.

Appendix A.6.2. Model Appropriateness

The psychometric analysis supports the three-factor structure of the measurement model. Statistical assumptions were adequately met or appropriately addressed through robust analytical techniques. The constructs demonstrate adequate reliability and validity, confirming their appropriateness for subsequent structural equation modeling, mediation analysis, and moderation testing.

Appendix A.6.3. Methodological Transparency

This appendix provides complete documentation of assumption testing procedures and diagnostic assessments. Where specific numerical results from normality tests, descriptive statistics, or additional diagnostic procedures are needed, these can be provided upon request to maintain full transparency while focusing on the most critical diagnostic information here.

Appendix B. Variable Construction and Definitions

Appendix B.1. Overview

This appendix provides detailed information about the construction, coding, and recoding procedures for all variables used in the analysis. All variables were derived from survey questionnaire items and underwent systematic data preparation procedures to ensure analytical appropriateness.

Appendix B.2. Primary Variables

Appendix B.2.1. Employer Digital Support

Conceptual Definition: The extent to which employees perceive their employer provides necessary tools and training for effective use of digital technologies, including AI.
Original Survey Item:
  • QB3: “To what extent do you agree or disagree that your employer provides you with the necessary tools or training to work effectively with the most recent digital technologies, including Artificial Intelligence?”
Original Coding:
  • 1 = “Totally agree”
  • 2 = “Tend to agree”
  • 3 = “Tend to disagree”
  • 4 = “Totally disagree”
  • 5 = “Not applicable”
  • 6 = “Do not know”
Recoding Procedure: Non-substantive responses were eliminated: values 5 and 6 were recoded to system-missing. The scale direction was then reversed for intuitive interpretation:
  • Original 1→Recoded 4 (Totally agree)
  • Original 2→Recoded 3 (Tend to agree)
  • Original 3→Recoded 2 (Tend to disagree)
  • Original 4→Recoded 1 (Totally disagree)
Final Scale: 1 = Totally disagree … 4 = Totally agree
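For readers working outside SPSS, the two recoding steps can be sketched in pandas (column names and example responses below are hypothetical):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"qb3": [1, 2, 3, 4, 5, 6, 2]})  # hypothetical raw responses

# Step 1: non-substantive responses (5 = Not applicable, 6 = Do not know) -> missing
df["qb3r"] = df["qb3"].where(df["qb3"].le(4), np.nan)

# Step 2: reverse the 1-4 scale (1<->4, 2<->3), i.e. recoded = 5 - original
df["qb3r"] = 5 - df["qb3r"]

print(df["qb3r"].tolist())  # [4.0, 3.0, 2.0, 1.0, nan, nan, 3.0]
```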

Appendix B.2.2. Digital Skills

Conceptual Definition: Self-assessed competence in using digital technologies across multiple life domains.
Original Survey Items:
  • QB2.1: “You consider yourself to be sufficiently skilled in the use of digital technologies… in your daily life.”
  • QB2.2: “You consider yourself to be sufficiently skilled in the use of digital technologies… to do your job.”
  • QB2.3: “You consider yourself to be sufficiently skilled in the use of digital technologies… to do a future job if you were to find a job or to change jobs within the next twelve months.”
  • QB2.4: “You consider yourself to be sufficiently skilled in the use of digital technologies… to be able to benefit from digital and online learning opportunities.”
Context Provided: “Think about, for example, using smart appliances at home that you can control remotely for heating, cooking, or cleaning; using apps on your smartphone to handle your daily schedule; or using websites to manage your tasks at work.”
Original Coding: Same as Employer Digital Support (1–6 scale).
Recoding Procedure: Same systematic recoding as Employer Digital Support applied to all four items:
  • qb2_1→qb2_1r
  • qb2_2→qb2_2r
  • qb2_3→qb2_3r
  • qb2_4→qb2_4r
Index Construction:
  • DigSkill_Index = MEAN (qb2_1r, qb2_2r, qb2_3r, qb2_4r)
Final Scale: 1 = Totally disagree … 4 = Totally agree (continuous composite)
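The SPSS MEAN() composite can be mirrored in pandas, whose row-wise mean likewise averages over the non-missing items (the item values below are hypothetical):

```python
import pandas as pd
import numpy as np

# Hypothetical recoded items on the 1-4 scale (NaN = removed non-substantive answer)
items = pd.DataFrame({
    "qb2_1r": [4, 3, np.nan],
    "qb2_2r": [4, 2, 2],
    "qb2_3r": [3, 3, 2],
    "qb2_4r": [4, 2, np.nan],
})

# SPSS MEAN() averages the valid items; .mean(axis=1) skips NaN by default,
# so partial responses still receive a composite score
items["DigSkill_Index"] = items.mean(axis=1)
print(items["DigSkill_Index"].round(2).tolist())  # [3.75, 2.5, 2.0]
```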

Appendix B.2.3. AI Attitudes

AI Augmentation Attitude
Conceptual Definition: Positive perceptions of AI and robotics as beneficial tools that enhance human capabilities and productivity.
Original Survey Items:
  • QB6.2: “Robots and Artificial Intelligence are a good thing for society, because they help people do their jobs or carry out daily tasks at home.”
  • QB6.4: “Artificial Intelligence is necessary as it can do jobs that are seen as boring or repetitive.”
  • QB6.6: “Robots and Artificial Intelligence increase the pace at which workers complete tasks.”
  • QB6.8: “Robots and Artificial Intelligence can be used to make accurate decisions in the workplace.”
AI Replacement Attitude
Conceptual Definition: Perceptions of AI and robotics as threats to employment and job security.
Original Survey Items:
  • QB6.1: “Due to the use of robots and Artificial Intelligence, more jobs will disappear than new jobs will be created.”
  • QB6.5: “Robots and Artificial Intelligence steal people’s jobs.”
Original Coding for all AI Attitude Items: Same 1–6 scale as previous variables.
Recoding Procedure: Same systematic recoding applied to all items (qb6_1r through qb6_8r).
Scale Construction:
  • AI Augmentation Attitude = MEAN (qb6_2r, qb6_4r, qb6_6r, qb6_8r)
  • AI Replacement Attitude = MEAN (qb6_1r, qb6_5r)
Final Scales: 1 = Totally disagree … 4 = Totally agree (continuous composites)

Appendix B.2.4. AI Individual Perception

Conceptual Definition: Overall individual assessment of AI and robotics use in workplace contexts.
Original Survey Item:
  • QB5: “How positively or negatively do you perceive the use of robots and Artificial Intelligence in the workplace?”
Original Coding:
  • 1 = “Very positively”
  • 2 = “Fairly positively”
  • 3 = “Fairly negatively”
  • 4 = “Very negatively”
  • 5 = “Not applicable”
  • 6 = “Do not know”
Recoding Procedure: Same systematic approach—eliminated non-substantive responses and reversed scale.
Final Scale: 1 = Very negatively … 4 = Very positively

Appendix B.2.5. Employer Transparency

Conceptual Definition: Whether the employer informed the employee about AI/digital technology use in the workplace.
Original Survey Items (QB9 series—multiple binary indicators):
  • qb9_1: “Employer informed you about AI without further details” (1 = Yes; 0 = No)
  • qb9_2: “Employer gave detailed explanation” (1 = Yes; 0 = No)
  • qb9_3: “Employer gave access to personal data” (1 = Yes; 0 = No)
  • qb9_4: “Employer gave results from automated analysis” (1 = Yes; 0 = No)
  • qb9_5: “Employer did not inform at all” (1 = Yes; 0 = No)
  • qb9_6: “Not applicable” (student/retired, etc.)
  • qb9_7: “Do not know”
Recoding Logic:
  • Informed (1): If any of qb9_1 through qb9_4 = 1 (informed in any way)
  • Not Informed (0): If qb9_5 = 1 (“Not informed”)
  • System Missing: If qb9_6 (“Not applicable”) or qb9_7 (“Do not know”)
Final Variable: Binary (0 = Not informed, 1 = Informed)
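The recoding logic above can be sketched as follows (the four example respondents are hypothetical):

```python
import pandas as pd
import numpy as np

# Hypothetical binary indicators for four respondents
df = pd.DataFrame({
    "qb9_1": [1, 0, 0, 0], "qb9_2": [0, 0, 0, 0],
    "qb9_3": [0, 0, 0, 0], "qb9_4": [1, 0, 0, 0],
    "qb9_5": [0, 1, 0, 0],  # "not informed at all"
    "qb9_6": [0, 0, 1, 0],  # not applicable
    "qb9_7": [0, 0, 0, 1],  # do not know
})

informed_any = df[["qb9_1", "qb9_2", "qb9_3", "qb9_4"]].any(axis=1)
df["Transparency"] = np.select(
    [informed_any, df["qb9_5"].eq(1)],  # conditions evaluated in order
    [1, 0],
    default=np.nan,  # qb9_6 / qb9_7 -> system missing
)
print(df["Transparency"].tolist())  # [1.0, 0.0, nan, nan]
```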

Appendix B.3. Control Variables

Appendix B.3.1. Age

Original Variable: d11 (exact age in years).
Recoding: Created categorical variable d11r1:
  • 1 = 15–24 years
  • 2 = 25–39 years
  • 3 = 40–54 years
  • 4 = 55–98 years
Usage: Categorical predictor with 4 levels.
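A pandas equivalent of this age banding (illustrative ages only):

```python
import pandas as pd

ages = pd.Series([18, 30, 47, 70, 55, 24])  # hypothetical exact ages (d11)

# d11r1: 1 = 15-24, 2 = 25-39, 3 = 40-54, 4 = 55-98
d11r1 = pd.cut(ages, bins=[14, 24, 39, 54, 98], labels=[1, 2, 3, 4])
print(d11r1.tolist())  # [1, 2, 3, 4, 4, 1]
```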

Appendix B.3.2. Gender

Original Variable: d10.
  • 1 = Man
  • 2 = Woman
  • 3 = “None of the above/Nonbinary/Prefer not to say”
Usage: Used exactly as coded in the original data.

Appendix B.3.3. Education (Highest Level Attained)

Original Variables: d13 and d13r (existing education variables).
Recoded Variable: Highest_Level_Education:
  • 1 = Pre-primary (no education)
  • 2 = Primary
  • 3 = Lower secondary
  • 4 = Upper secondary
  • 5 = Post-secondary non-tertiary (vocational)
  • 6 = Short-cycle tertiary
  • 7 = Bachelor’s or equivalent
  • 8 = Master’s or equivalent
  • 9 = Doctoral or equivalent
Usage: Categorical predictor with 9 levels (reference category = pre-primary).

Appendix B.4. Data Quality and Missing Values

Appendix B.4.1. Missing Data Treatment

Systematic Approach: All variables used identical missing data handling:
  • “Not applicable” responses→System missing
  • “Do not know” responses→System missing
  • Only substantive responses (agreement/disagreement) retained for analysis
Rationale: This approach ensures that only respondents with meaningful opinions on each topic contribute to the analysis, avoiding potential bias from non-substantive responses.

Appendix B.4.2. Scale Construction Reliability

Composite Variables: All multi-item scales (Digital Skills, AI Augmentation Attitude, AI Replacement Attitude) were constructed using mean-based approaches, which provide:
  • Robustness to minor missing data within scales
  • Intuitive interpretation (same metric as original items)
  • Appropriate handling of partial responses

Appendix B.5. Variable Summary

Table A7. Final variable summary.
| Variable | Type | Range | Scale | Construction Method |
| Employer Digital Support | Continuous | 1–4 | Single item, reverse-coded | Direct recoding |
| Digital Skills | Continuous | 1–4 | Mean of 4 items | Mean composite |
| AI Augmentation Attitude | Continuous | 1–4 | Mean of 4 items | Mean composite |
| AI Replacement Attitude | Continuous | 1–4 | Mean of 2 items | Mean composite |
| AI Individual Perception | Continuous | 1–4 | Single item, reverse-coded | Direct recoding |
| Employer Transparency | Binary | 0–1 | Multiple items combined | Logical recoding |
| Age | Categorical | 1–4 | 4 age groups | Categorical recoding |
| Gender | Categorical | 1–3 | 3 categories | Original coding |
| Education | Categorical | 1–9 | 9 education levels | Categorical recoding |

Appendix B.6. Methodological Notes

Appendix B.6.1. Scale Direction

All continuous variables were coded so that higher values represent more positive responses:
  • Higher Digital Skills = Greater self-assessed competence
  • Higher AI Augmentation Attitude = More positive view of AI benefits
  • Higher AI Replacement Attitude = Greater concern about job displacement
  • Higher AI Individual Perception = More positive workplace AI perception
  • Higher Employer Digital Support = Greater perceived employer support

Appendix B.6.2. Analytical Considerations

Composite Reliability: All multi-item scales demonstrated adequate reliability (see Appendix A for psychometric details).
Missing Data: The systematic approach to missing data ensures consistent treatment across all variables while preserving only meaningful responses for analysis.
Categorical Variables: All categorical control variables were treated as nominal predictors in analyses, with appropriate reference categories specified.

Appendix C. Supplementary Analyses

Appendix C.1. Overview

This appendix presents additional analyses conducted to explore relationships identified during the main investigation. These analyses were referenced in Section 4 but are reported here in detail to avoid cluttering Section 3 while providing complete transparency for interested readers.

Appendix C.2. Supplementary Mediation Analyses

Appendix C.2.1. Mediation Analysis 1: Digital Skills→Augmentation Attitude→AI Individual Perception

Research Question: To what extent does AI augmentation attitude mediate the relationship between digital skills and AI individual perception?
Theoretical Rationale: This analysis explores whether positive attitudes toward AI augmentation serve as a primary pathway through which digital skills influence individual perceptions of AI in the workplace.
Model Specification:
  • Independent Variable (X): Digital Skills
  • Mediator (M): AI Augmentation Attitude
  • Dependent Variable (Y): AI Individual Perception
  • Sample Size: 7943
Results
Model Summary:
  • R = 0.4669, R² = 0.2180, F(1, 7941) = 2213.2002, p < 0.001
Path Coefficients:
Path a (Digital Skills→AI Augmentation Attitude):
  • Coefficient = 0.4029, SE = 0.0086, t = 47.0447, p < 0.001, 95% CI [0.3862, 0.4197]
Path b (AI Augmentation Attitude→AI Individual Perception, controlling for Digital Skills):
  • Coefficient = 0.5822, SE = 0.0102, t = 57.1072, p < 0.001, 95% CI [0.5622, 0.6022]
Direct and Indirect Effects:
Total Effect of Digital Skills on AI Individual Perception:
  • Effect = 0.4182, SE = 0.0092, t = 45.2534, p < 0.001, 95% CI [0.4001, 0.4363]
Direct Effect of Digital Skills on AI Individual Perception (c’):
  • Effect = 0.1836, SE = 0.0088, t = 20.8681, p < 0.001, 95% CI [0.1664, 0.2009]
Indirect Effect (Mediation) through AI Augmentation Attitude:
  • Effect = 0.2346, Bootstrap SE = 0.0076, 95% Bootstrap CI [0.2195, 0.2494]
Interpretation
Mediation Effect: The analysis reveals that AI augmentation attitude explains 56.1% of the relationship between digital skills and AI individual perception (0.2346/0.4182 = 56.1%), indicating a substantial mediation effect.
Statistical Significance: The bootstrap confidence interval for the indirect effect does not include zero [0.2195, 0.2494], confirming significant mediation.
Conclusion: Augmentation attitude serves as a primary mediating pathway, suggesting that digital skills influence AI perception largely through fostering positive attitudes about AI’s augmentative capabilities.
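The reported decomposition can be verified by hand: the indirect effect is the product of paths a and b, and the proportion mediated is the indirect effect divided by the total effect.

```python
a = 0.4029       # path a: Digital Skills -> AI Augmentation Attitude
b = 0.5822       # path b: Augmentation Attitude -> Perception, controlling for skills
direct = 0.1836  # direct effect c'

indirect = a * b
total = direct + indirect
proportion_mediated = indirect / total

print(round(indirect, 4))             # 0.2346, matching the reported indirect effect
print(round(total, 4))                # 0.4182, matching the reported total effect
print(round(proportion_mediated, 3))  # 0.561 -> the 56.1% mediation figure
```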

Appendix C.2.2. Mediation Analysis 2: Digital Skills→Replacement Attitude→AI Individual Perception

Research Question: To what extent does AI replacement attitude mediate the relationship between digital skills and AI individual perception?
Theoretical Rationale: This analysis examines whether concerns about AI replacement serve as a mediating mechanism between digital skills and AI perceptions, providing a comparison to the augmentation pathway.
Model Specification:
  • Independent Variable (X): Digital Skills
  • Mediator (M): AI Replacement Attitude
  • Dependent Variable (Y): AI Individual Perception
  • Sample Size: 11,344
Results
Model Summary:
  • R = 0.0488, R² = 0.0024, F(1, 11342) = 27.0289, p < 0.001
Path Coefficients:
Path a (Digital Skills→AI Replacement Attitude):
  • Coefficient = −0.0298, SE = 0.0057, t = −5.1989, p < 0.001, 95% CI [−0.0411, −0.0186]
Path b (AI Replacement Attitude→AI Individual Perception, controlling for Digital Skills):
  • Coefficient = −0.2445, SE = 0.0133, t = −18.3600, p < 0.001, 95% CI [−0.2706, −0.2184]
Direct and Indirect Effects:
Total Effect of Digital Skills on AI Individual Perception:
  • Effect = 0.4032, SE = 0.0081, t = 49.6965, p < 0.001, 95% CI [0.3873, 0.4191]
Direct Effect of Digital Skills on AI Individual Perception (c’):
  • Effect = 0.3959, SE = 0.0081, t = 48.9683, p < 0.001, 95% CI [0.3829, 0.4148]
Indirect Effect (Mediation) through AI Replacement Attitude:
  • Effect = 0.0073, Bootstrap SE = 0.0014, 95% Bootstrap CI [0.0045, 0.0101]
Interpretation
Mediation Effect: The analysis shows that AI replacement attitude explains only 1.8% of the relationship between digital skills and AI individual perception (0.0073/0.4032 = 1.8%), indicating minimal mediation.
Statistical Significance: The bootstrap confidence interval for the indirect effect does not include zero [0.0045, 0.0101], confirming significant but very small mediation.
Conclusion: Replacement attitude plays a limited role as a mediating pathway, suggesting that concerns about job displacement operate largely independently from the digital skills→perception relationship.

Appendix C.3. Cross-Tabulation Analysis

Appendix C.3.1. Augmentation–Replacement Balance and AI Replacement Attitude

Research Question: How does the balance between AI augmentation and replacement attitudes relate to dichotomized AI replacement attitudes?
Examined Variables:
  • Row Variable: Augmentation–Replacement Balance (Mdiff variable)
  • Column Variable: AI Replacement Attitude (Dichotomized: Agree vs. Disagree)
Analytical Purpose: This cross-tabulation explores the relationship between different levels of augmentation-dominant attitudes and agreement with AI replacement concerns, providing evidence for the concept of “cautious optimism” in AI attitudes.
Results
Table A8. Cross-tabulation: augmentation–replacement balance by AI replacement attitude.
| Augmentation–Replacement Balance | AI Replacement Attitude: Disagree | AI Replacement Attitude: Agree | Total |
| Low-Augmentation-Dominant (0.88) | 32 (3.7%) | 822 (96.3%) | 854 (100.0%) |
| Medium-Augmentation-Dominant (1.88) | 97 (34.4%) | 185 (65.6%) | 282 (100.0%) |
Note: Percentages calculated within each balance category (row percentages).
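The row percentages in Table A8 follow directly from the cell counts; a minimal check:

```python
import pandas as pd

# Cell counts from Table A8
counts = pd.DataFrame(
    {"Disagree": [32, 97], "Agree": [822, 185]},
    index=["Low-Augmentation-Dominant", "Medium-Augmentation-Dominant"],
)

# Row percentages: each cell divided by its row total
row_pct = counts.div(counts.sum(axis=1), axis=0) * 100
print(row_pct.round(1))
# Agree: 96.3% in the low group vs 65.6% in the medium group, as reported
```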
Interpretation
Pattern Description: The cross-tabulation reveals a striking pattern supporting the concept of “cautious optimism” in AI attitudes:
  • Low-Augmentation-Dominant Group (0.88): The group shows overwhelming agreement with AI replacement concerns (96.3% agree vs. 3.7% disagree).
  • Medium-Augmentation-Dominant Group (1.88): The group shows moderate but substantial agreement with replacement concerns (65.6% agree vs. 34.4% disagree).
Key Findings:
  1. Cautious Optimism Pattern: Even individuals with augmentation-dominant attitudes maintain moderate to high levels of replacement concerns, reflecting sophisticated, nuanced thinking about AI’s workplace implications.
  2. Persistent Concerns: Replacement concerns remain substantial across both groups, suggesting that positive and negative AI attitudes coexist rather than oppose each other.
  3. Gradient of Concern: Higher augmentation-dominance is associated with reduced replacement concerns, but concerns remain prevalent even in the most optimistic group.
Theoretical Implications: This pattern supports the notion that replacement attitude functions as a separate parameter driving cautious, responsible AI adoption rather than simply representing the opposite of augmentation attitudes. The coexistence of positive augmentation views with replacement concerns suggests that individuals can simultaneously appreciate AI’s benefits while maintaining realistic caution about potential job displacement.

Appendix C.4. Summary of Supplementary Findings

Appendix C.4.1. Key Insights

The supplementary analyses provide crucial additional context for interpreting the main findings:
  1. Dramatic Mediation Difference: Augmentation attitude (56.1%) vastly outweighs replacement attitude (1.8%) in mediating the digital skills→perception relationship, a 54.3 percentage point difference in explanatory power.
  2. Strong Path-Coefficient Contrast: The path coefficients reveal a substantial difference in how digital skills relate to each attitude type:
    • Digital Skills→Augmentation Attitude: β = 0.4029 (strong positive)
    • Digital Skills→Replacement Attitude: β = −0.0298 (weak negative)
    • Difference: Δβ ≈ 0.43
  3. Separate Attitude Dimensions: The analyses demonstrate that augmentation and replacement attitudes operate as distinct, largely independent dimensions rather than opposing ends of a single continuum.
  4. Cautious Optimism: The cross-tabulation reveals that even augmentation-dominant individuals maintain substantial replacement concerns, supporting a model of cautious optimism rather than uncritical enthusiasm for AI.

Appendix C.4.2. Integration with Main Results

These supplementary analyses complement the main findings by the following:
  • Clarifying Mediation Pathways: Demonstrating that positive AI perception develops primarily through augmentation attitudes rather than through reduced replacement concerns.
  • Supporting Policy Implications: Providing evidence that fostering augmentation attitudes and digital skills development may be more effective for positive AI adoption than simply addressing replacement fears.
  • Revealing Attitude Complexity: Showing that individuals can simultaneously hold positive augmentation views while maintaining responsible caution about replacement risks.

Appendix D. Reproducibility Information

Appendix D.1. Overview

This appendix provides complete information necessary to reproduce all analyses reported in this study. All data and analyses are based on publicly available datasets, ensuring full transparency and replicability of findings.

Appendix D.2. Dataset Information

Data Source: Eurobarometer 104.1 (2024)
Access: Open dataset publicly available at https://search.gesis.org/research_data/ZA8844 (accessed on 29 May 2025)
Sample Characteristics:
  • Total sample size varies by analysis due to missing data handling
  • Cross-sectional survey data
  • The survey employed a multi-stage, random probability sampling design stratified by region and density of 27 EU member states between April 2024 and May 2024.
Ethical Considerations: As this study uses publicly available, anonymized secondary data, no additional ethical approval was required beyond that obtained by the original data collectors.

Appendix D.3. Software and Statistical Environment

Primary Analysis Software: IBM SPSS Statistics V. 25
Supplementary Tools:
  • Hayes PROCESS macro for SPSS v4.2

Appendix D.4. Variable Construction

Reference: Complete variable construction procedures, including original survey items, coding schemes, recoding procedures, and scale construction methods, are detailed in Appendix B: Variable Construction and Definitions.
Key Variables Used in Analyses:
  • Employer Digital Support
  • Digital Skills (composite scale)
  • AI Individual Perception
  • Augmentation–Replacement Attitude Balance (composite scale)
  • AI Replacement Attitude (composite scale)
  • AI Augmentation Attitude (composite scale)
  • Employer Transparency (binary)
  • Control variables: Age (categorical), Gender, Education (categorical)

Appendix D.5. Analytical Procedures

Appendix D.5.1. Analysis 1: Linear Regression (Employer Digital Support and Digital Skills)

Objective: Examine the relationship between digital skills and employer digital support while controlling for demographic factors.
SPSS Procedure:
  • Analyze→Regression→Linear
  • Method: Enter (Block-wise)
Model Specification:
  • Dependent Variable: Employer Digital Support
  • Block 1 (Controls): Age, Gender, Education
  • Block 2 (Predictor): Digital Skills
Statistical Settings:
  • Entry method: Enter
  • Block-wise entry with controls entered first
  • Default confidence intervals (95%)
  • Standard residual diagnostics enabled

Appendix D.5.2. Analysis 2: Mediation Analysis (Main Hypothesis Testing)

Objective: Test mediation of digital skills→AI individual perception relationship through augmentation–replacement attitude balance.
SPSS Procedure:
  • Hayes PROCESS macro, Model 4 (Simple Mediation)
Model Specification:
  • Independent Variable (X): Digital Skills
  • Mediator (M): Augmentation–Replacement Attitude Balance
  • Dependent Variable (Y): AI Individual Perception
  • Covariates: Age, Gender, Education
Statistical Settings:
  • Bootstrap samples: 5000
  • Confidence level: 95%
  • Heteroscedasticity-Consistent Standard Errors: HC3 (due to detected heteroscedasticity in assumption testing)
  • Total Effect Option: Enabled (to report total, direct, and indirect effects)
  • Missing data: Listwise deletion
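The percentile-bootstrap logic behind PROCESS Model 4 can be sketched in plain NumPy. The example below uses synthetic data with a built-in indirect effect; it illustrates the resampling procedure only, not the study's actual analysis (which was run in SPSS with covariates and HC3 errors):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic X -> M -> Y data with a true indirect effect of a*b = 0.4 * 0.5 = 0.2
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(scale=0.8, size=n)
y = 0.2 * x + 0.5 * m + rng.normal(scale=0.8, size=n)

def ols(cols, y):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect(x, m, y):
    a = ols([x], m)[1]        # path a: X -> M
    b = ols([x, m], y)[2]     # path b: M -> Y, controlling for X
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate a*b each time
boot = [indirect(x[i], m[i], y[i])
        for i in (rng.integers(0, n, size=n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(x, m, y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A CI that excludes zero, as in the analyses reported above, indicates a significant indirect effect.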

Appendix D.5.3. Analysis 3: Moderation Analysis

Objective: Test whether employer transparency moderates the relationship between AI replacement attitude and AI individual perception.
SPSS Procedure:
  • Hayes PROCESS macro, Model 1 (Simple Moderation)
Model Specification:
  • Independent Variable (X): AI Replacement Attitude
  • Moderator (W): Employer Transparency
  • Dependent Variable (Y): AI Individual Perception
  • Covariates: Age, Gender, Education
Statistical Settings:
  • Bootstrap samples: 5000
  • Confidence level: 95%
  • Mean-centering: Continuous
  • Missing data: Listwise deletion
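The mean-centered interaction specification behind Model 1 can be sketched as follows (synthetic stand-in data, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 800

# Synthetic stand-ins: a continuous 1-4-type attitude, a binary moderator,
# and an outcome with a built-in buffering interaction of 0.25
x = rng.normal(loc=2.5, scale=0.7, size=n)    # replacement attitude
w = rng.integers(0, 2, size=n).astype(float)  # 0 = not informed, 1 = informed
y = 3.0 - 0.4 * x + 0.25 * (x - x.mean()) * w + rng.normal(scale=0.5, size=n)

# Mean-center the continuous predictor before forming the product term,
# mirroring the mean-centering setting reported above
xc = x - x.mean()
X = np.column_stack([np.ones(n), xc, w, xc * w])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"interaction coefficient = {beta[3]:.2f}")
# A positive interaction here means the moderator flattens the negative slope
```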

Appendix D.5.4. Supplementary Analysis 1: Augmentation Attitude Mediation

Objective: Test mediation through AI augmentation attitude specifically.
SPSS Procedure:
  • Hayes PROCESS macro, Model 4 (Simple Mediation)
Model Specification:
  • Independent Variable (X): Digital Skills
  • Mediator (M): AI Augmentation Attitude
  • Dependent Variable (Y): AI Individual Perception
  • Covariates: None
Statistical Settings:
  • Bootstrap samples: 5000
  • Confidence level: 95%
  • Missing data: Listwise deletion

Appendix D.5.5. Supplementary Analysis 2: Replacement Attitude Mediation

Objective: Test mediation through AI replacement attitude specifically.
SPSS Procedure:
  • Hayes PROCESS macro, Model 4 (Simple Mediation)
Model Specification:
  • Independent Variable (X): Digital Skills
  • Mediator (M): AI Replacement Attitude
  • Dependent Variable (Y): AI Individual Perception
  • Covariates: None
Statistical Settings:
  • Bootstrap samples: 5000
  • Confidence level: 95%
  • Missing data: Listwise deletion

Appendix D.5.6. Supplementary Analysis 3: Cross-Tabulation

Objective: Examine the relationship between augmentation–replacement balance and dichotomized AI replacement attitude.
SPSS Procedure:
  • Analyze→Descriptive Statistics→Crosstabs
Model Specification:
  • Row Variable: Augmentation–Replacement Balance
  • Column Variable: AI Replacement Attitude (Dichotomized)
Statistical Settings:
  • Cell percentages: Row percentages
  • Statistics: Chi-square test
  • Expected counts displayed

Appendix D.6. Missing Data Handling

Primary Strategy: Listwise deletion (SPSS default for PROCESS macro)
Rationale:
  • Consistent with variable construction approach (see Appendix B)
  • Maintains interpretability across analyses
  • Sample sizes remain adequate for all analyses
Impact: Sample sizes vary across analyses:
  • Main linear regression: N = 11,410
  • Main mediation analysis: N = 3635
  • Main moderation analysis: N = 11,338
  • Supplementary mediation 1: N = 7943
  • Supplementary mediation 2: N = 11,344
  • Cross-tabulation: N = 1136 (selected categories)

Appendix D.7. Assumption Testing Procedures

Reference: Complete assumption testing procedures and diagnostics are reported in Appendix A: Psychometric Properties and Statistical Analysis.
Key Methodological Decisions:
  • Heteroscedasticity: HC3 robust standard errors were implemented in the main mediation analysis.
  • Normality: Visual diagnostic approaches were supplemented by bootstrap procedures.
  • Multicollinearity: VIF diagnostics were conducted; mean-centering was applied in the moderation analysis.
  • Outliers: Outliers were retained with robust estimation techniques.

Appendix D.8. Reproducibility Checklist

For Complete Replication:
Dataset Access: Obtain the original dataset from https://search.gesis.org/research_data/ZA8844 (accessed on 29 May 2025)
Variable Construction: Follow procedures in Appendix B exactly
Software Requirements: Install SPSS with Hayes PROCESS macro
Analysis Sequence: Execute analyses in order presented (Appendix D.5.1 through Appendix D.5.6)
Statistical Settings: Use exact settings specified for each analysis
Missing Data: Apply listwise deletion consistently across all analyses
Assumption Testing: Conduct diagnostics as detailed in Appendix A
Expected Output Verification:
  • Main mediation: Indirect effect = 0.20 with 95% CI [0.186, 0.224]
  • Main linear regression: Coefficient = 0.42; p < 0.001
  • Main moderation: Interaction effect = 0.14; p < 0.001
  • Supplementary mediation 1: Indirect effect = 0.2346 (56.1% mediation)
  • Supplementary mediation 2: Indirect effect = 0.0073 (1.8% mediation)
  • Cross-tabulation: 96.3% vs. 65.6% agreement pattern
Note: This reproducibility appendix follows best practices for transparent and replicable quantitative research. All procedures are documented to enable independent verification of findings.
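The indirect effects above are estimated by PROCESS with percentile bootstrapping. For readers without SPSS, the same logic can be sketched in plain Python; the data below are synthetic with a true indirect effect of 0.5 × 0.4 = 0.20, so this is a mechanical illustration, not a reproduction of the paper's estimates.

```python
# Percentile-bootstrap indirect effect (PROCESS Model 4 logic) on toy data:
# a path = X -> M slope, b path = M -> Y slope controlling for X,
# indirect = a * b, CI from resampled estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x = rng.normal(size=n)                      # e.g., digital skills
m = 0.5 * x + rng.normal(size=n)            # mediator, e.g., augmentation attitude
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # outcome, e.g., AI perception


def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                     # a path: X -> M
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]    # b path: M -> Y | X
    return a * b


boots = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boots.append(indirect(x[s], m[s], y[s]))

lo, hi = np.percentile(boots, [2.5, 97.5])
est = indirect(x, m, y)
print(f"indirect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A CI excluding zero, as in the main mediation result, indicates a statistically reliable indirect effect under the bootstrap.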

References

  1. European Commission Eurostat. Use of Artificial Intelligence in Enterprises. Digital Economy and Society; European Commission: Brussels, Belgium, 2025. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Use_of_artificial_intelligence_in_enterprises (accessed on 2 December 2025).
  2. Mayer, H.; Yee, L.; Chui, M.; Roberts, R. Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential at Work; McKinsey Digital, 2025. Available online: https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/superagency%20in%20the%20workplace%20empowering%20people%20to%20unlock%20ais%20full%20potential%20at%20work/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-v4.pdf (accessed on 20 July 2025).
  3. European Commission. On Artificial Intelligence: A European Approach to Excellence and Trust (COM/2020/0065). 2020. Available online: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A52020DC0065 (accessed on 13 July 2025).
  4. The White House. Removing Barriers to American Leadership in Artificial Intelligence (Executive Order No. 14179, 90 Fed. Reg. 8741); Federal Register, 2025. Available online: https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence (accessed on 10 January 2026).
  5. The White House. Accelerating Federal Permitting of Data Center Infrastructure (Executive Order No. 14318, 90 Fed. Reg. 35385); Federal Register, 2025. Available online: https://www.federalregister.gov/documents/2025/07/28/2025-14212/accelerating-federal-permitting-of-data-center-infrastructure (accessed on 10 January 2026).
  6. Migliorini, S. China’s Interim Measures on generative AI: Origin, content and significance. Comput. Law Secur. Rev. 2024, 53, 105985. [Google Scholar] [CrossRef]
  7. European Commission. Annex Digital Europe: Work Programme 2025–2027 of the Digital Europe Programme. 2025. Available online: https://ec.europa.eu/newsroom/dae/redirection/document/114219 (accessed on 19 July 2025).
  8. European Commission. Eurobarometer 101.4 (2024) (ZA8844; Version 1.0.0); GESIS: Cologne, Germany, 2025. [CrossRef]
  9. Organisation for Economic Co-operation and Development. OECD AI Principles Overview; OECD.AI, 2025. Available online: https://oecd.ai/en/ai-principles (accessed on 15 June 2025).
  10. Wendehorst, C.; Nessler, B. Guidelines on the Application of the Definition of an AI System in the AI Act: ELI Proposal for a Three Factor Approach; European Law Institute, 2024. Available online: https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Response_on_the_definition_of_an_AI_System.pdf (accessed on 12 July 2025).
  11. European Parliament; Council of the European Union. Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence and Amending other Union Acts (Artificial Intelligence Act) (Regulation (EU) 2024/1689). 2024. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 9 January 2026).
  12. Pozzi, F.; Valetto, P.; Kuiper, E. AI’s Impact on Europe’s Job Market: A Call for a Social Compact; Social Europe, 2025. Available online: https://www.socialeurope.eu/ais-impact-on-europes-job-market-a-call-for-a-social-compact (accessed on 15 August 2025).
  13. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  14. Johnson, B.T.; Martinez-Berman, L.; Curley, C.M. Formation of Attitudes: How People (Wittingly or Unwittingly) Develop Their Viewpoints. Oxford Research Encyclopedia of Psychology. 2022. Available online: https://oxfordre.com/psychology/view/10.1093/acrefore/9780190236557.001.0001/acrefore-9780190236557-e-812 (accessed on 14 July 2025).
  15. Fazio, R.H.; Olson, M.A. The Mode Model. Dual-Process Theories of the Social Mind. 2014. Available online: https://www.asc.ohio-state.edu/psychology/fazio/documents/FazioOlson_DualProcessVolume__Feb062013.pdf (accessed on 4 July 2025).
  16. Robbins, S.P.; Judge, T.A. Organizational Behavior, 19th ed.; Pearson: London, UK, 2024; Available online: https://www.pearson.com/en-us/subject-catalog/p/organizational-behavior/P200000007044/9780137687206?srsltid=AfmBOoq3lfnuZge_rVVm_LuZjG210qeDdA9mPL_wdO930R24-b40mHdG (accessed on 20 June 2025).
  17. Curtarelli, M.; Gualtieri, V.; Jannati, M.S.; Donlevy, V. ICT for Work: Digital Skills in the Workplace; European Commission: Brussels, Belgium, 2016. Available online: https://digital-strategy.ec.europa.eu/en/library/ict-work-digital-skills-workplace (accessed on 15 July 2025).
  18. Nikou, S.; De Reuver, M.; Mahboob Kanafi, M. Workplace literacy skills—How information and digital literacy affect adoption of digital technology. J. Doc. 2022, 78, 371–391. [Google Scholar] [CrossRef]
  19. Shakina, E.; Parshakov, P.; Alsufiev, A. Rethinking the corporate digital divide: The complementarity of technologies and the demand for digital skills. Technol. Forecast. Soc. Change 2021, 162, 120405. [Google Scholar] [CrossRef]
  20. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  21. Davis, F.D. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1985. Available online: https://dspace.mit.edu/bitstream/handle/1721.1/15192/14927137-MIT.pdf (accessed on 30 June 2025).
  22. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  23. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215. Available online: https://psycnet.apa.org/doi/10.1037/0033-295X.84.2.191 (accessed on 30 June 2025). [CrossRef]
  24. Bilmes, J. Charles R. Berger and James J. Bradac, Language and social knowledge: Uncertainty in interpersonal relations. London: Edward Arnold, 1982. Pp. viii + 151. Lang. Soc. 1984, 13, 87–90. [Google Scholar] [CrossRef]
  25. Ellis, A. Reason and Emotion in Psychotherapy; Lyle Stuart, 1962. Available online: https://psycnet.apa.org/record/1963-01437-000 (accessed on 28 June 2025).
  26. Nickerson, R.S. Confirmation bias: A ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 1998, 2, 175–220. [Google Scholar] [CrossRef]
  27. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  28. Pham, L.; O’Sullivan, B.; Scantamburlo, T.; Mai, T. Addressing digital and AI skills gaps in European living areas: A comparative analysis of small and large communities. Proc. AAAI Conf. Artif. Intell. 2024, 38, 23119–23127. [Google Scholar] [CrossRef]
  29. Santana, M.; Díaz-Fernández, M. Competencies for the artificial intelligence age: Visualisation of the state of the art and future perspectives. Rev. Manag. Sci. 2023, 17, 1971–2004. [Google Scholar] [CrossRef]
  30. Vitezić, V.; Perić, M. The role of digital skills in the acceptance of artificial intelligence. J. Bus. Ind. Mark. 2024, 39, 1546–1566. [Google Scholar] [CrossRef]
  31. Audrin, B.; Audrin, C.; Salamin, X. Digital skills at work–Conceptual development and empirical validation of a measurement scale. Technol. Forecast. Soc. Change 2024, 202, 123279. [Google Scholar] [CrossRef]
  32. Borgonovi, F.; Calvino, F.; Criscuolo, C.; Samek, L.; Seitz, H.; Nania, J.; Nitschke, J.; O’Kane, L. Emerging Trends in AI Skill Demand Across 14 OECD Countries; OECD Artificial Intelligence Papers, No. 2; OECD: Paris, France, 2023. [Google Scholar] [CrossRef]
  33. Law, N.; Woo, D.; de la Torre, J.; Wong, G. A Global Framework of Reference on Digital Literacy Skills for Indicator 4.4.2; UNESCO Institute for Statistics, 2018. Available online: https://uis.unesco.org/sites/default/files/documents/ip51-global-framework-reference-digital-literacy-skills-2018-en.pdf (accessed on 19 July 2025).
  34. Miyamoto, K.; Bashir, S. Digital Skills (No. 35080); The World Bank Group, 2020. Available online: https://documents1.worldbank.org/curated/en/099080723145042066/pdf/BOSIB039fceed5094083460f475698a212d.pdf (accessed on 27 June 2025).
  35. Sanz, L.F. Digital Skills: A Deep Dive. Digital Skills & Jobs Platform; EU, 2023. Available online: https://digital-skills-jobs.europa.eu/en/latest/briefs/digital-skills-deep-dive (accessed on 27 July 2025).
  36. Vuorikari, R.; Kluzer, S.; Punie, Y. DigComp 2.2: The Digital Competence Framework for Citizens; Publications Office of the EU: Luxembourg, 2022; EUR 31006 EN. [CrossRef]
  37. van Deursen, A.J.A.M.; van Dijk, J.A.G.M. Improving digital skills for the use of online public information and services. Gov. Inf. Q. 2009, 26, 333–340. [Google Scholar] [CrossRef]
  38. van Deursen, A.J.A.M.; van Dijk, J.A.G.M. Toward a multifaceted model of Internet access for understanding digital inequalities. Inf. Soc. 2015, 31, 379–391. [Google Scholar] [CrossRef]
  39. Coleman Parkes Research; SAS Institute Inc. GenAI in the Enterprise 2024: A Global Survey of Organizational Readiness and Adoption; SAS Institute: Lane Cove, NSW, Australia, 2024; Available online: https://www.sas.com/en_us/news/press-releases/2024/july/genai-research-study-global.html (accessed on 7 July 2025).
  40. Fattorini, L.; Maslej, N.; Perrault, R.; Parli, V.; Etchemendy, J.; Shoham, Y.; Ligett, K. The Global AI Vibrancy Tool. arXiv 2024, arXiv:2412.04486. [Google Scholar] [CrossRef]
  41. Bakker, A.B.; Demerouti, E. Job demands–resources theory: Taking stock and looking forward. J. Occup. Health Psychol. 2017, 22, 273. [Google Scholar] [CrossRef]
  42. Kaushik, D.; Mukherjee, U. High-performance work system: A systematic review of literature. Int. J. Organ. Anal. 2022, 30, 1624–1643. [Google Scholar] [CrossRef]
  43. Ali, M.; Shah, W.M.; Shah, A.U.M. Effect of high involvement work system on perceived employees development. RADS J. Bus. Manag. 2021, 3, 1–17. Available online: https://jbm.juw.edu.pk/index.php/jbm/article/view/50/37 (accessed on 3 July 2025). [CrossRef]
  44. Zahoor, S.; Chaudhry, I.S.; Yang, S.; Ren, X. Artificial intelligence application and high-performance work systems in the manufacturing sector: A moderated-mediating model. Artif. Intell. Rev. 2024, 58, 11. [Google Scholar] [CrossRef]
  45. Morandini, S.; Fraboni, F.; De Angelis, M.; Puzzo, G.; Giusino, D.; Pietrantoni, L. The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Sci. 2023, 26, 39–68. [Google Scholar] [CrossRef] [PubMed]
  46. Nguyen, T.; Elbanna, A. Understanding Human-AI Augmentation in the Workplace: A Review and a Future Research Agenda. Inf. Syst. Front. 2025, 1–21. [Google Scholar] [CrossRef]
  47. Brynjolfsson, E.; Li, D.; Raymond, L. Generative AI at work. Q. J. Econ. 2025, 140, 889–942. [Google Scholar] [CrossRef]
  48. Huang, M.-H.; Rust, R.; Maksimovic, V. The feeling economy: Managing in the next generation of artificial intelligence (AI). Calif. Manag. Rev. 2019, 61, 43–65. [Google Scholar] [CrossRef]
  49. Lebovitz, S.; Lifshitz-Assaf, H.; Levina, N. To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organ. Sci. 2022, 33, 126–148. [Google Scholar] [CrossRef]
  50. Spring, M.; Faulconbridge, J.; Sarwar, A. How information technology automates and augments processes: Insights from Artificial-Intelligence-based systems in professional service operations. J. Oper. Manag. 2022, 68, 592–618. [Google Scholar] [CrossRef]
  51. Chen, C.; Cai, R. Are robots stealing our jobs? Examining robot-phobia as a job stressor in the hospitality workplace. Int. J. Contemp. Hosp. Manag. 2024, 37, 94–112. [Google Scholar] [CrossRef]
  52. Dong, M.; Conway, J.R.; Bonnefon, J.-F.; Shariff, A.; Rahwan, I. Fears about artificial intelligence across 20 countries and six domains of application. Am. Psychol. 2024. Advance online publication. [Google Scholar] [CrossRef]
  53. Gull, A.; Ashfaq, J.; Aslam, M. AI in the workplace: Uncovering its impact on employee well-being and the role of cognitive job insecurity. Int. J. Bus. Econ. Aff. 2023, 8, 79–91. [Google Scholar] [CrossRef]
  54. Vorobeva, D.; El Fassi, Y.; Costa Pinto, D.; Hildebrand, D.; Herter, M.M.; Mattila, A.S. Thinking skills don’t protect service workers from replacement by artificial intelligence. J. Serv. Res. 2022, 25, 601–613. [Google Scholar] [CrossRef]
  55. Gnambs, T.; Stein, J.P.; Appel, M.; Griese, F.; Zinn, S. An economical measure of attitudes towards artificial intelligence in work, healthcare, and education (ATTARI-WHE). Comput. Hum. Behav. Artif. Hum. 2025, 3, 100106. [Google Scholar] [CrossRef]
  56. La Torre, D.; Colapinto, C.; Durosini, I.; Triberti, S. Team formation for human-artificial intelligence collaboration in the workplace: A goal programming model to foster organizational change. IEEE Trans. Eng. Manag. 2021, 70, 1966–1976. [Google Scholar] [CrossRef]
  57. Park, J.; Woo, S.E.; Kim, J. Attitudes towards artificial intelligence at work: Scale development and validation. J. Occup. Organ. Psychol. 2024, 97, 920–951. [Google Scholar] [CrossRef]
  58. Abou Hashish, E.A.; Alnajjar, H. Digital proficiency: Assessing knowledge, attitudes, and skills in digital transformation, health literacy, and artificial intelligence among university nursing students. BMC Med. Educ. 2024, 24, 508. [Google Scholar] [CrossRef]
  59. Galindo-Domínguez, H.; Delgado, N.; Campo, L.; Losada, D. Relationship between teachers’ digital competence and attitudes towards artificial intelligence in education. Int. J. Educ. Res. 2024, 126, 102381. [Google Scholar] [CrossRef]
  60. Sergeeva, O.V.; Masalimova, A.R.; Zheltukhina, M.R.; Chikileva, L.S.; Lutskovskai, L.Y.; Luzin, A. Impact of digital media literacy on attitude toward generative AI acceptance in higher education. Front. Educ. 2025, 10, 1563148. [Google Scholar] [CrossRef]
  61. Ayduğ, D.; Altınpulluk, H. Are Turkish pre-service teachers worried about AI? A study on AI anxiety and digital literacy. AI Soc. 2025, 40, 5823–5834. [Google Scholar] [CrossRef]
  62. Zhao, H.; Wu, P. AI Job substitution risks, digital self-efficacy and mental health among employees. J. Occup. Environ. Med. 2023, 67, 10–1097. [Google Scholar] [CrossRef]
  63. Olson, M.A.; Kendrick, R.V. Origins of Attitudes. In Attitudes and Attitude Change; Crano, W.D., Prislin, R., Eds.; Psychology Press: Hove, UK, 2008; pp. 111–130. Available online: https://psycnet.apa.org/record/2008-09973-006 (accessed on 5 June 2025).
  64. Lord, C.G.; Ross, L.; Lepper, M.R. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. J. Personal. Soc. Psychol. 1979, 37, 2098–2109. [Google Scholar] [CrossRef]
  65. Trenerry, B.; Chng, S.; Wang, Y.; Suhaila, Z.S.; Lim, S.S.; Lu, H.Y.; Oh, P.H. Preparing workplaces for digital transformation: An integrative review and framework of multi-level factors. Front. Psychol. 2021, 12, 620766. [Google Scholar] [CrossRef]
  66. Chiu, Y.T.; Zhu, Y.Q.; Corbett, J. In the hearts and minds of employees: A model of pre-adoptive appraisal toward artificial intelligence in organizations. Int. J. Inf. Manag. 2021, 60, 102379. [Google Scholar] [CrossRef]
  67. Liehner, G.L.; Biermann, H.; Hick, A.; Brauner, P.; Ziefle, M. Perceptions, attitudes and trust towards artificial intelligence—An assessment of the public opinion. Artif. Intell. Soc. Comput. 2023, 72, 32–41. [Google Scholar] [CrossRef]
  68. Kumar, V.R.; Raman, R. Student Perceptions on Artificial Intelligence (AI) in higher education. In Proceedings of the 2022 IEEE Integrated STEM Education Conference (ISEC), Online, 26 March 2022; pp. 450–454. [Google Scholar] [CrossRef]
  69. Naamati-Schneider, L.; Alt, D. Beyond digital literacy: The era of AI-powered assistants and evolving user skills. Educ. Inf. Technol. 2024, 29, 21263–21293. [Google Scholar] [CrossRef]
  70. Engström, A.; Pittino, D.; Mohlin, A.; Johansson, A.; Edh Mirzaei, N. Artificial intelligence and work transformations: Integrating sensemaking and workplace learning perspectives. Inf. Technol. People 2024, 37, 2441–2461. [Google Scholar] [CrossRef]
  71. Hosseini, Z.; Nyholm, S.; Le Blanc, P.M.; Preenen, P.T.; Demerouti, E. Assessing the artificially intelligent workplace: An ethical framework for evaluating experimental technologies in workplace settings. AI Ethics 2023, 4, 285–297. [Google Scholar] [CrossRef]
  72. Van de Poel, I. An ethical framework for evaluating experimental technology. Sci. Eng. Ethics 2016, 22, 667–686. [Google Scholar] [CrossRef]
  73. Braganza, A.; Chen, W.; Canhoto, A.; Sap, S. Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust. J. Bus. Res. 2021, 131, 485–494. [Google Scholar] [CrossRef]
  74. Orellana, O. Exploration of Employee Attitudes in AI Adoption. Master’s Thesis, School of Management, University of Vaasa, Vaasa, Finland, 2025. Available online: https://urn.fi/URN:NBN:fi-fe202501318514 (accessed on 4 July 2025).
  75. Kelley, S. Employee perceptions of the effective adoption of AI principles. J. Bus. Ethics 2022, 178, 871–893. [Google Scholar] [CrossRef]
  76. Scarpello, V.; Campbell, J.P. Job satisfaction: Are all the parts there? Pers. Psychol. 1983, 36, 577–600. [Google Scholar] [CrossRef]
  77. Wanous, J.P.; Reichers, A.E.; Hudy, M.J. Overall job satisfaction: How good are single-item measures? J. Appl. Psychol. 1997, 82, 247–252. Available online: https://psycnet.apa.org/doi/10.1037/0021-9010.82.2.247 (accessed on 30 June 2025). [CrossRef]
  78. Campbell, D.T.; Fiske, D.W. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol. Bull. 1959, 56, 81–105. [Google Scholar] [CrossRef] [PubMed]
  79. Cremaschi, A.; Lee, D.J.; Leonelli, M. Understanding support for AI regulation: A Bayesian network perspective. Int. J. Eng. Bus. Manag. 2025, 17, 18479790251383310. [Google Scholar] [CrossRef]
  80. McClure, P.K. “You’re fired,” says the robot: The rise of automation in the workplace, technophobes, and fears of unemployment. J. Comput.-Mediat. Commun. 2018, 23, 145–162. [Google Scholar] [CrossRef]
  81. Roll, L.C. Employees’ Perceived Fear of Automation: Which Age Groups are Most Affected? European Union’s Horizon 2020 Research and Innovation Program; The Oxford Institute of Population Ageing, 2022. Available online: https://www.ageing.ox.ac.uk/blog/Employees-Perceived-Fear-of-Automation (accessed on 3 July 2025).
  82. Strzelecki, A.; ElArabawy, S. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: Comparative evidence from Poland and Egypt. Br. J. Educ. Technol. 2024, 55, 1209–1230. [Google Scholar] [CrossRef]
  83. Al-khresheh, M.H. Bridging technology and pedagogy from a global lens: Teachers’ perspectives on integrating ChatGPT in English language teaching. Comput. Educ. Artif. Intell. 2024, 6, 100218. [Google Scholar] [CrossRef]
  84. Vu, H.T.; Lim, J. Effects of country and individual factors on public acceptance of artificial intelligence and robotics technologies: A multilevel SEM analysis of 28-country survey data. Behav. Inf. Technol. 2022, 41, 1515–1528. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
