Article

Attitudes Toward Artificial Intelligence in Organizational Contexts

1 Department of Psychology of Development and Socialization Processes, Sapienza University of Rome, 00185 Rome, Italy
2 Department of Human and Social Sciences, Mercatorum University, 00186 Rome, Italy
3 Department of Political and Social Sciences, University of Cagliari, 09123 Cagliari, Italy
* Author to whom correspondence should be addressed.
AI 2025, 6(11), 292; https://doi.org/10.3390/ai6110292
Submission received: 22 September 2025 / Revised: 5 November 2025 / Accepted: 12 November 2025 / Published: 14 November 2025

Abstract

The adoption of artificial intelligence (AI) is reshaping organizational practices, yet workers’ attitudes remain crucial for its successful integration. This study examines how perceived organizational ethical culture, organizational innovativeness, and job performance influence workers’ attitudes towards AI. A survey was administered to 356 workers across diverse sectors, with analyses focusing on 154 participants who reported prior AI use. Measures included the Attitudes Towards Artificial Intelligence at Work (AAAW), Corporate Ethical Virtues (CEV), Inventory of Organizational Innovativeness (IOI), and an adapted version of the In-Role Behaviour Scale. Hierarchical regression analyses revealed that ethical culture dimensions, particularly Clarity and Feasibility, significantly predicted attitudes towards AI, such as anxiety and job insecurity, with Feasibility also associated with the attribution of human-like traits to AI. Supportability, reflecting a cooperative work environment, was linked to lower perceptions of AI human-likeness and adaptability. Among innovation dimensions, only Raising Projects, the active encouragement of employees’ ideas, was positively related to perceptions of AI adaptability, highlighting the importance of participatory innovation practices over abstract signals. Most importantly, perceived job performance improvements through AI predicted more positive attitudes, including greater perceived quality, utility, and reduced anxiety. Overall, this study contributes to the growing literature on AI in organizations by offering an exploratory yet integrative framework that captures the multifaceted nature of AI acceptance in the workplace.

1. Introduction

The integration of artificial intelligence (AI) into organizational contexts represents one of the most profound transformations in contemporary work environments. AI acts as a powerful driver of change in work dynamics, reshaping tasks and influencing interpersonal interactions, strategic decision-making, and the emotional and cognitive meanings workers attach to their work [1]. However, despite its potential, the diffusion of AI within organizations is often accompanied by skepticism and resistance [2,3]. Accurately understanding workers’ attitudes toward AI is crucial for understanding adoption dynamics [4].
Several studies in the field of human-AI interaction have employed traditional measures such as the Technology Acceptance Model [5], which assesses perceived usefulness and ease of use, thereby focusing narrowly on the technical and functional characteristics of the technology while neglecting the specific features of the systems under investigation. More recent instruments, including the General Attitudes Toward Artificial Intelligence Scale [6] and the Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA) Awareness Scale [7], aim to capture general orientations toward AI or perceptions of how smart technologies may influence workers’ career prospects. Yet, although these tools address specific technologies more directly, they often lack contextual sensitivity, overlooking the organizational settings in which attitudes are shaped.
The traditional conception of attitude formation assumes a dichotomy between “subjects”, who evaluate, and “objects”, which are judged as more or less acceptable. This view neglects the dynamic, interactive relationship between individuals and their environment [8]. As Lakoff [9] highlights, the categories through which people organize reality do not depend on objects per se, but on the ways in which individuals interact with them: how they perceive, represent, and structure information, and how they engage with it through embodied experience [10]. Accordingly, attitudes towards AI in organizations should be understood as socially situated phenomena, shaped both by individual perceptions and by responses to the attitudes and actions of others. They cannot, therefore, be examined in isolation or in purely abstract terms, but only in relation to the specific organizational and social context in which they arise.
From an organizational perspective, two contextual factors can be considered particularly relevant in this field. First, organizational innovativeness may represent a key resource in an era of rapid transformation, enabling organizations to adapt to volatile environments and maintain competitive advantage. Innovation orientation is not merely a general predisposition toward change but a precursor to the implementation of innovation, organizational performance, and economic growth. Van de Ven [11] defined innovation orientation as “the development and implementation of new ideas by people who engage in transactions with others within an institutional context over time”. In this perspective, innovation orientation reflects both the willingness of members to adopt innovations and the extent to which management prioritizes and supports innovation processes. As Siguaw and colleagues [12] argue, long-term organizational success depends less on individual innovations than on an overall innovation orientation, which creates enduring capabilities for generating new ideas. Despite its importance, research has often focused on product and process innovation, paying limited attention to innovation orientation at the organizational level itself, and to how workers’ perceptions of this orientation may positively influence their attitudes towards the use of innovative products.
Secondly, the ethical culture of organizations may play a decisive role in shaping workers’ perceptions and responses to AI. Corporate ethics refers to the laws, norms, codes, or principles that guide morally acceptable decision-making in business operations and relationships. At its core, it reflects integrity and fairness in interactions with workers and clients [13]. Given the sensitive issues raised by AI [14], such as transparency, explainability, and accountability, the ethical dimension of the organizational environment becomes a critical factor in determining whether employees perceive AI adoption as legitimate and trustworthy when promoted by the organization itself.
On the individual level, attitude formation can also be framed within consolidated psychological theories. Among these, Chaiken’s heuristic–systematic model (HSM; [15,16]) is particularly influential. The HSM posits two distinct routes of information processing: a heuristic route, based on cognitive shortcuts, and a systematic route, requiring greater effort and careful analysis. Since systematic processing is cognitively costly, individuals tend to favor the heuristic route whenever this does not jeopardize decision accuracy [17]. In this perspective, the HSM explains how prior behaviors and experiences, such as the perception of job performance supported by AI, may shape subsequent evaluations. Hence, workers’ past experiences with AI, whether positive or negative, can be decisive in shaping their attitudes.
Consequently, it clearly emerges that focusing exclusively on the technical features of AI systems is insufficient; equal attention must be devoted to organizational factors and individual perceptions associated with their use [18,19].
Our study focuses on three underexplored dimensions, integrating contextual and individual factors: workers’ perceptions of organizational ethical culture, workers’ perceptions of organizational innovativeness, and workers’ perceptions of their own job performance supported by the use of AI within their professional roles. To this aim, it becomes essential to examine how these organizational and individual factors shape workers’ attitudes toward AI and influence its acceptance and integration in the workplace. Specifically, the study aims to address the following research questions:
  • How do workers’ perceptions of organizational ethical culture relate to their attitudes toward the use of AI in their work?
  • How does workers’ perception of their organization’s innovativeness influence their openness to adopting and integrating AI technologies?
  • To what extent do workers’ perceptions of their own job performance in relation to AI use affect their overall attitudes toward these technologies?

Hypothesis Development

Based on the research questions and the overall objective of the present study, specific hypotheses were formulated to be tested through the analysis of the collected data.
These hypotheses aim to translate theoretical questions into testable relationships between observed variables, in order to identify the factors that may affect workers’ acceptance of AI in organizational settings. In particular, the following hypotheses were formulated (Figure 1):
H1. 
Perceived organizational ethical culture has a significant impact on workers’ attitudes toward the use of AI in organizational contexts.
H2. 
Perceived organizational innovativeness has a significant impact on workers’ attitudes toward the use of AI in organizational contexts.
H3. 
Perceived job performance related to the use of AI has a significant impact on workers’ attitudes toward the use of AI in organizational contexts.
Although firmly grounded in the literature, these constructs have not been directly connected to attitudes toward AI, underscoring the exploratory nature of the present study. For these reasons, the hypotheses were not formulated with a specific direction (e.g., positive or negative impact). Instead, the actual directions of the relationships were left to be determined by the data.

2. Measures and Methods

After selecting the most suitable scales based on the literature review and the research questions, a structured process was initiated to develop and administer the questionnaire. The first phase involved the translation and linguistic adaptation of some of the selected scales. Next, the overall structure of the questionnaire was defined and prepared. This phase was followed by the contact and recruitment of Italian companies and professional organizations operating in both the public and private sectors, across different industries and of varying sizes, characterized by diverse organizational modalities (on-site, hybrid, and remote). Specifically, organizations were selected based on their willingness to participate in the study and the presence of work arrangements that involved, at least in part, the use of technologies for communication and coordination. Contact with organizations was primarily established via email, which included a brief description of the study and a link to the online questionnaire hosted on the Qualtrics platform. Data collection took place throughout April 2025.

2.1. Participants

The final analytic sample comprised N = 356 participants who completed the survey; a further n = 46 individuals discontinued after providing informed consent and were excluded. The gender distribution was approximately balanced (n = 177 women, n = 173 men, n = 6 other), with a mean age of M = 42.37 years (SD = 11.36). Men were slightly older on average (M = 43.90, SD = 11.56) than women (M = 41.11, SD = 11.12).
Other sociodemographic variables were also explored. Concerning educational attainment, 41.8% of participants held a high school diploma, while 43.5% had completed a university degree or higher, indicating a relatively well-educated sample. Regarding occupational sectors, the most represented were Banking (18.7%), Information Technology (13.4%), and Electronic/Electromechanical (13.2%), whereas medium-represented sectors included Construction, Consulting, and Training. Role distribution was also diverse, including employees (41.8%), managers (12.2%), executives (5.5%), consultants (10.9%), and workers (10.2%). Regarding work modality, 3% of the sample worked fully remotely, 52% fully on-site, and 31.3% in hybrid arrangements.
Regarding AI use, 29.9% of participants reported using organization-provided AI tools, and 41.8% had experience with AI at work more generally. The most frequently used AI applications were aimed at automatic text generation, content production, translation, and fact-checking, whereas creative or specialized tools, such as music/video generation, social media management, recommendation systems, and reinforcement learning, were less commonly used. These results indicate that AI adoption remains partial and selective, with a clear preference for text- and information-focused applications.
Hypotheses were tested on a subsample of N = 154 participants who reported prior experience of AI use in their workplace. This subsample was necessary because only participants with AI experience could provide meaningful responses to items assessing perceived job performance in relation to AI, which were central to the study’s hypotheses.

2.2. Measures

To address the research questions, it was necessary to select tools capable of measuring the identified theoretical constructs. The selection process involved a careful review of the scientific literature, aiming to identify well-established, theoretically grounded scales that have already been applied in organizational contexts. Specifically, four scales were identified:
Attitudes Towards Artificial Intelligence at Work (AAAW), to assess workers’ attitudes toward AI in the workplace [1];
Corporate Ethical Virtues–Short Version (CEV), to measure workers’ perceptions of ethical culture within the organization [13];
Inventory of Organizational Innovativeness (IOI), to investigate workers’ perceived orientation toward change and innovation within the organization [20];
An adapted scale based on the In-Role Behavior Scale by Williams and Anderson [21], aimed at assessing workers’ perceived job performance related to AI use.
  • Attitudes Towards Artificial Intelligence at Work (AAAW)
The AAAW scale [1] assesses workers’ attitudes toward AI in organizational settings through 25 items across six dimensions:
  • Perceived humanlikeness of AI: the extent to which individuals attribute human-like characteristics (desires, emotions, beliefs, free will) to AI. Example items: “AI has desires”; “AI has the ability to experience emotion”.
  • Perceived adaptability of AI: the perception of AI’s ability to learn, improve, and adapt to the work context. Example items: “AI learns from experience at work”; “AI adapts itself over time at work”.
  • Perceived quality of AI: the perceived reliability, accuracy, completeness, and clarity of the information provided by AI. Example items: “AI produces correct information”; “The information from AI is always up to date”.
  • AI use anxiety: the level of discomfort or anxiety experienced when using AI at work. Example items: “Using AI for work is somewhat intimidating to me”; “I would feel uneasy if I were given a job where I had to use AI”.
  • Job insecurity: concerns that AI may replace one’s role, diminish career prospects, or make specific skills obsolete. Example items: “I am worried that what I can do now with my work skills will be replaced by AI”; “I think my job could be replaced by AI”.
  • Perceived personal utility of AI: the extent to which AI is perceived as a useful tool that enhances skills and improves the work experience. Example items: “Using AI would allow me to have increased confidence in my skills at work”; “Using AI would give me greater control over my work”.
Three dimensions (Perceived adaptability, Perceived quality, and Perceived personal utility) reflect functional aspects of AI, while the other three (AI use anxiety, Job insecurity, Perceived humanlikeness) capture socio-emotional aspects. From a practical perspective, the AAAW is useful for organizations to monitor the impact of AI implementation and to design interventions, such as training programs, aimed at reducing anxiety and insecurity while enhancing perceived utility. Compared to the other previously mentioned scales, this instrument captures both the specificity of the technology and the context of investigation. The AAAW dimensions demonstrated good to excellent reliability: Perceived Humanlikeness of AI (α = 0.77), Perceived Adaptability of AI (α = 0.90), Perceived Quality of AI (α = 0.82), AI Use Anxiety (α = 0.89), Job Insecurity (α = 0.90), and Perceived Personal Utility of AI (α = 0.86).
  • Corporate Ethical Virtues (CEV)-Short Version
To measure perceived organizational ethical culture, the Italian short version of the Corporate Ethical Virtues (CEV) model was used [13]. This version includes 24 items across eight dimensions (three items per dimension), each capturing a specific aspect of the ethical culture:
  • Clarity: workers’ perception of the clarity of organizational norms.
  • Feasibility: measuring the extent to which workers feel able to act consistently with their values, given time and resources.
  • Supportability: perception of an environment characterized by shared commitment, respect for rules, and positive interpersonal relationships.
  • Transparency: visibility of actions by supervisors and colleagues within the organization.
  • Discussability: perceived opportunity to openly discuss ethical issues, including reporting and correcting unethical behaviors.
  • Sanctionability: perception that unethical behaviors are punished and ethical behaviors are rewarded.
  • Congruency of supervisors: alignment between supervisors’ behaviors and expected ethical values.
  • Congruency of management: coherence between top management behaviors and the ethical principles promoted within the organization.
Its multidimensional structure allows for a detailed assessment of ethical culture while remaining practical for survey administration.
The CEV scale dimensions showed acceptable to strong internal consistency: Clarity (α = 0.76), Supportability (α = 0.75), Transparency (α = 0.70), Discussability (α = 0.72), Congruency of Supervisors (α = 0.84), and Congruency of Management (α = 0.80). Two subscales, Feasibility (α = 0.63) and Sanctionability (α = 0.60), fell slightly below the conventional threshold of 0.70, indicating moderate reliability, but were considered acceptable given the multidimensional nature of the scale and its prior validation in an Italian context, consistent with the present research.
  • Inventory of Organizational Innovativeness (IOI)
The Italian version of the IOI scale [20,22] was used to measure the perceived organizational innovativeness from workers’ perspectives, reflecting both individual openness to innovation and management support for innovative processes.
Originally, the IOI assessed nine dimensions at three levels:
Organizational level: managerial support, project initiation, and communication processes.
Interpersonal level: consultative leadership, teamwork integration and trust, peer support.
Individual level: knowledge and skills, task stimulation.
The original 44-item version was reduced in subsequent validations, resulting in a 36-item scale. For the purposes of this study, three key dimensions were selected for their theoretical relevance and practical feasibility:
  • Support: the extent to which the organization provides resources, training, and recognition to foster innovation and creativity.
  • Raising projects: organizational encouragement for workers to propose ideas and engage in innovative initiatives.
  • Summary assessment items: workers’ overall perception of organizational effectiveness and innovativeness.
Selecting only these three dimensions allowed the survey to remain manageable while capturing the core aspects of an innovative organizational climate.
The IOI scale also demonstrated good reliability: Support (α = 0.89), Raising Projects (α = 0.87), and Summary Assessment of General Organizational Effectiveness and Innovativeness (α = 0.82).
  • In-Role Behaviour (IRB) Scale-Adapted Version
Finally, perceived job performance in relation to AI was measured using items adapted from Williams and Anderson (1991) [21], focusing specifically on in-role behaviors (IRB), i.e., the completion of formally expected job tasks.
The scale includes four items, modified to explicitly reference AI (e.g., “Thanks to AI, I adequately complete my assigned tasks”; see Appendix A).
This approach allowed for a precise assessment of perceived work effectiveness enhanced by AI. The perceived job performance scale, adapted from the In-Role Behavior Scale and comprising four items, showed excellent reliability (α = 0.94).
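The α coefficients reported for all the scales above are Cronbach’s alpha values. As an illustration of how such a coefficient is computed, a minimal numpy sketch follows; the item scores are synthetic and hypothetical, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic 4-item scale: each item = common true score + noise
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=0.5, size=(200, 4))
print(round(cronbach_alpha(items), 2))
```

Because the four synthetic items share a common true score, the resulting alpha is high; unrelated items would drive it toward zero.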

2.3. Socio-Demographic Variables

In addition to the variables directly related to the research hypotheses, the questionnaire included several socio-demographic variables to better contextualize participants’ responses. These covered a range of sociodemographic characteristics (such as gender, age, and educational level) and professional information (including organizational sector, job role). Finally, the questionnaire also explored participants’ prior exposure to AI tools in the workplace, both in terms of organizationally provided solutions and individually adopted technologies.

2.4. Data Analysis

Before hypothesis testing, data were screened for completeness and suitability for regression analyses. Descriptive statistics were computed for all variables, and reliability analyses were conducted for multi-item scales to ensure internal consistency. Independent variables included dimensions of perceived organizational ethical culture, perceived organizational innovativeness, and perceived job performance related to AI, while dependent variables were the dimensions of the AAAW Scale.
Hypotheses were tested using hierarchical multiple regression analyses, with each AAAW dimension entered as a separate dependent variable. Predictor variables were entered in blocks corresponding to organizational ethical culture, organizational innovativeness, and perceived job performance. This allowed examination of the unique contribution of each predictor while controlling for the others. Given the exploratory nature of the study, all tests were two-tailed, and only statistically significant effects (p < 0.05) were interpreted.
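The block-entry logic of hierarchical regression can be sketched as follows: predictors are added one block at a time, and the increment in R² at each step indexes the unique contribution of the newly entered block. The sketch below uses synthetic, hypothetical variables (not the study’s data) and a plain least-squares fit.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit of y on X (intercept included)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(42)
n = 154  # size of the analytic subsample

# Hypothetical standardized predictors, grouped into the three blocks
ethical = rng.normal(size=(n, 2))  # e.g., Clarity, Feasibility
innov = rng.normal(size=(n, 1))    # e.g., Raising Projects
perf = rng.normal(size=(n, 1))     # perceived job performance
# Hypothetical outcome: one AAAW dimension (e.g., AI Use Anxiety)
y = (-0.3 * ethical[:, 0] + 0.4 * ethical[:, 1]
     - 0.2 * perf[:, 0] + rng.normal(scale=0.8, size=n))

X, r2_prev = np.empty((n, 0)), 0.0
for name, block in [("ethical culture", ethical),
                    ("innovativeness", innov),
                    ("job performance", perf)]:
    X = np.hstack([X, block])   # enter the next block of predictors
    r2 = r_squared(X, y)
    print(f"+ {name}: R^2 = {r2:.3f} (Delta R^2 = {r2 - r2_prev:.3f})")
    r2_prev = r2
```

Since the innovativeness predictor is unrelated to the synthetic outcome, its ΔR² is near zero, while the other two blocks add explanatory power, mirroring how block increments are read in the analyses reported below.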

3. Results

The results, aligned with the three hypotheses, are presented below. Table 1 summarizes the hierarchical regression analyses using the AI Use Anxiety dimension of the AAAW scale as the dependent variable.

3.1. Hypothesis 1 (H1)-Effects of Perceived Organizational Ethical Culture

H1 proposed that workers’ perceptions of their Organizational Ethical Culture would influence their attitudes toward AI use. The results partially supported this hypothesis, revealing that specific dimensions of ethical culture significantly relate to workers’ attitudes. In particular, higher Clarity, reflecting clear norms and procedures, was associated with lower AI Use Anxiety (β = –0.290, p = 0.012) and lower Job Insecurity related to AI (β = –0.260, p = 0.028). This indicates that when organizational norms are clear, workers feel more confident and less threatened by AI integration in the organizational context. Similarly, the dimension Supportability, which captures perceptions of a cooperative and goal-aligned work environment, was negatively associated with both Perceived Humanlikeness of AI (β = –0.330, p = 0.001) and Perceived Adaptability of AI (β = –0.255, p = 0.017), suggesting that in environments with high social support and rule alignment, workers are less likely to anthropomorphize AI or perceive it as highly adaptive. Finally, Feasibility, representing perceived possibility of acting in line with personal ethical values, was positively associated with AI Use Anxiety (β = 0.406, p < 0.001), Job Insecurity (β = 0.423, p < 0.001), and Perceived Humanlikeness of AI (β = 0.190, p = 0.050). Figure 2 illustrates the scatter plots showing the significant relationships between the dimensions of perceived organizational ethical culture and attitudes toward AI. In other words, when workers perceive ethical conflicts between their personal values and those promoted in their workplace, they tend to experience greater anxiety toward AI, feel a stronger threat of being replaced in their role, and are more likely to attribute human-like characteristics to AI. Overall, these findings indicate that ethical organizational culture exerts a significant, albeit dimension-specific, influence on the formation of workers’ attitudes toward AI.
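The β values reported throughout the results are standardized regression coefficients, i.e., the slopes obtained after z-scoring both predictors and outcome, so that effects are expressed in standard-deviation units. A minimal numpy sketch, using synthetic data with a built-in effect of about –0.3 (hypothetical, not the study’s data):

```python
import numpy as np

def standardized_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS coefficients after z-scoring each predictor and the outcome."""
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    Xc = np.column_stack([np.ones(len(y)), z(X)])
    beta, *_ = np.linalg.lstsq(Xc, z(y), rcond=None)
    return beta[1:]  # drop the intercept, which is ~0 after standardization

# Synthetic example: one predictor with a true standardized effect near -0.3
rng = np.random.default_rng(1)
n = 500
clarity = rng.normal(size=(n, 1))   # hypothetical "Clarity" scores
anxiety = -0.3 * clarity[:, 0] + rng.normal(scale=0.95, size=n)
print(standardized_betas(clarity, anxiety))
```

The recovered coefficient is close to –0.3, matching the scale on which the β values in Table 1 are reported.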

3.2. Hypothesis 2 (H2)-Effects of Perceived Organizational Innovativeness

H2 predicted that workers’ perceptions of Organizational Innovativeness would positively influence attitudes toward AI. The findings provided only partial support for this hypothesis. Among the dimensions of Organizational Innovativeness, only Raising Projects, reflecting active support for proposing new ideas and initiatives, was significantly associated with Perceived Adaptability of AI (β = 0.401, p = 0.004) (Figure 3). This indicates that in organizations that actively encourage innovation, workers perceive AI as more capable of learning, improving, and adapting to changing work requirements. Other dimensions of Perceived Organizational Innovativeness did not show significant associations with any attitudes toward AI, suggesting that, among the various facets of organizational innovativeness, workers’ perception of active engagement in innovation plays a more substantial role in shaping positive attitudes toward AI than more abstract or generalized views of organizational innovativeness.

3.3. Hypothesis 3 (H3)-Effects of Perceived Job Performance

H3 proposed that workers’ perceptions of their own Job Performance, in relation to AI use, would influence attitudes toward AI. The results strongly supported this hypothesis. Higher Perceived Job Performance was positively associated with Perceived Quality of AI (β = 0.240, p = 0.006; Table 1) and Perceived Personal Utility of AI (β = 0.470, p < 0.001), indicating that workers who experience improvements in their own effectiveness tend to perceive AI more favorably, attributing higher quality and utility to it. Moreover, Perceived Job Performance was negatively associated with AI Use Anxiety (β = –0.183, p = 0.019), showing that higher self-perceived effectiveness in AI-supported tasks reduces anxiety associated with its use. Figure 4 illustrates the scatter plots showing the significant relationships between perceived job performance and attitudes toward AI. These results suggest that workers’ past direct experiences of performance gains from AI play a central role in shaping positive attitudes towards AI.

4. Discussion

This study examined the factors influencing workers’ attitudes toward AI in organizational contexts, with a focus on perceived organizational ethical culture, perceived organizational innovativeness, and perceived job performance in relation to AI use. The findings underscore the multidimensional nature of AI acceptance, reinforcing prior claims that workplace attitudes toward innovative technologies cannot be explored in isolation, but only in relation to the specific organizational and social context in which they emerge [10,18,19].
A first relevant contribution concerns perceived organizational ethical culture, which emerged as a decisive factor in shaping how workers perceive and accept AI. In particular, Clarity, the perception of organizational procedures as transparent and definite, was associated with lower levels of AI-related anxiety and job insecurity in workers. This suggests that transparent ethical frameworks help reduce the uncertainty and perceived threat of AI integration. Moreover, Supportability, understood as a cooperative and respectful work environment, appears to be linked to lower perceptions of AI’s adaptability and learning capacity. This may indicate that, in an organizational climate seen as stable and supportive, workers rely less on technology to provide assistance. Consistently, Supportability was also negatively associated with the humanization of AI, suggesting that when interpersonal support is already considered abundant among workers, there is little need to attribute human-like supportive roles to AI. Conversely, perceptions of Feasibility, reflecting the possibility of acting in line with personal ethical values, were linked to heightened anxiety and job insecurity, as well as to stronger attributions of human-like qualities to AI. This finding suggests that in contexts perceived as ethically incoherent, workers not only experience higher levels of discomfort and insecurity but also tend to project human characteristics onto AI, thereby further complicating the human–technology relationship. This latter aspect is complex to interpret and may be related to the tendency of workers to seek compensatory mechanisms in the face of ethical incoherence; however, it requires further investigation. Previous research has widely investigated the ethical dimension of AI, primarily focusing on individual-level perceptions of AI-related ethics [2,23].
Such studies often highlight concerns such as the risk of discrimination, where AI may reproduce or amplify biases present in data or system design, and fears related to job displacement, including potential unemployment and limited alternatives for those affected. The main contribution of the present study is to shift the focus from the individual to the organizational level, examining the interplay between organizational ethical culture and the formation of positive or negative attitudes toward AI. This perspective allows for a better understanding of how ethical challenges, traditionally studied at the individual level, may be mitigated or amplified by organizational conditions.
A second contribution relates to organizational innovativeness. As hypothesized, its influence was more selective than general: among the different explored dimensions, only Raising Projects, the active encouragement of workers’ ideas, was positively associated with perceptions of AI adaptability. This indicates that the perception of broad, abstract signals of organizational innovativeness is insufficient to shape workers’ perceptions. This highlights the importance of active engagement in innovation as a mechanism through which organizational innovativeness is perceived, and, in turn, translates into partially favorable attitudes toward innovative technological adoption. Although this relationship warrants further investigation, the present study provides an initial insight into this area. Previous research on organizational innovativeness and AI adoption in the workplace has primarily focused on the aspects of organizational readiness [24,25] or the level of organizational digital transformation [26], emphasizing not only technological infrastructure and data structures, but also the skills, expertise, and processes of human resources necessary to leverage emerging technologies and facilitate AI adoption. In contrast, our study examines organizational innovation orientation in a broader sense, beyond purely technological considerations. Preliminary results indicate a complex and not yet fully interpretable relationship between general innovation orientation and AI attitudes, underscoring the need for further research with larger samples to clarify these dynamics.
While previous studies have examined this relationship by treating attitudes toward AI as predictors of improved performance [27], the present study takes the opposite perspective: it investigates whether perceived performance influences attitudes, in line with the HSM [15], which posits that prior experiences play a key role in shaping subsequent opinions and evaluations. Consistent with this model, workers who experienced enhanced performance through AI reported not only lower anxiety about using AI but also greater perceptions of quality, usefulness, and even human-like traits in the technology. This underscores the importance of direct, successful experiences in promoting favorable attitudes and AI acceptance. When workers perceive AI as enhancing their effectiveness, they are more likely to regard the technology as valuable and useful, thereby increasing appreciation and potentially fostering trust while reducing resistance.

5. Conclusions

This study demonstrates that attitudes toward AI in organizations are socially situated and context-dependent, emerging from the interaction of organizational culture, innovation practices, and individual experiences rather than from technological features alone. By adopting a multidimensional perspective, the findings highlight both the enablers that foster trust and openness toward AI (e.g., organizational ethical clarity, supportive climates, and participatory innovation) and the barriers that heighten anxiety or resistance (e.g., organizational ethical incoherence with personal values or negative prior experiences with AI). Importantly, the contribution of this work is to provide a comprehensive account of workers’ perceptions, mapping the conditions under which AI is viewed as useful, trustworthy, or threatening. Organizational factors such as transparent procedures and participatory innovation initiatives represent actionable levers for shaping workers’ attitudes. At the same time, individual experiences with AI leave room for user experience design and service design [28,29,30] to foster positive interactions with the technology, complementing the technical features that ensure its effectiveness.
Theoretically, this study integrates organizational-level factors with individual perceptions, moving beyond traditional models such as TAM. Practically, it offers guidance for managers and policymakers: investing in transparent ethical procedures, participatory innovation, and genuinely performance-enhancing AI tools can foster smoother integration. Looking forward, these findings also speak to the future of work, suggesting that organizations cultivating ethical integrity, innovation, and meaningful human–AI collaboration will be better positioned to ensure both effectiveness and worker well-being [31,32].
Ultimately, this study enriches the growing literature on AI use in organizations by offering an exploratory yet integrative framework that captures the multifaceted nature of AI acceptance.

Limitations and Future Directions

Several limitations should be acknowledged. First, the exclusive reliance on self-report measures may introduce social desirability bias; complementary qualitative approaches, such as interviews or focus groups, could provide deeper insights into AI-related experiences. Second, although the study surveyed 356 workers, only 154 participants reported prior AI use at work. This proportion is consistent with broader adoption patterns [33]: globally, only 42% of enterprise businesses with more than 1000 employees actively use AI. Consequently, the share of participants with prior workplace AI experience aligns with real-world organizational adoption trends. Potential biases should also be considered. Because participation was voluntary, the sample may overrepresent individuals with greater interest in, or comfort with, AI. Although company managers were asked to distribute the survey widely to reach a broad range of employees, individual motivation could not be fully controlled; future research could include measures of curiosity or prior interest in AI to better understand their influence on attitudes. Future studies should also examine descriptive differences within the sample in greater depth, analyzing variations across sectors, nationalities, professional roles, and generational cohorts, as well as their impact on AI acceptance. Third, regarding the CEV scale, two subscales, Feasibility and Sanctionability, showed Cronbach’s alphas slightly below the conventional threshold of 0.70 (0.63 and 0.60, respectively). This limits the reliability of these two dimensions, although the reliability analysis of the full scale indicated that the version of the CEV used here has high values of both Cronbach’s alpha and McDonald’s omega.
Given both the multidimensional nature of the scale and its prior validation in an Italian context, as in the present research, the authors decided to retain the scores from these subdimensions; future research should nevertheless examine these dimensions in different samples and contexts. Additionally, distinguishing between different types of AI technologies and considering the nature of human–machine interactions could further clarify how AI shapes work experiences and employee attitudes. Finally, longitudinal research designs would allow changes in attitudes to be tracked over time and help uncover potential causal relationships between organizational practices, individual experiences, and the formation of attitudes toward AI.
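For readers unfamiliar with the reliability statistic discussed above, Cronbach’s alpha can be computed directly from item-level responses as the ratio of shared to total scale variance. A minimal sketch follows; the toy score matrices are illustrative and are not the study’s data (McDonald’s omega, which requires factor loadings, is not shown):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (each a list of item scores)."""
    k = len(items[0])
    cols = list(zip(*items))                           # item-wise columns
    item_var = sum(variance(col) for col in cols)      # sum of per-item sample variances
    total_var = variance([sum(row) for row in items])  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Perfectly consistent two-item toy scale: every respondent answers both items identically.
print(round(cronbach_alpha([[1, 1], [2, 2], [3, 3]]), 2))          # 1.0
# Partially consistent toy scale: items covary imperfectly, so alpha drops below 1.
print(round(cronbach_alpha([[1, 2], [2, 1], [3, 4], [4, 3]]), 2))  # 0.75
```

A value of 0.70 is the conventional adequacy threshold referred to above; note that alpha also depends on the number of items, which is one reason short subscales such as Feasibility and Sanctionability can fall slightly below it.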

Author Contributions

Writing—original draft, Writing—review and editing, Conceptualization, S.M.; Writing—review and editing, Conceptualization, D.B.; Writing—review and editing, Conceptualization, B.B.; Writing—review and editing, Methodology, F.P.; Writing—review and editing, E.G.; Conceptualization, Writing—review and editing, Supervision, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Cagliari (Research ID: 0262499, approved on 19 September 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors wish to thank all the participants for their valuable contributions to this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. In-Role Behaviour Scale (adapted version). Note: 7-point Likert scale (1 = strongly disagree, 7 = strongly agree).

| N | Original Item | Italian Translation (Adapted Version) |
|---|---|---|
| 1 | Adequately completes assigned duties | Grazie all’IA, completo adeguatamente i compiti assegnati |
| 2 | Fulfills responsibilities specified in job description | Grazie al supporto dell’IA, adempio alle responsabilità specificate nella descrizione del mio lavoro |
| 3 | Performs tasks that are expected of him/her | Grazie all’IA, svolgo i compiti previsti per il mio ruolo |
| 4 | Meets formal performance requirements of the job | Per mezzo dell’IA, soddisfo i requisiti formali di performance del mio lavoro |

References

  1. Park, J.; Woo, S.E.; Kim, J. Attitudes towards artificial intelligence at work: Scale development and validation. J. Occup. Organ. Psychol. 2024, 97, 920–951. [Google Scholar] [CrossRef]
  2. Mahmud, H.; Islam, A.K.M.N.; Mitra, R.K. What Drives Managers Towards Algorithm Aversion and How to Overcome It? Mitigating the Impact of Innovation Resistance through Technology Readiness. Technol. Forecast. Soc. Change 2023, 193, 122641. [Google Scholar] [CrossRef]
  3. Marocco, S.; Barbieri, B.; Talamo, A. Exploring facilitators and barriers to managers’ adoption of AI-based systems in decision making: A systematic review. AI 2024, 5, 2538–2567. [Google Scholar] [CrossRef]
  4. Fishbein, M.; Ajzen, I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research; Addison-Wesley: Reading, MA, USA, 1975. [Google Scholar]
  5. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  6. Schepman, A.; Rodway, P. Initial validation of the General Attitudes Toward Artificial Intelligence Scale. Comput. Hum. Behav. Rep. 2020, 1, 100014. [Google Scholar] [CrossRef]
  7. Brougham, D.; Haar, J. Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees’ perceptions of our future workplace. J. Manag. Org. 2018, 24, 239–257. [Google Scholar] [CrossRef]
  8. Mantovani, G. Virtual reality as a communication environment: Consensual hallucination, fiction, and possible selves. Human Relat. 1995, 48, 669–683. [Google Scholar] [CrossRef]
  9. Lakoff, G. Cognitive models and prototype theory. In Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization; Neisser, U., Ed.; Cambridge University Press: Cambridge, UK, 1987; pp. 63–100. [Google Scholar]
  10. Mantovani, G. Manuale di Psicologia Sociale; Giunti Editore: Florence, Italy, 2003. [Google Scholar]
  11. Van de Ven, A.H. Central problems in the management of innovation. Manag. Sci. 1986, 32, 590–607. [Google Scholar] [CrossRef]
  12. Siguaw, J.A.; Simpson, P.M.; Enz, C.A. Conceptualizing innovation orientation: A framework for study and integration of innovation research. J. Prod. Innov. Manag. 2006, 23, 556–574. [Google Scholar] [CrossRef]
  13. Tannorella, B.C.; Santoro, P.E.; Moscato, U.; Gualano, M.; Buccico, R.; Rossi, M.; Amantea, C.; Daniele, A.; Perrotta, A.; Borrelli, I. The assessment of the ethical organizational culture: Validation of an Italian short version of the Corporate Ethical Virtues Model–based questionnaire. J. Clin. Res. Bioeth. 2022, 13, 1000419. [Google Scholar]
  14. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  15. Chaiken, S. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J. Pers. Soc. Psychol. 1980, 39, 752–766. [Google Scholar] [CrossRef]
  16. Chaiken, S. The heuristic model of persuasion. In Social Influence: The Ontario Symposium; Zanna, M.P., Olson, J.M., Herman, C.P., Eds.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1987; Volume 5, pp. 3–39. [Google Scholar]
  17. Mannetti, L. Psicologia Sociale; Il Mulino: Bologna, Italy, 2002. [Google Scholar]
  18. Kaplan, A.; Haenlein, M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 2019, 62, 15–25. [Google Scholar] [CrossRef]
  19. Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial intelligence for decision making in the era of big data: Evolution, challenges, and research agenda. Int. J. Inf. Manag. 2019, 48, 63–71. [Google Scholar] [CrossRef]
  20. Farnese, M.L.; Fida, R. Premises for innovation: Italian validation and dimensionality of the Inventory of Organizational Innovativeness (IOI). BPA—Appl. Psychol. Bull. Boll. Psic. Appl. 2016, 64, 3–18. [Google Scholar]
  21. Williams, L.J.; Anderson, S.E. Job satisfaction and organizational commitment as predictors of organizational citizenship and in-role behaviors. J. Manag. 1991, 17, 601–617. [Google Scholar] [CrossRef]
  22. Tang, H.K. An inventory of organizational innovativeness. Technovation 1998, 19, 41–51. [Google Scholar] [CrossRef]
  23. Booyse, D.; Scheepers, C.B. Barriers to adopting automated organizational decision-making through the use of artificial intelligence. Manag. Res. Rev. 2024, 47, 64–85. [Google Scholar] [CrossRef]
  24. Phuoc, N.V. The Critical Factors Impacting Artificial Intelligence Applications Adoption in Vietnam: A Structural Equation Modeling Analysis. Economies 2022, 10, 129. [Google Scholar] [CrossRef]
  25. Lada, S.; Chekima, B.; Karim, M.R.A.; Fabeil, N.F.; Ayub, M.S.; Amirul, S.M.; Ansar, R.; Bouteraa, M.; Fook, L.M.; Zaki, H.O. Determining factors related to artificial intelligence (AI) adoption among Malaysia’s small and medium-sized businesses. J. Open Innov. Technol. Mark. Complex. 2023, 9, 100144. [Google Scholar] [CrossRef]
  26. Rodríguez-Espíndola, O.; Chowdhury, S.; Dey, P.K.; Albores, P.; Emrouznejad, A. Analysis of the Adoption of Emergent Technologies for Risk Management in the Era of Digital Manufacturing. Technol. Forecast. Soc. Chang. 2022, 178, 21562. [Google Scholar] [CrossRef]
  27. Morales-García, W.C.; Sairitupa-Sanchez, L.Z.; Flores-Paredes, A.; Morales-García, M.; Gutierrez-Caballero, F.N. Influence of Attitude toward Artificial Intelligence (AI) on Job Performance with AI in Nurses. Data Metadata 2025, 4, 221. [Google Scholar] [CrossRef]
  28. Marocco, S.; Marini, M.; Talamo, A. Enhancing Organizational Processes for Service Innovation: Strategic Organizational Counseling and Organizational Network Analysis. Front. Res. Metr. Anal. 2024, 9, 1270501. [Google Scholar] [CrossRef]
  29. Marocco, S.; Talamo, A.; Quintiliani, F. Applying design thinking to develop AI-based multi-actor decision-support systems: A case study on human capital investments. Appl. Sci. 2024, 14, 5613. [Google Scholar] [CrossRef]
  30. Marocco, S.; Talamo, A.; Quintiliani, F. From service design thinking to the third generation of activity theory: A new model for designing AI-based decision-support systems. Front. Artif. Intell. 2024, 7, 1303691. [Google Scholar] [CrossRef]
  31. Daly, S.J.; Wiewiora, A.; Hearn, G. Shifting attitudes and trust in AI: Influences on organizational AI adoption. Technol. Forecast. Soc. Change 2025, 215, 124108. [Google Scholar] [CrossRef]
  32. Soulami, M.; Benchekroun, S.; Galiulina, A. Exploring how AI adoption in the workplace affects employees: A bibliometric and systematic review. Front. Artif. Intell. 2024, 7, 1473872. [Google Scholar] [CrossRef] [PubMed]
  33. IBM; Statista. How Many Companies Use AI? (New 2025 Data). Available online: https://explodingtopics.com/blog/companies-using-ai (accessed on 24 October 2025).
Figure 1. Conceptual framework illustrating the hypotheses.
Figure 2. Scatter plots illustrating the relationships between dimensions of Perceived Organizational Ethical Culture and Attitudes Toward Artificial Intelligence.
Figure 3. Scatter plot illustrating the relationship between Perceived Organizational Innovativeness and Attitudes Toward Artificial Intelligence.
Figure 4. Scatter plots illustrating the relationship between Perceived Job Performance and Attitudes Toward Artificial Intelligence.
Table 1. Hierarchical multiple linear regression models with the dimensions of the AAAW scale as dependent variables. Cells report standardized coefficients β with p-values in parentheses.

| Independent Variables | AI Use Anxiety | Job Insecurity | Perceived AI Humanlikeness | Perceived AI Adaptability | Perceived AI Quality | Perceived AI Personal Utility |
|---|---|---|---|---|---|---|
| Age | 0.104 (0.174) | 0.030 (0.701) | −0.093 (0.245) | 0.034 (0.688) | 0.058 (0.494) | 0.070 (0.363) |
| Gender | −0.110 (0.137) | −0.179 (0.020) | −0.189 (0.016) | 0.053 (0.523) | 0.047 (0.568) | 0.223 (0.003) |
| **CEV** | | | | | | |
| Clarity | −0.290 (0.012) | −0.260 (0.028) | −0.043 (0.719) | 0.028 (0.825) | 0.173 (0.171) | −0.014 (0.904) |
| Feasibility | 0.406 (<0.001) | 0.423 (<0.001) | 0.190 (0.050) | −0.113 (0.273) | 0.188 (0.067) | 0.114 (0.222) |
| Supportability | −0.069 (0.463) | −0.001 (0.988) | −0.330 (0.001) | −0.255 (0.017) | −0.047 (0.655) | −0.119 (0.211) |
| Transparency | −0.033 (0.713) | 0.039 (0.672) | 0.013 (0.893) | −0.008 (0.934) | 0.167 (0.094) | 0.025 (0.783) |
| Discussability | 0.015 (0.906) | 0.028 (0.838) | −0.068 (0.616) | 0.178 (0.223) | 0.029 (0.839) | 0.041 (0.752) |
| Sanctionability | 0.147 (0.197) | −0.012 (0.917) | 0.137 (0.251) | −0.127 (0.318) | −0.175 (0.167) | −0.025 (0.828) |
| Congruency of supervisors | −0.071 (0.505) | 0.205 (0.064) | 0.033 (0.765) | −0.048 (0.690) | 0.060 (0.610) | 0.030 (0.779) |
| Congruency of management | 0.098 (0.473) | 0.118 (0.401) | 0.182 (0.202) | −0.011 (0.943) | 0.133 (0.378) | 0.270 (0.051) |
| **IOI** | | | | | | |
| Support | 0.159 (0.309) | −0.099 (0.541) | −0.138 (0.401) | −0.253 (0.148) | −0.079 (0.646) | 0.008 (0.958) |
| Raising projects | −0.116 (0.342) | −0.020 (0.874) | −0.096 (0.453) | 0.401 (0.004) | 0.039 (0.770) | −0.094 (0.446) |
| Organizational effectiveness/innovativeness | 0.160 (0.263) | 0.048 (0.743) | 0.130 (0.384) | −0.081 (0.614) | −0.127 (0.423) | −0.087 (0.544) |
| **IRB** | | | | | | |
| Job performance | −0.183 (0.019) | −0.067 (0.401) | 0.157 (0.053) | 0.110 (0.203) | 0.240 (0.006) | 0.470 (<0.001) |
| R² | 0.299 | 0.253 | 0.230 | 0.124 | 0.139 | 0.287 |
| Adj. R² | 0.229 | 0.177 | 0.152 | 0.036 | 0.052 | 0.215 |
| F | 0.059 | 0.779 | 0.222 | 0.041 | 0.072 | <0.001 |