1. Introduction
The dynamic development of digital technologies and the growing importance of artificial intelligence in the operations of organizations mean that trust in AI algorithms can be considered one of the key challenges of modern management [1,2]. With the increasing automation of decision-making processes, there is a need to understand how employees and stakeholders assess the trustworthiness, transparency, and accountability of systems based on intelligent algorithms [3,4]. At the same time, organizations operate under growing pressure to conduct business in accordance with the principles of sustainable development [5], which requires combining technological innovation with concern for social and ethical factors. The relationship between the use of AI and the building of trust in the organizational environment is therefore of particular importance [6,7]. The level of acceptance of the technology depends both on the effectiveness of implementations and on the ability to realize sustainability assumptions. In the existing literature, there is a persistent gap in the empirical understanding of the factors determining trust in AI in organizational practice [8,9]. This is especially evident with regard to the role of transparency, fears of algorithmic errors, and the impact of organizational culture [10,11]. Filling this gap justifies taking up this topic.
In this article, the analysis focuses on trust in artificial intelligence algorithms from an organizational perspective, while simultaneously taking into account the context of sustainable development. It is assumed that trust in AI is a complex construct shaped by the coexistence of technological, organizational, and cognitive factors. In particular, the study examines the role of perceived reliability of algorithm-generated outcomes, the transparency and comprehensibility of algorithmic mechanisms, the perceived effectiveness of AI applications, as well as concerns related to the risk of errors, bias, and the decision-making autonomy of systems. The study is empirical and exploratory in nature. A quantitative survey-based approach was applied among organizational employees, which enabled the identification of relationships between key variables describing trust in AI in the work environment. The obtained results cannot be treated as a basis for formulating universal generalizations; however, they allow for the identification of dominant perceptual tendencies and potential mechanisms shaping trust within the examined organizational context. On this basis, a conceptual model of trust in AI was proposed, which serves a structuring and diagnostic function rather than a predictive one. The structure of the article is aligned with these assumptions. First, the theoretical foundations of sustainable organizational development and the existing findings on trust in artificial intelligence algorithms are presented. Next, the methodology of the empirical research is described, including the characteristics of the sample and the applied data analysis methods. The subsequent section presents the research results and their interpretation in relation to the adopted assumptions. The article concludes with a discussion and conclusions, including the limitations of the study and directions for future research.
The aim of the article is to identify and analyze the determinants of organizational trust in artificial intelligence systems and to develop a model describing the relationships between key variables conditioning the acceptance of algorithms. The study focused on assessing the perceived reliability of AI-generated outcomes, the transparency and comprehensibility of the mechanisms of operation, the effectiveness of the technology, concerns about errors, and the acceptance of the autonomy of decision-making systems. The aim was to show how trust in AI fits into the broader context of the organization’s sustainable development and the importance of algorithm-based technologies for the stability and responsibility of modern enterprises. The added value of the work results from the combination of technological and organizational perspectives, based on empirical data. This allows for a more complete understanding of how trust is formed and its determinants.
The structure of the article follows from these aims. The theoretical part explains the key determinants of the sustainable development of an organization and then presents the most important concepts regarding trust in artificial intelligence algorithms. The next part presents the research methodology, including the design of the research instrument, the characteristics of the sample, and the adopted methods of data analysis. Subsequently, the results of the empirical research on the perception of AI in organizations are presented, along with an interpretation of the main relationships. The last part of the article contains a discussion of the results, conclusions, practical recommendations, and directions for further research. This structure makes it possible to approach the issue from multiple angles and provides a basis for further reflection on the role of trust in the processes of implementing artificial intelligence in organizations.
2. Literature Review
Determinants of sustainable development of an organization
The sustainable development of an organization is seen as the result of structural, cultural, and institutional factors working together [12,13]. These factors determine the ability of entities to achieve long-term goals. The literature emphasizes that a properly adopted strategic orientation and appropriate organization of processes strengthen readiness to implement sustainable development practices [14]. In turn, the quality of organizational culture and the internal consistency of activities are conducive to long-term stability [15]. It can also be noted that organizational competencies, management maturity, and the integration of sustainability principles with business practices are crucial for the effective implementation of the Sustainable Development Goals (SDGs) [16,17,18,19].
The literature on the subject also consistently underscores the role of innovation and technological modernization as main determinants of sustainable development. Innovation processes enable organizations to adapt to environmental changes and improve operational efficiency [20]. Similarly, technological innovations in resource-intensive sectors, such as the agri-food sector, significantly increase environmental efficiency and competitiveness [21]. The agri-food sector serves as an illustrative example due to its high consumption of natural resources and significant environmental footprint. As a result, technological innovations in this sector lead to especially visible and quantifiable improvements in environmental efficiency and competitive performance compared with less resource-intensive industries.
In turn, the capacity for convergence and adaptation is an important factor conducive to organizational development [22]. The literature also draws attention to the importance of socio-economic conditions, which shape the possibilities of achieving global sustainable development goals [23] and affect the development of industrial sectors [24]. Environmental, Social and Governance (ESG) analyses are an important complement to this perspective, emphasizing that sustainability strategies are based on the integration of environmental, social, and governance factors [25].
Social and educational factors constitute another group of determinants important for sustainable development at the organizational level. It is emphasized that the pro-environmental behavior of employees depends on motivation, environmental awareness, and support from organizational policies [26]. Education, both formal and within the organization, strengthens the competencies needed to implement change and build a culture of responsibility [27]. In addition, the socio-political environment affects the way human capital is managed and shapes the ability of organizations to respond to challenges related to the implementation of sustainable development principles, although the scope and nature of these challenges may vary across sectors [28].
The identified determinants can be considered broadly applicable across sectors; however, their specific configuration and impact may vary depending on sector-specific characteristics, which should be taken into account when interpreting the proposed model.
The problem of trust in AI algorithms
The issue of trust in artificial intelligence algorithms has become an important topic in research on digital technologies in recent years. It is strongly related to the transparency of systems, their explainability, and the ability to understand the logic of decision-making [29,30]. The scientific literature notes that explainability strengthens user trust and constitutes an important foundation for assessing the reliability of algorithms [31,32]. The importance of transparency in the operation of systems is clearly evident not only in medicine but also in other sensitive sectors where algorithmic decisions have significant social, ethical, or safety implications. In these areas, the lack of clarity about how technological decisions are made limits the readiness to use AI in practice [33,34]. It is also indicated that skepticism towards “black boxes” stems from both technological concerns and epistemological doubts about the quality and sources of decisions made by algorithms [35,36].
The literature also consistently emphasizes the importance of errors and algorithmic bias. These can be considered significant barriers to trust, as they undermine users’ confidence in the fairness, reliability, and predictability of algorithmic decision-making. Problems related to the reproduction of bias and inequality arise both in educational tools [37,38] and in medical diagnostics, where bias can lead to serious consequences for users [39]. Research on algorithmic justice further indicates that systems declared to be “fair-AI” may in practice generate hidden forms of injustice if they are not subject to appropriate oversight mechanisms [40]. Moreover, theoretical accounts hold that trust in AI is shaped within the human–technology relationship, based on the user’s competences and ability to maintain control over the decision-making process [41,42]. It is also pointed out that the way algorithms are presented affects users’ understanding of the system and thus helps reduce biases [43].
Another area of research focuses on technology acceptance and the psychosocial determinants of AI use. The TAM and UTAUT models indicate that the acceptance of AI depends on its functionality as well as on perceived usefulness, ease of use, social norms, and the level of control over the technology [44,45,46]. Consumer research, in turn, shows that the lack of explainability limits user trust, including in the context of e-commerce [47,48]. Recent empirical studies show that, in organizational settings, trust in artificial intelligence supports technology acceptance, innovative employee behavior, and higher levels of engagement [42]. Algorithm transparency, on the other hand, can act as a “channel of trust”, reducing negative attitudes towards AI solutions [49,50,51]. At the same time, the importance of ethical and regulatory frameworks, which create conditions conducive to sustainable trust through auditability, security, and compliance with standards for responsible technology deployment, can be highlighted [29,52]. It is also worth pointing to the prospects associated with new technologies [53], including quantum algorithms, which may strengthen the oversight of AI and increase its credibility [40,49].
3. Materials and Methods
The empirical study was based on the results of the authors’ own research, conducted using a quantitative survey method. Primary data were collected in 2025 through an online questionnaire addressed to organizational employees with experience in using artificial intelligence systems supporting decision-making processes. The sample selection was purposive and non-probabilistic. The research instrument was an author-designed questionnaire comprising statements related to the key dimensions of trust in AI algorithms, assessed on a five-point Likert scale. The distribution of respondents’ answers is presented in Table 1. The study involved 325 respondents, and all correctly completed questionnaires were included in the analysis. Data analysis included descriptive statistics and Pearson correlation analysis, which enabled the identification of relationships between variables. Due to the nature of the sample and the exploratory aim of the study, the obtained results constitute the basis for developing a conceptual model of trust in AI rather than for statistical generalization. The online survey was conducted using Google Forms (Google LLC, Mountain View, CA, USA). Data were organized and analyzed using Microsoft Excel (Microsoft Corporation, Redmond, WA, USA), Office 2021.
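The descriptive step of such an analysis can be illustrated with a minimal sketch. The category labels and the response list below are hypothetical, not taken from the survey data; the `describe` helper simply tallies counts and shares per Likert category.

```python
from collections import Counter

# Five-point Likert categories of the kind used in the questionnaire.
LIKERT = ["definitely no", "rather not", "no opinion", "rather yes", "definitely yes"]

def describe(responses):
    """Return (count, share) for each Likert category in a list of answers."""
    counts = Counter(responses)
    n = len(responses)
    return {cat: (counts[cat], counts[cat] / n) for cat in LIKERT}

# Hypothetical answers to a single statement.
sample = ["rather yes"] * 6 + ["no opinion"] * 2 + ["definitely yes"] * 2
summary = describe(sample)
```

Applied to each of the twelve statements, such a tally yields the distribution reported in Table 1.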
The aim of the research was to identify how the employees of the organization perceive trust in artificial intelligence algorithms used in decision-making processes and which factors determine the positive or negative attitude towards AI algorithms to the greatest extent. It is hypothesized that this trust is multifactorial and depends on the perceived reliability of the results generated by AI, the level of transparency and comprehensibility of the algorithms’ mechanisms, the effectiveness of their application, and the strength of concerns related to errors and biases, as well as the acceptance of the decision-making autonomy of systems. Therefore, the research questions focused on the assessment of individual aspects of the functioning of AI in organizations and on determining the relationships that occur between these elements. The research method was an online survey conducted in 2025 among 325 organizational employees recruited through targeted professional and institutional channels, whose responses reflect perceptions of AI systems based on their declared experience with AI-supported processes in their organizations. Given the sample size and the non-probabilistic nature of the study, the proposed model should be interpreted as exploratory and conceptual, aiming to identify potential relationships between variables rather than to provide statistically generalizable conclusions. It allowed for the acquisition of comparable quantitative data and the analysis of the interdependencies between variables. Thanks to this, the key determinants of trust were identified and a model was presented describing the way it is formed in organizations.
The online survey was conducted in 2025 using a purposive sampling approach. The respondents were employees of organizations operating in various sectors of the economy, including agriculture and the agri-food sector, industry, services, and modern knowledge-intensive sectors. The sample comprised representatives of organizations of different sizes, ranging from small entities to large enterprises, which made it possible to capture diverse organizational contexts. Participation in the study was limited to individuals declaring experience in working with artificial intelligence systems or those involved in decision-making processes supported by AI in their organizations. Therefore, the research sample does not consist of a random group of Internet users; rather, it represents a deliberately selected group of organizational actors whose responses reflect practical perceptions of artificial intelligence in real work environments. Therefore, the proposed model has an exploratory and conceptual character and serves to structure relationships observed in the data, rather than to formulate universal or predictive claims.
In addition to the sample size, particular attention was paid to the substantive relevance of the respondents. Participation in the survey was restricted to individuals who declared professional experience with artificial intelligence systems or direct involvement in decision-making processes supported by AI within their organizations. The questionnaire was preceded by a screening section verifying respondents’ familiarity with AI concepts and applications in organizational practice, ensuring that the answers were provided by individuals possessing at least a basic operational understanding of artificial intelligence. The collected sociodemographic data indicate that the respondents represented diverse sectors of the economy and organizations of varying sizes, which supports the contextual validity of the obtained assessments. Consequently, the study does not rely on opinions of randomly selected Internet users, but on informed perceptions of organizational actors interacting with AI in real work environments.
In the course of the research, we analyzed how trust in artificial intelligence algorithms is perceived in organizations. Both the level of acceptance of their performance and the related concerns and expectations regarding transparency were taken into account. Twelve statements were evaluated, referring to different dimensions of trust, transparency, ethics, and effectiveness of AI systems.
For the statement concerning whether respondents were convinced that AI algorithms supporting decision-making processes were reliable, the answers were distributed across all categories of the scale. The largest proportion of respondents indicated “rather yes” (147 people), while the fewest chose “definitely yes” (17 people) and “definitely no” (16 people). A similar distribution was noted when assessing trust in the results generated by AI systems. In this case, exactly the same numerical pattern was repeated, which may suggest a similar perception of the overall reliability of algorithms and of the results they generate. With regard to the transparency of algorithms, understood as explainable AI, the largest group of respondents indicated “I don’t have an opinion” (124 people), followed by “rather yes” (90 people) and “definitely yes” (53 people). The fewest indications were recorded in the “definitely not” category (23 people). This, in turn, may signal a moderate, albeit ambiguous, assessment of organizational practices in this area.
Furthermore, the statement about the impact of AI use on organizational effectiveness met with the highest approval among the analyzed items. The highest number of indications was recorded in the “rather yes” category (142 people), followed by “definitely yes” (47 people). Negative responses were much rarer, with 42 indications for “rather not” and 16 for “definitely not”. For the statement expressing concern that AI-based decisions may be burdened with errors or biases, positive responses prevailed (138 indications for “rather yes” and 74 for “definitely yes”), while negative answers appeared less frequently. This may indicate a fairly strong perception of risk in this area.
The statement about the need to provide insight into the decision-making process of AI systems was strongly endorsed. The highest number of responses was recorded in the “definitely yes” category (132 people), followed by “rather yes” (117 people). Negative responses remained at a low level and did not exceed 21 indications.
The belief that trust in AI algorithms depends on being able to understand how they work has also been highly supported. The most frequently chosen answers were “rather yes” (136 people) and “definitely yes” (81 people). On the other hand, relatively few people declared that they did not have an opinion (74 people). Negative answers were few, not exceeding 24 indications. With regard to the existence of clear ethical principles governing the use of AI in an organization, the distribution of responses turned out to be more diverse. The largest number of people indicated the option “I don’t have an opinion” (92 people). A comparable number of respondents, on the other hand, chose positive and negative answers. The answer “rather yes” was given by 88 people, while “rather not” by 48 people. Extreme responses remained at the level of 46 people (“definitely not”) and 51 people (“definitely yes”).
The statement that transparency and understanding of algorithms are more important than their accuracy met with indifference or moderate acceptance. The largest number of answers fell into the “I don’t have an opinion” category (111 people), followed by “rather yes” (87 people). Negative responses appeared with similar frequency: 70 people indicated “rather not” and 30 “definitely not”, while only 27 respondents chose “definitely yes”. One of the more controversial statements, regarding acceptance of a situation in which the final decision is made by an AI system instead of a human, met with clear reluctance. The answer “definitely not” was given by 135 people, and “rather not” by 95 people. Positive responses were few, not exceeding 39 indications for “rather yes” and 10 for “definitely yes”; 46 people declared no opinion.
Organizational culture conducive to openness to AI technology was assessed primarily as neutral or positive. The largest number of respondents indicated “I don’t have an opinion” (128 people), followed by “rather yes” (120 people). Negative answers were less frequent, and extremely negative indications did not exceed eight people. The answer “definitely yes” was chosen by 31 people. In the last of the analyzed statements, concerning the increase in trust in AI along with the increase in its effectiveness, neutral or positive answers dominated: responses concentrated in the “rather yes” (108 people) and “I don’t have an opinion” (106 people) categories. Positive responses in the extreme form were recorded in 46 cases. Negative responses were less frequent, and the number of indications in the “definitely not” category was 2.
In order to determine the strength and direction of the relationships between variables, a linear correlation analysis based on the Pearson coefficient was used. The choice of this method resulted from the nature of the data, which, despite being derived from Likert scales, were characterized by distributions close to continuous and met the criteria for the use of parametric correlation measures. In addition, previous research on trust in AI systems indicates that the Pearson coefficient is commonly used in analyses of relationships between users’ perceptual ratings.
For each correlation coefficient, a test of statistical significance (p-value) was calculated, allowing for the verification of the null hypothesis that there is no relationship between the analyzed variables. The study assumed a significance level of α = 0.05, which is considered standard in empirical research in social sciences and management. Correlation values for which p < 0.05 were considered statistically significant and interpreted in the analysis of the results as dependencies of real cognitive significance.
The use of the parametric Pearson coefficient, combined with significance tests, made it possible to identify strong as well as moderate or weak relationships between the variables describing the reliability of algorithms, the transparency of their mechanisms, the perception of effectiveness, the risk of error, and the decision-making autonomy of AI systems. This procedure forms the basis for interpreting the correlation matrix presented in the results section and supports the construction of a conceptual model describing the determinants of organizational trust in artificial intelligence. The distributions of the variables were assessed as close to normal on the basis of histograms and skewness and kurtosis indices (absolute values below 1).
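The procedure described above can be sketched in a few lines. The ratings below are hypothetical Likert-style data, not the survey responses; for n = 10 the two-tailed critical t value at α = 0.05 with 8 degrees of freedom is approximately 2.306.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t statistic for testing H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical Likert ratings (1-5) for two trust dimensions.
reliability = [4, 4, 3, 5, 2, 4, 3, 5, 4, 3]
trust       = [4, 5, 3, 5, 2, 4, 2, 5, 4, 3]

r = pearson_r(reliability, trust)
t = t_statistic(r, len(reliability))
# With 8 degrees of freedom, |t| > 2.306 implies p < 0.05 (two-tailed).
```

In practice a statistical package would return the exact p-value directly; the sketch only shows the logic of the significance test applied to each pair of variables.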
Figure 1 shows a correlation matrix illustrating the strength and direction of the relationships between twelve variables describing different dimensions of organizational trust in AI algorithms. The interpretation of effect strength adopted standard thresholds: correlations below 0.30 were considered weak, values of 0.30–0.49 moderate, and correlations above 0.50 strong. Relationships exceeding 0.90 indicate very high covariability, which may point to conceptual proximity of the analyzed variables or a risk of collinearity, requiring interpretative caution. The matrix includes both factors that strengthen trust, such as the reliability of results, the transparency of algorithms, and the perceived effectiveness of AI systems, and limiting factors, including concerns about errors, algorithmic biases, and the decision-making autonomy of systems.
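The thresholds above can be encoded as a small classification helper; the function name and the collinearity label are ours, and the boundary case r = 0.50 is assigned to the stronger category by convention.

```python
def correlation_strength(r):
    """Classify a correlation coefficient using the thresholds adopted in the text."""
    a = abs(r)
    if a > 0.90:
        # Very high covariability: possible conceptual overlap or collinearity.
        return "very high (possible collinearity)"
    if a >= 0.50:
        return "strong"
    if a >= 0.30:
        return "moderate"
    return "weak"
```

Applying such a rule to every cell of the matrix yields the qualitative reading of Figure 1 used in the discussion.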
The correlation matrix reveals a consistent structure of relationships among the analyzed variables and provides the basis for constructing a model explaining the mechanism of trust formation in organizations.
On the basis of the conducted empirical research, treated as exploratory and including the analysis of the correlation structure observed in the correlation matrix for the sample of 325 respondents, a preliminary conceptual model was proposed. Given the exploratory aim of the study, this sample size was considered sufficient for developing a model intended to structure the observed relationships rather than to formulate statistically generalizable or predictive conclusions. The model points to the determinants of organizational trust in artificial intelligence algorithms.
The relationships between the key determinants of organizational trust in AI systems can be written in the form of a linear model:

ZT = α·W + β·E + γ·P + δ·R + ε·A,
where ZT stands for the level of trust in AI in the organization, while the variables W, E, P, R, and A represent, respectively: the reliability of the results generated by AI systems (W), the transparency and comprehensibility of algorithms (E), the perceived effectiveness and usefulness of AI in organizational processes (P), the risk of algorithmic errors and biases (R), and the acceptance of the decision-making autonomy of AI systems (A). The parameters α, β, and γ take positive values, reflecting their reinforcing effect on trust, while the coefficients δ and ε are negative, indicating that an increase in perceived risk or in the acceptance of decisions made autonomously by AI leads to a decrease in the level of trust.
The proposed linear model assumes five structural parameters denoted by the symbols α, β, γ, δ and ε. Each reflects the strength and direction of the impact of the relevant explanatory variable on the level of organizational trust in artificial intelligence (AI) systems.
The α parameter describes the impact of the reliability of algorithm-generated results (W) on the level of trust. It takes positive values, which means that an increase in perceived trustworthiness leads to an increase in user trust. The parameter β determines the importance of transparency and comprehensibility of the mechanisms of operation of algorithms (E). Its positive value indicates that greater transparency is a factor that strengthens trust.
The γ parameter represents the impact of the perceived effectiveness and usability of AI systems (P). The higher the effectiveness attributed to the system, the more likely it is to build a positive attitude and acceptance of the technology. The δ parameter, as a negative coefficient, reflects the impact of algorithmic error risk and bias (R). This means that an increase in concerns about errors leads to a decrease in the level of trust.
The ε parameter, also negative, represents the impact of accepting the decision-making autonomy of AI systems (A). A negative value indicates that an increase in acceptance of the full autonomy of algorithms is associated with a decrease in trust, which is consistent with the results of empirical studies showing users’ strong preference for a human–AI collaboration model.
The model is linear in nature, which follows from both theoretical assumptions and the empirical analysis. The correlation matrix of the variables used in the study indicates clear, monotonic relationships between the evaluated aspects of AI functioning, which justifies the use of a simplified linear structure at the conceptualization stage. The characteristics of the obtained data, based on Likert scales and yielding uniform response distributions and high linear correlation coefficients, further support this approach. The adopted form of the model therefore serves a conceptual and empirical purpose, allowing the directions and relative strength of the influence of individual determinants on organizational trust in AI to be presented. The model is not a statistically estimated equation but a descriptive tool that synthesizes the research results and organizes the relationships between the key factors influencing trust in the organizational environment.
The model is conceptual in nature and does not constitute a statistical estimate. The presented equation is a formal expression of the proposed model. It reflects the results of the empirical analysis, according to which trust in AI is multidimensional and depends on the simultaneous interaction of several separate (but interrelated) factors. The design of the model emphasizes that a positive perception of the trustworthiness, transparency, and effectiveness of AI systems strengthens the level of trust, whereas concerns about errors and reluctance to grant full decision-making autonomy to algorithms limit it. The functional notation allows the model to be treated as an analytical tool for assessing which aspects of AI use are most important for shaping trust in the surveyed organizations.
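As an analytical tool, the model can be sketched directly in code. The coefficient values below are purely illustrative placeholders, since the article deliberately does not estimate them; the sketch only demonstrates how the signs of the coefficients shape the resulting trust level.

```python
def trust_level(W, E, P, R, A,
                alpha=0.30, beta=0.25, gamma=0.20, delta=-0.15, epsilon=-0.10):
    """Conceptual linear model ZT = alpha*W + beta*E + gamma*P + delta*R + epsilon*A.

    The coefficient values are illustrative only: the article proposes the model
    as a descriptive tool, not a statistically estimated equation.
    """
    return alpha * W + beta * E + gamma * P + delta * R + epsilon * A

# Two hypothetical perception profiles on a 1-5 scale, differing only in
# perceived risk of errors and biases (R).
low_risk_profile = trust_level(W=4, E=5, P=4, R=2, A=2)
high_risk_profile = trust_level(W=4, E=5, P=4, R=5, A=2)
```

With any positive α, β, γ and negative δ, ε, the high-risk profile necessarily yields a lower trust level, which is exactly the qualitative behavior the model is meant to express.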
4. Discussion
The discussion of the results is based directly on the empirical data obtained in the study, as well as on the identified relationships between the analyzed variables. In contrast to the theoretical section, this part focuses on the interpretation of the results presented in Table 1 and the correlation matrix (Figure 1), as well as on their synthetic representation shown in Figure 2.
The obtained results clearly indicate that, within the studied sample, the key determinants of organizational trust in AI algorithms are the perceived reliability of generated outcomes, the transparency of operating mechanisms, and the perceived effectiveness of artificial intelligence applications. These variables exhibit strong and consistent positive relationships, which confirms their central role in the process of building trust in the organizational environment.
At the same time, the correlation analysis revealed the presence of factors that limit trust. In particular, these include concerns related to the risk of errors, algorithmic bias, and the acceptance of full decision-making autonomy of AI systems. It should also be emphasized that these factors remain relatively independent of positive assessments of technological effectiveness. This suggests that even effective AI systems may fail to generate trust if they do not ensure an adequate level of control and transparency.
The conducted research allows us to formulate several conclusions regarding the way organizational trust in artificial intelligence systems is formed. The data obtained indicate that this trust is based primarily on three key pillars: the reliability of the results generated by algorithms, the transparency of their mechanisms of operation, and the perceived effectiveness of the use of AI in organizational practice. The relationships between these variables turned out to be strong and consistent. Therefore, it can be said that a positive attitude towards AI systems is built primarily through the belief in their reliability, transparency and real usefulness. At the same time, the presented results suggest that, within the context of the studied organizations, increases in effectiveness and operational efficiency associated with the use of AI may contribute to strengthening the climate of trust. The observed relationship between increased effectiveness of AI use and a strengthened climate of trust should be interpreted as context-dependent and contingent on organizational characteristics, rather than as a universal pattern applicable to all organizations.
The analysis of the data also pointed to the significant role of risk factors, especially concerns about algorithmic errors and biases. These factors, although partly related to the need for transparency and control over the operation of systems, create a separate area that limits the level of trust, regardless of positive assessments of other aspects of AI functioning. The analysis indicates that these concerns are persistent: they are not significantly weakened even when respondents appreciate the effectiveness of the systems. The strongly negative reaction to the idea of full decision-making autonomy of AI further confirms that trust has its limits and that employees prefer a model of coexistence between humans and technology, in which ultimate responsibility remains with the human side.
Moreover, on the basis of the presented conclusions, it can be stated that trust in AI in organizations is a multidimensional construct, complex and at the same time shaped by many simultaneous determinants. The empirical results point to the need for a systemic approach to building trust, encompassing both actions to strengthen the transparency and explainability of algorithms and continuous monitoring of potential risks related to errors or biases. The identified relationships confirm that organizations that invest in transparent technological solutions and in educating employees about AI systems can achieve a higher level of acceptance and trust. The results of the research thus form the basis for further analyses of how to shape the safe, responsible, and socially acceptable implementation of artificial intelligence in the organizational environment.
Figure 2 shows the key determinants of organizational trust in AI systems. They have been included as conditions supporting the sustainable development of the organization. Reinforcing and limiting factors shaping the level of trust are identified, including the importance of reliability of results, transparency of algorithms, perceived effectiveness of AI, as well as the impact of concerns related to errors and decision-making autonomy. The dependencies presented reflect the multidimensional structure of trust and its links to the practices of responsible implementation of technology in organizations.