Article

Assessing the Determinants of Trust in AI Algorithms in the Conditions of Sustainable Development of the Organization

1 Faculty of Mechanical and Industrial Engineering, Warsaw University of Technology, 02-524 Warsaw, Poland
2 Department of Economic Informatics, Faculty of Economics, University of Economics in Katowice, 40-287 Katowice, Poland
3 Faculty of Social and Technical Sciences, Higher School of Professional Education Wroclaw, 53-329 Wroclaw, Poland
4 Department of Production Management, Faculty of Production Engineering and Materials Technology, Częstochowa University of Technology, 19 Aleja Armii Krajowej, 42-201 Częstochowa, Poland
* Author to whom correspondence should be addressed.
Sustainability 2026, 18(2), 776; https://doi.org/10.3390/su18020776
Submission received: 10 December 2025 / Revised: 4 January 2026 / Accepted: 9 January 2026 / Published: 12 January 2026
(This article belongs to the Special Issue Advancing Innovation and Sustainability in SMEs and Entrepreneurship)

Abstract

The article addresses the problem of the insufficient empirical recognition of the determinants of trust in artificial intelligence (AI) algorithms in organizations operating under conditions of sustainable development. The aim of the study was to identify the factors shaping organizational trust in AI and to examine how perceived trustworthiness, transparency, and effectiveness of algorithms influence their acceptance in the work environment. The research was conducted using a quantitative survey-based approach among organizational employees, which enabled the analysis of relationships between key variables and the identification of factors that strengthen or limit trust. The results indicate that algorithmic transparency, the reliability of generated outcomes, and the perceived effectiveness of AI applications significantly foster trust, whereas concerns related to errors and the decision-making autonomy of systems constitute important barriers to acceptance. Based on the findings, a conceptual and exploratory model of trust in AI was proposed, which may be used to diagnose the level of technology acceptance and to support the responsible implementation of artificial intelligence-based solutions in organizations. The contribution of the article lies in integrating organizational and technological perspectives and in providing an empirical approach to trust in AI within the context of sustainable development.

1. Introduction

The dynamic development of digital technologies and the growing importance of artificial intelligence in the operations of organizations mean that trust in AI algorithms can be considered one of the key challenges of modern management [1,2]. With the increasing automation of decision-making processes, there is a need to understand how employees and stakeholders assess the trustworthiness, transparency, and accountability of systems based on intelligent algorithms [3,4]. At the same time, organizations operate under growing pressure to conduct business in accordance with the principles of sustainable development [5], which requires combining technological innovation with concern for social and ethical factors. The relationship between the use of AI and building trust in the organizational environment is therefore of particular importance [6,7]. The level of acceptance of the technology depends on both the effectiveness of implementations and the ability to realize sustainability assumptions. In the existing literature, there is a growing gap in the empirical recognition of the factors determining trust in AI in organizational practice [8,9]. This is especially evident with regard to the role of transparency, fears of algorithmic errors, and the impact of organizational culture [10,11]. Filling this gap justifies taking up the topic.
In this article, the analysis focuses on trust in artificial intelligence algorithms from an organizational perspective, while simultaneously taking into account the context of sustainable development. It is assumed that trust in AI is a complex construct shaped by the coexistence of technological, organizational, and cognitive factors. In particular, the study examines the role of perceived reliability of algorithm-generated outcomes, the transparency and comprehensibility of algorithmic mechanisms, the perceived effectiveness of AI applications, as well as concerns related to the risk of errors, bias, and the decision-making autonomy of systems. The study is empirical and exploratory in nature. A quantitative survey-based approach was applied among organizational employees, which enabled the identification of relationships between key variables describing trust in AI in the work environment. The obtained results cannot be treated as a basis for formulating universal generalizations; however, they allow for the identification of dominant perceptual tendencies and potential mechanisms shaping trust within the examined organizational context. On this basis, a conceptual model of trust in AI was proposed, which serves a structuring and diagnostic function rather than a predictive one. The structure of the article is aligned with these assumptions. First, the theoretical foundations of sustainable organizational development and the existing findings on trust in artificial intelligence algorithms are presented. Next, the methodology of the empirical research is described, including the characteristics of the sample and the applied data analysis methods. The subsequent section presents the research results and their interpretation in relation to the adopted assumptions. The article concludes with a discussion and conclusions, including the limitations of the study and directions for future research.
The aim of the article is to identify and analyze the determinants of organizational trust in artificial intelligence systems and to develop a model describing the relationships between key variables conditioning the acceptance of algorithms. The study focused on assessing the perceived reliability of AI-generated outcomes, the transparency and comprehensibility of the mechanisms of operation, the effectiveness of the technology, concerns about errors, and the acceptance of the autonomy of decision-making systems. The aim was to show how trust in AI fits into the broader context of the organization’s sustainable development and the importance of algorithm-based technologies for the stability and responsibility of modern enterprises. The added value of the work results from the combination of technological and organizational perspectives, based on empirical data. This allows for a more complete understanding of how trust is formed and its determinants.
The structure of the article follows from these goals. The theoretical part explains the key determinants of the sustainable development of an organization and then presents the most important concepts regarding trust in artificial intelligence algorithms. The next part describes the research methodology, including the design of the research tool, the characteristics of the sample, and the adopted methods of data analysis. Then, the results of the empirical research on the perception of AI in organizations are presented, along with an interpretation of the main relationships. The last part of the article contains a discussion of the results, the formulated conclusions, practical recommendations, and directions for further research. Such a structure made it possible to approach the issue from many angles and to create a basis for further reflection on the role of trust in the processes of implementing artificial intelligence in organizations.

2. Literature Review

Determinants of sustainable development of an organization
The sustainable development of an organization is seen as the result of structural, cultural, and institutional factors working together [12,13]. They determine the ability of entities to achieve long-term goals. The literature emphasizes that a properly adopted strategic orientation and appropriate organization of processes strengthen the readiness to implement sustainable development practices [14]. In turn, the quality of organizational culture and internal consistency of activities are conducive to long-term stability [15]. It can also be noted that organizational competencies, management maturity and the integration of sustainability principles with business practices are crucial for the effective implementation of the Sustainable Development Goals (SDGs) [16,17,18,19].
In the literature on the subject, the role of innovation and technological modernization as main determinants of sustainable development is also consistently underscored. Innovation processes enable organizations to adapt to environmental changes and improve operational efficiency [20]. Similarly, technological innovations in resource-intensive sectors, such as the agri-food sector, significantly increase environmental efficiency and competitiveness [21]. The agri-food sector serves as an illustrative example due to its high consumption of natural resources and significant environmental footprint. As a result, technological innovations in this sector lead to especially visible and quantifiable improvements in environmental efficiency and competitive performance compared to less resource-intensive industries.
In turn, the ability to converge and adapt is an important factor conducive to organizational development [22]. The literature also draws attention to the importance of socio-economic conditions that shape the possibilities of achieving global sustainable development goals [23] and affect the development of industrial sectors [24]. Environmental, Social and Governance (ESG) analyses are an important complement to this perspective. They emphasize that sustainability strategies are based on the integration of environmental, social and governance factors [25].
Social and educational factors are another group of determinants important for sustainable development at the organizational level. It is emphasized that the pro-environmental behavior of employees depends on motivation, environmental awareness and support from organizational policies [26]. Education, both formal and within an organization, strengthens the competencies needed to implement change and build a culture of responsibility [27]. In addition, the socio-political environment affects the way human capital is managed. It also shapes the ability of organizations to respond to challenges related to the implementation of sustainable development principles, although the scope and nature of these challenges may vary across sectors [28].
The identified determinants can be considered broadly applicable across sectors; however, their specific configuration and impact may vary depending on sector-specific characteristics, which should be taken into account when interpreting the proposed model.
The problem of trust in AI algorithms
The issue of trust in artificial intelligence algorithms has become an important topic in research on digital technologies in recent years. It remains strongly related to the transparency of systems, their explainability, and the ability to understand the logic of decision-making [29,30]. The scientific literature notes that explainability strengthens user trust and is an important foundation for assessing the reliability of algorithms [31,32]. The importance of transparency in the operation of systems is clearly evident not only in medicine but also in other sensitive sectors where algorithmic decisions have significant social, ethical, or safety implications. In these areas, a lack of clarity about how technological decisions are made limits the readiness to use AI in practice [33,34]. It is also indicated that skepticism towards “black boxes” stems from both technological concerns and epistemological doubts about the quality and sources of decisions made by algorithms [35,36].
In the literature on the subject, the importance of errors and algorithmic bias is also consistently emphasized. They can be considered significant barriers to trust, as they undermine users’ confidence in the fairness, reliability, and predictability of algorithmic decision-making. Problems related to the reproduction of bias and inequalities arise both in educational tools [37,38] and in medical diagnostics, where the impact of bias can lead to serious consequences for users [39]. The results of research on algorithmic justice further indicate that systems declared to be “fair-AI” may in practice generate hidden forms of injustice if they are not subjected to appropriate oversight mechanisms [40]. Moreover, in theory, trust in AI is shaped in the human–technology relationship, based on the user’s competences and the ability to maintain control over the decision-making process [41,42]. It is also pointed out that the way algorithms are presented affects the understanding of the system and thus the reduction in biases [43].
Another area of research focuses on the acceptance of technology and the psychosocial determinants of AI use. The TAM and UTAUT models indicate that the acceptance of AI depends on its functionality as well as on the perception of usability, ease of use, social norms, and the level of control over the technology [44,45,46]. Consumer research, in turn, shows that a lack of explainability limits user trust, also in the context of e-commerce [47,48]. Recent empirical studies show that, in organizational settings, trust in artificial intelligence supports technology acceptance, innovative employee behavior, and higher levels of engagement [42]. Algorithm transparency, on the other hand, can act as a “channel of trust”, reducing negative attitudes towards AI solutions [49,50,51]. At the same time, the importance of an ethical and regulatory framework can be highlighted, as it creates conditions conducive to sustainable trust through auditability, security, and compliance with standards for responsible technology deployment [29,52]. It is also worth pointing to prospects related to new technologies [53], including quantum algorithms, which can strengthen AI oversight and increase its credibility [40,49].

3. Materials and Methods

The empirical study was based on the results of the authors’ own research, conducted using a quantitative survey method. Primary data were collected in 2025 through an online questionnaire, which was addressed to organizational employees with experience in using artificial intelligence systems supporting decision-making processes. The sample selection was purposive and non-probabilistic. The research instrument was an author-designed questionnaire comprising statements related to the key dimensions of trust in AI algorithms, assessed using a five-point Likert scale. The distribution of respondents’ answers is presented in Table 1. The study involved 325 respondents, and all correctly completed questionnaires were included in the analysis. Data analysis included descriptive statistics and Pearson correlation analysis, which enabled the identification of relationships between variables. Due to the nature of the sample and the exploratory aim of the study, the obtained results constitute the basis for developing a conceptual model of trust in AI, rather than for statistical generalization. The online survey was conducted using Google Forms (Google LLC, Mountain View, CA, USA). Data were organized and analyzed using Microsoft Excel (Microsoft Corporation, Redmond, WA, USA), Office 2021.
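The coding step behind such an analysis can be illustrated as follows. This is a minimal sketch with hypothetical response labels and illustrative data, not the study's actual codebook: five-point Likert answers are mapped to numeric codes and basic descriptive statistics are computed per questionnaire item.

```python
# Sketch: numeric coding of five-point Likert responses and descriptive
# statistics for one questionnaire item. Labels and data are illustrative.
import statistics

LIKERT_CODES = {
    "definitely no": 1,
    "rather no": 2,
    "no opinion": 3,
    "rather yes": 4,
    "definitely yes": 5,
}

def describe(responses):
    """Return basic descriptive statistics for one questionnaire item."""
    codes = [LIKERT_CODES[r] for r in responses]
    return {
        "n": len(codes),
        "mean": statistics.mean(codes),
        "stdev": statistics.stdev(codes),
        "mode": statistics.mode(codes),
    }

# Hypothetical mini-sample for a single item.
item = ["rather yes", "definitely yes", "no opinion", "rather yes", "definitely no"]
stats = describe(item)
```

Once each item is coded this way, the resulting numeric vectors can be fed into the correlation analysis described below.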
The aim of the research was to identify how the employees of the organization perceive trust in artificial intelligence algorithms used in decision-making processes and which factors determine the positive or negative attitude towards AI algorithms to the greatest extent. It is hypothesized that this trust is multifactorial and depends on the perceived reliability of the results generated by AI, the level of transparency and comprehensibility of the algorithms’ mechanisms, the effectiveness of their application, and the strength of concerns related to errors and biases, as well as the acceptance of the decision-making autonomy of systems. Therefore, the research questions focused on the assessment of individual aspects of the functioning of AI in organizations and on determining the relationships that occur between these elements. The research method was an online survey conducted in 2025 among 325 organizational employees recruited through targeted professional and institutional channels, whose responses reflect perceptions of AI systems based on their declared experience with AI-supported processes in their organizations. Given the sample size and the non-probabilistic nature of the study, the proposed model should be interpreted as exploratory and conceptual, aiming to identify potential relationships between variables rather than to provide statistically generalizable conclusions. It allowed for the acquisition of comparable quantitative data and the analysis of the interdependencies between variables. Thanks to this, the key determinants of trust were identified and a model was presented describing the way it is formed in organizations.
The online survey was conducted in 2025 using a purposive sampling approach. The respondents were employees of organizations operating in various sectors of the economy, including agriculture and the agri-food sector, industry, services, and modern knowledge-intensive sectors. The sample comprised representatives of organizations of different sizes, ranging from small entities to large enterprises, which made it possible to capture diverse organizational contexts. Participation in the study was limited to individuals declaring experience in working with artificial intelligence systems or those involved in decision-making processes supported by AI in their organizations. Therefore, the research sample does not consist of a random group of Internet users; rather, it represents a deliberately selected group of organizational actors whose responses reflect practical perceptions of artificial intelligence in real work environments. Therefore, the proposed model has an exploratory and conceptual character and serves to structure relationships observed in the data, rather than to formulate universal or predictive claims.
In addition to the sample size, particular attention was paid to the substantive relevance of the respondents. Participation in the survey was restricted to individuals who declared professional experience with artificial intelligence systems or direct involvement in decision-making processes supported by AI within their organizations. The questionnaire was preceded by a screening section verifying respondents’ familiarity with AI concepts and applications in organizational practice, ensuring that the answers were provided by individuals possessing at least a basic operational understanding of artificial intelligence. The collected sociodemographic data indicate that the respondents represented diverse sectors of the economy and organizations of varying sizes, which supports the contextual validity of the obtained assessments. Consequently, the study does not rely on opinions of randomly selected Internet users, but on informed perceptions of organizational actors interacting with AI in real work environments.
In the course of the research, we tried to analyze the way trust in artificial intelligence algorithms is perceived in organizations. Both the level of acceptance of their performance and the related concerns and expectations for transparency were taken into account. Twelve statements were evaluated, referring to different dimensions of trust, transparency, ethics and effectiveness of AI systems.
With regard to the statement that AI algorithms supporting decision-making processes are reliable, the answers were distributed across all categories of the scale. The largest proportion of respondents indicated “rather yes” (147 people), while the fewest chose “definitely yes” (17 people) and “definitely no” (16 people). A similar distribution of responses was noted when assessing trust in the results generated by AI systems. In this case, exactly the same numerical pattern was repeated, which may suggest a similar perception of the overall reliability of algorithms and of the results they generate. With regard to the transparency of algorithms, understood as explainable AI, the largest group of respondents indicated the answer “I don’t have an opinion” (124 people), followed by “rather yes” (90 people) and “definitely yes” (53 people). The lowest number of indications was recorded in the “definitely not” category (23 people). This, in turn, may signal a moderate, albeit ambiguous, assessment of organizational practices in this area.
Furthermore, the statement about the impact of AI use on the effectiveness of the organization met with the highest approval among the analyzed elements. The highest number of indications was recorded in the “rather yes” category (142 people), followed by “definitely yes” (47 people). Negative responses were much rarer: 42 people for “rather not” and 16 people for “definitely not”. With regard to the concern that AI-based decisions may be burdened with errors or biases, positive responses prevailed (138 indications for “rather yes” and 74 for “definitely yes”), while negative answers appeared less frequently. This may indicate a fairly strong perception of risk in this area.
The statement about the need to provide insight into the decision-making process of AI systems met with strong acceptance. The highest number of responses was recorded in the “definitely yes” category (132 people), followed by “rather yes” (117 people). Negative responses remained at a low level and did not exceed 21 indications.
The belief that trust in AI algorithms depends on being able to understand how they work has also been highly supported. The most frequently chosen answers were “rather yes” (136 people) and “definitely yes” (81 people). On the other hand, relatively few people declared that they did not have an opinion (74 people). Negative answers were few, not exceeding 24 indications. With regard to the existence of clear ethical principles governing the use of AI in an organization, the distribution of responses turned out to be more diverse. The largest number of people indicated the option “I don’t have an opinion” (92 people). A comparable number of respondents, on the other hand, chose positive and negative answers. The answer “rather yes” was given by 88 people, while “rather not” by 48 people. Extreme responses remained at the level of 46 people (“definitely not”) and 51 people (“definitely yes”).
The statement that transparency and understanding of algorithms are more important than their accuracy was met with indifference or moderate acceptance. The largest number of answers fell into the category “I don’t have an opinion” (111 people), followed by “rather yes” (87 people). Negative responses appeared with a similar frequency: 70 people indicated “rather not” and 30 “definitely not”. Only 27 respondents chose “definitely yes”. One of the more controversial statements, regarding the acceptance of a situation in which the final decision is made by an AI system instead of a human, was met with clear reluctance. The answer “definitely not” was given by 135 people, and “rather not” by 95 people. Positive responses were few, not exceeding 39 indications for “rather yes” and 10 for “definitely yes”; 46 people declared no opinion.
Organizational culture conducive to openness to AI technology was assessed primarily as neutral or positive. The largest number of respondents indicated “I don’t have an opinion” (128 people), followed by “rather yes” (120 people). Negative answers were less frequent, and extremely negative indications did not exceed eight people. The answer “definitely yes” was chosen by 31 people. In the last of the analyzed statements, concerning the increase in trust in AI along with the increase in its effectiveness, neutral or positive answers dominated. The answers “rather yes” (108 people) and “I don’t have an opinion” (106 people) were chosen most often. Extreme positive responses were recorded in 46 cases. Negative responses were less frequent, and the number of indications in the “definitely not” category was only 2.
In order to determine the strength and direction of the relationships between variables, a linear correlation analysis based on the Pearson coefficient was used. The choice of this method resulted from the nature of the data, which, despite being derived from Likert scales, were characterized by distributions close to continuous and met the criteria for the use of parametric correlation measures. In addition, previous research on trust in AI systems indicates that the Pearson coefficient is commonly used in analyses of the relationships between users’ perceptual ratings.
For each correlation coefficient, a test of statistical significance (p-value) was calculated, allowing for the verification of the null hypothesis that there is no relationship between the analyzed variables. The study assumed a significance level of α = 0.05, which is considered standard in empirical research in social sciences and management. Correlation values for which p < 0.05 were considered statistically significant and interpreted in the analysis of the results as dependencies of real cognitive significance.
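The procedure described above can be sketched in a few lines of Python using `scipy.stats.pearsonr`, which returns both the correlation coefficient and its p-value. The variable names and the mini-sample below are illustrative, not the study's data.

```python
# Sketch: Pearson correlation with a significance test at alpha = 0.05,
# mirroring the procedure described in the text. Data are illustrative.
from scipy.stats import pearsonr

ALPHA = 0.05  # significance level adopted in the study

# Hypothetical perceptual ratings (coded 1-5) for two trust dimensions.
reliability  = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
transparency = [4, 4, 3, 5, 2, 5, 4, 2, 4, 4]

r, p = pearsonr(reliability, transparency)
significant = p < ALPHA  # reject H0 of no relationship when p < 0.05
```

In the study this test was applied to every pair of the twelve variables, and only coefficients with p < 0.05 were interpreted substantively.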
The use of the parametric Pearson coefficient, combined with significance tests, made it possible to identify strong, moderate, and weak relationships between the variables describing the reliability of algorithms, the transparency of their mechanisms, the perception of effectiveness, the risk of error, and the decision-making autonomy of AI systems. This procedure forms the basis for the interpretation of the correlation matrix presented in the results section and supports the construction of a conceptual model describing the determinants of organizational trust in artificial intelligence. The distributions of the variables were assessed as close to normal based on histograms and indices of skewness and kurtosis (absolute values below 1).
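The distribution screening mentioned above can be reproduced as a simple check: skewness and excess kurtosis within an absolute value of 1 are treated as close enough to normal to justify the parametric Pearson coefficient. This is a hedged sketch with illustrative data, using the standard `scipy.stats` moment functions.

```python
# Sketch: screening a Likert-coded variable for approximate normality
# via skewness and excess kurtosis, as described in the text.
from scipy.stats import skew, kurtosis

def near_normal(values, threshold=1.0):
    """Flag a variable whose skewness and excess kurtosis fall within |threshold|."""
    return abs(skew(values)) < threshold and abs(kurtosis(values)) < threshold

# Hypothetical, roughly symmetric item scores (coded 1-5).
item_scores = [1, 2, 2, 3, 3, 3, 3, 4, 4, 5]
ok = near_normal(item_scores)
```

A strongly skewed item would fail this check and would call for a non-parametric alternative such as Spearman's rank correlation.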
Figure 1 shows a correlation matrix illustrating the strength and direction of the relationships between twelve variables describing different dimensions of organizational trust in AI algorithms. The interpretation of effect strength adopted standard thresholds: correlations below 0.30 were considered weak, values of 0.30–0.49 moderate, and correlations above 0.50 strong. Relationships exceeding 0.90 indicate very high covariability, which may point to conceptual proximity of the analyzed variables or the risk of collinearity, requiring interpretative caution. The matrix takes into account both factors that strengthen trust, such as the reliability of results, the transparency of algorithms, and the perceived effectiveness of AI systems, and limiting factors, including concerns about errors, algorithmic biases, and the decision-making autonomy of systems. The figure reveals a consistent structure of relationships between variables and forms the basis for constructing a model explaining the mechanism of trust formation in organizations.
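The interpretation thresholds adopted for the matrix can be expressed as a small helper function. This is an illustrative sketch of the classification rule stated in the text, not code from the study.

```python
# Sketch: classifying correlation strength using the thresholds adopted
# in the study: |r| < 0.30 weak, 0.30-0.49 moderate, >= 0.50 strong,
# and > 0.90 flagged as possible collinearity.
def classify_correlation(r: float) -> str:
    """Label a Pearson coefficient according to the adopted thresholds."""
    strength = abs(r)
    if strength > 0.90:
        return "very high (possible collinearity)"
    if strength >= 0.50:
        return "strong"
    if strength >= 0.30:
        return "moderate"
    return "weak"
```

Applied cell by cell to the correlation matrix, such a rule reproduces the weak/moderate/strong reading used in the discussion of Figure 1.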
On the basis of the empirical research, treated as exploratory and including the analysis of the correlation structure observed in the correlation matrix for a sample of 325 respondents, a preliminary conceptual model was proposed. Given the exploratory aim of the study, this sample size was considered sufficient for developing a model intended to structure the observed relationships, rather than to formulate statistically generalizable or predictive conclusions. The model points to the determinants of organizational trust in artificial intelligence algorithms.
The relationships between the key determinants of organizational trust in AI systems can be written in the form of a linear model:
ZT = αW + βE + γP + δR + εA
where ZT stands for the level of trust in AI in the organization, while the variables W, E, P, R, and A represent, respectively: the reliability of the results generated by AI systems (W), the transparency and comprehensibility of algorithms (E), the perceived effectiveness and usefulness of AI in organizational processes (P), the risk of algorithmic errors and biases (R), and the acceptance of the decision-making autonomy of AI systems (A). The parameters α, β, and γ take positive values. This reflects their reinforcing effect on trust, while the δ and ε ratios are negative, indicating that an increase in perceived risk or an increase in acceptance of decisions made autonomously by AI leads to a decrease in the level of trust.
The proposed linear model assumes five structural parameters denoted by the symbols α, β, γ, δ and ε. Each reflects the strength and direction of the impact of the relevant explanatory variable on the level of organizational trust in artificial intelligence (AI) systems.
The α parameter describes the impact of the reliability of algorithm-generated results (W) on the level of trust. It takes positive values, which means that an increase in perceived trustworthiness leads to an increase in user trust. The parameter β determines the importance of transparency and comprehensibility of the mechanisms of operation of algorithms (E). Its positive value indicates that greater transparency is a factor that strengthens trust.
The γ parameter represents the impact of the perceived effectiveness and usability of AI systems (P). The higher the effectiveness attributed to the system, the more likely it is to build a positive attitude and acceptance of the technology. The δ parameter, as a negative coefficient, reflects the impact of algorithmic error risk and bias (R). This means that an increase in concerns about errors leads to a decrease in the level of trust.
The ε parameter, also negative, represents the impact of accepting the decision-making autonomy of AI systems (A). A negative value indicates that an increase in acceptance of full autonomy of algorithms is associated with a decrease in trust, which is consistent with the results of empirical studies showing a strong preference among users for the human–AI collaboration model.
The model is linear in nature, which results from both theoretical assumptions and the empirical analysis. The correlation matrix of the variables used in the study indicates clear, monotonic relationships between the evaluated aspects of AI functioning, which justifies the use of a simplified linear structure at the conceptualization stage. The characteristics of the data obtained (based on Likert scales, generating uniform response distributions and high linear correlation coefficients) further support such an approach. The adopted form of the model therefore serves a conceptual and empirical purpose, allowing the directions and relative strength of the influence of individual determinants on organizational trust in AI to be presented. The model is not a statistically estimated equation but a descriptive tool that synthesizes the research results and organizes the relationships between the key factors influencing trust in the organizational environment.
The model is conceptual in nature and does not constitute a statistical estimate. The presented equation is a formal expression of the proposed model. It reflects the results of the empirical analysis, according to which trust in AI is multidimensional and depends on the simultaneous interaction of several separate but interrelated factors. The design of the model emphasizes that a positive perception of the trustworthiness, transparency, and effectiveness of AI systems strengthens trust, whereas concerns about errors and reluctance to grant algorithms full decision-making autonomy limit it. The functional notation allows the model to be treated as an analytical tool for assessing which aspects of AI use matter most for shaping trust in the surveyed organizations.
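As a purely descriptive illustration, the weighted combination the model describes can be sketched in Python. All coefficient values below are hypothetical placeholders chosen only to respect the sign constraints from the text (α, β, γ positive; δ, ε negative); they are not estimates from the study.

```python
def trust_index(W, E, P, R, A,
                alpha=0.3, beta=0.25, gamma=0.25,
                delta=-0.15, epsilon=-0.2):
    """Descriptive trust score T = aW + bE + gP + dR + eA.

    W, E, P, R, A are mean Likert scores (1-5) for reliability,
    transparency, effectiveness, error/bias risk, and acceptance
    of full autonomy. Coefficients here are illustrative only.
    """
    return alpha * W + beta * E + gamma * P + delta * R + epsilon * A

# Example: high reliability/transparency/effectiveness, moderate risk
# concerns, low acceptance of full autonomy.
score = trust_index(W=4.2, E=3.8, P=4.0, R=3.5, A=2.1)
print(round(score, 2))
```

Such a function would only serve as a diagnostic aid, in the spirit of the descriptive tool described above, never as a fitted predictive equation.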

4. Discussion

The discussion of the results is based directly on the empirical data obtained in the study, as well as on the identified relationships between the analyzed variables. In contrast to the theoretical section, this part focuses on the interpretation of the results presented in Table 1 and the correlation matrix (Figure 1), as well as on their synthetic representation shown in Figure 2.
The obtained results clearly indicate that, within the studied sample, the key determinants of organizational trust in AI algorithms are the perceived reliability of generated outcomes, the transparency of operating mechanisms, and the perceived effectiveness of artificial intelligence applications. These variables exhibit strong and consistent positive relationships, which confirms their central role in the process of building trust in the organizational environment.
At the same time, the correlation analysis revealed the presence of factors that limit trust. In particular, these include concerns related to the risk of errors, algorithmic bias, and the acceptance of full decision-making autonomy of AI systems. It should also be emphasized that these factors remain relatively independent of positive assessments of technological effectiveness. This suggests that even effective AI systems may fail to generate trust if they do not ensure an adequate level of control and transparency.
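The kind of correlation analysis behind a heatmap like Figure 1 can be sketched as follows. This is an assumed, minimal reconstruction, not the authors' actual pipeline, and the response matrix is toy data, not the study's dataset.

```python
import numpy as np

# Toy Likert responses: rows = respondents, columns = survey items
# (e.g., reliability, transparency, effectiveness, autonomy acceptance).
responses = np.array([
    [5, 4, 4, 2],
    [4, 4, 5, 2],
    [3, 2, 3, 4],
    [2, 2, 2, 5],
    [4, 5, 4, 1],
])

# Pearson correlations between items (rowvar=False: columns are variables).
corr = np.corrcoef(responses, rowvar=False)
print(np.round(corr, 2))
```

In this toy example the positively framed items correlate with one another and negatively with the autonomy item, mirroring the pattern of reinforcing and limiting factors reported in the text.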
The conducted research allows several conclusions to be drawn regarding how organizational trust in artificial intelligence systems is formed. The data indicate that this trust rests primarily on three pillars: the reliability of the results generated by algorithms, the transparency of their mechanisms of operation, and the perceived effectiveness of AI use in organizational practice. The relationships between these variables proved strong and consistent; a positive attitude towards AI systems is therefore built above all through belief in their reliability, transparency, and real usefulness. The results also suggest that, in the studied organizations, gains in effectiveness and operational efficiency associated with AI use may strengthen the climate of trust. This relationship should, however, be interpreted as context-dependent and contingent on organizational characteristics rather than as a universal pattern applicable to all organizations.
The analysis of the data also pointed to the significant role of risk factors, especially concerns about algorithmic errors and biases. Although partly related to the need for transparency and control over system operation, these factors form a separate area that limits trust regardless of positive assessments of other aspects of AI functioning. The results indicate that these concerns are persistent: they are not significantly weakened even when respondents appreciate the effectiveness of the systems. The strongly negative reaction to the idea of full decision-making autonomy of AI further confirms that trust has its limits, and that employees prefer a model of coexistence between humans and technology in which ultimate responsibility remains on the human side.
Moreover, on the basis of these findings, trust in AI in organizations can be described as a multidimensional construct, complex and shaped by many simultaneous determinants. The empirical results point to the need for a systemic approach to building trust, including both actions that strengthen the transparency and explainability of algorithms and continuous monitoring of potential risks related to errors or biases. The identified dependencies confirm that organizations that invest in transparent technological solutions and in educating employees about AI systems can achieve higher levels of acceptance and trust. The research results thus provide a basis for further analyses of how to shape the safe, responsible, and socially acceptable implementation of artificial intelligence in the organizational environment.
Figure 2 shows the key determinants of organizational trust in AI systems, framed as conditions supporting the sustainable development of the organization. It identifies the reinforcing and limiting factors shaping the level of trust, including the importance of the reliability of results, the transparency of algorithms, and the perceived effectiveness of AI, as well as the impact of concerns related to errors and decision-making autonomy. The dependencies presented reflect the multidimensional structure of trust and its links to practices of responsible technology implementation in organizations.

5. Research Limitations and Scope of Generalization

The conducted research encountered several limitations, which require that the findings be interpreted with caution. These limitations result primarily from the nature of the method used and the specificity of the research sample. The first and most important is the relatively small sample size (N = 325), which limits the generalizability of the results. The use of an online survey allowed a wide group of respondents to be reached quickly, but it may have introduced distortions resulting from the declarative nature of the answers and differences in participants' digital competences. There was also a risk that some respondents lacked sufficient direct experience with AI systems, which could limit the depth and accuracy of their assessments of the transparency, autonomy, or reliability of algorithms. An additional limitation was the inability to fully control the conditions under which the survey was completed, a common methodological problem in online research.
Another limitation, associated with the design of the research tool, was the reduction of complex attitudes and beliefs to closed-ended questions based on a Likert scale. This approach enables quantitative analysis, but it does not capture the more subtle or contextual aspects of trust in AI systems that qualitative methods such as in-depth interviews could reveal. In addition, the study included only the perspective of the organizations' employees, so it did not account for broader institutional, cultural, or cross-sectoral differences that could affect how trust is formed. These factors indicate that the obtained results should be interpreted with caution and treated as a preliminary basis for further, more in-depth analyses.

6. Conclusions

Taking these limitations into account, the analysis of the conducted research allows for the formulation of preliminary and context-dependent recommendations for organizations implementing or planning to implement artificial intelligence systems. The analysis of the relationship between the key components of trust indicates that it is of particular importance to consistently strengthen the transparency of algorithms and build employee awareness of how AI systems generate results. Implementing tools based on explainable AI principles, as well as investing in educational and communication activities to explain the logic of algorithms, can significantly increase the acceptance of technology. Given the strong impact of the perceived effectiveness of AI on the level of trust, organizations should strive to highlight the real benefits of using AI systems to further strengthen the positive perception of technology.
Recommendations can also be formulated for managing the risk areas that showed a clearly negative impact on trust in the survey. Organizations should implement mechanisms that minimize the likelihood of algorithmic errors and biases, and they should conduct regular audits of the models used in order to monitor their performance. The results indicate that concerns about the autonomy of AI systems remain a major barrier to technology acceptance. It therefore becomes particularly important to maintain a structure of responsibility in which humans retain the superior decision-making role. A model of cooperation between humans and algorithms, rather than the replacement of humans by machines, proves key to maintaining employees' trust and sense of security.
The proposed model of organizational trust in AI can be implemented as a diagnostic tool supporting the management of digital technologies. Its use makes it possible to assess which areas of the organization require intervention to increase acceptance of the technology and where the most important barriers to its effective use lie. The model also allows systematic monitoring of changes in employee attitudes as the organization develops technologically and adopts new solutions. As a result, it can serve as a basis for designing implementation policies, communication strategies, and training programs, which together contribute to the conscious, responsible, and sustainable introduction of AI systems into the work environment.
Future research directions may focus on an in-depth analysis of the dynamics of trust in AI in different organizational contexts, taking into account industry and cultural diversity and company size. It would be particularly valuable to complement the research with qualitative methods that can capture the complex mechanisms by which attitudes towards AI form and identify factors that are difficult to measure in surveys. Longitudinal research also seems warranted, allowing observation of how trust changes as organizations gain experience working with AI and as subsequent generations of the technology arrive. Another interesting direction is the analysis of the relationship between digital competences and the acceptance of autonomous systems. Research on how an organization's ethical policy and legal regulations modify the perception of algorithm-related risk would also be useful, as would the development of predictive models to determine which combinations of factors most strongly predict high levels of trust. This would provide practical support for both managers and designers of AI systems.
The results of the conducted research are consistent with the findings of other authors analyzing trust in artificial intelligence systems in the organizational environment. In line with Afroogh et al. (2024) [29], concerns related to errors, biases, and autonomous decision-making create a separate dimension of trust and can therefore operate independently of positive assessments of other aspects of AI functioning. The results obtained thus align with the current state of knowledge while supplementing it with a model describing the simultaneous interaction of factors strengthening and limiting trust in organizations.

7. Future Research Directions

Further research on trust in artificial intelligence algorithms in organizations should take into account diverse organizational contexts. Particularly interesting would be the exploration of sectoral and cultural differences, as well as varying levels of technological maturity. It also appears justified to extend the analyses by incorporating qualitative research methods, which would allow for a better understanding of the complex mechanisms underlying the formation of trust in AI.
An important direction for future research involves longitudinal studies, which make it possible to observe changes in the level of trust as organizations gain increasing experience in the use of artificial intelligence-based systems. Moreover, further empirical verification of the proposed model using larger and more diverse research samples may contribute to its refinement and enhance its practical applicability.

Author Contributions

Conceptualization, M.K. and M.S.; methodology, A.K. (Artur Kwasek); software, A.T.-D.; validation, M.K., A.K. (Adrian Kopytowski) and A.T.-D.; formal analysis M.S.; investigation A.T.-D. and M.P.; resources A.K. (Artur Kwasek), M.K.; data curation, M.S.; writing—original draft preparation, A.T.-D. and M.S.; writing—review and editing, M.K., A.T.-D., A.K. (Artur Kwasek) and M.S.; visualization, M.S.; supervision, A.K. (Artur Kwasek), M.K., M.P. and M.S.; project administration, M.S.; funding acquisition, A.K. (Adrian Kopytowski). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study by Institution Committee due to Legal Regulations. (Based on the applicable laws in Poland and the internal policies of University of Technology and Economics in Warsaw, the approval of the Ethics Committee is not required for the publication of scientific research results in scientific journals, as long as the research does not directly involve human subjects, animals or sensitive materials, and the research procedure itself did not involve interventions that would require prior ethical approval).

Informed Consent Statement

Informed consent for participation was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sundu, M.; Ozdemir, S. The Effect of Artificial Intelligence on Management Process: Challenges and Opportunities. In Advances in Business Strategy and Competitive Advantage; Ahmad, N.H., Iqbal, Q., Halim, H.A., Eds.; IGI Global: Hershey, PA, USA, 2020; pp. 22–41. ISBN 978-1-7998-2577-7. [Google Scholar]
  2. Giraud, L.; Zaher, A.; Hernandez, S.; Akram, A.A. The Impacts of Artificial Intelligence on Managerial Skills. J. Decis. Syst. 2023, 32, 566–599. [Google Scholar] [CrossRef]
  3. Bańka, M.; Marczewska, M.; Salwin, M.; Andrade, R.D.D.; Boulange, P.; Chmiel, N.; Golda, I.J. Exploring the Impact of Accelerator Programs on Startup Success: A Focus on Corporate Collaboration and Goal Achievement. J. Co-Oper. Organ. Manag. 2024, 12, 100235. [Google Scholar] [CrossRef]
  4. Bańka, M.; Chmiel, N.; Kostrzewski, M.; Marczewska, M.; Kowalski, A.M.; Sedkiewicz, K.; Salwin, M. Understanding Corporate Concerns. Barriers and Challenges in Corporate–Start-up Collaboration. J. Open Innov. Technol. Mark. Complex. 2024, 10, 100388. [Google Scholar] [CrossRef]
  5. Salwin, M.; Chmielewski, T.M. Smart Product-Service System for Intelligent Welding System. In Computational Science—ICCS 2025 Workshops; Paszynski, M., Barnard, A.S., Zhang, Y.J., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2025; Volume 15912, pp. 285–304. ISBN 978-3-031-97572-1. [Google Scholar]
  6. Brunet-Thornton, R.; Martinez, F. (Eds.) Analyzing the Impacts of Industry 4.0 in Modern Business Environments; Advances in Business Information Systems and Analytics; IGI Global: Hershey, PA, USA, 2018; ISBN 978-1-5225-3468-6. [Google Scholar]
  7. Orbán, F.; Stefkovics, Á. Trust in Artificial Intelligence: A Survey Experiment to Assess Trust in Algorithmic Decision-Making. AI Soc. 2025, 40, 4955–4969. [Google Scholar] [CrossRef]
  8. Krueger, F.; Riedl, R.; Bartz, J.A.; Cook, K.S.; Gefen, D.; Hancock, P.A.; Jarvenpaa, S.L.; Krabbendam, L.; Lee, M.R.; Mayer, R.C.; et al. A Call for Transdisciplinary Trust Research in the Artificial Intelligence Era. Humanit. Soc. Sci. Commun. 2025, 12, 1124. [Google Scholar] [CrossRef]
  9. Zhang, Z.; Ning, H.; Shi, F.; Farha, F.; Xu, Y.; Xu, J.; Zhang, F.; Choo, K.-K.R. Artificial Intelligence in Cyber Security: Research Advances, Challenges, and Opportunities. Artif. Intell. Rev. 2022, 55, 1029–1053. [Google Scholar] [CrossRef]
  10. Thiebes, S.; Lins, S.; Sunyaev, A. Trustworthy Artificial Intelligence. Electron. Mark. 2021, 31, 447–464. [Google Scholar] [CrossRef]
  11. Sijtsma, H.; Van Buuren, M.; Hollarek, M.; Walsh, R.J.; Lee, N.C.; Braams, B.R.; Krabbendam, L. Social Network Position, Trust Behavior, and Neural Activity in Young Adolescents. NeuroImage 2023, 268, 119882. [Google Scholar] [CrossRef]
  12. Salwin, M.; Gladysz, B.; Santarek, K. Technical Product-Service Systems—A Business Opportunity for Machine Industry. In Advances in Manufacturing; Hamrol, A., Ciszak, O., Legutko, S., Jurczyk, M., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2018; pp. 269–278. ISBN 978-3-319-68618-9. [Google Scholar]
  13. Chou, D.C.; Chen, H.-G.; Lin, B. Green IT and Corporate Social Responsibility for Sustainability. J. Comput. Inf. Syst. 2023, 63, 322–333. [Google Scholar] [CrossRef]
  14. Rodriguez, R.; Svensson, G.; Otero-Neira, C. Framing Sustainable Development through Descriptive Determinants in Private Hospitals—Orientation and Organization. Eval. Program Plan. 2019, 75, 78–88. [Google Scholar] [CrossRef]
  15. Horak, S.; Arya, B.; Ismail, K.M. Organizational Sustainability Determinants in Different Cultural Settings: A Conceptual Framework. Bus. Strategy Environ. 2018, 27, 528–546. [Google Scholar] [CrossRef]
  16. AlAqeel, A.A. Factors Influencing the Sustainable Development of Organizations. Ph.D. Thesis, University of Gloucestershire, Cheltenham, UK, 2012; pp. 1–307. [Google Scholar]
  17. Khaled, R.; Ali, H.; Mohamed, E.K.A. The Sustainable Development Goals and Corporate Sustainability Performance: Mapping, Extent and Determinants. J. Clean. Prod. 2021, 311, 127599. [Google Scholar] [CrossRef]
  18. Almatrooshi, B.; Singh, S.K.; Farouk, S. Determinants of Organizational Performance: A Proposed Framework. Int. J. Product. Perform. Manag. 2016, 65, 844–859. [Google Scholar] [CrossRef]
  19. Lipiak, J.; Salwin, M. The Improvement of Sustainability with Reference to the Printing Industry—Case Study. In Advances in Manufacturing II; Hamrol, A., Grabowska, M., Maletic, D., Woll, R., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2019; pp. 254–266. ISBN 978-3-030-17268-8. [Google Scholar]
  20. Suárez, J.C.; Paredes, S.S.; Ortega, G.R.; González, J.G.; Lebrún, C.A.V. The Process of Innovation as a Determinant Factor of Sustainable Development in Companies. Int. J. Innov. Sustain. Dev. 2021, 15, 100. [Google Scholar] [CrossRef]
  21. Temple, L.; Kwa, M.; Tetang, J.; Bikoi, A. Organizational Determinant of Technological Innovation in Food Agriculture and Impacts on Sustainable Development. Agron. Sustain. Dev. 2011, 31, 745–755. [Google Scholar] [CrossRef]
  22. Skrynnyk, O. Prediction of Convergent and Divergent Determinants of Organisational Development. Bus. Ethics Leadersh. 2023, 7, 74–81. [Google Scholar] [CrossRef]
  23. Ali, A.; Khamisa, M.A.; ur Rehman, A. Socioeconomic Determinants of SDG Performance. J. Soc. Signs Rev. 2025, 3, 296–318. [Google Scholar]
  24. Misztal, A.; Kowalska, M. Determinants of Sustainable Development of Industrial Enterprises in Poland in the Period from 2010 to 2019—A Statistical Evaluation. Pr. Nauk. Uniw. Ekon. We Wrocławiu 2020, 64, 160–173. [Google Scholar] [CrossRef]
  25. Huang, C.-C.; Chan, Y.-K.; Hsieh, M.Y. The Determinants of ESG for Community LOHASism Sustainable Development Strategy. Sustainability 2022, 14, 11429. [Google Scholar] [CrossRef]
  26. Saifulina, N.; Carballo-Penela, A. Promoting Sustainable Development at an Organizational Level: An Analysis of the Drivers of Workplace Environmentally Friendly Behaviour of Employees. Sustain. Dev. 2017, 25, 299–310. [Google Scholar] [CrossRef]
  27. Popelo, O.; Arefiev, S.; Rogulska, O.; Rudnitska, K.; Derevianko, D. Higher education as a determinant of sustainable development. Rev. Univ. Zulia 2022, 13, 734–746. [Google Scholar] [CrossRef]
  28. Chitescu, R.I.; Lixandru, M. The Influence of the Social, Political and Economic Impact on Human Resources, as a Determinant Factor of Sustainable Development. Procedia Econ. Financ. 2016, 39, 820–826. [Google Scholar] [CrossRef]
  29. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, Challenges, and Future Directions. Humanit. Soc. Sci. Commun. 2024, 11, 1568. [Google Scholar] [CrossRef]
  30. Bhat, A.K. Application and Impact of Artificial Intelligence in Financial Decision Making. Int. J. Sci. Res. Sci. Eng. Technol. 2024, 11, 57–63. [Google Scholar] [CrossRef]
  31. Cheung, J.C.; Ho, S.S. The Effectiveness of Explainable AI on Human Factors in Trust Models. Sci. Rep. 2025, 15, 23337. [Google Scholar] [CrossRef]
  32. Ferrario, A.; Loi, M. How Explainability Contributes to Trust in AI. In Proceedings of the 2022 ACM Conference on Fairness Accountability and Transparency, Seoul, Republic of Korea, 21–24 June 2022; pp. 1457–1466. [Google Scholar]
  33. Fehr, J.; Citro, B.; Malpani, R.; Lippert, C.; Madai, V.I. A Trustworthy AI Reality-Check: The Lack of Transparency of Artificial Intelligence Products in Healthcare. Front. Digit. Health 2024, 6, 1267290. [Google Scholar] [CrossRef]
  34. Nouis, S.C.; Uren, V.; Jariwala, S. Evaluating Accountability, Transparency, and Bias in AI-Assisted Healthcare Decision-Making: A Qualitative Study of Healthcare Professionals’ Perspectives in the UK. BMC Med. Ethics 2025, 26, 89. [Google Scholar] [CrossRef]
  35. Durán, J.M.; Jongsma, K.R. Who Is Afraid of Black Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI. J. Med. Ethics 2021. online ahead of print. [Google Scholar] [CrossRef]
  36. Bai, R.T.; R., J.; Shanavas, A. Sustainable Finance and Use of Artificial Intelligence in Investment Decision Making. Int. J. Adv. Res. 2024, 12, 1212–1218. [Google Scholar] [CrossRef]
  37. Farheen, S.; Cheema, A.A.; Ullah, R.S.; Bandeali, M.M. Equity and Bias in AI Educational Tools: A Critical Examination of Algorithmic Decision-Making in Classrooms. Crit. Rev. Soc. Sci. Stud. 2025, 3, 67–85. [Google Scholar] [CrossRef]
  38. Chen, C. Investigation into the Development of Intelligent Financial Management Systems Based on Artificial Intelligence. In Proceedings of the 2022 7th International Conference on Management Science and Management Innovation, Chengdu, China, 30 December 2022; Volume 1, p. 429. [Google Scholar] [CrossRef]
  39. Kakish, D.R.K.; AlSamhori, J.F.; Fajardo, A.N.R.; Qaqish, L.N.; Jaber, L.A.; Abujudeh, R.; Al-Zuriqat, M.H.M.; Mohammed, A.Y.; Nashwan, A.J. Transforming Dermatopathology With AI: Addressing Bias, Enhancing Interpretability, and Shaping Future Diagnostics. Dermatol. Rev. 2025, 6, e70018. [Google Scholar] [CrossRef]
  40. Ruggieri, S.; Alvarez, J.M.; Pugnana, A.; State, L.; Turini, F. Can We Trust Fair-AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 15421–15430. [Google Scholar] [CrossRef]
  41. Wei, J.; Qi, S.; Wang, W.; Jiang, L.; Gao, H.; Zhao, F.; Al-Bukhaiti, K.; Wan, A. Decision-Making in the Age of AI: A Review of Theoretical Frameworks, Computational Tools, and Human-Machine Collaboration. Contemp. Math. 2025, 6, 2089–2112. [Google Scholar] [CrossRef]
  42. Juravle, G.; Boudouraki, A.; Terziyska, M.; Rezlescu, C. Trust in Artificial Intelligence for Medical Diagnoses. Prog. Brain Res. 2020, 253, 263–282. [Google Scholar] [CrossRef] [PubMed]
  43. Branley-Bell, D.; Whitworth, R.; Coventry, L. User Trust and Understanding of Explainable AI: Exploring Algorithm Visualisations and User Biases. In Human-Computer Interaction. Human Values and Quality of Life; Kurosu, M., Ed.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12183, pp. 382–399. ISBN 978-3-030-49064-5. [Google Scholar]
  44. Dash, B.; Sharma, P.; Swayamsiddha, S. Organizational Digital Transformations and the Importance of Assessing Theoretical Frameworks Such as TAM, TTF, and UTAUT: A Review. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 1–6. [Google Scholar] [CrossRef]
  45. Rouidi, M.; Elouadi, A.E.; Hamdoune, A.; Choujtani, K.; Chati, A. TAM-UTAUT and the Acceptance of Remote Healthcare Technologies by Healthcare Professionals: A Systematic Review. Inform. Med. Unlocked 2022, 32, 101008. [Google Scholar] [CrossRef]
  46. Lee, A.T.; Ramasamy, R.K.; Subbarao, A. Understanding Psychosocial Barriers to Healthcare Technology Adoption: A Review of TAM Technology Acceptance Model and Unified Theory of Acceptance and Use of Technology and UTAUT Frameworks. Healthcare 2025, 13, 250. [Google Scholar] [CrossRef]
  47. Teodorescu, D.; Aivaz, K.-A.; Vancea, D.P.C.; Condrea, E.; Dragan, C.; Olteanu, A.C. Consumer Trust in AI Algorithms Used in E-Commerce: A Case Study of College Students at a Romanian Public University. Sustainability 2023, 15, 11925. [Google Scholar] [CrossRef]
  48. Park, K.; Yoon, H.Y. AI Algorithm Transparency, Pipelines for Trust Not Prisms: Mitigating General Negative Attitudes and Enhancing Trust toward AI. Humanit. Soc. Sci. Commun. 2025, 12, 1160. [Google Scholar] [CrossRef]
  49. Mylrea, M.; Robinson, N. Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI. Entropy 2023, 25, 1429. [Google Scholar] [CrossRef]
  50. Thomas, H.; Abbas, N. Building Trust in AI: A Secure and Transparent Framework for Automated Decision-Making. 2025. Available online: https://www.researchgate.net/publication/392659058_Building_Trust_in_AI_A_Secure_and_Transparent_Framework_for_Automated_Decision-Making (accessed on 9 December 2025).
  51. Visave, J. Transparency in AI for Emergency Management: Building Trust and Accountability. AI Ethics 2025, 5, 3967–3980. [Google Scholar] [CrossRef]
  52. Kaur, D.; Uslu, S.; Durresi, A. Quantum Algorithms for Trust-Based AI Applications. In Complex, Intelligent and Software Intensive Systems; Barolli, L., Ed.; Lecture Notes on Data Engineering and Communications Technologies; Springer Nature: Cham, Switzerland, 2023; Volume 176, pp. 1–12. ISBN 978-3-031-35733-6. [Google Scholar]
  53. Devalla, S.; Yogix, M.K. Building Trust in AI—A Simplified Guide to Ensure Software Quality. J. Soft Comput. Paradig. 2023, 5, 218–231. [Google Scholar] [CrossRef]
Figure 1. Correlation Matrix Heatmap. Study: own work. Legend to Variables (1–12): The variables represent respondents’ perceptions of key dimensions of trust in AI systems, measured using survey statements (see Table 1). 1—Perceived reliability of AI-generated results; 2—Trust in the correct functioning of AI systems; 3—Perceived transparency and comprehensibility of algorithmic mechanisms; 4—Perceived effectiveness of AI applications; 5—Perceived usefulness of AI systems in organizational practice; 6—Ease of interpreting AI-generated outcomes; 7—Perceived data security and system stability; 8—Perceived risk of algorithmic errors; 9—Perceived risk of algorithmic bias and decision distortions; 10—Acceptance of AI decision-making autonomy; 11—Willingness to rely on AI recommendations; 12—Overall perceived level of organizational trust in AI.
Figure 2. Key determinants of organizational trust in artificial intelligence systems as conditions for the sustainable development of organizations. Study: own work.
Table 1. Perception of trust in AI algorithms in an organization.
| Statement | Definitely Not | Rather Not | I Don't Have an Opinion | Rather Yes | Definitely Yes | Total |
|---|---|---|---|---|---|---|
| I believe that AI algorithms supporting decision-making processes are reliable. | 16 | 70 | 75 | 147 | 17 | 325 |
| I have confidence in the results generated by AI-based systems. | 16 | 70 | 75 | 147 | 17 | 325 |
| My organization cares about the transparency of decision-making algorithms (the so-called explainable AI). | 23 | 35 | 124 | 90 | 53 | 325 |
| The use of AI in decision-making processes increases the efficiency of organizations. | 16 | 42 | 78 | 142 | 47 | 325 |
| I'm concerned that AI-based decisions may be burdened with errors or bias. | 9 | 41 | 63 | 138 | 74 | 325 |
| I believe that AI systems should always provide insight into how a decision was made. | 13 | 21 | 42 | 117 | 132 | 325 |
| Trust in AI algorithms depends a lot on whether you can understand the mechanism of their operation. | 10 | 24 | 74 | 136 | 81 | 325 |
| In my organization, there are clear ethical principles regarding the use of AI. | 46 | 48 | 92 | 88 | 51 | 325 |
| Transparency and understanding of how algorithms work are more important to me than their accuracy. | 30 | 70 | 111 | 87 | 27 | 325 |
| I accept a situation in which the final decision is made by the AI system, not by a human. | 135 | 95 | 46 | 39 | 10 | 325 |
| Organizational culture fosters openness to AI technology. | 8 | 38 | 128 | 120 | 31 | 325 |
| The level of trust in AI in my organization increases as its effectiveness increases. | 25 | 40 | 106 | 108 | 46 | 325 |
Study: own.
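The raw counts in Table 1 can be converted into percentage shares for interpretation. A minimal Python sketch, using the counts for the AI-autonomy item ("I accept a situation in which the final decision is made by the AI system, not by a human", N = 325):

```python
# Response counts for the autonomy-acceptance item from Table 1.
counts = {
    "definitely not": 135,
    "rather not": 95,
    "no opinion": 46,
    "rather yes": 39,
    "definitely yes": 10,
}

total = sum(counts.values())  # should equal the sample size, 325
agree = counts["rather yes"] + counts["definitely yes"]
agree_pct = 100 * agree / total

print(total, round(agree_pct, 1))
```

Only about 15% of respondents accept full AI autonomy on this item, which matches the strongly negative reaction to autonomous decision-making discussed above.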