Article

Decision-Making in Complex Systems Using AI-Based Decision Support: The Role of Trust, Transparency, and Data Quality

by Georgiana-Tatiana Bondac 1, Sorina-Geanina Stanescu 2,3,*, Constantin Aurelian Ionescu 1,3, Anisoara Duica 4 and Marilena Carmen Uzlău 3,5
1 Faculty of Economics, Valahia University of Targoviste, 130004 Targoviste, Romania
2 Institute of Multidisciplinary Research for Science and Technology, Valahia University of Targoviste, 130004 Targoviste, Romania
3 Faculty of Economics, Hyperion University of Bucharest, 030615 Bucharest, Romania
4 Doctoral School of Economics and Humanities, Valahia University of Targoviste, 130004 Targoviste, Romania
5 Institute of Economic Forecasting, Romanian Academy, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
Electronics 2026, 15(2), 372; https://doi.org/10.3390/electronics15020372
Submission received: 9 December 2025 / Revised: 9 January 2026 / Accepted: 14 January 2026 / Published: 14 January 2026
(This article belongs to the Special Issue Advances in Decision Making for Complex Systems)

Abstract

In the context of accelerated digital transformation, organizations increasingly operate as complex systems in which strategic decision-making is challenged by uncertainty, data heterogeneity, and bounded rationality. The integration of artificial intelligence (AI) into organizational processes is therefore redefining how decisions are supported and enacted. This study develops and validates an integrated conceptual model that explains how trust in AI-based decision support systems (AI-DSSs), data transparency and quality, perceived usefulness, and ease of use influence decision-making efficiency and the intention to adopt AI-DSS in complex organizational contexts. The empirical analysis is based on a questionnaire survey administered to 324 respondents from Romanian organizations operating in IT, services, industry, and public administration. Data were analyzed using partial least squares structural equation modeling (PLS-SEM) implemented in SmartPLS 4. The results show that data transparency and quality strongly enhance trust in AI-DSS (β = 0.784, p < 0.001). Trust positively influences both perceived usefulness (β = 0.229, p < 0.01) and perceived ease of use (β = 0.482, p < 0.001), confirming its role as a key psychological enabler of favorable technology perceptions. Furthermore, perceived ease of use significantly affects perceived usefulness (β = 0.597, p < 0.001). Regarding adoption-related attitudes, perceived usefulness (β = 0.352, p < 0.001), trust (β = 0.311, p < 0.001), and perceived ease of use (β = 0.135, p < 0.05) exert significant positive effects on the intention to adopt AI-DSS, which in turn demonstrates a strong association with decision-making efficiency (β = 0.544, p < 0.001). By extending traditional technology acceptance models (TAM) with AI-specific dimensions—namely transparency, data quality, and trust—this study contributes to the literature on decision-making in complex systems and offers practical insights for organizations seeking to improve decision effectiveness through AI-based support.

1. Introduction

In recent decades, decision support systems (DSSs) have evolved from simple data analysis tools into essential components of strategic management, supporting complex decisions in dynamic organizational environments [1,2]. Today, through the integration of AI, these systems have gained enhanced capabilities for learning, anticipation, and recommendation, evolving into platforms that analyze massive data volumes, identify hidden patterns, and generate predictive solutions in near real time; specialists in the field refer to them as AI-based decision support systems (AI-DSSs) [3,4]. Over time, researchers have studied DSS from various perspectives, defining DSS as "an interactive computer-based system that helps decision-makers use data and models to solve problems with varying degrees of structure and facilitates decision-making processes" [5], but also as "a specific class of computerized information systems that support managerial decision-making activities" [6].
The decision-making process is one of the most critical and complex managerial functions, as it involves identifying problems, generating alternatives, evaluating the consequences, and choosing the optimal action under conditions of uncertainty. In the contemporary organizational environment, characterized by volatility, interdependencies, and an increased volume of information, decisions must be based on relevant data and complex analyses, not just on experience or intuition [7]. In this context, information technology becomes a key enabler, allowing decision-makers to quickly collect, process, and interpret large volumes of data to reduce risks and increase the objectivity of decisions [8]. To support these processes, DSSs have been developed as interactive tools that combine data, models, and analytical interfaces to improve decision-making [9]. In their traditional form, DSSs support semi-structured decision-making, where human reasoning and computational analysis complement each other. Traditional decision support systems often struggle to manage the massive volumes of heterogeneous data flows and the dynamic nature of modern industrial settings, even when they operate efficiently in isolation [10]. However, the increasing complexity of the economic environment and the vast availability of unstructured data have led to the emergence of a new generation of AI-DSSs that integrate machine learning, natural language processing, and predictive analytics [3,11]. Integrating AI into decision-making enables systems to push the boundaries of traditional analytics and provide augmented cognitive support through pattern recognition, scenario evaluation, and personalized recommendations. Thus, AI-DSSs not only automate analytical activities but also transform how decisions are designed, communicated, and implemented in organizations, facilitating the transition from descriptive to predictive and prescriptive decision-making [12]. AI-powered DSSs are a transformative force within Industry 4.0, providing unparalleled capabilities to enhance operational efficiency, quality control, supply chain management, and risk mitigation [13,14,15].
The integration of AI into decision-making represents not just a technological innovation but a fundamental paradigm shift in how organizational managers think, evaluate, and implement decisions. While traditional decision support systems relied on logical models and deterministic analyses, AI-DSS introduces an adaptive cognitive dimension that learns from historical data, recognizes emerging patterns, and provides more accurate predictions [16]. AI-based DSS aims to assist decision-makers in processing large volumes of data to make accurate, informed decisions [17]. These systems can integrate heterogeneous data sources, from economic indicators to real-time feedback, offering a broader information base for strategic, operational, and tactical decisions [14].
However, integrating AI into decision-making processes poses complex challenges. First, the hard-to-decipher nature of algorithms—known as the “black box problem”—limits understanding of the reasoning behind the recommendations generated, affecting user trust and the legitimacy of decisions [18,19,20]. Second, the lack of transparent data governance and an organizational infrastructure capable of supporting these systems creates reluctance to adopt them, especially in organizations with traditional or conservative managerial cultures [21,22].
Recent literature emphasizes that successful AI-DSS adoption depends on organizational factors like top management support, cultural openness to innovation, and employee training [23,24]. Managers who perceive AI as a strategic complement to human reasoning tend to foster positive attitudes and facilitate the acceptance and institutionalization of AI [25,26]. In parallel, advances in adaptive AI learning models embedded in decision support systems further strengthen the role of AI as a strategic complement to managerial decision-making in complex environments [27]. In this study, organizations are conceptualized as complex economic systems characterized by multiple interdependent actors, dynamic information flows, and nonlinear decision-making processes, in which strategic and operational outcomes emerge from the interaction among human judgment, data, and technological support systems.
In this context, the present study aims to explore the role of AI in organizational decision-making by developing and validating an extended conceptual model based on the Technology Acceptance Model (TAM). The proposed model integrates essential constructs such as perceived utility (PU), perceived ease of use (PEOU), trust in AI (TRUST), as well as transparency and data quality (DATAQ), to explain the intention to adopt AI-DSS (INT-AI) and the resulting decision efficiency (DME). This approach provides an integrated theoretical and empirical perspective on how organizations can use AI to transform decision-making processes, while also advancing the literature on the adoption of intelligent technologies in organizational contexts.
The organization of this article is as follows: Section 2 provides a comprehensive review of relevant literature and introduces the development of the conceptual model. Section 3 describes the research methodology and analytical procedures adopted in this study. Section 4 reports the findings from empirical analysis, while Section 5 offers a critical discussion of these results in the context of existing scientific research. Finally, Section 6 summarizes the principal conclusions, discusses both theoretical and managerial implications, addresses the research limitations, and suggests avenues for future investigation.

2. Literature Review

The Technology Acceptance Model (TAM), introduced by Davis (1989), is one of the most influential theoretical frameworks for explaining how individuals and organizations adopt and use new technologies [28]. Understanding these factors is essential for advancing research on AI-DSS adoption. According to this model, the intention to use a technology is determined by two central factors: Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). Perceived usefulness reflects the degree to which the user believes that technology improves their professional performance, while perceived ease of use expresses the extent to which interaction with technology is devoid of cognitive effort [29]. These two dimensions influence users’ attitudes and, subsequently, their behavioral intention to use technology. As the organizational environment becomes increasingly digitized, the TAM has been expanded to include contextual, organizational, and psychological variables, becoming a complex framework for analyzing the adoption of AI-based technologies [30,31].
The application of TAM to AI-DSSs provides a solid conceptual framework for understanding how user perceptions and organizational characteristics influence the integration of AI into decision-making. Clarifying these factors helps researchers and practitioners identify barriers and facilitators of AI adoption, which is crucial for advancing the field. Unlike traditional information systems, AI-DSS involves an adaptive cognitive dimension, based on machine learning and predictive algorithms, that fundamentally alters the interaction between humans and the system, thereby balancing accuracy, transparency, and trust in decision-making across various sectors [16,32]. This particularity justifies the inclusion of additional constructs, such as trust in the system (TRUST) and transparency and data quality (DATAQ), which condition the perceptions of usefulness and ease of use [21,33]. In addition, decision-making efficiency becomes a direct result of AI adoption, providing an organizational perspective on decision-making performance [12].
In the recent literature, perceived utility is considered a key predictor of technology adoption. In the case of AI-DSS, it is manifested when decision-makers become convinced that AI leads to faster, more accurate, and better-informed decisions, reducing human error and optimizing organizational resources. The efficient data processing and analysis capabilities offered by AI enable managers to obtain the information needed to make decisions more quickly and accurately, thereby improving decision quality and efficiency. Once this utility is perceived, users' intention to adopt AI technology increases significantly. Especially in organizational management and decision-making processes, the application of AI can not only optimize resource allocation but also reduce human influence on decision-making, thereby improving decision accuracy [12].
Studies show that when the system’s instrumental value is clearly recognized—through accuracy, speed, and superior adaptability—behavioral intent to adopt increases significantly, regardless of technological complexity [17]. This dynamic is also evident in managerial environments, where AI-DSSs have been shown to reduce risk and optimize resources through predictive decision support [34]. When users perceive clear value in AI integration, their intention to adopt increases significantly, particularly in organizations facing large volumes of data and high decision-making uncertainty [17]. At the same time, research highlights that perceptions of AI’s practical value vary across contexts. In marketing, AI is reconfiguring decision-making processes by enabling advanced data analysis, understanding consumer behavior, and personalizing customer relationships, thereby reinforcing the perceived value of technology [35]. In the field of procurement, AI is redefining the traditional operational function into a strategic one, capable of anticipating market trends and improving supplier relationships through intelligent systems, transforming decision-makers into ‘augmented buyers’, who can make complex decisions and collaborate effectively with other departments [36].
Perceived ease of use is another central determinant of technology acceptance. In the context of AI, the complexity of algorithms and the lack of transparency can lead to user reluctance. However, when systems are designed to be explainable, interactive, and easy to operate, the perception of difficulty decreases and the availability of use increases [37]. Ease of use also contributes to increased trust and perceived greater utility, indirectly reinforcing adoption intent [38]. The perceived ease of use not only reduces users’ cognitive burden during learning and operation but also increases their trust in the technology. For many technologically complex AI systems, simplifying the user interface, providing clear guidance, and training have become key to promoting the technology and improving users’ perceptions of ease of use [12].
A distinctive element in adopting AI-DSS is trust in the system. Trust refers to users’ perceptions of the reliability, ethics, and impartiality of the system, and is a key factor in the acceptance of automated decisions [39]. Trust in AI is a critical determinant of user acceptance [40]. Decisions made with AI are often met with skepticism by users, especially when they conflict with users’ intuition or experience [16]. McKnight et al. (2002) showed that trust is not a one-dimensional construct but has a complex nature, composed of two main components [41]. The first is cognitive confidence, which derives from the rational assessment of a system’s competence and reliability. In this sense, employees can demonstrate confidence in an organizational system due to its logical performance and technical foundation, as confirmed by empirical evidence. The second dimension is affective trust, anchored in emotional and relational elements. This occurs when users trust the system not only because they understand it from a technical point of view, but also because they perceive constant support from the organization, for example, through cybersecurity measures or technical assistance that inspires safety, even in the absence of a complete understanding of the internal mechanisms [42]. As automated systems become more complex, trust plays a central role in mediating the relationship between users and automated decision support systems, influencing users’ willingness to rely on the system’s recommendations in situations characterized by uncertainty and cognitive limitations [43].
Another significant determinant is the transparency and quality of the data, defined by the accuracy, consistency, and clarity of the data used by AI-DSS. The lack of clear standards of data quality and governance can undermine trust in the system and the legitimacy of decisions [14]. In contrast, organizations that promote transparency and ethical data management tend to have higher adoption rates and greater internal trust [21]. Thus, data quality and transparency are fundamental prerequisites for strengthening trust in AI-DSS and, implicitly, for increasing its acceptance in the decision-making process. In this study, data quality and transparency are conceptualized as a multidimensional perceptual construct encompassing accuracy, completeness, timeliness, clarity of data sources and processing, algorithmic transparency, and ethical data governance. These dimensions are measured jointly to reflect users’ overall assessment of AI-DSS data reliability. Transparency is conceptualized as a direct antecedent of trust via the data quality construct, rather than as a moderating variable, reflecting the assumption that transparent, high-quality data reduces uncertainty and fosters trust in AI-based decision support systems.
The intent to adopt AI-DSS (INT-AI) reflects users’ willingness to integrate AI into decision-making and to continue using this system over time. This intention is influenced by perceptions of usefulness, ease, and trust, and its effect is manifested in decision-making efficiency. In this context, decision-making efficiency (DME) is defined as the extent to which decisions become faster, more coherent, and better informed thanks to the use of AI-DSS [12]. Therefore, the intention to adopt can be considered a prerequisite for increasing decision-making efficiency.
Although the literature provides relevant contributions on the role of AI in decision-making, several underexplored directions warrant this study. First, numerous studies analyze AI-DSS predominantly from a technological perspective, focusing on algorithm performance and processing capabilities. At the same time, the psychological and organizational dimensions of human–system interaction are given only marginal attention. Second, emerging concepts such as transparency and data quality—essential for user trust and the legitimacy of the decisions they generate—are often discussed piecemeal, without being integrated into a coherent theoretical framework that explains their role in the adoption process. At the same time, few empirical investigations simultaneously address perceptions of transparency and trust, as well as the central factors of technology acceptance, and these studies are even rarer in the context of Romanian organizations, where AI implementation is at an early stage. In contrast to the extensive international literature on perceptions and the adoption of artificial intelligence technologies, empirical research on the organizational context in Romania remains limited and fragmented. Existing studies focus primarily on the development and technical performance of AI-based solutions rather than on users’ perceptual and behavioral dimensions. For example, ref. [44] propose and test a decision support platform based on OpenAI and blockchain technology, emphasizing the role of transparency and security in organizational decision-making processes, but the analysis is mainly centered on system architecture and technical efficiency, without investigating user perceptions or adoption intent. Similarly, ref. [45] develop an AI-based decision support system to reduce road congestion, demonstrating the effectiveness of advanced algorithms in improving operational outcomes, but without explicitly analyzing user attitudes, level of trust, or willingness to use these systems.
From a broader perspective, ref. [46] conducts a systematic analysis of the integration of AI, IoT, and predictive analytics in adaptive traffic control systems, highlighting the benefits of data-driven decision-making; however, the perceptual and organizational dimensions of technology acceptance are treated only marginally.
In the field of logistics, ref. [47] use structural equation modeling to analyze AI-assisted digital transformation in circular logistics in Romania, showing that organizational preparation and the configuration of AI-assisted decision-making significantly influence operational transformation. However, the focus is on strategic and environmental factors, rather than individual perceptions such as perceived utility, ease of use, or trust in AI-based decision support systems.
Overall, these contributions highlight the growing interest in AI applications in decision-making processes in Romania but also point to the lack of empirical studies that explicitly analyze the perceptions of users and managers towards AI-based decision support systems (AI-DSSs) in an integrated theoretical framework of technology acceptance. Thus, the present study aims to address these gaps by developing a comprehensive perspective on the adoption of AI-DSS, integrating technological, cognitive, and informational dimensions that, although recognized as important in the recent literature, have not been analyzed together in a validated empirical model. By focusing on the connections among data quality, user perceptions, and decision-making outcomes, the research provides an up-to-date approach to how AI can support decision-making processes in contemporary organizations, contributing to both the expansion of theoretical knowledge and the foundation of managerial practices grounded in responsibility and trust.

3. Materials and Methods

The choice of a quantitative, explanatory approach is driven by the need to rigorously analyze the complex interactions among the cognitive, technological, and organizational factors that influence the adoption of AI-DSS. Given the multidimensional nature of the phenomenon under study, partial least squares structural equation modeling (PLS-SEM) is an appropriate methodological approach, as it enables simultaneous testing of direct and indirect relationships among latent variables and evaluation of instrument validity. Data analysis was conducted using SmartPLS software (SmartPLS GmbH, Bönningstedt, Germany), v4.1.1.5. PLS-SEM is particularly suitable for this study for several reasons. First, the proposed conceptual framework comprises multiple latent variables and hypothesized direct and indirect relationships, necessitating simultaneous estimation of both the measurement and structural models. Unlike traditional factor analysis, which is primarily used for data reduction or construct identification, PLS-SEM enables the testing of theory-driven relationships among constructs, including mediation effects. Second, the objective of this research is not only to confirm factor structures but also to explain and predict behavioral and organizational outcomes, such as the intention to adopt AI-DSS and perceived decision-making efficiency. PLS-SEM is recognized as particularly appropriate for prediction-oriented and exploratory research, especially in emerging contexts where theoretical models are still being refined, such as AI-DSS adoption in Romanian organizations. The conceptual model integrates cognitive variables (perceived usefulness, perceived ease of use, and trust in AI), technological variables (transparency and data quality), and organizational variables (adoption intention and decision-making efficiency). This structure reflects a socio-technical view of AI adoption, in which data-related factors shape perceptions, which in turn shape decision-making behaviors and organizational outcomes. The influence relationships estimated by PLS-SEM illustrate how these dimensions interact within a decision-making ecosystem. In this framework, transparency and data quality pertain to the reliability and comprehensibility of information; trust, perceived usefulness, and ease of use capture subjective evaluations; and adoption intention and decision-making efficiency reflect the behavioral and organizational aspects of technology integration.
This methodological approach is thus designed to provide a comprehensive analysis of the mechanisms by which AI-based technologies influence decision-making processes, capturing not only the causal relationships but also the relative weight of each dimension in explaining AI-DSS adoption and effectiveness within organizations.

3.1. Research Design

The present research adopts a quantitative, explanatory design, based on a conceptual model derived from the Technology Acceptance Model (TAM), extended with variables specific to the adoption of AI within decision support systems (DSSs), namely: trust in the system (TRUST) and transparency and data quality (DATAQ). The main goal of the research is to empirically test the causal relationships among constructs to assess how perceptions of utility, ease of use, and trust influence the intention to adopt AI-DSSs and, implicitly, decision-making efficiency in Romanian organizations. The empirical investigation was conducted in Romanian organizations that have either implemented or are exploring AI-based decision support solutions. The selection of participating organizations was based on three key criteria: (1) active engagement in digital transformation initiatives, (2) exposure to data-driven or AI-supported decision-making processes, and (3) accessibility to managerial and professional staff involved in decision-related activities. These criteria ensured that respondents’ experiences were relevant to the research objectives. Respondents were therefore selected based on decision-relevant experience with AI-supported processes, rather than casual or purely observational exposure.
The research method used is structural equation modelling based on latent variables (PLS-SEM), which is considered suitable for validating complex models and exploring relationships among emerging theoretical constructs [48]. This approach allows simultaneous testing of the validity of the measuring instrument and the hypothetical relationships among constructs.
The primary data collection involved a structured questionnaire developed through a thorough literature review. The instrument was designed to evaluate the constructs in the conceptual model, using items from validated sources tailored to the Romanian organizational context, with five items per construct measured on a 5-point Likert scale ranging from "Totally disagree" to "Totally agree".
The questionnaire was distributed online via Google Forms from September to November 2025 through professional groups on social media platforms (LinkedIn, Facebook, etc.). Invitations were sent to Romanian organizations across sectors, including industry, services, IT, logistics, and public administration, to gather diverse insights on AI-DSS adoption. Participation was voluntary and anonymous, with respondents informed about the research purpose and the confidentiality of their responses. Table 1 details the constructs, their operational definitions, and the bibliographic sources used for item formulation.

3.2. Conceptual Framework and Testable Assumptions

Based on the operationalized constructs and theoretical relationships identified in the literature, the research conceptual model was developed to integrate the key dimensions of AI-based decision support system (AI-DSS) adoption. The proposed model explores the causal relationships among data transparency and quality (DATAQ), trust in AI (TRUST), perceived utility (PU), perceived ease of use (PEOU), intent to adopt (INT-AI), and decision-making efficiency (DME). Figure 1 illustrates the architecture of the conceptual model and the hypothesized directions of influence between the latent variables.
Using this conceptual framework, the following research hypotheses were formulated to investigate the direct and indirect relationships among the identified constructs.
H1. 
Data transparency and quality (DATAQ) have a significant positive effect on user trust in AI-DSS systems (TRUST).
H2. 
Trust in AI-DSS systems (TRUST) positively influences (a) perceived utility (PU) and (b) perceived ease of use (PEOU).
H3. 
Perceived Ease of Use (PEOU) has a significant positive effect on Perceived Utility (PU).
H4. 
The intention to adopt AI technology within the decision support system (INT-AI) is influenced by (a) perceived utility (PU), (b) perceived ease of use (PEOU), and (c) trust in AI-DSS systems (TRUST).
H5. 
The intention to adopt AI-DSS (INT-AI) positively influences decision-making efficiency (DME).
The proposed model integrates cognitive, organizational, and technological variables, such as trust, data transparency, perceived utility, and ease of use, that define the process of AI acceptance in support of managerial decision-making. Empirical testing of these hypotheses enables assessment of how these factors influence both adoption intent and decision-making efficiency at the organizational level. A minimal illustrative sketch of the hypothesized structure follows.
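To make the hypothesized structure explicit, the sketch below encodes the structural paths corresponding to H1–H5 and the reflective measurement blocks of five items per construct. The construct abbreviations follow the text, but the data structure and item names are assumptions for illustration only and do not reproduce the SmartPLS model file.

```python
# Illustrative representation of the hypothesized model (H1-H5).
# Labels follow the text; the structure itself is an assumption for illustration.

STRUCTURAL_PATHS = {
    "DATAQ": ["TRUST"],                   # H1
    "TRUST": ["PU", "PEOU", "INT_AI"],    # H2a, H2b, H4c
    "PEOU":  ["PU", "INT_AI"],            # H3, H4b
    "PU":    ["INT_AI"],                  # H4a
    "INT_AI": ["DME"],                    # H5
}

# Reflective measurement model: five questionnaire items per construct
# (item names such as DATAQ1..DATAQ5 are hypothetical).
CONSTRUCTS = ["DATAQ", "TRUST", "PU", "PEOU", "INT_AI", "DME"]
MEASUREMENT_MODEL = {c: [f"{c}{i}" for i in range(1, 6)] for c in CONSTRUCTS}
```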

3.3. Population and Sample

The target population of the study comprises employees and managers from Romanian organizations who use or are familiar with AI-based technologies applied to decision-making (AI-DSS, intelligent ERP platforms, predictive analytics systems, etc.).
Data collection was conducted through a non-probabilistic convenience sampling method, adapted to the specifics of exploratory research. The questionnaire was distributed online via the Google Forms platform from September to November 2025 and disseminated through professional networks such as LinkedIn and ResearchGate, as well as to relevant managerial and academic groups.
According to the methodological recommendations for PLS-SEM analysis, a minimum of two hundred respondents is considered appropriate for models with more than five constructs [48]. In total, 324 valid questionnaires were collected, providing a solid statistical basis for testing the conceptual model.
The distribution of respondents shows a balanced representation across hierarchical levels, with 56% holding management positions and 44% being executive (operational) staff. In terms of demographic structure, the data indicate an important level of diversity in age, professional experience, and fields of activity, as shown in Figure 2.
The demographic and occupational distribution of participants reflects a high degree of diversity, both in terms of gender and occupational and sectoral structure. From a gender perspective, 52% of respondents are women and 48% are men, suggesting balanced participation and adequate representation of both genders in the sample. In terms of age, the highest proportion is held by respondents in the 36–45 age group (31%), followed by those in the 46–55 age range (29%) and the 26–35 age group (20%), indicating a predominance of people in full professional maturity. The 18–25 years (8%) and over 55 (12%) categories have a lower share but ensure complete coverage of the active age spectrum.
In terms of professional experience, almost half of the respondents (46%) have 6–15 years of experience, and a third (31%) have 16–25 years, indicating a high level of competence and familiarity with organizational decision-making processes. Only 9% are at the beginning of their careers (0–5 years), and 14% have over 25 years of experience, making this a representative group of senior leaders or experts in strategic fields.
Occupational roles show 56% of participants in managerial positions and 44% in executive roles, illustrating a balanced representation of decision-makers and operational staff. This distribution provides a comprehensive view of perceptions from both strategic leaders and direct users, enriching the analysis of AI-DSS adoption and organizational impact.
The analysis of the distribution by fields of activity shows a predominance of respondents from IT/software (17%), finance (15%), and the public sector (14%), followed by manufacturing (12%), retail (11%), education (10%), health (9%), and other fields (12%). This sectoral diversity underscores the multidimensional nature of the research and its relevance to a wide range of organizations involved in digital transformation and the adoption of AI-based decision support systems.
Overall, the sample structure is balanced and representative of the Romanian organizational environment, allowing generalization of the results in the context of analyzing the factors influencing adoption intention and the efficiency of AI-DSS use.

4. Results

To ensure the robustness and credibility of the empirical conclusions, the analysis was conducted in two main stages: evaluation of the measurement model and testing of the structural model. In the first stage, the focus was on verifying the quality of the measuring instrument, by examining the internal consistency and convergent validity of the constructs included in the proposed model.

4.1. Evaluation of the Measurement Model

Table 2 displays the findings related to internal reliability and convergent validity.
All analyzed constructs recorded Cronbach’s Alpha coefficient values between 0.876 and 0.927, exceeding the recommended minimum threshold of 0.70 [55], which confirms the adequate internal consistency of the items associated with each construct.
Also, the composite reliability values (rho_c) vary between 0.910 and 0.945, indicating a high stability of the measurements and a conceptual coherence between the items related to each construct. Similarly, the rho_a coefficients are in the range of 0.890–0.927, reinforcing the validity of the results.
Regarding the Average Variance Extracted (AVE), all values exceed the theoretical threshold of 0.50, ranging between 0.670 and 0.774, which demonstrates satisfactory convergent validity according to the methodological recommendations of Fornell and Larcker (1981) [56]. In other words, each construct explains more than 50% of the variance of its indicators, indicating that the items consistently measure the associated latent concept.
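As a transparency aid, the following minimal Python sketch shows how these indicators are conventionally computed from raw item responses and standardized outer loadings; the function names and example values are hypothetical, and the sketch does not reproduce the SmartPLS 4 implementation.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from raw item responses (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def composite_reliability(loadings) -> float:
    """Composite reliability (rho_c) computed from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum()))

def average_variance_extracted(loadings) -> float:
    """AVE: mean of the squared standardized outer loadings of a construct."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Hypothetical loadings for one construct; AVE above 0.50 and rho_c above 0.70
# would indicate convergent validity and internal consistency, respectively.
example_loadings = [0.82, 0.85, 0.88, 0.80, 0.86]
print(average_variance_extracted(example_loadings), composite_reliability(example_loadings))
```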

4.2. Discriminating Validity

The discriminant validity was evaluated using the Fornell–Larcker criterion, which requires that the square root of the AVE for each construct (the values on the diagonal) be greater than its correlations with the other constructs (the values outside the diagonal) [56]. The results presented in Table 3 confirm that this condition is met for all constructs included in the model.
Thus, the diagonal values vary between 0.819 and 0.880, exceeding the corresponding inter-construct correlations, which demonstrates that each construct explains the variance of its own indicators better than the common variance with other constructs. For example, for the DATAQ construct, the square root value of AVE (0.875) is higher than its correlations with TRUST (0.778), DME (0.823), and INT-AI (0.774). Similar results are observed for PEOU (0.880), TRUST (0.870) and PU (0.819), which confirms the appropriate discrimination between the theoretical dimensions analyzed.
Therefore, it can be concluded that the measurement model meets the criteria of discriminant validity, and the constructs are conceptually and empirically distinct, faithfully reflecting the latent dimensions proposed in the theoretical model.
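For readers who wish to reproduce this check, a minimal sketch of the Fornell–Larcker comparison is given below; it assumes that latent variable scores are available as a pandas DataFrame and AVE values as a dictionary, both with hypothetical names.

```python
import numpy as np
import pandas as pd

def fornell_larcker_matrix(construct_scores: pd.DataFrame, ave: dict) -> pd.DataFrame:
    """Construct correlation matrix with the square root of each AVE on the diagonal."""
    matrix = construct_scores.corr()
    for construct in matrix.columns:
        matrix.loc[construct, construct] = np.sqrt(ave[construct])
    return matrix

def discriminant_validity_holds(matrix: pd.DataFrame) -> bool:
    """True if every sqrt(AVE) exceeds all correlations involving that construct."""
    for construct in matrix.columns:
        largest_correlation = matrix.loc[construct].drop(construct).abs().max()
        if matrix.loc[construct, construct] <= largest_correlation:
            return False
    return True
```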

4.3. Factor Loading Analysis (Outer Loadings)

To assess the individual contribution of each item to its latent construct, the standardized factor loadings were examined. According to methodological recommendations [48], values above the threshold of 0.70 indicate a substantial correlation between an item and its construct, reflecting appropriate convergence.
The results presented in Table 4 show that all factor loadings exceed the threshold of 0.70, ranging from 0.700 to 0.909, which confirms that each item contributes significantly to the definition of the associated construct. Among them, the items DATAQ4 (0.906), PEOU3 (0.909), INT5 (0.904), and TRUST5 (0.891) show the highest values, indicating strong representativeness of the dimensions "transparency and data quality", "ease of use", "intent to adopt", and "trust".
Even the lowest recorded value, PU1 (0.700), remains above the minimum acceptable level, supporting its retention in the analysis. Overall, these results confirm the convergent validity of the indicators and the high quality of the measurement model, providing the basis for testing the structural relationships between constructs.
At the same time, to assess the absence of collinearity among the model indicators, the Variance Inflation Factor (VIF) was examined. The results obtained indicate values between 1.69 and 4.79, all of which are below the critical threshold of 5, recommended by Hair et al. (2021) [48]. Thus, the absence of multicollinearity among the items used and the satisfactory independence of the explanatory variables within the measurement model are confirmed.
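A minimal sketch of this collinearity check is shown below; it computes the VIF of each indicator by regressing it on the remaining indicators, assuming the item responses are stored in a pandas DataFrame with hypothetical column names.

```python
import numpy as np
import pandas as pd

def variance_inflation_factors(items: pd.DataFrame) -> pd.Series:
    """VIF_j = 1 / (1 - R^2_j), where R^2_j is obtained by regressing
    indicator j on all remaining indicators (with an intercept)."""
    vifs = {}
    for column in items.columns:
        y = items[column].to_numpy(dtype=float)
        X = items.drop(columns=column).to_numpy(dtype=float)
        X = np.column_stack([np.ones(len(X)), X])      # add intercept term
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        r_squared = 1.0 - residuals.var() / y.var()
        vifs[column] = 1.0 / (1.0 - r_squared)
    return pd.Series(vifs, name="VIF")

# Values below the critical threshold of 5 (here, 1.69-4.79) indicate
# that collinearity among indicators is not problematic.
```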

4.4. Structural Relationship Analysis and Hypothesis Testing

After confirming the validity and reliability of the measurement model, the structural model was evaluated to test the hypothesized causal relationships among the constructs. The standardized path coefficients (β) were estimated using the PLS-SEM algorithm, and the statistical significance of the direct relationships was subsequently assessed (Figure 3).
The results indicate that data transparency and quality (DATAQ) exert a significant positive effect on trust in AI-DSS (β = 0.784), confirming that a high perception of data quality strengthens users’ level of trust in AI-based systems. Trust (TRUST) also has a direct positive impact on both perceived utility (PU) (β = 0.229) and perceived ease of use (PEOU) (β = 0.482), suggesting that trust functions as a psychological catalyst for favorable perceptions related to technology performance and operability.
Moreover, perceived ease of use (PEOU) significantly influences perceived utility (PU) (β = 0.597), which confirms the hypothesis that systems perceived as easy to use are also considered more useful and efficient in supporting decision-making. Similarly, PU (β = 0.352), PEOU (β = 0.135), and TRUST (β = 0.311) all exert positive effects on the intention to adopt AI-DSS (INT-AI), validating the premises of the extended Technology Acceptance Model.
In terms of the final result, the intention to adopt AI-DSS (INT-AI) has a significant and strong effect on decision-making efficiency (DME) (β = 0.544), highlighting the fact that an increased intention to use these systems leads to a tangible improvement in the quality and coherence of organizational decisions.
Overall, all hypothesized relationships are positive and statistically significant (p < 0.05), confirming the validity of the proposed conceptual model and the relevance of the integrated constructs (Table 5).
The results confirm all the hypotheses, demonstrating significant relationships among the model’s constructs in the Romanian organizational context. Data transparency and quality (DATAQ) are key factors influencing user trust (TRUST), which, in turn, shapes perceptions of usefulness and ease of use. Perceived Ease of Use (PEOU) enhances Perceived Utility (PU), and together with trust, they shape the intention to adopt AI-DSS (INT-AI) systems. This intention notably improves decision-making efficiency (DME), emphasizing the model’s importance for advancing AI adoption strategies in organizational settings.
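As a purely illustrative complement (not the SmartPLS procedure itself), the sketch below shows how the significance of a single standardized path coefficient can be approximated by bootstrapping construct scores. It is deliberately simplified to one predictor, whereas SmartPLS resamples and re-estimates the full structural model; the variable names are assumptions.

```python
import numpy as np
import pandas as pd

def bootstrap_path(scores: pd.DataFrame, predictor: str, outcome: str,
                   n_boot: int = 5000, seed: int = 42):
    """Bootstrap a standardized single-predictor path coefficient and its t-value.
    Simplified illustration; SmartPLS re-estimates the whole PLS path model per resample."""
    rng = np.random.default_rng(seed)
    data = scores[[predictor, outcome]].dropna().to_numpy(dtype=float)
    n = len(data)

    def standardized_beta(sample: np.ndarray) -> float:
        x = (sample[:, 0] - sample[:, 0].mean()) / sample[:, 0].std()
        y = (sample[:, 1] - sample[:, 1].mean()) / sample[:, 1].std()
        return float((x * y).mean())   # with one predictor, the standardized slope equals r

    estimate = standardized_beta(data)
    resamples = np.array([standardized_beta(data[rng.integers(0, n, n)])
                          for _ in range(n_boot)])
    t_value = estimate / resamples.std(ddof=1)
    return estimate, t_value

# Hypothetical usage with latent variable scores:
# beta, t = bootstrap_path(latent_scores, "INT_AI", "DME")
```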

5. Discussion

The results of the present research provide an integrated perspective on how cognitive, technological, and organizational factors influence the adoption intention and decision-making efficiency of AI-DSSs. This section provides a critical analysis of the conclusions, in relation to the recent scientific literature, highlighting both theoretical convergence and the distinct contributions of the study. Thus, the results are compared with prior research on the roles of trust, perceived utility, and data transparency in AI technology adoption to assess the degree of alignment, expansion, and differentiation relative to existing models.
The results obtained in the present research are consistent with the conclusions of [33], who developed an AI-based conceptual framework to support project management and demonstrated that AI systems can reduce risks and improve decision-making through predictive analytics and resource optimization. However, in contrast to that predominantly technological and procedural approach [34], the present study extends the analysis to the cognitive and behavioral dimension, exploring how trust, usefulness, and perceived ease influence adoption intention and decision-making efficiency in Romanian organizations. This difference in perspective underscores the theoretical contribution of the current research: the success of AI-DSS integration is not solely determined by algorithmic performance but also by perceptual and contextual factors. In a complementary framework, ref. [57] developed an AI-DSS based on Bayesian networks to support judicial decisions, showing that probabilistic models can reduce cognitive overload and increase transparency in decision-making. In contrast to this legal approach, our study makes an empirical contribution applied to the managerial environment, demonstrating that transparency and data quality strengthen trust and decision-making efficiency in economic organizations [57]. While [57] emphasizes the importance of algorithmic explainability, our research demonstrates the applicability of these principles in real-world organizational contexts.
The results are also consistent with those reported by [58], who identified performance, effort, and transparency as decisive factors in the selection and adoption of AI-DSS. Our findings confirm that trust (TRUST) and data quality (DATAQ) are significant predictors of adoption intent and decision-making efficiency. The contribution of the present study is to integrate these factors into a validated empirical framework of the Technology Acceptance Model (TAM), providing a psychological and organizational perspective on the adoption mechanisms.
In a similar vein, the authors of [59] developed the Multisource AI Scorecard Table (MAST) tool to assess the degree of trust and transparency in AI systems, showing that integrating MAST criteria increases user confidence but does not guarantee superior performance [59]. Our results confirm the critical role of trust in AI-DSS adoption but extend the analysis by highlighting the link between perceived trust, behavioral intention, and decision-making efficiency. By including data transparency and quality variables in an extended empirical model (TAM), the study provides a systemic understanding of how these perceptions translate into measurable performance in decision-making.
Gupta et al. (2022) conducted a comprehensive analysis of the role of AI in DSSs and operational research, highlighting the importance of transparency and ethical governance [17]. Our research empirically validates these directions, demonstrating that transparency and data quality increase trust and improve decision-making efficiency (DME). The results also align with those of [16], which highlights the need to balance accuracy, transparency, and trust, showing that, although accurate, deep learning models can diminish interpretability and implicitly erode user trust. Our research empirically supports these observations, demonstrating that perceived transparency and data quality (DATAQ) significantly affect trust (TRUST) and decision-making efficiency (DME). The empirically tested extended TAM thus provides concrete validation of the balance proposed theoretically in [16].
In an innovative ethical approach, ref. [60] explores the implications of equipping AI-DSS with emotional capabilities, warning of the risk of diminishing human autonomy. Our study brings an empirical complement to this discussion, demonstrating that trust and transparency are necessary conditions for a healthy balance between human control and automation. In contrast to the conceptual approach of [60], the present research validates trust as a behavioral predictor of technology adoption rather than merely an ethical issue.
From an applicative perspective, the authors of [61] have demonstrated, in a financial context, that integrating AI increases decision-making accuracy but raises issues of interpretability and accountability. Our results complement this perspective, empirically confirming that transparency (DATAQ) and trust (TRUST) amplify decision-making efficiency (DME). Likewise, the authors of [62] showed that legal AI systems can increase efficiency but erode decision-making autonomy in the absence of transparency. Our study expands on this analysis, proposing a socio-technical approach in which transparency and data quality turn AI-DSSs into collaborative decision-making partners rather than substitutes.
In line with these ideas, ref. [63] argued that the performance of AI-based decisions depends on the trust calibrated between algorithmic autonomy and human control. Our results confirm that trust is shaped by transparency and data quality, influencing perceptions of usefulness and ease of use, as well as adoption intent. From a strategic perspective, ref. [64] highlighted the need to align organizational objectives with technological implementation. The present research complements this vision with a micro-organizational approach, demonstrating that transparency, data quality, and trust are cognitive mechanisms that facilitate the bottom-up acceptance of AI-DSS, thereby strengthening the technology’s strategic value.
In the same vein, the authors of [65] showed that trust in AI-assisted decisions depends on perceptions of control and vulnerability rather than on technological performance. The present study operationalizes this concept, showing that trust can be systematically built through data transparency and process clarity, thereby serving as a cognitive mechanism to improve decision-making efficiency.
In line with human resources research, the authors of [66] emphasized the importance of employee engagement and ethical governance in the adoption of AI systems; we confirm and expand on these conclusions, demonstrating that transparency and data quality not only foster user engagement but also strengthen trust, perceived usefulness, and adoption intent. From a psychological perspective, the authors of [67] showed that constant interaction with AI can lead to decision-making automatisms. Our results counterbalance this trend, suggesting that transparent and explainable AI-DSSs can stimulate deliberative reasoning, reducing the risk of behavioral addiction. Another study that emphasizes the role of artificial intelligence in shaping user perceptions, trust, and decision-related behaviors in digital environments is that of [68], which analyzes how AI-driven personalization and immersive digital experiences contribute to consumer engagement and trust in digital brand management. Their findings indicate that transparency of digital interactions, perceived usefulness of AI-enabled tools, and the quality of data-driven personalization significantly influence users' attitudes and behavioral intentions. Although conducted in the context of digital branding, these results are consistent with and supportive of the findings of the present study, reinforcing the relevance of transparency, perceived usefulness, and trust as key determinants of AI acceptance across different application domains.
The results obtained reinforce the idea that integrating AI-DSSs into decision-making processes is not only a technological challenge but also a complex process of cognitive and organizational adaptation. Research demonstrates that transparency and data quality are fundamental to trust in technology, and that this trust, in turn, fuels positive perceptions of usefulness and ease of use, thereby influencing adoption intention and decision-making efficiency. Thus, the empirically validated model confirms the interdependence between the technological and psychological dimensions, providing an integrated perspective on the process of acceptance of artificial intelligence in the organizational environment. While the findings highlight the positive role of trust in fostering favorable attitudes toward AI-DSS and adoption intentions, it is important to acknowledge the potential risks associated with excessively high levels of trust. Prior research warns that overreliance on automated systems may lead to automation bias, a phenomenon in which decision-makers accept AI-generated recommendations uncritically, even when contextual judgment or contradictory evidence is available. In organizational decision-making, such passive reliance may reduce vigilance, limit critical reflection, and ultimately compromise decision quality.
From a theoretical perspective, the study extends the Technology Acceptance Model (TAM) by integrating constructs related to transparency and data quality, which are essential for explainability and trust in AI-DSS. This conceptual expansion allows for a deeper understanding of how user perceptions translate into adoption behaviors and measurable decision-making performance. At the same time, the research critically engages with existing literature, empirically demonstrating that the success of AI technologies cannot be explained solely by algorithmic performance but must be analyzed through the prism of the cognitive and social mechanisms that govern human–machine interaction.
On a practical level, the results provide guidelines for the organizational environment: adopting AI-DSS requires transparent data governance, user-level explainability mechanisms, and an organizational culture grounded in trust and participation. The promotion of transparency and information quality must be accompanied by digital literacy and ethical communication programs that reinforce perceptions of reliability and control. In this sense, AI-DSSs should not be perceived as tools that replace human reasoning, but rather as cognitive partners that support decision-making through coherence, speed, and accuracy, without compromising decision-makers’ autonomy and responsibility.
Therefore, this research makes a relevant contribution at both theoretical and practical levels: by validating an extended model of technology acceptance and by providing an applicable framework for the responsible design and implementation of AI-DSSs in modern organizations. At the same time, it opens future research directions on dynamic trust assessment, the role of adaptive explanations, and the optimal balance between automation and human control in AI-assisted decision-making.

6. Conclusions

The main objective of this research was to examine managerial and user attitudes and perceptions regarding the implementation of artificial intelligence-based decision support systems (AI-DSSs) in Romanian organizations, with a particular focus on how cognitive, technological, and organizational dimensions are associated with adoption intention and perceived decision-making efficiency.
The results of the PLS-SEM analysis confirm the validity of the proposed model and reveal significant relationships among the analyzed variables. In particular, transparency and data quality have proven to be direct determinants of trust in AI-DSSs, and trust, in turn, has positively influenced perceived utility (PU), perceived ease of use (PEOU), and intent to adopt (INT-AI). The intent to adopt also significantly affected decision-making efficiency (DME), supporting the hypothesis that the systematic and deliberate use of AI-DSS improves organizational performance.
Rather than assessing the technical performance of AI-DSS, this study provides an integrated empirical perspective on the psychological and organizational mechanisms underlying attitudes toward AI implementation in decision-making processes. By emphasizing user perceptions related to data clarity, system reliability, and ease of interaction, the findings illustrate how these perceptions translate into adoption-related attitudes and perceived improvements in decision-making practices.
Overall, the research contributes to the growing literature on AI-supported decision-making by clarifying how attitudinal and perceptual factors shape organizations’ readiness to embrace AI-DSS, offering relevant insights for both future research and responsible managerial practice.

6.1. Theoretical Implications

Theoretically, this paper contributes to the consolidation and expansion of the Technology Acceptance Model (TAM) by integrating the DATAQ and TRUST constructs, addressing current gaps in AI-DSS literature, and providing a foundation for understanding complex adoption challenges.
This perspective contributes to the development of a socio-technical theory of AI adoption, in which the interaction between humans and the intelligent system is seen as an adaptive process, based on evaluations of transparency, predictability, and control. Unlike traditional technology acceptance models, this research demonstrates that trust and data quality are not only prerequisites for adoption but also dynamic elements that can evolve with the user experience. Thus, the paper extends the applicability of TAM to a specific and current context—that of AI-assisted decisions—and provides a robust conceptual framework for further studies on algorithmic governance, decision explainability, and AI ethics in the organizational environment. In particular, the relationship between DATAQ and TRUST highlighted in this model confirms recent literature on “AI accountability” and “responsible AI”, contributing to the integration of ethical factors into classical theories of technological adoption.

6.2. Managerial and Practical Implications

From an application perspective, the study’s results provide clear directions for organizations planning to implement or expand AI-DSS, emphasizing the importance of trust, transparency, and data quality to ensure successful adoption and improved decision-making. When algorithmic transparency is limited due to the “black box” nature of AI models, but data quality is high, managers can adopt several strategies to maintain decision accuracy and organizational confidence. First, organizations should implement procedural transparency mechanisms, such as clearly documenting data sources, data preprocessing steps, and validation procedures. Even if the model’s internal logic is not fully interpretable, transparency regarding input quality and governance helps decision-makers assess the reliability of AI-generated outputs.
Second, perceptions of usefulness and ease of use can be reinforced through user-centered design, intuitive interfaces, and training programs that facilitate the interpretation of AI-generated recommendations. Such training helps to reduce technological anxiety and promote responsible use.
At the same time, the study highlights the need for hybrid governance, in which AI-DSS serves as a cognitive partner to the decision-maker, providing analytical support without substituting for professional reasoning. In this context, algorithmic accountability policies—such as model auditing, data validation, and feedback mechanisms—become essential for maintaining trust and enabling continuous system improvement.
The results also show that decision-making efficiency increases significantly in organizations that harmonize technological and behavioral components, combining robust data infrastructure with human capital development. This complementarity indicates the need for parallel investments in digital platforms, continuous learning, and data-driven leadership.
The PLS-SEM analysis highlights the quantifiable dimension of these implications: transparency and data quality have a substantial impact on trust, and the intention to adopt is a significant determinant of decision-making efficiency. Thus, investments in explainability and user training translate into measurable improvements in decision performance.
In terms of applicability, the study’s results are relevant to areas with high data volume and decision-making pressure, such as IT, financial services, logistics, public administration, and human resources management. The validated model provides a sound operational framework for both developers of AI-DSS solutions and managers interested in the responsible and efficient integration of technologies into decision-making processes, thereby guiding future research and practice to strengthen the link between theory and application.

6.3. Research Limitations

Like any empirical research, this study has several limitations that should be considered when interpreting the findings. First, a cross-sectional design does not permit examination of changes in perceptions over time or support strong causal inferences. The results, therefore, reflect attitudes and perceptions at a specific moment, in a context in which AI-based decision support systems are still in the early stages of adoption in Romania. Future research could employ longitudinal designs to capture the dynamic evolution of trust, transparency perceptions, and adoption attitudes as organizational experience with AI-DSS increases.
Second, although the study includes 324 respondents, the number of participating organizations remains small relative to the Romanian economy as a whole. This reflects the currently uneven diffusion of AI-based decision support systems, as only a restricted number of companies have reached a level of digital maturity sufficient to allow a meaningful evaluation of AI-DSS. Consequently, while the respondents represent the full population of interest within the selected organizational contexts, the findings cannot be generalized to all Romanian organizations. Instead, they provide context-specific insights into organizations that are actively engaged in AI-supported decision-making initiatives.
Third, the research relies on self-reported data collected through an online questionnaire, which may be subject to common method bias and social desirability effects. Although this approach is appropriate for capturing perceptions and attitudes toward AI-DSS, future studies could complement subjective measures with objective indicators of decision-making performance, such as accuracy, response time, or consistency of decisions.
Finally, the proposed model focuses primarily on perceptual and attitudinal variables. Additional organizational factors—such as digital leadership, AI maturity, governance mechanisms, or the level of algorithmic explainability—were not explicitly examined. Incorporating these variables in future research could enhance the explanatory power of the model and provide a more comprehensive understanding of AI-DSS adoption in organizational decision-making processes.

6.4. Future Research Directions

Building on the limitations and findings of this study, several well-defined directions for future research can be identified. First, the proposed conceptual model should be replicated and extended in other organizational, cultural, and industrial contexts to assess the robustness and contextual sensitivity of the identified relationships. Comparative cross-country studies or multi-industry analyses would allow researchers to examine how differences in digital maturity, regulatory environments, and organizational culture shape attitudes toward AI-DSS adoption.
Second, future research could expand the model by incorporating additional perceptual and ethical variables, such as algorithmic fairness, accountability, decision responsibility, and perceived control in human–AI collaboration. These constructs are increasingly relevant in AI-supported decision-making and could provide deeper insights into the formation of trust and adoption-related attitudes, particularly in high-stakes organizational contexts.
Third, an important avenue for further investigation concerns the role of explainable artificial intelligence (XAI). Future studies could empirically test how different forms of explainability—such as adaptive or role-specific explanations tailored to decision-makers’ expertise—affect trust, perceived usefulness, and intention to adopt AI-DSS. Experimental or mixed-method research designs could be particularly valuable in isolating the effects of explainability mechanisms on user perceptions and decision outcomes.
Finally, future research should explore hybrid decision-making models that integrate AI-based decision support with human collective intelligence. Such studies could analyze how feedback loops between human decision-makers and AI systems influence learning, coordination, and performance over time, as well as how AI-DSS can enhance group deliberation and collaborative decision quality.
In conclusion, this paper contributes to strengthening a complex and balanced understanding of the role of artificial intelligence in contemporary decision-making. It demonstrates that the success of AI-DSS does not depend solely on algorithmic performance, but on the quality of the relationship between technology and user—a relationship based on transparency, trust, and cognitive collaboration. These results provide a solid framework for the design and implementation of future decision support systems that can integrate artificial intelligence in an ethical, explainable, and human-value-oriented way.

Author Contributions

Conceptualization, G.-T.B. and S.-G.S.; methodology, G.-T.B.; software, S.-G.S.; validation, A.D. and M.C.U.; formal analysis, A.D.; investigation, G.-T.B.; resources, M.C.U.; data curation, S.-G.S.; writing—original draft preparation, G.-T.B. and S.-G.S.; writing—review and editing, C.A.I.; visualization, A.D.; supervision, C.A.I.; project administration, C.A.I.; funding acquisition, A.D. and M.C.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Kučević, E.; Leible, S.; Lewandowski, T.; von Brackel-Schmidt, C.; Ohlsen, F.P. A user-based study on the acceptance of artificial intelligence-based decision-support systems. In Proceedings of the PACIS 2024, Ho Chi Minh City, Vietnam, 1–5 July 2024.
2. Guo, Y.; Wang, N.; Xu, Z.-Y.; Wu, K. The internet of things-based decision support system for information processing in intelligent manufacturing using data mining technology. Mech. Syst. Signal Process. 2020, 142, 106630.
3. Kostopoulos, G.; Davrazos, G.; Kotsiantis, S. Explainable artificial intelligence-based decision support systems: A recent review. Electronics 2024, 13, 2842.
4. Lal, K.; Ballamudi, V.K.R.; Thaduri, U.R. Exploiting the potential of artificial intelligence in decision support systems. ABC J. Adv. Res. 2018, 7, 131–138.
5. Eom, S.B.; Lee, S.M.; Kim, E.B.; Somarajan, C. A survey of decision support system applications (1988–1994). J. Oper. Res. Soc. 1998, 49, 109–120.
6. Yazdani, M.; Zarate, P.; Coulibaly, A.; Zavadskas, E.K. A group decision making support system in logistics and supply chain management. Expert Syst. Appl. 2017, 88, 376–392.
7. Sharda, R.; Delen, D.; Turban, E. Analytics, Data Science, & Artificial Intelligence: Systems for Decision Support; Pearson: London, UK, 2021.
8. Arnott, D.; Pervan, G. A critical analysis of decision support systems research. J. Inf. Technol. 2005, 20, 67–87.
9. Mohemad, R.; Hamdan, A.R.; Othman, Z.A.; Noor, N.M.M. Decision support systems (DSS) in construction tendering processes. arXiv 2010, arXiv:1004.3260.
10. Wang, H.; Xu, Z.; Fujita, H.; Liu, S. Towards felicitous decision making: An overview on challenges and trends of Big Data. Inf. Sci. 2016, 367, 747–765.
11. Shen, F.; Zhao, X.; Kou, G. Three-stage reject inference learning framework for credit scoring using unsupervised transfer learning and three-way decision theory. Decis. Support Syst. 2020, 137, 113366.
12. Song, Y.; Qiu, X.; Liu, J. The impact of artificial intelligence adoption on organizational decision-making: An empirical study based on the technology acceptance model in business management. Systems 2025, 13, 683.
13. Burggräf, P.; Wagner, J.; Koke, B.; Bamberg, M. Performance assessment methodology for AI-supported decision-making in production management. Procedia CIRP 2020, 93, 891–896.
14. Soori, M.; Jough, F.K.G.; Dastres, R.; Arezoo, B. AI-based decision support systems in Industry 4.0: A review. J. Econ. Technol. 2024, 4, 206–225.
15. Ofosu-Ampong, K.; Asmah, A.; Amoako Kani, J.; Commey, N.O. Factors Influencing Artificial Intelligence Decision-making Quality. SSRN Preprint 2025, 7, 11–20.
16. Kovari, A. AI for decision support: Balancing accuracy, transparency, and trust across sectors. Information 2024, 15, 725.
17. Gupta, S.; Modgil, S.; Bhattacharyya, S.; Bose, I. Artificial intelligence for decision support systems in the field of operations research: Review and future scope. Ann. Oper. Res. 2022, 308, 215–274.
18. Brożek, B.; Furman, M.; Jakubiec, M.; Kucharzyk, B. The black box problem revisited: Real and imaginary challenges for automated legal decision making. Artif. Intell. Law 2024, 32, 427–440.
19. Coussement, K.; Abedin, M.Z.; Kraus, M.; Maldonado, S.; Topuz, K. Explainable AI for enhanced decision-making. Decis. Support Syst. 2024, 184, 114276.
20. Cetinkaya, N.E. Between transparency and trust: Identifying key factors in AI system perception. Behav. Inf. Technol. 2025; in press.
21. Cheong, B.C. Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Front. Hum. Dyn. 2024, 6, 1421273.
22. Yu, L.; Li, Y. Artificial intelligence decision-making transparency and employees’ trust: The parallel multiple mediating effect of effectiveness and discomfort. Behav. Sci. 2022, 12, 127.
23. Salaheldin, S.; Hussein, S. The determinants of AI adoption and its impact on employee engagement. Int. J. Manag. Appl. Res. 2025, 12, 45–61.
24. Nguyen, K.M.; Bui, T.T.M.; Pham, H.T.T.; Nguyen, L.X.; Pham, H.T.X.; Le Gia, L.; Nguyen, N.T. Fostering employees’ AI adoption in strategic marketing planning and decision making. Cogn. Technol. Work 2025; in press.
25. Rajagopal, N.K.; Qureshi, N.I.; Durga, S.; Ramirez Asis, E.H.; Huerta Soto, R.M.; Gupta, S.K.; Deepak, S. Future of business culture: AI-driven digital framework for organizational decision-making. Complexity 2022, 2022, 7796507.
26. Fel, S.; Kozak, J.; Horodyski, P. Responsibility and AI: Exploring technology acceptance models. J. Innov. Knowl. 2025, 10, 100852.
27. Fedorka, P.; Buchuk, R.; Klymenko, M.; Saibert, F.; Petrushyn, A. The Use of Adaptive Artificial Intelligence (AI) Learning Models in Decision Support Systems for Smart Regions. J. Res. Innov. Technol. 2025, 4, 99.
28. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance. MIS Q. 1989, 13, 319–340.
29. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model. Manag. Sci. 2000, 46, 186–204.
30. Venkatesh, V.; Bala, H. Technology acceptance model 3. Decis. Sci. 2008, 39, 273–315.
31. Baroni, I.; Re Calegari, G.; Scandolari, D.; Celino, I. AI-TAM. Hum. Comput. 2022, 9, 1–21.
32. Wanner, J.; Herm, L.V.; Heinrich, K.; Janiesch, C. The effect of transparency and trust on intelligent system acceptance. Electron. Mark. 2022, 32, 2079–2102.
33. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI. Humanit. Soc. Sci. Commun. 2024, 11, 1568.
34. Almalki, S.S. AI-driven decision support systems in agile software project management. Systems 2025, 13, 208.
35. Anayat, S.; Rasool, G. Artificial Intelligence Marketing (AIM): Connecting-the-Dots Using Bibliometrics. J. Mark. Theory Pract. 2022, 32, 114–135.
36. Allal-Chérif, O.; Simón-Moya, V.; Cuenca Ballester, A.C. Intelligent purchasing. J. Bus. Res. 2021, 124, 69–76.
37. Al-Rahmi, W.M.; Yahaya, N.; Alamri, M.M.; Alyoussef, I.Y.; Al-Rahmi, A.M.; Kamin, Y.B. Integrating innovation diffusion theory with technology acceptance model: Supporting students’ attitude towards using a massive open online courses (MOOCs) systems. Interact. Learn. Environ. 2021, 29, 1380–1392.
38. Pathak, A.; Bansal, V. AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents. Comput. Hum. Behav. Artif. Humans 2024, 2, 100094.
39. Choung, H.; David, P.; Ross, A. Trust in AI and its role in the acceptance of AI technologies. Int. J. Hum.-Comput. Interact. 2023, 39, 1727–1739.
40. Cheng, X.; Zhang, X.; Yang, B.; Fu, Y. An investigation on trust in AI-enabled collaboration: Application of AI-Driven chatbot in accommodation-based sharing economy. Electron. Commer. Res. Appl. 2022, 54, 101164.
41. McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and validating trust measures for e-commerce: An integrative typology. Inf. Syst. Res. 2002, 13, 334–359.
42. Alshammari, M.M.; Al-Mamary, Y.H. User acceptance of AI-powered training: Extending the technology acceptance model (TAM). Future Bus. J. 2025, 11, 239.
43. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80.
44. Manolache, S.; Popescu, N. The Development of an OpenAI-Based Solution for Decision-Making. Appl. Sci. 2025, 15, 3408.
45. Dumitrescu, C.; Tăbîrcă, A.-I.; Stanciu, A.; Nemtoi, L.; Radu, V.; Gore, B.E. AI-Based Decision Support System for Attenuating Traffic Congestion. Appl. Sci. 2025, 15, 11470.
46. Gheorghe, C.; Soica, A. Revolutionizing Urban Mobility: A Systematic Review of AI, IoT, and Predictive Analytics in Adaptive Traffic Control Systems for Road Networks. Electronics 2025, 14, 719.
47. Oncioiu, I.; Mândricel, D.A.; Hojda, M.H. Artificial Intelligence-Enabled Digital Transformation in Circular Logistics: A Structural Equation Model of Organizational, Technological, and Environmental Drivers. Logistics 2025, 9, 102.
48. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. An introduction to structural equation modeling. In Partial Least Squares Structural Equation Modeling Using R: A Workbook; Springer: Cham, Switzerland, 2021; pp. 1–29.
49. Andargoli, A.E.; Ulapane, N.; Nguyen, T.A.; Shuakat, N.; Zelcer, J.; Wickramasinghe, N. Intelligent decision support systems for dementia care. Artif. Intell. Med. 2024, 150, 102815.
50. Resende, C.H.; Geraldes, C.A.; Junior, F.R.L. Decision models for supplier selection. Procedia Manuf. 2021, 55, 492–499.
51. Ibrahim, F.; Münscher, J.C.; Daseking, M.; Telle, N.T. The technology acceptance model and adopter type analysis in the context of artificial intelligence. Front. Artif. Intell. 2025, 7, 1496518.
52. Orbán, F.; Stefkovics, Á. Trust in artificial intelligence: A survey experiment to assess trust in algorithmic decision-making. AI Soc. 2025, 40, 4955–4969.
53. Jarrahi, M.H. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Bus. Horiz. 2018, 61, 577–586.
54. Mustofa, R.H.; Kuncoro, T.G.; Atmono, D.; Hermawan, H.D. Extending the technology acceptance model: The role of subjective norms, ethics, and trust in AI tool adoption among students. Comput. Educ. Artif. Intell. 2025, 8, 100379.
55. Hair, J.F., Jr.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis; Pearson: New York, NY, USA, 2010; p. 785.
56. Fornell, C.; Larcker, D.F. Evaluating structural equation models. J. Mark. Res. 1981, 18, 39–50.
57. Morić, Z.; Dakić, V.; Urošev, S. An AI-Based Decision Support System for judicial decision-making. Systems 2025, 13, 131.
58. Heinrich, K.; Janiesch, C.; Krancher, O.; Stahmann, P.; Wanner, J.; Zschech, P. Decision factors for the selection of AI-based decision support systems—The case of task delegation in prognostics. PLoS ONE 2025, 20, e0328411.
59. Salehi, P.; Ba, Y.; Kim, N.; Mosallanezhad, A.; Pan, A.; Cohen, M.C.; Wang, Y.; Zhao, J.; Bhatti, S.; Sung, J.; et al. Towards Trustworthy AI-Enabled Decision Support Systems: Validation of the Multisource AI Scorecard Table (MAST). J. Artif. Intell. Res. 2024, 80, 1311–1341.
60. Tretter, M. Equipping AI-decision-support-systems with emotional capabilities? Ethical perspectives. Front. Artif. Intell. 2024, 7, 1398395.
61. Artene, A.E.; Domil, A.E.; Ivascu, L. Unlocking Business Value: Integrating AI-Driven Decision-Making in Financial Reporting Systems. Electronics 2024, 13, 3069.
62. Kolkman, D.; Bex, F.; Narayan, N.; van der Put, M. Justitia ex machina: The impact of an AI system on legal decision-making and discretionary authority. Big Data Soc. 2024, 11, 1–14.
63. Montealegre-López, N. Exploring the role of trust in AI-driven decision-making: A systematic literature review. Manag. Rev. Q. 2025; in press.
64. Gudigantala, N.; Madhavaram, S.; Bicen, P. An AI decision-making framework for business value maximization. AI Mag. 2023, 44, 67–84.
65. Vereschak, O.; Alizadeh, F.; Bailly, G.; Caramiaux, B. Trust in AI-assisted decision making: Perspectives from those behind the system and those for whom the decision is made. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–14.
66. Taslim, W.S.; Rosnani, T.; Fauzan, R. Employee involvement in AI-driven HR decision-making: A systematic review. SA J. Hum. Resour. Manag. 2025, 23, 2856.
67. Treiman, L.S.; Ho, C.-J.; Kool, W. The consequences of AI training on decision-making. Proc. Natl. Acad. Sci. USA 2024, 121, e2408731121.
68. Terentieva, N.; Karpenko, V.; Yarova, N.; Shkvyria, N.; Pasko, M. Technological innovation in digital brand management: Leveraging artificial intelligence and immersive experiences. J. Res. Innov. Technol. 2025, 4, 201.
Figure 1. Conceptual model. (Author’s own processing).
Figure 2. Profile of respondents. (Author’s own processing).
Figure 3. Valid estimated model from SmartPLS. (Author’s own processing).
Table 1. Constructs of the conceptual model and corresponding items.

Perceived usefulness (PU)
Operational definition: The degree to which the user believes that the use of AI-DSS improves the quality and efficiency of decisions.
Items:
PU1. The use of the AI-DSS significantly improves decision quality within my organization.
PU2. The AI-DSS provides valuable information to support strategic decision-making.
PU3. Using AI-DSS simplifies repetitive tasks in decision-making processes.
PU4. AI-DSS directly contributes to increasing the overall efficiency of decision-making.
PU5. The integration of AI-DSS delivers tangible benefits for risk management.
Main theoretical sources: [17,28,49,50]

Perceived ease of use (PEOU)
Operational definition: The degree to which the user perceives that AI-DSS is easy to learn and use.
Items:
PEOU1. The AI-DSS is easy to learn and use.
PEOU2. The AI-DSS interface is clear, logical, and easy to navigate.
PEOU3. My interaction with AI-DSS is simple and easy to understand.
PEOU4. Relevant results from AI-DSS are achieved with minimal effort.
PEOU5. I can quickly acquire the necessary skills to use the AI-DSS system effectively.
Main theoretical sources: [12,30,51]

Trust in AI (TRUST)
Operational definition: The user’s conviction that AI-DSS is reliable, transparent, and ethical.
Items:
TRUST1. AI-DSS provides accurate, reliable information.
TRUST2. AI-DSS is sufficiently transparent in explaining how its recommendations are generated.
TRUST3. AI-DSS manages and uses data ethically and responsibly.
TRUST4. I am comfortable making decisions based on AI-DSS recommendations.
TRUST5. AI-DSS serves the organization’s interests and supports its objectives.
Main theoretical sources: [16,33]

Transparency and data quality (DATAQ)
Operational definition: The perception of the accuracy, clarity, and transparency of the data used by AI-DSS.
Items:
DATAQ1. The data used by AI-DSS are accurate, complete, and up to date.
DATAQ2. AI-DSS clarifies the sources, processing, and accuracy of the data used.
DATAQ3. I understand how AI-DSS transforms and interprets data to generate decision-making recommendations.
DATAQ4. AI-DSS ensures an adequate level of transparency on the algorithms and methods used.
DATAQ5. Data management in AI-DSS is conducted safely, verifiably, and in accordance with ethical standards.
Main theoretical sources: [16,52]

AI-DSS adoption intention (INT-AI)
Operational definition: The willingness of users to adopt and continue using AI-DSS in decision-making.
Items:
INT-AI1. I intend to continue using the AI-DSS in my professional work.
INT-AI2. I recommend implementing and using AI-DSS within my organization.
INT-AI3. AI-DSS will be an essential component of the organization’s future development.
INT-AI4. I am willing to invest time and resources to use AI-DSS effectively.
INT-AI5. I intend to use AI-DSS as the main tool to support strategic decision-making.
Main theoretical sources: [53,54]

Decision-making efficiency (DME)
Operational definition: The extent to which AI-DSS contributes to faster, more coherent, and more informed decisions.
Items:
DME1. Decisions made with AI-DSS support are faster, more consistent, and more accurate.
DME2. AI-DSS helps reduce errors and uncertainties in decision-making.
DME3. Using AI-DSS improves the overall performance of the organization’s decision-making process.
DME4. AI-DSS supports decision-making based on predictive analytics and objective data.
DME5. AI-DSS makes it easy to evaluate complex alternatives and select optimal options.
Main theoretical sources: [14,17]

(Author’s own processing).
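As a practical illustration, the construct–item structure in Table 1 can be encoded directly for survey scoring. The Python sketch below is illustrative only: column labels such as `PU1` or `INT_AI3` are assumed names for the questionnaire export rather than the authors' actual dataset, and averaging items into composites is a simple proxy, not the PLS weighting used in the study.

```python
import pandas as pd

# Hypothetical construct-to-item mapping mirroring Table 1.
# Column labels are assumed; adapt them to the actual survey export.
CONSTRUCT_ITEMS = {
    "PU":     [f"PU{i}" for i in range(1, 6)],
    "PEOU":   [f"PEOU{i}" for i in range(1, 6)],
    "TRUST":  [f"TRUST{i}" for i in range(1, 6)],
    "DATAQ":  [f"DATAQ{i}" for i in range(1, 6)],
    "INT_AI": [f"INT_AI{i}" for i in range(1, 6)],
    "DME":    [f"DME{i}" for i in range(1, 6)],
}

def composite_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the five Likert items of each construct into one composite score per respondent."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in CONSTRUCT_ITEMS.items()}
    )
```

Such unit-weighted composites are only a rough stand-in for the latent variable scores estimated by the PLS algorithm, but they are convenient for quick plausibility checks on the raw data.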
Table 2. Convergent validity and reliability tests.

Construct | Cronbach’s Alpha | Composite Reliability (rho_a) | Composite Reliability (rho_c) | Average Variance Extracted (AVE)
PU        | 0.876 | 0.890 | 0.910 | 0.670
PEOU      | 0.927 | 0.927 | 0.945 | 0.774
DATAQ     | 0.923 | 0.923 | 0.942 | 0.766
TRUST     | 0.919 | 0.919 | 0.939 | 0.757
INT-AI    | 0.915 | 0.916 | 0.936 | 0.746
DME       | 0.916 | 0.916 | 0.937 | 0.749
(Author’s own processing).
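The indicators in Table 2 follow standard reliability formulas: Cronbach’s alpha from the raw item responses, and composite reliability (rho_c) and AVE from the standardized outer loadings. A minimal Python sketch, assuming a pandas DataFrame of item responses and a vector of loadings, is given below; it is not the SmartPLS 4 implementation, only a way to verify the reported figures.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct's block of items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def composite_reliability(loadings) -> float:
    """rho_c = (sum of loadings)^2 / [(sum of loadings)^2 + sum of (1 - loading^2)]."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Using the DATAQ loadings reported in Table 4:
dataq_loadings = [0.813, 0.896, 0.862, 0.906, 0.890]
print(round(composite_reliability(dataq_loadings), 3))       # ~0.942
print(round(average_variance_extracted(dataq_loadings), 3))  # ~0.764
```

With the DATAQ loadings from Table 4, these formulas return rho_c ≈ 0.942 and AVE ≈ 0.764, which matches Table 2 within the rounding of the displayed loadings.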
Table 3. Discriminant validity: Fornell–Larcker criterion.

       | DATAQ | DME   | INT-AI | PEOU  | PU    | TRUST
DATAQ  | 0.875 |       |        |       |       |
DME    | 0.823 | 0.866 |        |       |       |
INT-AI | 0.774 | 0.846 | 0.864  |       |       |
PEOU   | 0.648 | 0.730 | 0.720  | 0.880 |       |
PU     | 0.652 | 0.704 | 0.717  | 0.790 | 0.819 |
TRUST  | 0.778 | 0.776 | 0.738  | 0.840 | 0.730 | 0.870
(Author’s own processing).
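The Fornell–Larcker criterion behind Table 3 requires the square root of each construct’s AVE (the diagonal) to exceed that construct’s correlations with every other construct. The Python sketch below expresses this check; it assumes a full symmetric correlation matrix of latent variable scores and the AVE values of Table 2, and is a generic re-implementation rather than the SmartPLS output.

```python
import numpy as np
import pandas as pd

def fornell_larcker_check(corr: pd.DataFrame, ave: pd.Series) -> pd.Series:
    """For each construct, test whether sqrt(AVE) exceeds its largest
    absolute correlation with any other construct (True = criterion met)."""
    results = {}
    for construct in corr.index:
        max_corr = corr.loc[construct].drop(construct).abs().max()
        results[construct] = bool(np.sqrt(ave[construct]) > max_corr)
    return pd.Series(results, name="fornell_larcker_met")

# AVE values from Table 2 (the correlation matrix would come from the estimated model):
ave = pd.Series({"PU": 0.670, "PEOU": 0.774, "DATAQ": 0.766,
                 "TRUST": 0.757, "INT_AI": 0.746, "DME": 0.749})
print(np.sqrt(ave).round(3))  # approximately the diagonal of Table 3, up to rounding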
Table 4. Heatmap representation of outer loadings for the measurement model.

Item | DATAQ | DME   | INT-AI | PEOU  | PU    | TRUST
1    | 0.813 | 0.773 | 0.810  | 0.813 | 0.700 | 0.817
2    | 0.896 | 0.880 | 0.864  | 0.893 | 0.851 | 0.875
3    | 0.862 | 0.892 | 0.871  | 0.909 | 0.885 | 0.876
4    | 0.906 | 0.891 | 0.869  | 0.894 | 0.841 | 0.887
5    | 0.890 | 0.886 | 0.904  | 0.888 | 0.802 | 0.891
Note. Background colors range from light red to dark green, where darker green indicates higher standardized outer loading values. (Author’s own processing).
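Because the colour coding of Table 4 is lost in a text rendering, a small matplotlib sketch of how such a loading heatmap could be regenerated from a DataFrame is shown below. It is a generic visualization under an assumed data layout (items in rows, constructs in columns), not the authors' original figure.

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_loading_heatmap(loadings: pd.DataFrame) -> None:
    """Render standardized outer loadings on a red-to-green scale (darker green = higher)."""
    fig, ax = plt.subplots(figsize=(7, 4))
    image = ax.imshow(loadings.values, cmap="RdYlGn", vmin=0.6, vmax=1.0)
    ax.set_xticks(range(loadings.shape[1]))
    ax.set_xticklabels(loadings.columns)
    ax.set_yticks(range(loadings.shape[0]))
    ax.set_yticklabels(loadings.index)
    for i in range(loadings.shape[0]):            # annotate each cell with its loading
        for j in range(loadings.shape[1]):
            ax.text(j, i, f"{loadings.iat[i, j]:.3f}", ha="center", va="center", fontsize=8)
    fig.colorbar(image, ax=ax, label="outer loading")
    fig.tight_layout()
    plt.show()
```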
Table 5. Hypothesis testing.

Hypothesis | Relationship Tested | Coefficient β | Statistical Significance (p) | Result
H1  | DATAQ → TRUST  | 0.784 | <0.001 | Confirmed
H2a | TRUST → PU     | 0.229 | <0.01  | Confirmed
H2b | TRUST → PEOU   | 0.842 | <0.001 | Confirmed
H3  | PEOU → PU      | 0.597 | <0.001 | Confirmed
H4a | PU → INT-AI    | 0.331 | <0.001 | Confirmed
H4b | PEOU → INT-AI  | 0.135 | <0.05  | Confirmed
H4c | TRUST → INT-AI | 0.382 | <0.01  | Confirmed
H5  | INT-AI → DME   | 0.844 | <0.001 | Confirmed
(Author’s own processing).
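The significance levels in Table 5 come from bootstrapping the structural paths. Purely as an illustration, the sketch below approximates such a test for a single path using ordinary least squares on standardized composite scores; this is a simplified stand-in for the PLS-SEM inner-model estimation and bootstrapping performed in SmartPLS 4, not the procedure used to produce the table.

```python
import numpy as np
import pandas as pd

def bootstrap_path(scores: pd.DataFrame, predictor: str, outcome: str,
                   n_boot: int = 5000, seed: int = 42):
    """Bootstrap a standardized simple-regression slope as a rough proxy
    for a PLS path coefficient; returns the estimate and its t statistic."""
    rng = np.random.default_rng(seed)
    z = (scores - scores.mean()) / scores.std(ddof=1)   # standardize composites
    x, y = z[predictor].to_numpy(), z[outcome].to_numpy()
    beta_hat = np.polyfit(x, y, 1)[0]                    # slope on the full sample
    n = len(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):                              # resample respondents with replacement
        idx = rng.integers(0, n, size=n)
        boot[b] = np.polyfit(x[idx], y[idx], 1)[0]
    t_stat = beta_hat / boot.std(ddof=1)                 # estimate divided by bootstrap standard error
    return beta_hat, t_stat

# e.g., bootstrap_path(composites, "DATAQ", "TRUST") would correspond to H1.
```

For paths with several predictors (such as PU, PEOU, and TRUST jointly explaining INT-AI), a multiple regression on the composites would be the analogous approximation.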
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
