Article

Employee Comfort with AI-Driven Algorithmic Decision-Making: Evidence from the GCC and Lebanon

1 Faculty of Business Studies, Arab Open University (AOU), Beirut 2058 4518, Lebanon
2 CIRAME Research Center, Business School, Holy Spirit University of Kaslik, Jounieh P.O. Box 446, Lebanon
* Author to whom correspondence should be addressed.
Adm. Sci. 2026, 16(1), 49; https://doi.org/10.3390/admsci16010049
Submission received: 3 December 2025 / Revised: 26 December 2025 / Accepted: 15 January 2026 / Published: 18 January 2026

Abstract

In this digital era, many companies are integrating Artificial Intelligence (AI)-based automation systems to optimize processes, achieve higher efficiency, and support decision-making. Implementing these changes can raise various challenges, including employee resistance to AI integration. This study examines how employees’ perceived benefits, concerns, and trust regarding AI-driven algorithmic decision-making influence their comfort with such decision-making in the workplace. The study employed a quantitative method, surveying employees in the Gulf Cooperation Council (GCC) and Lebanon with a final sample of 388 participants. The results demonstrate that employees are more likely to feel comfortable with AI-driven algorithmic decision-making in the workplace if they believe AI will increase efficiency, promote fairness, and decrease errors. Unexpectedly, employee concerns were positively associated with comfort, suggesting an adaptive response to AI adoption. Lastly, greater trust in AI systems is positively correlated with comfort with AI-driven algorithmic decision-making. These findings provide actionable guidance to organizations, underscoring the need to communicate clearly about AI’s role, address employees’ concerns through transparency and human oversight, and invest in training and reskilling initiatives that build trust and foster responsible, employee-centered adoption of AI.

1. Introduction

Artificial intelligence (AI) is transforming the workplace by automating complex tasks and reducing the need for human intervention (Korteling et al., 2021; Shekhar, 2019). Although AI adoption has several advantages for companies, organizations face numerous challenges related to employee perceptions of AI (Biswas et al., 2024).
AI utilization is steadily becoming a significant mechanism for development, especially the adoption of AI technologies in the field of human resource management (HRM), given its great capacity to enhance employee performance across various tasks (Olan et al., 2024). Workers’ concerns continue to grow regarding the loss of human intervention, data security, and biases (Masawi et al., 2025), particularly as new subfields of AI emerge, such as Generative AI (Gen-AI), which focuses on creating new content and is expected to revolutionize different industries by 2028 (Sharma & Rathore, 2024). Building on these insights, Choung et al. (2023) examine workers’ perceptions of Algorithmic Decision-Making (ADM) from a psychological standpoint, illustrating positive responses from participants who viewed AI-algorithmic decisions as fairer and more trustworthy than those made by humans. Likewise, Giraldi et al. (2024) report a positive inclination among public-sector employees toward AI technologies perceived as augmentative rather than a replacement for humans.
The extent to which employees feel comfortable with AI-powered decisions depends on their perceptions of the benefits of AI-driven algorithmic decision-making, concerns about its risks, and trust in its reliability (Ivanov & Webster, 2024; Kim et al., 2024; Narayanan et al., 2024). In smart-city settings, Tayeb et al. (2025) found that transparency anxiety and work–life interface mediate how AI influences decision-making outcomes. Taken together, prior research has examined these determinants largely in isolation or within specific domains, such as sectoral settings or smart-city contexts, rather than within an integrated explanatory model focused on employees’ comfort with AI-driven decision-making in the workplace. Moreover, empirical evidence remains limited on how perceived benefits, concerns, and trust jointly operate across organizational contexts characterized by different levels of technological readiness and economic stability.
Employee perceptions play a crucial role in AI adoption and AI-driven ADM, particularly in terms of efficacy, precision, fairness, and error reduction. Employees who perceive AI as enhancing efficiency and fairness are more likely to embrace its execution (Brink et al., 2024; Klein et al., 2024). Nevertheless, concerns such as job displacement, lack of transparency, and complexity can hinder AI acceptance and create resistance (Fahnenstich et al., 2024). Furthermore, trust in AI systems, which is shaped by reliability, transparency, and alignment with human decision-making norms, serves as a central factor influencing employee comfort with AI adoption (Brink et al., 2024; Klein et al., 2024).
Although bibliometric and systematic literature reviews are valuable for mapping thematic streams and identifying research gaps in emerging fields, the present study adopts an empirical approach that integrates key constructs identified in prior research to examine employees’ comfort with AI-driven decision-making across contrasting organizational contexts. Specifically, this study examines the relationship between perceptions, concerns, and trust in determining employees’ comfort levels with AI-driven decision-making, with a particular focus on the Gulf Cooperation Council (GCC) and Lebanon. AI adoption in the GCC is strongly supported by government initiatives, resulting in rapid implementation across various industries. In contrast, Lebanon’s geopolitical and economic instability, coupled with limited technological investments, has slowed AI integration. These contrasting contexts provide an opportunity to explore how employees’ perceptions, concerns, and trust in AI differ based on the level of AI adoption in their respective regions.
Based on the objectives of the study and addressing the gaps in the literature, this study seeks to answer the following questions:
(RQ1) How do employees’ perceptions of AI benefits, such as efficiency, fairness, and error reduction, influence their comfort with AI-driven algorithmic decision-making in the workplace?
(RQ2) How can employees’ concerns about AI-driven algorithmic decision-making, including job displacement, lack of transparency, and complexity, affect their comfort with AI-driven algorithmic decision-making?
(RQ3) How does employees’ level of trust in AI systems impact their comfort with AI-driven decision-making processes in the workplace?
By addressing these questions, this study offers several important contributions. Theoretically, it advances AI adoption research by integrating perceived benefits, concerns, and trust into a unified model that explains employees’ comfort with AI-driven decision-making, an area largely fragmented in prior research. Empirically, the study expands the geographical scope of AI research by providing comparative evidence from the GCC and Lebanon, two contrasting yet underexplored Arab contexts with different levels of digital maturity. This cross-country analysis reveals how context shapes the strength of AI predictors, offering new region-specific insights absent in global literature. Practically, the study provides organizations with a clear roadmap for improving employee comfort with AI systems through transparency, trust-building, training, and contextualized implementation strategies tailored to diverse workforce environments. Together, these contributions enrich current knowledge on human–AI coexistence and support more responsible and employee-centered AI integration in the workplace.

2. Literature Review

2.1. AI and Algorithmic Decision-Making

AI comprises systems capable of performing complex tasks that typically require human intelligence, such as prediction, pattern recognition, and decision automation (Sheikh et al., 2023). AI applications range from narrow AI designed for specific tasks to broader generative systems (Damar et al., 2024). In the workplace, AI’s influence is primarily enacted through ADM, where algorithms collect, process, and model data to support or automate organizational decisions (Araujo et al., 2020). As organizations integrate ADM into hiring, scheduling, performance evaluation, and customer interaction processes, understanding employees’ perceptions becomes essential for successful adoption.

2.2. Theoretical Foundations

2.2.1. Algorithmic Decision-Making Theory

ADM theory posits that algorithms often outperform humans in precision, consistency, and fairness by applying rules uniformly and reducing subjective bias (Mahmud et al., 2022; Kasy & Abebe, 2021). However, ADM raises critical questions around transparency, accountability, and fairness. Binns (2020) highlights that opaque decision processes can undermine trust and exacerbate concerns regarding bias and discrimination. These theoretical mechanisms directly inform employees’ perceptions of benefits (efficiency, fairness, error reduction) and concerns (job displacement, opacity), shaping their comfort with automated decisions.

2.2.2. Responsible AI and Explainable AI

Responsible AI frameworks emphasize ethical, transparent, and fair decision-making by automated systems (Raji et al., 2020). Explainable AI (XAI) promotes transparency by helping employees understand how decisions are generated (Adadi & Berrada, 2020). Lack of explanation increases “transparency anxiety,” lowering fairness perceptions and comfort with ADM (Tayeb et al., 2025). These theories underscore the importance of trust, especially in high-stakes or ambiguous decision environments.

2.2.3. Human–AI Teaming Theory

Human–AI teaming theory suggests that AI should augment rather than replace human expertise. AI provides data-driven insights while humans contribute context, intuition, and ethical judgment (Seeber et al., 2020). Effective collaboration enhances perceptions of AI’s benefits and increases worker comfort, while poor system understanding or weak mutual adaptation can reduce acceptance (Schmutz et al., 2024). This theoretical lens supports the study’s inclusion of efficiency, fairness, and error reduction as key benefit perceptions.

2.3. Perceived Benefits of ADM

ADM is increasingly perceived as a driver of efficiency, accuracy, and workload reduction. Automation enhances productivity, prevents burnout, and supports employees in routine decision processes. Real-time AI analysis enables organizations to adapt more quickly to changing conditions, improving service quality and reducing decision errors (Perez, 2023). Xu et al. (2024) emphasize that workers who develop the competencies needed to collaborate with AI experience improved career outcomes and workplace well-being. These insights align with ADM theory, which argues that algorithms promote fairness through rule-based evaluations (Kasy & Abebe, 2021). However, prior studies rarely examine whether these perceived benefits translate into comfort with ADM in regions undergoing unequal technological development, such as the GCC and Lebanon.

2.4. Concerns About ADM

ADM also triggers major concerns. Rapid digital transformation heightens fears of job displacement and dehumanization. Employees often perceive automation as a threat to their skills, security, and professional identity. Peer influence, job automatability, and perceptions of ethical risks intensify anxiety (Khogali & Mekid, 2023). Zhao et al. (2024) add that fear of unemployment can increase emotional stress and harmful workplace behaviors. Transparency concerns, especially related to algorithmic opacity and lack of explanation, significantly reduce acceptance (Tayeb et al., 2025). While ADM theory predicts that concerns reduce comfort, empirical evidence is contradictory: some studies report resistance (Zirar et al., 2023), whereas others show acceptance when ADM is perceived as fair and unbiased (Chugunova & Luhan, 2024). This inconsistency forms part of the broader theoretical puzzle motivating the present study.

2.5. Trust in ADM

Trust is a central predictor of employee willingness to rely on ADM. Łapińska et al. (2021) found that trust shapes the extent to which AI systems influence decisions, especially in uncertain environments. Distrust arises from skepticism about organizational readiness, fear of job loss, and concerns about fairness or hidden biases. Köchling et al. (2024) argue that workers lose trust when AI-driven career decisions lack transparency or human involvement. Prior studies also highlight fluctuating trust levels across the adoption process (Davenport, 2019; Gillath et al., 2021; Glikson & Woolley, 2020). These dynamics suggest that trust interacts closely with perceived benefits and concerns, yet little is known about how these relationships unfold in different economic and cultural contexts.

2.6. Comfort with ADM

Comfort reflects employees’ ease and willingness to accept automated decisions affecting their work. Zirar et al. (2023) argue that discomfort arises when employees lack understanding of how AI decisions are made or how these decisions affect their roles. Research on human–AI coexistence reveals mixed findings: some scholars attribute discomfort to job insecurity (Arslan et al., 2022; Willcocks, 2020), while others show that employees may prefer AI decision-making over human decision-making in contexts where they perceive greater fairness and objectivity (Chugunova & Luhan, 2024). These inconsistent findings highlight the need for an integrated model examining how benefits, concerns, and trust jointly predict comfort.
As such, all definitions for the examined variables are delineated in Table 1.

2.7. Gap in the Literature

Despite the growing scholarship on ADM, several gaps remain unaddressed. First, prior studies offer inconsistent evidence on how employees evaluate AI in workplace decisions. Some scholars report strong positive reactions based on perceived efficiency and fairness (Ivanov & Webster, 2024; Brink et al., 2024), while others document employee discomfort and fear, particularly regarding job security and opacity (Khogali & Mekid, 2023; Zhao et al., 2024). Additional findings further complicate this picture: in some contexts, workers show no meaningful change in perceived risk (Klein et al., 2024) or even a preference for algorithmic decision-makers when fairness is expected (Chugunova & Luhan, 2024). Second, although trust is widely recognized as a central determinant of AI acceptance (Łapińska et al., 2021; Köchling et al., 2024), existing studies rarely integrate trust jointly with benefit perceptions and concerns in a unified predictive model of employee comfort. Third, almost all empirical research is conducted in Western or highly digitized settings, leaving a gap in understanding how economic instability, institutional fragility, and differing levels of technological investment shape AI perceptions. No existing study compares technologically advanced GCC countries, characterized by high AI readiness and large-scale national AI strategies, with Lebanon’s crisis-affected, low-investment environment. Consequently, little is known about how contextual differences moderate the relationship between benefits, concerns, trust, and comfort with ADM. This study addresses these gaps by developing and testing an integrated model grounded in ADM theory and Responsible AI principles, applied across two sharply contrasting regional contexts.

2.8. Research Context

The study is set in the context of growing AI integration in business operations across a range of sectors, in companies in Lebanon and all six GCC countries: Saudi Arabia, the United Arab Emirates, Qatar, Kuwait, Bahrain, and Oman.
Under Saudi Vision 2030 and the accompanying National Transformation Plan, the Kingdom aims to diversify its largely oil-dependent revenue base, lower its steadily rising budget deficits, balance its budgets, and foster long-term economic growth (Moshashai et al., 2020). As this unfolds, the fast development of the Saudi economy, the biggest in the region, is hard to miss (Mati & Rehman, 2023). Huge investments are being made in technology: at Saudi Arabia’s biggest tech event, the Kingdom recently announced more than $6.4 billion in investments in entrepreneurship and future technologies (Abedalrhman & Alzaydi, 2024). In recent years, new and developing technological innovations have helped the Middle East’s economy, with massive public- and private-sector investments in 5G, blockchain, cloud computing, cybersecurity, AI, and the Internet of Things (IoT).
As per the US-UAE Business Council (2024) report, the UAE is embracing AI to transform key sectors such as technology, healthcare, education, agriculture, logistics, transportation, and energy. The country adopted the National Artificial Intelligence Strategy 2031 in 2017, aiming to become a global leader in AI by 2031. Homegrown companies like G42 are leading this effort, with US-UAE private-sector partnerships playing a pivotal role in AI development. Broadly, similar development applies, at different levels, across all GCC countries in terms of technology and overall stability. In Lebanon, however, things are different; this small country has suffered continuous struggles and crises, whether geopolitical or economic. For example, the 2019 economic crisis, together with the pandemic and the massive Port of Beirut explosion in 2020, impacted all sectors of the country. Added to this are the geopolitical conflicts, especially the devastating war with Israel that started on the 8th of November 2023. Combined, these escalating factors have had a detrimental effect on the economy and on investment in general, which has logically affected investment in the AI and technology sector as well.
In brief, the selection of the GCC and Lebanon is theoretically grounded in ADM theory, Responsible AI principles, and cross-cultural technology adoption research. The GCC region represents high technological readiness, strong regulatory support for AI, and rapid top-down implementation through national strategies such as Saudi Vision 2030 and the UAE AI Strategy 2031. These conditions create environments where ADM systems are institutionally supported, embedded in organizational processes, and aligned with progressive regulatory ecosystems. In contrast, Lebanon provides a crisis-affected setting characterized by institutional fragility, limited technological investment, and economic insecurity, conditions known to heighten concerns about job loss, transparency, and fairness in ADM. These sharply contrasting contexts create a natural comparative setting that allows us to test whether ADM theory, originally developed in stable, digitally mature environments, holds under conditions of economic instability and limited AI exposure. This theoretical rationale strengthens the argument that regional context is not merely descriptive but constitutes a boundary condition that can amplify or weaken the pathways between perceived benefits, concerns, trust, and comfort with AI-driven decision-making.

2.9. Development of Hypotheses

The hypotheses focus on employee comfort as the central outcome variable while distinguishing between different cognitive and evaluative antecedents, namely, perceived benefits, concerns, and trust, to assess their unique and combined effects.

2.9.1. Perceived Benefits of ADM and Employee Comfort

ADM theory suggests that algorithmic systems can outperform human decision-makers by increasing consistency, reducing bias, and enhancing procedural fairness (Mahmud et al., 2022; Kasy & Abebe, 2021). Prior empirical studies show that employees respond positively to AI when they believe it improves efficiency, accuracy, and decision quality (Ivanov & Webster, 2024; Brink et al., 2024). Perceived benefits also influence cognitive and emotional evaluations of AI: when workers believe that ADM reduces errors, enhances fairness, or supports workload reduction, they report higher acceptance and comfort (Perez, 2023; Xu et al., 2024).
Employees may even perceive AI decisions as more objective and less biased compared to human judgments, especially in structured tasks, which strengthens comfort with automated processes (Chugunova & Luhan, 2024). Transparency also plays an important role. When AI systems provide clear explanations for their outputs, employees perceive them as more predictable and less risky (Tayeb et al., 2025). Together, these findings indicate that perceived functional and procedural benefits shape employees’ willingness to rely on AI in workplace decisions. Thus, the first hypothesis of this study is as follows:
H1. 
Employees who perceive AI as improving efficiency, fairness, and reducing errors are more likely to be comfortable with AI-driven algorithmic decision-making.

2.9.2. Concerns About ADM and Employee Comfort

Concerns about algorithmic opacity, job displacement, and fairness constitute major obstacles to workplace AI adoption (Khogali & Mekid, 2023; Zhao et al., 2024). ADM theory notes that when automated systems lack explainability or accountability, employees experience uncertainty, anxiety, and resistance (Binns, 2020). Responsible AI research further shows that perceived ethical risks, such as dehumanization, privacy invasion, or loss of autonomy, reduce employee comfort and acceptance (Narayanan et al., 2024; Köchling et al., 2024).
Transparency anxiety is particularly impactful: employees who do not understand how or why an AI system makes decisions feel unable to challenge decisions or protect their interests (Tayeb et al., 2025). Kim et al. (2024) also show that perceived unfairness or discrimination amplifies discomfort with ADM. Furthermore, Zirar et al. (2023) demonstrate that when employees struggle to comprehend the logic or consequences of automated decisions, they experience distrust and discomfort, especially in ambiguous or high-stakes work settings. Thus, the second hypothesis is as follows:
H2. 
Employees who express concerns about job displacement, lack of transparency, and the complexity of AI decision-making are less likely to be comfortable with AI-driven algorithmic decision-making.

2.9.3. Trust in AI Systems and Employee Comfort

Trust is a central determinant of AI acceptance and a foundational mechanism in ADM theory and Responsible AI frameworks (Łapińska et al., 2021; Raji et al., 2020). When employees perceive AI systems as reliable, transparent, and aligned with organizational values, they are more willing to rely on automated decisions (Glikson & Woolley, 2020). Trust reduces uncertainty and strengthens willingness to follow AI-generated recommendations (Fahnenstich et al., 2024).
Moreover, transparency and fairness explanations directly enhance trust: when AI systems provide interpretable outputs or justification for decisions, employees perceive them as more honest and predictable (Schoeffer et al., 2024; Tayeb et al., 2025). Conversely, distrust arising from perceived bias, opacity, or inconsistent performance decreases comfort with ADM (Köchling et al., 2024). Therefore, trust not only mediates but also directly drives comfort with AI in organizational settings. Thus, the third hypothesis is as follows:
H3. 
Higher levels of trust in AI systems will correlate positively with employee comfort with AI-driven algorithmic decision-making.
The conceptual model of this study is grounded in three complementary theoretical frameworks that collectively explain why perceived benefits, concerns, and trust shape employee comfort with ADM. ADM theory (Mahmud et al., 2022; Binns, 2020; Kasy & Abebe, 2021) highlights how algorithms can deliver consistent and fair decisions through rule-based processes, supporting the positive role of perceived benefits in enhancing comfort with ADM (H1). Simultaneously, ADM theory raises concerns related to opacity, accountability, and perceived unfairness, providing the theoretical foundation for the negative influence of AI-related concerns on comfort (H2). Responsible AI and Explainable AI frameworks (Raji et al., 2020; Adadi & Berrada, 2020) emphasize transparency and interpretability as prerequisites for building trust in AI systems, directly informing the expectation that trust enhances comfort with ADM (H3). Finally, Human–AI Teaming theory (Seeber et al., 2020; Schmutz et al., 2024) suggests that employees’ comfort with ADM depends on the perceived complementarity between human judgment and algorithmic outputs. Together, these theories form the conceptual foundation for the relationships proposed in this study and are visually represented in Figure 1.
Figure 1 visualizes the research framework, showing the relationships between perceived benefits, concerns, and trust in AI systems, and their proposed effects on employees’ comfort with AI-driven decision-making, which are formalized in the three hypotheses and addressed through the study’s research questions.

3. Methodology

3.1. Research Design

This study adopted a positivist philosophical stance and a deductive approach, consistent with explanatory research aimed at testing theoretically grounded hypotheses. A mono-method quantitative design was employed because the research objectives required examining statistical relationships among latent constructs, namely, perceived benefits, concerns, trust, and comfort with ADM, across a large and diverse employee population.
A structured questionnaire was used. No existing validated scale captures the four constructs examined together within the specific context of workplace ADM in Lebanon and the GCC. Therefore, the questionnaire items were newly developed for this study. Item development followed established scale-development guidelines (Hinkin, 1998) and relied on formal theoretical foundations and empirical literature. ADM Theory (Mahmud et al., 2022; Binns, 2020) informed items on efficiency, fairness, transparency, and perceived risk. Responsible AI and Explainable AI (Raji et al., 2020; Adadi & Berrada, 2020; Tayeb et al., 2025) informed items on transparency, reliability, and perceived ethical risks. Human–AI Teaming Theory (Seeber et al., 2020; Schmutz et al., 2024) informed items relating to comfort and human–AI coexistence. Empirical research on job displacement, anxiety, trust, and fairness (Perez, 2023; Khogali & Mekid, 2023; Zhao et al., 2024; Łapińska et al., 2021; Köchling et al., 2024; Zirar et al., 2023) informed item wording and dimension coverage. This ensured that while the scale was new, it was firmly grounded in theory and prior evidence.
As a first step, a brief introduction explained the purpose of the study and the type of questions participants would answer. The first section of the questionnaire covered demographics. The second section contained five Likert-scale agreement statements on the perceived benefits of ADM in the workplace; the third, five statements on concerns about AI in the workplace; the fourth, five statements on trust in AI systems in the workplace; and the fifth, four statements on employees’ comfort with ADM in the workplace.
To ensure content validity, three academic experts in HRM reviewed all items for relevance, clarity, and representativeness. A pre-test with 10 employees from Lebanon and the GCC assessed comprehension and item clarity. Minor stylistic adjustments were made to enhance readability. A pilot test with 20 participants was conducted. No comprehension issues were identified; thus, the same items were used for the main survey. The data collection period was from October 2024 to December 2024.

3.2. Procedures and Sample

Initially, simple random sampling targeted employees across the GCC countries and Lebanon. However, due to challenges in accessing comprehensive and diverse sampling frames across these regions, snowball sampling was also employed to complement the random sampling process. Participants identified through random sampling were asked to recommend peers and colleagues employed in those regions, which allowed additional respondents to be recruited while maintaining the diversity of the sample. These initial participants formed a heterogeneous base for snowball sampling, since they came from different industries, job levels, and countries. Demographic and industry representation was monitored continuously throughout data collection to ensure balance across regions, which reduced, although could not eliminate, the bias inherent to snowball sampling. Participants were eligible if they were 18 years or older, were currently employed, worked in Lebanon or one of the six GCC countries, and had exposure to any form of AI-assisted or algorithmic decision-making in the workplace. To minimize the risk of unqualified respondents, a filter question confirmed employment status and region, IP-duplicate responses were blocked, completion times were screened for inattentive responses, and demographic patterns were monitored for anomalies. Combining both sampling methods yielded a final sample size of 388 participants.
Several procedural steps were taken to reduce potential bias. Anonymity and confidentiality were ensured, items were neutrally phrased, and constructs were placed in separate sections to minimize respondents’ ability to infer relationships. No identifying information was collected, reducing social desirability bias. A filter question confirmed respondent eligibility, and distribution across multiple channels helped avoid interviewer or organizational bias. These procedures help reduce the likelihood of common method and social desirability biases affecting responses.
The questionnaire was uploaded using Google Forms and distributed via WhatsApp, email, and professional social media platforms. Given the multi-channel online distribution method, a precise response rate cannot be determined. Descriptive statistics, reliability tests (Cronbach’s α, Composite Reliability), convergent validity (standardized loadings, AVE), discriminant validity (HTMT), and structural equation modeling (SEM) were used for data analysis.
Ethical considerations were maintained while conducting this study. IRB ethical approval was obtained. Informed consent was obtained using a filter question before filling out the questionnaire. Anonymity and confidentiality of responses were maintained.

4. Findings

4.1. Sample Profile

Table 2 shows the sample profile.

4.2. Model Fit

The results indicate that the measurement model demonstrates an acceptable to excellent fit across multiple indices (Table 3). The Comparative Fit Index (CFI = 0.950) and Normed Fit Index (NFI = 0.945) exceed the recommended threshold of 0.90, indicating strong incremental fit. The Root Mean Square Error of Approximation (RMSEA) value (0.071) falls within the commonly accepted upper bound of 0.08, suggesting reasonable approximation error. The Standardized Root Mean Square Residual (SRMR) (0.034) is well below the 0.08 threshold, reflecting excellent residual fit. Additionally, the Goodness-of-Fit Index (GFI = 0.988) indicates a very high degree of absolute fit. Collectively, these statistics confirm that the measurement model fits the data well and that the latent constructs are measured reliably.
A Harman’s single-factor test was conducted (Table 4) using exploratory factor analysis by forcing all items into a one-factor solution. Factor 1 accounted for 36.0% of the total variance, which is well below the 50% threshold. Therefore, common method variance is unlikely to be a significant concern in this study.
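The intuition behind Harman’s test is the share of total variance a single factor can absorb. A minimal pure-Python sketch of that quantity, approximated by the largest eigenvalue of the item correlation matrix (via power iteration) divided by the number of items, is shown below on a toy matrix, not the study’s data:

```python
def largest_eigenvalue(matrix, iters=500):
    """Dominant eigenvalue of a symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue for the converged vector.
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(mv[i] * v[i] for i in range(n)) / sum(x * x for x in v)

def single_factor_variance_share(corr):
    """Approximate fraction of total variance explained by the first factor."""
    return largest_eigenvalue(corr) / len(corr)

# Toy 4-item correlation matrix with moderate inter-item correlations.
corr = [
    [1.0, 0.3, 0.3, 0.3],
    [0.3, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.3],
    [0.3, 0.3, 0.3, 1.0],
]
print(f"First-factor variance share: {single_factor_variance_share(corr):.1%}")
# 47.5% here -> below the 50% cutoff, so common method variance would be unlikely
```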

4.3. Discriminant Validity

HTMT values below 0.85 are accepted as evidence of discriminant validity. All HTMT values (Table 5) fall below this threshold, confirming adequate discriminant validity between the factors.
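For illustration, the HTMT ratio compares the average correlation between items of different constructs to the (geometric mean of the) average correlations within each construct. The sketch below uses a hypothetical two-construct, four-item correlation matrix, not the study’s data:

```python
from itertools import combinations
from math import sqrt

def htmt(corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio: mean between-construct item correlation
    divided by the geometric mean of the within-construct mean correlations."""
    hetero = [corr[i][j] for i in idx_a for j in idx_b]
    mono_a = [corr[i][j] for i, j in combinations(idx_a, 2)]
    mono_b = [corr[i][j] for i, j in combinations(idx_b, 2)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(hetero) / sqrt(mean(mono_a) * mean(mono_b))

# Toy matrix: items 0-1 measure construct A, items 2-3 measure construct B.
corr = [
    [1.0, 0.8, 0.4, 0.4],
    [0.8, 1.0, 0.4, 0.4],
    [0.4, 0.4, 1.0, 0.8],
    [0.4, 0.4, 0.8, 1.0],
]
print(f"HTMT(A, B) = {htmt(corr, [0, 1], [2, 3]):.2f}")  # 0.50, below the 0.85 cutoff
```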

4.4. Reliability

Coefficient ω and coefficient α should exceed 0.7. Table 6 shows that all factors demonstrate excellent reliability, with ω and α values exceeding 0.8.
As part of the measurement model assessment, the indicator TR2 (Figure 2) displayed a comparatively lower standardized loading than the other items within the Trust construct. To evaluate whether TR2 should be removed, the model was re-estimated with and without this item, and the modification indices were inspected. The results showed that removing TR2 did not improve overall model fit. Furthermore, the Trust construct maintained high internal consistency (α = 0.894; ω = 0.894) when TR2 was retained, indicating that the item did not compromise reliability. Excluding this item would reduce the content validity of the Trust construct by omitting one of its essential conceptual elements. Therefore, TR2 was retained in the final model, as both psychometric evidence and theoretical justification support its inclusion.
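Cronbach’s α, the internal-consistency coefficient reported above, can be computed directly from item scores as α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch on hypothetical Likert data (not the study’s responses):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum scores
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Toy 3-item, 6-respondent example (hypothetical Likert scores).
items = [
    [4, 3, 5, 2, 4, 3],  # item 1
    [4, 3, 5, 3, 5, 4],  # item 2
    [5, 4, 5, 2, 4, 3],  # item 3
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}")  # ~0.907, above the conventional 0.7 cutoff
```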

4.5. Structural Equation Modeling

The results in Table 7 and Figure 2 show that perceived benefits have a positive and significant impact on comfort with ADM (β = 0.186; p < 0.001). Contrary to the hypothesis, concerns about job displacement, lack of transparency, and complexity do not negatively affect comfort; instead, they have a positive impact on comfort with ADM (β = 0.163; p < 0.001), which may indicate that not all concerns are strongly felt or that mitigating factors are present. Finally, trust in AI systems shows the strongest positive relationship with comfort with ADM (β = 0.508; p < 0.001).

4.6. SEM Based on Country of Residence

To further validate the results and conduct a comparative analysis among the surveyed countries, SEM was conducted for each country separately (Table 8) to provide insights into how the three variables impact comfort with ADM across different countries. This analysis is exploratory in nature and intended to illustrate contextual variation rather than to support country-level generalization, given the size of the national subsamples.
First, in Bahrain, PB shows a strong positive and significant impact on COMF (β = 0.242; p < 0.001), CON shows a positive but weaker effect (β = 0.157; p = 0.024), and TR shows the strongest positive and significant impact (β = 0.430; p < 0.001).
Second, in Kuwait, PB shows a nonsignificant impact on COMF (β = 0.105; p = 0.145), CON shows a positive and significant impact (β = 0.203; p = 0.040), and TR shows the strongest positive and significant impact (β = 0.631; p < 0.001).
Third, in Lebanon, PB shows a positive and significant impact on COMF (β = 0.105; p = 0.025), CON shows a positive and significant impact (β = 0.168; p = 0.010), and TR shows the strongest positive and significant impact (β = 0.629; p < 0.001).
Fourth, in Oman, PB shows a positive and significant impact on COMF (β = 0.263; p < 0.001), CON shows a positive but weaker impact (β = 0.136; p = 0.049), and TR shows a positive and significant impact (β = 0.377; p < 0.001).
Fifth, in Qatar, PB shows a nonsignificant impact on COMF (β = 0.118; p = 0.110), CON shows a positive and significant impact (β = 0.149; p = 0.037), and TR shows a positive and significant impact (β = 0.559; p < 0.001).
Sixth, in Saudi Arabia, PB shows a nonsignificant impact on COMF (β = 0.109; p = 0.135), CON shows a positive and significant impact (β = 0.204; p = 0.004), and TR shows the strongest positive and significant impact (β = 0.622; p < 0.001).
Finally, in the UAE, PB shows a significant positive impact on COMF (β = 0.309; p < 0.001), CON shows a positive and significant impact (β = 0.152; p = 0.021), and TR shows a positive and significant impact (β = 0.359; p < 0.001).

5. Discussion

5.1. Perceived Benefits of ADM and Employee Comfort

The findings demonstrate that employees’ perceptions of AI’s functional and procedural benefits, namely efficiency, fairness, and error reduction, significantly increase their comfort with ADM. This supports H1 and is consistent with previous research showing that employees tend to view ADM positively when it enhances accuracy and consistency in workplace decisions (Ivanov & Webster, 2024). Brink et al. (2024) similarly emphasize that clear communication of AI’s advantages increases acceptance by helping workers understand how automation improves decision quality. In addition, the importance of fairness highlighted by Schoeffer et al. (2024) resonates with the present study’s finding that employees feel more at ease with ADM when they perceive it as equitable.
However, the multi-country analysis reveals that this relationship is context-dependent. While perceived benefits significantly predicted comfort in Bahrain, Oman, the UAE, and Lebanon, the relationship was nonsignificant in Kuwait, Qatar, and Saudi Arabia. These differences suggest that the strength of perceived benefits may vary across national environments depending on the maturity of workplace digital transformation, organizational readiness, and employees’ direct experiences with AI tools. Some GCC countries may have invested heavily in high-level AI infrastructure, yet employees might not directly observe or internalize the benefits at the operational level, reducing the impact of perceived advantages on comfort. Taken together, these results indicate that perceived benefits matter, but their influence depends on broader institutional, economic, and organizational factors.

5.2. Concerns About AI and Employee Comfort

In contrast to expectations, concerns about AI, including job displacement, lack of transparency, and complexity, were positively associated with comfort. This unexpected finding, which contradicts H2, was consistently observed across all countries. Several contextual and psychological explanations may account for this pattern.
First, concerns may reflect heightened awareness rather than resistance. Employees who think critically about AI may also be better informed about its limitations and advantages, leading to a more nuanced and engaged form of comfort rather than fear or rejection.
Second, in environments undergoing rapid technological development, such as several GCC countries, employees may adopt adaptive behaviors such as upskilling or proactively seeking training. In such cases, concerns become part of an adjustment process that ultimately increases comfort with AI. Third, trust may act as a mitigating force, dampening the negative emotional impact of concerns. When employees believe that ADM systems or their organizations are fair, reliable, or transparent, concerns may evolve into cautious optimism rather than discomfort.
Cultural norms may also play a role. In many Arab workplace environments, expressing concerns is not synonymous with rejecting a system; rather, it often reflects a desire for procedural fairness and accountability. Workers may feel more comfortable when they can voice concerns, especially if they believe their organizations are responsive to feedback. These interpretations diverge from studies such as Wesche et al. (2024) and Kim et al. (2024), which emphasize the negative consequences of concerns on trust and acceptance, yet align with more recent evidence showing that employees may accept ADM when they perceive it as fair, even if concerns persist (Chugunova & Luhan, 2024).
In brief, the findings suggest that concerns may be part of an adaptive process rather than a barrier in contexts where AI adoption is highly visible or strongly supported by national initiatives.

5.3. Trust in AI Systems and Employee Comfort

Trust emerged as the strongest predictor of comfort in both the overall sample and all individual country models, offering clear support for H3. This result affirms the central role of trust in ADM theory and Responsible AI frameworks, which argue that employees must perceive AI systems as transparent, predictable, and aligned with ethical principles for comfort to arise (Adadi & Berrada, 2020; Raji et al., 2020). The universality of this relationship across all seven countries suggests that trust operates as a foundational psychological mechanism irrespective of national context, economic conditions, or differences in digital maturity.
Trust’s strong influence may stem from its ability to reduce uncertainty and ease concerns about fairness or job security. When employees believe that AI systems make decisions in reliable and transparent ways, they are more willing to accept automated outcomes even when they carry personal consequences. This aligns with the findings of Park et al. (2024), who observed that trust in AI tools was central to predicting workplace acceptance, and Fahnenstich et al. (2024), who found that trust behaviors tend to favor AI over humans when tasks involve higher levels of risk.
The country-level analyses indicate that trust consistently shapes comfort across all regions, while perceived benefits and concerns display more variation depending on national economic stability, AI maturity, and organizational practices. Countries like the UAE, Bahrain, and Oman show a strong alignment between benefits and comfort, likely reflecting more embedded digital infrastructures and transparent AI integration. In contrast, nonsignificant effects in Kuwait, Qatar, and Saudi Arabia may reflect fast-paced, top-down AI adoption that employees have not yet fully assimilated. Lebanon’s significant relationships may stem from the heightened value employees place on reliability and efficiency in a crisis-affected economy, making both benefits and trust especially salient.
In brief, the findings of this study demonstrate that employee comfort with AI-driven ADM cannot be reduced to a simple or purely deductive logic whereby positive perceptions and trust automatically translate into acceptance. While perceived benefits (H1) and trust in AI systems (H3) show positive effects, the positive association between concerns and comfort (H2) is theoretically non-trivial and challenges dominant assumptions in the literature. This pattern suggests that employees may actively adapt to AI-driven decision environments by reconciling perceived risks with anticipated benefits and institutional realities, pointing to a more complex and dynamic process of human–AI interaction than previously assumed.

6. Conclusions

This study addressed three research questions. Regarding RQ1, the findings show that employees’ perceptions of AI benefits, particularly efficiency, fairness, and error reduction, positively influence their comfort with AI-driven decision-making. Regarding RQ2, the results reveal that employee concerns about AI, including job displacement, transparency, and complexity, do not reduce comfort as initially expected; instead, these concerns are positively associated with comfort, suggesting an adaptive or awareness-based response to AI adoption. Finally, in response to RQ3, trust in AI systems emerged as the strongest and most consistent predictor of employee comfort across all examined contexts.

6.1. Theoretical Implications

This study makes several theoretical contributions to the literature on ADM, Responsible AI, and human–AI collaboration. First, it extends ADM theory by demonstrating that employee comfort with ADM is shaped not only by perceived procedural advantages such as fairness and efficiency but also by deeper relational mechanisms such as trust. While existing ADM theories highlight the technical superiority of algorithmic processes, the present study shows that psychological perceptions, particularly the interplay between benefits, concerns, and trust, must be integrated into the theoretical understanding of how employees evaluate AI-driven decisions in organizational settings.
Second, the findings challenge prevailing assumptions in Responsible AI and XAI frameworks, which typically treat concerns, such as job displacement, opacity, and complexity, as barriers to acceptance. The unexpected positive association between concerns and comfort suggests that concerns may function as indicators of awareness, engagement, or proactive monitoring rather than simple resistance. This introduces a theoretically novel proposition: in environments undergoing rapid AI expansion or economic instability, concerns may coexist with acceptance, forming a more adaptive or reflective orientation toward ADM. This invites refinement of Responsible AI models to recognize different “types” of concerns, some of which may facilitate rather than hinder acceptance.
Third, the study reinforces the centrality of trust as a universal mechanism across diverse national and economic contexts. While prior research has emphasized trust at a conceptual level, the present findings empirically demonstrate its dominant role in shaping comfort across all seven countries studied, regardless of differences in technological readiness, digital maturity, or organizational infrastructure. This provides stronger theoretical support for integrating trust as a core construct within ADM and Human–AI Teaming theories, rather than treating it as a secondary or contextual factor.
Finally, by empirically testing a unified model combining perceived benefits, concerns, and trust, the study contributes a more holistic theoretical framework for understanding comfort with ADM. Existing theoretical approaches, such as fairness-based models, accuracy-focused frameworks, or cognitive structuration perspectives, tend to isolate individual predictors. This study advances theory by showing that comfort emerges from the combined influence of evaluative (benefits), affective (concerns), and relational (trust) mechanisms. The cross-regional comparison further highlights the need for ADM theories to incorporate contextual moderators, especially economic resilience and technological investment levels, when predicting employee responses to AI.
In brief, these theoretical implications show that comfort with ADM is a multidimensional construct that cannot be fully explained by existing linear models. The study contributes by refining theoretical expectations, challenging assumptions about concerns, elevating trust as a foundational mechanism, and proposing an integrated framework that better reflects employee experiences in digitally evolving workplaces.

6.2. Practical Implications

Based on the findings, several practical implications are provided.
First, using success stories and real-life cases, businesses can improve employee perceptions of AI’s advantages by showing how it reduces errors, makes decisions more fairly, and increases efficiency. Additionally, they can train staff on the effective use of AI systems and involve them in implementation so that employees understand the systems’ value and can align them with their needs.
Second, companies should mitigate concerns about AI by increasing transparency, addressing job security, simplifying AI complexity, and establishing open communication channels.
Third, businesses should prioritize dependability and integrity, audit and monitor AI systems, integrate human oversight, and promote familiarity to establish and preserve trust in these systems.
Fourth, to address cultural and contextual differences in employee perceptions across countries, businesses are advised to tailor AI communication and implementation strategies to local attitudes (e.g., higher trust in the UAE versus higher skepticism in Kuwait and Lebanon). Additionally, they could show how AI systems are specifically relevant to the sector, ensuring that use cases align with workers’ duties and responsibilities.
Fifth, it is essential to invest in learning and development. Employees should have the skills needed to use AI, which will reduce concerns about losing their jobs and increase their confidence in utilizing AI tools. Businesses can demystify AI and help staff understand its practical advantages and ethical implications through educational initiatives such as workshops, webinars, and internal communications. Along with this, a feedback loop should be set up to evaluate employees’ attitudes towards AI regularly, allowing the identification of changing concerns, perceptions, and trust levels. This feedback can then be used to improve AI systems, with an emphasis on usability, transparency, and perceived fairness.
Finally, showing instances where AI assists employees in decision-making rather than functioning independently is essential to promoting AI as a collaborative tool and reaffirming its function as a partner rather than a substitute. By using cross-functional teams to deploy AI systems, businesses can guarantee a team-based implementation and gain support from a variety of employee groups.
These practical steps collectively address employee perceptions, concerns, and trust in AI, fostering greater acceptance and comfort with ADM in the workplace.

6.3. Limitations and Further Research

Although this study provides important insights into employees’ comfort with AI-driven decision-making in Lebanon and the GCC, several limitations must be acknowledged, each of which opens pathways for future research.
First, the study relied on a cross-sectional, self-reported survey, which limits causal inference and may be affected by perceptual biases. Although procedural steps were taken to reduce common method bias, future studies should incorporate longitudinal designs to examine how comfort, trust, and concerns evolve as ADM systems become more embedded in organizational workflows. Longitudinal research would also help capture dynamic shifts in employee attitudes as AI capabilities mature. Further, given the cross-sectional design, the directionality of the relationships is theoretically grounded but cannot be interpreted as strictly causal; future longitudinal or experimental studies could explore potential reciprocal effects between employee comfort and AI-related perceptions, concerns, and trust.
Second, the study employed a mono-method quantitative approach, which, while appropriate for hypothesis testing, restricts the depth of contextual understanding. Future research could adopt qualitative or mixed methods designs, such as interviews, focus groups, ethnographic observation, or multi-source data collection, to explore deeper psychological mechanisms underlying trust, fairness perceptions, fear, and ethical judgments.
Third, the study was conducted in Lebanon and the six GCC countries, which represent contrasting economic and technological environments. While this strengthens contextual relevance, generalizability remains limited. Also, while the study includes respondents from multiple countries, the country-level subsamples are not intended to be statistically representative. Accordingly, cross-country comparisons are interpreted as exploratory and illustrative of contextual variation rather than as grounds for national-level generalization. Future studies should extend the model to other geographical settings, such as Europe, East Asia, and Africa, to examine whether institutional, cultural, and regulatory differences moderate how employees perceive ADM systems. Cross-cultural comparative work is particularly encouraged in the ADM literature.
Fourth, although the study examined the direct relationships between perceived benefits, concerns, trust, and comfort, it did not investigate mediating or moderating mechanisms that may alter these effects. Prior research suggests the importance of factors such as organizational transparency, technological readiness, job role, digital literacy, and fairness perceptions. Future research could test whether trust mediates or buffers the negative impact of concerns on comfort; technological maturity, job complexity, or hierarchical position moderate acceptance of ADM; and individual differences (e.g., risk tolerance, AI literacy, openness to change) influence ADM comfort. Such analyses would deepen the theoretical nuance of ADM adoption processes.
Fifth, although the measurement model demonstrated strong validity, the study relied on a newly developed instrument. While item development followed theoretical and statistical validation procedures, future studies should further validate and refine the scale using confirmatory factor analysis on independent samples, measurement invariance testing across countries, and triangulation with behavioral or objective data.
Finally, the study used an online survey distributed through digital platforms, which may introduce sampling bias and limit control over response environments. Future research should consider organizationally administered surveys, stratified sampling, or multi-organizational datasets to enhance representativeness and reduce self-selection effects.
In brief, by addressing these limitations, future research can advance a more integrated understanding of how employees experience, evaluate, and adapt to AI-driven decision-making systems, thereby contributing to the refinement of ADM theory and the development of responsible and human-centered AI practices in the workplace.

6.4. Contributions of the Study

The insights provided in this study support a more comprehensive understanding of employees’ perceptions across multifaceted criteria of fairness, predictability, and equity in divergent cultural contexts. This differs from previous studies whose findings had limited generalizability because participants belonged to a single country, such as Majrashi’s (2025) investigation of Saudi Arabian employees’ perceptions of AI fairness and Mabungela’s (2023) evaluation of whether South Africans view AI and automated workplaces as a threat. Majrashi (2025) suggested including respondents from various countries to investigate the development of culturally aware AI algorithms.
Also, this study contributes a more holistic approach to management practices and strategies concerning the adoption of AI-ADM systems in the workplace. It was conducted on employees in the GCC and Lebanon and focused intentionally on this region, where fewer similar studies have been conducted, especially in Lebanon. The findings raise awareness and help top leaders and decision-makers in GCC and Lebanese organizations anticipate and overcome possible challenges and resistance to change during the integration of AI-based automated systems. By considering this study’s findings and suggestions, businesses can introduce new AI-based automation systems more efficiently, enhance management-employee communication, and reduce employee turnover by improving employees’ flexibility and resilience. In this way, the study contributes to greater productivity and a higher success rate when introducing AI-based systems in the workplace.
Considering the growing interest in comprehending AI’s impact on employees in workplaces, this study advances the discourse on AI decision-making by emphasizing that AI applications are best adopted as decision-support tools in organizations and cannot serve as the sole decision-maker, given employees’ expressed concerns and distrust. Thus, our study aligns with human and organizational values: we shed light on the importance of humans as final decision-makers and advocate a well-considered approach to AI adoption that weighs the benefits and concerns of automation inside workplaces and thereby allows workers to become comfortable with ADM. As such, this study offers a new angle for understanding organizational decision systems by examining the interaction between humans and technology acceptance from a decision-making perspective.

Author Contributions

Conceptualization, S.E.A., W.L. and N.J.A.M.; methodology, W.L. and N.J.A.M.; software, N.J.A.M.; validation, N.J.A.M.; formal analysis, N.J.A.M.; investigation, W.L. and N.J.A.M.; resources, W.L.; writing—original draft preparation, S.E.A., D.A., W.L. and N.J.A.M.; writing—review and editing, D.A. and N.J.A.M.; supervision, N.J.A.M.; project administration, N.J.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee (REC) of the Arab Open University, Lebanon (protocol code AOU-IRB-2024-125; date of approval 9 September 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to ethical reasons (confidentiality and privacy).

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Abedalrhman, K., & Alzaydi, A. (2024). Saudi Arabia’s strategic leap towards a diversified economy and technological innovation. Available online: https://ssrn.com/abstract=5048258 (accessed on 1 September 2025).
  2. Adadi, A., & Berrada, M. (2020). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. [Google Scholar] [CrossRef]
  3. Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. [Google Scholar]
  4. Arslan, A., Cooper, C., Khan, Z., Golgeci, I., & Ali, I. (2022). Artificial intelligence and human workers interaction at team level: A conceptual assessment of the challenges and potential HRM strategies. International Journal of Manpower, 43(1), 75–88. [Google Scholar] [CrossRef]
  5. Binns, R. (2020, January 27–30). On the apparent conflict between individual and group fairness. 2020 Conference on Fairness, Accountability, and Transparency (FAT) (pp. 514–524), Barcelona, Spain. [Google Scholar]
  6. Biswas, M. I., Talukder, M. S., & Khan, A. R. (2024). Who do you choose? Employees’ perceptions of artificial intelligence versus humans in performance feedback. China Accounting and Finance Review, 26(4), 512–532. [Google Scholar] [CrossRef]
  7. Brink, A., Benyayer, L. D., & Kupp, M. (2024). Decision-making in organizations: Should managers use AI? Journal of Business Strategy, 45(4), 267–274. [Google Scholar] [CrossRef]
  8. Choung, H., Seberger, J. S., & David, P. (2023). When AI is perceived to be fairer than a human: Understanding perceptions of algorithmic decisions in a job application context. International Journal of Human–Computer Interaction, 40(22), 7451–7468. [Google Scholar] [CrossRef]
  9. Chugunova, M., & Luhan, W. J. (2024). Ruled by robots: Preference for algorithmic decision makers and perceptions of their choices. Public Choice, 202(1), 1–24. [Google Scholar] [CrossRef]
  10. Damar, M., Özen, A., Çakmak, Ü. E., Özoğuz, E., & Erenay, F. S. (2024). Super AI, generative AI, narrow AI and chatbots: An assessment of artificial intelligence technologies for the public sector and public administration. Journal of AI, 8(1), 83–106. [Google Scholar] [CrossRef]
  11. Davenport, T. H. (2019). Can we solve AI’s ‘trust problem’? MIT Sloan Management Review, 60(2), 1. [Google Scholar]
  12. Fahnenstich, H., Rieger, T., & Roesler, E. (2024). Trusting under risk–comparing human to AI decision support agents. Computers in Human Behavior, 153, 108107. [Google Scholar] [CrossRef]
  13. Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607. [Google Scholar] [CrossRef]
  14. Giraldi, L., Rossi, L., & Rudawska, E. (2024). Evaluating public sector employee perceptions towards artificial intelligence and generative artificial intelligence integration. Journal of Information Science, 01655515241293775. [Google Scholar] [CrossRef]
  15. Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. [Google Scholar] [CrossRef]
  16. Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121. [Google Scholar] [CrossRef]
  17. Ivanov, S., & Webster, C. (2024). Automated decision-making: Hoteliers’ perceptions. Technology in Society, 76, 102430. [Google Scholar] [CrossRef]
  18. Kasy, M., & Abebe, R. (2021, March 3–10). Fairness, equality, and power in algorithmic decision-making. 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 576–586), Toronto, ON, Canada. [Google Scholar] [CrossRef]
  19. Khogali, H. O., & Mekid, S. (2023). The blended future of automation and AI: Examining some long-term societal and ethical impact features. Technology in Society, 73, 102232. [Google Scholar] [CrossRef]
  20. Kim, S., Oh, P., & Lee, J. (2024). Algorithmic gender bias: Investigating perceptions of discrimination in automated decision-making. Behaviour & Information Technology, 43(16), 4208–4221. [Google Scholar] [CrossRef]
21. Klein, U., Depping, J., Wohlfahrt, L., & Fassbender, P. (2024). Application of artificial intelligence: Risk perception and trust in the work context with different impact levels and task types. AI & Society, 39(5), 2445–2456.
22. Korteling, J. H., van de Boer-Visschedijk, G. C., Blankendaal, R. A., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus artificial intelligence. Frontiers in Artificial Intelligence, 4, 622364.
23. Köchling, A., Wehner, M. C., & Ruhle, S. A. (2024). This (AI)n’t fair? Employee reactions to artificial intelligence (AI) in career development systems. Review of Managerial Science, 19(4), 1195–1228.
24. Łapińska, J., Escher, I., Gorka, J., Sudolska, A., & Brzustewicz, P. (2021). Employees’ trust in artificial intelligence in companies: The case of energy and chemical industries in Poland. Energies, 14(7), 1942.
25. Mabungela, M. (2023). Artificial intelligence (AI) and automation in the world of work: A threat to employees? Research in Social Sciences and Technology, 8(4), 135–146.
26. Mahmud, H., Islam, A. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390.
27. Majrashi, K. (2025). Employees’ perceptions of the fairness of AI-based performance prediction features. Cogent Business & Management, 12(1), 2456111.
28. Masawi, T. J., Miller, E., Rees, D., & Thomas, R. (2025). Clinical perspectives on AI integration: Assessing readiness and training needs among healthcare practitioners. Journal of Decision Systems, 34(1), 2458874.
29. Mati, A., & Rehman, S. (2023). Saudi Arabia’s economy grows as it diversifies. International Monetary Fund. Available online: https://www.imf.org/en/news/articles/2023/09/28/cf-saudi-arabias-economy-grows-as-it-diversifies (accessed on 4 January 2025).
30. Moshashai, D., Leber, A. M., & Savage, J. D. (2020). Saudi Arabia plans for its economic future: Vision 2030, the national transformation plan and Saudi fiscal reform. British Journal of Middle Eastern Studies, 47(3), 381–401.
31. Narayanan, D., Nagpal, M., McGuire, J., Schweitzer, S., & De Cremer, D. (2024). Fairness perceptions of artificial intelligence: A review and path forward. International Journal of Human–Computer Interaction, 40(1), 4–23.
32. Olan, F., Nyuur, R. B., & Arakpogun, E. O. (2024). AI: A knowledge sharing tool for improving employees’ performance. Journal of Decision Systems, 33(4), 700–720.
33. Park, J., Woo, S. E., & Kim, J. (2024). Attitudes towards artificial intelligence at work: Scale development and validation. Journal of Occupational and Organizational Psychology, 97(3), 920–951.
34. Perez, J. (2023). How automation drives business growth and efficiency. Harvard Business Review. Available online: https://hbr.org/sponsored/2023/04/how-automation-drives-business-growth-and-efficiency (accessed on 3 January 2025).
35. Raji, I. D., Scheuerman, M. K., & Binns, R. (2020). The fallacy of AI functionalism: Why deploying faulty AI harms employees. ACM Transactions on Management Information Systems, 12(4), 1–19.
36. Schmutz, J. B., Outland, N., Kerstan, S., Georganta, E., & Ulfert, A.-S. (2024). AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology, 58, 101837.
37. Schoeffer, J., De-Arteaga, M., & Kuehl, N. (2024, May 11–16). Explanations, fairness, and appropriate reliance in human-AI decision-making. CHI Conference on Human Factors in Computing Systems (CHI ’24) (pp. 1–18), Article 836. Honolulu, HI, USA.
38. Seeber, I., Bittner, E., Briggs, R. O., De Vreede, T., De Vreede, G. J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174.
39. Sharma, A. J., & Rathore, B. (2024). Examine the enablers of generative artificial intelligence adoption in supply chain: A mixed method study. Journal of Decision Systems, 1–33.
40. Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial intelligence: Definition and background. In Mission AI: The new system technology (pp. 15–41). Springer International Publishing.
41. Shekhar, S. S. (2019). Artificial intelligence in automation. Artificial Intelligence, 3085(06), 14–17.
42. Tayeb, A., Alzubi, A., & Iyiola, K. (2025). Artificial intelligence and smart decision making in smart cities: A parallel moderated mediation approach. International Journal of Urban Sciences, 29(4), 753–781.
43. US-UAE Business Council. (2024). The UAE’s big bet on artificial intelligence. US-UAE Business Council Report. Available online: https://forms.worldgovernmentssummit.org/press/releases/wgs-2024--uae-s-big-bet-on-artificial-intelligence (accessed on 16 September 2025).
44. Wesche, J. S., Hennig, F., Kollhed, C. S., Quade, J., Kluge, S., & Sonderegger, A. (2024). People’s reactions to decisions by human vs. algorithmic decision-makers: The role of explanations and type of selection tests. European Journal of Work and Organizational Psychology, 33(2), 146–157.
45. Willcocks, L. (2020). Robo-Apocalypse cancelled? Reframing the automation and future of work debate. Journal of Information Technology, 35(4), 286–302.
46. Xu, G., Xue, M., & Zhao, J. (2024). The relationship of artificial intelligence opportunity perception and employee workplace well-being: A moderated mediation model. International Journal of Environmental Research and Public Health (IJERPH), 20(3), 1974.
47. Zhao, H., Yuan, B., & Song, Y. (2024). Employees’ perception of generative artificial intelligence and the dark side of work outcomes. Journal of Hospitality and Tourism Management, 61, 191–199.
48. Zirar, A., Ali, S. I., & Islam, N. (2023). Worker and workplace artificial intelligence (AI) coexistence: Emerging themes and research agenda. Technovation, 124, 102747.
Figure 1. Conceptual Model.
Figure 2. Path Diagram.
Table 1. Operational Definitions.

| Variables | Operational Definitions | References |
|---|---|---|
| Perceived Benefits of ADM | Employees’ beliefs that AI enhances efficiency, fairness, and accuracy, and reduces errors in workplace decisions. | Perez (2023); Xu et al. (2024) |
| Concerns about ADM | Employees’ fears related to job displacement, lack of transparency, ethical risks, and the complexity of AI-based decision processes. | Khogali and Mekid (2023); Zhao et al. (2024); Tayeb et al. (2025) |
| Trust in ADM | The extent to which employees believe that AI systems are reliable, transparent, and capable of making fair decisions. | Łapińska et al. (2021); Köchling et al. (2024) |
| Comfort with ADM | The degree to which employees feel at ease relying on AI-generated decisions that affect their tasks or outcomes. | Zirar et al. (2023) |
Table 2. Sample Profile.

| Category | Subcategory | Percentage |
|---|---|---|
| Age | 21–25 | 11.3 |
| | 26–30 | 14.2 |
| | 31–35 | 17.5 |
| | 36–40 | 25.5 |
| | 41–45 | 17.8 |
| | 46 and above | 13.7 |
| Gender | Male | 57.2 |
| | Female | 42.8 |
| Total | | 100 |
| Years of experience | Less than 1 year | 8.5 |
| | 1–3 years | 18.3 |
| | 4–6 years | 17.8 |
| | 7–10 years | 12.1 |
| | More than 10 years | 43.3 |
| Sector | Public Sector | 13.7 |
| | Private Sector | 86.3 |
| Position | Employee | 45.9 |
| | Manager | 36.1 |
| | Supervisor/Team Leader | 18.0 |
| Educational level | Bachelor’s or below | 53.1 |
| | Master’s | 41.5 |
| | Doctorate | 5.4 |
| Country of Residence | Kuwait | 15.1 |
| | Lebanon | 31.7 |
| | Qatar | 20.7 |
| | Saudi Arabia | 16.3 |
| | United Arab Emirates | 16.2 |
Table 3. Model Fit Measures.

| Metric | Value |
|---|---|
| CFI | 0.950 |
| NFI | 0.945 |
| RMSEA | 0.071 |
| SRMR | 0.034 |
| GFI | 0.988 |
Table 4. Harman’s Single-Factor Test.

| | Eigenvalues | SumSq. Loadings (Unrotated) | Proportion Var. (Unrotated) | Cumulative (Unrotated) | SumSq. Loadings (Rotated) | Proportion Var. (Rotated) | Cumulative (Rotated) |
|---|---|---|---|---|---|---|---|
| Factor 1 | 6.342 | 5.756 | 0.360 | 0.360 | 5.694 | 0.356 | 0.356 |
Table 5. HTMT Ratio.

| | Factor 1 | Factor 2 | Factor 3 | Factor 4 |
|---|---|---|---|---|
| Factor 1 | 1.000 | | | |
| Factor 2 | 0.487 | 1.000 | | |
| Factor 3 | 0.611 | 0.443 | 1.000 | |
| Factor 4 | 0.405 | 0.103 | 0.516 | 1.000 |
Table 6. Reliability Test.

| | Coefficient ω | Coefficient α |
|---|---|---|
| Factor 1 | 0.904 | 0.904 |
| Factor 2 | 0.894 | 0.894 |
| Factor 3 | 0.858 | 0.842 |
| Factor 4 | 0.850 | 0.848 |
| Total | 0.892 | 0.755 |
Table 7. Regression Coefficients.

| Predictor | Outcome | Estimate | Std. Error | p |
|---|---|---|---|---|
| PB | COMF | 0.186 | 0.027 | <0.001 |
| CON | COMF | 0.163 | 0.026 | <0.001 |
| TR | COMF | 0.508 | 0.034 | <0.001 |
Table 8. Regression Coefficients Based on Country of Residence.

| Group | Predictor | Outcome | Estimate | Std. Error | p |
|---|---|---|---|---|---|
| Bahrain | PB | COMF | 0.242 | 0.073 | <0.001 |
| | CON | COMF | 0.157 | 0.070 | 0.024 |
| | TR | COMF | 0.430 | 0.085 | <0.001 |
| Kuwait | PB | COMF | 0.105 | 0.072 | 0.145 |
| | CON | COMF | 0.203 | 0.071 | 0.004 |
| | TR | COMF | 0.631 | 0.090 | <0.001 |
| Lebanon | PB | COMF | 0.150 | 0.067 | 0.025 |
| | CON | COMF | 0.168 | 0.066 | 0.010 |
| | TR | COMF | 0.629 | 0.101 | <0.001 |
| Oman | PB | COMF | 0.263 | 0.074 | <0.001 |
| | CON | COMF | 0.136 | 0.069 | 0.049 |
| | TR | COMF | 0.377 | 0.087 | <0.001 |
| Qatar | PB | COMF | 0.118 | 0.074 | 0.110 |
| | CON | COMF | 0.149 | 0.072 | 0.037 |
| | TR | COMF | 0.559 | 0.089 | <0.001 |
| Saudi Arabia | PB | COMF | 0.109 | 0.073 | 0.135 |
| | CON | COMF | 0.204 | 0.071 | 0.004 |
| | TR | COMF | 0.622 | 0.095 | <0.001 |
| United Arab Emirates | PB | COMF | 0.309 | 0.075 | <0.001 |
| | CON | COMF | 0.152 | 0.066 | 0.021 |
| | TR | COMF | 0.359 | 0.085 | <0.001 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

El Achi, S.; Aoun, D.; Lahad, W.; Jabbour Al Maalouf, N. Employee Comfort with AI-Driven Algorithmic Decision-Making: Evidence from the GCC and Lebanon. Adm. Sci. 2026, 16, 49. https://doi.org/10.3390/admsci16010049
