1. Introduction
Artificial intelligence (AI) is transforming the workplace by automating complex tasks and reducing the need for human intervention (
Korteling et al., 2021;
Shekhar, 2019). Although AI adoption has several advantages for companies, organizations face numerous challenges related to employee perceptions of AI (
Biswas et al., 2024).
AI is increasingly becoming a significant mechanism for organizational development, particularly through the adoption of AI technologies in the field of human resource management (HRM), given its great capacity to enhance employee performance across various tasks (
Olan et al., 2024). Workers’ concerns continue to grow regarding the loss of human intervention, data security, and biases (
Masawi et al., 2025), particularly with burgeoning subfields of AI emerging, such as Generative AI (Gen-AI), which focuses on creating new content and is projected to revolutionize various industries by 2028 (
Sharma & Rathore, 2024). Building on these insights,
Choung et al. (
2023) examine workers’ perceptions of Algorithmic Decision-Making (ADM) from a psychological perspective, illustrating positive responses from participants who viewed AI-algorithmic decisions as fairer and more trustworthy than those made by humans. Likewise,
Giraldi et al. (
2024) report a positive inclination among public-sector employees toward AI technologies perceived as augmenting rather than replacing human work.
The extent to which employees feel comfortable with AI-powered decisions depends on their perceptions of the benefits of AI-driven algorithmic decision-making, concerns about its risks, and trust in its reliability (
Ivanov & Webster, 2024;
Kim et al., 2024;
Narayanan et al., 2024). In smart-city settings,
Tayeb et al. (
2025) found that transparency anxiety and work–life interface mediate how AI influences decision-making outcomes. Taken together, prior research has examined these determinants largely in isolation or within specific domains, such as sectoral settings or smart-city contexts, rather than within an integrated explanatory model focused on employees’ comfort with AI-driven decision-making in the workplace. Moreover, empirical evidence remains limited on how perceived benefits, concerns, and trust jointly operate across organizational contexts characterized by different levels of technological readiness and economic stability.
Employee perceptions play a crucial role in AI adoption and AI-driven ADM, particularly in terms of efficacy, precision, fairness, and error reduction. Employees who perceive AI as enhancing efficiency and fairness are more likely to embrace its implementation (
Brink et al., 2024;
Klein et al., 2024). Nevertheless, concerns such as job displacement, lack of transparency, and complexity can hinder AI acceptance and create resistance (
Fahnenstich et al., 2024). Furthermore, trust in AI systems, which is shaped by reliability, transparency, and alignment with human decision-making norms, serves as a central factor influencing employee comfort with AI adoption (
Brink et al., 2024;
Klein et al., 2024).
Although bibliometric and systematic literature reviews are valuable for mapping thematic streams and identifying research gaps in emerging fields, the present study adopts an empirical approach that integrates key constructs identified in prior research to examine employees’ comfort with AI-driven decision-making across contrasting organizational contexts. Specifically, this study examines the relationship between perceptions, concerns, and trust in determining employees’ comfort levels with AI-driven decision-making, with a particular focus on the Gulf Cooperation Council (GCC) and Lebanon. AI adoption in the GCC is strongly supported by government initiatives, resulting in rapid implementation across various industries. In contrast, Lebanon’s geopolitical and economic instability, coupled with limited technological investments, has slowed AI integration. These contrasting contexts provide an opportunity to explore how employees’ perceptions, concerns, and trust in AI differ based on the level of AI adoption in their respective regions.
Based on the objectives of the study and addressing the gaps in the literature, this study seeks to answer the following questions:
(RQ1) How do employees’ perceptions of AI benefits, such as efficiency, fairness, and error reduction, influence their comfort with AI-driven algorithmic decision-making in the workplace?
(RQ2) How do employees’ concerns about AI-driven algorithmic decision-making, including job displacement, lack of transparency, and complexity, affect their comfort with such decision-making?
(RQ3) How does employees’ level of trust in AI systems impact their comfort with AI-driven decision-making processes in the workplace?
By addressing these questions, this study offers several important contributions. Theoretically, it advances AI adoption research by integrating perceived benefits, concerns, and trust into a unified model that explains employees’ comfort with AI-driven decision-making, an area largely fragmented in prior research. Empirically, the study expands the geographical scope of AI research by providing comparative evidence from the GCC and Lebanon, two contrasting yet underexplored Arab contexts with different levels of digital maturity. This cross-country analysis reveals how context shapes the strength of AI predictors, offering new region-specific insights absent in global literature. Practically, the study provides organizations with a clear roadmap for improving employee comfort with AI systems through transparency, trust-building, training, and contextualized implementation strategies tailored to diverse workforce environments. Together, these contributions enrich current knowledge on human–AI coexistence and support more responsible and employee-centered AI integration in the workplace.
3. Methodology
3.1. Research Design
This study adopted a positivist philosophical stance and a deductive approach, consistent with explanatory research aimed at testing theoretically grounded hypotheses. A mono-method quantitative design was employed because the research objectives required examining statistical relationships among latent constructs, namely, perceived benefits, concerns, trust, and comfort with ADM, across a large and diverse employee population.
A structured questionnaire was used. No existing validated scale captures the four constructs examined together within the specific context of workplace ADM in Lebanon and the GCC. Therefore, the questionnaire items were newly developed for this study. Item development followed established scale-development guidelines (
Hinkin, 1998) and relied on formal theoretical foundations and empirical literature. ADM Theory (
Mahmud et al., 2022;
Binns, 2020) informed items on efficiency, fairness, transparency, and perceived risk. Responsible AI and Explainable AI (
Raji et al., 2020;
Adadi & Berrada, 2020;
Tayeb et al., 2025) informed items on transparency, reliability, and perceived ethical risks. Human–AI Teaming Theory (
Seeber et al., 2020;
Schmutz et al., 2024) informed items relating to comfort and human–AI coexistence. Empirical research on job displacement, anxiety, trust, and fairness (
Perez, 2023;
Khogali & Mekid, 2023;
Zhao et al., 2024;
Łapińska et al., 2021;
Köchling et al., 2024;
Zirar et al., 2023) informed item wording and dimension coverage. This ensured that while the scale was new, it was firmly grounded in theory and prior evidence.
The questionnaire opened with a brief introduction so that participants would have a clear idea of the purpose of the study and the types of questions they were going to answer. Then,
Section 1 included the demographics. In
Section 2, five Likert scale agreement statements were related to the perceived benefits of ADM in the workplace. In
Section 3, five Likert scale agreement statements were related to concerns about AI in the workplace. In
Section 4, five Likert scale statements were related to trust in AI systems in the workplace. In
Section 5, four Likert scale statements were related to employees’ comfort with ADM in the workplace.
To ensure content validity, three academic experts in HRM reviewed all items for relevance, clarity, and representativeness. A pre-test with 10 employees from Lebanon and the GCC assessed comprehension and item clarity. Minor stylistic adjustments were made to enhance readability. A pilot test with 20 participants was conducted. No comprehension issues were identified; thus, the same items were used for the main survey. The data collection period was from October 2024 to December 2024.
3.2. Procedures and Sample
Initially, simple random sampling targeted employees across the GCC countries and Lebanon. However, due to challenges in accessing comprehensive and diverse sampling frames across these regions, snowball sampling was also employed to complement the random sampling process. Participants identified through random sampling were asked to recommend peers and colleagues employed in those regions, which recruited additional respondents while maintaining the diversity of the sample. These initial participants formed a heterogeneous base for snowball sampling since they came from different industries, job levels, and countries. Demographic and industry representation was monitored continuously throughout data collection to ensure balance across regions, which reduced, though could not eliminate, the bias inherent in snowball sampling. Participants were eligible if they were 18 years or older, were currently employed, worked in Lebanon or one of the six GCC countries, and had exposure to any form of AI-assisted or algorithmic decision-making in the workplace. To minimize the risk of unqualified respondents, a filter question confirmed employment status and region, duplicate responses from the same IP address were blocked, completion times were screened for inattentive responses, and demographic patterns were monitored for anomalies. Combining both sampling methods yielded a final sample size of 388 participants.
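The screening steps described above (the employment and region filter, IP-duplicate blocking, and completion-time checks) can be sketched in code. This is a minimal illustration, not the authors’ actual pipeline: the column names, the sample rows, and the two-minute speed cutoff are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical survey export; real column names and thresholds will differ.
responses = pd.DataFrame({
    "is_employed":    [True, True, True, False],
    "region":         ["Lebanon", "UAE", "Qatar", "Lebanon"],
    "ip_hash":        ["a1", "b2", "a1", "c3"],   # hashed IPs, not raw addresses
    "completion_sec": [410, 95, 380, 520],
})

eligible_regions = {"Lebanon", "Bahrain", "Kuwait", "Oman",
                    "Qatar", "Saudi Arabia", "UAE"}

screened = (
    responses[responses["is_employed"]]                  # filter question: employed only
    .loc[lambda d: d["region"].isin(eligible_regions)]   # Lebanon or GCC only
    .drop_duplicates(subset="ip_hash", keep="first")     # block IP duplicates
    .loc[lambda d: d["completion_sec"] >= 120]           # assumed speeder cutoff
)
print(len(screened))  # number of responses surviving all screens
```

In this toy data, one respondent fails the employment filter, one shares an IP hash with an earlier submission, and one completes too quickly, so a single response survives.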
Several procedural steps were taken to reduce potential bias. Anonymity and confidentiality were ensured, items were neutrally phrased, and constructs were placed in separate sections to minimize respondents’ ability to infer relationships. No identifying information was collected, reducing social desirability bias. A filter question confirmed respondent eligibility, and distribution across multiple channels helped avoid interviewer or organizational bias. These procedures help reduce the likelihood of common method and social desirability biases affecting responses.
The questionnaire was administered via Google Forms and distributed through WhatsApp, email, and professional social media platforms. Given the multi-channel online distribution method, a precise response rate cannot be determined. Descriptive statistics, reliability tests (Cronbach’s α, Composite Reliability), convergent validity (standardized loadings, AVE), discriminant validity (heterotrait–monotrait ratio, HTMT), and structural equation modeling (SEM) were used for data analysis.
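As a brief illustration of the reliability and convergent-validity statistics named above, the following sketch computes Cronbach’s α from raw item scores, and Composite Reliability (CR) and AVE from standardized factor loadings. The loading values are invented for illustration only; they are not the study’s results. Conventional cutoffs are α and CR ≥ 0.70 and AVE ≥ 0.50.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized loadings, with error variances 1 - loading**2."""
    squared_sum = loadings.sum() ** 2
    return squared_sum / (squared_sum + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean squared standardized loading."""
    return float((loadings ** 2).mean())

# Invented loadings for a five-item construct (illustration only)
lam = np.array([0.78, 0.81, 0.74, 0.80, 0.76])
print(f"CR = {composite_reliability(lam):.3f}, AVE = {ave(lam):.3f}")
```

HTMT, used here for discriminant validity, instead compares average heterotrait–heteromethod correlations with average monotrait correlations and is typically reported directly by SEM software rather than computed by hand.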
Ethical considerations were maintained while conducting this study. IRB ethical approval was obtained. Informed consent was obtained using a filter question before filling out the questionnaire. Anonymity and confidentiality of responses were maintained.
5. Discussion
5.1. Perceived Benefits of ADM and Employee Comfort
The findings demonstrate that employees’ perceptions of AI’s functional and procedural benefits, namely efficiency, fairness, and error reduction, significantly increase their comfort with ADM. This supports H1 and is consistent with previous research showing that employees tend to view ADM positively when it enhances accuracy and consistency in workplace decisions (
Ivanov & Webster, 2024).
Brink et al. (
2024) similarly emphasize that clear communication of AI’s advantages increases acceptance by helping workers understand how automation improves decision quality. In addition, the importance of fairness highlighted by
Schoeffer et al. (
2024) resonates with the present study’s finding that employees feel more at ease with ADM when they perceive it as equitable.
However, the multi-country analysis reveals that this relationship is context-dependent. While perceived benefits significantly predicted comfort in Bahrain, Oman, the UAE, and Lebanon, the relationship was nonsignificant in Kuwait, Qatar, and Saudi Arabia. These differences suggest that the strength of perceived benefits may vary across national environments depending on the maturity of workplace digital transformation, organizational readiness, and employees’ direct experiences with AI tools. Some GCC countries may have invested heavily in high-level AI infrastructure, yet employees might not directly observe or internalize the benefits at the operational level, reducing the impact of perceived advantages on comfort. Taken together, these results indicate that perceived benefits matter, but their influence depends on broader institutional, economic, and organizational factors.
5.2. Concerns About AI and Employee Comfort
In contrast to expectations, concerns about AI, including job displacement, lack of transparency, and complexity, were positively associated with comfort. This unexpected finding, which contradicts H2, was consistently observed across all countries. Several contextual and psychological explanations may account for this pattern.
First, concerns may reflect heightened awareness rather than resistance. Employees who think critically about AI may also be better informed about its limitations and advantages, leading to a more nuanced and engaged form of comfort rather than fear or rejection.
Second, in environments undergoing rapid technological development, such as several GCC countries, employees may adopt adaptive behaviors such as upskilling or proactively seeking training. In such cases, concerns become part of an adjustment process that ultimately increases comfort with AI. Third, trust may act as a mitigating force, dampening the negative emotional impact of concerns. When employees believe that ADM systems or their organizations are fair, reliable, or transparent, concerns may evolve into cautious optimism rather than discomfort.
Cultural norms may also play a role. In many Arab workplace environments, expressing concerns is not synonymous with rejecting a system; rather, it often reflects a desire for procedural fairness and accountability. Workers may feel more comfortable when they can voice concerns, especially if they believe their organizations are responsive to feedback. These interpretations diverge from studies such as
Wesche et al. (
2024) and
Kim et al. (
2024), which emphasize the negative consequences of concerns on trust and acceptance, yet align with more recent evidence showing that employees may accept ADM when they perceive it as fair, even if concerns persist (
Chugunova & Luhan, 2024).
In brief, the findings suggest that concerns may be part of an adaptive process rather than a barrier in contexts where AI adoption is highly visible or strongly supported by national initiatives.
5.3. Trust in AI Systems and Employee Comfort
Trust emerged as the strongest predictor of comfort in both the overall sample and all individual country models, offering clear support for H3. This result affirms the central role of trust in ADM theory and Responsible AI frameworks, which argue that employees must perceive AI systems as transparent, predictable, and aligned with ethical principles for comfort to arise (
Adadi & Berrada, 2020;
Raji et al., 2020). The universality of this relationship across all seven countries suggests that trust operates as a foundational psychological mechanism irrespective of national context, economic conditions, or differences in digital maturity.
Trust’s strong influence may stem from its ability to reduce uncertainty and ease concerns about fairness or job security. When employees believe that AI systems make decisions in reliable and transparent ways, they are more willing to accept automated outcomes even when they carry personal consequences. This aligns with the findings of
Park et al. (
2024), who observed that trust in AI tools was central to predicting workplace acceptance, and
Fahnenstich et al. (
2024), who found that trust behaviors tend to favor AI over humans when tasks involve higher levels of risk.
The country-level analyses indicate that trust consistently shapes comfort across all regions, while perceived benefits and concerns display more variation depending on national economic stability, AI maturity, and organizational practices. Countries like the UAE, Bahrain, and Oman show a strong alignment between benefits and comfort, likely reflecting more embedded digital infrastructures and transparent AI integration. In contrast, nonsignificant effects in Kuwait, Qatar, and Saudi Arabia may reflect fast-paced, top-down AI adoption that employees have not yet fully assimilated. Lebanon’s significant relationships may stem from the heightened value employees place on reliability and efficiency in a crisis-affected economy, making both benefits and trust especially salient.
In brief, the findings of this study demonstrate that employee comfort with AI-driven ADM cannot be reduced to a simple or purely deductive logic whereby positive perceptions and trust automatically translate into acceptance. While perceived benefits (H1) and trust in AI systems (H3) show positive effects, the positive association between concerns and comfort (H2) is theoretically non-trivial and challenges dominant assumptions in the literature. This pattern suggests that employees may actively adapt to AI-driven decision environments by reconciling perceived risks with anticipated benefits and institutional realities, pointing to a more complex and dynamic process of human–AI interaction than previously assumed.
6. Conclusions
This study addressed three research questions. Regarding RQ1, the findings show that employees’ perceptions of AI benefits, particularly efficiency, fairness, and error reduction, positively influence their comfort with AI-driven decision-making. Regarding RQ2, the results reveal that employee concerns about AI, including job displacement, transparency, and complexity, do not reduce comfort as initially expected; instead, these concerns are positively associated with comfort, suggesting an adaptive or awareness-based response to AI adoption. Finally, in response to RQ3, trust in AI systems emerged as the strongest and most consistent predictor of employee comfort across all examined contexts.
6.1. Theoretical Implications
This study makes several theoretical contributions to the literature on ADM, Responsible AI, and human–AI collaboration. First, it extends ADM theory by demonstrating that employee comfort with ADM is shaped not only by perceived procedural advantages such as fairness and efficiency but also by deeper relational mechanisms such as trust. While existing ADM theories highlight the technical superiority of algorithmic processes, the present study shows that psychological perceptions, particularly the interplay between benefits, concerns, and trust, must be integrated into the theoretical understanding of how employees evaluate AI-driven decisions in organizational settings.
Second, the findings challenge prevailing assumptions in Responsible AI and XAI frameworks, which typically treat concerns, such as job displacement, opacity, and complexity, as barriers to acceptance. The unexpected positive association between concerns and comfort suggests that concerns may function as indicators of awareness, engagement, or proactive monitoring rather than simple resistance. This introduces a theoretically novel proposition: in environments undergoing rapid AI expansion or economic instability, concerns may coexist with acceptance, forming a more adaptive or reflective orientation toward ADM. This invites refinement of Responsible AI models to recognize different “types” of concerns, some of which may facilitate rather than hinder acceptance.
Third, the study reinforces the centrality of trust as a universal mechanism across diverse national and economic contexts. While prior research has emphasized trust at a conceptual level, the present findings empirically demonstrate its dominant role in shaping comfort across all seven countries studied, regardless of differences in technological readiness, digital maturity, or organizational infrastructure. This provides stronger theoretical support for integrating trust as a core construct within ADM and Human–AI Teaming theories, rather than treating it as a secondary or contextual factor.
Finally, by empirically testing a unified model combining perceived benefits, concerns, and trust, the study contributes a more holistic theoretical framework for understanding comfort with ADM. Existing theoretical approaches, such as fairness-based models, accuracy-focused frameworks, or cognitive structuration perspectives, tend to isolate individual predictors. This study advances theory by showing that comfort emerges from the combined influence of evaluative (benefits), affective (concerns), and relational (trust) mechanisms. The cross-regional comparison further highlights the need for ADM theories to incorporate contextual moderators, especially economic resilience and technological investment levels, when predicting employee responses to AI.
In brief, these theoretical implications show that comfort with ADM is a multidimensional construct that cannot be fully explained by existing linear models. The study contributes by refining theoretical expectations, challenging assumptions about concerns, elevating trust as a foundational mechanism, and proposing an integrated framework that better reflects employee experiences in digitally evolving workplaces.
6.2. Practical Implications
Based on the findings, several practical implications are provided.
First, using success stories and real-life cases, businesses can improve employee perceptions of AI’s advantages by explaining how it can reduce errors, improve decision fairness, and increase efficiency. Additionally, they can train staff members in the effective use of AI systems and involve them in implementation so that employees understand the systems’ value and can align them with their needs.
Second, companies should mitigate concerns about AI by increasing transparency, addressing job security, simplifying AI complexity, and establishing open communication channels.
Third, businesses should prioritize dependability and integrity, audit and monitor AI systems, integrate human oversight, and promote familiarity to establish and preserve trust in these systems.
Fourth, businesses are advised to tailor AI communication and implementation strategies to cultural and contextual differences in employee perceptions across countries (e.g., higher trust in the UAE versus higher skepticism in Kuwait and Lebanon). Additionally, they could show how AI systems are specifically relevant to the sector, ensuring that use cases align with workers’ duties and responsibilities.
Fifth, it is essential to invest in learning and development. Employees should be equipped with the skills needed to use AI, which will reduce concerns about job loss and increase their confidence in utilizing AI tools. Businesses can demystify AI and help staff understand its practical advantages and ethical implications through educational initiatives such as workshops, webinars, and internal communications. In addition, a feedback loop should be established to regularly evaluate employees’ attitudes toward AI, enabling the identification of changing concerns, perceptions, and trust levels. This feedback can then be used to improve AI systems, with an emphasis on usability, transparency, and perceived fairness.
Finally, showing instances where AI assists employees in decision-making rather than functioning independently is essential to promoting AI as a collaborative tool and reaffirming its function as a partner rather than a substitute. By using cross-functional teams to deploy AI systems, businesses can guarantee a team-based implementation and gain support from a variety of employee groups.
These practical steps collectively address employee perceptions, concerns, and trust in AI, fostering greater acceptance and comfort with ADM in the workplace.
6.3. Limitations and Further Research
Although this study provides important insights into employees’ comfort with AI-driven decision-making in Lebanon and the GCC, several limitations must be acknowledged, each of which opens pathways for future research.
First, the study relied on a cross-sectional, self-reported survey, which limits causal inference and may be affected by perceptual biases. Although procedural steps were taken to reduce common method bias, future studies should incorporate longitudinal designs to examine how comfort, trust, and concerns evolve as ADM systems become more embedded in organizational workflows. Longitudinal research would also help capture dynamic shifts in employee attitudes as AI capabilities mature. Further, given the cross-sectional design, the directionality of the relationships is theoretically grounded but cannot be interpreted as strictly causal; future longitudinal or experimental studies could explore potential reciprocal effects between employee comfort and AI-related perceptions, concerns, and trust.
Second, the study employed a mono-method quantitative approach, which, while appropriate for hypothesis testing, restricts the depth of contextual understanding. Future research could adopt qualitative or mixed methods designs, such as interviews, focus groups, ethnographic observation, or multi-source data collection, to explore deeper psychological mechanisms underlying trust, fairness perceptions, fear, and ethical judgments.
Third, the study was conducted in Lebanon and the six GCC countries, which represent contrasting economic and technological environments. While this strengthens contextual relevance, generalizability remains limited. Also, while the study includes respondents from multiple countries, the country-level subsamples are not intended to be statistically representative. Accordingly, cross-country comparisons are interpreted as exploratory and illustrative of contextual variation rather than as grounds for national-level generalization. Future studies should extend the model to other geographical settings, such as Europe, East Asia, and Africa, to examine whether institutional, cultural, and regulatory differences moderate how employees perceive ADM systems. Cross-cultural comparative work is particularly encouraged in the ADM literature.
Fourth, although the study examined the direct relationships between perceived benefits, concerns, trust, and comfort, it did not investigate mediating or moderating mechanisms that may alter these effects. Prior research suggests the importance of factors such as organizational transparency, technological readiness, job role, digital literacy, and fairness perceptions. Future research could test whether trust mediates or buffers the negative impact of concerns on comfort; technological maturity, job complexity, or hierarchical position moderate acceptance of ADM; and individual differences (e.g., risk tolerance, AI literacy, openness to change) influence ADM comfort. Such analyses would deepen the theoretical nuance of ADM adoption processes.
Fifth, although the measurement model demonstrated strong validity, the study relied on a newly developed instrument. While item development followed theoretical and statistical validation procedures, future studies should further validate and refine the scale using confirmatory factor analysis on independent samples, measurement invariance testing across countries, and triangulation with behavioral or objective data.
Finally, the study used an online survey distributed through digital platforms, which may introduce sampling bias and limit control over response environments. Future research should consider organizationally administered surveys, stratified sampling, or multi-organizational datasets to enhance representativeness and reduce self-selection effects.
In brief, by addressing these limitations, future research can advance a more integrated understanding of how employees experience, evaluate, and adapt to AI-driven decision-making systems, thereby contributing to the refinement of ADM theory and the development of responsible and human-centered AI practices in the workplace.
6.4. Contributions of the Study
The insights provided in this study support a more comprehensive understanding of employees’ perceptions across the multifaceted criteria of fairness, predictability, and equity in divergent cultural contexts. This differs from previous studies, whose findings had limited generalizability because participants belonged to a single country, such as
Majrashi’s (
2025) investigation on Saudi Arabian employees’ perceptions of AI fairness and
Mabungela’s (
2023) evaluation of whether South Africans view AI and automated workplaces as a threat.
Majrashi (
2025) suggested including respondents from various countries to investigate the development of culturally aware AI algorithms.
Also, this study contributed to a more holistic approach to management practices and strategies concerning the adoption of AI-ADM systems in the workplace. It was conducted among employees in the GCC and Lebanon and focused intentionally on this region, where few similar studies had previously been conducted, especially in Lebanon. The findings raise awareness and help top leaders and decision-makers in GCC and Lebanese organizations anticipate and overcome possible challenges and resistance to change during the integration of AI-based automated systems. By considering the findings and suggestions of this study, businesses can introduce new AI-based automation more efficiently, enhance management–employee communication, and reduce employee turnover by improving workforce flexibility and resilience. Therefore, this study contributes to higher productivity and a higher success rate when introducing AI-based systems in the workplace.
Considering the growing interest in understanding AI’s impact on employees in the workplace, this study advances the discourse on AI decision-making by emphasizing that AI applications are best adopted as decision-support tools in organizations and should not serve as the sole decision-maker, given employees’ expressed concerns and distrust. Our study thus aligns with human and organizational values: we underline the importance of humans as final decision makers and advocate a well-considered approach to AI adoption that balances the benefits of automation against employees’ concerns, thereby allowing workers to become comfortable with ADM. As such, this study offers a new angle for understanding organizational decision systems by examining the interaction between humans and technology acceptance from a decision-making perspective.