Article

Humanizing AI in Service Workplaces: Exploring Supervisor Support as a Moderator in HPWSs

by Temitope Ayodeji Atoyebi * and Joshua Sopuru
Management Information Systems, Girne American University, Kyrenia 99300, Cyprus
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(17), 7892; https://doi.org/10.3390/su17177892
Submission received: 22 June 2025 / Revised: 19 August 2025 / Accepted: 26 August 2025 / Published: 2 September 2025

Abstract

As artificial intelligence (AI) becomes increasingly embedded within service-oriented High-Performance Work Systems (HPWSs), understanding its implications for employee well-being and organizational sustainability is critical. This study examines the relationship between AI service quality and job satisfaction, considering the mediating effect of perceived organizational justice and the moderating influence of supervisor support. Drawing on the Information Systems Success (ISS) model, equity theory, organizational justice theory, and Leader–Member Exchange (LMX) theory, data were collected from a diverse sample of service sector employees through a cross-sectional design. The findings indicate that higher AI service quality significantly enhances job satisfaction, particularly in environments with strong supervisor support. Contrary to expectations, perceived organizational justice did not mediate the AI–satisfaction link, suggesting that justice constructs may be less influential in AI-mediated contexts. Instead, supervisor support emerged as a key contextual enabler, strengthening employees’ positive perceptions and emotional responses to AI systems. These results emphasize that technological optimization alone is insufficient for building sustainable service workplaces. Effective leadership and human-centered practices remain essential to fostering trust, satisfaction, and long-term engagement in digitally transforming organizations. This study offers practical and theoretical insights into integrating AI and human resource strategies in support of socially sustainable service systems.

1. Introduction

Artificial intelligence (AI) is increasingly central in reshaping organizational functions, accelerating innovation, and influencing competitive positioning across industries. In service-based sectors, AI enhances operational efficiency, improves customer interaction, and streamlines service delivery, all of which can support sustainable business models and reinforce High-Performance Work Systems (HPWSs) [1]. However, achieving long-term sustainability extends beyond mere technological efficiency; it fundamentally requires a strong focus on ethical implementation, employee well-being, and organizational fairness. While AI facilitates process improvement and smart factory development in manufacturing settings [2], its success in human-centric sectors such as tourism, hospitality, and healthcare critically hinges on integrating technology in ways that uphold human values and promote fairness alongside efficiency. This intricate interplay between technology and human factors can be effectively framed and understood through the lens of the Information Systems Success (ISS) model [3], which provides a robust theoretical framework for understanding the link between system quality, information quality, service quality, and their impact on user satisfaction and organizational benefits.
The adoption of AI in service workplaces profoundly impacts perceived organizational justice—how employees perceive decision-making processes, resource allocation, and interpersonal treatment. Algorithmic decision-making, despite its potential for consistency, can introduce ambiguity or even perpetuate biases, which can undermine trust and psychological safety in the workplace [4]. When AI systems are viewed as opaque or inequitable, employee engagement and well-being may suffer significantly. Healthcare provides a clear example: AI technologies such as robotic surgery systems (e.g., the Da Vinci System) enhance service capabilities but demand that professionals adapt to and trust these complex tools. Sustainable AI integration, therefore, necessitates not only technological efficacy but also a high degree of employee confidence, adaptability, and a perception of fairness in its application.
While AI excels in repetitive and analytical tasks [5], challenges emerge when its pervasive use diminishes employee autonomy, obscures decision-making processes, or increases job stress. Prior research warns that insufficient support in AI–human collaboration can widen inequalities, particularly if frontline employees lack the necessary training and resources to effectively engage with new systems [6,7]. The rise of generative AI introduces further complexity, as it increases productivity but may exacerbate existing digital skill gaps, particularly among less technologically fluent workers [8]. Without robust safeguards, AI may inadvertently reinforce systemic biases, making transparency, accountability, and regular auditing essential components of ethical AI system design [9]. Ensuring fairness, therefore, must be an intentional part of the system architecture and implementation process, directly influencing the “information quality” and “service quality” dimensions within the ISS model.
A sustainable and ethical approach to AI demands organizational readiness and strong leadership commitment. Cultural values, inclusive practices, comprehensive training, and clear ethical frameworks are foundational to this process. As [10] emphasizes, transparency, shared responsibility, and employee inclusion are vital for successful AI integration. Supervisors, in particular, play a critical role in building trust in AI systems by demystifying their functions, fostering perceptions of fairness, and providing essential emotional and instrumental support. Their proactive involvement can significantly reduce technology-induced stress and boost job satisfaction, acting as a crucial bridge between the “system quality” and “service quality” of AI (as per the ISS model) and employee outcomes.
In service environments, frontline employees frequently interact with AI tools such as chatbots, virtual assistants, and service robots [11]. These tools directly shape their day-to-day work experiences, making the “service quality” of these AI systems paramount for user satisfaction according to the ISS model. However, despite the growing importance of AI, few studies have comprehensively examined how HPWSs can be leveraged to improve service quality and support sustainability in tourism and related sectors [11,12]. Although emerging literature points to a relationship between AI service quality and job satisfaction [10,13], the underlying mechanisms and boundary conditions remain underexplored. This study addresses that critical gap by proposing perceived organizational justice as a key mediator and supervisor support as a significant moderator in this relationship. We contend that transparent and high-quality AI systems can directly enhance trust and fairness perceptions, while effective leadership, specifically supervisor support, may amplify these positive effects or buffer against potential negative effects of AI on employee outcomes, thus enriching the understanding of “net benefits” derived from information systems as conceptualized by the ISS model.
This research contributes significantly to the growing literature on sustainable digital transformation by offering empirical insights into the human factors that shape the success of AI integration in service sectors. It provides practical recommendations for building ethical, performance-driven, and employee-centered workplaces. This study also aligns with the United Nations Sustainable Development Goals, specifically SDG 8 (Decent Work and Economic Growth), SDG 9 (Industry, Innovation and Infrastructure), and SDG 10 (Reduced Inequalities), promoting a vision of AI that supports not just productivity, but also fairness, empowerment, and long-term workforce resilience.

2. Theoretical Background and Hypothesis Development

2.1. AI Service Quality and Job Satisfaction

AI service quality, in the context of this study, refers to the employee’s perception of the effectiveness, reliability, responsiveness, and user-friendliness of AI systems used within their work environment. It encompasses aspects such as the AI’s accuracy in task execution, consistency, speed in providing relevant information or completing processes, and how intuitively and easily employees can interact with it. Building on adaptations of the SERVQUAL model, it also considers “empathy” in the form of human-centered design, implying that the AI should support employees’ tasks without causing undue stress or ambiguity and should complement rather than replace human effort, thereby preserving meaning and dignity in work [14,15,16].
The Information Systems Success (ISS) model [3] provides a robust theoretical framework for understanding the link between system characteristics and user outcomes. A core tenet of the ISS model is that perceived system quality directly influences user satisfaction and acceptance. In the context of AI, “system quality” refers to the perceived technical excellence and functionality of the AI. When employees perceive the AI system as high quality (i.e., effective, reliable, and user-friendly), this positive perception is hypothesized to lead to greater satisfaction with the system and, by extension, with their overall work experience. This model reinforces the idea that the technical attributes of AI systems are crucial in shaping employee attitudes and experiences [3,17].
Well-designed AI systems can significantly reduce employee stress and boost engagement by optimizing work processes and enhancing the employee experience. Specifically, AI can automate repetitive, tedious, or physically demanding tasks, thereby freeing up employees to focus on more complex, strategic, and interpersonally rich activities [18]. This shift can reduce monotony, mental fatigue, and the burden of routine work. Furthermore, AI can empower employees by providing data-driven insights, improving decision-making, minimizing errors, increasing responsiveness, and making their work more efficient and effective [19,20]. When employees perceive AI as a supportive tool that complements their skills and allows them to engage in more meaningful work, it can lead to a sense of accomplishment, reduced burnout, and increased psychological safety, ultimately fostering greater engagement and job satisfaction [16,17]. Conversely, poorly implemented AI can lead to confusion, overload, and reduced autonomy, fostering resistance and dissatisfaction [21].
Drawing from the definitions and theoretical connections above, we can directly link AI service quality to job satisfaction. Given that AI service quality encompasses the perceived effectiveness, reliability, responsiveness, and user-friendliness of AI systems, and considering the ISS model’s emphasis on system quality influencing user satisfaction, it logically follows that high AI service quality will positively impact employees’ overall job satisfaction. When AI systems are perceived as high quality (i.e., accurate, reliable, easy to use, and supportive of tasks without causing stress), they are likely to reduce stress, boost engagement by allowing employees to focus on more meaningful work, and enhance overall work efficiency. This positive experience directly contributes to an employee’s positive emotional response to their job, which is the essence of job satisfaction [17,18,22]. Therefore, we hypothesize:
H1: 
AI service quality positively influences employee job satisfaction in service organizations implementing High-Performance Work Systems (HPWSs).

2.2. Mediation of Perceived Organizational Justice

Perceived organizational justice captures employees’ judgments about the fairness of their work environment, particularly in terms of outcomes, procedures, and interpersonal interactions. This concept, fundamentally grounded in Equity Theory [23] and Organizational Justice Theory [24], is crucial for understanding how fairness influences employee motivation, engagement, and satisfaction.
According to Equity Theory, individuals assess fairness by comparing their contributions (e.g., time, effort, skills) to the rewards they receive, relative to others. A perceived balance in this exchange is typically associated with greater job satisfaction and commitment. This provides a foundational lens through which employees evaluate their work environment, including interactions with new technologies.
Organizational Justice Theory offers a more nuanced view by identifying three dimensions: distributive justice (fairness in the allocation of outcomes), procedural justice (fairness in the processes that lead to decisions), and interactional justice (fairness in interpersonal treatment and communication) [25]. These elements are particularly relevant in today’s AI-driven workplaces, where algorithmic systems are increasingly embedded in tasks that impact decision-making, communication, and performance evaluations. While AI technologies can offer benefits like reduced human bias and consistent decisions, they also raise concerns about transparency, accountability, and employee autonomy [26,27]. These issues are especially critical in sustainability-focused service sectors such as tourism and healthcare, where ethical integrity and workforce well-being are paramount.
Employees’ perceptions of AI systems often hinge on whether they see these technologies as fair and transparent [28]. When AI is viewed as consistent, understandable, and supportive, aligned with the principles of procedural and interactional justice, it can enhance trust, strengthen motivation, and reinforce organizational loyalty. Conversely, systems that appear opaque or biased can undermine trust, reduce job satisfaction, and hinder the long-term sustainability of workforce practices [29]. Thus, organizational justice acts as a crucial mechanism through which the perceived quality of AI services influences employee experiences. In High-Performance Work Systems (HPWSs), where trust and alignment between staff and management are central, fairness perceptions become even more significant. For example, when AI tools used in performance appraisals or task assignments are seen as impartial and explainable (reflecting strong procedural justice), employees are more likely to accept their outcomes. Research has shown that perceived fairness in AI systems positively influences job satisfaction [2].
As [10] highlights, employees interpret AI outputs through a fairness lens, which shapes their overall response to these systems [30]. This perspective, directly linked to Organizational Justice Theory, determines whether high-quality AI—defined by attributes like reliability, responsiveness, and usability—leads to positive emotional and motivational outcomes. This connection is especially important in sectors driven by sustainability goals, such as tourism. Employees often face complex challenges, including emotional labor, fluctuating workloads, and high customer expectations [30]. AI systems that equitably support task distribution, scheduling, or assistance can ease these pressures and create healthier, more sustainable work environments. As [31] points out, when employees perceive AI as fair and dependable, it boosts their trust in the organization and enhances their job satisfaction [32]. This underscores the link between perceived fairness (organizational justice) and desired employee outcomes.
Both Organizational Justice Theory and Equity Theory, alongside empirical evidence, suggest that perceived organizational justice is a key mediator in the relationship between AI service quality and job satisfaction. In the context of HPWSs, this mediating role highlights the importance of integrating digital innovation with ethical, inclusive, and sustainability-oriented management practices. When AI systems are perceived not as threats to fairness but as tools that promote justice, they can enhance employee trust, engagement, and well-being, contributing to the long-term resilience of service sector workforces [29].
Accordingly, we propose the following hypotheses:
H2: 
AI service quality is positively associated with perceived organizational justice.
H3: 
Perceived organizational justice positively influences job satisfaction in service organizations implementing High-Performance Work Systems (HPWSs).
H4: 
Perceived organizational justice mediates the relationship between AI service quality and job satisfaction.

2.3. Moderation of Supervisor Support

In service-oriented organizations, effective supervision extends far beyond administrative control; it is a strategic function that mitigates errors, nurtures employee potential, and contributes to sustainable organizational performance [33]. Within High-Performance Work Systems (HPWSs), supervisor support acts as a vital link between operational goals and employee development, fostering outcomes that are both high-performing and sustainability-aligned. This support includes both technical guidance and emotional reinforcement, enabling employees to successfully navigate technological shifts, particularly the integration of artificial intelligence (AI).
Supervisor support encompasses a range of behaviors, from task instruction and constructive feedback to empathy and problem-solving, all of which shape employee attitudes and influence workplace behavior [34]. In sectors such as tourism, where sustainability and service quality depend heavily on human interactions, supervisors play a central role in promoting job satisfaction and fostering long-term employee engagement. Their actions help establish psychological safety and readiness for innovation, both crucial in transforming digital environments.
Within HPWS configurations, supervisors are tasked with maintaining the equilibrium between performance expectations and employee well-being. Their ability to align everyday tasks with core organizational values is especially important in industries that prioritize customer satisfaction and sustainability [35]. When supervisors communicate transparently and demonstrate emotional sensitivity, they foster a work climate that encourages learning, satisfaction, and service excellence. Research indicates that employees’ emotional responses, key drivers of job satisfaction [36], are closely linked to how supported they feel by their immediate supervisors.
In sustainability-focused service organizations, supervisor support operates on two levels. Functionally, it involves offering resources, training, and guidance, including support in using AI-enabled tools [37], ensuring that employees can deliver consistent and innovative service. Emotionally, it entails providing empathy, encouragement, and stress relief [38], which are crucial in retaining staff in roles that involve high emotional labor. Supervisors help employees align their abilities with sustainability targets, boosting morale and reducing role ambiguity.
The perceived quality of supervisor support also influences organizational justice perceptions. Supervisory behaviors that include fair feedback, equitable access to resources, and transparent leadership reinforce employee trust and perceptions of fairness [39]. According to the Conservation of Resources (COR) theory [40], when employees receive strong supervisory support, they gain psychological and functional resources, which buffer the negative effects of job demands and enhance satisfaction. In HPWS-driven service environments, supervisors play a key role in translating strategic goals into sustainable daily practices. As [41] asserts, supervisors help employees maintain a sense of direction and accountability while also sustaining morale during periods of technological change. Supportive gestures such as timely feedback, open body language, and emotional reassurance, referred to as nonverbal immediacy, can enhance trust and improve service delivery [41,42,43].
Supervisor support becomes especially important during digital transitions, such as the implementation of AI. When supervisors provide hands-on coaching, clarify AI functions, and address employee concerns, resistance to technology decreases. This is in line with Leader–Member Exchange (LMX) theory, which highlights how quality relationships characterized by trust and mutual respect facilitate adaptation and enhance job satisfaction [44,45]. Supportive leadership fosters such relationships, creating a foundation for successful AI adoption.
In AI-enabled HPWS environments, supervisor support serves as a key moderator. Supervisors help employees interpret AI-generated decisions, reinforce the rationale behind them, and ensure that technology is perceived as fair and helpful. Evidence shows that this kind of support boosts morale and strengthens the positive impact of AI service quality, especially in high-contact service sectors like tourism, where interactions are emotionally demanding [41,46,47]. Thus, supervisors act as critical intermediaries, ensuring that AI technologies enhance, rather than diminish, employee experience and performance.
Research from Kenya and other developing economies supports this view. Studies demonstrate that structured, responsive supervision leads to improvements in both employee productivity and psychological outcomes across diverse institutional settings [48,49,50]. These findings underline the importance of supervisory support in embedding HPWSs within sustainable service systems.
In conclusion, supervisor support amplifies the benefits of AI-driven service innovations by offering clarity, reassurance, and fairness. It reduces uncertainty, encourages continuous learning, and reframes AI as a complement rather than a threat to human labor. Ultimately, supervisor support is a critical enabler in the pursuit of sustainable, high-performance outcomes in service-based organizations.
H5: 
The relationship between AI service quality and job satisfaction is moderated by supervisor support.

2.4. The Conceptual Framework

The conceptual research framework is shown in Figure 1. It was developed from the preceding literature review and underpins the hypotheses above.

3. Methods

To measure AI service quality, this study adopted the validated framework proposed by [10,18], which includes four key dimensions: AI assurance, responsiveness, empathy, and reliability. Each dimension was assessed using three items, resulting in twelve items rated on a 7-point Likert scale ranging from “strongly disagree” to “strongly agree.” Job satisfaction was evaluated using the three-item scale developed by [51], also measured on a 7-point Likert scale.
Supervisor support was assessed using five items drawn from the 14-item scale originally developed by [52]. These items were selected for their relevance to the service-sector context and were similarly rated on a 7-point Likert scale. Perceived organizational justice was measured using an adapted version of the scale by [53], which captures three distinct dimensions: distributive, procedural, and interactional justice. Each dimension was represented by three items, resulting in a nine-item scale. Responses for these items also followed a 7-point Likert format.
To evaluate the reliability and relationships among constructs, this study employed descriptive statistics, intercorrelation analysis, and Cronbach’s alpha coefficients. These methods provided a robust foundation for analyzing the internal consistency and structural validity of the measures within the service-based HPWS context.
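The reliability procedure described above can be illustrated with a short sketch. The study’s raw data are not public, so the responses below are synthetic and the function name is ours; the formula, however, is the standard Cronbach’s alpha for a multi-item scale.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Synthetic 7-point Likert responses: three items driven by one latent attitude
rng = np.random.default_rng(42)
latent = rng.normal(4.0, 1.2, size=200)
items = np.clip(np.round(latent[:, None] + rng.normal(0, 0.8, size=(200, 3))), 1, 7)

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

Because the three items share a common latent driver, alpha comfortably exceeds the conventional 0.70 threshold here; items with more idiosyncratic noise would pull it down.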

Sampling and Data Collection

Data for this study were collected through an online survey distributed via Amazon Mechanical Turk (MTurk). MTurk was selected as a viable and cost-effective sampling technique due to its access to a large, geographically dispersed workforce across various service sectors, aligning with the study’s focus on high-performance work systems (HPWSs) and sustainable organizational practices. A total of 494 individuals responded to the survey. After data cleaning and exclusion of 66 cases due to incomplete responses or failed attention checks, the final dataset comprised 428 valid responses. The final sample included 305 male and 123 female participants, covering a range of industries such as tourism, hospitality, retail, healthcare, and professional services. Multiple attention check questions were embedded to improve response reliability and reduce the risk of inattentive participation. To ensure data quality, the inclusion criteria required respondents to have a minimum Human Intelligence Task (HIT) approval rate of 95% and to be located within the United States.

4. Results

A total of 494 participants initially responded to the survey. After excluding 66 cases with missing data, the final sample comprised 428 participants (305 males, 123 females). This study did not employ full structural equation modeling (SEM). Instead, we conducted regression-based mediation and moderated mediation analyses, using Python (version 3.8) for data cleaning and the PROCESS macro (version 5, Model 5) in R (version 4.5) for hypothesis testing.
Construct reliability and validity were also assessed. Composite scores were calculated for each primary variable, AI service quality, job satisfaction, supervisor support, and perceived organizational justice, by averaging their respective scale items. The skewness values ranged from −1.31 to 0.54, and kurtosis values ranged from −1.01 to 2.85, indicating that all items fell within acceptable thresholds for normality and were suitable for parametric testing. Cronbach’s alpha values for all constructs exceeded the recommended threshold of 0.70, indicating strong internal consistency. To further validate the constructs, confirmatory factor analysis (CFA) was conducted, revealing satisfactory model fit (CFI > 0.90, RMSEA < 0.08).
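The composite-scoring and normality screen described above can be sketched as follows. The item names and data are hypothetical stand-ins for the actual survey items, which are not reproduced here; the point is the procedure of averaging items into composites and checking skewness and kurtosis before parametric testing.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical item names; the actual survey items are not reproduced here.
rng = np.random.default_rng(7)
df = pd.DataFrame(
    rng.integers(1, 8, size=(428, 6)),
    columns=["asq_1", "asq_2", "asq_3", "js_1", "js_2", "js_3"],
)

# Composite scores: average each construct's items, as in the paper
df["ai_service_quality"] = df[["asq_1", "asq_2", "asq_3"]].mean(axis=1)
df["job_satisfaction"] = df[["js_1", "js_2", "js_3"]].mean(axis=1)

# Screen the composites for approximate normality before parametric testing
for col in ["ai_service_quality", "job_satisfaction"]:
    skew = stats.skew(df[col])
    kurt = stats.kurtosis(df[col])   # excess kurtosis; 0 for a normal distribution
    print(f"{col}: skew = {skew:.2f}, kurtosis = {kurt:.2f}")
```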
Most respondents held managerial or supervisory roles, reported having 4 to 7 years of work experience, held a bachelor’s degree, and worked in departments with 41 to 60 employees.

4.1. Descriptive Statistics and Correlations

Means, standard deviations, Cronbach’s alpha values, and Pearson correlation coefficients for the study variables are presented in Table 1.
Perceived organizational justice exhibited weak negative correlations with both AI service quality and job satisfaction, and a weak positive correlation with supervisor support. All constructs demonstrated acceptable internal consistency (α > 0.70), except for perceived organizational justice (α = 0.61).

4.1.1. Mediation Analysis

A simple mediation analysis was performed to test whether perceived organizational justice mediates the relationship between AI service quality and job satisfaction. The results are summarized in Table 2.
The total effect of AI service quality on job satisfaction was significant, β = 0.9314, SE = 0.0279, t = 33.40, p < 0.001. AI service quality significantly predicted perceived organizational justice (β = −0.0665, SE = 0.0131, t = −5.08, p < 0.001). However, perceived organizational justice did not significantly predict job satisfaction (β = 0.0208, SE = 0.1033, t = 0.20, p = 0.84). The direct effect of AI service quality on job satisfaction remained significant when controlling for perceived organizational justice (β = 0.9328, SE = 0.0287, t = 32.45, p < 0.001).
The bootstrap estimate for the indirect effect was not significant (indirect effect = −0.0014, 95% CI [−0.0128, 0.0103]), indicating that perceived organizational justice did not mediate the relationship.
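A regression-based percentile bootstrap of this kind can be sketched in a few lines. This is not the authors’ actual PROCESS/R code; the data are simulated to mirror the reported coefficient pattern (a strong direct x→y path, a weak negative x→m path, and no unique m→y effect), so the indirect effect’s confidence interval should likewise straddle zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 428

# Synthetic data mirroring the reported pattern: AI quality (x) strongly
# predicts satisfaction (y), weakly and negatively predicts justice (m),
# and m has essentially no unique effect on y.
x = rng.normal(5.0, 1.0, n)
m = -0.07 * x + rng.normal(0, 0.5, n)
y = 0.93 * x + rng.normal(0, 0.6, n)

def ols_coefs(X, y_):
    """OLS coefficients (intercept first) via least squares."""
    A = np.column_stack([np.ones(len(y_)), X])
    return np.linalg.lstsq(A, y_, rcond=None)[0]

a = ols_coefs(x, m)[1]                          # path X -> M
b = ols_coefs(np.column_stack([x, m]), y)[2]    # path M -> Y, controlling for X
point = a * b                                   # indirect effect

# Percentile bootstrap for the indirect effect
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    a_i = ols_coefs(x[idx], m[idx])[1]
    b_i = ols_coefs(np.column_stack([x[idx], m[idx]]), y[idx])[2]
    boot[i] = a_i * b_i
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {point:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```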

4.1.2. Moderated Mediation Analysis

A moderated mediation model was estimated to test whether supervisor support moderated the strength of the relationship between AI service quality and job satisfaction. The results are summarized in Table 3.
The results show that AI service quality has a significant and positive effect on job satisfaction (β = 0.46, p < 0.001), and the interaction between AI service quality and supervisor support is also significant (β = 0.06, p < 0.001). This suggests that supervisor support moderates the direct effect of AI service quality on job satisfaction.
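The moderation test amounts to an OLS regression with a mean-centered product term. The sketch below simulates data whose true coefficients match those reported in Table 3 (0.46 and 0.06); the variable names and error variance are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 428
x = rng.normal(0, 1, n)   # AI service quality (standardized)
w = rng.normal(0, 1, n)   # supervisor support (standardized)
# Simulate an interaction: the x -> y slope grows with w
y = 0.46 * x + 0.20 * w + 0.06 * x * w + rng.normal(0, 0.5, n)

# Mean-center predictors before forming the product term, then fit OLS
xc, wc = x - x.mean(), w - w.mean()
X = np.column_stack([np.ones(n), xc, wc, xc * wc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
_, b_x, b_w, b_xw = beta
print(f"AI quality: {b_x:.2f}, support: {b_w:.2f}, interaction: {b_xw:.2f}")
```

Mean-centering before forming the product does not change the interaction coefficient, but it makes the lower-order coefficients interpretable as effects at average levels of the other predictor.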
The internal consistency of the survey instrument was assessed using Cronbach’s alpha; values above 0.70 are generally considered acceptable. As shown in Table 4, AI service quality, job satisfaction, supervisor support, and perceived organizational justice have Cronbach’s alpha values of 0.93, 0.79, 0.86, and 0.61, respectively: all constructs except perceived organizational justice meet the recommended threshold. Furthermore, the overall Cronbach’s alpha for the total scale is 0.94, well above the acceptable limit, suggesting that the items correlate strongly with their respective constructs and demonstrate internal consistency.
Table 4. Internal Consistency.
Construct | Composite Reliability (Cronbach’s alpha) | AVE | CR
Component 1: AI service quality | 0.93 | 0.550 | 0.933
Component 2: Job satisfaction | 0.79 | 0.565 | 0.796
Component 3: Supervisor support | 0.86 | 0.550 | 0.859
Component 4: Perceived organizational justice | 0.61 | 0.406 | 0.654
Source: Authors’ work.
Convergent validity is considered adequate when the average variance extracted (AVE) is 0.5 or higher. The AVE values for AI service quality, job satisfaction, supervisor support, and perceived organizational justice are 0.550, 0.565, 0.550, and 0.406, respectively. Based on [54], AVE values of 0.5 or higher confirm convergent validity. All constructs meet this criterion, except for perceived organizational justice. However, the composite reliability (CR) scores for the respective components are 0.933, 0.796, 0.859, and 0.654. Ref. [54] also notes that if the AVE is below 0.5 but the CR exceeds 0.6, the construct can still have sufficient convergent validity. Therefore, despite the lower AVE for perceived organizational justice, its composite reliability supports the adequacy of its measurement, reinforcing internal consistency among the scale items.
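AVE and composite reliability are both simple functions of standardized factor loadings. The paper does not report item loadings, so the loadings below are hypothetical; values near 0.75 happen to yield AVE/CR close to the job-satisfaction figures in Table 4.

```python
import numpy as np

def ave_cr(loadings):
    """AVE and composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1 - lam ** 2             # item error variances under standardization
    ave = (lam ** 2).mean()
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())
    return ave, cr

# Hypothetical loadings for a 3-item construct (the paper does not report loadings)
ave, cr = ave_cr([0.75, 0.74, 0.76])
print(round(ave, 3), round(cr, 3))  # -> 0.563 0.794
```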
As shown in Table 5, the VIF values indicate substantial multicollinearity among certain predictors. We chose to take no corrective action. According to [55], “Even with VIF values that greatly exceed the rules of 4 or 10, one can often confidently conclude from regression analyses.” This is because multicollinearity is only one possible source of inflated standard errors.
Table 5. Variance inflation factors.
Predictor | AI | POJ | SS | POJ:SS
VIF | 12.12 | 1.12 | 15.97 | 34.48
Source: Authors’ work.
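Each VIF is computed by regressing one predictor on all the others. The sketch below uses synthetic data (our own variable names) to show why product terms like those in Table 5 typically produce large VIFs.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of predictor matrix X."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        target = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(target)), others])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        resid = target - A @ beta
        r2 = 1 - resid.var() / target.var()   # R^2 of predictor j on the rest
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Two strongly correlated predictors plus their product term: interaction
# terms of this kind typically drive the large VIFs seen in Table 5.
rng = np.random.default_rng(3)
a = rng.normal(0, 1, 500)
b = 0.9 * a + rng.normal(0, 0.3, 500)
vifs = vif(np.column_stack([a, b, a * b]))
print(np.round(vifs, 2))
```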
Simple slope analyses confirm that the positive association between AI service quality and job satisfaction becomes stronger with increasing levels of supervisor support. At low, mean, and high levels of supervisor support, the relationship remains significant and positive, with the strongest effect observed at high levels of support.
A visual interaction plot (Figure 2) illustrates that the slope of AI service quality on job satisfaction increases as supervisor support increases, confirming the moderating effect. These findings highlight the critical role of supportive leadership in enhancing the positive effects of AI on job satisfaction.
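The simple-slope calculation behind Figure 2 follows directly from the moderated model y = b0 + b1·x + b2·w + b3·x·w: the conditional effect of AI service quality at a given level of supervisor support is b1 + b3·w. The sketch below uses the coefficients reported above (0.46 and 0.06) and assumes, for illustration only, an SD of 1.0 for centered supervisor support.

```python
# Simple slopes of job satisfaction on AI service quality at three levels of
# supervisor support. b1 and b3 are the coefficients reported in the text;
# the SD of (centered) supervisor support is a hypothetical 1.0.
b1, b3 = 0.46, 0.06
sd_w = 1.0
for label, w in [("low (-1 SD)", -sd_w), ("mean", 0.0), ("high (+1 SD)", sd_w)]:
    slope = b1 + b3 * w   # conditional effect of AI quality at this support level
    print(f"{label:12s}: slope = {slope:.2f}")
```

Under these assumptions the slope rises from 0.40 at low support to 0.52 at high support, reproducing the fan-shaped pattern the interaction plot illustrates.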

5. Discussion

5.1. Theoretical Implications

This study deepens theoretical insight at the intersection of artificial intelligence (AI), human resource management, and sustainability by exploring how AI-driven service quality, perceived fairness, and supervisor support jointly influence job satisfaction within service-oriented High-Performance Work Systems (HPWSs). Grounded in equity theory and organizational justice theory, these findings offer a nuanced understanding of how digital technologies interact with human-centered practices in sustainability-driven organizations.
The observed strong positive link between AI service quality and job satisfaction confirms earlier theoretical assertions that when AI tools are perceived as responsive, reliable, and supportive, employees report significantly higher satisfaction levels. This supports a growing body of literature suggesting that when employees view AI tools as equitable and effective, these technologies contribute positively to engagement, motivation, and organizational sustainability [2,31]. These results reinforce the potential for AI, when embedded thoughtfully into HRM systems, to act as both a performance enhancer and a promoter of employee well-being.
However, this study presents an unexpected divergence from traditional theory: perceived organizational justice did not mediate the relationship between AI service quality and job satisfaction. Although higher AI service quality was associated with improved fairness perceptions, these perceptions did not significantly influence job satisfaction. This challenges conventional assumptions in organizational justice theory, which typically link fairness evaluations to emotional and motivational outcomes. One explanation may lie in the psychological detachment often associated with algorithmic decision-making. Unlike human supervisors, AI systems may lack the relational and contextual sensitivity necessary to elicit strong affective responses. Employees might cognitively recognize AI processes as fair without forming the emotional bonds typically associated with justice perceptions. This calls for a reconsideration of how organizational justice is conceptualized and measured in increasingly digital workplaces, suggesting the need for updated frameworks that account for technology-mediated decision-making.
The most impactful theoretical contribution emerges from the moderated mediation analysis. Supervisor support significantly strengthened the positive relationship between AI service quality and job satisfaction, highlighting its role as a critical boundary condition. This finding echoes existing HPWS literature, which emphasizes the importance of leadership in facilitating employee adaptation to technological change [56]. From a theoretical standpoint, supervisors function not only as operational guides but also as emotional mediators, helping employees interpret and adapt to AI systems. By offering empathy, clarification, and support, supervisors render AI more approachable and meaningful within the work context.
The direct relationship between AI service quality and job satisfaction affirms existing theoretical models and practical perspectives that see well-integrated AI as an enabler of meaningful work. While perceived organizational justice was hypothesized as a mediator, the data did not support this pathway, possibly due to contextual factors, measurement limitations, or the indirect nature of justice perceptions in this sample. In contrast, supervisor support significantly moderated the AI–satisfaction link, reinforcing its importance as a humanizing force in AI-driven environments.
Although the positive link between AI service quality and job satisfaction aligns with global literature suggesting that well-designed AI systems can enhance psychological availability and job motivation by offloading repetitive tasks [57,58], the generalizability of this relationship may vary significantly across cultures. For example, in collectivist cultures such as China or Japan, where AI is often perceived as a tool for enhancing group efficiency, employees may react more positively to AI adoption than in individualist cultures such as the USA, where AI may be viewed as a threat to personal autonomy or job identity [59,60]. This cultural filter could influence whether AI is seen as empowering or displacing, thus affecting its impact on job satisfaction. Moreover, global differences in AI adoption maturity, such as higher adoption rates in Asia and lower integration in Europe and the USA [61], may lead to varying levels of familiarity and trust in AI systems, further affecting outcomes. In terms of occupational generalization, our sample largely consisted of mid-level employees in managerial or supervisory roles within structured departments, which may limit the applicability of these findings to frontline service roles or physically intensive jobs, where AI's influence on satisfaction may operate through entirely different pathways, such as physical relief or customer interaction support [62,63].
Studies in healthcare, hospitality, and mining sectors across Asia and Africa have shown divergent mediating or moderating mechanisms, such as psychological need satisfaction or job complexity, indicating that the nature of the job and the sector can significantly alter AI’s effects [10,58,64,65]. Therefore, while the core relationship observed here holds promise, it should not be assumed to universally apply without adjusting for cultural attitudes, AI adoption levels, industry-specific tasks, and organizational structures.
Collectively, these findings advance the theory of sustainable HRM by underscoring the combined importance of digital capability and human connection in HPWS environments. AI systems alone, regardless of their performance, are insufficient for achieving lasting job satisfaction. Rather, it is the integration of supportive leadership that enables organizations to bridge the technical–emotional divide, reinforcing trust, motivation, and sustainable engagement. This study thus contributes to an emerging theoretical perspective—hybrid sustainability, which advocates for a balance between technological advancement and relational management in the design of future-ready, socially sustainable workplaces.

5.2. Practical Implications

This study presents actionable guidance for service organizations aiming to implement sustainable, human-centered AI strategies within HPWSs. As AI becomes more prevalent in human resource functions, from scheduling to performance assessment, organizations must recognize that technology alone cannot guarantee positive outcomes. Successful integration hinges on thoughtful design, transparent communication, and supportive leadership structures that prioritize both efficiency and employee well-being.
First, the positive influence of AI service quality on job satisfaction highlights the importance of optimizing AI systems for both performance and employee experience. Organizations should treat AI not only as a technical tool but as a contributor to the work environment. Practical interventions could include co-designing interfaces with employee feedback, embedding clarity into AI outputs, and offering real-time system responsiveness. These efforts can increase employees’ trust in AI tools and foster a sense of agency and control, which is essential for sustainable work engagement. When AI is seen as fair, accurate, and user-friendly, it supports motivation, productivity, and psychological comfort.
Second, although perceived organizational justice did not serve as a mediator in the AI–satisfaction link, the observed tensions between AI quality and fairness perceptions reveal a critical implementation gap. High-performing systems may still be viewed as impersonal if employees do not understand their underlying logic or decision-making criteria. To bridge this gap, service organizations should prioritize AI transparency and procedural clarity. Initiatives such as AI orientation programs, interactive dashboards, or staff briefings on algorithmic processes can demystify AI operations. Promoting AI literacy not only enhances user competence but also fosters perceptions of fairness, an important aspect of organizational justice in technology-driven settings.
Third, and most significantly, the moderating role of supervisor support underscores the centrality of human leadership in AI-integrated HPWSs. Supervisors act as intermediaries between AI systems and employee experiences. Training programs that equip leaders to explain AI decisions, facilitate user feedback, and respond empathetically to employee concerns can significantly strengthen the outcomes of AI implementation. Supportive supervisory behaviors such as encouragement, emotional awareness, and collaborative problem-solving contribute to a psychologically safe environment where employees feel heard and valued despite increasing digitalization. These leadership practices are particularly critical in service industries, where interpersonal dynamics heavily influence job satisfaction and performance.
More broadly, these findings stress that achieving sustainability in service organizations requires a dual commitment: integrating high-quality digital tools and reinforcing human-centric work cultures. Technology should not displace human judgment and interaction but should be complemented by leadership that reinforces fairness, inclusion, and well-being. In this sense, AI becomes a vehicle not just for operational efficiency but for social sustainability when paired with responsible supervisory practices.
For tourism and service enterprises striving toward sustainable transformation, this means adopting an integrative approach to HPWSs, where AI tools are purposefully aligned with leadership behaviors and justice-enhancing practices. Organizations that invest in both smart systems and supportive supervisors are more likely to cultivate resilient, engaged, and future-ready workforces capable of thriving in evolving digital service environments.

5.3. Limitations and Suggestions for Future Research

While this study advances understanding of how AI service quality, supervisory support, and perceived organizational justice interact to influence job satisfaction within service-based HPWSs, several limitations must be acknowledged. The results of this study, which demonstrate the significant positive effect of AI service quality on job satisfaction and the moderating role of supervisor support, must be viewed not just as an immediate snapshot but as part of a larger, evolving picture. These limitations provide direction for future studies aiming to refine sustainable work systems in increasingly digitized environments.
To begin with, the study's cross-sectional design limits the ability to draw causal inferences. Although the associations among variables were statistically strong, temporal patterns remain unexplored. Future investigations should employ longitudinal, time-lagged, or experimental methodologies to examine whether improvements in AI functionality or changes in supervisory behavior produce sustained effects on employee job satisfaction and fairness perceptions. Such designs could offer more nuanced insights into the long-term dynamics of AI adoption and leadership support in HPWS environments.
Second, the participant sample was recruited through Amazon Mechanical Turk (MTurk), a platform that offers a wide demographic reach but introduces potential biases. MTurk workers may differ from conventional service sector employees in terms of their organizational affiliation, experience with AI tools, and job structures. These contextual differences may influence their perceptions of justice and support. To enhance external validity, future studies should replicate this research across multiple service industries and real-world organizational settings, particularly in sectors like tourism, hospitality, or retail, where AI is integrated into customer-facing roles.
Another limitation stems from the measurement of perceived organizational justice. The internal consistency of the justice scale was relatively modest (α = 0.61), suggesting that the construct may not have been fully captured. Future studies should adopt more robust instruments that reflect the multidimensional nature of perceived organizational justice, spanning distributive, procedural, and interactional fairness, especially in the context of AI-mediated decisions. Given that perceptions of fairness are likely to evolve as employees gain more experience with AI systems, dynamic or context-sensitive justice scales may offer a better fit for these emerging environments.
Moreover, the non-significant mediating role of perceived justice between AI service quality and job satisfaction raises theoretical and empirical questions. Although AI quality influenced perceptions of fairness, these perceptions did not significantly impact satisfaction outcomes. This divergence suggests that fairness, as traditionally defined, may hold different meanings in algorithmic settings. Future research could explore additional moderating variables such as AI transparency, user trust, organizational culture, or employee technological readiness. These factors may shape how justice perceptions are interpreted and whether they translate into affective work outcomes.
Interpreting the results on AI service quality and job satisfaction requires careful consideration of the sample's notable gender disparity, with men making up roughly 71% and women 29%. This disparity may not merely reflect sampling error but may instead mirror broader, well-documented gender gaps in AI engagement and perception. Recent research indicates that women generally report higher levels of AI anxiety and lower perceived competence, trust, and usage of AI technologies compared to men [66,67]. Furthermore, observational evidence suggests that women are less likely to engage directly with generative AI tools, a trend shaped by structural and psychological factors as well as occupational roles [68]. These disparities suggest that female employees may perceive and assess the quality of AI services differently from their male counterparts, possibly with greater skepticism or concern. This study's findings may therefore be biased toward more positive views of AI due to the preponderance of male respondents, under-representing the difficulties, fears, or resistance that may be more common among women. This underscores the need for gender-balanced research to ensure that AI integration strategies and workplace interventions are inclusive and responsive to a range of perspectives.
Finally, although supervisor support emerged as a critical moderator in this study, the underlying mechanisms remain underexplored. It is unclear which specific supervisory behaviors, such as mentoring and emotional reassurance, are most effective in supporting employees’ adaptation to AI technologies. Future research should investigate the micro-dynamics of supervisor–employee interactions in digitally transformed workplaces, with a focus on identifying leadership competencies that promote resilience, clarity, and well-being during technological change. This could inform training programs aimed at promoting sustainable, supportive leadership in AI-enabled service environments.
In summary, the limitations identified here point to a growing research agenda at the intersection of digital transformation, human resource management, and organizational justice. Future studies should adopt interdisciplinary and multi-method approaches that address these complexities, enabling the design of High-Performance Work Systems that integrate technological innovation with fairness, human-centered leadership, and long-term sustainability.

6. Conclusions

This study explored the relationship between AI service quality and job satisfaction within the context of High-Performance Work Systems (HPWSs) in service organizations, examining the mediating role of perceived organizational justice and the moderating role of supervisor support. The findings reveal that AI service quality positively influences job satisfaction, and this relationship is significantly strengthened by supportive supervisory practices. However, perceived organizational justice did not significantly mediate the relationship, highlighting the complexity of fairness perceptions in AI-integrated workplaces.

Author Contributions

The proposed research framework was conceptualized by T.A.A. The structure of this study was discussed by T.A.A. and J.S. This article was written by T.A.A. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and approved by the Business Faculty Research Ethics Committee of Girne American University (Ref No.: 2023-2024-SPR-003, date of approval: 15 March 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable opinions that allowed us to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, K.; Lepak, D.P.; Hu, J.; Baer, J.C. How does human resource management influence organizational outcomes? A meta-analytic investigation of mediating mechanisms. Acad. Manag. J. 2012, 55, 1264–1294. [Google Scholar] [CrossRef]
  2. Kim, S.-W.; Kong, J.-H.; Lee, S.-W.; Lee, S. Recent advances of artificial intelligence in manufacturing industrial sectors: A review. Int. J. Precis. Eng. Manuf. 2022, 23, 111–129. [Google Scholar] [CrossRef]
  3. DeLone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  4. Ottosson, F.; Westling, M. Artificial Intelligence and Its Breakthrough in the Nordics: A Study of the Relationship Between AI Usage and Financial Performance in the Nordic Market. Unpublished Manuscript. 2020. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1446023&dswid=5791 (accessed on 10 March 2025).
  5. Tekic, Z.; Koroteev, D. From disruptively digital to proudly analog: A holistic typology of digital transformation strategies. Bus. Horiz. 2019, 62, 683–693. [Google Scholar] [CrossRef]
  6. McColl, R.; Michelotti, M. Sorry, could you repeat the question? Exploring video-interview recruitment practice in HRM. Hum. Resour. Manag. J. 2019, 29, 637–656. [Google Scholar] [CrossRef]
  7. Abraham, M.; Niessen, C.; Schnabel, C.; Lorek, K.; Grimm, V.; Möslein, K.; Wrede, M. Electronic monitoring at work: The role of attitudes, functions, and perceived control for the acceptance of tracking technologies. Hum. Resour. Manag. J. 2019, 29, 657–675. [Google Scholar] [CrossRef]
  8. Ooi, K.B.; Tan, G.W.-H.; Al-Emran, M.; Al-Sharafi, A.M.; Capatina, A.; Chakraborty, A.; Dwivedi, Y.K.; Huang, T.-L.; Kar, A.K.; Lee, V.-H.; et al. The potential of generative artificial intelligence across disciplines: Perspectives and future directions. J. Comput. Inf. Syst. 2023, 65, 76–107. [Google Scholar] [CrossRef]
  9. Tsamados, A.; Aggarwal, N.; Cowls, J.; Morley, J.; Roberts, H.; Taddeo, M.; Floridi, L. The ethics of algorithms: Key problems and solutions. Ai Soc. 2022, 37, 215–230. [Google Scholar] [CrossRef]
  10. Nguyen, T.M.; Malik, A. A two-wave cross-lagged study on AI service quality: The moderating effects of the job level and job role. Br. J. Manag. 2022, 33, 1221–1237. [Google Scholar] [CrossRef]
  11. Kaya, O.; Schildbach, J.; Schneider, S. Artificial intelligence in banking: A lever for profitability with limited implementation to date. Dtsch. Bank Res. 2019, 1–9. [Google Scholar]
  12. Alafeshat, R.; Tanova, C. Servant leadership style and high-performance work system practices: Pathway to a sustainable Jordanian airline industry. Sustainability 2019, 11, 6191. [Google Scholar] [CrossRef]
  13. Öksüz, M.; Tosyalı, H.; Tosyali, F. The link between supervisor support, servicing efficacy and job satisfaction among frontline hotel employees: An investigation in Turkey. Pers. Rev. 2023, 52, 1773–1790. [Google Scholar] [CrossRef]
  14. Prentice, C.; Nguyen, M. Engaging and retaining customers with AI and employee service. J. Retail. Consum. Serv. 2020, 56, 102186. [Google Scholar] [CrossRef]
  15. Wixom, B.H.; Todd, P.A. A theoretical integration of user satisfaction and technology acceptance. Inf. Syst. Res. 2005, 16, 85–102. [Google Scholar] [CrossRef]
  16. De Cremer, D.; Kasparov, G. AI should augment human intelligence, not replace it. Harv. Bus. Rev. 2021, 18. [Google Scholar]
  17. Abdullah, M.I.; Huang, D.; Sarfraz, M.; Ivascu, L.; Riaz, A. Effects of internal service quality on nurses’ job satisfaction, commitment and performance: Mediating role of employee well-being. Nurs. Open 2021, 8, 607–619. [Google Scholar] [CrossRef]
  18. Rathore, B. Digital transformation 4.0: A case study of LK Bennett from marketing perspectives. Int. J. Enhanc. Res. Manag. Comput. Appl. 2023, 10, 45–54. [Google Scholar] [CrossRef]
  19. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Acad. Manag. Rev. 2021, 46, 192–210. [Google Scholar] [CrossRef]
  20. Plastino, E.; Purdy, M. Game changing value from Artificial Intelligence: Eight strategies. Strategy Leadersh. 2018, 46, 16–22. [Google Scholar] [CrossRef]
  21. Sun, Y.; Li, S.; Yu, L. The dark sides of AI personal assistant: Effects of service failure on user continuance intention. Electron. Mark. 2022, 32, 17–39. [Google Scholar] [CrossRef]
  22. Dorta-Afonso, D.; Romero-Domínguez, L.; Benítez-Núñez, C. It’s worth it! High performance work systems for employee job satisfaction: The mediational role of burnout. Int. J. Hosp. Manag. 2023, 108, 103364. [Google Scholar] [CrossRef]
  23. Adams, J.S. Inequity in social exchange. In Advances in Experimental Social Psychology; Academic Press: Cambridge, MA, USA, 1965; Volume 2, pp. 267–299. [Google Scholar]
  24. Greenberg, J. A taxonomy of organizational justice theories. Acad. Manag. Rev. 1987, 12, 9–22. [Google Scholar] [CrossRef]
  25. Colquitt, J.A.; Conlon, D.E.; Wesson, M.J.; Porter, C.O.; Ng, K.Y. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. J. Appl. Psychol. 2001, 86, 425. [Google Scholar] [CrossRef] [PubMed]
  26. Leventhal, G.S. What should be done with equity theory? New approaches to the study of fairness in social relationships. In Social Exchange: Advances in Theory and Research; Springer: New York, NY, USA, 1980; pp. 27–55. [Google Scholar]
  27. Binns, R.; Van Kleek, M.; Veale, M.; Lyngs, U.; Zhao, J.; Shadbolt, N. ‘It’s Reducing a Human Being to a Percentage’ Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar]
  28. Biswas, M.I.; Talukder, M.S.; Khan, A.R. Who do you choose? Employees’ perceptions of artificial intelligence versus humans in performance feedback. China Account. Financ. Rev. 2024, 26, 512–532. [Google Scholar] [CrossRef]
  29. Qualee. Leading with Transparency: AI Ethics and Trust in the Workplace. Available online: https://www.qualee.com/blog/leading-with-transparency-ai-ethics-and-trust-in-the-workplace (accessed on 7 March 2025).
  30. Köchling, A.; Wehner, M.C.; Ruhle, S.A. This (AI)n’t fair? Employee reactions to artificial intelligence (AI) in career development systems. Rev. Manag. Sci. 2024, 19, 1195–1228. [Google Scholar] [CrossRef]
  31. Gursoy, D.; Chi, C.G.; Lu, L.; Nunkoo, R. Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int. J. Inf. Manag. 2019, 49, 157–169. [Google Scholar] [CrossRef]
  32. Yu, L.; Li, Y. Artificial intelligence Decision-Making transparency and employees’ trust: The parallel multiple mediating effect of effectiveness and discomfort. Behav. Sci. 2022, 12, 127. [Google Scholar] [CrossRef]
  33. Rulandari, N. The effect of supervision and professionalism on staff performance at the office of social affairs in east Jakarta administrative city. Int. J. Humanit. Soc. Sci. 2017, 7, 184–192. [Google Scholar]
  34. Maertz, C.P., Jr.; Griffeth, R.W.; Campbell, N.S.; Allen, D.G. The effects of perceived organizational support and perceived supervisor support on employee turnover. J. Organ. Behav. 2007, 28, 1059–1075. [Google Scholar] [CrossRef]
  35. Okeke, M.; Onyekwelu, N.; Akpua, J.; Dunkwu, C. Performance management and employee productivity in selected large organizations in south-East, Nigeria. J. Bus. Manag. 2019, 5, 57–70. [Google Scholar]
  36. Weiss, H.M.; Cropanzano, R. Affective events theory. Res. Organ. Behav. 1996, 18, 1–74. [Google Scholar]
  37. Reblin, M.; Uchino, B.N. Social and emotional support and its implication for health. Curr. Opin. Psychiatry 2008, 21, 201–205. [Google Scholar] [CrossRef]
  38. Ryan, E. Building the Emotionally Learned Negotiator; MIT Press: Cambridge, MA, USA, 2006; Volume 22, pp. 209–225. [Google Scholar]
  39. Lee, H.; Chui, J. The mediating effect of interactional justice on human resource practices and organizational support in a healthcare organization. J. Organ. Eff. People Perform. 2019, 6, 129–144. [Google Scholar] [CrossRef]
  40. Hobfoll, S.E. Conservation of resources: A new attempt at conceptualizing stress. Am. Psychol. 1989, 44, 513. [Google Scholar] [CrossRef] [PubMed]
  41. Eisenberger, R.; Stinglhamber, F.; Vandenberghe, C.; Sucharski, I.L.; Rhoades, L. Perceived supervisor support: Contributions to perceived organizational support and employee retention. J. Appl. Psychol. 2002, 87, 565. [Google Scholar] [CrossRef] [PubMed]
  42. Buller, D.B.; Burgoon, J.K. Interpersonal deception theory. Commun. Theory 1996, 6, 203–242. [Google Scholar] [CrossRef]
  43. Syarienda, Y.; Basri, H.; Fahlevi, H. Problematika Penerapan Akuntansi Berbasis Akrual Pada Pemerintah Daerah Aceh Tengah. J. Perspekt. Ekon. Darussalam 2018, 4, 56–68. [Google Scholar] [CrossRef]
  44. Graen, G.B.; Wakabayashi, F. Cross-cultural leadership making: Bridging American and Japanese diversity for team advantage. Handb. Ind. Organ. Psychol. 1994, 4, 415–446. [Google Scholar]
  45. Brower, H.H.; Schoorman, F.D.; Tan, H.H. A model of relational leadership: The integration of trust and leader–member exchange. Leadersh. Q. 2000, 11, 227–250. [Google Scholar] [CrossRef]
  46. Chiang, C.-F.; Wu, K.-P. The influences of internal service quality and job standardization on job satisfaction with supports as mediators: Flight attendants at branch workplace. Int. J. Hum. Resour. Manag. 2014, 25, 2644–2666. [Google Scholar] [CrossRef]
  47. Liden, R.C.; Wayne, S.J.; Sparrowe, R.T. An examination of the mediating role of psychological empowerment on the relations between the job, interpersonal relationships, and work outcomes. J. Appl. Psychol. 2000, 85, 407. [Google Scholar] [CrossRef]
  48. Mwasawa, D.N.; Wainaina, L. Performance Supervision and Employees’ Productivity in the Ministry of Lands, Environment and Natural Resources of Taita Taveta County, Kenya. Eur. Sci. J. ESJ 2021, 17, 128. [Google Scholar] [CrossRef]
  49. Hannang, A.; Qamaruddin, M.Y. The effect of supervision levels on employees’ performance levels. In Proceedings of International Conference on Community Development (ICCD 2020); Atlantis Press: Dordrecht, The Netherlands, 2020; pp. 1–5. [Google Scholar]
  50. Lee, C.-W.; Kusumah, A. Influence of supervision on employee performance with work motivation as an intervening variable. Rev. Integr. Bus. Econ. Res. 2020, 9, 240–252. [Google Scholar]
  51. Netemeyer, R.G.; Boles, J.S.; McKee, D.O.; McMurrian, R. An investigation into the antecedents of organizational citizenship behaviors in a personal selling context. J. Mark. 1997, 61, 85–98. [Google Scholar] [CrossRef]
  52. Zellars, K.L.; Perrewé, P.L. Affective personality and the content of emotional social support: Coping in organizations. J. Appl. Psychol. 2001, 86, 459. [Google Scholar] [CrossRef] [PubMed]
  53. Rego, A.; Cunha, M.P.E. Organisational justice and citizenship behaviors: A study in the Portuguese cultural context. Appl. Psychol. 2010, 59, 404–430. [Google Scholar] [CrossRef]
  54. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  55. O’Brien, R.M. A caution regarding rules of thumb for variance inflation factors. Qual. Quant. 2007, 41, 673–690. [Google Scholar] [CrossRef]
  56. Chen, M.Y.C.; Lin, C.Y.Y.; Lin, H.E.; McDonough, E.F. Does transformational leadership facilitate technological innovation? The moderating roles of innovative culture and incentive compensation. Asia Pac. J. Manag. 2012, 29, 239–264. [Google Scholar] [CrossRef]
  57. Liu, X.; Li, Y. Examining the Double-Edged Sword Effect of AI Usage on Work Engagement: The Moderating Role of Core Task Characteristics Substitution. Behav. Sci. 2025, 15, 206. [Google Scholar] [CrossRef]
  58. Azzabi, M.; Bouchnak, M. AI Service Quality, Employee Satisfaction, and Well-Being in Modern Workplaces. In IGI Global eBooks; Scientific Publishing: Singapore, 2024; pp. 71–102. [Google Scholar] [CrossRef]
  59. Simmol. Cultural Differences in How People React to AI Replacing Jobs? R/Singularity. Available online: https://www.reddit.com/r/singularity/comments/1lpe5o0/cultural_differences_in_how_people_react_to_ai/ (accessed on 18 July 2025).
  60. Xavier, D.F.; Korunka, C.; Reiter-Palmon, R. AI integration and workforce development: Exploring job autonomy and creative self-efficacy in a global context. PLoS ONE 2025, 20, e0319556. [Google Scholar] [CrossRef] [PubMed]
  61. Iamandi, I.; Constantin, L.; Munteanu, S.M.; Cernat-Gruici, B. Insights on the Relationship between Artificial Intelligence Skills and National Culture. Amfiteatru Econ. 2024, 26, 741. [Google Scholar] [CrossRef]
  62. Gaskell, A. Does AI Affect Job Satisfaction? Cybernews. Available online: https://cybernews.com/ai-news/does-ai-affect-job-satisfaction/ (accessed on 7 July 2025).
  63. Ronanki, R. Rethinking Work with AI: What Stanford’s Groundbreaking Workforce Study Means for Healthcare’s Future. Forbes. Available online: https://www.forbes.com/sites/forbesbooksauthors/2025/07/18/rethinking-work-with-ai-what-stanfords-groundbreaking-workforce-study-means-for-healthcares-future/ (accessed on 18 July 2025).
  64. Arboh, F.; Zhu, X.; Atingabili, S.; Yeboah, E.; Drokow, E.K. From fear to empowerment: The impact of employees AI awareness on workplace well-being–a new insight from the JD–R model. J. Health Organ. Manag. 2025. [Google Scholar] [CrossRef]
  65. Ansong, A.; Gnankob, R.I.; Agyemang, I.O.; Issau, K.; Okorley, E.N.A. Organizational justice, supervisor-provided resources and duty orientation: Lessons from the mining sector. Eur. J. Manag. Bus. Econ. 2024. [Google Scholar] [CrossRef]
  66. Russo, C.; Romano, L.; Clemente, D.; Iacovone, L.; Gladwin, T.E.; Panno, A. Gender differences in artificial intelligence: The role of artificial intelligence anxiety. Front. Psychol. 2025, 16, 1559457. [Google Scholar] [CrossRef]
  67. Franken, S.; Mauritz, N.; Wattenberg, M. Gender Differences Regarding the Perception of Artificial Intelligence. ResearchGate. 2020. Available online: https://www.researchgate.net/publication/339177652_Gender_Differences_Regarding_the_Perception_of_Artificial_Intelligence (accessed on 18 July 2025).
  68. Møgelvang, A.; Bjelland, C.; Grassini, S.; Ludvigsen, K. Gender Differences in the Use of Generative Artificial Intelligence Chatbots in Higher Education: Characteristics and Consequences. Educ. Sci. 2024, 14, 1363. [Google Scholar] [CrossRef]
Figure 1. Theoretical Model. Source: Authors’ work.
Figure 2. Source: Authors’ work.
Table 1. Means, standard deviations, Cronbach's alpha values, and correlation coefficients.
| Variable | 1 | 2 | 3 | 4 | Mean | SD | α |
|---|---|---|---|---|---|---|---|
| 1. AI Service Quality | | | | | 5.29 | 0.93 | 0.93 |
| 2. Job Satisfaction | 0.851 ** | | | | 5.27 | 1.02 | 0.79 |
| 3. Supervisor Support | 0.899 ** | 0.805 ** | | | 5.28 | 1.00 | 0.86 |
| 4. Perceived Organizational Justice | −0.239 ** | −0.198 ** | 0.271 ** | | 4.29 | 0.26 | 0.61 |
** p < 0.001; Source: Authors’ work.
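The α column in Table 1 reports Cronbach's alpha for each scale. As a reference for readers checking the reliability figures, a minimal sketch of that internal-consistency statistic (the function name and the synthetic data in any test are illustrative, not the study's own code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

By the usual 0.70 convention, the 0.93, 0.79 and 0.86 values in Table 1 indicate acceptable internal consistency, while the 0.61 for Perceived Organizational Justice falls short of it.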
Table 2. Mediation effect.
| Effect | b | SE | t | 95% CI (Bootstrap) |
|---|---|---|---|---|
| Total effect: AI Service Quality → Job Satisfaction | 0.9314 | 0.0279 | 33.4024 ** | |
| Path a (predictor → mediator): AI Service Quality → Perceived Organizational Justice | −0.0665 | 0.0131 | −5.0787 ** | |
| Path b (mediator → outcome): Perceived Organizational Justice → Job Satisfaction | 0.0208 | 0.1033 | 0.2015 | |
| Direct effect (controlling for mediator): AI Service Quality → Job Satisfaction | 0.9328 | 0.0287 | 32.4465 ** | |
| Indirect effect (bootstrap) | −0.0014 | 0.0057 | | (−0.0128, 0.0103) |
| Indirect effect (normal distribution) | −0.0014 | 0.0070 | −0.1976 | |
** p < 0.001; Source: Authors’ work.
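The bootstrap confidence interval for the indirect effect in Table 2 can be obtained with a percentile bootstrap over the product of the a and b paths, the PROCESS-style approach. A minimal sketch, assuming OLS regressions and case resampling (function names and any synthetic inputs are illustrative):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect: a (x -> m) times b (m -> y, controlling x)."""
    ones = np.ones_like(x)
    # Path a: slope of the predictor in an OLS regression of the mediator on the predictor
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # Path b: slope of the mediator in an OLS regression of the outcome on predictor and mediator
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, level=95, seed=0):
    """Percentile-bootstrap CI for the indirect effect, resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # one bootstrap resample of row indices
        draws[i] = indirect_effect(x[idx], m[idx], y[idx])
    half = (100 - level) / 2
    return tuple(np.percentile(draws, [half, 100 - half]))
```

An interval that straddles zero, like Table 2's (−0.0128, 0.0103), is the criterion by which the mediation hypothesis was rejected.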
Table 3. Moderation effect.
Dependent variable: Job Satisfaction.
| Model Component | b | SE | t |
|---|---|---|---|
| AI Service Quality (AI) | 0.46 | 0.09 | 4.86 ** |
| Perceived Organizational Justice (POJ) | 0.11 | 0.10 | 1.08 |
| Supervisor Support (SS) | −0.09 | 0.10 | −0.90 |
| AI × Supervisor Support interaction (int_1) | 0.06 | 0.02 | 3.75 ** |
| Conditional effect of AI at 16th percentile of SS | 0.73 | 0.06 | 11.79 ** |
| Conditional effect of AI at 50th percentile of SS | 0.79 | 0.06 | 12.30 ** |
| Conditional effect of AI at 84th percentile of SS | 0.85 | 0.07 | 12.16 ** |
** p < 0.001; Source: Authors’ work.
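The conditional effects at the bottom of Table 3 follow the standard simple-slopes formula for a moderated regression: the slope of AI service quality at a given level of supervisor support is b_AI + b_interaction × SS. A minimal sketch using Table 3's coefficients (the moderator values 4.5, 5.5 and 6.5 are illustrative placeholders, not the sample's exact 16th/50th/84th percentiles):

```python
def conditional_effect(b_predictor, b_interaction, moderator_value):
    """Simple slope of the predictor at a chosen moderator value:
    dY/dX = b_predictor + b_interaction * moderator_value."""
    return b_predictor + b_interaction * moderator_value

# Coefficients from Table 3: b_AI = 0.46, b_int = 0.06.
# Each one-unit increase in supervisor support raises the AI slope by b_int.
slopes = [conditional_effect(0.46, 0.06, ss) for ss in (4.5, 5.5, 6.5)]
```

Because the interaction coefficient is positive, the AI–satisfaction slope grows with supervisor support, which is the moderation pattern the study reports.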
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Atoyebi, T.A.; Sopuru, J. Humanizing AI in Service Workplaces: Exploring Supervisor Support as a Moderator in HPWSs. Sustainability 2025, 17, 7892. https://doi.org/10.3390/su17177892


