Article

Algorithmic Management in Hospitality: Examining Hotel Employees’ Attitudes and Work–Life Balance Under AI-Driven HR Systems

by Milena Turčinović 1, Aleksandra Vujko 1,* and Vuk Mirčetić 2

1 Faculty of Tourism and Hospitality Management, Singidunum University, Danijelova No. 32, 11000 Belgrade, Serbia
2 Faculty of Applied Management, Economics and Finance in Belgrade, University Business Academy in Novi Sad, Jevrejska 24, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
Tour. Hosp. 2025, 6(4), 203; https://doi.org/10.3390/tourhosp6040203
Submission received: 29 June 2025 / Revised: 8 September 2025 / Accepted: 23 September 2025 / Published: 4 October 2025
(This article belongs to the Special Issue Digital Transformation in Hospitality and Tourism)

Abstract

This study investigates hotel employees’ perceptions of AI-driven human resource (HR) management systems within the Accor Group’s properties across three major European cities: Paris, Berlin, and Amsterdam. These diverse urban contexts, spanning a broad portfolio of hotel brands from luxury to economy, provide a rich setting for exploring how AI integration affects employee attitudes and work–life balance. A total of 437 employees participated in the survey, offering a robust dataset for structural equation modeling (SEM) analysis. Exploratory factor analysis identified two primary factors shaping perceptions: AI Perceptions, which encompasses employee views on AI’s impact on job performance, communication, recognition, and retention, and Balanced Management, reflecting attitudes toward fairness, personal consideration, productivity, and skill development in AI-managed environments. The results reveal a complex but optimistic view, where employees acknowledge AI’s potential to enhance operational efficiency and career optimism but also express concerns about flexibility loss and the need for human oversight. The findings underscore the importance of transparent communication, contextual sensitivity, and continuous training in implementing AI systems that support both organizational goals and employee well-being. This study contributes valuable insights to hospitality management by highlighting the relational and ethical dimensions of algorithmic HR systems across varied organizational and cultural settings.

1. Introduction

The advent of algorithmic management has significantly transformed the hospitality industry, reshaping operational processes, employee experiences, and ethical considerations (Yang, 2025; Jianu et al., 2025; Maiti et al., 2025). As digital technologies profoundly affect all economies and become increasingly integrated into service delivery (Mirčetić & Mihić, 2022), algorithmic systems promise enhanced efficiency, precision, and responsiveness (Luo & Yi, 2025). One of the primary advantages of algorithmic management in hospitality is its ability to significantly enhance operational efficiency (H. Zhang et al., 2025). Through sophisticated algorithms, managers can optimize resource allocation, such as staffing levels and inventory management, ensuring that resources are deployed where they are needed most, thereby reducing waste and increasing productivity (Diwan, 2025). According to Wu et al. (2024), dynamic scheduling systems powered by algorithms can adjust shift patterns in real time based on fluctuating customer demand, which is especially critical during peak hours or special events. This not only improves service delivery but also reduces labor costs by preventing overstaffing (Schwartz et al., 2025). Automation likewise minimizes human error in decision-making processes (Webster & Ivanov, 2020; Mandić et al., 2024): as Mojoodi et al. (2025) note, algorithms that handle reservation systems can prevent the overbooking and double-booking issues that traditionally relied on error-prone manual oversight. Algorithmic management also enhances responsiveness to customer preferences and behaviors (Rojas & Jatowt, 2025). By analyzing data from online reviews, booking patterns, and social media interactions, these systems can tailor offerings, personalize marketing, and optimize service delivery, thereby elevating customer satisfaction (Jia et al., 2025).
The integration of such data-driven approaches leads to a more agile and competitive hospitality operation, capable of adapting swiftly to changing market conditions (J. Xu et al., 2024; Contessi et al., 2024; G. I. Huang et al., 2025).
Given the rapid digital transformation of the hospitality industry, particularly in the realm of human resource management (Ivanov et al., 2020; Vujko et al., 2025a), there is growing interest in how algorithmic or AI-driven systems shape employee experiences. While such technologies promise efficiency, transparency, and performance optimization, their impact on frontline employees remains insufficiently explored. This study contributes to the expanding scholarly discourse on algorithmic management and artificial intelligence (AI) integration in the hospitality industry by empirically examining the effects of AI-driven human resource (HR) systems on hotel employees’ perceptions of performance, autonomy, fairness, and work–life balance. While existing research has primarily emphasized the operational efficiencies afforded by AI in hospitality contexts (Buhalis & Leung, 2018; Zhu & Chen, 2025; Alam et al., 2025), there remains a notable gap concerning how these systems impact employee well-being and organizational trust. Addressing this gap, the present study adopts a post-positivist epistemological approach and applies structural equation modeling (SEM) to a large, demographically diverse sample of 437 employees from Accor Group hotels in Paris, Berlin, and Amsterdam.
This study investigates how hotel employees perceive the implementation of AI-driven HR systems—especially in terms of job attitudes, workplace satisfaction, and work–life balance. The subject of the research centers on employees working in Accor Group hotels across Paris, Berlin, and Amsterdam, offering insight into how algorithmic management is experienced across different operational and cultural contexts. The research is grounded in the central hypothesis (H): Employees’ perceptions of algorithmic (AI-driven) management systems significantly influence their overall attitudes and work–life balance. Accordingly, the aim is to analyze whether AI is perceived as a supportive and developmental tool or as a restrictive force that reduces autonomy and emotional engagement. The research addresses the following question: How do hotel employees perceive the impact of AI-driven HR systems on their work experience, job attitudes, and work–life balance, and what factors influence these perceptions? In doing so, the study contributes to a more nuanced understanding of digital transformation in hospitality work environments, with implications for ethical management, employee retention, and sustainable human resource strategies. This study advances theoretical integration by combining concepts from the algorithmic management literature (Kellogg et al., 2020), organizational justice theory (Leventhal, 1980; Colquitt, 2001), and technology acceptance models (Davis, 1989; Venkatesh et al., 2003). Additionally, the analysis examines how demographic factors (e.g., age, gender, education, and experience) moderate these relationships, providing nuanced insights into the human consequences of digital transformation in hospitality. As such, this research fills a critical empirical and conceptual gap, offering actionable knowledge for the ethical and sustainable integration of AI in HR practices.
Two latent constructs—AI Perceptions and Balanced Management—were identified through exploratory factor analysis and validated through SEM, capturing both the supportive and disruptive dimensions of algorithmic management. Overall, the integration of AI-driven management systems in the hospitality sector brings both promising advancements and critical challenges. Employees increasingly encounter algorithmic tools in areas such as scheduling, performance monitoring, and communication, and their perceptions of these systems significantly shape workplace outcomes. While AI offers potential gains in efficiency, recognition, and career development, it can also generate concerns related to reduced flexibility, monotony, and the replacement of human judgment. These opposing experiences suggest that the effects of AI in the workplace are not merely technical but deeply relational and contextual. Moreover, employee acceptance of algorithmic management depends largely on how fairly, transparently, and supportively these systems are introduced and maintained.

2. Literature Review

The integration of algorithmic management systems and AI-powered tools, such as service robots, is rapidly transforming operations in the hospitality industry (Cheng & Hwang, 2025). These technologies are not merely technical instruments; rather, they are interpreted through the lens of employees’ individual beliefs, emotional responses, and personal attributions (Tan & Li, 2025). As such, employee perceptions significantly influence how algorithmic systems are received and how they shape workplace engagement, satisfaction, and adaptability (J. Y. Bai et al., 2025; Vujko et al., 2025b). Despite their technical standardization, algorithmic tools are socially situated—shaped by contextual and psychological dynamics specific to service-oriented environments like hospitality. A growing body of research highlights that employee attitudes are profoundly shaped by how they interpret the purpose and fairness of these systems (Kellogg et al., 2020). When AI tools are perceived as enhancing objectivity and performance clarity, they may elicit trust and cooperation (Ahn & Chen, 2022; Huo et al., 2025). Conversely, when viewed as rigid or dehumanizing, they can lead to autonomy concerns, reduced morale, and resistance (T. Xu et al., 2025).
To explain these contrasting responses, scholars have drawn upon several psychological frameworks. Among them, Regulatory Focus Theory (RFT) offers a particularly useful lens, distinguishing between promotion-focused and prevention-focused motivational orientations (Higgins, 1997; H. Li et al., 2025). Promotion-focused individuals are driven by aspirations and advancement, and thus are more likely to interpret algorithmic management as an opportunity for career development and operational efficiency (Madanchian & Taherdoost, 2025; Bennett & Martin, 2025). Their forward-looking orientation often facilitates engagement with AI-powered HR tools and robotic systems. In contrast, prevention-focused employees prioritize stability and error avoidance (Madanchian et al., 2023; Shin et al., 2025). These individuals may perceive algorithmic systems as threatening their autonomy, particularly when human oversight is minimal or when implementation lacks transparency (Sun et al., 2025; Y. Huang & Gursoy, 2024). Their skepticism can manifest as stress, reduced engagement, and active resistance—especially in interpersonal service contexts where relational and emotional labor is central (Cauchi et al., 2017; Y. Zhang et al., 2025).
However, the existing literature often treats these responses in isolation, failing to synthesize them into a cohesive framework adapted to hospitality’s unique labor conditions—such as high emotional intensity, task unpredictability, and cultural diversity among employees. Fragmented reliance on RFT, organizational justice, and the technology acceptance model leaves a gap in capturing the nuanced, socially embedded nature of algorithmic adoption in this sector. To bridge this gap, we propose an integrative conceptual model—Algorithmic Management Accommodation Framework (AMAF)—that links motivational orientations (RFT), fairness perceptions (organizational justice), and perceived usefulness/ease of use (TAM) to three key hospitality-specific mediators: emotional labor expectations, task discretion, and human–AI role complementarity. This model emphasizes that successful digital transformation is not merely a function of system design but of psychological congruence between employee profiles and the affordances of AI tools (Yadav & Dhar, 2021).
Moreover, this framework recognizes that algorithmic systems interact with team dynamics and organizational culture in ways that can amplify either cohesion or polarization (Zhou et al., 2024). Promotion-focused individuals may drive innovation and openness, whereas prevention-focused attitudes can fuel disengagement and distrust (Kirshner & Lawson, 2025). A hospitality-specific synthesis thus requires attention not only to individual psychology but to how AI reshapes service relationships and emotional labor norms. Effective integration requires context-sensitive leadership (Jianu et al., 2025). Managers should avoid one-size-fits-all solutions and instead foster transparent communication and tailored support strategies (Jerez-Jerez, 2025). Training programs aligned with motivational profiles can enhance acceptance and reduce uncertainty (S. Bai & Zhang, 2025). Critically, algorithmic tools must be positioned as augmentations of human potential, not as substitutes for relational expertise—especially in labor-intensive service encounters. While existing theories provide important insights, a novel conceptual synthesis is necessary to address the hospitality sector’s distinct operational and emotional dynamics. The proposed AMAF model offers such a pathway, highlighting the psychological and social mediators that condition algorithmic management outcomes in frontline service work. This integrative perspective enables more inclusive and effective implementation strategies and opens new directions for theory-building in service technology research.
To synthesize these insights, the present study draws upon the Algorithmic Management Accommodation Framework (AMAF), which integrates motivational orientations (Regulatory Focus Theory), fairness perceptions (Organizational Justice Theory), and perceived utility (Technology Acceptance Model) to explain how employees interpret and respond to AI-driven HR systems. This model emphasizes that the consequences of algorithmic management are conditioned by both individual and contextual factors—including emotional labor expectations and human–AI role complementarity. From this integrated foundation, we derive the central hypothesis: employees’ perceptions of algorithmic management systems significantly influence their work attitudes and work–life balance. Specifically, we anticipate that positive perceptions of AI systems (e.g., transparency, fairness, developmental support) are associated with greater engagement and satisfaction, while negative perceptions (e.g., rigidity, depersonalization, lack of control) correlate with stress and detachment. By grounding the hypothesis in this multidimensional framework, the study addresses not only the technical design of AI tools but also their ethical, emotional, and motivational implications in hospitality labor environments.
Building on this integrative perspective, the present study seeks to empirically assess how employees perceive and respond to AI-driven HR systems in the hospitality sector. Drawing from the Algorithmic Management Accommodation Framework (AMAF), we propose that employee reactions are shaped by both cognitive appraisals of AI’s functionality and affective responses to its fairness and developmental potential. Accordingly, two sub-hypotheses are formulated. Sub-Hypothesis 1 (H1a) posits that employees who perceive artificial intelligence (AI) systems as beneficial—particularly in terms of enhancing job performance, reducing workload, and improving communication—will exhibit higher levels of career optimism and stronger intentions to remain with their organization. Sub-Hypothesis 2 (H1b) suggests that employees who view algorithmic management as achieving a balance between operational efficiency and individual consideration will report lower perceptions of flexibility loss and higher satisfaction with skill development opportunities. These hypotheses guide the empirical investigation and reflect the dual emphasis of AMAF on both technological utility and human-centered alignment.

3. Materials and Methods

This study focuses on hotel employees working within the Accor Group’s properties across three major European cities: Paris, Berlin, and Amsterdam. Accor maintains a substantial presence in these destinations, with approximately 147 hotels in Paris, 35 in Berlin, and 25 in Amsterdam. These properties span a broad portfolio of brands—from luxury to economy—catering to diverse market segments and offering a rich context for analyzing employee experiences in varied organizational environments. To ensure statistical robustness and representativeness, a total of 437 hotel employees were surveyed. This sample size exceeds the minimum threshold recommended by Hair et al. (2019) for structural equation modeling (SEM), thereby supporting the reliability and validity of the analytical results. The sample was proportionally distributed across the three cities to capture geographic and organizational diversity in employee experiences with AI-driven human resource systems.
Demographic characteristics of the sample indicate a balanced representation across key variables. The gender distribution was nearly equal, with 50.3% male and 49.7% female respondents. The age structure reveals a mature workforce: 3.2% were aged 18–24, 3.4% were 25–34, 24.0% were 35–44, 30.2% were 45–54, 25.4% were 55–64, and 13.7% were over 65 years old. This composition suggests a workforce with considerable professional experience and potentially well-formed attitudes toward HR policies and technology. Educational background varied, with 51.7% of participants holding a high school diploma, 41.9% possessing a college or faculty degree, and 4.6% having obtained a master’s or doctoral degree. Only 1.8% of respondents reported completing only elementary education. The relatively high level of formal education among employees reflects the growing professionalization of the hospitality sector and supports the assumption that participants were capable of meaningfully engaging with questions regarding technological systems in HR. In terms of work experience, 4.3% of employees had been employed for less than one year, 53.3% had between one and five years of experience, 31.4% had between six and fifteen years, and 11.0% had over sixteen years of service. These findings suggest that a significant portion of the sample had extensive tenure in the hotel industry, which is particularly relevant for exploring attitudes toward long-term career development and organizational loyalty in the context of digital transformation. While the age distribution in the sample is skewed toward older employees—with approximately 70% over the age of 45—this reflects the characteristics of the surveyed hotels, which tend to employ more experienced staff and exhibit lower turnover rates. Consequently, the sample may not fully represent the broader, often younger, hospitality workforce. 
However, this age profile offers a valuable perspective on how experienced employees engage with algorithmic HR systems, especially in terms of long-term trust, professional development, and organizational loyalty. This demographic imbalance is acknowledged as a limitation and addressed in Section 6.
This study adopts a post-positivist epistemological stance, recognizing that employees’ perceptions of algorithmic management are shaped by subjective experiences yet can be systematically measured and analyzed. To capture both measurable patterns and embodied, affective dimensions, the research employed a mixed-methods design combining a large-scale quantitative survey with qualitative autoethnographic sensory walks. Between June 2024 and June 2025, data were collected from hotel employees working in Accor Group properties across Paris, Berlin, and Amsterdam. The quantitative survey involved 437 participants and was administered using a mixed-mode approach: the majority of responses were gathered face-to-face during on-site visits, while others completed the survey digitally via secure email invitations. These hotels were selected for their active integration of AI-driven HR systems, such as digital recruitment platforms, shift scheduling tools, and chatbot-based employee support.
To complement the survey data, six sensory walks were conducted across Accor Group hotels in Paris, Berlin, and Amsterdam. Each walk lasted between 60 and 90 min and was carried out in spaces where employee interaction with AI-driven HR systems was most evident, such as back-of-house corridors, staff break rooms, digital kiosks, and scheduling terminals. The researcher adopted an autoethnographic stance, moving through these environments both as an observer and as an embodied participant attuned to sensory and affective dimensions. Observations were systematically recorded through detailed fieldnotes and audio memos taken during and immediately after the walks. These included descriptions of ambient sounds, spatial arrangements, textures, and odors, as well as embodied impressions such as fatigue, unease, or comfort. A reflective journal was maintained throughout the data collection period, where the researcher expanded on initial impressions, noted recurring themes, and connected sensory experiences to broader organizational and technological contexts. The process of data selection was guided by analytic relevance rather than exhaustiveness. Reflections and sensory observations were retained if they illuminated employee perceptions of algorithmic management or highlighted dimensions of AI integration not fully captured by the survey instrument. For example, instances where environmental cues (e.g., signage, noise from automated kiosks, or the spatial clustering of staff around digital terminals) intersected with feelings of surveillance, monotony, or empowerment were prioritized in the analytic narrative. This approach ensured transparency in the generation of qualitative insights, while maintaining methodological rigor through systematic recording, iterative reflection, and thematic alignment with the study’s research questions. 
In doing so, the sensory walks provided a textured, affective account that deepened the interpretation of survey results and allowed for the identification of contextual nuances in employees’ engagements with AI-driven HR systems.
The qualitative data helped contextualize and deepen the understanding of employee experiences that may not be fully captured by structured survey instruments. Reflections and sensory data included in the analysis were selected for their relevance to the research questions and their ability to reveal nuanced emotional, ethical, and contextual aspects of working with AI-driven HR technologies. Together, this mixed-methods approach provides a comprehensive and rigorous examination of how hotel employees perceive and engage with algorithmic human resource management in operational hospitality environments.
To examine the core premise of this study, two sub-hypotheses were formulated based on the underlying factor structure identified in preliminary analyses.
- Sub-Hypothesis 1 (H1a): Employees who perceive artificial intelligence (AI) systems as beneficial—specifically in terms of enhancing job performance, reducing workload, and improving communication—will exhibit higher levels of career optimism and stronger intentions to remain with their organization.
- Sub-Hypothesis 2 (H1b): Employees who view algorithmic management as striking a balance between operational efficiency and personal consideration will report lower perceptions of flexibility loss and greater satisfaction regarding skill development opportunities.
To explore hotel employees’ attitudes, satisfaction, and perceptions of work–life balance under AI-driven human resource (HR) systems, a structured questionnaire was developed, consisting of 40 evaluative statements. Each item was measured on a five-point Likert scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree), enabling respondents to express their level of agreement with statements reflecting key constructs such as performance, autonomy, recognition, fairness, and digital support. The questionnaire design was grounded in an extensive literature review and drew upon several established theoretical models and validated measurement instruments: Constructs related to algorithmic control, flexibility, decision-making, and fairness were informed by research on algorithmic management, especially Kellogg et al. (2020) and Deldadehasl et al. (2025). Items measuring job satisfaction and work engagement were adapted from the Job Satisfaction Survey (Spector, 1997) and the Job Diagnostic Survey (Hackman & Oldham, 1975). Measures of work–life balance and scheduling stress were based on Netemeyer et al. (1996). Employee reactions to digital systems—such as perceived usefulness, feedback mechanisms, and communication improvements—were derived from the Technology Acceptance Model (Davis, 1989) and the Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. (2003). To evaluate perceived fairness, inclusivity, and trust in automated decisions, items were guided by the Organizational Justice framework developed by Colquitt (2001) and Leventhal (1980). Finally, digital HR practices within hospitality were contextualized using insights from Buhalis and Leung (2018) and Ivanov and Webster (2017), particularly concerning service automation and smart HR environments.
The research was conducted between June 2024 and June 2025, during which the authors visited multiple hospitality workplaces across Europe. Data were collected using a mixed-mode approach: while the majority of responses were gathered face-to-face through on-site engagement, additional participants were reached via email invitations with secure survey links. This method enabled access to both operational and support staff across different levels of the organization. A significant part of the fieldwork was carried out in Accor Group hotels in Paris, Berlin, and Amsterdam. These hotels were purposefully selected due to their integration of AI-powered HR systems, including platforms for recruitment, shift scheduling, and chatbot-based employee support. During site visits, the researchers interacted with employees to collect responses in person, while others completed the survey digitally after being contacted through the hotels’ communication channels. These sites provided real-world insight into how algorithmic management functions in active hospitality operations and how it is perceived by those affected by it.
To uncover the latent structure underlying the 40 questionnaire items and identify coherent dimensions of employee perception, an exploratory factor analysis (EFA) was conducted. EFA reduces data complexity by grouping correlated variables into broader conceptual categories known as factors. This statistical approach is especially useful in identifying core constructs that shape how employees experience and interpret algorithmic systems in the workplace.
X_i = λ_{i1} F_1 + λ_{i2} F_2 + ε_i
where X_i represents the observed variable (e.g., Job Monotony, Performance Impact), λ_{i1} and λ_{i2} are the factor loadings, F_1 and F_2 are the latent constructs (AI Perceptions and Balanced Management), and ε_i is the error term, representing variance not explained by the factors. To examine the relationships between employees’ perceptions of algorithmic management and their evaluations of balanced and fair HR practices, the study employed structural equation modeling (SEM) using AMOS. The model is based on two latent constructs: Factor 1 (F1)—AI Perceptions, and Factor 2 (F2)—Balanced Management. The measurement model specifies how observed indicators reflect their underlying latent constructs through confirmatory factor analysis (CFA).
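To make the common-factor model concrete, the following sketch simulates indicator data from a two-factor model and compares the sample covariance with the model-implied covariance Σ = ΛΦΛ′ + Θ. The loadings, factor correlation, and indicator set are purely hypothetical illustrations (the study itself estimated the model in AMOS), and only numpy is used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 437  # sample size matching the study

# Hypothetical standardized loadings: indicators 1-2 load on F1
# (AI Perceptions), indicators 3-4 on F2 (Balanced Management)
Lam = np.array([[0.8, 0.0],
                [0.7, 0.0],
                [0.0, 0.9],
                [0.0, 0.6]])
Phi = np.array([[1.0, 0.5],   # hypothetical factor correlation
                [0.5, 1.0]])
# Unique (error) variances chosen so each indicator has unit variance
Theta = np.diag(1.0 - np.sum((Lam @ Phi) * Lam, axis=1))

# Simulate X_i = lambda_i1 * F1 + lambda_i2 * F2 + eps_i
F = rng.multivariate_normal(np.zeros(2), Phi, size=n)
eps = rng.normal(0.0, np.sqrt(np.diag(Theta)), size=(n, 4))
X = F @ Lam.T + eps

# Model-implied vs. sample covariance
Sigma = Lam @ Phi @ Lam.T + Theta
S = np.cov(X, rowvar=False)
print(np.round(Sigma, 2))
print(np.round(S, 2))
```

With n = 437 the sample covariance closely tracks the model-implied matrix, which is the logic CFA exploits when testing whether a hypothesized loading pattern reproduces the observed correlations.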
The two-factor model was developed to reflect both the supportive and developmental aspects of AI (captured by F1) and employees’ perceptions of balance, fairness, and personal consideration under algorithmic management (captured by F2).
Factor 1, AI Perceptions, encompasses employees’ overall perceptions and attitudes toward algorithmic management within the hospitality context. This factor captures a comprehensive range of experiences related to how AI-driven HR systems influence job performance, employee retention, workload, recognition, and communication, as well as the perceived balance between efficiency and employee satisfaction. The factor reflects a spectrum of employee experiences, highlighting both the advantages and challenges brought by algorithmic management. The construct “Performance Impact” indicates that employees believe AI enhances their job performance, while “Retention Intention” measures their likelihood to remain employed due to the presence of AI management. Conversely, constructs such as “Flexibility Loss” and “Turnover Risk” reveal concerns related to reduced work flexibility and the potential for increased employee turnover associated with AI systems. Organizational support is also addressed through items like “Support Availability” and “Training Adequacy,” assessing employees’ perceptions of assistance and preparation in adapting to AI tools. The constructs “Recognition Timeliness” and “Communication Improvement” evaluate the role of AI in providing prompt feedback and fostering better communication between staff and supervisors. Concerns about job engagement and autonomy appear in “Job Monotony” and “Decision Replacement,” which reflect feelings of reduced job variety and the displacement of human decision-making by AI. The construct “Satisfaction Tradeoff” emphasizes the tension between increased efficiency and potential decreases in employee satisfaction. The factor further incorporates elements related to inclusivity and future outlooks, such as “Diversity Respect,” which measures the consideration of diverse employee needs, and “Career Optimism,” which reflects confidence that AI-driven HR practices will enhance long-term career prospects. 
In summary, Factor 1 offers a rich, multidimensional understanding of hotel employees’ perceptions of algorithmic management, capturing the varied ways AI integration affects attitudes, job satisfaction, and loyalty within the hospitality industry. For Factor 1, AI Perceptions, the calculated values were composite reliability (CR) = 0.96 and average variance extracted (AVE) = 0.91. These results reflect exceptionally high reliability and very strong convergent validity, indicating that the factor is both statistically robust and conceptually coherent.
Factor 2, Balanced Management, captures employees’ perceptions of how algorithmic management influences the balance between organizational efficiency and individual well-being within the hospitality workplace. This factor reflects the extent to which AI-driven HR systems are perceived to consider personal circumstances, support productivity, and foster fair treatment. The construct “Balance Disruption” highlights concerns that algorithmic scheduling may negatively impact employees’ work–life balance, indicating a challenge in managing the demands of AI-driven shift assignments. In contrast, “Personal Consideration” reflects the degree to which employees feel their individual needs and circumstances are acknowledged by AI management systems when assigning shifts. Employees’ perceived productivity is captured by “Productivity Boost,” which measures whether working under algorithmic management enhances their efficiency and output. The construct “Fair Efficiency” emphasizes the hotel’s ability to maintain fairness and equity alongside the pursuit of AI-driven operational efficiency. Factor 2 also includes “Skill Development,” reflecting employees’ views that AI management supports their learning and growth by helping them acquire new skills. Finally, “Future Growth” captures employees’ belief in the continuing expansion and importance of AI-based management within the hospitality industry. Overall, Factor 2 represents a nuanced perspective on the interplay between AI-driven efficiency and employee-centric considerations, highlighting both the benefits and potential challenges of algorithmic management in fostering a balanced and supportive work environment.
This construct captures employees’ attitudinal evaluation of how AI-based management systems integrate operational efficiency with human-centered considerations in the hospitality workplace. Rather than isolating technological performance from employee well-being, Balanced Management reflects a unified perception that AI-enabled practices can simultaneously promote productivity, support skill development, and uphold principles of fairness and personal respect. Although the associated items refer to seemingly distinct themes—such as fairness in task assignment, opportunities for learning and professional growth, and sensitivity to individual circumstances—they collectively load onto a single factor that encapsulates employees’ overall appraisal of algorithmic management as a sustainable and humane approach to workplace governance. This integrative factor structure aligns with emerging frameworks on responsible AI use in service industries, which emphasize the importance of balancing efficiency, equity, and personal empowerment in technology adoption. The construct demonstrated strong psychometric properties, with a composite reliability (CR) of 0.93 and an average variance extracted (AVE) of 0.74, both exceeding the recommended thresholds of 0.70 for CR and 0.50 for AVE. These values indicate excellent internal consistency and robust convergent validity. Based on these empirical results, Balanced Management is retained as a single latent construct in this study, while its multidimensional potential (e.g., fairness versus developmental support) is acknowledged as a direction for future research.
The Fornell–Larcker criterion was formally satisfied, as the square roots of AVE for both constructs (0.90 for AI Perceptions and 0.88 for Balanced Management) were greater than or equal to their inter-construct correlation (0.88). The HTMT ratio was also 0.88, which falls below the liberal threshold of 0.90 (Henseler et al., 2015) but slightly above the stricter 0.85 cut-off (Kline, 2016). These results indicate that the constructs are empirically distinguishable, albeit with a narrow margin of separation.
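Both discriminant-validity checks can be reproduced from summary statistics. The sketch below implements the standard definitions, the Fornell–Larcker comparison and the HTMT ratio; the 4-item correlation matrix is a constructed toy example, not the study's data:

```python
import numpy as np

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """The square root of each construct's AVE must not fall below
    the inter-construct correlation."""
    roots = np.sqrt([ave_a, ave_b])
    return bool((roots >= abs(corr_ab)).all())

def htmt(R, idx_a, idx_b):
    """HTMT: mean between-construct item correlation divided by the
    geometric mean of the mean within-construct item correlations."""
    R = np.asarray(R, dtype=float)
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    def mono(idx):
        sub = R[np.ix_(idx, idx)]
        return sub[np.triu_indices_from(sub, k=1)].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Toy matrix: items 0-1 load on construct A, items 2-3 on construct B
R = [[1.00, 0.81, 0.72, 0.72],
     [0.81, 1.00, 0.72, 0.72],
     [0.72, 0.72, 1.00, 0.81],
     [0.72, 0.72, 0.81, 1.00]]
print(fornell_larcker_ok(0.81, 0.78, 0.88), round(htmt(R, [0, 1], [2, 3]), 3))
```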
While the item pool was theoretically grounded, the constructs—AI Perceptions and Balanced Management—had not previously been operationalized in the specific context of AI-integrated human resource systems in the hospitality industry, particularly from the perspective of frontline employees. Given this contextual novelty, exploratory factor analysis (EFA) was employed as an initial empirical technique to assess whether the theoretically derived items cohered into internally consistent dimensions relevant to this population. The intent was not to discover latent structures in the absence of theory, but to verify construct validity in a setting where these measures had not yet been tested. Following this, a confirmatory factor analysis (CFA) was conducted to test the adequacy and robustness of the two-factor structure identified through EFA. This sequential use of EFA and CFA reflects a hybrid validation strategy, balancing theoretical guidance with empirical rigor to ensure both the conceptual soundness and context-specific reliability of the measurement model.
These latent constructs were then utilized in a structural equation model (SEM) to test the proposed hypotheses and examine how demographic variables (e.g., age, gender, experience) may moderate these relationships. To evaluate the adequacy of the structural equation model, several goodness-of-fit indices were examined. The Comparative Fit Index (CFI) was 0.913, and the Tucker–Lewis Index (TLI) was 0.904. Both indices exceed the commonly accepted threshold of 0.90, indicating an acceptable level of model fit. The Goodness-of-Fit Index (GFI) also yielded a value of 0.913, further supporting the model’s fit to the observed data. The root mean square error of approximation (RMSEA) was 0.062, which falls within the acceptable range of 0.05 to 0.08, suggesting a reasonable approximation of the model in the population. Collectively, these fit indices indicate that the proposed structural model demonstrates a good overall fit, supporting its use for testing the relationships among employee attitudes, job satisfaction, and work–life balance under AI-driven HR systems in the hospitality sector.
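The RMSEA reported above is a direct function of the model chi-square, its degrees of freedom, and the sample size. As an illustration, the point estimate can be computed as follows (the chi-square and df values below are hypothetical, chosen only to show the calculation at the study's N = 437):

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root mean square error of approximation."""
    return math.sqrt(max(chi2 / df - 1.0, 0.0) / (n - 1))

# Hypothetical chi-square and df, evaluated at the study's sample size
print(f"RMSEA = {rmsea(1195.0, 440, 437):.3f}")
```

Values between 0.05 and 0.08 are conventionally read as a reasonable approximation, consistent with the interpretation above.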
Building on these validated relationships, the next step involved exploring whether perceptions of algorithmic management varied across different national contexts. To examine potential national-level differences in employee attitudes, one-way analyses of variance (ANOVAs) were conducted using city of employment (Paris, Berlin, and Amsterdam) as the grouping variable. This approach enabled the comparison of mean scores for individual perception items and factors across the three cities. Post hoc comparisons were performed using Tukey’s honest significant difference (HSD) tests to identify specific group differences when ANOVA results indicated statistical significance. Statistical significance was assessed at the conventional alpha level of 0.05.
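This procedure corresponds to a standard omnibus F-test followed by Tukey's HSD. A sketch with synthetic ratings (the group means and spreads are invented for illustration; `scipy.stats.tukey_hsd` requires SciPy 1.8 or later):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic 5-point ratings for one perception item, by city (n = 150/150/137)
paris     = np.clip(rng.normal(4.4, 0.5, 150), 1, 5)
berlin    = np.clip(rng.normal(4.3, 0.5, 150), 1, 5)
amsterdam = np.clip(rng.normal(4.1, 0.5, 137), 1, 5)

# Omnibus one-way ANOVA across the three cities (df2 = 437 - 3 = 434)
f_stat, p_val = stats.f_oneway(paris, berlin, amsterdam)
print(f"F(2, 434) = {f_stat:.2f}, p = {p_val:.4g}")

# Post hoc pairwise comparisons only when the omnibus test is significant
if p_val < 0.05:
    print(stats.tukey_hsd(paris, berlin, amsterdam))
```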

4. Results

To assess the suitability of the dataset for factor analysis, the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity were conducted (Table 1). The KMO value was 0.970, which is well above the recommended threshold of 0.60 (Kaiser, 1974), indicating excellent sampling adequacy. This result confirms that the patterns of correlations among the variables are compact enough to yield distinct and reliable factors. Bartlett’s test of sphericity was highly significant, with a chi-square value of 22,053.516, degrees of freedom (df) = 378, and p < 0.001. This test evaluates whether the correlation matrix significantly differs from an identity matrix (i.e., a matrix in which all correlations are zero). The significant result indicates that the observed correlations are sufficient for conducting exploratory or confirmatory factor analysis. Taken together, these results provide strong statistical evidence that the dataset is appropriate for factor analysis and further structural equation modeling, supporting the validity of using this approach to explore underlying constructs such as employee attitudes, job satisfaction, and work–life balance in the context of AI-driven HR systems.
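Both adequacy statistics are computable directly from the item correlation matrix. The sketch below implements the standard formulas; the small equicorrelated matrix is a toy stand-in for the study's 28-item data:

```python
import numpy as np
from scipy import stats

def kmo(R):
    """Kaiser-Meyer-Olkin overall measure of sampling adequacy."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                     # anti-image (partial) correlations
    np.fill_diagonal(partial, 0.0)
    r_off = R - np.eye(R.shape[0])         # zero the diagonal of R
    num = (r_off ** 2).sum()
    return num / (num + (partial ** 2).sum())

def bartlett_sphericity(R, n):
    """Bartlett's test that R is an identity matrix; returns chi2, df, p."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df, stats.chi2.sf(chi2, df)

# Toy equicorrelated matrix standing in for the 28-item correlation matrix
p_items = 6
R = np.full((p_items, p_items), 0.8)
np.fill_diagonal(R, 1.0)
chi2, df, pval = bartlett_sphericity(R, n=437)
print(f"KMO = {kmo(R):.3f}; Bartlett chi2({df}) = {chi2:.1f}, p = {pval:.3g}")
```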
To identify the underlying factor structure of hotel employees’ perceptions of AI-driven HR systems, an exploratory factor analysis (EFA) was performed using principal component analysis (PCA) with Varimax rotation (Table 2). The analysis revealed two significant factors based on eigenvalues greater than 1. The first factor had an initial eigenvalue of 22.056, explaining 78.77% of the total variance, while the second factor had an eigenvalue of 1.295, accounting for an additional 4.63%. Together, these two factors explained 83.40% of the variance before rotation. After extraction, the first factor explained 77.71% and the second 4.05%, summing to 81.76% of the total variance, indicating a strong and well-defined factor structure. Varimax rotation further clarified the results, redistributing variance so that the first rotated factor explained 47.67% and the second 34.09%, while maintaining the total explained variance at 81.76%. This rotation enhanced the interpretability of the factors without sacrificing explanatory power.
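The extraction step can be illustrated with a plain eigen-decomposition of the correlation matrix: components with eigenvalues above 1 are retained (Kaiser criterion), and each eigenvalue divided by the number of items gives its percentage of variance explained. The matrix below is a toy example with one dominant component; the varimax rotation step is omitted for brevity:

```python
import numpy as np

def kaiser_pca(R):
    """Eigenvalues (descending), number retained under the Kaiser
    criterion (eigenvalue > 1), and % of total variance explained."""
    eigvals = np.linalg.eigvalsh(R)[::-1]
    n_retained = int((eigvals > 1.0).sum())
    explained = 100.0 * eigvals / R.shape[0]
    return eigvals, n_retained, explained

# Toy 5-item correlation matrix dominated by a single component
R = np.full((5, 5), 0.75)
np.fill_diagonal(R, 1.0)
eigvals, n_factors, explained = kaiser_pca(R)
print(n_factors, explained.round(1))
```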
The two resulting factors (Table 3) correspond to key dimensions in employee perceptions: Factor 1 represents AI Perceptions, encompassing employees’ overall attitudes toward algorithmic management and its impact on job performance and workplace communication; Factor 2 represents Balanced Management, reflecting employees’ perceptions of fairness, personal consideration, work–life balance, and skill development under AI-driven HR systems. These findings confirm a clear and interpretable two-factor structure underlying hotel employees’ views on AI management.
The structural equation modeling (SEM) results (Figure 1) reveal a well-fitting model comprising two factors, F1—AI Perceptions and F2—Balanced Management, along with the influence of demographic variables (education, age, gender, and work experience). These findings provide insight into how hotel employees assess the integration of AI in workplace management, balancing efficiency gains with human-centered values. The first latent construct, AI Perceptions (F1), is composed of 14 observed variables reflecting employee attitudes toward the effects of AI-driven or algorithmic management systems. The factor includes variables representing both positive and negative outcomes, indicating the duality of experiences employees report when interacting with automated managerial tools. Positive perception indicators include the following: Performance Impact: employees associate AI systems with improved overall performance; Retention Intention: some employees are more inclined to remain in their jobs due to perceived improvements linked to AI; Recognition Timeliness and Support Availability: these reflect timely acknowledgment and assistance in AI-managed environments; Workload Reduction and Training Adequacy: employees recognize practical support in easing job demands and adapting to new systems; Communication Improvement and Career Optimism: positive long-term views of AI’s role in internal communication and professional growth. Negative indicators reflect nuanced concerns: Flexibility Loss and Turnover Risk: employees express that algorithmic scheduling reduces their control over work hours and may elevate turnover; Job Monotony and Satisfaction Tradeoff: these signal a perceived decline in job engagement and emotional satisfaction; and Decision Replacement and Diversity Respect: skepticism remains about AI replacing nuanced human decisions and the system’s ability to accommodate diverse employee needs. 
The factor loadings for F1 were high, ranging from 0.86 to 0.99, indicating strong internal consistency and reliable measurement of the construct. This suggests that employees are able to clearly differentiate and assess the diverse impacts of AI on their professional experience.
The second latent construct, Balanced Management (F2), consists of six observed variables that capture employees’ perceptions of fairness, personal development, and future orientation in AI-managed systems. “Balance Disruption” and “Personal Consideration” reflect concerns about the potential negative impact of AI on work–life balance and individual circumstances. “Productivity Boost” and “Fair Efficiency” represent perceived gains in performance coupled with fairness in implementation. “Skill Development” and “Future Growth” indicate employee recognition of AI’s potential to support ongoing learning and long-term career prospects. The standardized factor loadings for these items range from 0.80 to 0.96, demonstrating strong internal consistency and confirming that perceptions of balance and fairness are well-defined and meaningfully interpreted by respondents. All standardized factor loadings were below 1.0, ranging from 0.86 to 0.99 for AI Perceptions and 0.80 to 0.96 for Balanced Management, indicating that no Heywood cases (factor loadings > 1.0) were present in the final solution. Figure 1 reports standardized estimates throughout, and diagnostic checks (outliers, multicollinearity, and model identification) revealed no estimation issues.
The structural model reveals a strong and statistically significant path from AI Perceptions (F1) to Balanced Management (F2), with a standardized regression weight of 0.879. This indicates that employees who hold positive views of AI integration in the workplace are substantially more likely to perceive AI systems as fair, considerate of individual needs, and supportive of personal and professional development. This relationship underscores a crucial dynamic: while AI systems may alter traditional managerial practices, their acceptance and perceived legitimacy depend on whether they align with employees’ social, ethical, and developmental expectations. Concepts such as fairness, individual recognition, and future career support emerge as central to building trust, acceptance, and engagement with algorithmic management.
Among the demographic predictors included in the model, education demonstrates a strong positive influence on AI Perceptions (unstandardized coefficient = 2.49; standardized coefficient = 0.38). This finding suggests that individuals with higher levels of education are more likely to hold favorable views of AI-based management systems. This may be attributed to greater technological literacy, adaptability, and readiness to engage with digital tools among more educated employees. In contrast, gender, age, and work experience exhibit weak or non-significant effects on Balanced Management (F2), with small standardized coefficients. These results indicate that perceptions of fairness, work–life balance, and developmental support in AI-managed environments are broadly consistent across demographic groups, and are not significantly shaped by individual background characteristics.
The standardized regression weights in Table 4 provide strong empirical evidence supporting the internal validity of the two latent constructs in the model: F1—AI Perceptions and F2—Balanced Management. The data also offer insights into the influence of key demographic variables on perceptions of AI-driven management systems. The structural path from F1 (AI Perceptions) to F2 (Balanced Management) shows a very high and statistically significant standardized regression weight (β = 0.879). This suggests a robust and positive relationship between the two constructs. In practical terms, employees who have favorable perceptions of AI—seeing it as supportive, fair, and useful—are also more likely to perceive AI-based management systems as balanced, inclusive, and development-oriented. This central pathway reinforces the idea that the way AI is perceived on an emotional and operational level directly influences employees’ trust in its fairness and its ability to integrate human values. In other words, the more employees experience AI as enhancing their job performance, recognition, communication, and development, the more they believe AI is being applied in a fair and personally considerate manner.
The structural equation model revealed a strong and statistically significant relationship between the two latent constructs, AI Perceptions (F1) and Balanced Management (F2), with a standardized path coefficient of β = 0.88. This high coefficient indicates that employees who perceive AI systems as helpful, transparent, and ethically implemented are considerably more likely to report favorable outcomes in terms of fairness, skill development, and work–life integration. Although the strength of this association warrants close attention, it does not undermine the conceptual distinctiveness of the constructs. The constructs were derived through rigorous factor analysis and supported by theoretical differentiation—AI Perceptions captures employees’ appraisal of the functionality and impact of algorithmic systems, while Balanced Management reflects their evaluation of how these systems influence fairness, personal consideration, and development opportunities. The absence of cross-loadings and the clear factor structure support their empirical separation. Therefore, the strong path coefficient should be interpreted not as a sign of conceptual redundancy, but rather as evidence that positive AI perceptions significantly shape employees’ broader sense of managerial balance. This relationship reinforces the model’s core proposition: that the success of algorithmic management depends not solely on its technical efficiency, but on how employees experience and interpret its integration into human resource practices.
Demographic variables show weak to negligible effects on the Balanced Management construct (F2): Age shows a minor positive effect (β = 0.130), suggesting that older employees may slightly favor AI when it is perceived as balancing efficiency with fairness; Gender (β = 0.053) and Education (β = −0.052) show minimal influence, indicating perceptions of balanced AI management are largely consistent across these groups; Experience has a small negative relationship (β = −0.097), possibly reflecting a subtle resistance or skepticism among more experienced workers toward AI’s ability to fairly manage human-centric tasks. Although these effects are weak, they point to the possibility that younger or less experienced employees may be more open to AI-based systems, especially if they perceive them as aligned with contemporary organizational practices.
All observed variables related to the AI Perceptions construct demonstrate very strong loadings, confirming that they are reliable indicators of the underlying latent factor. Notably, Communication Improvement (β = 0.990) and Job Monotony (β = 0.966) are among the highest-loading items, highlighting the dual nature of employee experiences—improved communication on one hand, but a potential loss of engagement and creativity on the other. Performance Impact (β = 0.965), Recognition Timeliness (β = 0.964), and Career Optimism (β = 0.963) reflect the perception that AI enhances both short-term task performance and long-term professional development. Items reflecting potential downsides, such as Flexibility Loss (β = 0.897), Turnover Risk (β = 0.899), and Decision Replacement (β = 0.909), also load highly on the same construct, confirming that employees interpret AI as a multidimensional force—capable of supporting performance but potentially undermining autonomy and decision-making. This coexistence of positive and negative associations within the same factor highlights the complexity of how employees engage with algorithmic systems, reinforcing the need for nuanced management strategies.
The indicators for the Balanced Management construct also show high reliability and conceptual coherence: Skill Development (β = 0.962) and Balance Disruption (β = 0.923) load particularly strongly, suggesting that employees judge the quality of AI management based on its ability to foster learning while respecting work–life balance. Future Growth (β = 0.909) and Personal Consideration (β = 0.887) indicate the importance of both strategic foresight and individualized attention in shaping trust toward AI systems. Fair Efficiency (β = 0.849) and Productivity Boost (β = 0.796) reflect employees’ attention to both equity and output—a balance they seek in AI-driven environments. These results confirm that F2 captures a value-oriented dimension of AI management, centered on fairness, adaptability, and growth potential. Together, the standardized regression weights from this model provide compelling evidence that AI management systems are perceived through a lens that balances operational efficiency with human values. The strength of the F1→F2 path suggests that positive perceptions of AI are a necessary foundation for trust in its balanced application. High loadings across both constructs affirm that employees engage deeply with AI on multiple fronts—technological, emotional, developmental, and ethical. These insights underscore the importance for hospitality managers and system designers to go beyond efficiency and automation, focusing instead on transparent, personalized, and developmental uses of AI, particularly in employee-facing roles.
Sub-Hypothesis 1 (H1a) is reflected in Factor 1, AI Perceptions, which captures employees’ recognition of the positive impacts of AI-driven management systems—specifically improvements in job performance, workload reduction, and enhanced communication. The findings support this hypothesis, demonstrating that employees who perceive these benefits tend to exhibit greater career optimism and stronger intentions to remain with their organization. Sub-Hypothesis 2 (H1b) corresponds to Factor 2, Balanced Management, encompassing employees’ perceptions of how algorithmic management balances operational efficiency with personal consideration. The results confirm that when employees feel their individual circumstances are acknowledged and supported—particularly regarding work–life balance and skill development—they report lower perceptions of flexibility loss and higher satisfaction with growth opportunities. Together, these sub-hypotheses emphasize the dual nature of employee experiences with algorithmic management, underscoring the importance of both technological effectiveness and a human-centered approach to AI implementation. The strong structural link between Balanced Management and AI Perceptions further reinforces that trust in AI depends on how well it aligns with human-centered values. The findings reaffirm that algorithmic management is not a neutral intervention. Its adoption reshapes power dynamics, decision-making processes, and the lived experience of work. The legitimacy and long-term sustainability of such systems hinge on how well they balance operational efficiency with principles of fairness, dignity, inclusiveness, and personal development. In the hospitality sector—where service quality, emotional labor, and human connection are central—the integration of AI must be approached with particular sensitivity. 
Only when digital innovations are embedded within a broader social contract that values employees as partners in technological change can AI fulfill its promise to improve both business outcomes and worker well-being.
To explore potential national-level differences in employee attitudes toward algorithmic management, one-way ANOVAs were conducted with city of employment (Paris, Berlin, and Amsterdam) as the grouping variable. The first factor, AI Perceptions, examined respondents’ attitudes toward the use of artificial intelligence in hospitality management across these locations. Descriptive statistics indicated generally positive perceptions of AI, with mean scores ranging from approximately 4.15 to 4.44 on a 5-point Likert scale. One-way ANOVA tests revealed significant differences between cities on several perception items: Performance Impact (F(2,434) = 5.39, p = 0.005), Retention Intention (F(2,434) = 3.69, p = 0.026), Flexibility Loss (F(2,434) = 3.90, p = 0.021), Recognition Timeliness (F(2,434) = 9.23, p < 0.001), and Training Adequacy (F(2,434) = 7.35, p = 0.001). Other items did not show statistically significant differences across cities (p > 0.05). Post hoc Tukey tests indicated that respondents in Amsterdam consistently rated AI perceptions lower than those in Paris and Berlin. For example, Amsterdam’s mean rating for perceived AI usefulness was significantly lower than Paris (mean difference = 0.19, p = 0.006) and Berlin (mean difference = 0.14, p = 0.042). In contrast, differences between Paris and Berlin respondents were generally not statistically significant across items.
Overall, Paris respondents demonstrated the most positive attitudes toward AI, Berlin showed moderately positive perceptions, and Amsterdam exhibited the most cautious or reserved stance. Despite these differences, mean scores across all groups remained above 4.0, indicating broadly favorable perceptions of AI integration in hospitality management. The findings on AI Perceptions highlight meaningful geographical variations in how AI is viewed by hospitality employees in Paris, Berlin, and Amsterdam. Respondents in Paris showed the strongest positive perceptions, suggesting a cultural environment supportive of technological innovation in service industries. This aligns with France’s national emphasis on digital transformation and innovation-friendly policies, which may foster employee optimism about AI’s role in enhancing work quality and operational efficiency. Berlin participants reflected a balanced outlook, indicative of Germany’s pragmatic approach to AI adoption—embracing innovation while maintaining strong labor protections and transparency. The moderate positivity in Berlin suggests confidence in AI systems being implemented responsibly within a regulated framework. Amsterdam’s respondents, while still positive, were comparatively more skeptical. This caution may stem from local concerns about automation risks such as job displacement or algorithmic management, which have been prominent in Dutch public discourse. Slower or less visible AI integration in hospitality workplaces could contribute to uncertainty or wariness. The overall favorable perceptions across cities—scores above 4 on a 5-point scale—indicate a shared acknowledgment of AI’s potential benefits. However, the variation underscores the importance of context-sensitive AI adoption strategies. For Paris, reinforcing positive perceptions through employee engagement and highlighting success stories may sustain acceptance. 
In Berlin, transparent communication and participatory approaches will help maintain trust. In Amsterdam, addressing employee concerns directly and providing clear information about AI’s role could reduce skepticism and foster smoother adoption. These results suggest that cultural and institutional factors play critical roles in shaping AI perceptions and should be carefully considered by hospitality managers and policymakers aiming to implement AI technologies effectively.
Significant group differences emerged for two key items related to Balanced Management. For Personal Consideration, the ANOVA yielded a statistically significant result, F(2, 434) = 9.82, p < 0.001. Post hoc Tukey tests indicated that employees in Paris (M = 4.51) reported significantly higher levels of personal consideration than those in Berlin (M = 4.31, p = 0.003) and Amsterdam (M = 4.25, p < 0.001). Similarly, for Fairness Perception, the ANOVA was also significant, F(2, 434) = 14.90, p < 0.001. Respondents in Paris (M = 4.63) scored significantly higher than those in Berlin (M = 4.33, p < 0.001) and Amsterdam (M = 4.34, p < 0.001), with no significant difference between Berlin and Amsterdam. For the remaining items—Balance Disruption (algorithmic scheduling’s negative effect on work–life balance), Fair Efficiency (balancing AI efficiency with fair employee treatment), Skill Development (AI’s role in learning new skills), and Future Growth (expectations for AI’s growing importance)—no statistically significant differences were found across cities (all p > 0.05), although Paris generally had slightly higher mean scores. These findings suggest that cultural and organizational context play a role in shaping how employees perceive algorithmic management, particularly regarding issues of personal respect and fairness. Employees in Paris consistently reported higher levels of perceived fairness and personalized consideration than their counterparts in Berlin and Amsterdam, possibly reflecting cultural expectations related to workplace dignity and institutional norms in France’s hospitality sector. Conversely, perceptions related to skill development and learning opportunities appeared more stable across countries, showing no significant variation. This points to the potential for shared professional values in hospitality regarding career growth, regardless of national setting. 
These results underline the need for context-sensitive AI implementation strategies in multinational hospitality operations, particularly in dimensions that involve ethical treatment, fairness, and personal respect. Future research should build on these findings by incorporating full measurement invariance testing and deeper qualitative exploration of cultural meaning-making around AI management.

5. Discussion

The results of this study reveal a complex but ultimately optimistic picture of how employees perceive the role of AI in hospitality management. The findings demonstrate that employee attitudes toward algorithmic management are neither uniformly positive nor negative, but rather shaped by nuanced interpretations of how these technologies are introduced, experienced, and embedded within broader workplace practices. On one hand, when AI tools are well-integrated, transparent, and support employee development, they are associated with numerous benefits—including improved performance, timely recognition, reduced workloads, enhanced communication, and increased optimism about career progression. These positive associations reflect a growing recognition among staff that digital systems, when implemented thoughtfully, can improve clarity, predictability, and operational efficiency in a demanding service environment. However, this potential is counterbalanced by significant concerns. When algorithmic systems are perceived as dehumanizing, inflexible, or indifferent to personal needs, negative reactions become pronounced. Employees report reduced flexibility, heightened monotony, and feelings of alienation, particularly when AI tools override human judgment or fail to account for individual circumstances and diversity. This perception of impersonal automation not only undermines morale but may also increase turnover intention, as employees struggle to find meaning and agency in their roles. These contrasting perceptions underscore a critical insight: the success of AI in hospitality management is not solely a technical matter but is fundamentally relational and ethical. Employees do not respond to the technology itself in isolation; they respond to the management philosophy and practices surrounding its use. Whether AI systems are experienced as empowering or constraining depends largely on how they are framed, communicated, and supported within the organization. 
Although discriminant validity criteria were formally met, the high correlation (β = 0.88) and borderline HTMT value (0.88) suggest that employees may not clearly differentiate between general perceptions of AI and perceptions of fairness and balance. This conceptual proximity highlights that, in practice, these domains are closely intertwined rather than fully distinct. In other words, employees’ overall evaluations of AI systems appear strongly aligned with their sense of fairness, balance, and developmental opportunities within algorithmic management. This overlap, while a methodological limitation, also represents an important empirical finding that provides insight into how employees experience and interpret AI in the workplace.
This has direct implications for human resources (HR) and operational management. To maximize the benefits and minimize the risks of algorithmic management, hospitality organizations must adopt a hybrid approach that combines technological efficiency with human-centered values. Specifically, three key strategies are essential:
1. Transparent Communication: Employees need clear and honest explanations about how AI systems make decisions—especially regarding scheduling, performance evaluation, and shift allocation. This transparency helps build trust and ensures that staff does not feel manipulated or unfairly treated by opaque algorithms.
2. Human Oversight and Context Sensitivity: AI tools should complement, not replace, managerial discretion. Supervisors must be empowered to override algorithmic decisions when contextual, emotional, or ethical considerations demand it. This ensures that individual needs, life circumstances, and interpersonal dynamics are respected.
3. Continuous Training and Development: Staff must be equipped with the skills to use, interpret, and adapt to AI systems. Ongoing training not only reduces resistance but also positions AI as a career-enhancing tool rather than a threat. When employees feel competent and supported, their engagement with digital tools becomes more constructive.

5.1. Positive Perception of AI as Supportive and Developmental

The analysis of employee perceptions reveals a strong positive dimension to the integration of AI-based management systems within hospitality settings. Several indicators highlight how algorithmic tools are not only accepted but often valued by employees when implemented in ways that enhance job performance, professional development, and workplace communication. One of the most prominent benefits reported is the improved efficiency and accuracy of job performance. Employees perceive that AI-supported systems contribute meaningfully to streamlining daily operations. By automating routine administrative tasks, such as shift allocation, performance tracking, and data entry, AI allows staff to dedicate more time to core service activities that require human interaction and emotional intelligence. This shift is seen as a way to enhance both productivity and the perceived value of employees’ contributions. The indicator related to Performance Impact confirms that algorithmic tools are recognized not as replacements for human labor, but as enhancers of job effectiveness. Linked to this is the perception that AI contributes to greater job security and organizational commitment. The Retention Intention item illustrates that when AI systems are perceived as fair and consistent, employees feel more inclined to remain in their positions. This can be attributed to reduced ambiguity in job expectations, improved clarity of performance metrics, and the presence of structured, transparent systems that support equitable decision-making. AI-driven environments, when properly managed, can foster a sense of organizational stability and trust in leadership processes.
The model highlights that AI systems are perceived to improve timeliness and consistency in employee recognition, as reflected in the indicators Recognition Timeliness and Support Availability. Unlike human managers, who may vary in responsiveness, AI tools can immediately register, log, and communicate positive employee behaviors or outcomes. For example, performance dashboards, automated feedback notifications, or digital acknowledgments can ensure that employees feel seen and appreciated in real time. In addition, AI systems can help supervisors identify performance dips or behavioral cues early, allowing them to provide support or guidance before problems escalate. This proactive dimension of AI management contributes to a workplace culture where recognition and assistance are timely and data-informed.
Another important finding concerns the perception of workload reduction. Many respondents report that AI systems help to reduce the mental and physical strain associated with operational complexity. Workload Reduction is not merely a matter of removing tasks but of optimizing how and when those tasks are assigned. This is especially relevant in hospitality contexts, where irregular hours and high service demands can result in burnout. Algorithmic tools that dynamically adjust workloads, flag staffing gaps, or assist in resource planning are seen as relieving pressure, thus contributing to better job satisfaction. Crucially, this effect is amplified when paired with adequate training. The Training Adequacy indicator underscores the importance of equipping staff with the digital skills needed to interact confidently with AI systems. Without proper training, AI may be perceived as intrusive or disempowering. However, when training is sufficient, AI is more likely to be viewed as a supportive ally in the workplace rather than a controlling force.
The integration of AI is viewed as enhancing communication and professional growth. Respondents highlight that digital tools improve the clarity and flow of communication between staff and supervisors. Features such as automated scheduling updates, real-time messaging, or centralized information portals reduce misunderstandings and foster a sense of organizational cohesion. The indicator Communication Improvement reflects this perceived enhancement in transparency and interaction. At the same time, employees express Career Optimism in relation to AI deployment, seeing it as a sign that their workplace is evolving and investing in future-forward practices. The presence of algorithmic systems is interpreted not as a threat but as a signal of modernization that could unlock new pathways for learning, advancement, and alignment with broader industry trends. Together, these findings suggest that when AI management is implemented with attention to usability, fairness, training, and communication, it can have a profoundly supportive and developmental impact on employees. Rather than eroding human value, well-designed AI systems can affirm it—by reducing operational burdens, increasing recognition, enabling clearer communication, and fostering a work environment conducive to growth. These insights provide strong justification for hospitality organizations to view AI not only as a tool for cost reduction or process optimization but also as a mechanism for enhancing employee experience and engagement.

5.2. Negative Perception of AI as Restrictive and Impersonal

While the integration of AI into hospitality management brings several perceived benefits, the findings also highlight a substantial set of concerns that frame algorithmic systems as restrictive, impersonal, and occasionally counterproductive from the employee’s perspective. These negative perception indicators underscore the risks associated with poorly contextualized or rigid AI applications, particularly when they compromise human autonomy, emotional satisfaction, or equitable treatment. One of the most frequently cited drawbacks relates to reduced flexibility and increased turnover risk. The indicators Flexibility Loss and Turnover Risk reflect the view that AI-driven scheduling systems often prioritize operational optimization over individual preferences or personal life circumstances. Employees report diminished autonomy in negotiating their schedules, difficulty making shift adjustments, and a general lack of agency in navigating their work–life balance. This rigidity not only undermines morale but also appears to increase dissatisfaction and burnout, contributing to a higher intention to leave the organization. These outcomes are particularly problematic in service industries like hospitality, where unpredictable demand patterns already make scheduling a stress point. When employees perceive that technology replaces dialogue and negotiation with impersonal allocation, their emotional connection to the job weakens.
In addition to reduced autonomy, many respondents express concern about the monotonization of work. The indicators Job Monotony and Satisfaction Tradeoff suggest that algorithmic management, by focusing heavily on efficiency and output metrics, may lead to oversimplified and repetitive job roles. Employees report that their work becomes increasingly task-oriented and stripped of relational or creative elements. This loss of variety and meaningful interaction erodes the intrinsic rewards of hospitality work, such as providing personalized service or exercising interpersonal skills. Although operational gains may be realized, they come at the cost of emotional engagement, a core ingredient of service quality and long-term employee retention. The perceived tradeoff between efficiency and fulfillment points to a fundamental tension in the deployment of AI in emotionally demanding roles—one where technological precision must be weighed against human motivation and purpose. Perhaps more critically, employees raise ethical and procedural concerns about the replacement of human decision-making and the lack of inclusive sensitivity in algorithmic logic. The indicators Decision Replacement and Diversity Respect bring attention to the limitations of AI in handling individual differences, complex life contexts, and cultural nuances. Employees feel that when decisions related to scheduling, evaluations, or conflict resolution are delegated entirely to automated systems, they risk becoming detached from the lived realities of the workforce. AI systems, especially those designed using generic or historically biased data, may fail to accommodate diverse needs—whether based on gender, age, family obligations, health conditions, or cultural practices. This perceived lack of adaptability contributes to a sense of injustice and marginalization, particularly for those who fall outside the norm assumed by the algorithm’s training data.
These concerns are not simply about fairness in outcome but also about procedural justice. Employees want to be seen, heard, and treated as individuals—not merely as data points within a system. When AI is perceived as an opaque authority that cannot explain its logic or flex its rules, it can undermine trust in organizational decision-making and damage the psychological contract between employer and employee. Taken together, these negative perceptions suggest that algorithmic systems must be implemented with caution, transparency, and human oversight. The absence of emotional intelligence, nuance, and contextual judgment in AI tools can lead to perceptions of coldness, alienation, and systemic unfairness—even if the tools are technically effective. Importantly, these issues are not inherent to AI itself, but rather reflect design choices and managerial practices that fail to anticipate the social dynamics of digital transformation. To mitigate these risks, organizations should prioritize ethical algorithm design, employee involvement in digital transitions, and mechanisms for appeal or override when employees feel that a system has failed to account for their circumstances. AI must be positioned not as a replacement for empathy or discretion, but as a tool to support fair and flexible human-centered management.
These perception indicators collectively reflect a dual reality: while many employees see AI as a facilitator of performance, communication, and growth, others fear a loss of autonomy, human oversight, and emotional connection to their work. The results suggest that AI systems must be carefully calibrated—not just for operational efficiency but also for personalization, empathy, and adaptability. If AI tools are perceived as neutral assistants that empower employees, they are welcomed. However, when experienced as rigid systems that marginalize human context, resistance and dissatisfaction rise. Importantly, these findings underscore the need for hybrid management models, where technology enhances—but does not replace—human judgment and care.

6. Conclusions

This study set out to examine hotel employees’ perceptions of AI-driven human resource management systems within the context of the Accor Group’s operations in three major European cities: Paris, Berlin, and Amsterdam. It sought to understand how these perceptions shape employee attitudes, job satisfaction, and work–life balance, in a period of accelerating technological transformation across the hospitality industry. Through robust quantitative analysis—including factor analysis and structural equation modeling—two core latent constructs emerged as central to employee experiences: AI Perceptions and Balanced Management. The findings offer a multidimensional understanding of how AI technologies are received by frontline hospitality workers. On the one hand, employees recognize a broad range of potential benefits when AI systems are implemented with care and transparency. AI was often associated with increased efficiency, improved scheduling, consistent performance tracking, faster internal communication, and greater access to development opportunities. In these cases, algorithmic management is not viewed as a threat but as a facilitator of smoother workflows and more equitable career progression. Employees who perceived these systems as fair, transparent, and supportive of their growth tended to report higher levels of satisfaction, trust in management, and overall engagement in their roles.
On the other hand, the research also uncovered significant concerns. Many employees expressed anxiety over the depersonalization of management, the monotony introduced by systematized tasks, and the erosion of human discretion in decision-making. Some viewed AI systems as rigid and inflexible, unable to capture the nuances of individual performance or the complexities of day-to-day hotel operations. Others noted that algorithmic management tools—when poorly communicated or insufficiently contextualized—could foster feelings of alienation and a lack of autonomy. In these contexts, AI was not perceived as a neutral tool, but as a force that could undermine morale, strain interpersonal dynamics, and contribute to burnout or attrition.
These findings confirm the study’s central hypothesis (H): Employees’ perceptions of algorithmic (AI-driven) management systems significantly influence their overall attitudes and work–life balance. This conclusion is directly supported by the statistical evidence derived from structural equation modeling. The analysis revealed a strong and statistically significant relationship between the two latent constructs—AI Perceptions (F1) and Balanced Management (F2)—with a standardized path coefficient of β = 0.88. This indicates that when employees perceive algorithmic systems as transparent, fair, and facilitative, they are considerably more likely to report favorable outcomes in terms of productivity, perceived fairness, opportunities for skill development, and overall well-being. The strength of this association underscores the pivotal role of perception in mediating the effects of digital transformation within the workplace. Furthermore, the model fit indices (CFI = 0.913; TLI = 0.904; GFI = 0.913; RMSEA = 0.062) confirm that the proposed model offers a statistically valid and reliable representation of the relationship between AI-driven management and human-centered outcomes. It is not the presence of AI itself, but the way it is introduced, governed, and experienced by employees that determines its impact. Where organizations invest in ethical design, employee involvement, and mechanisms for feedback and accountability, AI can serve as a positive force. Conversely, when AI is deployed in a top-down, opaque, or overly mechanistic manner, it risks alienating the very workforce it is intended to support.
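The reported path coefficient can be made concrete with a small numerical sketch. In a model with a single predictor, the standardized regression slope reduces to the Pearson correlation between the standardized variables; the toy composite scores below are purely illustrative and do not reproduce the study's estimation, which a full SEM package (e.g., lavaan or AMOS) would perform over the latent constructs.

```python
import numpy as np

def standardized_beta(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized slope of y regressed on x. With a single
    predictor this equals the Pearson correlation of x and y."""
    zx = (x - x.mean()) / x.std()  # population std (ddof=0)
    zy = (y - y.mean()) / y.std()
    return float((zx * zy).mean())

# Toy composite scores standing in for F1 (AI Perceptions)
# and F2 (Balanced Management); values are made up.
f1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f2 = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
beta = standardized_beta(f1, f2)  # 0.8 for these toy data
```

A β of 0.88 between the latent factors therefore indicates that a one-standard-deviation improvement in AI Perceptions is associated with nearly nine-tenths of a standard deviation of improvement in Balanced Management outcomes.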
The study contributes to a growing body of literature on the intersection of technology and human resource management in service industries. It advances our understanding by foregrounding the voices of employees—those most directly affected by digital transformation—within the specific, high-pressure context of hospitality. In an industry where emotional labor, adaptability, and human interaction remain central, the challenge is not simply to automate, but to integrate. The most successful AI systems will be those that amplify human capabilities rather than attempt to replace them, embedding discretion, empathy, and context-awareness into automated decision-making processes.
From a managerial perspective, the implications are clear. First, AI should be implemented alongside comprehensive training programs that ensure employees understand not only how systems function, but why they are being used. Second, there must be channels for ongoing dialogue and feedback, so that employees feel empowered to voice concerns or propose adjustments. Third, organizations must maintain a strong emphasis on fairness, inclusion, and psychological safety in digital environments, reinforcing that technology is a tool for support—not surveillance or control. Moreover, ethical considerations should be central to every stage of AI system design and deployment. This includes transparency in algorithmic decision-making, the ability to audit system outputs, and the retention of human judgment in high-stakes decisions such as promotions, disciplinary actions, or workload distribution. AI should support human dignity, not diminish it.
Looking forward, future research should explore these dynamics in a broader range of organizational and cultural contexts. Comparative studies across different hotel chains, job roles, or national settings could provide deeper insight into how cultural attitudes toward technology shape acceptance and resistance. Longitudinal studies may also help track how perceptions evolve over time, especially as AI systems become more embedded in organizational life. Additionally, qualitative research could offer richer, more textured accounts of employee experiences that complement and extend the quantitative findings presented here. The integration of AI into human resource management is neither inherently beneficial nor detrimental. Its success depends on the choices organizations make—about design, governance, communication, and culture. In hospitality, where human connection remains at the heart of service, AI must be aligned with the values of empathy, fairness, and respect. Only by maintaining this balance can organizations ensure that technological innovation enhances, rather than erodes, the human experience of work.
Future research and organizational practice must give greater attention to the ethical risks associated with AI implementation in HR contexts. Beyond concerns of fairness and transparency, issues such as algorithmic bias, disproportionate surveillance, and data privacy violations pose real threats to employee well-being and autonomy. If left unaddressed, these risks could erode trust, reinforce inequality, and diminish the ethical legitimacy of digital management systems. Thus, any deployment of algorithmic tools must be accompanied by critical ethical review processes, participatory governance mechanisms, and clear safeguards against misuse.

6.1. Limitations and Future Research

A key limitation of this study lies in the demographic structure of the sample: approximately 70% of respondents were over the age of 45, and only 3.4% were aged 25–34. While this reflects the workforce composition of the surveyed Accor hotels, it does not fully represent the broader hospitality sector, which typically includes a younger employee base. The skewed distribution likely reflects the structure of the participating hotels, many of which employ more tenured staff and maintain relatively stable workforces, contexts in which algorithmic HR systems are more likely to be implemented and sustained. While this limits the generalizability of the findings to younger or more transient employee populations, it offers important insight into the perceptions of experienced workers, who may be more deeply engaged with organizational change processes and digital transformation. Future research should aim to include a more demographically diverse sample, particularly early-career employees, to capture a broader spectrum of responses to AI-driven management practices.
Another significant limitation concerns the specification of the AI Perceptions construct. In the current study, this construct was modeled reflectively, supported by high loadings and internal consistency (CR = 0.96, AVE = 0.91). Nonetheless, the indicators are conceptually diverse and may also lend themselves to a formative or higher-order specification. Exploring alternative model specifications represents an important avenue for future research, as this may better capture the complex and multidimensional nature of employee experiences with AI-driven management systems.
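For reference, the two reliability statistics reported above can be computed directly from standardized factor loadings using the standard Fornell–Larcker formulas. The sketch below uses made-up loadings for a hypothetical five-indicator reflective construct, not the study's actual data.

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """Composite reliability (Fornell-Larcker):
    CR = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2)),
    assuming standardized loadings and uncorrelated error terms."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + (1.0 - lam ** 2).sum()))

def average_variance_extracted(loadings) -> float:
    """AVE: the mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Hypothetical loadings for a five-indicator reflective construct.
lams = [0.9, 0.9, 0.9, 0.9, 0.9]
cr = composite_reliability(lams)        # about 0.955
ave = average_variance_extracted(lams)  # 0.81
```

Values such as the reported CR = 0.96 and AVE = 0.91 imply uniformly high loadings, which is consistent with a reflective specification but, as noted, does not by itself rule out a formative or higher-order alternative.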

6.2. Practical Implications

For hotels implementing AI-driven management systems, it is crucial to focus not only on technological efficiency but also on maintaining a human-centered approach. Transparent communication about how AI algorithms function—especially in areas such as scheduling, performance evaluations, and shift allocations—can help build trust and reduce employee anxiety about automated decision-making. Human oversight must be preserved by empowering managers to intervene and adjust AI-generated decisions when unique personal circumstances or ethical issues arise, ensuring fairness and empathy in workplace management. Continuous and comprehensive training programs are essential to help employees develop the necessary digital literacy and skills to confidently interact with AI tools. This fosters a sense of empowerment and reduces resistance, transforming AI from a perceived threat into a supportive resource that enhances job performance.
AI systems should be designed and applied with sensitivity to diversity and individual needs, such as accommodating different work–life balance requirements and personal circumstances. This respect for personal consideration can improve employee satisfaction and reduce turnover risk. Additionally, hotels can leverage AI to provide timely, consistent feedback and improve communication channels between staff and supervisors, thereby increasing transparency and engagement. Ultimately, by balancing AI-driven operational efficiency with fairness, personal development opportunities, and ethical management practices, hotels can create a more supportive work environment. This balanced approach enhances employee motivation, loyalty, and overall well-being—factors that are critical to sustaining high service quality and competitive advantage in the hospitality industry.
Ethical safeguards must be embedded into the lifecycle of AI system design and deployment. This includes routine audits to detect algorithmic bias, limiting surveillance capabilities to only what is operationally necessary, and ensuring employees have control over their personal data. By proactively addressing these ethical risks, hotels can foster not only operational efficiency but also a culture of trust, fairness, and accountability that aligns with both organizational values and broader societal expectations.
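A routine audit for algorithmic bias can start with something as simple as comparing outcome rates across employee groups. The sketch below computes the disparate-impact ratio behind the well-known "four-fifths rule" from US employment-testing guidance; the numbers and scenario are hypothetical, and the 0.8 threshold is a regulatory convention, not a finding of this study.

```python
def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of group A's favorable-outcome rate to group B's.
    Under the four-fifths rule, values below 0.8 warrant review."""
    return (selected_a / total_a) / (selected_b / total_b)

# Hypothetical audit: desirable weekend-off shifts granted per group.
ratio = disparate_impact_ratio(30, 100, 50, 100)  # 0.30 / 0.50 = 0.6
flagged = ratio < 0.8  # this allocation would be flagged for review
```

Running such a check periodically over scheduling, evaluation, and promotion outputs is one concrete way to operationalize the audit requirement described above.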
Beyond hotel operations, the findings have broader implications for tourism practitioners, place managers, and educators. The study’s attention to employees’ affective responses to algorithmic management suggests that experiential design in tourism could benefit from integrating affective and sensory dimensions. For example, place managers could consider how AI tools mediate not only efficiency but also the emotional tone of workplace and visitor environments. In regenerative tourism initiatives, which prioritize holistic well-being and socio-ecological resilience, planning processes might be enhanced by including affective and sensory feedback from both workers and guests. Similarly, tourism educators might use these insights to develop curricula that prepare future professionals to critically engage with the ethical and emotional dimensions of AI in service contexts, balancing technical skills with an understanding of human experience.

Author Contributions

Conceptualization, A.V. and M.T.; methodology, A.V. and V.M.; software, M.T.; validation, A.V. and M.T.; formal analysis, A.V.; investigation, M.T.; resources, A.V.; data curation, V.M.; writing—original draft preparation, A.V.; writing—review and editing, V.M.; visualization, M.T.; supervision, A.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Singidunum University (protocol code 169; approval date: 1 May 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

  53. Yadav, A., & Dhar, R. L. (2021). Linking frontline hotel employees’ job crafting to service recovery performance: The roles of harmonious passion, promotion focus, hotel work experience, and gender. Journal of Hospitality and Tourism Management, 47, 485–495. [Google Scholar] [CrossRef]
  54. Yang, Z. (2025). Application of evolutionary deep learning algorithm in construction engineering management system. Systems and Soft Computing, 7, 200317. [Google Scholar] [CrossRef]
  55. Zhang, H., Xiang, Z., & Zach, F. J. (2025). Generative AI vs. humans in online hotel review management: A task-technology fit perspective. Tourism Management, 110, 105187. [Google Scholar] [CrossRef]
  56. Zhang, Y., Wang, J., Zhang, J., & Wang, Y. (2025). To be right on the button: How and when hotel frontline service employees’ AI awareness influences deviant behavior. International Journal of Hospitality Management, 126, 104090. [Google Scholar] [CrossRef]
  57. Zhou, S., Yi, N., Rasiah, R., Zhao, H., & Mo, Z. (2024). An empirical study on the dark side of service employees’ AI awareness: Behavioral responses, emotional mechanisms, and mitigating factors. Journal of Retailing and Consumer Services, 79, 103869. [Google Scholar] [CrossRef]
  58. Zhu, D. J., & Chen, M.-H. (2025). Modeling AI’s impact on hospitality firm profit: Demand and productivity effect. International Journal of Hospitality Management, 131, 104256. [Google Scholar] [CrossRef]
Figure 1. Structural equation modeling (SEM). Source: prepared by the authors (2025).
Table 1. KMO and Bartlett’s test of sphericity.

Kaiser–Meyer–Olkin (KMO) Measure: 0.970
Bartlett’s Test of Sphericity (Chi-Square): 22,053.52
Degrees of Freedom (df): 378
Significance (p-value): <0.001
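For readers reproducing Table 1, Bartlett’s statistic follows directly from the determinant of the item correlation matrix: χ² = −(n − 1 − (2p + 5)/6)·ln|R|, with df = p(p − 1)/2. The sketch below is illustrative only: it uses a toy 3-variable correlation matrix, not the study’s data; only the sample size n = 437 is taken from the article.

```python
import math

def det(m):
    # Recursive cofactor expansion; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j, val in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * val * det(minor)
    return total

def bartlett_sphericity(corr, n):
    """Chi-square statistic and df for Bartlett's test of sphericity."""
    p = len(corr)
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * math.log(det(corr))
    df = p * (p - 1) // 2
    return chi2, df

# Toy correlation matrix (illustrative, not the study's 28-item matrix).
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
chi2, df = bartlett_sphericity(R, n=437)
print(round(chi2, 1), df)
```

A significant chi-square (as in Table 1) indicates the correlation matrix departs from identity, i.e., the items are correlated enough for factor analysis.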
Table 2. Total variance explained.

Factor | Initial Eigenvalue | % of Variance | Cumulative %
1 | 22.06 | 78.77% | 78.77%
2 | 1.30 | 4.63% | 83.40%

Extraction method: principal component analysis. Only factors with eigenvalues > 1 are reported.
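The arithmetic behind Table 2 can be reproduced from the eigenvalues alone: each factor’s share of variance is its eigenvalue divided by the number of items. A minimal stdlib sketch follows; the 28-item count is inferred from Bartlett’s df of 378 (p(p − 1)/2 = 378 ⇒ p = 28), and because the published eigenvalues are rounded, the percentages differ from Table 2 by a few hundredths.

```python
def variance_explained(eigenvalues, n_items):
    """Percent and cumulative percent of variance per retained factor
    (Kaiser criterion: keep only eigenvalues > 1)."""
    retained = [ev for ev in eigenvalues if ev > 1]
    rows, cumulative = [], 0.0
    for i, ev in enumerate(retained, start=1):
        pct = 100.0 * ev / n_items
        cumulative += pct
        rows.append((i, ev, round(pct, 2), round(cumulative, 2)))
    return rows

# Rounded eigenvalues from Table 2 plus one sub-threshold value;
# the third factor is dropped by the Kaiser criterion.
print(variance_explained([22.06, 1.30, 0.74], n_items=28))
```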
Table 3. Rotated factor loadings (pattern matrix).

Item | AI Perceptions | Balanced Management
Performance Impact | 0.844 | 0.468
Retention Intention | 0.783 | 0.516
Flexibility Loss | 0.756 | 0.485
Turnover Risk | 0.769 | 0.467
Recognition Timeliness | 0.852 | 0.450
Job Monotony | 0.843 | 0.474
Balance Disruption | 0.595 | 0.701
Satisfaction Tradeoff | 0.725 | 0.500
Support Availability | 0.830 | 0.447
Workload Reduction | 0.771 | 0.471
Training Adequacy | 0.761 | 0.395
Communication Improvement | 0.866 | 0.482
Decision Replacement | 0.781 | 0.467
Diversity Respect | 0.807 | 0.420
Career Optimism | 0.839 | 0.469
Personal Consideration | 0.422 | 0.783
Productivity Boost | 0.327 | 0.738
Fair Efficiency | 0.358 | 0.779
Skill Development | 0.481 | 0.828
Future Growth | 0.460 | 0.776

Extraction method: principal component analysis. Rotation method: Varimax with Kaiser normalization. Loadings < 0.30 are suppressed for clarity.
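Table 3 follows a common reporting convention: each item is assigned to the factor on which it loads highest, and loadings below 0.30 are suppressed from display. A short sketch of that bookkeeping, using a few of the reported rows (factor names as in Table 3):

```python
def assign_factor(loadings, threshold=0.30):
    """Assign each item to its highest-loading factor and suppress
    loadings below the display threshold (shown as None)."""
    table = {}
    for item, (f1, f2) in loadings.items():
        shown = tuple(l if abs(l) >= threshold else None for l in (f1, f2))
        factor = "AI Perceptions" if f1 >= f2 else "Balanced Management"
        table[item] = (factor, shown)
    return table

# A subset of the pattern-matrix rows from Table 3.
loadings = {
    "Performance Impact": (0.844, 0.468),
    "Balance Disruption": (0.595, 0.701),
    "Skill Development": (0.481, 0.828),
}
for item, (factor, shown) in assign_factor(loadings).items():
    print(item, "->", factor, shown)
```

Note that several items cross-load above 0.40 on both factors, which is why the article interprets the two dimensions as related rather than independent.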
Table 4. Standardized regression weights (structural equation model).

Dependent Variable | Predictor | Estimate
Balanced Management (F2) | AI Perceptions (F1) | 0.879
Balanced Management (F2) | Experience | −0.097
Balanced Management (F2) | Gender | 0.053
Balanced Management (F2) | Age | 0.130
Balanced Management (F2) | Education | −0.052

Observed Indicators of F1 (AI Perceptions)

Indicator | Factor | Estimate
Performance Impact | F1 | 0.965
Retention Intention | F1 | 0.936
Flexibility Loss | F1 | 0.897
Turnover Risk | F1 | 0.899
Recognition Timeliness | F1 | 0.964
Job Monotony | F1 | 0.966
Satisfaction Tradeoff | F1 | 0.879
Support Availability | F1 | 0.943
Workload Reduction | F1 | 0.902
Training Adequacy | F1 | 0.856
Communication Improvement | F1 | 0.990
Decision Replacement | F1 | 0.909
Diversity Respect | F1 | 0.909
Career Optimism | F1 | 0.963

Observed Indicators of F2 (Balanced Management)

Indicator | Factor | Estimate
Future Growth | F2 | 0.909
Skill Development | F2 | 0.962
Fair Efficiency | F2 | 0.849
Productivity Boost | F2 | 0.796
Personal Consideration | F2 | 0.887
Balance Disruption | F2 | 0.923
Note: Standardized estimates from the default structural model (Group 1).
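Although the article does not report it, a composite reliability (CR) for each latent factor can be derived directly from the standardized loadings in Table 4 via CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal sketch for the Balanced Management (F2) indicators; the CR value is our illustrative computation, not an author-reported statistic.

```python
def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings:
    CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)  # error variance = 1 - lambda^2
    return s * s / (s * s + error)

# Standardized loadings of Balanced Management (F2) from Table 4.
f2 = [0.909, 0.962, 0.849, 0.796, 0.887, 0.923]
cr = composite_reliability(f2)
print(round(cr, 3))
```

Values above 0.70 are conventionally taken as adequate, so loadings of this magnitude imply a highly reliable measurement model.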
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Turčinović, M.; Vujko, A.; Mirčetić, V. Algorithmic Management in Hospitality: Examining Hotel Employees’ Attitudes and Work–Life Balance Under AI-Driven HR Systems. Tour. Hosp. 2025, 6, 203. https://doi.org/10.3390/tourhosp6040203
