Article

Guest Acceptance of Smart and AI-Enabled Hotel Services in an Emerging Market: Evidence from Albania

by Majlinda Godolja *, Romina Muka, Tea Tavanxhiu and Kozeta Sevrani
Faculty of Economy, University of Tirana, 1010 Tirana, Albania
* Author to whom correspondence should be addressed.
Tour. Hosp. 2026, 7(1), 14; https://doi.org/10.3390/tourhosp7010014
Submission received: 28 November 2025 / Revised: 23 December 2025 / Accepted: 30 December 2025 / Published: 2 January 2026
(This article belongs to the Special Issue Digital Transformation in Hospitality and Tourism)

Abstract

The rapid integration of artificial intelligence (AI) and smart technologies is transforming hospitality operations, yet guest acceptance remains uneven, shaped by utilitarian, experiential, ethical, and cultural evaluations. This study develops and empirically tests a multicomponent framework to explain how these factors jointly influence two behavioral outcomes: whether AI-enabled features affect hotel choice and whether guests are willing to pay a premium. A cross-sectional survey of 689 hotel guests in Tirana, Albania, an emerging hospitality market and rapidly growing tourist destination in the Western Balkans, was analyzed using cumulative link models, partial proportional-odds models, nonlinear and interaction extensions, and binary robustness checks. Results show that prior experience with smart or AI-enabled hotels, higher awareness, and trust in AI, especially trust in responsible data handling, consistently increase both acceptance and willingness to pay. Perceived value, operationalized through the breadth of identified benefits and desired features, also exhibits robust positive effects. In contrast, privacy concerns selectively suppress strong acceptance, particularly financial willingness, while cultural–linguistic fit and support for human–AI collaboration contribute positively but modestly. Interaction analyses indicate that trust can mitigate concerns about reduced personal touch. Open-ended responses reinforce these patterns, highlighting the importance of privacy, human interaction, and staff–AI coexistence. Overall, findings underscore that successful AI adoption in hospitality requires aligning technological innovation with ethical transparency, experiential familiarity, and cultural adaptation.

1. Introduction

The hospitality industry is undergoing a rapid and structural transformation driven by the integration of artificial intelligence (AI) and smart technologies. Hotels increasingly deploy AI-powered concierges, service robots, mobile check-in systems, and in-room voice assistants to enhance operational efficiency, personalize guest experiences, and address workforce shortages (Marghany et al., 2025; Ren et al., 2025). The adoption of such technologies accelerated markedly during the COVID-19 pandemic, when digital and contact-light service processes became essential for reducing physical interactions and maintaining operational continuity (S. Kim et al., 2021; Yang et al., 2021). As AI continues to develop, understanding how guests evaluate these innovations has become strategically important for the future competitiveness and sustainability of hotel operations.
Despite significant technological progress, guest acceptance remains uneven and often ambivalent. While some travelers value the convenience, efficiency, and novelty of smart and AI-enabled services, others express reservations related to impersonality, reduced human warmth, job displacement, and the loss of emotional connection traditionally associated with hospitality encounters (Gursoy, 2025; Kang et al., 2023). Privacy and data-security concerns amplify this ambivalence: many guests hesitate to adopt systems requiring behavioral or personal data, citing fears of surveillance, profiling, or misuse (Jia et al., 2024; Hu & Min, 2025). This creates a concrete managerial risk: if guests do not accept AI-enabled services, investments may be underused, generate dissatisfaction, or create reputational concerns, particularly in emerging markets where exposure to such technologies is still uneven.
Albania provides a compelling context for examining these issues. As one of the fastest-growing tourism destinations in the Western Balkans, the country has experienced rapid increases in international arrivals and substantial investment in accommodation infrastructure. Yet the integration of smart and AI-enabled systems in Albanian hotels remains at an early developmental stage, characterized by heterogeneous adoption, unequal technological readiness, and limited guest familiarity. This setting allows observation of AI acceptance under conditions of rapid tourism growth and still-developing digital service expectations, with implications for managerial decision-making in comparable emerging destinations.
Despite growing research on AI in hospitality, evidence remains limited on how experiential, ethical, interpersonal, and cultural factors jointly shape guest acceptance, particularly in emerging markets where exposure to AI-enabled services is still uneven (Shum et al., 2024; Chi et al., 2023). A second gap concerns financial acceptance: few studies assess whether guests are willing to pay more for AI-enhanced services, even though this is central to investment decisions. A third gap involves mechanism: it remains unclear whether trust in AI can buffer concerns about reduced personal touch and thereby sustain acceptance.
To address these gaps, this study integrates three domains: (i) utilitarian and experiential drivers (utilitarian evaluations, familiarity, and awareness), (ii) ethical and human dimensions (trust, privacy concerns, and perceived loss of personal touch), and (iii) contextual and value considerations (cultural–linguistic fit and value perceptions). The framework is tested against two ordered acceptance outcomes: whether smart/AI features influence hotel choice and whether guests are willing to pay more for AI-enabled services.
The hypotheses are developed in the Literature Review, and the measurement and modelling details are provided in Section 3.
Using a large in-person survey conducted in Albania, the study provides new empirical evidence on how guests in an emerging market evaluate AI-enabled hospitality services. The study contributes by linking a multi-domain acceptance framework to both hotel-choice influence and willingness to pay, and by testing whether trust mitigates concerns about reduced personal touch. These insights offer practical relevance for researchers and practitioners designing ethically transparent, culturally adaptive, and guest-centered AI-enabled hospitality services in Albania and comparable emerging destinations.

2. Literature Review

Artificial intelligence (AI) has rapidly become one of the most transformative forces shaping contemporary hospitality services. Hotels increasingly deploy AI-enabled systems such as intelligent check-in kiosks, natural-language chatbots, predictive recommendation engines, facial-recognition entry, voice-controlled smart rooms, and automated service fulfillment. These technologies rely on machine learning, natural language processing, and real-time data analytics to enhance convenience, streamline interactions, and support personalized service delivery (Tussyadiah, 2020; Buhalis & Leung, 2018; Ivanov & Webster, 2019a). As these systems expand across both front-stage and back-stage operations, understanding how guests form evaluations and intentions toward AI-enabled hospitality services has become a central research priority (Mariani & Borghi, 2023; Huang & Rust, 2018).
Within this context, the literature highlights three broad domains that align closely with the constructs measured in this study: utilitarian–experiential foundations, human–social and ethical considerations, and contextual value assessments. These domains are complementary: utilitarian and experiential evaluations shape perceived usefulness and feasibility; ethical and interpersonal evaluations shape perceived safety and service quality; and contextual fit shapes whether AI-mediated encounters feel appropriate and trustworthy in a given setting. The following sections review these domains using terminology parallel to the survey instrument and analytical framework.

2.1. Core Acceptance Drivers: Utilitarian, Experiential, and Prior Experience

Technology acceptance theories such as TAM (Davis, 1989) and UTAUT/UTAUT2 (Venkatesh et al., 2003, 2012) consistently emphasize functionality, performance expectancy, and ease of use as foundational drivers of technology adoption. In AI-enabled hospitality, these drivers typically manifest as perceived improvements in convenience, speed, and personalization, shaping guests’ evaluations of whether AI-enabled services are useful, reliable, and conducive to a smooth hotel experience (Gursoy et al., 2019; Prentice et al., 2020). Operationally, such gains are reflected in technology-mediated guest journeys, where self-service interfaces reduce perceived waiting burdens at check-in (Kokkinou & Cranage, 2013), AI-supported personalization strengthens the perceived relevance of recommended services (Makivić et al., 2024), and smart-hotel attributes include in-room control features (e.g., lighting/room settings) that enhance convenience and perceived performance (J. J. Kim et al., 2020).
Consistent with this theoretical foundation, awareness of smart and AI-enabled technologies emerges as an important antecedent of acceptance. Awareness can shape expectations about functionality and reduce ambiguity by helping individuals understand what AI systems can do and when they are appropriate to use. In tourism and hospitality contexts, evidence also indicates that consumers differ substantially in their familiarity with AI tools and in the benefits/disadvantages they attribute to them, supporting the premise that knowledge and awareness condition subsequent evaluations and intentions (Sousa et al., 2024). Greater awareness should therefore increase acceptance by reducing uncertainty and increasing perceived controllability of AI-enabled encounters.
Similarly, prior smart/AI hotel experience is expected to predict acceptance because experiential familiarity reduces uncertainty and increases confidence in navigating technology-mediated service encounters. In smart-hotel research, perceived usefulness and ease of use are empirically linked to technology amenities and visiting intentions, supporting the role of direct exposure and learning-by-using in strengthening acceptance (Yang et al., 2021). Related evidence from AI personalization in hotels also shows that technological experience is integral to how guests evaluate AI-enabled value creation and service outcomes (Makivić et al., 2024). In other words, prior experience can strengthen acceptance by clarifying performance expectations and lowering perceived risk associated with novel service processes.
Perceived value plays a central role in shaping both attitudinal and financial acceptance. In this study, value is operationalized through the number of perceived benefits associated with AI-enabled hospitality services and the number of desired AI features guests would like hotels to adopt. These measures reflect functional, emotional, and epistemic value dimensions commonly identified in hospitality technology research (Mariani & Borghi, 2021; Prentice et al., 2020). Perceived benefits include convenience, speed, personalization, multilingual assistance, and enhanced accuracy, while desired features capture interest in additional AI capabilities such as smart-room automation, predictive recommendations, or enhanced check-in efficiency (Said, 2023). Research consistently shows that guests who identify more benefits or express interest in more AI features demonstrate higher acceptance and greater willingness to pay (Ivanov & Webster, 2024). Because willingness to pay implies a higher-cost commitment than general preference, perceived value is expected to be especially relevant for financial acceptance.
Collectively, awareness, prior experience, and perceived value, captured through perceived benefits and desired features, represent the utilitarian and experiential core of AI acceptance.

2.2. Human and Social Dimensions: Interaction, Trust, and Ethics

AI-enabled hospitality interactions are shaped not only by functional evaluations but also by human–social and ethical expectations. Hospitality is a service domain where warmth, empathy, and human interaction traditionally play central roles (Barnes et al., 2020). Accordingly, constructs such as trust in AI, privacy concerns, perceived loss of personal touch, and support for human–AI collaboration capture the interpersonal and ethical evaluations that shape adoption.
Trust in AI, defined as confidence in the accuracy, fairness, responsibility, and data-handling competence of AI systems, is widely recognized as one of the strongest determinants of acceptance (Wirtz et al., 2018; Hoffman et al., 2013). When guests trust that AI systems operate reliably and ethically, they experience lower uncertainty and are more likely to rely on AI-enabled services. Trust also reduces perceived risk in contexts involving sensitive information or automated decision-making (McLean et al., 2020; J. J. Kim et al., 2020). This implies that trust should support both choice-based acceptance and willingness to pay by lowering perceived downside risk and strengthening confidence in service outcomes.
Conversely, privacy concerns represent a major inhibitor of AI adoption. Because AI systems often rely on personal, behavioral, or biometric data, guests frequently worry about how information is collected, stored, and used (Culnan & Armstrong, 1999; Morosan & DeFranco, 2015). Privacy concerns have especially strong effects on financial acceptance, suppressing the willingness to pay for AI-enabled services even among guests who express general curiosity or mild interest. This aligns directly with the operationalization used in this study. A privacy-calculus perspective suggests that perceived benefits promote adoption, but perceived risks become more salient when acceptance requires paying a premium.
Interpersonal expectations further shape acceptance. Perceived loss of personal touch, measured directly in the survey, captures concerns that AI interactions may feel less warm, less empathetic, or less emotionally attuned. These concerns often arise in interactions involving chatbots, automated recommendations, or standardized AI responses. Research shows that such interpersonal reservations may not always reduce acceptance directly but may influence how guests interpret other constructs, such as trust (Kang et al., 2023). Accordingly, depersonalization concerns can be conceptualized as a relational cost that shapes acceptance more strongly when ethical confidence is low.
This is especially relevant for the interaction mechanism tested in the present study, where trust in AI is hypothesized and found to weaken the negative implications of perceived loss of personal touch. Prior literature supports this buffering effect: trust can mitigate concerns about depersonalization by increasing comfort with automated interactions (Wirtz et al., 2018). When trust is high, an AI-mediated service may be interpreted as competent and safe, reducing the extent to which reduced warmth is experienced as a loss in overall service quality.
Finally, support for human–AI collaboration captures attitudes toward hybrid service models in which AI augments rather than replaces staff. Studies show that guests often prefer AI systems that assist employees (e.g., by automating routine tasks or providing real-time recommendations), enabling staff to focus on emotional labor and personalized service (Tuomi et al., 2021; Ivanov & Webster, 2024). This construct aligns with the collaborative-service logic embedded in the instrument.
Together, these human–social and ethical constructs reflect a multidimensional evaluation that goes beyond functionality and addresses the relational and emotional expectations that define hospitality.

2.3. Contextual and Value Considerations: Cultural Fit and Willingness to Pay

Acceptance of AI-enabled hospitality services also depends on contextual and cultural fit. Cultural–linguistic fit, measured as the perceived alignment between AI system communication and local language or cultural norms, plays a critical role in shaping comfort and trust (Holmqvist et al., 2017). AI interactions that reflect appropriate language structures, politeness norms, and culturally sensitive communication patterns are perceived as more natural and reliable. Conversely, poorly localized AI outputs may generate friction, reduce perceived authenticity, or signal technological immaturity, especially in emerging markets (Mariani & Borghi, 2023). Because language and interaction style are central to service encounters, cultural–linguistic fit can reduce friction and increase perceived competence, thereby supporting acceptance and willingness to pay.
A further contextual factor is guests’ digital familiarity (technology readiness) (Parasuraman, 2000). Guests with lower digital familiarity may anticipate higher effort and lower controllability when interacting with AI-enabled services, which can reduce acceptance even when awareness is present. This distinction matters because awareness reflects knowledge of AI, whereas digital familiarity reflects perceived ability and comfort in using technology during service encounters.
These contextual perceptions shape behavioral outcomes. In this study, acceptance is operationalized through two distinct behavioral and financial outcomes: whether AI-enabled services influence hotel choice and whether guests are willing to pay more for such services. These measures align with the hospitality literature, which distinguishes between attitudinal interest and financial readiness (Prentice et al., 2020). Financial acceptance represents a higher-threshold decision and can therefore be expected to respond more strongly to perceived value cues (benefits/features) and more sensitively to perceived risks (privacy).
The privacy calculus framework predicts that perceived benefits increase both outcomes, whereas privacy concerns suppress them, particularly willingness to pay (Culnan & Armstrong, 1999; Morosan & DeFranco, 2015). Similarly, cultural–linguistic fit enhances both behavioral acceptance and perceived value, contributing to guests’ readiness to support AI-integrated experiences (Ren et al., 2025). Together, these perspectives suggest that acceptance reflects a balance between perceived value and perceived risk, conditioned by whether AI-mediated interaction feels culturally appropriate and easy to navigate.
These insights emphasize that AI acceptance is not solely a matter of technical performance but depends on cultural resonance, ethical confidence, and perceived value relative to cost.

2.4. Conceptual Framework, Research Questions, and Hypotheses

Building on the preceding domains, the conceptual framework proposes that guests’ acceptance of AI-enabled hospitality services is shaped by (i) utilitarian–experiential drivers (awareness and prior experience), (ii) perceived value (benefits and desired features), (iii) ethical confidence and perceived risk (trust and privacy concerns), and (iv) relational and contextual evaluations (loss of personal touch, importance of human interaction, digital familiarity, and cultural–linguistic fit). Acceptance is captured through two ordered outcomes: whether AI-enabled services influence hotel choice and whether guests are willing to pay more, reflecting behavioral endorsement and financial readiness.
Based on this framework, the study addresses five research questions. RQ1 examines whether experiential familiarity and awareness increase acceptance. RQ2 examines whether trust and privacy concerns shape acceptance, and whether privacy concerns are particularly consequential for willingness to pay. RQ3 examines how perceived value (benefits and desired features) influences both behavioral and financial acceptance. RQ4 examines whether interpersonal expectations and contextual factors (importance of human interaction, digital familiarity, and cultural–linguistic fit) influence acceptance. RQ5 examines whether trust moderates the relationship between perceived loss of personal touch and acceptance.
Experiential and awareness-related determinants: Guests with prior smart/AI hotel experience are expected to exhibit higher acceptance of AI-enabled hospitality services (H1). Greater awareness of smart and AI-enabled hospitality technologies is expected to increase acceptance, particularly willingness to pay more (H2).
Trust, privacy, and ethical considerations: Higher trust in AI and responsible data handling is expected to increase acceptance (H3). Privacy concerns are expected to reduce acceptance (H4) and to suppress willingness to pay more strongly than general choice influence, consistent with the idea that financial commitment heightens attention to perceived risk (H6).
Perceived value and feature interest: Guests who perceive more benefits and desire more AI features are expected to show higher acceptance and greater willingness to pay (H5).
Interpersonal expectations and contextual alignment: Lower digital familiarity is expected to reduce acceptance by increasing anticipated effort and lowering perceived controllability during AI-mediated service use (H7). The importance placed on human interaction may relate to acceptance in an ambiguous direction (H8), reflecting the possibility that AI may be viewed as either a substitute for or a complement to service warmth. Greater perceived cultural–linguistic fit is expected to enhance acceptance by reducing interaction friction and increasing comfort (H9).
Interaction mechanism: Finally, trust in AI is expected to weaken the negative effect of perceived loss of personal touch on acceptance (H10), such that guests with higher trust are less likely to interpret reduced interpersonal warmth as a reduction in overall service quality.

3. Materials and Methods

Methodological roadmap: The study proceeded in four steps: (1) instrument development and construct operationalization (Section 3.2 and Section 3.3; full items and coding in Table A1, Appendix A); (2) intercept survey data collection in Tirana (Section 3.4); (3) scripted data preparation and variable construction with fully documented recoding rules (Section 3.5; Code S1, Supplementary Materials); and (4) ordinal-regression modelling with assumption checks, robustness analyses, and targeted extensions, with detailed outputs reported in the Supplementary Tables (Section 3.6).

3.1. Study Design and Context

This study investigates hotel guests’ acceptance of smart and AI-enabled technologies in accommodation settings. A cross-sectional quantitative survey design was employed, consistent with methodological approaches in hospitality-technology and AI-acceptance research that emphasize structured behavioral-intention modelling (Chiu & Chen, 2025; Ozturk et al., 2023; Ren et al., 2025; Soliman et al., 2025). The conceptual framework integrates multiple theoretical streams. First, technology-acceptance perspectives from TAM (Davis, 1989) and UTAUT/UTAUT2 (Venkatesh et al., 2012) inform the utilitarian foundations of the instrument. Second, human–social dimensions of technology-mediated service encounters (trust, privacy, perceived loss of human touch, and preferences for interpersonal interaction) draw on empirical research in service automation and AI-enabled hospitality contexts (Wirtz et al., 2018; S. Kim et al., 2021; Lin & Mattila, 2021). Third, contextual and cultural value considerations, including cultural–linguistic fit and support for human–AI collaboration, reflect emerging literature on service-ecosystem adaptation (Holmqvist et al., 2014, 2017; Ivanov et al., 2022).
Within this integrated framework, the study investigates two ordered behavioral outcomes: whether smart or AI-enabled features influence hotel choice and whether guests are willing to pay more for such services. Both outcomes were measured as three-category ordinal variables and analyzed using cumulative link models (CLMs) and, where necessary, partial proportional-odds models (PPOMs), which are appropriate for ordinal data and enable direct modelling of category transitions (Agresti, 2010; Christensen, 2023; Peterson & Harrell, 1990).
The empirical setting is Tirana, the capital of Albania, a rapidly expanding tourism hub in the Western Balkans, where the adoption of smart and AI-enabled systems in accommodation remains emergent. Skanderbeg Square, the city’s central plaza, was selected due to its heterogeneous, high-footfall mix of domestic and international visitors, providing access to diverse respondents rather than a statistically representative population. Accordingly, inference is framed as evidence from a heterogeneous visitor pool (rather than population estimation), and the non-probability nature of the sampling design is acknowledged when interpreting external validity.

3.2. Instrument Development and Constructs

The survey instrument, Guest Acceptance of Smart and AI Technologies in Hospitality, was developed following an extensive review of contemporary hospitality-technology research and empirical studies on AI-enabled service encounters, robotics, and digital guest experiences. TAM and UTAUT/UTAUT2 informed the utilitarian determinants. Research on hedonic motivation, trust, privacy, ethics, and anthropomorphism guided the human–social domain (Wirtz et al., 2018; Lin & Mattila, 2021; S. Kim et al., 2021). Cultural adaptation and human–AI collaboration items were designed in line with service-ecosystem and cultural-fit literature (Holmqvist et al., 2017; Ivanov & Webster, 2019a).
Items from validated scales were adapted where applicable and examples of adapted constructs (e.g., trust, privacy, human-interaction importance) are documented in Table A1 to ensure transparency. For constructs lacking validated measures, particularly cultural fit and support for AI–staff collaboration, items were developed following best-practice guidelines for clarity and non-leading wording (Dillman et al., 2015). Content and face validity were strengthened via expert review (two hospitality-technology academics) and a small pilot test (N = 20), which confirmed comprehension and resulted in minor refinements to wording and response consistency.
To keep the instrument coherent across multiple domains, item wording was consolidated through a structured screening-and-refinement process: (i) create an initial pool aligned with the conceptual domains, (ii) remove duplicates/near-duplicates, (iii) prioritize items with clear behavioral relevance to hotel service encounters, and (iv) standardize response formats to support ordinal modelling. The final questionnaire contained four conceptual blocks: (a) awareness/experience/comfort, (b) perceived benefits and desired features, (c) ethical–human–trust evaluations, and (d) behavioral outcomes, and the complete item list, variable names, and response formats are reported in Table A1 (Appendix A).

3.3. Measurement Model and Reliability Considerations

The survey instrument comprised utilitarian, experiential, and ethical constructs measured using single-item evaluations, Likert-type items, and multi-response checklists. The study employed ordinal and logistic regression models rather than a latent-variable SEM framework; consequently, constructs were operationalized through theoretically appropriate single-item or formative indicators. Item specifications and coding rules are detailed in Table A1 and in the accompanying reproducible R script (Code S1, Supplementary Materials).
Several constructs, including trust in AI, cultural-linguistic fit, privacy concern, and perceived loss of personal touch, were intentionally designed as single, conceptually narrow items. Psychometric research supports the use of single-item measures when the underlying construct is narrow, unambiguous, and readily understood by respondents (Bergkvist & Rossiter, 2007; Fuchs & Diamantopoulos, 2009). Because these constructs were measured with single items, internal-consistency reliability indices (e.g., Cronbach’s α or McDonald’s ω) and related multi-indicator metrics (e.g., composite reliability) were not applicable and were therefore not reported (DeVellis, 2017; Tavakol & Dennick, 2011; Fuchs & Diamantopoulos, 2009).
To strengthen interpretability despite single-item measurement, each single-item construct was defined narrowly (e.g., data-handling trust, perceived loss of personal touch) and analyzed within a multi-predictor framework that includes experiential and value controls, reducing the risk that any one item proxies general “tech optimism/pessimism.” In addition, key findings were checked with top-box logistic models to verify that conclusions were not artefacts of the ordinal threshold structure.
Perceived benefits and desired features were collected through multi-response selection lists. These were treated analytically as formative “breadth indicators,” where each selected option contributes distinct information about perceived value, and omissions do not constitute measurement error.

3.4. Sample and Data Collection

Data were collected between May and October 2025 using an intercept survey administered face-to-face by trained undergraduate students from the University of Tirana. Respondents were screened to ensure that they had recently stayed in a hotel or planned to do so during their current visit. Participation was voluntary and anonymous, and all respondents provided informed consent.
Survey administration was conducted digitally via Google Forms, accessed through QR codes or tablets provided by the student survey team. This ensured immediate electronic capture, minimized transcription errors, and allowed real-time monitoring for data quality. A total of 689 complete responses were collected. After excluding cases with missing dependent-variable values, analytic subsamples consisted of N_infl = 687 for the influence-on-choice models, N_wtp = 687 for willingness-to-pay models, and N_both = 686 for joint analyses.
The voluntary, intercept-based data collection conducted in a public urban space introduces potential self-selection and time-of-day or day-of-week sampling biases. Because no weighting adjustments were feasible, these limitations are explicitly acknowledged in the Discussion.
All procedures complied with the ethical standards of the University of Tirana and adhered to principles of anonymity, voluntary participation, confidentiality, and the right to withdraw at any time.

3.5. Data Preparation and Coding

Data preparation followed standard analytical procedures and was conducted using a fully scripted workflow to ensure transparency and reproducibility. Raw responses were screened for completeness and inconsistencies, and missing patterns were examined descriptively. Categorical variables were harmonized across all items. Tri-level evaluative items such as privacy concerns, perceived loss of personal touch, awareness of smart technologies, and prior AI experience were recoded into ordered 0–2 formats to preserve ordinal distinctions.
Likert-type items (1–5) capturing comfort with AI, trust in AI, human-interaction importance, cultural-linguistic fit, and support for AI–staff collaboration were treated as approximately interval-scaled. This practice is supported by methodological research showing that parametric analyses applied to Likert-type measures are robust in samples of this size (Norman, 2010; Harpe, 2015; Sullivan & Artino, 2013). The use of numeric codings for these items does not affect the ordinal-logit modeling of dependent variables, which remain strictly ordinal.
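The study's actual recoding is implemented in R (Code S1). Purely as an illustration of the tri-level recoding described above, a minimal Python/pandas sketch might look as follows; the response labels are hypothetical placeholders, not the wording documented in Table A1:

```python
import pandas as pd

# Hypothetical raw responses; the actual item wording and labels are in Table A1 / Code S1.
df = pd.DataFrame({
    "privacy_concern": ["Not concerned", "Somewhat concerned",
                        "Very concerned", "Somewhat concerned"],
    "trust_ai": [4, 2, 5, 3],  # 1-5 Likert item, treated as approximately interval
})

# Tri-level evaluative item recoded into an ordered 0-2 format,
# preserving ordinal distinctions (Section 3.5).
privacy_map = {"Not concerned": 0, "Somewhat concerned": 1, "Very concerned": 2}
df["privacy3"] = df["privacy_concern"].map(privacy_map).astype("Int64")

print(df["privacy3"].tolist())  # [0, 1, 2, 1]
```

An explicit mapping table of this kind keeps the 0–2 ordering auditable, which is the point of the scripted workflow described above.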
Multiple-response questions assessing perceived benefits and desired smart or AI features were converted into binary indicators for each selected option and aggregated into two count variables representing the breadth of perceived value (n_benefits and n_features). These indicators reflect the number of selections made rather than a reflective latent construct.
The two primary behavioral outcomes were recoded as ordered factors: infl_choice3 (No/Unsure/Yes) reflecting whether smart or AI technologies influence hotel choice, and wtp3 (No/Depends/Yes) capturing willingness to pay more for AI-enabled services. In addition, binary “top-box” indicators (infl_yes, wtp_yes) were created for robustness analyses focusing exclusively on unequivocal acceptance, with acknowledgment that dichotomization reduces information but enhances interpretability in robustness checks.
For readability, detailed recoding rules (including category harmonization and derived-variable construction) are fully documented in Code S1, while summary descriptive outputs (distributions, missingness patterns, correlations) are reported in the Appendix A/Supplementary Tables.
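The actual recoding rules are documented in Code S1; the following minimal base-R sketch, with hypothetical column names, illustrates the three coding operations described above (ordered 0–2 recoding, ordered-factor outcomes with top-box indicators, and breadth-of-value counts):

```r
# Illustrative recoding sketch; the column names (privacy_raw, wtp_raw,
# benefit_*) are hypothetical -- the actual rules are in Code S1.
df <- data.frame(
  privacy_raw = c("No concerns", "Some concerns", "Major concerns"),
  wtp_raw     = c("No", "Depends", "Yes"),
  benefit_checkin  = c(1, 0, 1),   # binary indicators derived from a
  benefit_language = c(0, 1, 1),   # multiple-response question
  benefit_energy   = c(1, 1, 1)
)

# Tri-level evaluative item -> ordered 0-2 coding
df$privacy_concern <- ordered(df$privacy_raw,
  levels = c("No concerns", "Some concerns", "Major concerns"))
df$privacy_num <- as.integer(df$privacy_concern) - 1L   # 0, 1, 2

# Ordinal outcome kept as an ordered factor for cumulative link models
df$wtp3 <- ordered(df$wtp_raw, levels = c("No", "Depends", "Yes"))

# "Top-box" binary indicator for robustness checks
df$wtp_yes <- as.integer(df$wtp3 == "Yes")

# Breadth-of-value count: number of benefit options selected
df$n_benefits <- rowSums(df[, c("benefit_checkin", "benefit_language",
                                "benefit_energy")])
```

Keeping the outcomes as ordered factors (rather than numeric codes) is what allows the cumulative link models in Section 3.6 to treat them as strictly ordinal.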

3.6. Statistical Modelling Strategy

The modelling strategy followed a sequential structure aligned with the conceptual organization of the questionnaire. Both behavioral outcomes, whether smart and AI technologies influence hotel choice and whether guests are willing to pay more, were measured using three ordered response categories. Cumulative link models (CLMs) with a logit link (i.e., proportional-odds models) were therefore used as the primary analytical framework, as they are well-suited to ordinal outcomes and estimate cumulative odds across ordered thresholds (Agresti, 2010; Christensen, 2023; McCullagh, 1980).
The first stage estimated parsimonious baseline models (A1, A2) incorporating core determinants of technology acceptance, including awareness of smart technologies, prior AI-related experience, comfort with AI, perceived value (captured through the breadth of selected benefits and features), and demographic factors. Ethical and privacy-related constructs (privacy concerns and perceived reduction in personal touch) were added in the second stage (B1, B2) to assess whether reservations related to surveillance, data protection, or diminished human warmth attenuate acceptance independently of utilitarian evaluations. These factors are well established as inhibitors of service-technology adoption (Wirtz et al., 2018; McLeay et al., 2021; Jia et al., 2024; Lv et al., 2025).
Full attitudinal models (C1, C2) incorporated broader human–social and cultural factors: trust in AI to manage personal data, the importance placed on human interaction during hotel stays, cultural–linguistic fit, and support for human–AI collaboration. Including these constructs enabled a comprehensive assessment of whether cultural alignment, interpersonal expectations, and ethical considerations shape acceptance of AI-enabled hospitality services.
For each CLM, the proportional-odds assumption was evaluated using nominal tests. When predictors violated the proportional-odds (parallel-slopes) assumption, partial proportional-odds models were fitted within the VGAM vector generalized linear modeling framework (Peterson & Harrell, 1990; Yee, 2010). Nominal-test outputs and PPOM estimates are reported in Supplementary Materials.
Robustness checks were conducted using binary logistic regressions for the top-box outcomes (infl_yes, wtp_yes), which isolate respondents expressing unequivocal acceptance. Model adequacy was evaluated using the Akaike information criterion (AIC) and McFadden’s pseudo-R2 (Akaike, 1974; McFadden, 1974). Multicollinearity was assessed using variance inflation factors (VIFs) (O’brien, 2007; Fox & Monette, 1992; Liu & Zhang, 2018; Greenwell et al., 2018). VIF outputs and top-box coefficient tables are reported in Supplementary Materials.
To keep the main text concise, ordinal-model residual diagnostics (surrogate residuals) and influence checks are documented in Code S1, with supporting summaries in the Supplementary Materials. Nonlinearities were examined by including centered quadratic terms for n_benefits and n_features (models D1, D2). Interaction models (E1, E2) tested whether trust moderated the effect of perceived loss of personal touch, with predicted probabilities and 95% confidence intervals computed to aid interpretation. Model-fit summaries and full outputs for nonlinear and interaction specifications are reported in Supplementary Materials.
Finally, open-ended recommendations were analyzed using a lightweight text-mining procedure. Responses were tokenized, stop words were removed, and unigram and bigram frequencies were computed to identify salient themes that complement the quantitative findings. Frequency tables are reported in the Supplementary Materials.

3.7. Software, Transparency, and Reproducibility

All analyses were conducted in R (R Core Team, 2024) using widely adopted and well-documented packages for data import, preprocessing, ordinal modelling, diagnostics, visualization, and text processing. Data were imported with readxl and prepared using the tidyverse ecosystem (e.g., dplyr, tidyr, stringr, forcats, tibble) (Wickham et al., 2019). Ordinal outcomes were analyzed using cumulative link models implemented in ordinal (Christensen, 2023) and, where proportional-odds violations required relaxation, partial proportional-odds models were estimated using VGAM (Yee, 2010). Diagnostic procedures, including variance-inflation-factor checks, drew on the car package. Visualizations were produced with ggplot2 (Wickham, 2016). Open-ended recommendations were processed using tidy text-mining workflows with tidytext (Silge & Robinson, 2016).
To ensure transparency and reproducibility, the complete data-cleaning and modelling workflow, including all recoding rules, derived-variable construction, model specifications, diagnostics, robustness checks, and exported outputs, was documented in a fully reproducible R script provided in Code S1, Supplementary Materials.
Generative AI tools were used exclusively for language refinement and organizational editing of the manuscript; no generative AI systems were used for data handling, model estimation, statistical analysis, or interpretation (Porsdam Mann et al., 2024).

4. Results

4.1. Sample Characteristics and Outcome Prevalence

The final dataset comprised 689 respondents, with 687 non-missing observations for each of the two main outcomes (“Smart technologies and AI influence hotel choice” and “willingness to pay more”). The age structure was skewed toward younger adults: 35.1% were 18–24 years, 20.5% were 25–34, and 17.3% were 35–44, while 12.5% were 45–54, 10.9% were 55+, and 3.8% were under 18 (Table 1). All reported proportions are unweighted.
Women represented 55.4% of the sample, men 43.8%, and a small fraction reported “other/missing” (Table 2). Hotel-stay frequency was generally moderate, with about four in five respondents reporting 1–5 hotel stays per year and roughly one-fifth staying six or more times (Table 3). Hotel-stay frequency refers to all hotel stays, domestic or international.
Outcome distributions showed that most respondents were cautious or uncertain about smart technologies and AI in hospitality. For hotel choice, 17.9% stated that smart technologies and AI would not influence their choice, 53.7% were unsure, and 28.4% indicated that smart technologies and AI would influence their choice (Table 4). For willingness to pay more, 26.2% responded “No,” 51.4% “Depends,” and 22.4% “Yes” (Table 5).
Cross-tabulations by gender and age indicate broadly similar patterns across groups, with some tendencies for older respondents and women to be less willing to pay more, but these differences are modest at the descriptive level (Table A2, Table A3, Table A4 and Table A5, Appendix A). Missing data were minimal across all variables (Table S1, Supplementary Materials). Model-specific complete-case sample sizes ranged from N = 677 to N = 682 relative to the full analytic sample of N = 689, and are summarized in Table S2 (Supplementary Materials).

4.2. Descriptive Patterns in Key Constructs

Descriptive statistics for the main numeric constructs are reported in Table 6. On average, respondents identified 2.79 smart and AI-related benefits (SD = 1.39, range 1–7) and 6.21 desired technological features (SD = 3.38, range 1–16). Frequency distributions for these two counts (Table A6 and Table A7, Appendix A) show that most guests see multiple benefits rather than a single isolated advantage, and that many desire relatively rich feature sets.
Comfort with AI-enabled hotel services was moderate, with a mean of 3.35 on a 1–5 scale (SD = 1.05). Awareness of smart technologies in hospitality was relatively high (mean 1.65 on a 0–2 scale), while prior experience with smart and AI hotels was more limited but non-trivial (mean 1.18 on a 0–2 scale).
Ethical and human-centered attitudes showed substantial variation. Privacy concerns were roughly evenly distributed across the three levels (0, 1, 2), and more than half of respondents felt that AI makes hotel services “somewhat” or “much” less personal (Table A8 and Table A9, Appendix A). In contrast, trust in AI for handling personal data was centered around the mid to high end of the scale (mean 3.15, SD 0.98), with a clear majority in the “3–4” range and a smaller group at the extremes (Table A10, Appendix A).
Cultural and linguistic fit and support for staff–AI training were evaluated very positively: approximately 70% selected the two highest categories for both “AI fits local language and culture” and “Hotels should train staff to collaborate with AI” (Table A11 and Table A12, Appendix A). These results indicate generally favorable evaluations of cultural alignment and hybrid service delivery, even as concerns about privacy and depersonalization remain salient for a sizeable subgroup.
These patterns are summarized graphically in Figure 1, which displays distributions for key ethical and attitudinal indicators. Percentages are shown as full-sample response distributions rather than inferential estimates.
Overall, the descriptive results suggest a nuanced profile: respondents recognize several potential benefits and are interested in a variety of smart features, but they also express non-negligible privacy concerns and a strong desire to preserve human interaction, while simultaneously supporting AI as a tool that complements and augments staff.

4.3. Correlations and Collinearity Diagnostics

Pearson correlations among the numeric predictors (Table A13, Appendix A) show that the strongest association occurs between the number of perceived benefits and the number of desired features (r = 0.51), indicating that these variables capture a shared underlying tendency toward valuing smart and AI-enabled hotel services. All remaining relationships among predictors are small to modest in magnitude, with most absolute correlations below 0.25 and only a limited number in the 0.22–0.30 range. Comfort with AI shows modest associations with the number of desired features (r = 0.22), prior experience with AI-enabled hotels (r = 0.22), and trust in AI (r = 0.23). Awareness of smart technologies correlates moderately with prior AI experience (r = 0.30), reflecting expected experiential links. Privacy concerns exhibit a small positive association with perceiving AI as making hotel services less personal (r = 0.18) and a modest negative association with trust in AI (r = −0.25). Taken together, the correlation structure suggests that “value/experience” indicators and “risk/ethics” indicators are related but distinct dimensions in this sample.
Extending the analysis to include the ordinal outcomes (Table S3, Supplementary Materials) shows that both behavioral variables exhibit small-to-moderate associations with theoretically relevant attitudinal predictors. For the “influence on hotel choice” outcome, the strongest correlations appear with trust in AI (r = 0.19), prior AI hotel experience (r = 0.18), and awareness of smart technologies (r = 0.13). For willingness to pay more, awareness (r = 0.25), prior AI experience (r = 0.30), and trust (r = 0.26) are again the most notable correlates. Associations with demographic variables (age, gender) were weak. These descriptive patterns align with the modelling strategy that prioritizes experiential and attitudinal predictors over demographics as primary explanatory variables.
To assess the potential for multicollinearity to bias model estimation, variance inflation factors (VIFs) were computed for predictor sets in the binary logistic robustness models (Tables S4 and S5, Supplementary Materials). Observed VIFs (1.0–1.6) indicate negligible multicollinearity, supporting coefficient stability in both ordinal and logistic specifications.

4.4. Ordinal Models for AI Influence on Hotel Choice

Table 7 presents the results from three cumulative link models (CLMs) estimating the ordinal outcome “AI influences hotel choice” (“No,” “Unsure,” “Yes”). Model A1 includes core technology-relevant constructs: perceived benefits, desired smart-feature counts, comfort with AI, awareness of smart technologies, and prior AI or smart hotel experience, together with demographic controls (age, gender, and hotel-stay frequency). Model B1 extends this baseline by adding two ethical concerns: privacy and perceived loss of personal touch. Model C1 further incorporates attitudinal predictors: trust in AI, importance of human interaction, perceived cultural and linguistic fit, and support for AI–staff collaboration.
Model performance improves across specifications (Table 7). The full coefficient and odds-ratio tables for all CLM, logistic, nonlinear, and PPOM models are provided in Supplementary Materials (Table S6). Reference categories for all analyses are: age = 18–24, gender = female, and privacy concern = 0 (no concerns reported). The full coefficient estimates for Model C1 are reported in Tables S7 and S8 (Supplementary Materials).
In Model C1, prior stays in AI-enabled hotels and trust in AI emerge as the most consistent positive predictors of stronger reported AI influence on hotel choice (Tables S7 and S8). A pronounced negative age gradient is also evident, with respondents aged 55+ reporting lower AI-driven influence than younger guests (Table 1). Other predictors show weaker and less consistent evidence in this outcome.
Several variables, including perceived benefits, desired features, comfort with AI, privacy concerns, perceived loss of personal touch, and cultural expectations, do not reach conventional significance thresholds in Model C1, although their directions are broadly consistent with the descriptive frequency distributions of these attitudes (Table A2, Table A3, Table A8, Table A9, Table A10, Table A11 and Table A12). Notably, human-interaction importance and cultural–linguistic fit are directionally positive but comparatively weak for this outcome and are not consistently supported across specifications; they are therefore interpreted as suggestive rather than robust predictors of AI-driven hotel choice.
This pattern is visualized in Figure A1 (Appendix A), where marginal effects were computed using the ggeffects package and show that higher trust in AI mitigates the dampening effect associated with perceiving AI as making hotel services less personal.
Proportional-odds diagnostics (nominal_test() from the ordinal package) are reported in Table S9 (Supplementary Materials) and identify privacy concerns (and more weakly n_features) as predictors whose effects vary across ordinal thresholds. These diagnostics align with the descriptive cross-tabulations: respondents with higher privacy concerns are markedly less likely to choose “Yes” (Table A14 and Table A15, Appendix A). Binary logistic robustness models (Tables S10 and S11, Supplementary Materials) reinforce this pattern, showing that privacy concerns are most clearly associated with reduced likelihood of unequivocal endorsement (“Yes”), particularly for willingness to pay, and more weakly for hotel-choice influence.
To account for these threshold-specific effects, a partial proportional-odds model (PPOM F1) was estimated for the influence outcome, relaxing the parallel-slopes assumption for the predictors flagged by the nominal tests (privacy concerns and n_features). The PPOM results (Tables S12 and S13, Supplementary Materials) indicate that privacy concerns operate primarily as a threshold-based inhibitor, exerting their strongest negative association with the highest response category (“Yes”) and weaker associations with the transition from “No” to “Unsure.” Accordingly, privacy concerns mainly suppress strong endorsement rather than shifting responses uniformly across the response scale.
Robustness checks using binary logistic regressions confirm these conclusions. Dichotomizing responses (“Yes” vs. “Not yes”) yields pseudo-R2 values of 0.096 for influence and 0.154 for willingness to pay (Table A16, Appendix A), closely tracking the ordinal results. Trust in AI and prior smart/AI-hotel experience remain the strongest predictors, while privacy concerns again show a negative association, highly significant for willingness to pay and weaker for influence, consistent with the CLM and PPOM patterns.
Multicollinearity diagnostics support model stability: all variance-inflation factors fall between 1.05 and 1.60 (Tables S4 and S5, Supplementary Materials), consistent with the moderate correlations in the numeric predictor matrix (Table A13, Appendix A) and the extended matrix including outcomes (Table S3, Supplementary Materials).
Taken together, results from the CLM, PPOM, and logistic analyses converge on a coherent conclusion: trust in AI, prior smart/AI-hotel experience, and age differences are the most reliable determinants of whether AI influences hotel choice. Ethical concerns, especially privacy, exert selective, threshold-based effects, primarily reducing strong acceptance rather than shifting moderate or uncertain responses.

4.5. Ordinal Models for Willingness to Pay More

Parallel to the analysis of AI influence on hotel choice, three cumulative link models (CLMs) were estimated for the ordinal outcome willingness to pay more (“No,” “Depends,” “Yes”). The baseline Model A2 includes core technology-acceptance predictors; Model B2 adds privacy and personal-touch concerns; and Model C2 incorporates the full attitudinal block, including trust in AI, human-interaction importance, cultural expectations, and views on AI–staff collaboration.
Model performance improves steadily across specifications (Table 7). Pseudo-R2 increases from 0.089 in the baseline model (A2) to 0.099 in the extended model (B2) and 0.125 in the attitudinal model (C2), while AIC declines from 1321.4 to 1309.6 and 1282.8. Relative to the influence-on-choice models, willingness-to-pay shows stronger model fit, consistent with the clearer “Yes” gradient observed in the outcome distributions (Table 5) and related descriptive breakdowns (Table A3).
The full estimates for Model C2 are reported in Table S8 (Supplementary Materials) and reveal several statistically and substantively meaningful predictors. Higher awareness of smart and AI technologies is associated with a greater willingness to pay more. Prior stays in smart and AI-enabled hotels also show a robust positive association, indicating that direct experience increases the perceived value of smart and AI-supported services. Respondents who desire a broader set of smart features exhibit modestly higher willingness to pay, consistent with descriptive patterns (Table A3). Higher trust in AI is consistently associated with movement toward a higher willingness-to-pay category. In contrast, privacy concerns are negatively associated with willingness to pay, and this effect is statistically significant (p ≈ 0.009). This pattern is also visible in the descriptive cross-tabulations (Table A3), where the share selecting “Yes” is notably lower among respondents with moderate or high privacy concerns than among those with low concerns.
Two attitudinal variables, the importance of human interaction and the perceived cultural fit of AI, show directionally positive but borderline effects. These coefficients suggest that guests who place a higher value on interpersonal service and perceive AI as culturally aligned may be somewhat more open to paying a premium; however, the evidence is not uniformly strong across specifications and is sensitive to threshold-specific modelling, with clearer separation at the top category (“Yes”) in the PPOM (Appendix A, Table A17). Gender differences also emerge: men report greater willingness to pay than women (p ≈ 0.038), whereas gender differences were less pronounced for influence on hotel choice.
Diagnostic tests indicate that several predictors violate the proportional-odds assumption. The nominal-effects tests (using nominal_test() from the ordinal package) in Table S9 (Supplementary Materials) show significant violations for privacy concerns, desired feature counts, and the two borderline attitudinal variables (human-interaction importance and cultural fit). Accordingly, these predictors may distinguish “Yes” from “No/Depends” more strongly than they distinguish “No” from “Depends.” These patterns are consistent with the descriptive crosstabs for privacy concerns (Table A8, Appendix A) and with the broader correlation structure including outcomes (Table S3, Supplementary Materials).
To account for these violations, a partial proportional-odds model (PPOM F2) was estimated, allowing privacy concerns, feature counts, human-interaction importance, and cultural fit to vary across thresholds. The PPOM results, reported in Appendix A, Table A17, reproduce the core findings of the CLM: privacy concerns continue to exert a strong negative effect, and trust and prior experience remain positive predictors of willingness to pay. Threshold-specific slopes indicate that the clearest separations typically occur at the top category (“Yes”), consistent with the marginal-effects patterns (Figure A2, Appendix A).
Binary logistic regressions (wtp_yes vs. all other responses) provide an additional robustness check. As shown in Appendix A, Table A16, pseudo-R2 reaches 0.154, while trust in AI, prior experience, and privacy concerns again emerge as significant predictors. These results closely mirror those from the CLM and PPOM models, supporting the stability of the main effects.
Finally, multicollinearity diagnostics (VIF values in Supplementary Tables S4 and S5) remain low, indicating that the predictor set does not pose a threat to model stability. This aligns with the moderate correlations observed in the numeric matrix (Table A13, Appendix A) and the extended correlation structure including outcomes (Table S3, Supplementary Materials).
Overall, the willingness-to-pay models show a coherent pattern across specifications: awareness of smart technologies, prior smart/AI-hotel experience, trust in AI, and a broader desired-feature set reliably increase willingness to pay a premium, while privacy concerns reduce it. Human-interaction importance and cultural–linguistic fit appear directionally supportive but comparatively weak and threshold-sensitive, and are therefore interpreted as suggestive rather than consistently strong determinants of willingness to pay. These results are consistent across the CLM, PPOM, and binary logistic frameworks and are summarized in the consolidated odds-ratio evidence reported in Table S6 (Supplementary Materials).

4.6. Nonlinearities and Interaction Effects

To assess whether the effects of perceived value exhibit nonlinear patterns, Models D1 and D2 extended the attitudinal specifications by including mean-centered quadratic terms for the number of perceived benefits and features (mean-centering was used to maintain interpretability and reduce collinearity). Across both outcomes, these nonlinear components were small in magnitude and not consistently statistically significant (Tables S14–S16, Supplementary Materials). For willingness to pay, the squared term for n_benefits showed a borderline effect (p ≈ 0.07), suggesting a mildly convex association in which incremental perceived benefits may exert slightly stronger effects among respondents who already report more benefits. Nevertheless, effect sizes remained modest, and improvements in model fit relative to the corresponding linear models were limited (ΔAIC < 4; pseudo-R2 increasing only from 0.073 to 0.075 for influence and from 0.125 to 0.130 for willingness to pay). Given these minimal gains, interpretation focuses on the linear specifications for parsimony.
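The construction of these centered quadratic terms can be sketched as follows on simulated data (the variable here is a stand-in for the study's n_benefits count):

```r
# Sketch of the mean-centering step for the quadratic terms (D1, D2);
# n_benefits is simulated here, not the study data.
set.seed(2)
n_benefits <- sample(1:7, 300, replace = TRUE)

n_benefits_c  <- n_benefits - mean(n_benefits)   # centered linear term
n_benefits_c2 <- n_benefits_c^2                  # centered quadratic term

# Centering sharply reduces the linear-quadratic correlation that raw
# polynomial terms would exhibit, aiding interpretability and stability
cor_raw      <- cor(n_benefits, n_benefits^2)
cor_centered <- cor(n_benefits_c, n_benefits_c2)
```

For a strictly positive count variable, the raw linear and squared terms are almost perfectly correlated, whereas the centered versions are nearly uncorrelated when the distribution is roughly symmetric, which is the collinearity-reduction rationale stated above.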
Interaction Models E1 and E2 evaluated whether trust in AI moderates the relationship between perceived loss of personal touch and the two behavioral outcomes. Predictors were centered prior to model estimation to reduce multicollinearity. For influence on hotel choice, the interaction between less_personal_num and trust_ai_num was statistically significant (estimate ≈ −0.25, p ≈ 0.015), resulting in a modest improvement in predictive performance (pseudo-R2 = 0.078 vs. 0.073; Table S19, Supplementary Materials). Predicted probabilities from Model E1 (Figure A1, Appendix A), estimated using ggeffects::ggpredict, indicate that trust changes how “loss of personal touch” relates to endorsement. Among respondents with low trust in AI, higher perceived loss of personal touch corresponds to higher probabilities of reporting “Yes” (AI influences hotel choice), whereas among high-trust respondents, higher perceived loss of personal touch corresponds to slightly lower “Yes” probabilities. Overall, the interaction implies that trust attenuates the practical relevance of depersonalization concerns in hotel-choice influence, consistent with a buffering role.
For willingness to pay, the interaction term was not statistically significant (p ≈ 0.63), and changes in model fit relative to the attitudinal model were negligible (Model E2 vs. C2; Tables S17 and S18, Supplementary Materials). Predicted probabilities (Figure A2, Appendix A) indicate that, once overall trust and privacy concerns are accounted for, willingness to pay is driven predominantly by these broader attitudinal factors rather than by the interaction between perceived personal-touch loss and trust.

4.7. Binary Robustness Checks

To assess the robustness of the ordinal findings and to isolate respondents expressing unequivocal acceptance, additional binary logistic regressions were estimated for both outcomes, coded as “Yes” versus all other responses. Because dichotomization reduces information, these models are treated as confirmatory robustness checks rather than primary evidence. Results are summarized in Appendix A, Table A16, and complete odds-ratio estimates with 95% confidence intervals are reported in Table S6 (Supplementary Materials).
The model predicting whether smart technologies and AI influence hotel choice achieved an AIC of 784.7 with a McFadden pseudo-R2 of 0.096, while the willingness-to-pay model yielded an AIC of 662.7 and a pseudo-R2 of 0.154 (Appendix A, Table A16). These fit indices are broadly consistent with the ordinal models, indicating stable explanatory performance across modelling approaches.
For the outcome reflecting whether smart technologies and AI influence hotel choice, the binary specification reproduces the strongest effects identified in the ordinal C1 model. Prior experience with smart or AI-enabled hotels shows a clear positive association (e.g., OR ≈ 1.53, 95% CI ≈ 1.18–1.98; Table S6, Supplementary Materials), indicating that respondents with direct experience are more likely to answer “Yes.” Trust in AI likewise displays a robust positive effect (OR ≈ 1.41, 95% CI ≈ 1.22–1.65), again emerging as one of the most influential predictors. Hotel-stay frequency also contributes positively to the likelihood of endorsement, whereas respondents aged 55+ exhibit significantly lower odds (OR ≈ 0.61, 95% CI ≈ 0.42–0.88), confirming the age gradient observed in the ordinal models. Ethical and privacy-related constructs, particularly privacy concerns and perceptions of reduced personal touch, do not achieve statistical significance in the binary model, consistent with the more threshold-specific and comparatively weaker patterns observed in the ordinal/PPOM specifications.
For willingness to pay more for AI-enhanced services, the binary results similarly reinforce the conclusions from the ordinal analysis. Awareness of smart technologies (OR ≈ 1.36), prior smart/AI hotel experience (OR ≈ 1.46), trust in AI (OR ≈ 1.49), and higher hotel-stay frequency all increase the probability of selecting “Yes,” with effect sizes closely reflecting those in the C2 model (Tables S6 and S11, Supplementary Materials). Privacy concerns exert a strong negative association (OR ≈ 0.68, 95% CI ≈ 0.54–0.86), confirming that privacy-related reservations most strongly constrain financial willingness rather than general interest. The perceived importance of human interaction shows a small and only marginal positive association, suggesting a directionally supportive but not consistently strong role once trust, experience, and privacy are accounted for.
Across both outcomes, the binary models yield odds-ratio patterns highly consistent with the cumulative link and partial proportional-odds models. The direction and magnitude of key effects (trust in AI, prior experience, age differences, and privacy concerns) remain stable across modelling frameworks, supporting the robustness of the core findings. Finally, multicollinearity diagnostics (VIF values in Tables S4 and S5, Supplementary Materials) are within acceptable limits, aligning with the moderate correlations observed in the predictor matrices (Table A13, Appendix A; Table S3, Supplementary Materials).

4.8. Analysis of Open-Ended Recommendations

To complement the quantitative findings, the open-ended responses were analyzed using a lightweight text-mining approach. Responses were preprocessed to support a transparent frequency-based summary of recurring themes: recommendations were tokenized into single words and bigrams, lowercased, and stripped of punctuation and stop-words. A small custom stop-word list was added to reduce non-informative generic terms.
The resulting frequency distributions highlight the dominant themes respondents emphasized when describing expectations for AI-enabled hospitality services. Across all responses, the most frequent individual words (Table S20, Supplementary Materials) include “AI”, “hotels”, “data”, “staff”, and “guests”. The prominence of “data” suggests that privacy and responsible data use are salient concerns in respondents’ own framing of AI-enabled services.
Bigram analysis (Table S21, Supplementary Materials) reveals more structured themes. The most common bigrams, “guest experience,” “human interaction,” “smart technologies,” “human touch,” and “personal data”, map closely onto the study’s core domains, particularly interpersonal service expectations and privacy evaluations. Together, the lexical patterns indicate that respondents view acceptance as a trade-off between convenience and personalization on one hand, and privacy and service warmth on the other.
To support interpretive accuracy, extracted themes were reviewed manually for face validity by the research team. Overall, the open-ended responses reinforce the central conclusion that guests evaluate AI-enabled services through a combined lens of perceived value, privacy confidence, and expectations of human-centered hospitality.

5. Discussion

5.1. Experiential and Awareness Factors

The results provide strong and consistent support for H1: prior experience with smart or AI-enabled hotels is a stable predictor of acceptance across model specifications. This aligns with research emphasizing that hands-on exposure reduces uncertainty and strengthens perceived usefulness in service technologies (Tavitiyaman et al., 2022; Yang et al., 2021; Venkatesh et al., 2003). The positive association between awareness of smart technologies and acceptance, most clearly for willingness to pay, supports H2 and suggests that informational exposure can increase perceived feasibility and perceived value.
These findings are consistent with Rogers’ (2003) Diffusion of Innovations framework. Prior smart-hotel experience reflects trialability, whereby opportunities to try an innovation in low-risk settings reduce uncertainty and can strengthen adoption intentions. Likewise, the role of awareness corresponds to the knowledge stage of the innovation-decision process, in which individuals first become informed about an innovation before forming evaluations and intentions.
Importantly, awareness relates more strongly to willingness to pay than to general acceptance, indicating that informational exposure may matter most when guests consider financial commitment rather than general openness. In emerging destinations such as Albania, where exposure to AI-enabled hospitality remains uneven, these patterns suggest that acceptance can be accelerated by making the technology visible and understandable (e.g., in-hotel demonstrations, guided onboarding, and opt-in trials that clarify what data are used and what benefits guests receive).

5.2. Trust, Privacy, and Ethical Evaluations

The results support H3: trust in AI, especially confidence in responsible data handling, emerges as a central driver of both outcomes. This aligns with service-automation and e-commerce research showing that trust reduces perceived risk and strengthens behavioral intentions toward technology-mediated services (Della Corte et al., 2023; Pavlou, 2003). Because the trust item in this study is explicitly framed around data handling, the effect should be interpreted primarily as “data-governance trust,” rather than as a broader assessment of AI competence or service quality (Mayer et al., 1995). Future work could separate integrity/benevolence trust from competence trust to test whether they contribute independently to acceptance in hospitality settings.
The evidence for H4 is also robust: privacy concerns consistently reduce strong acceptance, particularly willingness to pay more. Importantly, the pattern is not uniform across response levels: privacy concerns mainly depress movement into the highest endorsement category (“Yes”), rather than simply shifting respondents from “No” to “Depends/Unsure.” This suggests a “commitment threshold” effect: as guests move from tentative openness to firm endorsement, perceived privacy risks become more decision-relevant (Morosan & DeFranco, 2015; Lee & Cranage, 2011; Karwatzki et al., 2017). Managerially, this implies that privacy assurance is especially critical when hotels seek premium uptake or paid upgrades, not only general openness.
Trust and privacy concerns also appear intertwined. While they are modeled as distinct predictors, their negative association suggests a practical tension: building credible, transparent governance can strengthen trust and indirectly reduce privacy resistance, whereas unresolved privacy concerns can erode trust over time. Because the design is cross-sectional, these relationships should be interpreted as associative rather than causal, and unobserved dispositions (e.g., generalized technology skepticism) may contribute to both.
Effect sizes are substantively meaningful, reinforcing trust-building and privacy assurance as actionable levers for increasing both adoption and monetization (Table S6, Supplementary Materials).
The Albanian context adds interpretive relevance. Albania has recently updated its personal data protection framework through Law no. 124/2024 on personal data protection (2024), and the national supervisory authority (the Commissioner) publicly positions the reform as part of alignment with EU data-protection standards. The earlier Law no. 9887, dated 10 March 2008, on protection of personal data (2008) is listed as repealed in the same official legislative repository. EU reporting on Albania also discusses institutional activity and public awareness on data protection, providing relevant context for why privacy assurance may be particularly salient when guests encounter AI systems requesting personal data. Accordingly, visible privacy safeguards and clear communication (what data are collected, why, for how long, and with what opt-outs) are likely to support both compliance and acceptance (Albania 2020 report SWD(2020) 354, 2020).

5.3. Perceived Value and Financial Acceptance

The results provide strong support for H5: respondents who identify more benefits and desire more smart features are significantly more likely to express both acceptance and willingness to pay, with clear and consistent positive associations for both outcomes (Table S6, Supplementary Materials). Although incremental at the unit level, these effects accumulate across the observed ranges, such that guests recognizing many benefits exhibit substantially higher predicted acceptance than those identifying only one or two. For practitioners, this implies that increasing the salience of the multidimensional value proposition of AI-enabled services can strengthen both attitudinal acceptance and monetization potential.
These findings align with value-driven adoption mechanisms central to TAM and UTAUT/UTAUT2 (Davis, 1989; Venkatesh et al., 2012). However, the operationalization used here differs from conventional reflective “usefulness” scales. Perceived value was captured through formative “breadth indicators,” namely, counts of distinct benefits and desired features. This conceptualizes value as cumulative scope (how many ways AI is perceived as useful) rather than intensity on a single dimension. In smart-hospitality contexts, where AI can deliver efficiency, personalization, convenience, multilingual support, and other functions, breadth-based measures may better reflect how guests construct value judgments. Future research could test whether breadth and intensity of perceived usefulness contribute independently to acceptance.
Exploratory nonlinear analyses suggest that perceived benefits may show mild acceleration for willingness to pay at higher benefit levels; however, the linear specification remains the primary interpretation.
Regarding H6, privacy concerns constrain willingness to pay more strongly than they constrain general openness, indicating that privacy risk becomes especially decision-relevant when guests consider financial commitment. One parsimonious interpretation is that payment contexts trigger more deliberate cost–risk evaluation, making potential privacy losses more salient than when respondents remain in an exploratory stance (Lee & Cranage, 2011; Morosan & DeFranco, 2015; Tsai et al., 2011). Accordingly, communicating functional benefits may be insufficient to secure price premiums unless hotels also provide credible privacy assurances.
It should be acknowledged that guests who identify more benefits or desire more features may differ systematically in unmeasured ways (e.g., technology readiness, innovativeness), which limits causal interpretation. At the same time, the dispersion in benefits and feature counts points to meaningful heterogeneity that can inform segmentation. Hotels might differentiate between “value-sensitive” segments that require clearer benefit communication and reassurance, and “tech-enthusiast” segments that are already inclined to adopt and pay, enabling tiered offerings and tailored messaging.
These patterns underscore a strategic implication for emerging markets such as Albania: the commercial viability of AI-enabled upgrades depends not only on perceived functional value but also on ethical comfort, particularly around data use. Given the relatively modest mean number of perceived benefits (M = 2.79 out of 7), there may be substantial scope to expand benefit recognition, but conversion into willingness to pay is likely to depend on simultaneously reducing perceived privacy risk.

5.4. Interpersonal, Cultural, and Moderation Effects

The findings provide partial support for H7, indicating that digital familiarity, captured through awareness and prior experience, is associated with higher acceptance. Although older respondents were less accepting in some specifications, the overall pattern suggests that exposure and familiarity help explain the acceptance gradient more than age alone.
Evidence for H8 is mixed and generally weak. The importance placed on human interaction does not consistently reduce acceptance and is, at most, directionally positive in some models but not robust across specifications. This suggests that valuing human warmth can coexist with acceptance of AI for routine, transactional tasks, consistent with the Paradoxes of Technology perspective (Mick & Fournier, 1998). Because “importance of human interaction” was measured with a single global item, it may also blend distinct preferences (e.g., warmth-seeking vs. technology discomfort) and mask context-specific trade-offs.
Support for H9 is modest and directionally consistent, with cultural–linguistic fit positively related to acceptance, particularly for willingness to pay, but not uniformly strong across models. This aligns with cultural congruence arguments that service technologies are evaluated not only for performance but also for the perceived appropriateness of communication styles and norms (Holmqvist & Grönroos, 2012; Holmqvist et al., 2014). Because cultural–linguistic fit was operationalized as a single item, the finding should be treated as indicative rather than definitive; future research should use multi-item measures or test specific localization features (e.g., language formality, politeness norms, locally relevant recommendations) (Holmqvist et al., 2017; Paparoidamis et al., 2019).
Consistent with H10, the interaction model indicates that trust moderates the association between perceived loss of personal touch and AI influence on hotel choice, with a modest improvement in fit (Table S19; Figure A1). Substantively, the interaction suggests that trust changes how guests “price” interpersonal warmth when evaluating AI-enabled choice influence; however, the pattern is small and should be interpreted cautiously. The moderation does not extend to willingness to pay (Tables S17 and S18; Figure A2), reinforcing the broader result that financial commitment is more directly shaped by perceived value and privacy-related evaluations than by interpersonal trade-offs.
Taken together, these results support a complementarity view of AI in hospitality: acceptance appears highest when AI is positioned as augmenting staff rather than replacing relational service, consistent with human–AI collaboration perspectives (Huang & Rust, 2018; Wirtz et al., 2018) and with the high descriptive support for staff–AI collaboration (Table 6).

5.5. Integration of Quantitative and Qualitative Findings

To complement the quantitative models, the open-ended recommendation item (“Do you have any recommendations for hotels planning to integrate AI and smart technologies?”) was analyzed using a lightweight text-mining procedure. Of the 689 respondents, 228 provided written recommendations (33.1%). Responses were generally brief, and participation did not differ meaningfully by age or gender, although some self-selection toward more engaged respondents is possible, as with any optional open-ended item.
Text preprocessing followed a transparent, reproducible pipeline (documented in the Supplementary R script): responses were lowercased, punctuation was removed, and stop-words were excluded. For readability, detailed token and bigram frequency outputs are reported in Tables S20 and S21 (Supplementary Materials). The most prominent lexical themes emphasize (i) privacy and data governance, (ii) interpersonal warmth and human touch, and (iii) staff integration and training, alongside operational value cues (e.g., efficiency and convenience). Notably, references to “data,” “privacy,” and “personal data” appear frequently without being explicitly prompted, reinforcing the centrality of information governance in respondents’ spontaneous framing (Tables S20 and S21, Supplementary Materials).
Bigram patterns add contextual structure to these themes. Frequent combinations such as “guest experience,” “human interaction,” and “human touch” underscore that relational expectations remain salient even when guests discuss technological improvements (Table S21, Supplementary Materials). A recurring cluster of replacement-related phrases (e.g., “replace human/staff”) also appears, suggesting that displacement concerns emerge organically in guest discourse. This reinforces the importance of communicating AI as augmentative rather than substitutive, consistent with the hospitality automation literature (Ivanov & Webster, 2019b) and with the strong descriptive endorsement of staff–AI collaboration in the structured item (Table 6).
Overall, the open-ended recommendations provide convergent qualitative support for the model-based results: privacy assurance is salient, human warmth remains a reference point, and guests frequently describe implementation in terms of staff integration rather than full replacement. At the same time, the text-mining evidence should be interpreted as complementary: frequency metrics capture prominence rather than sentiment or argument structure, and only one-third of respondents provided textual recommendations.
Practically, the phrasing used by respondents suggests simple communication cues. Messages that emphasize “human touch” alongside efficiency gains may reduce perceived depersonalization, while plain-language explanations of “personal data,” “privacy,” “security,” and “data handling” can make governance assurances more credible and understandable. Where relevant, explicitly signaling GDPR-aligned practices may also reassure some guests, but the broader priority is a transparent, guest-facing explanation rather than legalistic wording.

5.6. Implications for Practice

The findings yield actionable implications for hospitality managers and technology designers, with particular relevance for Albania and comparable emerging tourism markets where guest-facing AI adoption remains nascent and expectations are evolving. The recommendations below are grouped by evidential strength: (i) model-consistent effects that replicate across specifications, and (ii) directionally suggestive patterns that are weaker or threshold-sensitive.
Trust-building should be treated as the primary adoption lever. Trust in AI is the most consistent attitudinal predictor across specifications (Table S6, Supplementary Materials), making trust-building the primary lever for adoption. Practically, this implies making data governance visible at guest touchpoints rather than relying on abstract assurances, for example, concise notices at booking/check-in/Wi-Fi login, explicit opt-in consent for non-essential data uses, and staff scripts that explain data practices clearly. A key operational risk is credibility loss from overpromising: if “transparency” is claimed but retention rules, vendor roles, or opt-out options are unclear, trust may erode.
Privacy-by-design becomes especially important when hotels seek price premiums. Privacy concerns are most constraining for willingness to pay (Table S6, Supplementary Materials), so a privacy-by-default design is critical when hotels seek premiums. Hotels that aim to monetize AI features should prioritize “privacy-by-default plus optional personalization,” combining simple data toggles, clear deletion options at checkout, explicit retention periods, and plain-language explanations of what is personalized and why. An implementation risk is friction: overly complex consent interfaces can frustrate guests or inadvertently signal excessive data collection; default-minimal data collection with optional enhancements helps preserve usability while maintaining autonomy.
Experiential familiarity can be cultivated through low-risk onboarding. Prior experience is a robust predictor across specifications (Table S6, Supplementary Materials), implying that low-risk trials and onboarding can reduce uncertainty and accelerate uptake. In lower-exposure markets, lowering the “risk of first use” can be a high-return tactic: short demonstrations at check-in, QR-linked tutorials, staff-guided opt-in introductions, and phased activation (basic functions first; advanced personalization later). Poor onboarding can backfire, so piloting with staff and small guest cohorts before full rollout remains advisable.
AI should be framed and implemented as an augmentation rather than a replacement, while preserving human pathways. Descriptive support for staff–AI collaboration is high (Table 6), and open-ended recommendations frequently reference human touch and staff integration (Tables S20 and S21, Supplementary Materials). Operationally and communicatively, hotels can frame AI as freeing staff for hospitality and service recovery (routine tasks automated; humans for complex or emotionally sensitive needs) and ensuring easy escalation to staff when AI fails. Internal alignment also matters: if employees interpret AI as surveillance or workforce reduction, their skepticism may transmit to guests and weaken trust.
Two additional refinements appear directionally helpful but should be treated as incremental. Cultural–linguistic localization shows modest, positive associations, more evident for willingness to pay than for hotel choice, but is not uniformly strong across models. Localization is therefore best understood as a quality multiplier: high-quality Albanian language support, appropriate formality and greeting conventions, and locally grounded recommendations. The risk is superficial adaptation: poor translations or culturally inappropriate outputs can appear inauthentic and damage trust.
Finally, hotels may benefit from differentiated pathways rather than one-size-fits-all deployment. The variability in perceived benefits, desired features, and attitudes implies heterogeneous guest orientations. A pragmatic approach is to offer tiers and choice architectures: an opt-in “basic smart convenience” layer for broad uptake, and a premium layer for higher-trust/higher-exposure guests, without requiring intrusive profiling.
Overall, successful AI implementation in Albanian hospitality is most likely when hotels prioritize trust-building and privacy protection, create low-risk opportunities to try the technology, frame AI as augmenting human service, and treat localization and segmentation as incremental refinements rather than core prerequisites.

5.7. Limitations and Directions for Future Research

This study has several limitations that should be acknowledged when interpreting the findings and their practical implications.
First, the sampling and fieldwork design constrain generalizability. Data were collected through a voluntary, intercept-based approach in a public urban setting, which may introduce self-selection and coverage biases. Respondents who were socially oriented, had more discretionary time, or were more willing to engage with survey administrators may be overrepresented. The May–October collection window may further oversample peak-season profiles and underrepresent off-season domestic travel patterns. In addition, the geographic focus on Tirana limits coverage of other Albanian tourism contexts (e.g., coastal resorts, mountain destinations, and heritage sites) where guest compositions and technology expectations may differ. Accordingly, inferential statistics should be interpreted as describing uncertainty within the observed sample rather than providing population estimates for all hotel guests in Albania.
Second, the study relies on self-reported attitudes and stated behavioral intentions rather than observed behavior. Although diagnostic checks supported stable model performance, intention-based measures are subject to the intention–behavior gap. Stated willingness to pay and privacy concerns may not translate directly into actual booking decisions, feature usage, or real monetary trade-offs. Future work should therefore complement survey evidence with revealed-preference or behavioral data, such as booking choices, feature-usage logs, field experiments, or incentive-compatible designs that incorporate real costs.
Third, measurement and method factors warrant consideration. Collecting all measures from the same respondents at one time point raises the possibility of common method variance. While anonymity reduces social desirability pressure, intercept administration may still have influenced response styles. Several key constructs (e.g., trust in AI, cultural–linguistic fit, privacy concern, perceived loss of personal touch) were measured using single items. Although single-item measures can be appropriate for conceptually narrow constructs, this approach limits the assessment of internal consistency and the separation of true score variance from measurement error, potentially attenuating effects and increasing uncertainty around construct interpretation. Future work should operationalize these constructs with multi-item scales and test measurement equivalence across key groups (e.g., domestic vs. international guests).
Fourth, unmeasured confounding and causal ambiguity remain. The analyses controlled for demographics, hotel-stay frequency, and prior AI experience, but relevant drivers such as technology readiness, income, travel purpose (business vs. leisure), and individual difference variables (e.g., openness to experience or general risk aversion) were not measured. These factors may jointly influence both predictors and outcomes, limiting causal interpretation. Moreover, despite robustness checks, the cross-sectional design does not support causal claims, and statistical power for detecting interaction effects may be more limited than for main effects, especially when interactions are modest in magnitude.
Several extensions would strengthen the evidence base and clarify mechanisms. Experimental research could test pathways more directly, particularly the moderation pattern in which trust shaped the association between perceived loss of personal touch and hotel-choice influence more clearly than willingness to pay. For example, randomized interventions varying the clarity of data-governance communication, the presence of opt-in controls, or the framing of AI as augmentation versus replacement could isolate causal effects. Given the threshold-sensitive nature of some predictors in the ordinal models, future studies should also test whether commitment decisions involve genuine psychological thresholds or reflect response-category framing and measurement granularity. The open-ended responses also highlighted displacement concerns; future instruments should more systematically incorporate labor-ethics and workforce-impact items alongside privacy and data-handling measures.
From a design perspective, future research would benefit from probabilistic or stratified sampling and broader geographic coverage across Albania to improve external validity. Longitudinal panel designs, surveying guests prior to first exposure to smart/AI-enabled services, immediately after use, and at follow-up, could capture how acceptance evolves with familiarity and lived experience. Cross-national comparative studies (e.g., Albania versus neighboring Balkan markets and selected EU destinations) could clarify the role of market maturity and cultural context. Finally, qualitative fieldwork in operational hotels (e.g., observations and in-depth interviews) could enrich understanding of how guests interpret AI-enabled service encounters and how expectations, trust, and privacy concerns are negotiated in practice.
More broadly, the findings should be interpreted in light of the rapid evolution of AI technologies and public awareness. Patterns observed in 2025 may shift as AI becomes more prevalent, regulatory frameworks mature, and high-profile incidents influence public perceptions. Repeated cross-sectional surveys or longitudinal tracking, combined with replications across diverse markets, will be important for assessing the temporal stability and boundary conditions of the relationships identified in this study.

6. Conclusions

This study examined hotel guests’ acceptance of smart and AI-enabled technologies through an integrated framework linking utilitarian, experiential, ethical, and cultural evaluations with two behavioral outcomes: whether such technologies influence hotel choice and whether guests are willing to pay a premium.
Across both outcomes, experiential familiarity and data-governance trust emerged as the most reliable levers of acceptance. Guests with prior smart/AI-hotel experience and higher trust in responsible data handling were more supportive. Perceived value, captured as the breadth of benefits recognized and features desired, also showed clear positive associations, while privacy concerns were a recurring barrier, most visibly when respondents moved from tentative interest to financial commitment.
Interpersonal and contextual considerations added nuance rather than acting as uniform “yes/no” determinants. Perceived loss of personal touch did not consistently suppress acceptance on its own, but its implications depended on trust, indicating that ethical confidence can change how guests trade off efficiency against warmth. Cultural–linguistic fit was directionally positive, but its effects were not uniformly strong, suggesting it functions more as an incremental quality multiplier than a standalone driver.
The findings carry several theoretical implications. First, they support an “experience–trust–value” pathway for AI acceptance in hospitality, where familiarity and trust shape the translation of perceived benefits into both choice and willingness to pay. Second, the results highlight a threshold-like role of privacy concerns: privacy appears less decisive for tentative openness than for strong endorsement and premium payment. Third, interpersonal and cultural considerations appear best understood as conditional or incremental influences rather than universal inhibitors or enablers.
The results also yield practical implications. Hotels should prioritize trust-building through visible, guest-facing data governance (clear notices, opt-in controls, and staff ability to explain data handling). To monetize AI features, a privacy-by-default design is critical because privacy concerns are most constraining for willingness to pay. Especially in emerging markets, low-risk onboarding that lets guests “try before they commit” can accelerate acceptance by converting uncertainty into experiential familiarity.
This evidence comes with important constraints: the intercept, non-probability sample, self-reported intentions, and cross-sectional design limit population generalization and causal inference. Nonetheless, the convergent quantitative–qualitative pattern indicates that AI adoption is most likely to succeed when functional value is paired with transparent data practices, preserved human pathways, and culturally competent delivery.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/tourhosp7010014/s1. The de-identified dataset used in this study is available as Dataset S1, together with the full set of R scripts used for data preparation, modelling, diagnostics, and visualization (Code S1). Additional supplementary tables include: item-level missingness and sample-size diagnostics (Tables S1 and S2), the full predictor correlation matrix (Table S3), and variance-inflation factors for binary models (Tables S4 and S5). Core modelling outputs are provided in: the Master Odds-Ratio Summary Table for all acceptance models (Table S6), cumulative link model estimates for AI influence on hotel choice and willingness to pay (Tables S7 and S8), and proportional-odds assumption tests (Table S9). Robustness analyses include logistic regression coefficients (Tables S10 and S11), partial proportional-odds model estimates (Tables S12 and S13), and nonlinear model specifications (Tables S14–S16). Interaction analyses are reported in model-fit summaries and estimates (Tables S17–S19). Finally, qualitative analyses of open-ended recommendations are summarized through word-frequency and bigram-frequency tables (Tables S20 and S21).

Author Contributions

Conceptualization, M.G. and T.T.; methodology, M.G., R.M., and T.T.; formal analysis, M.G.; original draft preparation, M.G. and R.M.; supervision, M.G. and K.S.; funding acquisition, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Agency for Scientific Research and Innovation of Albania (AKKSHI) under Grant PTI 2024.

Institutional Review Board Statement

The study was approved by the Ethics Committee of the University of Tirana (protocol code NO.1007/1, date of approval: 5 July 2024).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in the study.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request owing to privacy restrictions.

Acknowledgments

The authors express their gratitude to the group of undergraduate students from the Faculty of Economy, University of Tirana, who assisted with the intercept data collection in Skanderbeg Square and supported the fieldwork throughout the study. Their effort in engaging domestic and international guests was essential to the successful execution of this research.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the study design; collection, analyses, or interpretation of data; writing of the manuscript; or decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CLM: Cumulative Link Model
PPOM: Partial Proportional-Odds Model
TAM: Technology Acceptance Model
UTAUT/UTAUT2: Unified Theory of Acceptance and Use of Technology (and its extension UTAUT2)
SRAM: Service Robot Acceptance Model
WTP: Willingness to Pay
NHT: Need for Human Touch
AIC: Akaike Information Criterion
VIF: Variance Inflation Factor
OR: Odds Ratio
CI: Confidence Interval
RQ: Research Question
H: Hypothesis
SD: Standard Deviation

Appendix A

Table A1. Survey Instrument Structure and Item List Across the Four Conceptual Blocks.
Block | Construct Domain | Item Wording | Variable Name | Response Format
Block 1. Awareness, Experience, Comfort | Awareness | Are you aware of smart technologies used in accommodations? | aware_smart | No/Not sure/Yes
 | Prior Experience | Have you previously stayed in a hotel that used AI or smart technology? | prior_ai_stay | No/Not sure/Yes
 | Comfort with AI | How would you rate your comfort level in using AI-based hotel services? | comfort_ai | 1–5 Likert
 | Hotel Frequency | How often do you stay in a hotel per year? | hotel_freq | Ordinal categories
Block 2. Perceived Benefits & Features | Desired Features | Which smart or AI technologies would you like to experience? (multi-select) | want_features | Multiple response
 | | Smart room controls; Keyless entry; AI concierge/chatbot; Personalized service; Voice assistants; Facial recognition; Smart mirrors; In-room tablets; Multilingual translation tools; Automatic check-in/out | feat_* | 0/1 Dummies
 | Perceived Benefits | What do you see as the main benefits of AI and smart technologies? (multi-select) | benefits | Multiple response
 | | Faster service; Personalized experiences; Room customization; Energy efficiency; Contactless services; Innovative guest experience; Cost savings | ben_* | 0/1 Dummies
Block 3. Human, Ethical, Privacy, Trust, Cultural | Human Interaction | How important is human interaction during your hotel stay? | human_importance | 1–5 Likert
 | Loss of Personal Touch | Do smart/AI technologies reduce the sense of personal touch? | less_personal | No/Not sure/Yes
 | Privacy Concerns | Do you have concerns about privacy or surveillance? | privacy_concern | No/Not sure/Yes
 | Trust in AI | How much would you trust a hotel that uses AI to handle personal data? | trust_ai | 1–5 Likert
 | Cultural Fit | AI and smart technologies should reflect local culture and language. | ai_culture | 1–5 Likert
 | AI–Staff Collaboration | Hotels should train staff to work together with AI instead of replacing them. | ai_staff_train | 1–5 Likert
Block 4. Behavioral Outcomes | Influence on Hotel Choice | Would the presence of smart/AI technologies influence your hotel choice? | influence_choice | No/Unsure/Yes
 | Willingness to Pay More | Would you be willing to pay a higher rate for smart/AI services? | wtp_more | No/Depends/Yes
 | Open Recommendations | Do you have any recommendations for hotels adopting AI? | open_recommendations | Open Response
Note: * denotes a family of binary dummy variables (one per checklist option); each item is coded 1 = selected, 0 = not selected. The resulting dummies (feat_, ben_) were summed to compute n_features and n_benefits.
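The derivation described in the note above (summing each family of checklist dummies into the count variables n_benefits and n_features) can be sketched as follows. The study's analysis was conducted in R; this Python sketch, with made-up respondent records and hypothetical dummy names following the feat_/ben_ prefix convention, only illustrates the computation:

```python
# Sum families of 0/1 checklist dummies into count variables, mirroring
# how n_benefits and n_features are derived from Table A1's ben_*/feat_*
# dummies. The two records below are illustrative, not actual survey data.
respondents = [
    {"ben_faster": 1, "ben_personalized": 1, "ben_contactless": 0,
     "feat_keyless": 1, "feat_chatbot": 1, "feat_voice": 1},
    {"ben_faster": 0, "ben_personalized": 1, "ben_contactless": 1,
     "feat_keyless": 0, "feat_chatbot": 0, "feat_voice": 1},
]

def count_selected(record, prefix):
    """Count how many dummies with the given prefix were selected (coded 1)."""
    return sum(v for k, v in record.items() if k.startswith(prefix))

for r in respondents:
    r["n_benefits"] = count_selected(r, "ben_")
    r["n_features"] = count_selected(r, "feat_")

print([r["n_benefits"] for r in respondents])  # [2, 2]
print([r["n_features"] for r in respondents])  # [3, 1]
```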
Table A2. Influence of Smart Technologies and AI on Hotel Choice by Gender.

| Gender | Response Category | n | Row % |
|---|---|---|---|
| Female | No | 60 | 15.71% |
| Female | Unsure | 213 | 55.76% |
| Female | Yes | 109 | 28.53% |
| Male | No | 63 | 21.00% |
| Male | Unsure | 153 | 51.00% |
| Male | Yes | 84 | 28.00% |
| Missing/Other | Unsure | 3 | 60.00% |
| Missing/Other | Yes | 2 | 40.00% |
Table A3. Willingness to Pay More for Smart Technologies and AI-Enhanced Services by Gender.

| Gender | Response Category | n | Row % |
|---|---|---|---|
| Female | No | 108 | 28.35% |
| Female | Depends | 202 | 53.02% |
| Female | Yes | 71 | 18.64% |
| Male | No | 69 | 22.92% |
| Male | Depends | 150 | 49.83% |
| Male | Yes | 82 | 27.24% |
| Missing/Other | No | 3 | 60.00% |
| Missing/Other | Depends | 1 | 20.00% |
| Missing/Other | Yes | 1 | 20.00% |
Table A4. Influence of Smart Technologies and AI on Hotel Choice by Age Group.

| Age Group | Response Category | n | Row % |
|---|---|---|---|
| 18–24 | No | 36 | 14.94% |
| 18–24 | Unsure | 134 | 55.60% |
| 18–24 | Yes | 71 | 29.46% |
| 25–34 | No | 21 | 14.89% |
| 25–34 | Unsure | 81 | 57.45% |
| 25–34 | Yes | 39 | 27.66% |
| 35–44 | No | 17 | 14.41% |
| 35–44 | Unsure | 66 | 55.93% |
| 35–44 | Yes | 35 | 29.66% |
| 45–54 | No | 17 | 19.77% |
| 45–54 | Unsure | 42 | 48.84% |
| 45–54 | Yes | 27 | 31.40% |
| 55+ | No | 28 | 37.33% |
| 55+ | Unsure | 34 | 45.33% |
| 55+ | Yes | 13 | 17.33% |
| Under 18 | No | 4 | 15.38% |
| Under 18 | Unsure | 12 | 46.15% |
| Under 18 | Yes | 10 | 38.46% |
Table A5. Willingness to Pay More for Smart Technologies and AI-Enhanced Services by Age Group.

| Age Group | Response Category | n | Row % |
|---|---|---|---|
| 18–24 | No | 54 | 22.50% |
| 18–24 | Depends | 137 | 57.08% |
| 18–24 | Yes | 49 | 20.42% |
| 25–34 | No | 32 | 22.70% |
| 25–34 | Depends | 79 | 56.03% |
| 25–34 | Yes | 30 | 21.28% |
| 35–44 | No | 25 | 21.01% |
| 35–44 | Depends | 61 | 51.26% |
| 35–44 | Yes | 33 | 27.73% |
| 45–54 | No | 25 | 29.07% |
| 45–54 | Depends | 38 | 44.19% |
| 45–54 | Yes | 23 | 26.74% |
| 55+ | No | 37 | 49.33% |
| 55+ | Depends | 25 | 33.33% |
| 55+ | Yes | 13 | 17.33% |
| Under 18 | No | 7 | 26.92% |
| Under 18 | Depends | 13 | 50.00% |
| Under 18 | Yes | 6 | 23.08% |
Table A6. Frequency Distribution of the Number of Perceived AI-Related Benefits.

| Number of Benefits | n | % |
|---|---|---|
| 1 | 150 | 21.77 |
| 2 | 154 | 22.35 |
| 3 | 182 | 26.42 |
| 4 | 130 | 18.87 |
| 5 | 52 | 7.55 |
| 6 | 9 | 1.31 |
| 7 | 12 | 1.74 |
Table A7. Frequency Distribution of the Number of Desired Smart and AI Features.

| Number of Features | n | % |
|---|---|---|
| 1 | 46 | 6.68 |
| 2 | 54 | 7.84 |
| 3 | 65 | 9.43 |
| 4 | 72 | 10.45 |
| 5 | 78 | 11.32 |
| 6 | 76 | 11.03 |
| 7 | 66 | 9.58 |
| 8 | 63 | 9.14 |
| 9 | 53 | 7.69 |
| 10 | 43 | 6.24 |
| 11 | 24 | 3.48 |
| 12 | 18 | 2.61 |
| 13 | 14 | 2.03 |
| 14 | 4 | 0.58 |
| 15 | 2 | 0.29 |
| 16 | 11 | 1.60 |
Table A8. Frequency Distribution of Privacy Concern Levels.

| Privacy Concern Level (0–2) | n | % |
|---|---|---|
| 0 | 246 | 35.70 |
| 1 | 236 | 34.25 |
| 2 | 204 | 29.61 |
| NA | 3 | 0.44 |
Table A9. Frequency Distribution of Perceived Reduction in Personal Touch.

| Less Personal (0–2) | n | % |
|---|---|---|
| 0 | 143 | 20.75 |
| 1 | 186 | 27.00 |
| 2 | 358 | 51.96 |
| NA | 2 | 0.29 |
Table A10. Frequency Distribution of Trust in AI for Handling Personal Data.

| Trust in AI (1–5) | n | % |
|---|---|---|
| 1 | 34 | 4.93 |
| 2 | 131 | 19.01 |
| 3 | 276 | 40.06 |
| 4 | 190 | 27.58 |
| 5 | 56 | 8.13 |
| NA | 2 | 0.29 |
Table A11. Frequency Distribution of Perceived Cultural–Linguistic Fit of AI Systems.

| AI Cultural–Linguistic Fit (1–5) | n | % |
|---|---|---|
| 1 | 3 | 0.44 |
| 2 | 22 | 3.19 |
| 3 | 156 | 22.64 |
| 4 | 366 | 53.12 |
| 5 | 137 | 19.88 |
| NA | 5 | 0.73 |
Table A12. Frequency Distribution of Support for Staff–AI Collaboration and Training.

| Support for AI–Staff Training (1–5) | n | % |
|---|---|---|
| 1 | 11 | 1.60 |
| 2 | 15 | 2.18 |
| 3 | 93 | 13.50 |
| 4 | 344 | 49.93 |
| 5 | 223 | 32.37 |
| NA | 3 | 0.44 |
Table A13. Pearson Correlation Matrix for Key Numeric Predictors.

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. n_benefits | 1.00 | 0.51 | 0.17 | 0.16 | 0.17 | −0.18 | 0.03 | −0.12 | 0.21 | 0.03 | 0.15 |
| 2. n_features | 0.51 | 1.00 | 0.22 | 0.11 | 0.10 | −0.14 | 0.02 | −0.05 | 0.11 | 0.05 | 0.16 |
| 3. comfort_ai_num | 0.17 | 0.22 | 1.00 | 0.13 | 0.22 | −0.17 | 0.03 | −0.08 | 0.23 | 0.01 | 0.05 |
| 4. aware_smart_num | 0.16 | 0.11 | 0.13 | 1.00 | 0.30 | −0.03 | −0.04 | −0.05 | 0.09 | 0.02 | 0.00 |
| 5. prior_ai_stay_num | 0.17 | 0.10 | 0.22 | 0.30 | 1.00 | 0.01 | −0.11 | −0.13 | 0.19 | 0.02 | 0.04 |
| 6. human_importance_num | −0.18 | −0.14 | −0.17 | −0.03 | 0.01 | 1.00 | −0.01 | 0.25 | −0.23 | 0.19 | 0.07 |
| 7. privacy_concern_num | 0.03 | 0.02 | 0.03 | −0.04 | −0.11 | −0.01 | 1.00 | 0.18 | −0.25 | 0.10 | 0.09 |
| 8. less_personal_num | −0.12 | −0.05 | −0.08 | −0.05 | −0.13 | 0.25 | 0.18 | 1.00 | −0.21 | 0.01 | 0.04 |
| 9. trust_ai_num | 0.21 | 0.11 | 0.23 | 0.09 | 0.19 | −0.23 | −0.25 | −0.21 | 1.00 | −0.04 | 0.03 |
| 10. ai_culture_num | 0.03 | 0.05 | 0.01 | 0.02 | 0.02 | 0.19 | 0.10 | 0.01 | −0.04 | 1.00 | 0.24 |
| 11. ai_staff_train_num | 0.15 | 0.16 | 0.05 | 0.00 | 0.04 | 0.07 | 0.09 | 0.04 | 0.03 | 0.24 | 1.00 |
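The correlations in Table A13 are standard Pearson coefficients. The study's computations were done in R; the following Python sketch, using toy vectors (not the survey data), illustrates the formula r = cov(x, y) / (s_x · s_y):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy vectors standing in for two survey columns such as n_benefits
# and n_features; these are not the actual data behind Table A13.
n_benefits = [1, 2, 3, 4, 5, 3, 2]
n_features = [2, 3, 5, 6, 8, 4, 3]
print(round(pearson(n_benefits, n_features), 2))  # 0.98
```

In practice a full matrix like Table A13 is produced by applying this pairwise to every column combination (e.g. R's cor() or pandas' DataFrame.corr()), typically with pairwise deletion of missing values.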
Figure A1. Marginal Effects of the Interaction Between Perceived Loss of Personal Touch and Trust in AI on the Probability That AI Influences Hotel Choice (“Yes”).
Figure A2. Marginal Effects of the Interaction Between Perceived Loss of Personal Touch and Trust in AI on the Probability of Willingness to Pay More for AI-Enabled Services (“Yes”).
Table A14. Cross-Tabulation of Privacy Concern Level and Influence of Smart/AI Technologies on Hotel Choice.

| Privacy Concern Level (0–2) | Influence on Hotel Choice | Freq |
|---|---|---|
| 0 | No | 48 |
| 0 | Unsure | 99 |
| 0 | Yes | 99 |
| 1 | No | 42 |
| 1 | Unsure | 154 |
| 1 | Yes | 40 |
| 2 | No | 33 |
| 2 | Unsure | 115 |
| 2 | Yes | 55 |
Table A15. Cross-Tabulation of Privacy Concern Level and Willingness to Pay More for Smart/AI Services.

| Privacy Concern Level (0–2) | Willingness to Pay | Freq |
|---|---|---|
| 0 | No | 58 |
| 0 | Depends | 98 |
| 0 | Yes | 90 |
| 1 | No | 58 |
| 1 | Depends | 144 |
| 1 | Yes | 34 |
| 2 | No | 64 |
| 2 | Depends | 110 |
| 2 | Yes | 30 |
Table A16. Summary of Binary Logistic Regression Fit Statistics.

| Outcome | Version | AIC | Pseudo-R² |
|---|---|---|---|
| infl_yes (binary) | Logit_full | 784.7203 | 0.096302 |
| wtp_yes (binary) | Logit_full | 662.7059 | 0.153782 |
Table A17. Partial Proportional-Odds Model (PPOM) Fit Statistics.

| Outcome | Model | AIC | Pseudo-R² |
|---|---|---|---|
| influence_choice | PPOM_F1 | 1359.614 | |
| wtp | PPOM_F2 | 1271.286 | |
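The fit statistics reported in these tables follow standard definitions: AIC = 2k − 2·lnL, and McFadden's pseudo-R² = 1 − lnL_model / lnL_null. A minimal sketch of both computations, using hypothetical log-likelihood and parameter-count values rather than the fitted values from this study:

```python
def aic(log_lik, n_params):
    """Akaike Information Criterion: AIC = 2k - 2*lnL (lower is better)."""
    return 2 * n_params - 2 * log_lik

def mcfadden_r2(log_lik_model, log_lik_null):
    """McFadden's pseudo-R^2: 1 - lnL_model / lnL_null."""
    return 1 - log_lik_model / log_lik_null

# Hypothetical values, purely for illustration: a fitted model with 12
# parameters versus an intercept-only null model.
ll_model, ll_null, k = -380.0, -420.0, 12
print(aic(ll_model, k))                          # 784.0
print(round(mcfadden_r2(ll_model, ll_null), 3))  # 0.095
```

Statistical packages report these directly (e.g. AIC() and logLik() in R), but computing them by hand clarifies why a lower AIC and a higher pseudo-R² both indicate better fit when comparing the model versions above.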

Figure 1. Response-category distributions for trust in AI, privacy concern, perceived loss of personal touch, cultural–linguistic fit, and support for AI–staff collaboration (percent of respondents in each response category).
Table 1. Age Group Distribution of Respondents.

| Age Group | n | % |
|---|---|---|
| 18–24 | 242 | 35.12% |
| 25–34 | 141 | 20.46% |
| 35–44 | 119 | 17.27% |
| 45–54 | 86 | 12.48% |
| 55+ | 75 | 10.89% |
| Under 18 | 26 | 3.77% |
Table 2. Gender Distribution of Respondents.

| Gender | n | % |
|---|---|---|
| Female | 382 | 55.44% |
| Male | 302 | 43.83% |
| Missing/Other | 5 | 0.73% |
Table 3. Frequency of Hotel Stays per Year Among Respondents.

| Hotel-Stay Frequency | n | % |
|---|---|---|
| 1–2 times | 273 | 39.62% |
| 3–5 times | 272 | 39.48% |
| 6–10 times | 98 | 14.22% |
| More than 10 times | 46 | 6.68% |
Table 4. Distribution of Responses on Whether Smart Technologies and AI Influence Hotel Choice.

| Response Category | n | % |
|---|---|---|
| No | 123 | 17.90% |
| Unsure | 369 | 53.71% |
| Yes | 195 | 28.38% |
Table 5. Distribution of Responses on Willingness to Pay More for Smart and AI-Enhanced Services.

| Response Category | n | % |
|---|---|---|
| No | 180 | 26.20% |
| Depends | 353 | 51.38% |
| Yes | 154 | 22.42% |
Table 6. Descriptive Statistics for Key Numeric Constructs (N = 689).

| Variable | N | Mean | SD | Min | Max |
|---|---|---|---|---|---|
| Number of perceived benefits (n_benefits) | 689 | 2.79 | 1.39 | 1 | 7 |
| Number of desired features (n_features) | 689 | 6.21 | 3.38 | 1 | 16 |
| Comfort with AI services (comfort_ai_num) | 686 | 3.35 | 1.05 | 1 | 5 |
| Awareness of smart technologies (aware_smart_num) | 687 | 1.65 | 0.66 | 0 | 2 |
| Prior stay in smart/AI hotel (prior_ai_stay_num) | 689 | 1.18 | 0.89 | 0 | 2 |
| Importance of human interaction (human_importance_num) | 688 | 3.82 | 0.99 | 1 | 5 |
| Privacy concerns (privacy_concern_num) | 686 | 0.94 | 0.81 | 0 | 2 |
| Perceived reduction in personal touch (less_personal_num) | 687 | 1.31 | 0.80 | 0 | 2 |
| Trust in AI (trust_ai_num) | 687 | 3.15 | 0.98 | 1 | 5 |
| Cultural–linguistic fit of AI (ai_culture_num) | 684 | 3.89 | 0.77 | 1 | 5 |
| Support for staff–AI collaboration (ai_staff_train_num) | 686 | 4.10 | 0.83 | 1 | 5 |
| AI influences hotel choice (binary) (infl_yes) | 687 | 0.28 | 0.45 | 0 | 1 |
| Willingness to pay more (binary) (wtp_yes) | 687 | 0.22 | 0.42 | 0 | 1 |
Table 7. Model Fit Summary for Ordinal Models.

| Outcome | Model Version | AIC | Pseudo-R² |
|---|---|---|---|
| Influence on hotel choice | Baseline A1 | 1342.94 | 0.047 |
| Influence on hotel choice | Extended B1 | 1338.155 | 0.053 |
| Influence on hotel choice | Attitudinal C1 | 1318.449 | 0.073 |
| Willingness to pay | Baseline A2 | 1321.396 | 0.088 |
| Willingness to pay | Extended B2 | 1309.571 | 0.099 |
| Willingness to pay | Attitudinal C2 | 1282.789 | 0.125 |