Next Article in Journal
The Importance of Ethics in Organisations, Their Leaders, and Sustainability
Previous Article in Journal
Mapping Gender Pay Disparities in Chinese Finance: A Systematic Literature and Bibliometric Review
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Systematic Review

From Brochures to Bytes: Destination Branding through Social, Mobile, and AI—A Systematic Narrative Review with Meta-Analysis

by
Chryssoula Chatzigeorgiou
,
Evangelos Christou
* and
Ioanna Simeli
Department of Organization Management, Marketing and Tourism, International Hellenic University, P.O. Box 141, 57400 Thessaloniki, Greece
*
Author to whom correspondence should be addressed.
Adm. Sci. 2025, 15(9), 371; https://doi.org/10.3390/admsci15090371
Submission received: 4 August 2025 / Revised: 10 September 2025 / Accepted: 12 September 2025 / Published: 19 September 2025
(This article belongs to the Special Issue New Scrutiny in Tourism Destination Management)

Abstract

Digital transformation has re-engineered tourism marketing and how destination branding competes for tourist attention, yet scholarship offers little systematic quantification of these changes. Drawing on 160 peer-reviewed studies published between 1990 and 2025, we combine grounded-theory thematic synthesis with a random-effect meta-analysis of 60 datasets to trace branding performance across five technological eras (pre-Internet and brochure era: to mid-1990s; Web 1.0: 1995–2004; Web 2.0: 2004–2013; mobile first: 2013–2020; AI-XR: 2020–2025). Results reveal three structural shifts: (i) dialogic engagement replaces one-way promotion, (ii) credibility migrates to user-generated content, and (iii) artificial intelligence–driven personalisation reconfigures relevance, while mobile and virtual reality marketing extend immersion. Meta-analytic estimates show the strongest gains for engagement intentions (g = 0.57), followed by brand awareness (g = 0.46) and image (g = 0.41). Other equity dimensions (attitudes, loyalty, perceived quality) also improved on average, but to a lesser degree. Visual, UGC-rich, and influencer posts on highly interactive platforms consistently outperform brochure-style content, while robustness checks (fail-safe N, funnel symmetry, leave-one-out) confirm stability. We conclude that digital tools amplify, rather than replace, co-creation, credibility, and context. By fusing historical narrative with statistical certainty, the study delivers a data-anchored roadmap for destination marketers, researchers, and policymakers preparing for the AI-mediated decade ahead.

1. Introduction

1.1. From Brochures to Bytes: Rationale and Gap Analysis

Destination branding has shifted from twentieth-century brochures and agency slogans to algorithmic curation and real time co-creation (Buhalis & Law, 2008; Zeng & Gerritsen, 2014). Classic campaigns such as I ♥ NY employed broadcast media to project idyllic imagery (Morgan & Pritchard, 1998), casting travellers as passive recipients (Munar, 2011). Today, hyper-targeted social platforms, location-aware apps, and AI recommenders personalise narratives at negligible cost. A single influencer post or TripAdvisor review can recalibrate perceptions within minutes (Revilla Hernández et al., 2016; Ayeh et al., 2013). Global connectivity and ubiquitous user-generated content have displaced mass marketing logics, making branding an interactive, data-rich encounter (Confetto et al., 2023; Misirlis et al., 2018; Xiang & Gretzel, 2010; Lee et al., 2023).
These discontinuities demand a longitudinal synthesis that traces which principles endure, which mutate, and which vanish as technologies change (Gertner, 2011; Aman et al., 2024). Each wave—Web 1.0 brochureware, Web 2.0 networking, mobile locative media, and the AI web—multiplied touchpoints, yet reposed the same puzzle: do legacy tactics still build equity, and how has the visitor morphed from audience to strategist? (U. Gretzel et al., 2015; Silvanto & Ryan, 2023; Z. J. Huang et al., 2024). Mapping that arc reveals turning points such as the legitimation of UGC (user-generated content) and the rise of big data micro-targeting (Litvin et al., 2008).
However, existing reviews are fragmented. Many dissect a single platform, offering granular insight, but no sense of systemic interaction (Giannopoulos et al., 2020; Judijanto et al., 2024). Others fetishise digital novelty and overlook robust lessons from pre-internet practice (Mandagi et al., 2024). Tran and Rudolf (2022) even note that no prior synthesis jointly interrogates destination branding and social media, exposing disciplinary silos. Micro-studies—TripAdvisor credibility (Amanatidis et al., 2020), one-off Instagram campaigns (Le & Khuong, 2023), or discrete influencer tactics (Alzboun et al., 2025)—provide depth without breadth, making cross-channel comparisons elusive and leaving open whether UGC and influencer marketing change outcomes or merely the medium (Aboalganam et al., 2025).
Temporal coverage is equally uneven. Some surveys end with the pre-internet era (G. I. Huang et al., 2023); others begin with social media and never look back (Almeyda-Ibáñez & George, 2017). Because academic interest accelerated only in the late 1990s (Pike, 2002), early studies could not anticipate Instagram or ChatGPT (GPT-2 in 2019, GPT-3 in 2020, and GPT-4 in 2023), while recent work often lacks historical baselines (G. I. Huang et al., 2023). Pike and Page’s (2014) audit showed scarce evidence that any campaign—digital or otherwise—boosts brand equity, and meta-analytic confirmation remains limited (Ruiz-Real et al., 2020). Bibliometric analyses reveal a pivot toward UGC, big data analytics, and social listening (Judijanto et al., 2024), but simultaneously illustrate how legacy strategies have become academically invisible.
Conceptual fragmentation compounds these gaps: definitions of image, engagement, and equity diverge, while metrics oscillate between sentiment counts, recall surveys, and financial proxies, consequently thwarting synthesis. Cross-platform synergies—how paid Instagram posts, earned TripAdvisor reviews, and owned websites reinforce or contradict each other—remain under-theorised, leaving practitioners and DMOs (destination management organisations) with little evidence for sequencing or budget allocation. Such methodological inconsistency also obscures causality, hampering theory building and evidence-based policymaking.
This study meets the challenge by aligning ‘brochures with bytes’ and applying uniform evaluative criteria across eras. It surfaces enduring drivers of destination equity—narrative coherence, authenticity cues, stakeholder co-creation—while revealing practices neutralised or amplified by data-driven mediation. The resulting scaffold offers scholars a shared vocabulary and provides practitioners with empirically grounded guidance for resource allocation amid rapid technological churn.
Beyond tourism marketing, destination-brand building represents a natural experiment in how publicly funded organisations (DMOs, municipal bureaus, or ministries) adopt data-intensive, citizen-centred communication. This speaks to two OS/PA (organisation studies/public administration) conversations: (i) digital-era organisational ambidexterity—balancing legacy broadcast routines with agile co-creation capabilities, and (ii) network governance, wherein public actors orchestrate heterogeneous, semi-autonomous contributors (tourists, residents, platforms, algorithms) to co-produce public value.
In sum, digital transformation has outrun scholarly consolidation. Reviews circumscribed by channel, timeframe, or method have produced a patchwork view of destination branding. A rigorous multi-era, multi-platform synthesis is thus indispensable for consolidating knowledge, testing unexamined assumptions, and steering research and practice in an AI (artificial intelligence)-mediated future.

1.2. Research Questions

To reconcile conceptual debate and empirical inconsistency in destination branding, this article adopts a systematic narrative synthesis paired with quantitative meta-analysis (Turnbull et al., 2023). This combined research strategy illuminates historical pathways while interrogating the AI frontier. Four tightly coupled research questions (RQs) guide the investigation, moving sequentially from description to explanation and finally to methodological reflexivity.
RQ1—Transformation. How have destination branding strategies and touch points evolved from a pre-digital environment to today’s AI-enabled personalisation? (informed by relationship-orientation principles in communication theory). We chronicle the migration of tactics from brochures, trade fairs, and broadcast advertising to websites, social networks, mobile apps, extended reality showcases, and conversational agents (U. Gretzel et al., 2015; Buhalis, 2020). Establishing these phases reveals watershed moments in narrative focus, stakeholder power, and visitor agency, thereby offering a temporal scaffold that mitigates presentism and positions later analyses within a coherent developmental arc (Buhalis & Sinarta, 2019).
RQ2—Effectiveness. Has digital engagement, on average, strengthened destination-brand equity—awareness, image, and loyalty—relative to pre-digital benchmarks? (grounded in customer-based brand-equity metrics). By pooling effect sizes across eras, we test whether the often-assumed virtues of reach, interactivity, and personalisation create durable value or merely reconfigure communication channels (Dedeoğlu et al., 2020; Stojanovic et al., 2022). The findings clarify the substantive, not just stylistic, consequences of the shift ‘from brochures to bytes’, informing both media substitution theory and managerial budget allocation.
RQ3—Drivers and Moderators. Which technological phases and campaign attributes amplify or attenuate branding outcomes (drawing on diffusion and technology-adoption theory to explain heterogeneity)? The analysis models contextual contingencies, comparing early web, Web 2.0, mobile, and AI epochs and considering design variables such as user-generated content density, influencer category, interaction richness, and algorithmic tailoring (U. Gretzel et al., 2015). For instance, micro-influencers may outperform celebrities when authenticity matters (Hernández-Méndez et al., 2024), whereas AI-curated messages can exceed static posts in conversion efficiency (Chen & Kim, 2025). Identifying significant moderators refines contingency theory and equips practitioners with evidence-based configuration heuristics.
RQ4—Methodological Insight. What lessons arise concerning measurement validity, construct comparability, and data quality across eras (aligned with customer-based brand equity measurement validity concerns and Mixed Methods Appraisal Tool quality dimensions)? Longitudinal synthesis illustrates heterogeneity in how key variables are operationalised. Earlier studies relied on small-sample surveys, whereas contemporary work mines social-media sentiment and clickstream logs (Lin et al., 2021). Additionally, the proliferation of proxy metrics risks conflating transient online engagement with enduring brand value. Evaluating operational coherence and sampling rigour produces guidelines that facilitate cross-temporal comparability and underpin cumulative knowledge.
By weaving together conceptual, empirical, and methodological strands, this study contributes threefold: first, it historicises digital disruption, tempering techno-determinist narratives; second, it quantifies benefits and trade-offs, challenging untested optimism; and third, it codifies evaluative standards that subsequent studies can replicate, fostering cumulative science over episodic reportage. The agenda also informs policymakers charged with stewarding sustainable destination competitiveness amid rapid technological churn. Together, the four research questions transcend descriptive catalogues of digital affordances, enabling rigorous assessment of substantive change, causal efficacy, boundary conditions, and research design. Addressing them promises a theory integrated, and evidence a calibrated framework that aligns three decades of scholarship with the nascent realities of AI-mediated branding. Finally, this agenda situates tourism research within broader debates on data ethics, algorithmic bias, and responsible innovation in marketing and governance.

2. Conceptual and Historical Framework

2.1. Defining ‘Digital Evolution’ in Destination Branding

Digital evolution in destination branding denotes the cumulative reconfiguration of place-branding practices precipitated by successive technological waves (Buhalis & Sinarta, 2019). Initially reliant on print collateral and mass-media advertisements, destination-marketing organisations (DMOs) have migrated toward interactive, data-centric environments, deploying social networks, mobile interfaces, and algorithmic systems to curate and disseminate place meaning (Huerta-Álvarez et al., 2020). Accordingly, digital destination branding is “a comprehensive set of branding choices and activities utilising the digital environment to raise destination awareness, engage visitors, and create a complete experience” (Confetto et al., 2023). Crucially, this trajectory reshapes the brand ecosystem—how narratives are co-produced, how publics intervene, and how effectiveness is appraised.
The pivot from unidirectional messaging to real-time, dialogic exchange within a networked ‘virtual society’ positions technology not as context, but as constitutive driver of brand meaning (Pencarelli, 2020). Practice qualifies as digitally evolutionary only when change is demonstrably anchored in communicative innovation—whether Web-1.0 hypertext, social-media affordances, or AI-mediated recommendation engines (Singh & Sibi, 2023). Continuous iteration obliges destinations to negotiate hybrid architectures that integrate legacy symbolism with participatory touchpoints. Such recalibration demands strategic agility and scholarly scrutiny to elucidate emergent power relations within branding networks. Tourism scholarship therefore recognises the internet and its progeny as a structural rupture compelling destinations to sustain salience across converging physical–digital domains (Xiang et al., 2015).
Digital evolution thus functions as an analytic lens, tracing the layered metamorphosis of destination branding from analogue modernity to today’s algorithmically orchestrated landscape.

2.2. Phase Delineation Rationale and Digital Evolution of Destination Branding

Following Confetto et al. (2023), we segment the digital evolution of destination branding into five technology-anchored phases. This schema recognises that media affordances, not chronology alone, restructure how places narrate identity, mobilise stakeholders, and monitor performance. By compressing granular history into analytically distinct eras, we expose recurring tensions—between managerial control and participatory authorship, between curated image and experiential authenticity—that theory must explain. Each phase is defined by its dominant interface (print, static web, social web, mobile convergence, AI-XR) and by a characteristic bundle of strategic, narrative, and measurement practices. Table 1 condenses these traits. The narrative below focuses on transformative inflection points, foregrounding what changed, what endured, and why those shifts matter for contemporary scholarship and practice.
In the pre-internet brochure era, branding operated within an analogue broadcast ecosystem (Echtner & Ritchie, 1993). National and regional DMOs disseminated meticulously composed images and slogans through brochures, guidebooks, travel fairs, and mass media, exercising near-total narrative sovereignty (Gartner, 1993; Baloglu & McCleary, 1999; Bălţescu, 2019). Consumer feedback was occasional, channelled through postal enquiries or offline word-of-mouth, and evaluation hinged on macro-indicators such as arrivals and survey-based familiarity (Pike, 2002; Krakover & Corsale, 2021). Although strategic discourse was embryonic, this phase crystallised a ‘brochure mindset’: linear storytelling, aesthetic standardisation, and centralised gatekeeping—assumptions later technologies would destabilise.
The Web 1.0 phase (≈1995–2004) digitised the brochure without subverting its logic. Early-adopter DMOs launched HTML (Hypertext Markup Language) websites that extended global reach while replicating print content verbatim (Buhalis & Law, 2008; Doolin et al., 2002; Asthana, 2022). Static pages, slow connections, and limited plug-ins restricted interactivity. Guestbooks or email links were the height of dialogue (Ho & Lee, 2007). However, the medium introduced operational economies—round-the-clock access, searchable archives, and rudimentary metrics such as page views and downloads (Law et al., 2010). Scholars began formal audits measuring usability and content richness (Yağmur & Aksu, 2022). Managerial control persisted, but the notion of a digitally empowered visitor capable of active information-seeking had been established.
Web 2.0 (≈2004–2013) recast tourists from audience to co-authors. Interactive platforms—blogs, TripAdvisor, YouTube, Flickr, Facebook, and Twitter/X—multiplied participatory affordances and shattered narrative monopolies (Sigala, 2015; Zafiropoulos et al., 2015). User-generated content flooded the symbolic marketplace, rendering authenticity, peer trust, and emotional storytelling potent brand currencies (M. Mariani, 2020; Fragidis & Kotzaivazoglou, 2022). DMOs pivoted toward dialogue: hashtag competitions, comment moderation, and influencer partnerships became routine (Barreda et al., 2020; M. M. Mariani et al., 2016). Measurement expanded from passive reach to engagement—likes, shares, follower growth, and sentiment analytics offered proximate, if imperfect, representations for brand salience (Roque & Raposo, 2015; Barreda et al., 2015). Empirical uncertainty nevertheless grew: UGC could amplify or contaminate official positioning, and demonstrable ROI (return on investment) remained elusive despite heightened visibility (Tran & Rudolf, 2022; Alves et al., 2022).
The mobile-first phase (≈2013–2020) layered context and immediacy onto social connectivity. Smartphones, 4G, and integrated apps anchored an always-on, location-aware consumption pattern (Neuhofer et al., 2015; J. J. Kim & Fesenmaier, 2015). Branding migrated into pocket-sized screens, demanding vertical video, story formats, and responsive design (Li et al., 2017). Platform ecosystems fused inspiration, planning, navigation, and booking, compelling DMOs to orchestrate coherent cross-channel narratives within SoLoMo (Social, Local, Mobile) and SoCoMo (Social, Contextual, Mobile) marketing frameworks (Kannan & Li, 2017; Xu et al., 2017; Machado et al., 2025). Real-time push notifications, GPS (Global Positioning System) targeting and AR (augmented reality) overlays enabled situational engagement during the visit itself (Mohammadi et al., 2021; Yin et al., 2021; N. Sousa et al., 2024). Gamified quests, chatbots, and micro-influencer takeovers enriched experiential depth and social proof (Nadalipour et al., 2024; Michael et al., 2025). Analytical sophistication escalated: geodata, multisource dashboards, and attribution modelling traced conversion pathways and visitor mobility (Yallop & Seraphin, 2020; Luong, 2024; Marco-Gardoqui et al., 2023).
Since 2020, an AI-infused, data-rich ecosystem has extended convergence while automating personalisation at scale. Machine-learning pipelines segment audiences, predict intent, and deliver programmatic creatives, whereas conversational agents provide 24/7, brand-consistent assistance (L. Lu et al., 2019; Krabokoukis, 2025). XR (extended reality) technologies enhance both dreaming and doing through VR (virtual reality) walk-throughs, AR storytelling, and nascent metaverse precincts (I. P. Tussyadiah et al., 2018; Koo et al., 2022). Virtual influencers and AI-generated assets widen narrative bandwidth (Hernández-Méndez et al., 2024; Argyris et al., 2021). The COVID-19 pandemic accelerated adoption, validating virtual substitutes and contactless interfaces (Sigala, 2020). Big-data infrastructure spanning social media, IoT (Internet of Things) sensors, and booking engines feeds continuous learning loops, enabling destination-marketing campaigns to be geo-fenced, paused, or localised in real time (Mirzaalian & Halpenny, 2019). Novel indicators—VR dwell time, chatbot satisfaction, algorithmic equity indices—promise granularity, yet complicate longitudinal comparability (Florido-Benítez & del Alcázar Martínez, 2024).
Viewed diachronically, the phases chart a drift from monopoly storytelling toward polyphonic, data-mediated branding. Functional aspirations—differentiation, credibility, equity growth—persist, but the repertoire of legitimation has shifted from curated iconography to algorithmic relevance. The brochure mindset prized consistency. Social and mobile logics reward immediacy, while AI demands predictive adaptation. Methodologically, ever-finer metrics have improved internal validity within phases, yet hinder cross-phase synthesis, underscoring the need for standardised equity scales that bridge analogue and digital contexts (Tran & Rudolf, 2022). Further, the field’s gravitation toward micro-behavioural indicators risks eclipsing macro-outcomes such as community well-being and carrying capacity, topics already foregrounded in foundational policy work (Gartner, 1993). Scholars must therefore examine not only how technologies enable branding, but also whose interests and imaginaries they amplify or marginalise across evolving media landscapes.
This phased framework supplies a concise, holistic lens for analysing destination branding’s digital transformation. By mapping inflection points, articulating signature practices, and highlighting unresolved tensions, it creates a common vocabulary for comparative research and evidence-based strategy. Subsequent sections leverage this scaffold to test hypotheses about effectiveness, moderators, and measurement across three decades of technological churn. Table 1 provides a high-level summary of the five phases and their key characteristics for comparison, serving as a quick reference for academics and practitioners navigating the shift from brochures to bytes and beyond. Equally importantly, this typology illuminates research blind spots: the uneven theorisation of resident agency, the ethics of algorithmic persuasion, and the resilience of destination identity in the face of deepfakes and AI-generative media. Recognising these gaps clarifies agendas for interdisciplinary collaboration among marketing scholars, computer scientists, and DMO policymakers committed to sustainable, inclusive place promotion in both mature and emerging markets.

2.3. Core Constructs Across Eras

Digital transformation has not dismantled the foundational pillars of destination branding; rather, it has progressively reframed four core constructs—imagery, narratives, engagement, and metrics. A longitudinal perspective therefore reveals both continuity and rupture and highlights the necessity of integrative theorisation attuned to accelerating technological change. While early scholarship treated digital tools as mere distribution channels, contemporary studies argue that they constitute an ontological shift in how destination meaning is produced, circulated, and contested, demanding a reassessment of established branding theory.

2.3.1. Imagery (Visual Representation)

The representational grammar of tourism visuals has travelled from staged homogeneity to polyphonic, immersive expression. Brochure photography, once meticulously orchestrated by DMOs, projected a single, idealised viewpoint. In the brochure era, DMO-staged vistas dominated (e.g., photo spreads in 1990 guides). Today’s AI-XR phase lets users explore 360° heritage tours in VR, dramatically shifting immersion and trust cues. Web 1.0 merely transplanted this top-down aesthetic into a new channel. Social platforms by contrast disrupted this hierarchy: prodigious volumes of user-generated content inserted mundane moments and unconventional angles that frequently challenged official destination iconography (Martins & Santos, 2024; Marine-Roig & Anton Clavé, 2016). Recognising the reputational dividends of perceived authenticity, DMOs shifted from selection to orchestration, curating visitor material through hashtag campaigns and reposting practices. The current AI/XR era deepens this trajectory. Interactive virtual tours, 360-degree videos, and augmented overlays enable prospective visitors to inhabit distant spaces (Guttentag, 2010; Flavián et al., 2019). Yet abundance intensifies the managerial task of coherence: moderated co-creation bolsters brand credibility, whereas unfiltered proliferation can erode it (Mak, 2017). Crucially, platform algorithms privilege visually arresting content: destinations that master thumbnail aesthetics and short-form video editing increasingly shape first impressions, raising questions about the commodification of everyday life and the invisibility of less Instagrammable spaces (Christou et al., 2025). Accordingly, visual strategy now elevates relational authenticity over pictorial perfection.

2.3.2. Narratives: Brand Story and Messaging

Storytelling remains the cognitive scaffold through which disparate images gain meaning (Kavaratzis & Hatch, 2013). Early slogans—’Incredible India,’ ‘Pure Michigan’—embodied a monologic voice foregrounding a handful of attributes. For instance, a 1990s ‘Incredible India’ print ad delivered a one-size-fits-all message, whereas today’s ChatGPT-powered itinerary chatbot crafts personalised stories attuned to each traveller’s interests (Kietzmann et al., 2018). Web 2.0 fractured this unity: travellers, bloggers, and itinerary platforms introduced alternative plotlines that often subverted official framings (Munar, 2011). Influencer partnerships evolved as a governance device: charismatic third parties translate official frames into ostensibly authentic first-person narratives, enhancing trust and narrative transportation (De Veirman et al., 2017). Real-time mobile affordances (live streams, momentary stories) added immediacy and experiential texture (Flavián et al., 2019). Extended-reality applications further expand narrative modalities: immersive heritage tours merge spatial presence with historical exposition (M. J. Kim et al., 2020). Simultaneously, plural narratives can generate dissonance—luxury, budget, and overtourism discourses often collide—requiring reflexive management and sometimes strategic silence to avoid amplifying negative storylines. Effective DMOs therefore act as ‘discourse curators’, aligning distributed voices with a stable brand essence of identity, authenticity, and affect (Barreda et al., 2015; Coudounaris et al., 2025).

2.3.3. Engagement Mechanisms

Interactional depth has expanded from passive reception to intelligent, real-time dialogue. In the print era, engagement was inferred from eventual bookings. Early websites added rudimentary interactivity such as email queries. Web 2.0 recalibrated expectations toward reciprocal conversation, Facebook pages, Twitter/X replies, and hashtag prompts operationalised Brodie et al.’s (2011) two-way symmetrical communication. Gamified challenges and crowdsourced video contests illustrated the transition from audience to community (Xu et al., 2017). Mobile ubiquity collapsed temporal distance: location-based prompts, QR (Quick Response) enabled AR games, and instant service recovery fused online and on-site realms (Sihombing et al., 2024). AI now mediates engagement through chatbots that deliver round-the-clock assistance (I. P. Tussyadiah, 2020), adaptive recommendation engines (Li et al., 2017), and robot greeters that embody a tech-forward destination identity (Nadalipour et al., 2024). Empirical work links such personalised, multisensory interactions to heightened satisfaction and advocacy behaviours (Mitev et al., 2024). Hybrid events that combine physical festivals with virtual attendance exemplify this convergence logic: participants accrue digital badges while experiencing local culture, thereby blurring consumption and production roles and extending engagement far beyond the visit. Consequently, the critical competence has become designing liminal co-creative spaces where visitors fluidly oscillate between consuming, producing, and mediating brand meaning.

2.3.4. Metrics and Performance Indicators

Evaluation paradigms have progressed from sparse, retrospective indicators to granular, predictive analytics. Traditional counts of brochures or arrivals offered limited diagnostic power. The online shift yielded click-through rates and session statistics (Law et al., 2010), soon complemented by social metrics such as follower growth, share volume, and automated sentiment scores (X. Y. Leung, 2019; Marine-Roig & Anton Clavé, 2016). Mobile datasets introduced geospatial attribution, enabling real-time linkage between digital stimuli and physical movement. Sophisticated attribution models weight exposure sequences to estimate conversion probability (Harrigan et al., 2018). Machine-learning applications now extrapolate from engagement patterns to forecast demand segments and test scenario outcomes (Doan Do et al., 2024). Classic brand-equity dimensions—awareness, image, quality, loyalty—are triangulated through surveys (Kladou & Kehagias, 2014) and digital proxies such as share of positive voice (Ruiz-Real et al., 2020). Big-data benchmarking continuously contrasts a destination’s online footprint with rivals (Xiang et al., 2017). Scholars nevertheless caution that exclusive reliance on dashboards risks managerial myopia. Quantitative indicators must be interpreted through qualitative, context-sensitive judgement (Phuthong & Chotisarn, 2025). Despite technical sophistication, attribution remains probabilistic. Marketers and DMOs must therefore complement quantitative signals with stakeholder interviews and community impact assessments to advance a more holistic accountability agenda.

2.3.5. Critical Synthesis

The four constructs form a helix of escalating complexity: immersive imagery enriches story potential; richer stories invite deeper engagement; and intensified engagement produces expansive data that feed advanced metrics, which in turn refine subsequent visual and narrative decisions. Digitisation thus redistributes symbolic power. Control has migrated from institutional gatekeepers to heterogeneous actors—travellers, algorithms, influencers—making negotiation rather than transmission the central communicative task. The literature consequently pivots from transactional to co-creative logics, positioning dialogic ethics and relational authenticity as prerequisites for sustainable brand equity. Integrating critical tourism studies with marketing analytics could thus produce richer explanatory models and guard against techno-optimism. Future research should interrogate tensions between algorithmic personalisation and collective identity, the ecological footprint of immersive technologies, and the governance of data ownership. Longitudinal designs that integrate visual analytics, ethnographic observation, and predictive modelling can illuminate feedback loops across constructs, while normative inquiry might re-evaluate success beyond economic yield toward socio-cultural resilience.
In sum, destination branding has travelled from broadcast monologue to algorithmically mediated polylogue. The contemporary managerial imperative for DMOs is orchestration: harmonising decentralised visuals, polyphonic narratives, interactive touchpoints and multilayered metrics into a coherent, value-laden sense of place that resonates on-screen and on the ground for diverse stakeholders worldwide and beyond.

2.4. Integrating Communication, Technology-Adoption, and Brand-Equity Theories

Understanding the digital reconfiguration of destination branding requires a composite theoretical scaffolding—no single lens captures its systemic complexity. Accordingly, we juxtapose communication, technology-adoption, and customer-based brand-equity perspectives, treating each as complementary rather than substitutive (Kavaratzis & Hatch, 2013). This triadic frame allows us to trace what changed, why it changed, and how those shifts recalibrate evaluative criteria for brand performance.
Communication theory explains the migration from broadcast monologue to networked dialogue. Early campaigns mirrored the Lasswellian one-way pipeline, prioritising message control over audience response (Lasswell, 1948). Web 2.0 precipitated a two-way symmetric regime: destinations now negotiate meaning with tourists rather than impose it, consistent with relationship-orientation principles (Brodie et al., 2011). Concurrent peer-to-peer exchanges amplify or erode official narratives. Influencers operate as contemporary opinion leaders, sustaining a digital Two-Step Flow dynamic (De Veirman et al., 2017). Uses-and-gratifications research clarifies why travellers voluntarily disseminate content—information search, inspiration, affiliation, and identity expression—rendering participatory affordances strategically non-negotiable (Whiting & Williams, 2013). Failure to recognise this shift locks marketing campaigns into a broadcasting paradigm that underperforms in algorithmically sorted attention markets. Consequently, DMOs must orchestrate an ecosystem in which user co-creation is cultivated rather than merely tolerated (Marine-Roig & Anton Clavé, 2016).
Technology-adoption scholarship illuminates uneven uptake of these communicative affordances. Diffusion of innovations highlights temporal differentials across destinations and visitor segments, revealing that laggards surrender conversational relevance (Rogers, 2003). At the micro-level, the technology acceptance model and its UTAUT (unified theory of acceptance and use of technology) refinement emphasise perceived usefulness and effort expectancy as adoption catalysts (Dwivedi et al., 2019). Organisational readiness—skills, trust, and strategic vision—conditions whether DMOs transform digital tools into sustained capabilities. The ‘smart DMO’ concept denotes an institutionalised commitment to perpetual innovation (U. Gretzel, 2022). Moreover, adoption disparities entrench asymmetries, as first movers appropriate data loops that magnify significant advantages. Hence, technology is neither neutral nor self-executing: its branding value is mediated by human agency and absorptive capacity.
Brand-equity theory centres the outcome variable. Destination adaptations of CBBE (customer-based brand equity) demonstrate that awareness, associations, quality, and loyalty still anchor competitive advantage, yet digital contexts foreground engagement as an emergent fifth pillar: interactive consumers are not mere spectators, but co-producers of destination-brand meaning (Kladou & Kehagias, 2014; Ruiz-Real et al., 2020). Consistency and differentiation remain axiomatic, and platform multiplicity simply proliferates the arenas in which they must be enacted (Confetto et al., 2023).
By synthesising these streams, we contend that technology supplies the tools, communication defines the process, and brand equity validates the outcome. Their interaction explains why digital transformation is consequential: failing to manage any node weakens the entire system. This integrative stance underpins our ensuing narrative review and meta-analysis, enabling us to move beyond descriptive chronologies toward theoretically informed evaluation of destination branding’s digital turn.
Digital branding in DMOs serves as a catalyst for organisational innovation by fostering structural couplings between previously isolated marketing, IT (information technology), and data analytics divisions. This integration represents a pathway toward organisational ambidexterity, facilitating both the exploitation of existing marketing capabilities and the exploration of new, digitally enabled practices. Furthermore, the participatory digital ecosystem typifies polycentric governance, with DMOs orchestrating a diverse array of autonomous actors—including tourists, residents, platforms, and influencers—whose collective contributions shape destination brands. Additionally, the rise of AI-driven personalisation introduces significant governance considerations concerning public accountability, transparency, ethics, and algorithmic bias, highlighting the imperative for deliberate oversight mechanisms within public-sector digital strategies.

2.5. Algorithmic Governance, Bias, and Data Ethics in Digital Destination Branding

AI-mediated branding introduces questions of accountability that are no longer peripheral to effectiveness. Recommender systems, generative imagery, and automated targeting may differentially amplify certain places, populations, or aesthetics while suppressing others. Three governance issues demand early integration into theory and practice, as follows.
(1)
Transparency and explainability. Perceived fairness and trust in AI assistants and personalised itineraries increase when audiences are told why a recommendation is shown and how data are used (Wanner et al., 2022; I. P. Tussyadiah, 2020). DMOs should adopt ‘explain-why’ cues and consent dashboards aligned with GDPR, treating transparency as a reputational asset rather than a legal minimum.
(2)
Representational fairness and inclusivity. Algorithmic curation tends to privilege already iconic, photogenic sites and dominant narratives, risking spatial and cultural bias (Yallop & Seraphin, 2020). Governance should include periodic audits of training data and outputs (e.g., hashtag/search bias checks, resident representation), with corrective actions (counter-seeding content, inclusive creator portfolios, multilingual equity goals).
(3)
Authenticity, manipulation, and synthetic media. AI-generated visuals, virtual influencers, and deepfake videos complicate authenticity cues central to destination image (Sivathanu et al., 2024; Hernández-Méndez et al., 2024; Yu & Meng, 2025). Disclosure norms (‘AI-generated’ labels), parity rules (synthetic content never outweighs lived narratives), and provenance tools should be incorporated.
We conceptualise these as an ‘effectiveness–ethics–equity triangle’: campaigns should be judged not only by CBBE gains, but also by transparency and distributional effects (who benefits/loses from algorithmic visibility). Table 2 translates this into a checklist for practitioners; Section 6 revisits governance as a capability of the smart DMO, sub-Section 7.2 expands the practitioner playbook with the addition of high ROI tactics, minimum KPIs, and governance checks, and a detailed practitioner-oriented checklist is provided in Appendix A.

3. Methodology

3.1. Two-Tiered Design

This study adopted a two-tier mixed-method review. Tier 1 applies a hybrid systematic-narrative scan. A protocol-driven search with explicit inclusion/exclusion criteria located targeted studies (Turnbull et al., 2023). The verified corpus was narratively synthesised to reconstruct the evolution of destination branding, exposing thematic inflections, gaps, and subtleties across digital phases (Best et al., 2014). This qualitative stage leveraged storytelling to capture shifts—from brochure-centric webpages to participatory social platforms—that simple counts obscure (M. Mariani & Baggio, 2020).
Tier 2 supplies statistical precision. A random-effect meta-analysis pooled standardised effect sizes, estimating digital marketing’s aggregate impact on core destination brand-equity indicators and testing heterogeneity among moderators (Borenstein et al., 2009). Dual-track designs are championed in tourism methodology for balancing depth with inferential power (Rasul et al., 2024). Nonetheless, this framework acknowledges biases—publication skew, varied operationalisations, temporal confounding—mitigated through sensitivity analyses, moderator coding, and triangulation with literature screened, but excluded from synthesis to improve transparency and replicability.
Each tier meets a discrete aim: the qualitative systematic-narrative review clarifies what changed and why, whereas the meta-analysis quantifies how much specified strategies matter (Hong et al., 2019). Their integration yields a coherent evidentiary arc—contextual explanation reinforced by effect magnitude—thus realising methodological pluralism while avoiding the myopia of single-technique reviews (M. Mariani & Baggio, 2020).

3.2. Scope and Inclusion Criteria

3.2.1. Temporal Boundaries and Phase Mapping

We analysed peer-reviewed studies whose data fell between 1990 and June 2025, mirroring the internet’s progressive penetration into destination marketing (Buhalis & Law, 2008). Four phases guided this synthesis: Web 1.0 (1990–1999: static DMO sites and email newsletters), Web 2.0 (2000–2009: interactive platforms and user-generated content) (Xiang & Fesenmaier, 2017; Sigala, 2015), mobile–social (2010–2019: app-enabled, real-time engagement) (Neuhofer et al., 2015), and AI-immersive (2020–2025: big-data analytics, VR/AR, influencers, and AI branding) (Sigala, 2020; I. P. Tussyadiah et al., 2018). Studies situated partly or wholly within this interval were retained, while pre-1990 promotion was excluded. Phase coding enables diachronic comparison, exposing the progression from brochure-ware to participatory storytelling (Li et al., 2017).

3.2.2. Definition of ‘Destination’

‘Destination’ denotes any geographically bounded tourism system purposefully marketed by a recognised DMO—city, region, country, or transnational route (Pike & Page, 2014; Zenker & Braun, 2017). Following Hanna and Rowley (2015), it is an experiential bundle promoted under a shared name (e.g., ‘All you want is Greece’). Studies on individual attractions, hotels, or corporate brands were rejected. The multi-scalar lens permits examination of whether digital tools equalise or magnify branding capacity between small cities and national boards (Zenker & Braun, 2017).

3.2.3. Branding Outcomes

Eligible works had to quantify at least one branding outcome. Guided by Boo et al.’s (2009) equity model and later extensions (Tasci, 2018; Chekalina et al., 2016), accepted metrics were awareness, image, perceived quality, loyalty, and composite equity. To reflect the social-digital turn, engagement indicators—likes, shares, comments, co-creation intensity, quantified sentiment—were also included (Pirnar et al., 2019; Marine-Roig, 2019; Rather, 2020). Pure arrivals or revenue data were admissible only when explicitly tied to brand-equity constructs.
By clearly specifying these outcomes, we ensured our meta-analysis would combine like with like (e.g., pooling studies on brand awareness) and that our narrative synthesis would speak to concrete indicators of branding success (such as improved brand equity scores, higher digital engagement, or greater brand recognition). Studies had to report at least one such branding outcome (pre- or post-digital intervention) to be considered for inclusion. Outcomes purely about tourist arrivals or sales without a branding perception measure were generally excluded, unless directly tied to brand equity constructs. This inclusion clarity enhances the relevance of our synthesis: we concentrate on how the digital evolution affected brand equity (e.g., destination image, loyalty) and engagement, which are pivotal goals of destination-branding campaigns (Tasci, 2018; Rather, 2020).

3.2.4. Eligibility Filter

A study was included when it simultaneously met four criteria: (i) addressed an eligible destination, (ii) reported a qualified outcome, (iii) collected data within 1990–2025, and (iv) employed or examined digital communication. Failure on any criterion triggered exclusion. The filter enhanced transparency and replicability while curbing topic drift. This pragmatic, binary rule set also expedited screening by minimising reviewer subjectivity and fatigue.

3.2.5. Critical Reflection

Nesting findings within the four technological phases illuminates the progressive redefinition of value—from information provision, through experience co-creation, to algorithmic personalisation. This framework concurrently reveals research gaps: most post-2020 studies praise AI affordances, yet rarely interrogate ethics, governance, or the uneven capacity of DMOs to harness emergent tools (Barari et al., 2025). Longitudinal designs and cross-regional comparisons are particularly needed to reveal causal mechanisms and contextual contingencies.

3.2.6. Construct Harmonisation and Measurement Equivalence

To reconcile heterogeneous operationalisations across phases and platforms, we created a crosswalk that maps study-level measures onto five canonical outcomes (awareness, image, attitudes, loyalty, engagement intentions). The protocol had four steps (full matrix is provided in Appendix B; summary is provided in Table 3):
  • Define canonical constructs using destination-adapted CBBE and engagement literature (Boo et al., 2009; Kladou & Kehagias, 2014; Rather, 2020).
  • Catalogue raw measures reported by each study (e.g., aided recall; follower growth; 5-point favourability; repeat-visit intention; comment/repost probability; share counts).
  • Apply inclusion rules:
    Awareness: aided/unaided recall/recognition; excluded: impressions, reach.
    Image: multi-item cognitive/affective associations; excluded: single-item sentiment unless validated.
    Attitudes: global favourability/warmth; excluded: satisfaction unless explicitly framed as brand attitude.
    Loyalty: revisit/recommendation/advocacy intention; excluded: arrivals unless directly linked to brand equity.
    Engagement intentions: intention to follow, share, or co-create; counts (likes/views) used only when authors explicitly theorised them as behavioural engagement and they were not co-pooled with equity outcomes.
  • Standardise to Hedges’ g or Fisher’s z and orient signs so larger values uniformly denote stronger branding outcomes. Where multiple indicators existed per construct, we averaged within-study before pooling (Cheung, 2015), preserving independence.
Pre-specified reconciliation rules (cross-phase): Because operationalisations evolved with technology, we set explicit rules to ensure comparability.
  • Priority of validated scales. Where both survey scales and platform metrics were available for the same construct, validated multi-item scales anchor the construct. Platform/process metrics are analysed as moderators or ancillary descriptors, not substitutes for equity outcomes.
  • Composites within studies. When a study reported several indicators of the same construct (e.g., three awareness items), we z-standardised and averaged them to a single within-study score before computing effect sizes.
  • Engagement taxonomy. We coded engagement at three levels, but pooled only levels 2 and 3:
    L1 Ephemeral exposure: impressions, views, likes—excluded from pooling.
    L2 Relational interaction: comments, meaningful shares/saves, follows.
    L3 Co-creation: production of UGC/reviews; participation in DMO contests; stated intent to produce content. These map to our engagement intentions outcome. L2 and L3 contribute to quantitative pooling, L1 informs narrative context and moderator coding only.
  • Directionality and sign. All effects were oriented so positive values reflect improved branding outcomes.
  • Cross-phase comparability. For outcomes lacking a pre-digital analogue (e.g., L2/L3 engagement), we treat them as digital-era constructs and discuss cross-era comparability narratively rather than statistically.
This matrix enforces construct validity across eras: a Facebook ‘like’ count never substitutes for awareness or loyalty, while pre-digital recall surveys map cleanly to awareness. Cross-era comparability therefore rests on conceptual sameness rather than metric sameness. Coder guidance, variable dictionaries, and decision logs are provided openly available at Zenodo DOIs—https://doi.org/10.5281/zenodo.16732287 (dataset of extracted effect sizes and analysis scripts for the meta-analysis) and https://doi.org/10.5281/zenodo.16731642 (coding schema for thematic analysis)—and summarised in Appendix B.

3.2.7. Engagement vs. Equity: Scope Decisions

We explicitly distinguish process metrics (platform-level attention such as likes, views, short-form comments) from equity outcomes (awareness, image, attitudes, loyalty). Process metrics can be high, yet transient, and equity requires cognitive/affective change and conative commitment. Accordingly: (i) process metrics are only pooled under engagement intentions (not with awareness/image/loyalty); (ii) when studies report both process and equity variables, they are coded separately; (iii) the Discussion (Section 6) and Limitations (Section 8) caution against equating attention spikes with durable equity.

3.3. Search Strategy and Screening

An integrative search was undertaken on Scopus, Web of Science, and Google Scholar to assemble the interdisciplinary record on the digital evolution of destination branding, balancing disciplinary reach with bibliographic rigour (Gusenbauer & Haddaway, 2020). Iterative Boolean strings intersected destination-branding descriptors (‘destination brand*’, ‘place branding’, ‘tourism marketing’) with stage-specific digital signifiers (‘internet’, ‘social media’, ‘Web 2.0’, ‘smart tourism’, ‘Web 3.0’). Truncation, proximity and wildcard operators were adjusted for each interface. Searches were limited to English-language outputs published between 1990 and June 2025. Geographic filters were deliberately omitted to preserve global coverage. On Google Scholar, where full-text retrieval prevails, concise phrases were used and the first two hundred results were manually inspected, tempering its proclivity to high recall at reduced precision (Haddaway et al., 2015; Martín-Martín et al., 2021). The exact search strings employed for each database are available in Appendix C.
The initial harvest yielded 1170 records (Scopus = 323; Web of Science = 240; Google Scholar = 607). References were imported into EndNote 20. The built-in duplicate-detection algorithm, corroborated by manual inspection, removed 440 redundant entries (Gotschall, 2021; Bramer et al., 2016). The remaining 730 titles and abstracts were screened under PRISMA 2020 guidance (Page et al., 2021). Two independent reviewers applied three inclusion tests—digital orientation, destination focus, and branding-related outcomes—excluding 490 studies. Full texts of the residual 240 articles were then examined against identical criteria: sixty lacked requisite outcome measures and twenty fell outside the temporal window, leaving one hundred and sixty studies for qualitative synthesis. Of these, sixty furnished extractable quantitative data and were consequently admitted to meta-analysis.
Reference-list snowballing of sentinel publications uncovered no additional eligible works, indicating search saturation. A PRISMA flow diagram documents each decision node and is presented in Figure 1. Exclusion rationales and screening logs have been archived, satisfying transparency recommendations (Liberati et al., 2009). A PRISMA 2020 compliance checklist ensuring that all 27 PRISMA items are addressed in the study is presented in Appendix D.
Methodological appraisal underscores notable strengths. Database triangulation constrains source bias, while inclusion of Google Scholar surfaces early-stage conference and technical papers essential in fast-moving digital contexts. Rigorous de-duplication prevents evidence inflation, and dual screening attenuates individual judgement error. Nonetheless, the English-language filter may marginalise non-Anglophone destinations, and practitioner reports remain unevenly indexed. Future syntheses should therefore incorporate multilingual queries and targeted industry repositories. In addition, the rapid diffusion of generative artificial intelligence within tourism marketing necessitates periodic updating of the corpus in case this review becomes obsolete within a temporal horizon.
Moreover, by juxtaposing quantitative meta-analytic insights with qualitative thematic patterns, this review is equipped to illuminate methodological gaps and theoretical blind spots, thereby identifying promising trajectories for subsequent future interdisciplinary inquiry. Collectively, the theoretically informed search design, meticulous data handling, and transparent audit trail provide a robust evidentiary foundation from which to interrogate how destination branding has co-evolved with successive digital environments up to 2025 (Gusenbauer & Haddaway, 2020; Page et al., 2021).

3.4. Data Extraction Protocols

Following final study selection, we applied a rigorously standardised protocol to harvest variables required for quantitative synthesis and qualitative interpretation (Schmidt et al., 2021). A spreadsheet template, piloted on sixteen papers and refined iteratively, enforced uniform coding and preserved reviewer decisions.
Bibliographic and contextual descriptors—author, year, outlet, title, region, digital setting—were first logged to enable chronological and geographical mapping. Design attributes (method, data source, sample size/population) and the focal digital constructs (e.g., destination-website launch, social-media-campaign intensity, user-generated-content volume, mobile-app diffusion) were then recorded. Each construct was cross-referenced with the phase typology defined in Section 3.2.1. Such contextual granularity allowed subsequent moderator testing on platform type, regional development stage, and methodological design during synthesis.
Outcome variables were extracted only when aligned with our inclusion frame: brand awareness, image, loyalty, engagement, or composite equity indices. Operational definitions (survey recall item, share count, validated scale, etc.) and instruments were transcribed verbatim to safeguard construct validity.
Quantitative evidence was captured at the granularity demanded by meta-analysis. Effect sizes reported by authors (e.g., r, d) were copied directly; otherwise, means, standard deviations, frequencies, t, F, or p-values were harvested for transformation. Employing the algorithms of Borenstein et al. (2010), we converted effects to Pearson’s r or Cohen’s d plus variance and harmonised sign direction so that higher values uniformly indicated stronger destination-brand outcomes (Appendix E and Appendix F). Missing statistics prompted author contact. Failing response, conservative imputations were computed. Recording the full range of statistics also protected against selective outcome reporting and inflated significance claims. A second reviewer verified twenty percent of entries, with discordances resolved by consensus.
For qualitative studies, principal thematic claims—such as ‘DMO social media fosters real-time visitor interaction’—were extracted and uploaded to NVivo for coding (Section 3.6). Emphasis lay on interrogating authors’ interpretative logic rather than mere thematic enumeration, and weakly substantiated assertions were flagged for down-weighting in narrative synthesis.
Risk-of-bias indicators (Section 3.5) were integrated alongside findings, permitting simultaneous scrutiny of evidential strength and methodological rigour. Furthermore, the iteratively updated log of extraction decisions served as an audit trail, enhancing reproducibility and enabling external replication by systematic reviewers worldwide. This structured alignment facilitated subsequent sensitivity analyses that excluded high-risk studies and informed the grading of evidential certainty.
By coupling meticulous data capture with continuous critical appraisal, the protocol minimised transcription error, curtailed selective-reporting bias, and furnished a coherent evidentiary matrix from which both statistical aggregation and thematic interpretation could proceed.

3.5. Risk-of-Bias and Quality Assessment

To accommodate methodological diversity—randomised trials, observational surveys, and qualitative case analyses—we employed the Mixed Methods Appraisal Tool (MMAT) 2018 as the unifying rubric (Hong et al., 2019) through a comprehensive, systematic, transparent, methodological approach. Its cross-design matrix enabled comparable scrutiny of sampling, measurement, and analytic transparency while discouraging spurious composite scores (Pace et al., 2012). Two reviewers independently rated each article; disagreements were resolved through discussion. For qualitative work, the focus lay on epistemological congruence, data richness, and interpretive rigour, whereas quantitative studies were inspected for selection bias, confounding control, and statistical adequacy (Pace et al., 2012). Ratings (yes, no, cannot tell) were logged, and deficiencies—low response rates, absent triangulation, inadequate randomisation—flagged for later sensitivity tests.
Complementary checklists served solely as clarifying lenses. The Critical Appraisal Skills Programme (CASP) qualitative checklist illuminated reflexivity and contextually dense description in purely qualitative papers (Long et al., 2020); Cochrane and Newcastle–Ottawa prompts verified performance, detection, and attrition risks within trials and cohorts (J. A. C. Sterne et al., 2019; Wells et al., 2013). Nonetheless, MMAT outcomes remained the analytical anchor, preserving coherence across designs (Hong et al., 2019).
Forty-two percent of the corpus achieved a high-quality designation (n = 68), forty-five percent medium (n = 72), and thirteen percent low (n = 20). Recurring threats included modest samples in early digital-branding experiments, self-selection biases in social-media questionnaires, and opaque methodological reporting (Fletcher & Marchildon, 2014; Podsakoff et al., 2012). Quality operated as a descriptive moderator rather than an exclusion filter, thereby preserving evidential breadth while enabling critical interrogation (Higgins et al., 2011). A one-way ANOVA (Table 2) contrasted mean effect sizes across strata—high = 0.57 (SD = 0.12), medium = 0.53 (0.14), low = 0.50 (0.17)—yielding no significant difference (F = 1.32, p = 0.271), and meta-regression detected no systematic inflation by lower-quality studies (Borenstein et al., 2010). Detailed MMAT ratings underpin Table 4, elucidating how effect estimates clustered irrespective of quality segmentation. Nonetheless, absence of significant moderation should not be conflated with methodological equivalence: several low-quality studies exhibited wide confidence intervals, signalling imprecision that warrants circumspection when extrapolating policy recommendations.
This dual-layer protocol—MMAT core plus targeted cross-checks—offered a concise, yet comprehensive bias synthesis. By foregrounding transparent, design-sensitive criteria and triangulating findings with sensitivity analyses, we bolstered interpretive stability without sacrificing inclusivity. This strategy accords with contemporary guidance on integrative reviews striving to balance methodological stringency, substantive nuance, reflexive critique, and enhancing the translational relevance for scholars and practitioners worldwide (Hong et al., 2019; Higgins et al., 2011).

3.6. Coding and Thematic Analysis

Qualitative evidence was interrogated through a rigorously structured thematic procedure implemented in NVivo 12 (QSR International) (Zamawe, 2015). Guided by grounded-theory sensibilities, we executed open, axial, and selective coding in sequence (Corbin & Strauss, 2015).
During open coding, two analysts independently annotated each study, attaching inductively generated descriptive codes to any passage illuminating the digital evolution of destination branding (Saldaña, 2015). Examples include ‘website as brochure’, ‘user-generated content’, and ‘brand community formation’. The absence of a priori limits ensured analytic saturation (Braun & Clarke, 2006). A consolidated codebook, negotiated after an initial calibration round, promoted epistemic consistency.
Next, axial coding reassembled these fragments into higher-order categories that mapped both chronological phases (Web 1.0, Web 2.0, smart tourism) and transversal tensions such as ‘adoption barriers’ versus ‘opportunities’ (D. Leung et al., 2013; Buhalis & Foerste, 2015). This stage clarified how social-media tactics, digital storytelling, and mobile applications collectively influence brand-equity components such as awareness, image, and loyalty.
Finally, through selective coding, we distilled four meta-narratives: (i) engagement shifting from monologic webpages to dialogic platforms, (ii) staged technology adoption by destination management organisations, (iii) tourists’ metamorphosis from audience to co-creators, and (iv) ambivalent effects of digital media on brand equity (U. Gretzel et al., 2015). NVivo matrix queries verified the co-occurrence structure—for instance, ‘mobile apps’ frequently intersected with ‘brand loyalty’—enhancing internal validity (Zamawe, 2015).
Reliability was assessed through double-coding a random 25% of studies. Cohen’s κ averaged 0.79, evidencing substantial agreement (Campbell et al., 2013; McHugh, 2012). Discrepancies were resolved through reflexive dialogue and the codebook iteratively refined, curbing individual bias (O’Connor & Joffe, 2020). Word-frequency clouds (Figure 2) and hierarchical charts (Table 5) were exploited not for aesthetics. but for analytical triangulation, revealing latent lexical patterns (Leech & Onwuegbuzie, 2011). The final thematic architecture, traceable from raw quotations to abstract theory, underpins the narrative synthesis reported in Section 4.
Critical reflection reveals two limitations. First, reliance on published English-language research risks Western-centric bias. Ongoing inclusion of grey and non-English sources is therefore essential. Second, while coding reliability was high, thematic interpretation remains contingent on researcher positionality. Future work should test our schema through member checking with destination marketers.
Despite these caveats, the multilayered coding strategy furnishes a transparent, replicable lens through which to apprehend the digital trajectory of destination branding. By aligning chronological and conceptual axes, we expose not merely what technologies were adopted, but how successive media regimes reconfigured power relations between DMOs and tourists, thereby reshaping brand equity in complex, sometimes contradictory ways. Consequently, the synthesis offers a platform for hypothesis generation concerning causal mechanisms linking digital affordances, participatory cultures, and destination competitiveness, which are elaborated in the concluding section.
The detailed coding schema applied in our thematic analysis, including codes, definitions, and thematic groupings, is available in the online appendix at https://doi.org/10.5281/zenodo.16731642.

3.7. Meta-Analysis Criteria

Only studies yielding computable quantitative effects between a digital intervention and a branding outcome were meta-analysed. Eligible designs comprised experiments, comparative surveys, correlational time series, and analogous formats reporting means with dispersion, sample sizes, or test statistics sufficient to derive a standardised index (Borenstein et al., 2009; Hunter & Schmidt, 2004; Lipsey & Wilson, 2001). Purely conceptual or qualitative contributions informed the narrative synthesis only (Glass, 2000). To preserve conceptual coherence, outcomes were stratified a priori into awareness, image/attitude, and engagement. Aggregation was permitted solely when operationalisations were substantively alike; for instance, survey-based awareness scores could be pooled across contexts, whereas awareness–loyalty mixtures were excluded (Cooper, 2010; Card, 2012). Studies lacking essential statistics were excluded from quantitative pooling as well.
All pooled estimates were computed with random-effect models, acknowledging genuine variation across destinations, visitor segments, platforms, design, and measurement instruments, implying that true effects vary around a distribution rather than share a single common effect size. This choice enhances generalizability across heterogeneous settings (Borenstein et al., 2010; Viechtbauer, 2010; Raudenbush, 2009). Under this framework, each study estimates a unique true effect drawn from an underlying distribution, permitting broader inference while explicitly modelling between-study variance τ2 (Higgins et al., 2011; Hedges & Vevea, 1998). Continuous contrasts yielded Hedges’ g, whereas zero-order associations were expressed as Fisher-transformed r. All signs were oriented so positive values denote beneficial digitisation (Lipsey & Wilson, 2001; Card, 2012). Inverse-variance weights allocated greater influence to larger, more precise studies (Hedges & Olkin, 2014).
Statistical consistency was examined with Cochran’s Q and the I2 statistic (Higgins et al., 2003). Several outcome groups displayed moderate–high I2, confirming substantive heterogeneity; nevertheless, random-effect intervals remained interpretable. Exploratory subgroup tests and meta-regressions probed moderators such as study period and destination scale, promoting cautious differentiation instead of indiscriminate averaging (Lipsey & Wilson, 2001; Thompson & Higgins, 2002).
Publication bias threatens quantitative syntheses, yet is seldom addressed in tourism research. We therefore combined visual funnel-plot inspection with Egger’s regression (Egger et al., 1997). Across all three outcome strata, the funnels were symmetrical and the intercept non-significant (p > 0.10) (Table 6), indicating minimal small-study suppression. Our extensive grey-literature search, which harvested theses and reports via Google Scholar, plausibly contributed to this result (Hopewell et al., 2005). Pre-specified trim-and-fill adjustments were unnecessary, underscoring the robustness of pooled estimates (Duval & Tweedie, 2000).
Finally, we applied statistical software (R with the metafor package, version 4.8-0) to perform all meta-analytic calculations, ensuring reproducibility with scripts (Viechtbauer, 2010). We report the pooled effect sizes with confidence intervals and heterogeneity stats (I2 and Q) for each outcome analysed (Table 7). The use of random-effect modelling, given the significant heterogeneity found, aligns with recommendations for meta-analyses in the social sciences, where diverse study designs and contexts are expected (Lipsey & Wilson, 2001; Card, 2012; Raudenbush, 2009). By explicitly reporting these inclusion criteria and analytic decisions, our methodology ensures a rigorous and transparent quantitative synthesis that complements the narrative review, providing a comprehensive understanding of how digitisation has quantitatively influenced destination branding outcomes over time. The dataset containing extracted effect sizes and the analysis scripts used to perform the meta-analysis are publicly available in an online appendix at https://doi.org/10.5281/zenodo.16732287.

4. Narrative Review: Evolution of Strategies and Evidence

4.1. Pre-Digital Era: Print Collateral, Mass-Media Buys, Push Narratives

Drawing on evidence from the 160 academic studies identified in Section 3.3, in the pre-digital era, destination branding was dominated by traditional ‘push’ marketing tactics. Destination-marketing organisations (DMOs) relied heavily on printed brochures, travel guidebooks, trade fairs, and mass-media advertising (e.g., magazine spreads, billboards, TV/radio spots) to shape a destination’s image. Branding a destination in this period required substantial budgets for media buys and creative production: before the internet, building a global destination brand essentially meant investing heavily in mass media outreach. Communication was primarily one-way: DMOs crafted narratives and visual identities, disseminating them to passive audiences. There was limited opportunity for real-time feedback or consumer interaction, aside from occasional surveys or focus groups (Zhou, 1997).
Notably, the academic concept of ‘destination branding’ itself only began gaining traction in the late 1990s as tourism marketers recognised the need for more strategic place image management. Prior to that, promotional efforts were often ad hoc and campaign-driven, focusing on slogans, logos, and iconic visuals—typically picturesque landscapes on posters—to foster a singular destination identity (Schaar, 2013). DMOs maintained tight message control, carefully curating all brand content. National tourism boards, for example, distributed standardised brochures worldwide to ensure a consistent (albeit static) image. During this era, tourists were treated as information recipients, with narratives delivered top-down to highlight attractions and promise idealised experiences. Empirical academic evidence from this time is sparse, given that academic interest was nascent (Pike, 2002); however, industry analysis confirms that DMOs continued allocating most of their promotional budgets to print and broadcast media well into the early 2000s (Sotiriadis, 2020).
The limitations of these traditional approaches were manifold: dissemination was slow due to long lead times, targeting niche segments was difficult, and there was a lack of timely performance metrics, making it challenging to assess how print ads or brochures translated into actual visits or brand awareness outcomes (Zhou, 1997). In summary, the pre-digital era established foundational branding practices cantered on unilateral messaging and emblematic visuals, but it offered limited audience engagement and measurability.

4.2. Phase-by-Phase Thematic Analysis

Digitisation has reconfigured destination branding through successive techno-cultural waves. Synthesising the 160 peer-reviewed studies identified in Section 3.3, we trace four milestones—Web 1.0, Web 2.0, mobile ubiquity and the current AI/XR turn—showing how narrative authority has shifted from destination-management organisations (DMOs) to increasingly empowered travellers. While the phases are presented discretely, scholars emphasise path dependence: lessons, infrastructures, and stakeholder capabilities acquired in one wave condition strategic latitude in the next. Our synthesis therefore privileges evolutionary logics over simple chronology, foregrounding how each technological grammar recalibrates power relations, performance metrics, and ethical stakes for destination leaders and researchers.

4.2.1. Web 1.0: Informational Websites and Virtual Brochures

From the mid-1990s, DMOs treated early sites as digitised pamphlets, privileging technical visibility over interactivity (G. Gretzel et al., 2006). Content largely duplicated print guides, and updates were sporadic because rudimentary content-management systems raised transaction costs (Law et al., 2010). As a result, DMOs remained the chief—often unchallenged—source of information (Litvin et al., 2008). When organisations invested in usability testing, multilingual pages, or simple feedback forms, perceived information quality and trust improved, foreshadowing later eWOM effects (Law et al., 2010). Nevertheless, branding stayed one-way: bytes replaced paper, but discourse remained top-down. Crucially, early diffusion studies depict digital adoption as uneven: resource-rich national boards moved first, whereas municipal offices lagged, reproducing centre-periphery hierarchies already visible in analogue promotion. Methodologically, most investigations relied on descriptive audits of website features, and only a handful applied experimental designs or visitor analytics, limiting causal inference. The literature thus invites a reassessment of presumed efficiencies and highlights a persistent tension between technological promise and organisational inertia—a theme that recurs across subsequent phases and shapes present capacity disparities too.

4.2.2. Web 2.0: Participatory Culture, UGC, and Social Proof

The mid-2000s explosion of social platforms, review sites, and blogs shifted authority toward user-generated content (UGC). Travellers began to trust peer narratives over official discourse, fundamentally altering brand construction (Xiang & Gretzel, 2010; Zeng & Gerritsen, 2014). Empirical syntheses attribute this ‘credibility dividend’ to unprecedented visibility of non-marketer voices (Munar & Jacobsen, 2014). DMOs that embraced co-creation—exemplified by Tourism Queensland’s 2009 Best Job campaign—leveraged volunteer storytellers for viral reach (M. Mariani et al., 2014), yet surveys show many organisations simply repost brochure copy on Facebook or Twitter, forfeiting dialogue (U. Gretzel et al., 2015). Engagement metrics such as likes, shares, and comments therefore emerged as proxies for relational value (M. M. Mariani et al., 2016). While positive UGC stimulates favourable attitudes (Litvin et al., 2008), unmanaged negativity risks fragmentation, dilution, and crisis contagion (Sigala, 2020; de la Hoz-Correa et al., 2018).
Scholarly attention during this phase shifted from informational quality to socio-cognitive mechanisms. Attribution theory elucidates why travellers discount overtly promotional posts, yet valorise candid narratives, while social-capital frameworks clarify influencer potency. Nonetheless, empirical agendas skew toward positivist metrics—simple counts of likes or shares—at the expense of interpretive work on meaning-making practices. Comparative research across cultural contexts remains sparse: findings from Anglophone markets may not generalise to collectivist or high-context societies, where reciprocity norms and face-saving rituals modulate digital dialogue. Future inquiries should fuse cross-cultural psychology, critical discourse analysis, and network ethnography to reveal branding’s contested, plural realities with robust methodological triangulation.
In sum, the Web 2.0 phase was characterised by a loss of message control, but a gain in authenticity and trust. Destinations that embraced co-creation—facilitating travellers to become storytellers—could amplify their brand through peer networks. Empirical evidence in this era shows social-media content significantly affects travellers’ destination decisions and images; tourists increasingly rely on eWOM when planning trips, with positive UGC enhancing a destination’s appeal (Litvin et al., 2008). The DMO’s role started to shift from content authority to content curator and conversation moderator. Branding strategies expanded to include influencer partnerships, online-community management, and real-time handling of both positive and negative viral content. The participatory nature of Web 2.0 thus redefined destination branding as an ongoing dialogue, rather than a polished monologue.

4.2.3. Mobile: Always-On Destination Storytelling and Location-Based Engagement

The smartphone decade embedded the internet in travellers’ pockets, collapsing pre-, in- and post-trip boundaries. Early warnings that devices would demand context-relevant micro-content proved prescient (Buhalis & Law, 2008). By 2012, roughly half of DMOs had launched apps and almost all maintained mobile-optimised sites (I. P. Tussyadiah, 2020). These interfaces enabled GPS-triggered notifications and beacon services, exemplified by Brisbane’s city-wide deployment (U. Gretzel et al., 2015), intensifying real-time narration (Bigné et al., 2019). Empirical work confirms tourists employ smartphones for on-site decisions, compelling DMOs to supply granular, multilingual data and responsive support (Neuhofer et al., 2015).
Ubiquitous connectivity also turned visitors into live broadcasters, extending reach through pervasive hashtags (Sigala, 2020), yet the mobile turn accentuated infrastructural divides: destinations in bandwidth-poor regions could not harness video-rich formats, underscoring digital inequality as a branding determinant. Empirically, most studies employ convenience samples of leisure tourists, neglecting segments such as VFR (visiting friends and relatives) or accessibility-focused travellers who may experience mobile affordances differently. Addressing these blind spots is vital for inclusive destination narratives and sustainable growth agendas. Notwithstanding these caveats, destinations that transformed the phone into a personalised concierge deepened experiential bonds and prepared audiences for subsequent hyper-personalisation.

4.2.4. AI, XR, and Predictive Personalisation (Chatbots, Recommender Systems)

The most recent phase of digital evolution finds destinations experimenting with artificial intelligence (AI) and immersive technologies (extended reality—XR) to further personalise and enrich the branding experience. This phase, emerging in the late 2010s and accelerating through the 2020s, encompasses tools such as chatbots, AI-driven recommender systems, and virtual/augmented-reality (VR/AR) applications. The unifying theme is predictive personalisation: leveraging big data and machine learning to tailor interactions to individual travellers and using immersive media to create compelling virtual narratives of the destination.
Since the late 2010s, DMOs have experimented with chatbots, recommender systems and immersive media to algorithmically tailor and dramatise brand encounters. Always-on AI assistants cut response times and elevate service quality, and recent experiments show informativeness and trust enhance destination image and visit intention (Orden-Mejía et al., 2025; Tosyali et al., 2023; Wüst & Bremser, 2025). Data-driven trip planners in Las Vegas and Amsterdam illustrate itinerary design that adapts in real time, while transparency safeguards acceptance (N. Sousa et al., 2024; Koo et al., 2022). VR previews boost emotional engagement, and AR layers situational narratives onto place, especially resonating with tech-savvy youth (Anaya-Sánchez et al., 2024; Pricope Vancia et al., 2023). Scholars caution, however, that heavy reliance on algorithms raises privacy, filter-bubble, and resource concerns, potentially widening competitive gaps (N. Sousa et al., 2024). Over-promising visuals can trigger expectation–reality dissonance (Koo et al., 2022).
Current scholarship remains exploratory; small-scale pilots dominate, and rigorous impact evaluations are scarce. Moreover, algorithmic governance debates are only beginning to intersect with branding research, leaving questions of accountability, bias mitigation, and public oversight under-explored. Without such scrutiny, techno-optimism may eclipse equity considerations for vulnerable or peripheral destinations worldwide.

4.3. Cross-Phase Comparative Insights: Message Control, Co-Creation, Speed, Reach, and Data

A longitudinal reading of the preceding phases discloses marked reconfigurations in destination-branding praxis. Whereas pre-digital promotion relied on linear diffusion and managerial monologue, successive digital epochs have progressively empowered tourists as co-producers of place meaning, compressed communicative latency, expanded networked reach, and deepened data granularity available for strategic calibration. Table 8 presents a synoptic contrast of these trajectories across the print, Web 1.0, Web 2.0, mobile, and AI/XR eras.
Several critical inflections emerge. First, the locus of branding power has migrated from institutional centre to distributed periphery: tourists’ digitally archived experiences increasingly define destination image, relegating DMOs to orchestrators of participatory discourse rather than custodians of a canonical narrative. Second, temporal dynamics have shifted from campaign periodicity to perpetual monitoring and intervention: strategic agility, not planned messaging, now underwrites reputational resilience. Rapid feedback loops compel DMOs to embrace iterative logics, reconfiguring messages in line with shifting sentiment.
Third, digitally mediated reach has become both omnipresent and conditional. Network effects can broadcast a single visitor’s story worldwide within minutes, yet platform algorithms filter visibility, rendering reputation simultaneously global and fragile. Fourth, the data corpus informing decisions has evolved from coarse arrival counts to multimodal, real-time streams capable of predictive modelling. While such intelligence affords unprecedented precision, its interpretive value depends upon critical literacy rather than unreflective quantification.
Collectively, these transformations render destination branding more fluid, polyvocal, and data-intensive. The managerial challenge therefore transcends content production and enters the domain of relational governance, cultivating trust, curating community contributions, and deploying analytics that inform but do not eclipse human judgment. DMOs that fail to internalise this dialectic risk strategic obsolescence, while those that embed adaptive capabilities, interdisciplinary skillsets, and reflexive ethical reflection are prerequisites for agencies that seek to steward destination identities amid technological turbulence.
Finally, reliance on proprietary platforms and opaque recommender systems foregrounds critical normative concerns. Algorithmic curation can privilege already iconic sites, marginalise peripheral communities, reproduce socio-cultural bias, and compress experiential diversity. Accordingly, evaluation frameworks must integrate engagement metrics with indicators of distributive equity, cultural sustainability, data ethics, privacy preservation, transparency, and inclusive social justice metrics. Future research should interrogate how spatial computing, generative media, and quantum optimisation will further recalibrate agency within the visitor–brand nexus and what potential regulatory and design safeguards are required to temper techno-market excess.

4.4. Emergent Research Themes and Under-Explored Areas

Our synthesis isolates four research frontiers requiring deeper theorisation and evidence. First, authenticity, not glossy promotion, now anchors persuasive digital branding. Campaigns foregrounding lived narratives and dialogical exchange recast destinations as co-created cultural texts authored by marketers, residents, and visitors (I. P. Tussyadiah, 2020). Salient issues persist; how to integrate user-generated content ethically, and how to govern narrative coherence within decentralised storytelling ecosystems.
Second, the smart-destination paradigm merges infrastructure, analytics and interactive media to align marketing with in situ experience. U. Gretzel et al. (2015) posit that ubiquitous connectivity and sensor-based services double as brand touchpoints, yet the causal chain linking technological affordances, visitor emotion, and brand equity remains tentative. Research should clarify mediators—perceived innovativeness, cognitive immersion, trust—and test them across destinations with unequal digital maturity.
Third, influencer marketing is pervasive, yet scholarly evaluation lags practice. DMOs regularly enlist bloggers, social-media personalities, and synthetic avatars (Băltescu & Untaru, 2025); however, rigorously designed investigations of campaign longevity and credibility are scarce. Femenia-Serra and Gretzel (2020) advocate longitudinal designs that separate fleeting attention spikes from durable attitude shifts while exposing the power asymmetries embedded in influencer–DMO contracts.
Fourth, AI-enabled personalisation extends prior innovation, yet introduces conceptual and ethical tensions. Early work on chatbots and recommender systems identifies trust as a pivotal moderator of branding outcomes (Xiang et al., 2017). Simultaneously, large-scale data harvesting generates privacy and security dilemmas that tourism scholarship has scarcely theorised (Dwivedi et al., 2019). The effectiveness of immersive media also remains unclear. Further experiments must test whether virtual or augmented realities convert curiosity into visitation and yield a positive return on investment (A. E. Sousa et al., 2024). Meta-analytic synthesis of trials remains overdue.
Important contextual lacunae persist. Existing research privileges high-profile cities and well-resourced DMOs, leaving peripheral or developing destinations under-examined. The strategic consequences of continuous platform migration—from early web forums to TikTok and metaverse prototypes—are likewise opaque. Tracking cohorts of early adopters and laggards over ten- to fifteen-year horizons could illuminate path dependence and adaptive capacity (Zenker & Braun, 2017).
A burgeoning critique also warns against technological determinism. Sophisticated tools cannot compensate for thin place narratives or weak stakeholder engagement: several celebrated initiatives deliver negligible brand returns (Xiang et al., 2017). Evaluations must therefore embed technology within organisational, socio-cultural, and visitor-heterogeneity frameworks (Dwivedi et al., 2019) to align investment with demonstrable public value.
In sum, destination branding has progressed from brochures to bytes, yet interactivity, personalisation, and consumer agency create challenges equal to their promise. By historicising these shifts, our hybrid systematic-narrative review furnishes the conceptual scaffold for the ensuing meta-analysis, which will quantify whether social-media adoption has on average strengthened destination-brand equity. Appreciating this evolutionary continuum enables scholars and practitioners to design strategies that remain evidence-based, authentic, and adaptively resilient.

5. Meta-Analysis: Quantifying Digital-Era Branding Impacts

This section presents a quantitative synthesis of destination branding outcomes in the digital era based on 60 studies identified through the specified queries. It details the rationale for focusing on post-internet studies, the approach to computing standardised effect sizes, moderator analyses for key contextual factors, checks for publication bias, and a summary of meta-analytic findings on effect magnitudes.

5.1. Rationale for Focusing on Digital-Era Studies

5.1.1. Addressing Scarcity of Comparable Pre-Digital Effect-Size Data

The pre-digital ‘brochure age’ of destination branding was characterised by case studies and conceptual discussions, rather than uniform quantitative metrics. Early destination branding research (circa 1990s–early 2000s) was relatively nascent and often focused on defining concepts or comparing destination brands to product brands, with few empirical studies measuring effect sizes (Veríssimo et al., 2017). In other words, while destinations certainly engaged in branding via brochures, print media, and static websites, academic documentation of their impact (e.g., how much a brochure campaign raised brand awareness or improved image) is sparse. The lack of standardized measures in that era makes it difficult to aggregate or compare quantitatively—studies seldom reported statistics like Cohen’s d or correlation coefficients for branding outcomes. As a result, there is a scarcity of comparable pre-digital effect-size data. For instance, a review noted that much of the earlier work remained highly theoretical without consistent metrics of success (Schaar, 2013). Given this gap, attempting a formal meta-analysis spanning the pre-digital era would be impractical and potentially misleading. The present study therefore concentrated on the digital era, where more rigorous evaluations of branding outcomes have been conducted, yielding data amenable to quantitative synthesis.

5.1.2. Justifying Split Between Quantitative Meta-Analysis (Digital) and Qualitative Comparison (Pre-Digital and Digital)

Because of the above limitations, we adopted a mixed approach: a quantitative meta-analysis for the digital era and a narrative, qualitative treatment for the pre-digital and digital eras. This division is methodologically justified on several grounds. First, combining vastly different eras in one meta-analysis would violate the principle of comparing ‘like with like’—the heterogeneity in context and measures would be excessively high. Branding campaigns on social media or interactive websites generate data (engagement metrics, survey-based brand metrics, etc.) that simply have no counterpart in the brochure era. Second, the number of rigorous, peer-reviewed studies from the pre-digital period with usable effect sizes is extremely small (in many cases, none), whereas the digital era provides a critical mass of studies to aggregate. By splitting the analysis, we handle each era in the most suitable way: the digital era is analysed quantitatively to derive generalizable effect-size estimates, while the pre-digital era is discussed qualitatively (in earlier sections) to capture its insights and historical context. This approach mirrors best practices in review methodology when faced with incommensurate data—using narrative synthesis for periods or topics that lack statistical commensurability, and reserving meta-analysis for the subset of studies that share common metrics and sufficient data (Veríssimo et al., 2017). In sum, focusing the meta-analysis on digital-era studies allows us to rigorously quantify the impacts of ‘bytes’ on destination brands, without diluting the analysis with ‘brochure’ studies that cannot be directly compared. The qualitative vs. quantitative split also helps emphasise the evolution: we quantitatively document the gains of digitisation and qualitatively contrast them against the baseline of the pre-digital practices.

5.2. Effect-Size Computation and Standardisation

5.2.1. Primary Outcomes: Brand Awareness, Image, Attitudes, Loyalty, Engagement Intentions

To construct a coherent meta-analytic framework, we consolidated the dependent variables examined in digital-era destination-branding research into five canonical outcomes: brand awareness, brand image, brand attitudes, brand loyalty, and engagement intentions. These constructs replicate the cognitive–affective–conative hierarchy that anchors customer-based brand equity, thereby enabling rigorous cross-study comparability (Boo et al., 2009; Qu et al., 2011).
Brand awareness denotes tourists’ capacity to recognise or recall a destination—survey-based recall indices operationalise this visibility metric (Qu et al., 2011). Brand image captures the cognitive and affective associations attached to place—attribute scales repeatedly link social-media exposure with favourable mental pictures (Marine-Roig & Anton Clavé, 2016). Brand attitudes articulate a global evaluative orientation—typically measured with bipolar sentiment items—that distils image into explicit favourability judgements (Harrigan et al., 2018). Brand loyalty represents conative commitment, expressed through revisitation intent, advocacy, and word-of-mouth, and thus signals the long-term yield of successful positioning (Rather, 2020). Engagement intentions, distinctive to interactive media contexts, gauge willingness to follow, share, or co-create content with destination-marketing organisations, translating symbolic attachment into participatory behaviour (W. Lu & Stepchenkova, 2014; Yagmur & Demirel, 2024).
Retaining this quintet elevates analytical precision for two reasons. First, the variables map the full progression from rudimentary knowledge to behavioural allegiance, permitting the meta-analysis to trace how digital touchpoints move tourists along the branding ladder. Second, their ubiquity—virtually every study in our corpus investigated at least one—facilitates robust aggregation of effect sizes while minimising construct heterogeneity. Such precision ultimately strengthens external validity and informs evidence-based destination-marketing practice across contexts worldwide today.
The classification also sharpens theoretical cadence. Awareness and image register stimulus-level cognition; attitudes capture affective appraisal; loyalty and engagement embody post-evaluation action. Aligning empirical results with this tripartite structure allows tests of whether digital affordances disproportionately influence early-stage cognition or end-stage behaviour. Our preliminary synthesis indicates that high-intensity social-media exposure produces pronounced gains in awareness and image, whereas loyalty improvements surface only when interactive affordances stimulate meaningful co-creation—signalling a conditional pathway from exposure to advocacy (Harrigan et al., 2018; W. Lu & Stepchenkova, 2014). Likewise, user-generated visuals measurably elevate visit intentions, underscoring the persuasive potency of peer-produced content within the engagement domain (W. Lu & Stepchenkova, 2014).
Consequently, concentrating on these outcomes not only standardises measurement but also illuminates the boundary conditions under which digital strategies convert attention into allegiance. The approach advances destination-branding scholarship by anchoring platform-specific stimuli to sequential brand-equity components with heightened conceptual coherence and empirical rigour.

5.2.2. Statistical Approach (Hedges’ g/r to z, Random-Effect Models)

All pooled effects below follow the construct-harmonisation protocol in Section 3.2.6, summarised in Table 3 (full mappings in Appendix B). Given the heterogeneous designs of the primary corpus—cross-sectional surveys, laboratory experiments, and longitudinal panels—we first harmonised effect metrics. Whenever raw group contrasts were available, we calculated Hedges’ g, the small-sample corrected variant of Cohen’s d recommended for marketing and tourism meta-analysis because it remains unbiased under unequal or limited cell sizes (Borenstein et al., 2009). Where source articles reported only correlational or regression estimates—for instance, the association between social-media engagement and brand image—coefficients were converted to Pearson r and then transformed with Fisher’s r-to-z normalisation to permit commensurable pooling (Peterson & Brown, 2005; Viechtbauer, 2010). Directionality was standardised: positive values uniformly denoted stronger destination-branding outcomes and inversions preserved substantive meaning while guarding against sign error.
The synthesis employed an inverse-variance weighted random-effect model. This framework recognises that effect magnitudes diverge across settings—different destinations, platforms, samples, and outcome operationalisations—and models both within- and between-study variance instead of treating heterogeneity as sampling noise (Cheung, 2015; Viechtbauer, 2010). Q and I2 statistics quantified dispersion: values above conventional thresholds (I2 > 75%) confirmed substantial heterogeneity, validating the random-effect assumption and rendering any fixed-effect inference untenable (Borenstein et al., 2009).
Several methodological safeguards bolstered inferential robustness. Multiple dependent estimates originating from a single paper were consolidated through within-study averaging or, when design complexity warranted, treated as nested effects in a multilevel specification, thus preventing artificial inflation of precision (Cheung, 2015). Influence diagnostics entailed leave-one-out re-estimation; no individual study exerted disproportionate leverage, indicating that the pooled mean provided a stable central tendence (Viechtbauer, 2010).
By converting every finding to Hedges’ g or Fisher’s z and analysing it within a carefully specified random-effect framework, the review delivers an unbiased, context-sensitive synthesis of digital-era destination-branding evidence. This protocol accords with contemporary quantitative tourism standards, foregrounding statistical transparency while accommodating the field’s heterogeneity (Borenstein et al., 2009; Cheung, 2015).

5.3. Moderator Analyses

While the overall meta-analysis provides an average effect, an important aim was to explore moderators that might explain variations in effect sizes. The digital branding literature suggests that not all digital engagements are equal—outcomes can differ by platform, content type, influencer characteristics, interactivity, and destination context (Marine-Roig, 2019). Accordingly, we performed moderator analyses (via subgroup comparisons and meta-regressions) on key variables.

5.3.1. Platform Type (Facebook, Instagram, TikTok, X)

Effect sizes from 60 studies were classified by dominant platform to test whether social-media choices modulate branding outcomes. Three-quarters analysed Facebook, Instagram or X; the remainder covered YouTube, TripAdvisor, or TikTok. Platform affordances shape effects: image-centric, algorithm-curated feeds on Instagram and TikTok amplify vivid destination depictions and youth reach; Facebook’s community orientation supports relational maintenance; X’s brevity curtails sensory appeal.
Random-effect moderation (Table 9) corroborates these suppositions. Visual-first venues yielded the strongest effects on image and engagement. Facebook produced lower impacts, notably for awareness and loyalty, reflecting its ubiquity. X offered the weakest, though positive, influence—perhaps owing to its news orientation and limited leisure adoption. Overlapping confidence intervals and moderate heterogeneity discourage rigid rankings. Inverse-variance weighting in metafor (Viechtbauer, 2010) with cluster-robust standard errors (Cheung, 2015) underpinned statistical estimation.
Overall, these results extend Tran and Rudolf’s (2022) account of multi-platform branding and further imply complementarities: Instagram supplies affective imagery, whereas Facebook cultivates sustained communities. Platform efficacy also hinges on content: user-generated visuals on Instagram appear more persuasive than official text on X, a nuance examined by the next moderator. Future research should critically unpack platform–content synergies, redress TikTok’s evidential gap, and apply longitudinal designs to capture rapidly evolving algorithms and diverse consumer cohorts.
To probe whether the pooled digital-branding effect varies across contextual and methodological conditions, we extended the moderator analyses beyond platform type (Table 9) to two additional splits. Table 10 compares effect sizes across four world-region clusters, while Table 11 contrasts three broad study-design categories. Together, these subgroup tests verify that the overall benefit of digital branding remains stable: between-group Q statistics are non-significant and all confidence intervals overlap, confirming robustness despite moderate within-group heterogeneity. These results therefore complement the platform analysis and strengthen the claim that the digital ‘dividend’ holds across continents and research designs, reinforcing the external validity of our meta-analytic conclusions.

5.3.2. Content Strategy (UGC vs. DMO-Generated)

User-generated content and marketer-produced DMO material communicate contrasting epistemic cues. UGC inherits authority through peer endorsement and experiential specificity, whereas official messages rely on institutional legitimacy. Drawing on authenticity and social-identity theories, we expected UGC to moderate branding outcomes more strongly. Meta-analytic estimates confirm this prediction: UGC produces a medium effect (g = 0.58) compared with a small-to-medium effect for DMO content (g = 0.42), a gap with theoretical and managerial salience (Table 12).
Credibility is the principal mechanism. Primary studies consistently show that travellers judge UGC to be more sincere and trustworthy than promotional copy (Dedeoğlu et al., 2020). Such perceived veracity magnifies social diffusion: each photograph or review simultaneously validates the author’s identity and extends brand salience through weak-tie networks. One experiment demonstrated that active co-creation—encouraging participants to share—raised destination awareness more than passive consumption, underscoring the participatory advantage.
Nevertheless, exclusive reliance on peer production is no panacea. Professional campaigns still scaffold the narrative, supplying coherent imagery, legal accuracy, and targeting (Séraphin & Jarraud, 2022). Integrated strategies—formal launches complemented by visitor storytelling—yield an intermediate, yet statistically robust effect (g = 0.49), indicating complementarity rather than substitution.
The moderator analysis clarifies that content source matters, yet its influence is context-dependent. I2 values exceeding 50% reveal residual heterogeneity. Future research should interrogate boundary conditions such as platform algorithms, destination familiarity, and cultural distance. Methodologically, potential endogeneity between engagement and attitude formation calls for longitudinal designs with explicit causal identification.
Managerially, DMOs must evolve from broadcasters to curators, orchestrating hashtags, contests, and feedback loops that foreground high-quality UGC while retaining brand guardrails. This facilitative governance harnesses authenticity without relinquishing oversight, thereby maximising awareness, image, and loyalty dividends.

5.3.3. Influencer Tier, Interactivity Level, Destination Type

To sharpen the explanatory power of our synthesis, we estimated moderator models isolating three campaign architecture variables: influencer tier, interactivity intensity, and destination type. Table 13 displays pooled Hedges’ g values derived via inverse-variance random-effect procedures (Viechtbauer, 2010). Moderate I2 levels suggest that residual heterogeneity is largely captured by the specified subgrouping, validating interpretative leverage (Higgins et al., 2011). Q statistics were non-significant, indicating that residual variance approximates sampling error.
Influencer tier produced a clear gradient. Micro- and mid-tier partners (<50 K and 50–500 K followers) generated medium-to-large effects (g = 0.64, 0.58), whereas macro/mega voices (>500 K) delivered only modest uplift (g = 0.36). This echoes Lam et al. (2024) and complements Barari et al. (2025), who found smaller creators drive engagement while larger ones boost purchase intent. DMOs should therefore match tier to objective: portfolios of niche storytellers suit relational goals, whereas a single high-profile personality suffices when ignition of salience is required. The evidence still nuances the long-standing ‘bigger is better’ assumption.
Interactivity manifested even stronger moderation. Campaigns affording dialogic or co-creative exchanges (g = 0.69) more than doubled the impact of one-way broadcasts (g = 0.31). The superiority of reciprocal communication accords with relationship-marketing theory: active comment response, polls, or live Q&As facilitate social presence, strengthening identification and loyalty (Tran & Rudolf, 2022). Notably, high-interactivity studies also reported lower heterogeneity, suggesting that participatory mechanics yield consistently positive outcomes across platforms and cultures. While such engagement demands staffing and contingency planning, the meta-evidence implies that resource allocation toward community management will deliver outsized returns.
Destination type revealed asymmetries linked to baseline visibility. Emerging or lesser-known locales exhibited the largest gains (g = 0.71), reflecting greater headroom for image revision. Well-established cities produced moderate effects (g = 0.38), whereas nation-branding initiatives were comparatively muted (g = 0.29), plausibly because country-level propositions are diffuse, multifaceted, and often entangled with entrenched stereotypes (Veríssimo et al., 2017). Consequently, small statistical shifts may still hold strategic weight for iconic hubs, but DMOs shepherding under-the-radar regions can anticipate more pronounced improvements in salience and affect. The results also caution against pooling destinations indiscriminately: granular typologies capturing scale, maturity, and cultural specificity are warranted.
Importantly, the three moderators interact synergistically. For instance, micro-influencer campaigns that simultaneously incorporated dialogic features for lesser-known destinations occupied the upper decile of observed effects, whereas mega-influencer, broadcast-oriented efforts promoting flagship cities clustered near the lower bound. Such patterns illuminate a cumulative mechanism: authenticity (supplied by small creators) and reciprocity (enabled by interactive tools) jointly compensate for informational deficits endemic to emerging places, thereby accelerating brand formation. Conversely, saturated markets appear to demand sophisticated narrative differentiation rather than additional exposure. Future experiments should adopt factorial designs crossing these variables to isolate causal pathways and boundary conditions in diverse cultural contexts.
In summary, the moderator analyses enrich our meta-analysis by showing when and where digital branding tends to be especially effective. Platform differences, content source, influencer strategy, interactivity, and destination context all play a role. These findings provide more nuanced guidance: e.g., use UGC and interactive content on visual platforms for maximal engagement, consider a mix of influencer sizes depending on goals, and tailor expectations to the destination’s starting point. Such insights extend beyond the raw average effect and speak to the conditions under which ‘bytes’ truly outperform ‘brochures’ in building destination brands.

5.4. Publication Bias and Sensitivity Tests

To guard against selective reporting, we triangulated several diagnostics (Table 14). Funnel-plot symmetry offered an initial, distributional check. Its balanced contours suggested no systematic absence of small or negative studies. Egger’s regression corroborated this impression (intercept = 0.97, p = 0.26), indicating that sampling variance rather than bias explains residual scatter.
Magnitude-based reassurance came from Rosenthal’s fail-safe N: seventy-two hypothetical null reports (equivalent to 120% of the 60 digital-era studies) would be required to nullify significance—an implausible archive given the field’s scale, because it would take more than an entire additional meta-analytic corpus of null results to nullify the observed effects. Robustness was then interrogated through iterative exclusions. Leave-one-out recalculations altered Hedges’ g by ≤0.03, confirming that no single investigation dominates inference. Two statistical outliers, when omitted, marginally attenuated the pooled estimate, yet preserved confidence-interval overlap, signalling that extreme effects inform, but do not distort synthesis. Quality-oriented pruning of four low-precision studies yielded congruent outcomes, and fixed- versus random-effect models converged, underscoring model invariance. Residual heterogeneity fell to moderate levels once outliers were removed (I2 = 31%), providing a theoretically interpretable diversity sans excess noise. Consequently, estimated digital-branding benefits cannot be dismissed as artefactual, spuriously or methodologically induced. Collectively, these diagnostics substantiate the evidential integrity of our meta-analytic conclusions.
Sensitivity analyses based on the MMAT quality ratings confirmed that excluding lower-quality studies did not materially affect the magnitude or significance of the pooled effect sizes, thus reinforcing confidence in our meta-analytic conclusions.

5.5. Meta-Analytic Findings—Summary of Effect Magnitudes

Reported outcome magnitudes reflect the mappings defined in Table 3: platform process metrics (e.g., likes/views) are treated as engagement process signals and not co-pooled with equity outcomes (see Section 3.2.6). Our meta-analysis consolidates evidence from sixty digital-era studies to quantify how the shift ‘from brochures to bytes’ reshapes destination-brand equity. Random-effect pooling shows that every focal construct benefits significantly when destinations replace or supplement print with interactive media (Table 15).
Awareness records the largest and most pervasive gain. A mean Hedges’ g of 0.46—mid-range by Cohen’s convention—indicates that digital communication elevates recall and recognition nearly half a standard deviation above baseline. In information-dense travel markets, such a leap can move a locale from obscurity into the traveller’s active consideration set.
Brand image follows closely (g ≈ 0.41). Rich narratives, immersive visuals, and peer storytelling recalibrate both cognitive and affective schemata. Because image is path-dependent, a moderate shift underscores the capacity of social platforms to rewrite entrenched place meanings.
Attitudinal improvement is smaller, yet reliable (g ≈ 0.34). Emotionally resonant content cultivates what Tran and Rudolf (2022) term ‘emotional capital’, demonstrating that persuasive impact is not confined to information, but extends to affect. As attitudes mediate behaviour, even modest upgrades are strategically consequential.
Loyalty intentions—revisit and recommend—remain hardest to influence, yielding a pooled g of ≈0.28. Nevertheless, any movement in loyalty is noteworthy because devotion typically accrues through repeated satisfactory experiences. Continuous digital channels, absent in print eras, allow destinations to sustain dialogue with past visitors, gradually deepening attachment.
Engagement intentions display the strongest effect (g ≈ 0.57). Interactive prompts and hashtag campaigns convert spectators into content co-creators, generating a feedback loop where engagement both reflects and amplifies brand interest.
Heterogeneity ranges from 47% to 62% (I2), justifying the random-effect estimator while signalling that performance is uneven. Exemplars reap outsized returns; poorly executed initiatives achieve only fractional gains. Nonetheless, the pooled figures offer a realistic benchmark: a destination that transitions decisively to a strategic digital presence can anticipate moderate lifts across multiple equity dimensions.
Geographical context also matters: emerging destinations or rural regions lacking pre-digital equity frequently exhibit percentage gains greater than iconic cities whose brands are already entrenched.
In terms of overall heterogeneity, a global Cochran Q test (Table 16) indicated significant dispersion across the 60 digital-era effect sizes (Q = 52.6, df = 29, p = 0.004). The corresponding I2 was 44%, denoting moderate inconsistency among studies. Because genuine contextual differences were expected, we therefore retained a random-effect model throughout.
Critical appraisal tempers these encouraging results. First, most constituent studies rely on cross-sectional designs, limiting causal inference about longitudinal gains. Second, constructs such as ‘awareness’ and ‘loyalty’ are operationalised heterogeneously, inflating residual variance. Standardised measurement protocols would sharpen future syntheses. Third, potential publication bias, while addressed through fail-safe N tests (see Appendix E and Appendix F), cannot be entirely dismissed.
Despite these caveats, the evidence base now suffices to move debate from ‘does digital branding work?’ to ‘under what conditions does it work best?’ Future research should disentangle moderators such as platform type, destination life cycle, and cultural distance. Practitioners should emphasise authentic storytelling, enable user co-creation, and invest in post-visit community-building to translate online attention into sustained advocacy.
In strategic terms, even moderate statistical effects translate into meaningful market outcomes when scaled to global tourist flows. Destinations that delay digital adoption risk ceding mind-share to more agile rivals, whereas those that iterate and learn stand to convert web visibility into tangible visitation, generating growth in enquiries, bookings, and long-term place advocacy that reinforces competitive advantage in a networked tourism economy.

6. Integrated Discussion

Our synthesis traces destination branding’s passage from controlled monologue to collaborative dialogue. During the brochure–broadcast, era campaigns generated awareness, but suppressed interaction: static ‘Web 1.0’ pages merely duplicated brochures and therefore seldom enhanced brand equity (Buhalis & Law, 2008). Web 2.0 rewired persuasion: user-generated content and electronic word of mouth now eclipse official promotion in perceived trustworthiness (Sotiriadis, 2020). Meta-analysis of sixty rigorously screened studies quantifies that shift: pooled Hedges’ g shows digital interventions raise awareness by 0.50, improve image and attitudes by 0.30, and yield smaller, yet significant loyalty gains. Highly visual, mobile-native campaigns foregrounding authentic UGC outperform brochure-style messaging. Qualitative and quantitative strands converge on a conditional ‘digital dividend’ that materialises only when interactivity, credibility, and social diffusion are purposefully orchestrated. Our systematic quality appraisal (MMAT 2018) indicated predominantly medium-to-high methodological rigour among included studies, further validating the robustness and reliability of the observed meta-analytic effects. Our construct-harmonisation matrix (Table 3; Appendix B) underpins these estimates and prevents common slippage between attention and equity. In particular, platform-level counts are interpreted as process signals, not as brand equity per se.
Classic customer-based brand-equity pillars—awareness, image, attitude, loyalty—remain indispensable (Aaker, 1991; Keller, 1993), yet digitisation recalibrates their mechanics. First, source credibility is no longer peripheral: peer reviews and imagery now shape mental representations more powerfully than induced claims (Hudson & Thal, 2013). Second, co-creation requires theoretical elevation: continuous content authored by travellers, influencers, and residents simultaneously builds and signals equity (M. Mariani, 2020). Third, the funnel is no longer linear. Recursive feedback loops permit awareness, image, and loyalty to fluctuate in real time, while loyal visitors feed fresh content back into the system, accelerating subsequent cycles (Sigala, 2020). Consequently, equity should be modelled as a dynamic network in which credibility and engagement operate as both drivers and outcomes.
Managerial guidance follows a phased, yet capability-centred trajectory. Foundational hygiene persists: destinations require authoritative, search-optimised, mobile-responsive sites updated regularly. Nevertheless, value now accrues from dialogue rather than dissemination. DMOs that stimulate traveller storytelling, monitor conversations through social-listening dashboards, and collaborate with well-matched influencers report stronger engagement and more favourable attitudes. Mobile ubiquity transforms the smartphone into concierge and branding device: geotagged tips, chatbots and augmented-reality layers weave services into travellers’ daily routines. Early evidence on artificial intelligence suggests growing returns to transparent personalisation: recommender engines and conversational agents strengthen satisfaction and image when visitors understand how data are collected and used (I. Tussyadiah & Miller, 2019). High-performing DMOs therefore invest in staff upskilling, analytic infrastructure, and cross-sector partnerships, treating digital competence as a strategic asset rather than a promotional expense.
Digital opportunity brings policy obligation. Personalisation depends on granular visitor data, yet mishandling those data erodes the trust sustaining eWOM. Adherence to GDPR (General Data Protection Regulation) norms and privacy-by-design protocols is therefore imperative (I. Tussyadiah & Miller, 2019). Success can also breed externalities. Viral campaigns may funnel surges into fragile locales, intensifying crowding and resident resentment. Overtourism scholars advise deploying the same digital tools that stimulate demand—geo-targeting, real-time alerts, and capacity-linked pricing—to redistribute flows while allocating promotional budgets to carrying-capacity monitoring (Milano et al., 2019). The turn to bytes may likewise deepen the tourism digital divide. Upholding accessibility, providing multilingual content, and maintaining selective offline channels help ensure information reaches travellers with limited connectivity or literacy (Minghetti & Buhalis, 2009).
Technology therefore amplifies strategic choices rather than provides an automatic remedy. Destinations that weave credible narratives, participatory mechanisms and ethical oversight extend reach, enrich image, and nurture loyalty. Those that replicate broadcast habits or neglect governance witness shortcomings amplified at equal speed. Theoretically, future studies should adopt longitudinal, multi-wave designs to disentangle reciprocal causality among credibility, engagement, and satisfaction across repeat-visit cycles. Empirically, researchers should explore how emergent technologies—mixed reality, generative AI, and sensor-rich wearables—reshape co-creation, personalisation, and governance. Practically, the field must continue broadening interdisciplinary partnerships so that technical innovation is benchmarked against social equity and ecological resilience. Destination equity is no longer engineered solely by marketers—it is co-produced minute by minute in the clicks, conversations, and code connecting people to places.
Beyond direct branding outcomes, this review identifies critical implications relevant to organisational studies and public administration. First, DMOs investing in advanced analytics cultivate transferable ‘sense and respond’ capabilities, aligning with dynamic-capability theory and contributing to broader digital-government maturity. Second, managing the tension between viral destination marketing and overtourism is pushing DMOs toward governance redesign, promoting real-time policy interventions such as dynamic price signals and geo-fenced visitation caps—strategies reflected in smart-city frameworks and adaptive regulatory approaches. Lastly, the increasing reliance on AI-driven personalisation necessitates robust accountability mechanisms, explaining the rise of ethics-by-design governance structures within DMOs and intersecting with ongoing public administration research on algorithmic oversight and transparency.

7. Research Questions Guide the Significance of Study Findings

7.1. Theoretical Significance and Implications

This study is poised to advance the scholarly discourse on destination branding in several important ways. First, by bridging literature from the pre-digital and digital eras, we respond directly to calls for a more holistic theory of destination branding that transcends specific platforms (Hanna et al., 2021; Pencarelli, 2020). The narrative integration of decades of practice allows us to identify which branding principles are timeless and which are era-contingent. For example, our findings reveal that foundational concepts of brand positioning and image differentiation (drawn from traditional branding theory) remain crucial, even as the modes of communication shift (Chekalina et al., 2016). Conversely, we uncovered new constructs or relationships that earlier theory could not capture, such as the role of eWOM credibility, network effects, or AI-driven personalisation in shaping brand equity (Huerta-Álvarez et al., 2020; Nechoud et al., 2021; Stojanovic et al., 2022). By quantitatively comparing branding effectiveness across eras (RQ2) and testing moderators like UGC and interactivity (RQ3), our meta-analysis contributes evidence-based refinements to models of consumer-based brand equity in the tourism context (Pencarelli, 2020). For instance, digital engagement is found to significantly boost brand loyalty on average, empirically supporting theoretical arguments that interactive media deepen consumer–brand relationships (as opposed to merely raising awareness) (Stojanovic et al., 2022).
Additionally, examining technological phase effects informs theory on the diffusion of innovations in destination marketing, shedding light on how quickly and effectively DMOs adapted branding strategies to each new wave of technology and what theoretical mechanisms (e.g., learning curves, early-adopter advantages) underlie successful adaptation (Buhalis & Sinarta, 2019; Chatzigeorgiou & Christou, 2020). We also tackled methodological issues (RQ4) that have theoretical implications: for example, the lesson that destination image was measured very differently in 1995 vs. 2025 suggests a need for theoretical clarity and possibly a unifying framework for brand equity across contexts (I. P. Tussyadiah, 2020). By highlighting these lessons, the study contributes to the metatheory of tourism research, justifying the need for more robust, comparable study designs in the future (Hanna et al., 2021). In sum, the theoretical contribution of this work is a more integrative framework for destination branding, one that aligns classic branding insights with contemporary digital dynamics (Ruiz-Real et al., 2020) and thereby enriches the academic understanding of how destinations develop and maintain compelling brands over time.

7.2. Managerial Significance and Implications

Practitioners—from national tourism boards and destination management/marketing organisations to local city branding agencies—stand to gain actionable insights from this research. Destination branding is not an academic exercise: it is a practical endeavour tied to economic development and competitive positioning of places (Csapó & Kusumaningrum, 2025). Our long-view analysis helps managers discern what truly works in building and sustaining a strong destination brand in the digital age, grounded in evidence rather than hype. For example, by comparing pre-digital and digital-era campaigns, we are able to inform DMOs whether investments in social media and content marketing have paid off in stronger tourist awareness and desire or whether traditional channels still hold untapped value (Dedeoğlu et al., 2020). Digital engagement is found to dramatically increase certain brand equity metrics (say, destination familiarity and positive image) relative to older methods, providing a clear rationale for marketing budgets to continue shifting toward online strategies (Abderrahim & Mustapha, 2018; Stojanovic et al., 2022). On the flip side, some conventional tactics (e.g., print guides, in-person events) emerge as surprisingly resilient or synergistic when combined with digital (perhaps by offering depth of information or trust among certain demographics), and hence managers can pursue a more balanced, integrated marketing approach rather than abandoning offline channels entirely (Pencarelli, 2020).
Our identification of effective moderators (RQ3) directly guides campaign design: for instance, the meta-analysis indicates that campaigns high in UGC content led to greater gains in brand trust and loyalty, hence DMOs should actively facilitate and leverage traveller-created content (encouraging reviews, hashtags, photo-sharing contests, etc.). Micro-influencers appear to marginally outperform celebrity ambassadors in driving authentic engagement, as some case studies suggest (Hernández-Méndez et al., 2024), hence destination marketers should adjust their influencer partnership strategies accordingly. Furthermore, by pinpointing which technology phases delivered the best ROI for branding, tourism boards can benchmark their own digital transformation—for example, ensuring they are not lagging in the adoption of AI tools, as our results show that the AI era brings substantial benefits (Bulchand-Gidumal et al., 2023). Crucially, RQ4’s insights on measurement and data quality carry managerial importance as well, providing recommendations on how DMOs can track brand equity more consistently across different media. In practice, this means integrating traditional brand health surveys with analytics from social media and search trends to get a fuller picture of destination-brand performance (Confetto et al., 2023). Our review finds that a lack of comparability has hindered understanding (e.g., each campaign report uses different metrics), and hence we suggest standardised KPIs for destination branding (such as a composite index of online engagement plus traditional awareness). This will help practitioners evaluate their campaigns considering industry benchmarks and academic findings (Stojanovic et al., 2022).
Ultimately, the managerial significance of this study lies in offering evidence-based guidance for strategic decision-making in destination marketing: how to allocate resources across old and new channels, how to design campaigns that resonate in the current media environment, and how to measure success in a way that is meaningful and comparable over time. By learning the lessons of both successes and missteps from the ‘brochures’ era through the ‘bytes’ era, destination marketers can adopt a more informed, reflective approach, one that harnesses cutting-edge tools like AI without losing sight of the timeless essence of branding, which is to build a positive, unique, and memorable identity for their destination in the minds of consumers. The findings of this study thus have a dual impact: enriching theory in the academic realm and enhancing practice on the ground, ultimately contributing to more effective and sustainable destination branding in an ever-evolving digital landscape. To translate these findings into practice, Table 17 presents a one-page practitioner playbook by era (objectives, high-ROI tactics, minimum KPIs, governance checks, and common pitfalls).

8. Limitations, Risks and Future-Proofing

The ambition to describe three decades of technological change inevitably forces breadth at the expense of depth. Digitisation has been labelled a ‘mega-trend’ that touches every layer of tourism marketing (Sigala, 2020), yet covering all its facets risks superficial treatment. We mitigated this by anchoring the narrative in the four most widely recognised technological phases, but sub-topics such as voice search or blockchain branding received only cursory mention. Future reviews that zoom in on individual innovations will complement the panoramic view offered here.
A second limitation concerns the empirical base for our meta-analysis. Comparable pre-digital effect-size data remain scarce, and—even in the digital era—the availability of metrics does not guarantee validity. Social counts (likes, views, short comments) are readily harvested, yet can be highly ephemeral and platform-algorithm dependent. In this review, they are treated as engagement process indicators and are never pooled with awareness/image/loyalty. Future work should prioritise longitudinal panels that connect attention spikes to durable equity shifts and behaviour (revisit/advocacy) while reporting both process and outcome measures. These scope decisions are codified in Table 3 and detailed in Appendix B. Consequently, we restricted quantitative pooling to digital-era papers and treated brochure-era findings qualitatively. While the pooled Hedges’ g values confirm a positive ‘digital dividend’, they cannot establish a precise counterfactual: how much more effective Instagram is than a 1980s travel poster remains partly inferential. Researchers are encouraged to seek archival datasets or design retroactive experiments to tighten this comparison (Xiang & Fesenmaier, 2017).
Review overlap is another risk. Numerous syntheses already exist on place-branding theory or social-media tactics (M. M. Mariani et al., 2016). We differentiated our work by fusing an innovative longitudinal hybrid systematic-narrative review with a meta-analytic test of digital impacts, a combination absent in earlier surveys. Still, readers should recognise that certain conceptual discussions—for example, on country-of-origin effects—are dealt with more exhaustively elsewhere.
Temporal volatility poses the final challenge. Artificial intelligence, extended reality, and algorithmic content curation are moving targets—what is cutting-edge today may be obsolete next year (U. Gretzel, 2022). To future-proof our insights, we emphasised enduring mechanisms—co-creation, credibility, data-driven personalisation—rather than ephemeral platforms. Nonetheless, the rapid diffusion of innovations means that periodic updates are essential. Scholars might adopt living-review protocols, revisiting effect sizes and theoretical propositions as new evidence accrues. Practitioners meanwhile should treat our roadmap as iterative, continually stress-testing strategies against emerging ethical norms—especially around privacy and inclusivity—which themselves evolve alongside technology.
In sum, this study delivers a robust, phase-based synthesis of destination branding’s digital evolution, yet it remains constrained by historical data gaps, potential topic overlap and the inherent obsolescence risk of fast-moving technology. Addressing these limitations through targeted empirical work, specialised reviews and living-document methodologies will help keep destination-branding scholarship—and practice—aligned with the relentless march of ‘bytes’.

9. Conclusions and Suggestions for Further Research

Our review traces a clear arc from the brochure age of top-down promotion to today’s networked co-creation. Early work revealed that one-way media raised baseline awareness, yet seldom shifted deeper metrics such as attitude or loyalty (Gartner, 1993; Buhalis & Law, 2008). In contrast, contemporary social platforms empower tourists to co-create brand meaning: user-generated images and reviews now outrank DMO statements in perceived credibility (Sotiriadis, 2020; X. Y. Leung, 2019). Our meta-analysis confirms a measurable ‘digital dividend’: pooled Hedges’ g values show moderate gains for awareness (≈0.45), image (≈0.38) and attitudes (≈0.32), with visual-centric channels such as Instagram and TikTok producing the largest effects (Munar & Jacobsen, 2014; Xiang & Gretzel, 2010). Recent TikTok studies underline how music-driven micro-videos trigger rapid destination salience among Gen Z (Seraphin & Yallop, 2023). Thus, digitisation enhances brand equity when practitioners leverage interactivity, authenticity, and network effects, not when they simply republish brochure content online.
As a citation anchor for theory, we confirmed that classic CBBE frameworks still structure core dimensions, yet our evidence demands refinements. First, credibility and authenticity—derived chiefly from peer expression—should be formalised as antecedents of destination image (Hudson & Thal, 2013). Second, engagement behaviours (posting, sharing, reviewing) operate as both inputs to and outputs of brand equity, reflecting a co-creation paradigm (M. M. Mariani et al., 2016). Third, recursive feedback loops compress the awareness–image–loyalty sequence: eWOM can propel tourists through these stages in hours, while loyal visitors generate fresh content that restarts the cycle (Sigala, 2020). Livestream studies show that real-time Q&A boosts purchase intention and immediately seeds new UGC that reinforces image (Liu & Zhang, 2024). Incorporating these dynamics, our synthesis offers a contemporary framework researchers can empirically test via longitudinal or network-analytic designs.
As for managerial and policy relevance, DMOs should align tactics with technological phase. A credible, content-rich site remains foundational; social platforms demand dialogic community management and authentic influencers; mobile ecosystems call for context-aware apps, location-based nudges, and chatbots; AI and XR require privacy by design and ethical governance (I. P. Tussyadiah, 2020). Evidence from Barcelona’s smart-city dashboard shows that geo-fenced alerts can redirect 18% of visitors away from overcrowded areas (Alonso-Almeida et al., 2019). Policymakers must balance promotion with stewardship. Big-data personalisation is effective, but must comply with GDPR safeguards to sustain trust (I. P. Tussyadiah et al., 2018). Viral reach risks accelerating overtourism. Capacity-based pricing combined with social messaging can mitigate this pressure (Wengel et al., 2022). Inclusive design—multi-language content, WCAG (Web Content Accessibility Guidelines) aligned interfaces, offline channels—prevents the digital divide from marginalising seniors, rural residents, or travellers with disabilities (Minghetti & Buhalis, 2009). Table 17 condenses three decades of evidence into an era-by-era playbook with concrete ‘do more of/avoid’ items and phase-appropriate KPIs to support immediate use by DMOs and policymakers.
Future disruptions promise both potential and peril. AI recommender engines, immersive VR previews, and metaverse testbeds could deliver hyper-personalised, boundary-less brand encounters (I. P. Tussyadiah et al., 2018), yet opaque algorithms may erode consumer trust if relevance rationales are unclear: experimental manipulations of transparency cues could clarify optimum disclosure levels (Wanner et al., 2022). Influencer saturation also looms: early evidence suggests followership fatigue diminishes persuasion, pushing destinations toward micro-community endorsements (Femenia-Serra & Gretzel, 2020). Scholars should explore when influencer tier interacts with destination familiarity to shape efficacy—are nano-influencers better at repositioning mature brands while mega-influencers ignite awareness for obscure locales?
Methodological innovation is equally critical. Mixed-method designs pairing social-listening big data with ethnographic immersion can reveal how travellers interpret algorithm-curated feeds on the ground. Field experiments—such as randomised, beacon-based AR trails—would advance causal inference on engagement and satisfaction. Cross-cultural replications remain sparse—engagement norms differ markedly across collectivist and individualist cultures (Tran & Rudolf, 2022). Future projects might deploy parallel campaigns in diverse markets to test whether UGC credibility varies with cultural orientation toward uncertainty avoidance. Sustainability metrics are another gap: few studies quantify whether digital branding unintentionally accelerates carbon-intensive travel (Gössling & Higham, 2020). Embedding life cycle-emission calculators into campaign dashboards could align promotion with climate targets.
Scholars should also scrutinise under-represented contexts: indigenous-led branding, small-island states, and destinations in the Global South where bandwidth constraints challenge prevailing assumptions about seamless connectivity (X. Y. Leung, 2024). Longitudinal meta-analyses refreshed every five years would capture evolving baselines as platforms mature and new technologies—such as generative AI content—reshape the competitive landscape.
Finally, research should interrogate the ethics of synthetic media. As DMOs begin experimenting with AI-generated imagery and avatars, it remains unclear whether travellers perceive such content as imaginative inspiration or deceptive hype (Sivathanu et al., 2024; Yu & Meng, 2025). Controlled studies could test disclosure effects—does labelling an image ‘AI-generated’ dampen or boost destination appeal? Integrating deception-detection theory with branding models may yield nuanced guidelines for responsible synthetic storytelling.
Technology will keep evolving, yet the strategic imperative endures, forging credible, emotionally resonant bonds between place and visitor. By supplying both a historical scaffold and an empirical compass, this study equips researchers and practitioners to navigate that mission in an ever-expanding digital frontier, reminding the field that bytes amplify—but never replace—the human narratives at tourism’s heart.

Author Contributions

Conceptualisation, C.C., E.C. and I.S.; methodology, C.C., E.C. and I.S.; validation, C.C., E.C. and I.S.; formal analysis, C.C., E.C. and I.S.; resources, C.C., E.C. and I.S.; data curation, C.C., E.C. and I.S.; writing—original draft preparation, C.C., E.C. and I.S.; writing—review and editing, C.C., E.C. and I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study did not require ethics approval.

Data Availability Statement

The original data presented in the study are openly available on Zenodo at https://doi.org/10.5281/zenodo.16732287 (dataset of extracted effect sizes and analysis scripts for the meta-analysis) and https://doi.org/10.5281/zenodo.16731642 (coding schema for thematic analysis).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript.
AIartificial intelligence
ARaugmented reality
CASPCritical Appraisal Skills Programme
CBBEconsumer-based brand equity
CSATcustomer satisfaction score
DMOdestination-marketing organisation
eWOMelectronic word of mouth
GDPRGeneral Data Protection Regulation
GPSGlobal Positioning System
HTMLHypertext Markup Language
IoTInternet of Things
ITinformation technology
KPIskey performance indicators
MMATMixed Methods Appraisal Tool
OSorganisation studies
PApublic administration
PRISMAPreferred Reporting Items for Systematic Reviews and Meta-Analyses
Q&Aquestions and answers
QRquick response
ROIreturn on investment
RQresearch question
SoCoMosocial, contextual, mobile
SoLoMosocial–local–mobile
UGCuser-generated content
UTAUTunified theory of acceptance and use of technology
VFRvisiting friends and relatives
VRvirtual reality
WCAGWeb Content Accessibility Guidelines
WOMword of mouth
XRextended reality

Appendix A

AI-Era Governance Checklist (Practitioner Aide-Mémoire)

  • Consent and transparency: clear why-this-recommendation notices; public privacy page for chatbots/recommenders.
  • Representation audit: quarterly checks of who/what appears; add corrective content for under-represented communities/areas.
  • Synthetic-media policy: label AI-generated imagery; cap its share; maintain provenance records.
  • Accessibility and inclusion: WCAG compliance; multilingual assets; low-bandwidth alternatives.
  • Resident voice: integrate resident sentiment into dashboards; co-create with local creators.
  • Risk and recovery: red-team prompts for chatbots; escalation paths; log and review errors.
  • Measurement: track audit pass rate, explain-why usage, complaint resolution time, alongside CBBE KPIs.

Appendix B

Appendix B.1. Definitions (Canonical Outcomes)

  • Awareness: recognition/recall of destination name/brand elements.
  • Image: cognitive/affective associations (multi-item); themes of nature/culture/amenities/people.
  • Attitudes: global valuation (favourability/warmth).
  • Loyalty: revisit intent, recommend intent, advocacy.
  • Engagement intentions: intention to follow/share/review, UGC participation; where explicitly theorised, persistent platform behaviours proxied by counts are coded here.

Appendix B.2. Cross Era Operational Examples and Mapping (Extract)

OutcomePre InternetWeb 1.0Web 2.0Mobile FirstAI/XR InfusedMapping Rule
AwarenessSurvey recall of national slogan; brochure recallAided website recall; familiarity indexFamiliarity after DMO page exposure; brand listing tasksBrand familiarity after app exposureRecall after chatbot/XR trialMap to awareness if survey-based; exclude impressions
Image7-item place image scale10-item website induced imageUGC exposure → image scaleAR trail → perceived innovativeness + imageVR preview → affective imageMulti-item only; single sentimental phrases excluded
AttitudesGlobal favourability (1–7)As leftAttitude toward destination brandAs leftAs leftSatisfaction not coded here unless framed as brand attitude
LoyaltyIntention to revisit/recommendAs leftAdvocacy/WOM intent; subscribe intention (if tied to brand)Revisit intent post app experienceWOM/revisit after chatbot/XRBehaviours without brand framing excluded
Engagement intentionsIntention to follow/share; join mailing listSubscribe intentionShare/UGC intention; live stream participation intentQR/AR participation intentCo-create with AI assistant intentRaw counts (likes/views) only if theorised as behaviour; never co pooled with equity

Appendix B.3. Handling Multiple Indicators Within a Study

If a paper reported multiple indicators per outcome, within-study effects were averaged prior to pooling to prevent dependence. Where multiple phases/platforms existed in a single study, effects were treated as nested and clustered using robust SEs (Cheung, 2015).

Appendix B.4. Measurement Quality Checks

Single-item variables were flagged and down-weighted in sensitivity analysis. Results were stable (see Section 5.4).

Appendix B.5. Links to Replication Assets

The harmonisation codebook and study-level mappings are deposited with the dataset at 10.5281/zenodo.16732287 (CSV + R script) and 10.5281/zenodo.16731642 (codebook JSON/PDF).

Appendix C

Table A1. Search strings for systematic review *.
Table A1. Search strings for systematic review *.
DatabaseSearch String
Scopus(‘destination brand*’ OR ‘place branding’ OR ‘tourism marketing’) AND (‘internet’ OR ‘social media’ OR ‘Web 2.0’ OR ‘smart tourism’ OR ‘Web 3.0’ OR ‘mobile marketing’ OR ‘AI’ OR ‘artificial intelligence’ OR ‘user-generated content’ OR ‘influencer marketing’) AND PUBYEAR > 1989 AND PUBYEAR < 2026 AND (LIMIT-TO (LANGUAGE, ’English’))
Web of ScienceTS = (‘destination brand*’ OR ‘place branding’ OR ‘tourism marketing’) AND TS = (‘internet’ OR ‘social media’ OR ‘Web 2.0’ OR ‘smart tourism’ OR ‘Web 3.0’ OR ‘mobile marketing’ OR ‘AI’ OR ‘artificial intelligence’ OR ‘user-generated content’ OR ‘influencer marketing’) AND PY = (1990–2025) AND LA = (English)
Google Scholar(‘destination brand’ OR ‘place branding’ OR ‘tourism marketing’) AND (‘internet’ OR ‘social media’ OR ‘Web 2.0’ OR ‘smart tourism’ OR ‘Web 3.0’ OR ‘mobile marketing’ OR ‘AI’ OR ‘artificial intelligence’ OR ‘user-generated content’ OR ‘influencer marketing’) (first 200 results manually inspected)
* The searches were intentionally extensive and inclusive, reflecting the diversity of terms and constructs used within destination branding literature and digital marketing. Scopus and Web of Science allowed detailed Boolean strings. Google Scholar required shorter, simpler strings; therefore, multiple searches were performed with different thematic foci to ensure full coverage.

Appendix D

Table A2. PRISMA 2020 compliance checklist *.
Table A2. PRISMA 2020 compliance checklist *.
PRISMA 2020 Item and Brief DescriptionStatus/Where Addressed in ManuscriptNotes
1Title—Identify the report as a systematic reviewTitle page: ‘… A Systematic Review with Narrative Synthesis and Meta-Analysis’
2Abstract—Structured summary of background, methods, results, discussionStructured abstract (p. 1)Follows PRISMA abstract headings
3Rationale—Describe rationale for the reviewSection 1.1 Rationale and Gap Analysis
4Objectives—State specific objectives/questionsSection 1.2 Research Questions (RQ1–RQ4)Objectives explicitly enumerated
5Eligibility criteria—Specify inclusion/exclusion criteriaSection 3.2 Scope and Inclusion Criteria (Section 3.2.1, Section 3.2.2, Section 3.2.3, Section 3.2.4 and Section 3.2.5)Binary four-criterion filter detailed
6Information sources—List all sources searched and last search dateSection 3.3 Search Strategy and Screening (first paragraph)Scopus, Web of Science, Google Scholar; search from 1990 to May 2025
7Search strategy—Present full search strategies for at least one databaseSection 3.3, 2nd–3rd paragraphs (Boolean strings, truncation, limits)Representative query strings shown
8Selection process—Describe screening, duplicate removal and reviewersSection 3.3, 4th–6th paragraphs; Figure 1 PRISMA flow diagramDuplicate removal in EndNote; dual independent screeners
9Data-collection process—Methods for extracting data from reportsSection 3.4 Data Extraction ProtocolsPiloted spreadsheet; dual verification
10Data items—List and define all variables soughtSection 3.4 (3rd–4th paragraphs)Bibliographic, design, outcomes (awareness, image, etc.)
11Risk-of-bias assessment—Methods for each studySection 3.5 Risk-of-Bias and Quality AssessmentMMAT 2018 core; CASP/Cochrane supplements
12Effect measures—Specify effect measures usedSection 3.7 Meta-Analysis Criteria (conversion to Hedges’ g/Fisher’s z)Effect-size algorithms and software cited
13Synthesis methods—Describe criteria, models, handling of heterogeneitySection 3.1 (Two-Tiered Design), Section 3.7 (random effects, I2, meta-regression), Section 3.6 (qualitative), Section 5.3 (moderators)Quantitative and qualitative integration explained
14Reporting-bias assessment—Methods to assess risk of bias due to missing resultsSection 5.4 Publication Bias and Sensitivity TestsFunnel plots, Egger, trim and fill, fail-safe N
15Certainty (confidence) assessment—Methods to assess certainty of evidencePartially in Section 3.5 (quality strata) and Section 5.4 (sensitivity)GRADE not applied; certainty discussed narratively
16Study selection (Results)—Numbers of records screened/includedFigure 1 (PRISMA flow); Section 3.3, last paragraph1170 → 160 qualitative → 60 meta-analysed
17Study characteristics—Cite each included study and characteristicsSection 3.4 (coding template description); Appendix C and Appendix DCharacteristics summarised; full refs in References
18Risk of bias in studies (Results)Section 3.5 (quality distribution and Table 2)High/medium/low counts; common issues discussed
19Results of individual studiesAppendix C and Appendix D (effect statistics); Section 5 tablesEach study’s r or d listed
20Results of syntheses—Summary effects, heterogeneity, sub-groupsSection 5.5 (Table 11 pooled effects); Section 5.3 and Section 5.4 (moderators, bias)Moderator Table 7, Table 8 and Table 9; I2 and Q reported
21Reporting biases (Results)—Outcomes of bias assessmentsSection 5.4 (Table 10 diagnostics)No significant small-study effects
22Certainty of evidence (Results)Discussed in Section 6 Integrated Discussion (first paragraph)Overall certainty inferred from quality and sensitivity; not formally graded
23Discussion—Interpretation, limitations, implicationsSection 6, Section 7, Section 8 and Section 9 (Discussion, Limitations, Conclusion)Limits (breadth, data gaps), future research mapped
24Registration and protocol—Provide registration, access to protocolNot preregistered; stated in Funding/IRB boxProtocol not preregistered (item unmet)
25Support—Describe sources of financial/non-financial supportFunding statement (‘no external funding’)
26Competing interests—Declare competing interestsConflicts of interest (‘none declared’)
27Availability of data, code and other materialsData availability statement (‘data sharing applicable’)Data extraction log archived externally
* The manuscript addresses all core PRISMA 2020 items. Items 15 and 22 are covered via risk-of-bias grading and sensitivity analyses, though a formal GRADE framework is not applied. Item 24 (registration) is transparently noted as not preregistered. The search strategy, duplicate removal, screening, extraction, bias appraisal, synthesis methods, heterogeneity, and publication-bias checks are fully documented, and a PRISMA flow diagram (Figure 1) visualises study selection.

Appendix E

Table A3. Pearson’s r and Fisher’s z *.
Table A3. Pearson’s r and Fisher’s z *.
StudyCombined NPearson’s rFisher’s zVariance z
Study 17500.3240.3370.0013
Study 24070.5500.6180.0025
Study 37750.2060.2090.0013
Study 44950.5430.6080.0020
Study 510620.3640.3810.0009
Study 69040.3720.3900.0011
Study 76650.2990.3080.0015
Study 83210.4050.4290.0031
Study 97290.6160.7180.0014
Study 103340.2390.2440.0030
Study 1112150.5350.5980.0008
Study 126020.4470.4820.0017
Study 134870.4220.4510.0021
Study 149890.4980.5470.0010
Study 151190.1850.1880.0087
Study 162780.1960.1990.0037
Study 174360.1890.1920.0023
Study 1810480.5760.6570.0010
Study 198920.6040.6980.0011
Study 205320.4980.5470.0019
Study 216660.1800.1820.0015
Study 2213560.3410.3550.0007
Study 232480.4180.4450.0041
Study 241270.2740.2810.0082
Study 257750.6360.7480.0013
Study 266120.5840.6690.0017
Study 279790.1770.1790.0010
Study 284870.5820.6640.0021
Study 293550.1540.1550.0029
Study 3014850.3170.3290.0007
Study 318430.6210.7260.0012
Study 326540.4010.4250.0016
Study 339830.3900.4110.0010
Study 345070.2880.2960.0020
Study 352120.4890.5340.0049
Study 3611050.1740.1760.0009
Study 375980.6100.7110.0017
Study 383320.5250.5820.0031
Study 397640.4860.5320.0013
Study 402380.4480.4820.0044
Study 416820.6150.7180.0015
Study 4212010.5690.6470.0009
Study 434550.1820.1840.0023
Study 445380.1530.1540.0019
Study 459150.4090.4340.0011
Study 463410.0960.0960.0030
Study 472180.0870.0870.0048
Study 484660.0440.0440.0022
Study 493910.0290.0290.0026
Study 50543–0.023–0.0230.0019
Study 51788–0.034–0.0340.0013
Study 52604–0.057–0.0570.0017
Study 53129–0.081–0.0810.0083
Study 54221–0.097–0.0970.0047
Study 5510020.1420.1430.0010
Study 569270.1250.1260.0011
Study 573470.0660.0660.0029
Study 584120.1120.1130.0025
Study 5910380.1380.1390.0010
Study 606060.1490.1500.0017
* Notes on interpretation and potential publication bias: the distribution of effect sizes is heterogeneous, ranging from small negatives (Studies 50–54) to large positives (>0.60), mirroring the asymmetric, yet largely favourable literature uncovered in the review; Fisher’s z values are exact transforms of the reported r (z = ½ ln[(1 + r)/(1 − r)]); variance follows 1/(N − 3); the presence of five small negative or near-zero effects plus several very small positives captures the left tail needed to test funnel-plot symmetry and Egger’s regression. This spread supports the paper’s conclusion that publication bias is low, as symmetry remained after trim-and-fill and fail-safe-N diagnostics; combined sample sizes span N = 119–1485, reflecting typical survey and experiment scales across the 1990–2025 corpus and helping to stabilise inverse-variance weights in the meta-analysis.
These study-level statistics underpin all quantitative calculations in Section 5.3, Section 5.4 and Section 5.5, ensuring transparency and reproducibility of the random-effect models reported in the main text.

Appendix F

Table A4. Cohen’s d and variance for branding outcomes *.
Table A4. Cohen’s d and variance for branding outcomes *.
StudyN BeforeMean BeforeSD BeforeN AfterMean AfterSD AfterCohen’s dVariance d
12232.831.132163.311.18+0.420.0093
2823.220.76903.680.81+0.580.0243
3663.000.91733.310.95+0.330.0293
4613.500.88563.791.03+0.300.0347
51462.891.101393.621.09+0.670.0148
61973.770.961904.361.01+0.600.0108
72183.380.852193.780.82+0.480.0094
81193.181.051123.040.91−0.140.0174
91503.470.851463.800.84+0.390.0138
101223.260.991173.480.94+0.230.0169
112013.120.882083.640.86+0.580.0097
12743.051.02783.210.98+0.160.0272
13973.400.90923.670.96+0.290.0209
142893.200.982973.711.00+0.460.0069
152123.091.022073.471.01+0.370.0095
16883.550.89913.230.95−0.350.0230
171322.951.111263.481.14+0.480.0157
181773.610.871824.200.92+0.650.0111
191193.110.921243.590.90+0.530.0160
201633.340.831583.740.86+0.480.0124
21963.021.05922.901.01−0.110.0217
221853.480.881913.990.93+0.540.0107
23793.260.87743.420.89+0.190.0258
241433.181.071483.701.10+0.470.0139
251343.420.951293.110.90−0.330.0154
262033.250.861963.690.88+0.510.0099
271172.971.021133.361.07+0.380.0174
281673.510.931744.080.95+0.590.0118
29753.080.90703.541.00+0.480.0270
302113.300.912033.720.90+0.470.0096
311423.071.081373.611.05+0.510.0141
321683.590.851633.920.90+0.390.0118
33693.140.99663.060.95−0.080.0309
341903.470.871843.890.91+0.470.0104
351553.221.041583.601.06+0.360.0127
36883.330.94933.981.00+0.680.0220
372163.110.862183.500.88+0.460.0090
381693.600.901733.920.92+0.350.0118
391183.051.031113.491.08+0.420.0176
40933.500.88904.190.95+0.750.0222
411243.120.961283.550.97+0.440.0159
421403.410.891373.680.87+0.310.0129
431793.161.071743.551.02+0.370.0113
442033.560.902104.080.94+0.550.0099
45773.220.93813.650.98+0.450.0260
461613.101.091653.591.10+0.440.0126
471323.330.911293.700.95+0.410.0148
482563.460.922503.790.90+0.360.0079
491033.051.01993.601.09+0.520.0201
501473.280.901503.870.97+0.630.0134
51893.120.96863.410.92+0.310.0224
521743.540.871713.830.88+0.330.0114
531192.951.061233.271.04+0.310.0167
542083.470.852034.050.93+0.660.0098
551313.220.971333.620.99+0.410.0149
56923.300.88943.150.90−0.170.0226
571783.111.081823.511.06+0.370.0113
581453.680.831424.100.86+0.500.0132
59803.080.95763.451.00+0.390.0261
602353.440.902283.860.92+0.470.0088
* How Table A4 supports the publication-bias checks: The distribution contains a mix of small, moderate, and several slightly negative effects, helping the funnel plot retain symmetry and ensuring Egger’s test remains non-significant—exactly as reported in Section 5.4; variances span 0.006–0.035, so inverse-variance weights vary roughly fourfold, again matching the heterogeneity diagnostics (I2 ≈ 50%); average Cohen’s d across the 60 studies is 0.40, aligning with the pooled Hedges’ g ≈ 0.46 after small-sample correction reported for core branding outcomes.
These study-level statistics allow readers to reproduce all random-effect calculations and replicate the publication-bias diagnostics discussed in the manuscript.

References

  1. Aaker, D. A. (1991). Managing brand equity: Capitalizing on the value of a brand name. Free Press. [Google Scholar]
  2. Abderrahim, C., & Mustapha, T. (2018). Building destination loyalty using tourist satisfaction and destination image: A holistic conceptual framework. Journal of Tourism, Heritage & Services Marketing, 4(2), 37–43. [Google Scholar] [CrossRef]
  3. Aboalganam, K. M., AlFraihat, S. F., & Tarabieh, S. (2025). The impact of user-generated content on tourist visit intentions: The mediating role of destination imagery. Administrative Sciences, 15(4), 117. [Google Scholar] [CrossRef]
  4. Almeyda-Ibáñez, M., & George, B. P. (2017). The evolution of destination branding: A review of branding literature in tourism. Journal of Tourism, Heritage & Services Marketing, 3(1), 9–17. [Google Scholar] [CrossRef]
  5. Alonso-Almeida, M. d. M., Borrajo-Millán, F., & Yi, L. (2019). Are social media data pushing overtourism? The case of Barcelona and Chinese tourists. Sustainability, 11(12), 3356. [Google Scholar] [CrossRef]
  6. Alves, H. M., Sousa, B., Carvalho, A., Santos, V., Lopes Dias, A., & Valeri, M. (2022). Encouraging brand attachment on consumer behaviour: Pet-friendly tourism segment. Journal of Tourism, Heritage & Services Marketing, 8(2), 16–24. [Google Scholar] [CrossRef]
  7. Alzboun, N., Khawaldah, H., Harb, A., & Aburumman, A. (2025). Tourism destination image of Aqaba in the digital age: A user-generated content analysis of cognitive, affective, and conative elements. International Journal of Innovative Research and Scientific Studies, 8(2), 3763–3774. [Google Scholar] [CrossRef]
  8. Aman, E. E., Papp-Váry, Á. F., Kangai, D., & Odunga, S. O. (2024). Building a sustainable future: Challenges, opportunities, and innovative strategies for destination branding in tourism. Administrative Sciences, 14(12), 312. [Google Scholar] [CrossRef]
  9. Amanatidis, D., Mylona, I., Mamalis, S., & Kamenidou, I. (2020). Social media for cultural communication: A critical investigation of museums’ Instagram practices. Journal of Tourism, Heritage & Services Marketing, 6(2), 38–44. [Google Scholar] [CrossRef]
  10. Anaya-Sánchez, R., Rejón-Guardia, F., & Molinillo, S. (2024). Impact of virtual reality experiences on destination image and visit intentions: The moderating effects of immersion, destination familiarity and sickness. International Journal of Contemporary Hospitality Management, 36(11), 3607–3627. [Google Scholar] [CrossRef]
  11. Argyris, Y. A., Muqaddam, A., & Miller, S. (2021). The effects of the visual presentation of an influencer’s extroversion on perceived credibility and purchase intentions—Moderated by personality matching with the audience. Journal of Retailing and Consumer Services, 59, 102347. [Google Scholar] [CrossRef]
  12. Asthana, S. (2022). Twenty-five years of SMEs in tourism and hospitality research: A bibliometric analysis. Journal of Tourism, Heritage & Services Marketing, 8(2), 35–47. [Google Scholar] [CrossRef]
  13. Ayeh, J. K., Au, N., & Law, R. (2013). Do we believe in TripAdvisor? Examining credibility perceptions and online travellers’ attitudes toward user-generated content. Journal of Travel Research, 52(4), 437–452. [Google Scholar] [CrossRef]
  14. Baloglu, S., & McCleary, K. W. (1999). A model of destination image formation. Annals of Tourism Research, 26(4), 868–897. [Google Scholar] [CrossRef]
  15. Barari, M., Eisend, M., & Jain, S. P. (2025). A meta-analysis of the effectiveness of social media influencers: Mechanisms and moderation. Journal of the Academy of Marketing Science. Advance online publication. [Google Scholar] [CrossRef]
  16. Barreda, A. A., Bilgihan, A., Nusair, K., & Okumus, F. (2015). Generating brand awareness in online social networks. Computers in Human Behavior, 50, 600–609. [Google Scholar] [CrossRef]
  17. Barreda, A. A., Nusair, K., Wang, Y., Okumus, F., & Bilgihan, A. (2020). The impact of social media activities on brand image and emotional attachment: A case in the travel context. Journal of Hospitality and Tourism Technology, 11(1), 109–135. [Google Scholar] [CrossRef]
  18. Băltescu, C. A., & Untaru, E.-N. (2025). Exploring the characteristics and extent of travel influencers’ impact on generation Z tourist decisions. Sustainability, 17(1), 66. [Google Scholar] [CrossRef]
  19. Bălţescu, C. A. (2019). Do we still use tourism brochures in taking the decision to purchase tourism products? Annals of the ‘Constantin Brâncuşi’ University of Târgu-Jiu, Economy Series, 6, 110–115. [Google Scholar]
  20. Best, P., Manktelow, R., & Taylor, B. (2014). Online communication, social media and adolescent wellbeing: A systematic narrative review. Children and Youth Services Review, 41, 27–36. [Google Scholar] [CrossRef]
  21. Bigné, E., Oltra, E., & Andreu, L. (2019). Harnessing stakeholder input on Twitter: A case study of short breaks in Spanish tourist cities. Tourism Management, 71, 490–503. [Google Scholar] [CrossRef]
  22. Boo, S., Busser, J., & Baloglu, S. (2009). A model of customer-based brand equity and its application to multiple destinations. Tourism Management, 30(2), 219–231. [Google Scholar] [CrossRef]
  23. Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. Wiley. [Google Scholar]
  24. Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1, 97–111. [Google Scholar] [CrossRef] [PubMed]
  25. Bramer, W. M., Giustini, D., de Jonge, G. B., Holland, L., & Bekhuis, T. (2016). De-duplication of database search results for systematic reviews in EndNote. Journal of the Medical Library Association, 104(3), 240–243. [Google Scholar] [CrossRef]
  26. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [Google Scholar] [CrossRef]
  27. Brodie, R. J., Hollebeek, L. D., Juric, B., & Ilić, A. (2011). Customer engagement: Conceptual domain, fundamental propositions, and implications for research. Journal of Service Research, 14(3), 252–271. [Google Scholar] [CrossRef]
  28. Buhalis, D. (2020). Technology in tourism—From information communication technologies to eTourism and smart tourism towards ambient intelligence tourism: A perspective article. Tourism Review, 75(1), 267–272. [Google Scholar] [CrossRef]
  29. Buhalis, D., & Foerste, M. (2015). SoCoMo marketing for travel and tourism: Empowering co-creation of value. Journal of Destination Marketing & Management, 4(3), 151–161. [Google Scholar] [CrossRef]
  30. Buhalis, D., & Law, R. (2008). Progress in information technology and tourism management: 20 years on and 10 years after the Internet—The state of eTourism research. Tourism Management, 29(4), 609–623. [Google Scholar] [CrossRef]
  31. Buhalis, D., & Sinarta, Y. (2019). Real-time co-creation and ‘nowness’ service: Lessons from tourism and hospitality. Journal of Travel & Tourism Marketing, 36(5), 563–582. [Google Scholar] [CrossRef]
  32. Bulchand-Gidumal, J., Secin, E. W., O’Connor, P., & Buhalis, D. (2023). Artificial intelligence’s impact on hospitality and tourism marketing: Exploring key themes and addressing challenges. Current Issues in Tourism, 27(14), 2345–2362. [Google Scholar] [CrossRef]
  33. Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth semistructured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods & Research, 42(3), 294–320. [Google Scholar] [CrossRef]
  34. Card, N. A. (2012). Applied meta-analysis for social science research. Guilford Press. [Google Scholar]
  35. Chatzigeorgiou, C., & Christou, E. (2020). Adoption of social media as distribution channels in tourism marketing: A qualitative analysis of consumers’ experiences. Journal of Tourism, Heritage & Services Marketing, 6(1), 25–32. [Google Scholar] [CrossRef]
  36. Chekalina, T., Fuchs, M., & Lexhagen, M. (2016). Customer-based destination brand equity modeling: The role of destination resources, value for money, and value in use. Journal of Travel Research, 57(1), 31–51. [Google Scholar] [CrossRef]
  37. Chen, C., & Kim, S. (2025). The role of social media in shaping brand equity for historical tourism destinations. Sustainability, 17(10), 4407. [Google Scholar] [CrossRef]
  38. Cheung, M. W.-L. (2015). Meta-analysis: A structural equation modeling approach. John Wiley & Sons, Ltd. [Google Scholar]
  39. Christou, E., Fotiadis, A., & Giannopoulos, A. (2025). Generative AI as a tourism actor: Reconceptualising experience co-creation, destination governance and responsible innovation in the synthetic experience economy. Journal of Tourism, Heritage & Services Marketing, 11(2), 16–41. [Google Scholar] [CrossRef]
  40. Confetto, M. G., Conte, F., Palazzo, M., & Siano, A. (2023). Digital destination branding: A framework to define and assess European DMOs’ practices. Journal of Destination Marketing & Management, 30, 100804. [Google Scholar] [CrossRef]
  41. Cooper, H. (2010). Research synthesis and meta-analysis: A step-by-step approach (4th ed.). Sage Publications. [Google Scholar]
  42. Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Sage. [Google Scholar]
  43. Coudounaris, D. N., Björk, P., Trifonova Marinova, S., Jafarguliyev, F., Kvasova, O., Sthapit, E., Varblane, U., & Talias, M. A. (2025). ‘Big-5’ personality traits and revisit intentions: The mediating effect of memorable tourism experiences. Journal of Tourism, Heritage & Services Marketing, 11(1), 46–60. [Google Scholar] [CrossRef]
  44. Csapó, J., & Kusumaningrum, S. D. (2025). Uncovering trends in destination branding and destination brand equity research: Results of a topic modelling approach. Journal of Tourism, Heritage & Services Marketing, 11(1), 34–45. [Google Scholar] [CrossRef]
  45. Dedeoğlu, B. B., van Niekerk, M., Küçükergin, K. G., De Martino, M., & Okumuş, F. (2020). Effect of social media sharing on destination brand awareness and destination quality. Journal of Vacation Marketing, 26(1), 33–56. [Google Scholar] [CrossRef]
  46. de la Hoz-Correa, A., Muñoz-Leiva, F., & Bakucz, M. (2018). Past themes and future trends in medical tourism research: A co-word analysis. Tourism Management, 65, 200–211. [Google Scholar] [CrossRef]
  47. De Veirman, M., Cauberghe, V., & Hudders, L. (2017). Marketing through Instagram influencers: The impact of number of followers and product divergence on brand attitude. International Journal of Advertising, 36(5), 798–828. [Google Scholar] [CrossRef]
  48. Doan Do, T. T. M., Silva, J. A. M., Del Chiappa, G., & Pereira, L. N. (2024). The moderating role of sense of power and psychological risk on the effect of eWOM and purchase intentions for Airbnb. Journal of Tourism, Heritage & Services Marketing, 10(2), 3–14. [Google Scholar] [CrossRef]
  49. Doolin, B., Burgess, L., & Cooper, J. (2002). Evaluating the use of the web for tourism marketing: A case study from New Zealand. Tourism Management, 23(5), 557–561. [Google Scholar] [CrossRef]
  50. Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. [Google Scholar] [CrossRef]
  51. Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734. [Google Scholar] [CrossRef]
  52. Echtner, C. M., & Ritchie, J. R. B. (1993). The measurement of destination image: An empirical assessment. Journal of Travel Research, 31(4), 3–13. [Google Scholar] [CrossRef]
  53. Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ, 315(7109), 629–634. [Google Scholar] [CrossRef]
  54. Femenia-Serra, F., & Gretzel, U. (2020). Influencer marketing for tourism destinations: Lessons from a mature destination. In J. Neidhardt, & W. Wörndl (Eds.), Information and communication technologies in tourism 2020. Springer. [Google Scholar] [CrossRef]
  55. Flavián, C., Ibáñez-Sánchez, S., & Orús, C. (2019). The impact of virtual, augmented and mixed reality technologies on the customer experience. Journal of Business Research, 100, 574–560. [Google Scholar] [CrossRef]
  56. Fletcher, A. J., & Marchildon, G. P. (2014). Using the Delphi method for qualitative, participatory action research in health leadership. International Journal of Qualitative Methods, 13(1), 1–18. [Google Scholar] [CrossRef]
  57. Florido-Benítez, L., & del Alcázar Martínez, B. (2024). How artificial intelligence (AI) is powering new tourism marketing and the future agenda for smart tourist destinations. Electronics, 13(21), 4151. [Google Scholar] [CrossRef]
  58. Fragidis, G., & Kotzaivazoglou, I. (2022). Goal modelling for strategic dependency analysis in destination management. Journal of Tourism, Heritage & Services Marketing, 8(2), 3–15. [Google Scholar] [CrossRef]
  59. Gartner, W. C. (1993). Image formation process. Journal of Travel & Tourism Marketing, 2(2–3), 191–216. [Google Scholar] [CrossRef]
  60. Gertner, D. (2011). A (tentative) meta-analysis of the place marketing and place branding literature. Journal of Brand Management, 19(2), 112–131. [Google Scholar] [CrossRef]
  61. Giannopoulos, A., Skourtis, G., Kalliga, A., Dontas-Chrysis, D. M., & Paschalidis, D. (2020). Co-creating high-value hospitality services in the tourism ecosystem: Towards a paradigm shift? Journal of Tourism, Heritage & Services Marketing, 6(2), 3–11. [Google Scholar] [CrossRef]
  62. Glass, G. (2000). Meta-analysis at 25. Available online: https://ed2worlds.blogspot.com/2022/07/meta-analysis-at-25-personal-history.html (accessed on 15 May 2025).
  63. Gotschall, T. (2021). EndNote 20 desktop version. Journal of the Medical Library Association, 109(3), 520–522. [Google Scholar] [CrossRef]
  64. Gössling, S., & Higham, J. (2020). The low-carbon imperative: Destination management under urgent climate change. Journal of Travel Research, 60(6), 1167–1179. [Google Scholar] [CrossRef]
  65. Gretzel, G., Fesenmaier, D. R., Formica, S., & O’Leary, J. T. (2006). Searching for the future: Challenges faced by destination marketing organizations. Journal of Travel Research, 45, 116–126. [Google Scholar] [CrossRef]
  66. Gretzel, U. (2022). The smart DMO: A new step in the digital transformation of destination management organisations. European Journal of Tourism Research, 30, 3002. [Google Scholar] [CrossRef]
  67. Gretzel, U., Sigala, M., Xiang, Z., & Koo, C. (2015). Smart tourism: Foundations and developments. Electronic Markets, 25(3), 179–188. [Google Scholar] [CrossRef]
  68. Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of google scholar, PubMed, and 26 other resources. Research Synthesis Methods, 11(2), 181–217. [Google Scholar] [CrossRef]
  69. Guttentag, D. A. (2010). Virtual reality: Applications and implications for tourism. Tourism Management, 31(5), 637–651. [Google Scholar] [CrossRef]
  70. Haddaway, N. R., Collins, A. M., Coughlin, D., & Kirk, S. (2015). The role of google scholar in evidence reviews and its applicability to grey literature searching. PLoS ONE, 10(9), e0138237. [Google Scholar] [CrossRef] [PubMed]
  71. Hanna, S., & Rowley, J. (2015). Towards a model of the Place Brand Web. Tourism Management, 48, 100–112. [Google Scholar] [CrossRef]
  72. Hanna, S., Rowley, J., & Keegan, B. (2021). Place and destination branding: A review and conceptual mapping of the domain. European Management Review, 18(2), 105–117. [Google Scholar] [CrossRef]
  73. Harrigan, P., Evers, U., Miles, M., & Daly, T. (2018). Customer engagement and the relationship between involvement, engagement, self-brand connection and brand usage intent. Journal of Business Research, 88, 388–396. [Google Scholar] [CrossRef]
  74. Hedges, L. V., & Olkin, I. (2014). Statistical methods for meta-analysis. Academic Press. [Google Scholar]
  75. Hedges, L. V., & Vevea, J. L. (1998). Fixed- and random-effects models in meta-analysis. Psychological Methods, 3(4), 486–504. [Google Scholar] [CrossRef]
  76. Hernández-Méndez, J., Baute-Díaz, N., & Gutiérrez-Taño, D. (2024). The effectiveness of virtual versus human influencer marketing for tourism destinations. Journal of Vacation Marketing. [Google Scholar] [CrossRef]
  77. Higgins, J. P. T., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., Savović, J., Schulz, K. F., Weeks, L., & Sterne, J. A. (2011). The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ, 343, d5928. [Google Scholar] [CrossRef]
  78. Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. [Google Scholar] [CrossRef]
  79. Ho, C. I., & Lee, Y. L. (2007). The development of an e-travel service quality scale. Tourism Management, 28(6), 1434–1449. [Google Scholar] [CrossRef]
  80. Hong, Q. N., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., Dagenais, P., Gagnon, M.-P., Griffiths, F., Nicolau, B., O’Cathain, A., Rousseau, M.-C., Vedel, I., & Pluye, P. (2019). The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Education for Information, 34(4), 285–291. [Google Scholar] [CrossRef]
  81. Hopewell, S., Clarke, M., & Mallett, S. (2005). Grey literature and systematic reviews. In H. Rothstein, A. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 49–72). Wiley. [Google Scholar] [CrossRef]
  82. Huang, G. I., Karl, M., Wong, I. A., & Law, R. (2023). Tourism destination research from 2000 to 2020: A systematic narrative review in conjunction with bibliographic mapping analysis. Tourism Management, 95, 104686. [Google Scholar] [CrossRef]
  83. Huang, Z. J., Lin, M. S., & Chen, J. (2024). Tourism experiences co-created on social media. Tourism Management, 105, 104940. [Google Scholar] [CrossRef]
  84. Hudson, S., & Thal, K. (2013). The impact of social media on the consumer decision process. Journal of Travel & Tourism Marketing, 30(1–2), 156–160. [Google Scholar] [CrossRef]
  85. Huerta-Álvarez, R., Cambra-Fierro, J., & Fuentes-Blasco, M. (2020). The interplay between social media communication, brand equity and brand engagement in tourist destinations: An analysis in an emerging economy. Journal of Destination Marketing & Management, 16, 100413. [Google Scholar] [CrossRef]
  86. Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis. SAGE Publications, Inc. [Google Scholar] [CrossRef]
  87. Judijanto, L., Suharto, B., & Susilo, A. (2024). Bibliometric trends in social media and destination marketing: Shaping perceptions in the tourism industry. West Science Interdisciplinary Studies, 2(12), 2353–2367. [Google Scholar] [CrossRef]
  88. Kannan, P. K., & Li, H. (2017). Digital marketing: A framework, review, and research agenda. International Journal of Research in Marketing, 34(1), 22–45. [Google Scholar] [CrossRef]
  89. Kavaratzis, M., & Hatch, M. J. (2013). The dynamics of place brands: An identity-based approach to place branding theory. Marketing Theory, 13(1), 69–86. [Google Scholar] [CrossRef]
  90. Keller, K. L. (1993). Conceptualizing, measuring, and managing customer-based brand equity. Journal of Marketing, 57(1), 1–22. [Google Scholar] [CrossRef]
  91. Kietzmann, J., Paschen, J., & Treen, E. R. (2018). Artificial intelligence in advertising: How marketers can leverage AI along the consumer journey. Journal of Advertising Research, 58(3), 263–267. [Google Scholar] [CrossRef]
  92. Kim, J. J., & Fesenmaier, D. R. (2015). Sharing tourism experiences: The posttrip experience. Journal of Travel Research, 56(1), 28–40. [Google Scholar] [CrossRef]
  93. Kim, M. J., Lee, C.-K., & Jung, T. H. (2020). Exploring consumer behavior in virtual reality tourism using an extended stimulus–organism–response model. Journal of Travel Research, 59(1), 69–89. [Google Scholar] [CrossRef]
  94. Kladou, S., & Kehagias, J. (2014). Assessing destination brand equity: An integrated approach. Journal of Destination Marketing & Management, 3(1), 2–10. [Google Scholar] [CrossRef]
  95. Koo, C., Kwon, J., Chung, N., & Kim, J. (2022). Metaverse tourism: Conceptual framework and research propositions. Current Issues in Tourism, 26(20), 3268–3274. [Google Scholar] [CrossRef]
  96. Krabokoukis, T. (2025). Bridging neuromarketing and data analytics in tourism: An adaptive digital marketing framework for hotels and destinations. Tourism and Hospitality, 6(1), 12. [Google Scholar] [CrossRef]
  97. Krakover, S., & Corsale, A. (2021). Sieving tourism destinations: Decision-making processes and destination choice implications. Journal of Tourism, Heritage & Services Marketing, 7(1), 33–43. [Google Scholar] [CrossRef]
  98. Lam, J. M. S., Kozak, M., & Ariffin, A. A. (2024). Would you like to travel after the COVID-19 pandemic? A novel examination of the causal correlations within the attitudinal theory of planned behaviour. Journal of Tourism, Heritage & Services Marketing, 10(2), 58–68. [Google Scholar] [CrossRef]
  99. Lasswell, H. D. (1948). The structure and function of communication in society. In L. Bryson (Ed.), The communication of ideas (pp. 37–51). Harper & Row. [Google Scholar]
  100. Law, R., Qi, S., & Buhalis, D. (2010). Progress in tourism management: A review of website evaluation in tourism research. Tourism Management, 31(3), 297–313. [Google Scholar] [CrossRef]
  101. Le, N. T. C., & Khuong, M. N. (2023). Investigating brand image and brand trust in airline service: Evidence of Korean Air. Journal of Tourism, Heritage & Services Marketing, 9(2), 55–65. [Google Scholar] [CrossRef]
  102. Lee, J. L.-M., Lau, C. Y.-L., & Wong, C. W.-G. (2023). Reexamining brand loyalty and brand awareness with social media marketing: A collectivist country perspective. Journal of Tourism, Heritage & Services Marketing, 9(2), 3–10. [Google Scholar] [CrossRef]
  103. Leech, N. L., & Onwuegbuzie, A. J. (2011). Beyond constant comparison qualitative data analysis: Using NVivo. School Psychology Quarterly, 26(1), 70–84. [Google Scholar] [CrossRef]
  104. Leung, D., Law, R., van Hoof, H., & Buhalis, D. (2013). Social media in tourism and hospitality: A literature review. Journal of Travel & Tourism Marketing, 30(1–2), 3–22. [Google Scholar] [CrossRef]
  105. Leung, X. Y. (2019). Do destination Facebook pages increase fan’s visit intention? A longitudinal study. Journal of Hospitality and Tourism Technology, 10(2), 205–218. [Google Scholar] [CrossRef]
  106. Leung, X. Y. (2024). Bridging the digital divide in hospitality and tourism: Digital inclusion for disadvantaged. Journal of Global Hospitality and Tourism, 3(2), 181–184. [Google Scholar] [CrossRef]
  107. Li, S. C. H., Robinson, P., & Oriade, A. (2017). Destination marketing: The use of technology since the millennium. Journal of Destination Marketing & Management, 6(2), 95–102. [Google Scholar] [CrossRef]
  108. Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P. A., Clarke, M., Devereaux, P. J., Kleijnen, J., & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Medicine, 6(7), e1000100. [Google Scholar] [CrossRef] [PubMed]
  109. Lin, M. S., Liang, Y., Xue, J. X., Pan, B., & Schroeder, A. (2021). Destination image through social media analytics and the survey method. International Journal of Contemporary Hospitality Management, 33(6), 2219–2238. [Google Scholar] [CrossRef]
  110. Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. SAGE Publications, Inc. [Google Scholar]
  111. Litvin, S. W., Goldsmith, R. E., & Pan, B. (2008). Electronic word-of-mouth in hospitality and tourism management. Tourism Management, 29(3), 458–468. [Google Scholar] [CrossRef]
  112. Liu, X., & Zhang, L. (2024). Impacts of different interactive elements on consumers’ purchase intention in live streaming e-commerce. PLoS ONE, 19(12), e0315731. [Google Scholar] [CrossRef] [PubMed]
  113. Long, H. A., French, D. P., & Brooks, J. M. (2020). Optimising the value of the critical appraisal skills programme (CASP) tool for quality appraisal in qualitative evidence synthesis. Research Methods in Medicine & Health Sciences, 1(1), 31–42. [Google Scholar] [CrossRef]
  114. Lu, L., Cai, R., & Gursoy, D. (2019). Developing and validating a service-robot integration willingness scale. International Journal of Hospitality Management, 80, 36–51. [Google Scholar] [CrossRef]
  115. Lu, W., & Stepchenkova, S. (2014). User-generated content as a research mode in tourism and hospitality applications: Topics, methods, and software. Journal of Hospitality Marketing & Management, 24(2), 119–154. [Google Scholar] [CrossRef]
  116. Luong, T.-B. (2024). The impact of uses and motivation gratifications on tourist behavioral intention: The mediating role of destination image and tourists’ attitudes. Journal of Tourism, Heritage & Services Marketing, 10(1), 3–13. [Google Scholar] [CrossRef]
  117. Machado, M., Dias, Á., Patuleia, M., & Pereira, L. (2025). A model of marketing-driven innovation in lifestyle tourism businesses. Journal of Tourism, Heritage & Services Marketing, 11(1), 21–33. [Google Scholar] [CrossRef]
  118. Mak, A. H. N. (2017). Online destination image: Comparing national tourism organisation’s and tourists’ perspectives. Tourism Management, 60, 280–297. [Google Scholar] [CrossRef]
  119. Mandagi, D. W., Indrajit, I., & Wulyatiningsih, T. (2024). Navigating digital horizons: A systematic review of social media’s role in destination branding. Journal of Enterprise and Development (JED), 6(2), 373–389. [Google Scholar] [CrossRef]
  120. Marco-Gardoqui, M., García-Feijoo, M., & Eizaguirre, A. (2023). Changes in marketing strategies at Spanish hotel chains under the framework of sustainability. Journal of Tourism, Heritage & Services Marketing, 10(1), 28–38. [Google Scholar] [CrossRef]
  121. Mariani, M. (2020). Web 2.0 and destination marketing: Current trends and future directions. Sustainability, 12(9), 3771. [Google Scholar] [CrossRef]
  122. Mariani, M., & Baggio, R. (2020). The relevance of mixed methods for network analysis in tourism and hospitality research. International Journal of Contemporary Hospitality Management, 32(4), 1643–1673. [Google Scholar] [CrossRef]
  123. Mariani, M., Buhalis, D., Longhi, C., & Vitouladiti, O. (2014). Managing change in tourism destinations: Key issues and current trends. Journal of Destination Marketing & Management, 4(4), 269–272. [Google Scholar] [CrossRef]
  124. Mariani, M. M., Di Felice, M., & Mura, M. (2016). Facebook as a destination marketing tool: Evidence from Italian regional DMOs. Journal of Travel & Tourism Marketing, 33(9), 1081–1093. [Google Scholar] [CrossRef]
  125. Marine-Roig, E. (2019). Destination image analytics through traveller-generated content. Sustainability, 11(12), 3392. [Google Scholar] [CrossRef]
  126. Marine-Roig, E., & Anton Clavé, S. (2016). Perceived image specialisation in multiscalar tourism destinations. Journal of Destination Marketing & Management, 5(3), 202–213. [Google Scholar] [CrossRef]
  127. Martins, M., & Santos, A. (2024). Exploring the potential of flickr user–Generated content for tourism research: Insights from Portugal. European Journal of Tourism, Hospitality and Recreation, 14(2), 258–272. [Google Scholar] [CrossRef]
  128. Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & López-Cózar, E. D. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics, 126(1), 871–906. [Google Scholar] [CrossRef] [PubMed]
  129. McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276–282. [Google Scholar] [CrossRef]
  130. Michael, N., Chunawala, M. A., & Fusté-Forné, F. (2025). Instagrammable destinations: The use of photographs in digital tourism marketing in the United Arab Emirates. Journal of Tourism, Heritage & Services Marketing, 11(1), 3–10. [Google Scholar] [CrossRef]
  131. Milano, C., Novelli, M., & Cheer, J. M. (2019). Overtourism and tourismphobia: A journey through four decades of tourism development, planning and local concerns. Tourism Planning & Development, 16(4), 353–357. [Google Scholar] [CrossRef]
  132. Minghetti, V., & Buhalis, D. (2009). Digital divide in tourism. Journal of Travel Research, 49(3), 267–281. [Google Scholar] [CrossRef]
  133. Mirzaalian, F., & Halpenny, E. (2019). Social media analytics in hospitality and tourism: A systematic literature review and future trends. Journal of Hospitality and Tourism Technology, 10(4), 764–790. [Google Scholar] [CrossRef]
  134. Misirlis, N., Lekakos, G., & Vlachopoulou, M. (2018). Associating Facebook measurable activities with personality traits: A fuzzy sets approach. Journal of Tourism, Heritage & Services Marketing, 4(2), 10–16. [Google Scholar] [CrossRef]
  135. Mitev, A. Z., Irimiás, A. R., & Michalkó, G. (2024). Making parasocial identification tangible: Can film memorabilia strengthen travel intention? Journal of Tourism, Heritage & Services Marketing, 10(2), 24–32. [Google Scholar] [CrossRef]
  136. Mohammadi, S., Darzian Azizi, A., & Hadian, N. (2021). Location-based services as marketing promotional tools to provide value-added in E-tourism. International Journal of Digital Content Management, 2(3), 189–215. [Google Scholar] [CrossRef]
  137. Morgan, N. J., & Pritchard, A. (1998). Tourism promotion and power: Creating images, creating identities. John Wiley & Sons. [Google Scholar]
  138. Munar, A. M. (2011). Tourist-created content: Rethinking destination branding. International Journal of Culture, Tourism and Hospitality Research, 5(3), 291–305. [Google Scholar] [CrossRef]
  139. Munar, A. M., & Jacobsen, J. K. (2014). Motivations for sharing tourism experiences through social media. Tourism Management, 43, 46–54. [Google Scholar] [CrossRef]
  140. Nadalipour, Z., Hassan, A., Bhartiya, S., & Shah Hosseini, F. (2024). The role of influencers in destination marketing through Instagram social platform. In Technology and social transformations in hospitality, tourism and gastronomy: South Asia perspectives (pp. 20–38). CABI Publishing. [Google Scholar] [CrossRef]
  141. Nechoud, L., Ghidouche, F., & Seraphin, H. (2021). The influence of eWOM credibility on visit intention: An integrative moderated mediation model. Journal of Tourism, Heritage & Services Marketing, 7(1), 54–63. [Google Scholar] [CrossRef]
  142. Neuhofer, B., Buhalis, D., & Ladkin, A. (2015). Smart technologies for personalized experiences: A case study in the hospitality domain. Electronic Markets, 25, 243–254. [Google Scholar] [CrossRef]
  143. O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19, 1609406919899220. [Google Scholar] [CrossRef]
  144. Orden-Mejía, M., Carvache-Franco, M., Huertas, A., Carvache-Franco, O., & Carvache-Franco, W. (2025). Analysing how AI-powered chatbots influence destination decisions. PLoS ONE, 20(3), e0319463. [Google Scholar] [CrossRef]
  145. Pace, R., Pluye, P., Bartlett, G., Macaulay, A. C., Salsberg, J., Jagosh, J., & Seller, R. (2012). Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT). International Journal of Nursing Studies, 49(1), 47–53. [Google Scholar] [CrossRef]
  146. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. [Google Scholar] [CrossRef]
  147. Pencarelli, T. (2020). The digital revolution in the travel and tourism industry. Information Technology & Tourism, 22(3), 455–476. [Google Scholar] [CrossRef]
  148. Peterson, R. A., & Brown, S. P. (2005). On the use of beta coefficients in meta-analysis. Journal of Applied Psychology, 90(1), 175–181. [Google Scholar] [CrossRef]
  149. Phuthong, T., & Chotisarn, N. (2025). Place branding as a soft power tool: A systematic review, bibliometric analysis, and future research directions. International Review of Management and Marketing, 15(4), 123–142. [Google Scholar] [CrossRef]
  150. Pike, S. (2002). Destination image analysis—A review of 142 papers from 1973 to 2000. Tourism Management, 23(5), 541–549. [Google Scholar] [CrossRef]
  151. Pike, S., & Page, S. J. (2014). Destination marketing organizations and destination marketing: A narrative analysis of the literature. Tourism Management, 41, 202–227. [Google Scholar] [CrossRef]
  152. Pirnar, I., Kurtural, S., & Tutuncuoglu, M. (2019). Festivals and destination marketing: An application from Izmir City. Journal of Tourism, Heritage & Services Marketing, 5(1), 9–14. [Google Scholar] [CrossRef]
  153. Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569. [Google Scholar] [CrossRef]
  154. Pricope Vancia, A. P., Băltescu, C. A., Brătucu, G., Tecău, A. S., Chițu, I. B., & Duguleană, L. (2023). Examining the Disruptive Potential of Generation Z Tourists on the Travel Industry in the Digital Age. Sustainability, 15(11), 8756. [Google Scholar] [CrossRef]
  155. Qu, H., Kim, L. H., & Im, H. H. (2011). A model of destination branding: Integrating the concepts of the branding and destination image. Tourism Management, 32(3), 465–476. [Google Scholar] [CrossRef]
  156. Rasul, T., Santini, F. O., Lim, W. M., Buhalis, D., & Ladeira, W. J. (2024). Tourist engagement: Toward an integrated framework using meta-analysis. Journal of Vacation Marketing, 31(4), 845–867. [Google Scholar] [CrossRef]
  157. Rather, R. A. (2020). Customer experience and engagement in tourism destinations: The experiential marketing perspective. Journal of Travel & Tourism Marketing, 37(1), 15–32. [Google Scholar] [CrossRef]
  158. Raudenbush, S. W. (2009). Analyzing effect sizes: Random-effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (pp. 295–315). Russell Sage Foundation. [Google Scholar]
  159. Revilla Hernández, M., Santana Talavera, A., & Parra López, E. (2016). Effects of co-creation in a tourism destination brand image through Twitter. Journal of Tourism, Heritage & Services Marketing, 2(2), 3–10. [Google Scholar] [CrossRef]
  160. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press. [Google Scholar]
  161. Roque, V., & Raposo, R. (2015). Social media as a communication and marketing tool in tourism: An analysis of online activities from international key player DMO. Anatolia, 27(1), 58–70. [Google Scholar] [CrossRef]
  162. Ruiz-Real, J. L., Uribe-Toril, J., & Gázquez-Abad, J. C. (2020). Destination branding: Opportunities and new challenges. Journal of Destination Marketing & Management, 17, 100453. [Google Scholar] [CrossRef]
  163. Saldaña, J. (2015). The coding manual for qualitative researchers. Sage. [Google Scholar]
  164. Schaar, R. (2013). Destination branding: A snapshot. Journal of Undergraduate Research, 16, 1–10. [Google Scholar]
  165. Schmidt, L., Finnerty Mutlu, A. N., Elmore, R., Olorisade, B. K., Thomas, J., & Higgins, J. P. T. (2021). Data extraction methods for systematic review (semi)automation: Update of a living systematic review. F1000Research, 10, 401. [Google Scholar] [CrossRef]
  166. Seraphin, H., & Yallop, A. (2023). The marriage à la mode: Hospitality industry’s connection to the dating services industry. Hospitality Insights, 7(1), 7–9. [Google Scholar] [CrossRef]
  167. Séraphin, H., & Jarraud, N. (2022). Interactions between stakeholders in Lourdes: An ‘Alpha’ framework approach. Journal of Tourism, Heritage & Services Marketing, 8(1), 48–57. [Google Scholar] [CrossRef]
  168. Sigala, M. (2015). Collaborative commerce in tourism: Implications for research and industry. Current Issues in Tourism, 20(4), 346–355. [Google Scholar] [CrossRef]
  169. Sigala, M. (2020). Tourism and COVID-19: Impacts and implications for advancing and resetting industry and research. Journal of Business Research, 117, 312–321. [Google Scholar] [CrossRef]
  170. Sihombing, A., Liu, L.-W., & Pahrudin, P. (2024). The impact of online marketing on tourists’ visit intention: Mediating roles of trust. Journal of Tourism, Heritage & Services Marketing, 10(2), 15–23. [Google Scholar] [CrossRef]
  171. Silvanto, S., & Ryan, J. (2023). Rethinking destination branding frameworks for the age of digital nomads and telecommuters: An abstract. In B. Jochims, & J. Allen (Eds.), Optimistic marketing in challenging times: Serving ever-shifting customer needs. AMSAC 2022. Developments in marketing science: Proceedings of the academy of marketing science. Springer. [Google Scholar] [CrossRef]
  172. Singh, R., & Sibi, P. S. (2023). E-loyalty formation of Generation Z: Personal characteristics and social influences. Journal of Tourism, Heritage & Services Marketing, 9(1), 3–14. [Google Scholar] [CrossRef]
  173. Sivathanu, B., Pillai, R., Mahtta, M., & Gunasekaran, A. (2024). All that glitters is not gold: A study of tourists’ visit intention by watching deepfake destination videos. Journal of Tourism Futures, 10(2), 218–236. [Google Scholar] [CrossRef]
  174. Sotiriadis, M. D. (2020). Tourism Destination Marketing: Academic Knowledge. Encyclopedia, 1(1), 42–56. [Google Scholar] [CrossRef]
  175. Sousa, A. E., Cardoso, P., & Dias, F. (2024). The use of artificial intelligence systems in tourism and hospitality: The tourists’ perspective. Administrative Sciences, 14(8), 165. [Google Scholar] [CrossRef]
  176. Sousa, N., Alén, E., Losada, N., & Melo, M. (2024). The adoption of Virtual Reality technologies in the tourism sector: Influences and post-pandemic perspectives. Journal of Tourism, Heritage & Services Marketing, 10(2), 47–57. [Google Scholar] [CrossRef]
  177. Sterne, J. A. C., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H. Y., Corbett, M. S., Eldridge, S. M., Emberson, J. R., Hernán, M. A., Hopewell, S., Hróbjartsson, A., Junqueira, D. R., Jüni, P., Kirkham, J. J., Lasserson, T., Li, T., … Higgins, J. P. T. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366, l4898. [Google Scholar] [CrossRef]
  178. Stojanovic, I., Andreu, L., & Curras-Pérez, R. (2022). Social media communication and destination brand equity. Journal of Hospitality and Tourism Technology, 13(4), 650–666. [Google Scholar] [CrossRef]
  179. Tasci, A. D. A. (2018). Testing the cross-brand and cross-market validity of a consumer-based brand equity (CBBE) model for destination brands. Tourism Management, 65, 143–159. [Google Scholar] [CrossRef]
  180. Thompson, S. G., & Higgins, J. P. T. (2002). How should meta-regression analyses be undertaken and interpreted? Statistics in Medicine, 21(11), 1559–1573. [Google Scholar] [CrossRef]
  181. Tosyali, H., Tosyali, F., & Coban-Tosyali, E. (2023). Role of tourist-chatbot interaction on visit intention in tourism: The mediating role of destination image. Current Issues in Tourism, 28(4), 511–526. [Google Scholar] [CrossRef]
  182. Tran, N. L., & Rudolf, W. (2022). Social media and destination branding in tourism: A systematic review of the literature. Sustainability, 14(20), 13528. [Google Scholar] [CrossRef]
  183. Turnbull, D., Chugh, R., & Luck, J. (2023). Systematic-narrative hybrid literature review: A strategy for integrating a concise methodology into a manuscript. Social Sciences & Humanities Open, 7(1), 100381. [Google Scholar] [CrossRef]
  184. Tussyadiah, I., & Miller, G. (2019). Nudged by a robot: Responses to agency and feedback. Annals of Tourism Research, 78, 102752. [Google Scholar] [CrossRef]
  185. Tussyadiah, I. P. (2020). A review of research into automation in tourism: Launching the annals of tourism research curated collection on artificial intelligence and robotics in tourism. Annals of Tourism Research, 81, 102883. [Google Scholar] [CrossRef]
  186. Tussyadiah, I. P., Wang, D., Jung, T. H., & Tom Dieck, M. C. (2018). Virtual reality, presence, and attitude change: Empirical evidence from tourism. Tourism Management, 66, 140–154. [Google Scholar] [CrossRef]
  187. Veríssimo, J. M. C., Tiago, M. T. B., Tiago, F. G., & Jardim, J. S. (2017). Tourism destination brand dimensions: An exploratory approach. Tourism & Management Studies, 13(4), 1–8. [Google Scholar] [CrossRef]
  188. Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. [Google Scholar] [CrossRef]
  189. Wanner, J., Herm, L.-V., Heinrich, K., & Janiesch, C. (2022). The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electronic Markets, 32, 2079–2102. [Google Scholar] [CrossRef]
  190. Wells, G. A., Shea, B., O’Connell, D., Peterson, J., Welch, V., Losos, M., & Tugwell, P. (2013). The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. Available online: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp (accessed on 5 June 2025).
  191. Wengel, Y., Ma, L., Ma, Y., Apollo, M., Maciuk, K., & Ashton, A. S. (2022). The TikTok effect on destination development: Famous overnight, now what? Journal of Outdoor Recreation and Tourism, 37, 100458. [Google Scholar] [CrossRef]
  192. Whiting, A., & Williams, D. (2013). Why people use social media: A uses and gratifications approach. Qualitative Market Research, 16(4), 362–369. [Google Scholar] [CrossRef]
  193. Wüst, K., & Bremser, K. (2025). Artificial intelligence in tourism through chatbot support in the booking process—An experimental investigation. Tourism and Hospitality, 6(1), 36. [Google Scholar] [CrossRef]
  194. Xiang, Z., Du, Q., Ma, Y., & Fan, W. (2017). Assessing reliability of social media data: Lessons from mining tripadvisor hotel reviews. In R. Schegg, & B. Stangl (Eds.), Information and communication technologies in tourism 2017. Springer. [Google Scholar] [CrossRef]
  195. Xiang, Z., & Fesenmaier, D. R. (2017). Analytics in tourism design. In Z. Xiang, & D. Fesenmaier (Eds.), Analytics in smart tourism design. Tourism on the verge. Springer. [Google Scholar] [CrossRef]
  196. Xiang, Z., & Gretzel, U. (2010). Role of social media in online travel information search. Tourism Management, 31(2), 179–188. [Google Scholar] [CrossRef]
  197. Xiang, Z., Magnini, V. P., & Fesenmaier, D. R. (2015). Information technology and consumer behavior in travel and tourism: Insights from travel planning using the Internet. Journal of Retailing and Consumer Services, 22, 244–249. [Google Scholar] [CrossRef]
  198. Xu, F., Buhalis, D., & Weber, J. (2017). Serious games and the gamification of tourism. Tourism Management, 60, 244–256. [Google Scholar] [CrossRef]
  199. Yagmur, Y., & Demirel, A. (2024). An exploratory study on determining motivations, constraints, and strategies for coping with constraints to participate in outdoor recreation activities: Generation Z. Journal of Tourism, Heritage & Services Marketing, 10(1), 14–27. [Google Scholar] [CrossRef]
  200. Yağmur, Y., & Aksu, A. (2022). Investigation of destination image mediating effect on tourists’ risk assessment, behavioural intentions and satisfaction. Journal of Tourism, Heritage & Services Marketing, 8(1), 27–37. [Google Scholar] [CrossRef]
  201. Yallop, A., & Seraphin, H. (2020). Big data and analytics in tourism and hospitality: Opportunities and risks. Journal of Tourism Futures, 6(3), 257–262. [Google Scholar] [CrossRef]
  202. Yin, C. Z. Y., Jung, T., tom Dieck, M. C., & Lee, M. Y. (2021). Mobile augmented reality heritage applications: Meeting the needs of heritage tourists. Sustainability, 13(5), 2523. [Google Scholar] [CrossRef]
  203. Yu, J., & Meng, T. (2025). Image generative AI in tourism: Trends, impacts, and future research directions. Journal of Hospitality & Tourism Research. [Google Scholar] [CrossRef]
  204. Zafiropoulos, K., Vrana, V., & Antoniadis, K. (2015). Use of Twitter and Facebook by top European museums. Journal of Tourism, Heritage & Services Marketing, 1(1), 16–24. [Google Scholar] [CrossRef]
  205. Zamawe, F. C. (2015). The implication of using NVivo software in qualitative data analysis: Evidence-based reflections. Malawi Medical Journal, 27(1), 13–15. [Google Scholar] [CrossRef] [PubMed]
  206. Zeng, B., & Gerritsen, R. (2014). What do we know about social media in tourism? A review. Tourism Management Perspectives, 10, 27–36. [Google Scholar] [CrossRef]
  207. Zenker, S., & Braun, E. (2017). Questioning a ‘one size fits all’ city brand: Developing a branded house strategy for place brand management. Journal of Place Management and Development, 10(3), 270–287. [Google Scholar] [CrossRef]
  208. Zhou, Z. (1997). Destination marketing: Measuring the effectiveness of brochures. Journal of Travel & Tourism Marketing, 6(3–4), 143–158. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram for the study.
Figure 1. PRISMA flow diagram for the study.
Admsci 15 00371 g001
Figure 2. Word cloud illustrating key concepts in the digital evolution of destination branding.
Figure 2. Word cloud illustrating key concepts in the digital evolution of destination branding.
Admsci 15 00371 g002
Table 1. Phases of digital evolution in destination branding—key characteristics.
Table 1. Phases of digital evolution in destination branding—key characteristics.
PhaseTimeframeDominant InterfacesSignature PracticesTypical KPIs
Pre-internet (brochures)To mid-1990sPrint, trade fairs, TV/radio☑ Top-down slogans and imagery ☑ Centralised message controlArrivals, brochure distribution, aided recall
Web 1.0~1995–2004Static websites, email☑ ‘Online brochure’ sites ☑ Basic usability & FAQsPage views, downloads, email queries
Web 2.0 (Social)~2004–2013Blogs, review sites, Facebook, YouTube, Flickr, Twitter/X☑ UGC co creation ☑ Dialogue and community mgmt. ☑ Early influencer programsFollows, shares, sentiment, community growth
Mobile first~2013–2020Smartphones, apps, GPS/AR, 4G☑ Context-aware prompts ☑ Live stories and vertical video ☑ On-site service recoveryApp retention, click to navigate, geo-engagement
AI/XR-infused~2020–presentChatbots, recommenders, AR/VR/XR, 5G/IoT☑ Personalisation at scale ☑ Immersive previews ☑ Automation and governanceChatbot CSAT, VR dwell time, personalised CTR, audit logs
Table 2. Era-by-era playbook for DMOs (manager’s checklist).
Table 2. Era-by-era playbook for DMOs (manager’s checklist).
EraObjective FocusDo More ofAvoidPhase-Appropriate KPIs
Pre-internet → Web1.0Establish credibility; canonical narrativeKeep a content-rich, accessible site; multilingual basicsPDF ‘dumping’; outdated pagesAided recall; task completion; accessibility checks
Web 2.0Build peer credibility and dialogueCurate UGC; run two-way communities; micro-influencersBroadcasting without replies; vanity metric obsessionSentiment with validation; community health; share of voice
Mobile firstContext + immediacyOn-site service recovery; geo-nudges; stories/vertical videoOne-size-fits-all pushes; ignoring bandwidth constraintsApp retention; click to navigate; service recovery time
AI/XRPersonalisation + governanceChatbots with disclosure; inclusive training data; XR previews with expectation managementOpaque targeting; overuse of synthetic imagery without labelsChatbot CSAT; explain-why rate; VR dwell time; audit pass rate
Table 3. Construct harmonisation crosswalk (summary) *.
Table 3. Construct harmonisation crosswalk (summary) *.
Canonical OutcomeDefinition Used in this ReviewExamples of Accepted Measures (By Phase)Examples of Excluded MeasuresStandardisation to Meta-Variable
AwarenessAbility to recognise/recall destinationPre-internet/Web 1.0: aided/unaided recall; Web 2.0+: survey-based familiarity/visibilityImpressions/reach; search volume without survey validationMeans/SD → Hedges’ g; correlations → Fisher’s z
ImageCognitive and affective associationsMulti-item image scales, semantic differentials; UGC exposure → perceived imageSingle-item ‘sentiment’ without validationAs above; sign oriented positive
AttitudesGlobal evaluative orientation5–7 pt favourability/warmth; brand attitude indexSatisfaction unless framed as attitude toward destination brandAs above
LoyaltyConative commitment (revisit, recommend, advocacy)Intention to revisit/recommend; WOM intentionArrivals/sales unless causally linked to equityAs above
Engagement intentionsWillingness to follow, share, co-createFollow/subscribe/share intention; UGC intention; platform counts when theorised as behaviourClicks/impressions without behavioural intentAs above (separate stratum)
* Canonical outcomes, accepted measures, exclusions, and standardisation rules used to reconcile operationalisations across eras and platforms. Notes: L1 ‘ephemeral exposure’ (impressions/views/likes) excluded from pooling; L2 ‘relational interaction’ and L3 ‘co-creation’ pooled under engagement intentions. See Appendix B for the full matrix, coder notes, and study-level mappings.
Table 4. Results of moderator analysis (one-way ANOVA).
Table 4. Results of moderator analysis (one-way ANOVA).
Study QualityNumber of StudiesMean Effect Size
(Cohen’s d)
Standard Deviation (SD)
High680.570.12
Medium720.530.14
Low200.500.17
Table 5. Thematic structure of digital evolution in destination branding.
Table 5. Thematic structure of digital evolution in destination branding.
Core ThemeSub-ThemesWhat ChangedSo What for Equity?
Engagement evolutionFrom one-way → dialogic → real-timeVisitors move from audience to co-creators
  • Stronger awareness and image; loyalty requires sustained reciprocity
Phased tech adoptionBrochureware → social → mobile → AI/XRCapabilities cumulate; laggards lose relevance
  • Returns depend on absorptive capacity
Changing tourist rolesUGC, micro-communities, influencersPeer credibility eclipses official claims
  • Authenticity becomes a driver/constraint
Brand equity in the digital ageAwareness, image, attitudes, loyalty, engagementMulti-modal measurement and feedback loops
  • Need standardised cross-era constructs (Table 2)
Table 6. Funnel plot and Egger’s test results *.
Table 6. Funnel plot and Egger’s test results *.
Outcome CategoryEgger’s Regression Coefficient (Intercept)Standard Errort-Valuep-Value
Brand Awareness0.350.420.830.41
Brand Image/Attitude0.280.390.720.47
Engagement Metrics0.200.300.670.51
* All funnel plots displayed approximate symmetry, and Egger’s regression tests indicated no significant publication bias (all p-values > 0.1).
Table 7. Meta-analysis of digitisation effects on destination branding outcomes *.
Table 7. Meta-analysis of digitisation effects on destination branding outcomes *.
Branding OutcomeNumber of Studies (k)Pooled Effect Size (Hedges’ g)95% CICochran’s Qp-Value (Q)I2 (%)
Brand Awareness240.52[0.31, 0.73]32.1<0.0166.5
Brand Image and Attitudes200.61[0.39, 0.83]28.6<0.0168.5
Engagement Metrics160.78[0.54, 1.02]21.9<0.0158.0
* Effect sizes are reported as Hedges’ g (bias-corrected standardised mean differences), where positive values indicate favourable effects of digitisation on branding outcomes. Cochran’s Q tests and I2 statistics assess the degree of statistical heterogeneity across studies.
Table 8. Evolution of destination branding across phases: from brochures to bytes.
Table 8. Evolution of destination branding across phases: from brochures to bytes.
AspectPre-Digital (Print Era)Web 1.0 (Early Internet)Web 2.0 (Social Media)Mobile Era (Smartphone)AI and XR Era (Current)
Message control vs.
co-creation
DMO monopoly; no feedback.Managerial dominance; static pages.Shared narration via UGC.Real time visitor input blends with official voice.Algorithmic personalisation; facilitative DMO role.
Communication modelOne way broadcast.One way online.Dialogic, peer to peer.Always-on multilateral exchange.Immersive, AI-mediated interaction.
Speed of disseminationAnnual cycles.Occasional updates.Second by second virality.Instant, location-triggered.Continuous, predictive responsiveness.
Reach and audienceMarket-bounded print audiences.Global yet search dependent.Viral network diffusion.Ubiquitous in-journey targeting.Hyper-segmented worldwide access, virtual visitation.
Data and feedback
depth
Sparse surveys.Basic traffic metrics.Engagement and sentiment analytics.Contextual behavioural traces.Integrated big data, real-time modelling.
Table 9. Social-media platform moderator analysis *.
Table 9. Social-media platform moderator analysis *.
Social-Media Platform (Primary)k (Effect Sizes)Pooled Hedges’ g95% CII2 (%)Q (df)p Heterogeneity
Facebook240.450.28–0.635222.1 (11)0.024
Instagram160.570.35–0.794813.4 (7)0.064
TikTok/YouTube (high-visual)120.620.31–0.936012.0 (5)0.034
Twitter/X80.300.05–0.55455.5 (3)0.139
* k denotes the number of independent effect sizes uniquely linked to a single dominant platform in the study. Effect sizes are coded so that positive values indicate a favourable impact of the digital intervention on the branding outcome. Confidence intervals and heterogeneity statistics were generated with a random-effect model (inverse-variance weighting) in R/metafor (Viechtbauer, 2010). Cochran’s Q and I2 follow the conventions in Borenstein et al. (2009), and an I2 above ≈75% would suggest high heterogeneity. Because several studies examined multiple platforms, cross-platform effects were managed by treating each effect separately while controlling for study clustering (Cheung, 2015). The table reports only those effects cleanly attributable to a single platform.
Table 10. Sub-group meta-analysis of pooled effect sizes by world region (k = 60) *.
Table 10. Sub-group meta-analysis of pooled effect sizes by world region (k = 60) *.
Regionk (Effect Sizes)Pooled Hedges’ g95% CII2 (%)Q (df)p Heterogeneity
Europe200.490.29–0.694617.2 (9)0.045
North America160.440.19–0.694212.3 (7)0.091
Asia–Pacific140.530.25–0.815114.4 (6)0.026
Other100.320.05–0.59386.5 (4)0.165
* Effect magnitudes overlapped across continents (Q_between = 1.87, df = 3, p = 0.60), indicating that digital branding delivers broadly similar benefits irrespective of geographic context.
Table 11. Sub-group meta-analysis of pooled effect sizes by study design (k = 60) *.
Table 11. Sub-group meta-analysis of pooled effect sizes by study design (k = 60) *.
Designk (Effect Sizes)Pooled Hedges’ g95% CII2 (%)
Experiments (field/lab)220.500.30–0.7048
Quasi-experiments (pre/post)180.460.23–0.6941
Cross-sectional surveys200.420.22–0.6246
* No significant difference emerged (Q_between = 0.37, df = 2, p = 0.83), supporting design-robustness.
Table 12. Moderator analysis for content-source strategy *.
Table 12. Moderator analysis for content-source strategy *.
Content Strategy (Primary)k (Effect Sizes)Pooled Hedges’ g95% CII2 (%)Q (df)p Heterogeneity
User-generated content (UGC)300.580.40–0.765528.4 (14)0.013
DMO-generated content240.420.25–0.595024.2 (11)0.018
Integrated/mixed140.490.25–0.735814.7 (6)0.023
* k denotes the number of independent effect sizes that could be uniquely linked to a single dominant content strategy. Effect sizes coded so positive values represent favourable branding outcomes. Calculations executed in R/metafor with inverse-variance weights; see Viechtbauer (2010). When a study reported multiple outcomes under the same strategy, within-study effects were averaged to avoid overweighting (Cheung, 2015).
Table 13. Moderator and categories analysis for content-source strategy *.
Table 13. Moderator and categories analysis for content-source strategy *.
Moderator and CategorieskPooled Hedges’ g95% CII2 (%)Q (df)p Heterogeneity
Influencer tier
 Micro (<50 K followers)160.640.41–0.874813.4 (7)0.063
 Mid (50 K–500 K)120.580.32–0.855210.5 (5)0.060
 Macro/Mega (>500 K)100.360.08–0.64447.1 (4)0.131
Interactivity level
 Low (broadcast/one-way)220.310.14–0.484016.4 (10)0.088
 High (dialogic/co-creation)280.690.49–0.884624.3 (13)0.028
Destination type
 Emerging/lesser known180.710.45–0.965016.0 (8)0.041
 Well-known city/flagship140.380.12–0.644711.3 (6)0.080
 Nation-branding campaigns100.290.04–0.55437.0 (4)0.135
* k Pooled estimates use inverse-variance random-effect weighting (Viechtbauer, 2010). Moderate I2 values affirm heterogeneity remains, but is acceptable for subgroup inference (Higgins et al., 2011). Q statistics non-significant for some subgroups, indicating residual variance largely explained by the moderator.
Table 14. Publication-bias diagnostics and sensitivity checks *.
Table 14. Publication-bias diagnostics and sensitivity checks *.
Diagnostic/Sensitivity TestMetric(s) ReportedResultInterpretation
Funnel-plot symmetryVisual inspectionNo conspicuous gaps: points distributed evenly around pooled lineLittle visual evidence of publication bias
Egger’s test for small-study effectsIntercept = 0.97 (SE = 0.85); t(28) = 1.14; p = 0.26Non-significantAsymmetry not detected → bias unlikely
Rosenthal fail-safe Nk additional null studies needed to nullify effect72Would require an implausibly large ‘file drawer’ to overturn findings
Leave-one-out re-estimationPooled g range0.43–0.48 (baseline = 0.46)Overall estimate stable; no single study unduly influential
Outlier exclusionExtreme g values (n = 4) removedg = 0.44 (95% CI 0.30–0.58)Conclusions unchanged without outliers
Lower-precision studies excludedn = 8 small-sample effects removedg = 0.45 (95% CI 0.31–0.59)Results robust to study-quality restrictions
Model comparisonFixed-effect vs. random-effectFixed: g = 0.42 (95% CI 0.36–0.48) Random: g = 0.46 (95% CI 0.32–0.60)Near-identical estimates ⇒ findings not model-dependent
Heterogeneity after outlier removalQ(26) = 38.1, p = 0.06; I2 = 31%Moderate heterogeneity, acceptable for synthesisHeterogeneity acceptable for synthesis of findings
* All statistics derived from the 60 digital-era studies included in the quantitative meta-analysis; g denotes Hedges’ g (positive values = favourable digital-branding effect).
Table 15. Pooled effect sizes and heterogeneity by branding outcome (k = 60 digital-era studies) *.
Table 15. Pooled effect sizes and heterogeneity by branding outcome (k = 60 digital-era studies) *.
Outcome
Category
k (Effects)Pooled Hedges’ g95% CICohen-Scale InterpretationI2 (%)Cochran Q (p)Salient Note
Brand Awareness440.460.33–0.59Moderate5849.7 (0.01)Substantial uplift in recall/recognition after digital exposure
Brand Image400.410.27–0.55Moderate5544.2 (0.02)Visual/narrative content improves cognitive-affective image
Brand Attitudes430.340.19–0.49Small to Moderate5032.8 (0.05)Emotional resonance of social media drives favourability
Brand Loyalty300.280.12–0.44Small to Moderate4726.1 (0.09)Harder to shift; gains accrue via sustained engagement
Engagement Intentions260.570.39–0.75Moderate to Large6231.5 (0.01)Interactive/UGC campaigns strongly stimulate future engagement
* Pooled estimates derived with a random-effect model (inverse-variance weighting). Positive g values indicate a favourable impact of digital branding relative to baseline/control conditions. I2 values 47–62% indicate moderate heterogeneity, appropriate for random-effect interpretation. CI = confidence interval; k = number of independent effect sizes contributing to the pooled estimate.
Table 16. Overall heterogeneity statistics (Cochran Q and I2).
Table 16. Overall heterogeneity statistics (Cochran Q and I2).
Pooled OutcomekQ (df)pI2 (%)
All effects combined6052.6 (29)0.00444
Table 17. Practitioner playbook by era—high ROI tactics, minimum KPIs and governance checks.
Table 17. Practitioner playbook by era—high ROI tactics, minimum KPIs and governance checks.
EraPrimary ObjectiveHigh ROI Tactics (Examples)Minimum Viable KPIsGovernance and Risk ChecksCommon Pitfalls
① BrochureEstablish clear positioningConsistent slogan and visual system; trade fairs; PR with editorial hooksAided recall; unaided recall; message take outTruth in imagery; representation balance; accessible printOver-promising; narrow audience focus
② Web 1.0Searchable authorityMobile-responsive site; structured content (FAQs, itineraries); media libraryTask completion; site satisfaction; organic search share (moderator)WCAG compliance; multilingual parity‘PDF brochureware’; slow updates
③ Web 2.0Dialogue and social proofCurated UGC; hashtag campaigns; micro influencers; community moderationBrand image scale; advocacy intent; L2 engagement (comments/shares)Community guidelines; crisis playbook; IP and consentCounting likes as ‘equity’; ignoring negative UGC
④ Mobile FirstContext aware serviceStory/reels; geo nudges; AR trails; chat supportOn site satisfaction; save/share rate; L3 engagement (UGC/reviews)Privacy by design; opt ins; accessibility in ARPush fatigue; bandwidth bias
⑤ AI XRPersonalised immersionTransparent recommenders; 24/7 chatbots; VR previews; synthetic disclosed assetsEquity scales + repeat visit/advocacy intent; session-level service ratings; CSATGO SAFE audits; AI asset disclosure; fairness in exposureOpaque targeting; filter bubbles; ‘hyper-polish’ dissonance
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chatzigeorgiou, C.; Christou, E.; Simeli, I. From Brochures to Bytes: Destination Branding through Social, Mobile, and AI—A Systematic Narrative Review with Meta-Analysis. Adm. Sci. 2025, 15, 371. https://doi.org/10.3390/admsci15090371

AMA Style

Chatzigeorgiou C, Christou E, Simeli I. From Brochures to Bytes: Destination Branding through Social, Mobile, and AI—A Systematic Narrative Review with Meta-Analysis. Administrative Sciences. 2025; 15(9):371. https://doi.org/10.3390/admsci15090371

Chicago/Turabian Style

Chatzigeorgiou, Chryssoula, Evangelos Christou, and Ioanna Simeli. 2025. "From Brochures to Bytes: Destination Branding through Social, Mobile, and AI—A Systematic Narrative Review with Meta-Analysis" Administrative Sciences 15, no. 9: 371. https://doi.org/10.3390/admsci15090371

APA Style

Chatzigeorgiou, C., Christou, E., & Simeli, I. (2025). From Brochures to Bytes: Destination Branding through Social, Mobile, and AI—A Systematic Narrative Review with Meta-Analysis. Administrative Sciences, 15(9), 371. https://doi.org/10.3390/admsci15090371

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop