Article

Secrets of More Likes: Understanding eWOM Popularity in Wine Tourism Reviews Through Text Complexity and Personal Disclosure

1 College of Shanghai Lausanne Hospitality Management, Shanghai Business School, Shanghai 200233, China
2 Department of Apparel, Events, and Hospitality Management, Iowa State University, Ames, IA 50014, USA
3 School of Culture and Creativity, Beijing Normal-Hong Kong Baptist University, Zhuhai 519087, China
4 Winchester School of Art, University of Southampton, Southampton SO17 1BJ, UK
* Author to whom correspondence should be addressed.
Tour. Hosp. 2025, 6(3), 145; https://doi.org/10.3390/tourhosp6030145
Submission received: 18 June 2025 / Revised: 17 July 2025 / Accepted: 21 July 2025 / Published: 29 July 2025

Abstract

Online reviews increasingly shape experiential travel decisions. This study investigates how structural and linguistic features of user-generated content influence peer endorsement in wine tourism. While prior research has explored review valence and credibility, limited attention has been paid to how micro-level textual and identity cues affect social approval metrics such as likes. Grounded in the Elaboration Likelihood Model, the analysis draws on 7942 TripAdvisor reviews collected through automated web scraping and examined with readability metrics and multivariate regression. Results indicate that location disclosure significantly increases likes, while higher textual complexity reduces endorsement. Title length and reviewer contributions function as peripheral cues, with an interaction between complexity and title length compounding cognitive effort. Findings refine dual-process persuasion theory and offer practical insights for content optimization in post-pandemic tourism engagement.

1. Introduction

The transformative evolution of internet technologies and digital communication platforms has fundamentally reshaped how consumers access, evaluate, and act upon information. Among the most powerful tools in this digital environment is electronic word-of-mouth (eWOM), which refers to consumer-generated content such as online reviews, ratings, multimedia posts, and blogs that is shared rapidly and widely through virtual communities. eWOM now plays a pivotal role across all phases of the consumer decision-making process, including need recognition, information search, evaluation of alternatives, purchase decisions, and post-purchase evaluations (Erkan & Evans, 2018; López & Sicilia, 2014; Themba & Mulala, 2013). Its influence is particularly pronounced in the tourism and hospitality sectors, where services are perishable, heterogeneous, intangible, and thus inherently difficult to evaluate before consumption (Gerdt et al., 2019).
For travelers, who are often navigating a complex set of choices with limited verifiable information, eWOM provides a surrogate for direct experience. Empirical evidence supports the increasing reliance on peer-generated reviews: more than half of all travelers will not make a booking without reading online reviews, and approximately 80% consult these reviews before finalizing hospitality-related purchases (Filieri & McLeay, 2014; TripAdvisor, 2018). In this context, TripAdvisor has emerged as one of the most influential digital intermediaries in the tourism ecosystem. Since its founding in 2000, TripAdvisor has become one of the most authoritative and widely used review aggregators worldwide, hosting over 700 million reviews and attracting more than 400 million unique monthly visitors (Litvin et al., 2018; Filieri et al., 2020). The platform’s integration of peer content, including images, narratives, and star ratings, enables a rich evaluative process that shapes traveler perception, expectation, and intention.
However, the exponential growth of eWOM has introduced a new challenge: information overload. As consumers are increasingly confronted with vast volumes of online reviews, the ability to efficiently identify relevant, credible, and high-quality content becomes a significant concern (Filieri, 2016). In response, many platforms have incorporated social validation tools, most notably the “like” function, which allows users to express agreement, support, or perceived helpfulness of a review. This feature, ubiquitous in social media environments, has now become central in the architecture of eWOM platforms, influencing content ranking algorithms and thereby indirectly shaping consumer decision-making (Luo et al., 2020; Li & Xie, 2020). A higher number of “likes” signals collective endorsement and often correlates with greater content visibility and influence.
Despite its growing prevalence, the mechanism and determinants of social approval, as measured by “likes,” have received limited scholarly attention within the tourism industry. Most prior research has focused on the valence, volume, and source credibility of eWOM (Donthu et al., 2021), while far less is known about how micro-level textual characteristics, such as review readability or headline structure, influence peer endorsement. Although some studies have examined “likes” in the context of social media engagement (Chua & Banerjee, 2013), their extension to tourism-specific review platforms remains conceptually and empirically underdeveloped (Christou & Chatzigeorgiou, 2020).
This gap in knowledge is particularly relevant in the context of short-form experiential tourism, such as wine tourism, which has experienced significant growth in both demand and academic interest. Wine tourism refers to travel experiences centered on visiting vineyards, wineries, and wine-related events, where tasting and learning about wine serve as primary motivators (Hall et al., 2009). Beyond simple leisure, wine tourism is often associated with deeper cultural engagement, regional identity, and authenticity. Despite disruptions caused by the COVID-19 pandemic, the global wine tourism industry remained valued at EUR 8.7 billion in 2020 and is projected to exceed EUR 29.6 billion by 2030 (Statista, 2022), reflecting its long-term viability and appeal. However, most existing research in this field remains focused on macro-level issues such as destination branding or economic impacts (Karagiannis & Metaxas, 2020; Santos et al., 2019), leaving a theoretical and empirical gap concerning consumer behavior and digital engagement mechanisms.
In light of these observations and grounded in the Elaboration Likelihood Model, the present study aims to empirically investigate how specific attributes of online reviews in the wine tourism sector predict their reception by other users, operationalized through the number of “likes” received. Employing a quantitative content analysis of large-scale review data, this research explores the relationships between review readability, title length, and reviewer experience (i.e., the total number of contributions) and the degree of social approval. In doing so, this study addresses three interlinked gaps in the literature: (1) the under-theorized relationship between review structure and social validation; (2) the limited application of social media interaction metrics (e.g., likes) in tourism research; and (3) the paucity of studies linking eWOM characteristics to consumer behavior in niche markets like wine tourism.
This research contributes to both theory and practice. Theoretically, it enriches the discourse on eWOM by extending its analytical lens to the mechanisms of peer endorsement and by integrating structural review features into the conceptual framework of online influence. Practically, the findings offer actionable insights for content creators, digital marketers, and platform designers in the tourism industry. For reviewers and travel bloggers, understanding which structural features increase perceived usefulness can inform more strategic content creation. For platform algorithms, identifying review attributes that align with user approval can enhance content curation and personalization. More broadly, this study has implications for promoting post-pandemic tourism recovery, as socially endorsed reviews may facilitate trust, reduce uncertainty, and stimulate short-tour participation in experiential domains such as wine tourism.

2. Literature Review

2.1. Electronic Word-of-Mouth (eWOM) in Hospitality and Tourism

The concept of electronic word-of-mouth (eWOM) evolved from the traditional notion of word-of-mouth (WOM), which describes the informal exchange of opinions among consumers through interpersonal communication and observational learning (Litvin et al., 2008). The foundation of WOM research dates back to the 1960s (Pyle, 2010), and its definition has gradually expanded over time. For instance, Westbrook (1987) broadly defined WOM as all informal communications directed at other consumers about the ownership, usage, or characteristics of particular goods and services. Compared to conventional marketing communication channels, WOM is more interpersonal and inherently trust-based (Meuter et al., 2013).
Building upon Westbrook’s definition and leveraging the advent of digital communication technologies, Buttle (1998) emphasized that WOM can also be disseminated electronically. Consequently, eWOM is now recognized as a form of informal, user-generated communication concerning product attributes or usage experiences that is shared over internet platforms (Cheng & Zhou, 2010). The rapid expansion of internet technologies and the rise of user-generated content (UGC) have transformed virtual communities into hubs for consumer knowledge exchange. These platforms enable prospective travelers to obtain multidimensional perspectives on products or services and support their decision-making processes (Bahtar & Muda, 2016; Ukpabi & Karjaluoto, 2018). As a result, eWOM has become a critical influencer in travel-related decisions within the hospitality industry (Gerdt et al., 2019).
eWOM behavior can be categorized into three main forms: giving, receiving, and forwarding opinions (Chu & Kim, 2011). Of these, opinion-giving is especially relevant, often aligned with the concept of opinion leadership, where knowledgeable individuals share insights with others through textual reviews, ratings, or interactive actions such as “liking” a review. Such actions constitute active participation in the eWOM ecosystem and carry substantial persuasive potential. Given its growing importance, eWOM, particularly in the form of online reviews, has become a focal point for academic inquiry in hospitality and tourism studies. Both qualitative and quantitative research approaches have been employed to investigate review motivations, review quality, credibility, and behavioral impacts (Gerdt et al., 2019; Sann et al., 2021). Qualitative studies often explore textual features or storytelling elements. For example, Black and Kelley (2009) analyzed narrative structures in online hotel reviews, while Jeacle and Carter (2011) employed netnography to assess the perceived trustworthiness of travel ranking platforms. In contrast, quantitative research has investigated numerical variables such as review frequency, rating scores, and causal links to consumer behavior. Xie et al. (2016) explored the impact of management responses on hotel ratings, while Melián-González et al. (2013) examined the statistical relationship between the number of reviews and overall review scores. Other studies employed experimental methods to analyze antecedents of review credibility (Coursaris et al., 2017).
TripAdvisor, one of the world’s largest travel review platforms, has become a central repository for eWOM research. Since its inception in 2000, it has evolved into a leading global source for user-generated travel content. Its comprehensive dataset of user ratings, review texts, and meta-information (e.g., dates, reviewer profiles) has been widely used in empirical studies (Molinillo et al., 2016). For instance, Rita et al. (2022) found that review ratings significantly affect user sentiment, while Sayfuddin and Chen (2021) explored how fluctuations in hotel star ratings influence revenue outcomes. Scholars have also examined non-numerical review attributes, such as quality, consistency, and linguistic features. Xie et al. (2016) highlighted the central role of review quality in driving hotel popularity, whereas Reyes-Menendez et al. (2019) emphasized the importance of review volume and source authority. However, while prior research has extensively explored macro-level determinants of eWOM influence, micro-level textual features and user engagement signals such as readability and the use of “likes” have received comparatively limited scholarly attention.
Despite extensive research on TripAdvisor reviews, a critical gap remains regarding how users evaluate individual review features such as readability, title length, or reviewer credibility as signals of review helpfulness or persuasiveness. This study seeks to address this gap by focusing on user engagement behaviors such as “likes,” which function as proxies for perceived helpfulness.

2.2. Review Helpfulness and the Elaboration Likelihood Model in Tourism eWOM

In digital consumption environments, particularly within tourism and hospitality, online reviews have emerged as one of the most influential forms of user-generated content. Among the various evaluative signals embedded in these reviews, perceived helpfulness is widely regarded as a key indicator of content quality and credibility. Helpfulness reflects the extent to which a review is seen by others as valuable for decision making, and it often serves as a proxy for its persuasive or informational utility (Wang et al., 2019). On platforms like Amazon, this construct is typically operationalized through a binary system where users click “Yes” or “No” in response to prompts such as “Was this review helpful to you?” (Park & Kim, 2008). Reviews that accumulate more helpful votes are algorithmically prioritized, increasing their visibility and thereby amplifying their influence on subsequent user behavior (Ghose & Ipeirotis, 2010).
A substantial body of research has examined the textual and contextual factors that contribute to review helpfulness. Chua and Banerjee (2015) demonstrated that identity transparency and reviewer credibility significantly enhance helpfulness ratings. Huang et al. (2015) reported that review length positively correlates with perceived helpfulness, suggesting that more elaborated reviews provide deeper informational value. More recently, researchers have turned to machine learning-assisted approaches to examine features such as linguistic complexity, semantic coherence, and emotional tone. For example, Singh et al. (2017) introduced readability metrics to evaluate how language structure impacts engagement. However, the vast majority of these studies are situated within online retail contexts, such as Amazon or Yelp, and their applicability to tourism-specific platforms remains limited.
Unlike Amazon’s binary helpfulness voting system, TripAdvisor employs a like-based endorsement mechanism to surface socially validated reviews. Prior studies have identified such “likes” as heuristic signals that indicate social consensus and facilitate information triage in digital environments (Turel & Qahri-Saremi, 2024). These mechanisms not only shape content visibility through algorithmic ranking but also influence users’ perceptions of review credibility and usefulness (Filieri et al., 2018). Although different in format, the “like” function serves a similar social purpose by elevating reviews through algorithmic ranking and signaling to future readers which contributions have been socially validated (Lee et al., 2011; Meek et al., 2021; Zhou & Guo, 2017). Reviews with a higher number of likes tend to be more prominently displayed, thus becoming more influential in shaping tourist expectations and behaviors. Despite its practical significance, the use of likes as an engagement metric in tourism review contexts has received comparatively little theoretical attention.
Although platform-specific features such as the “like” button on TripAdvisor provide users with a low-effort means of expressing endorsement, the psychological processes behind these actions are far from trivial. Prior studies have shown that such engagement behaviors, whether in the form of likes, helpful votes, or shares, are influenced not only by content quality but also by how users cognitively process message cues (Filieri et al., 2018). In this sense, likes function not merely as expressions of preference but as outcome indicators of underlying evaluative mechanisms. To explain how users attend to different aspects of review content and make endorsement decisions, the Elaboration Likelihood Model (Cacioppo et al., 1986) provides a robust theoretical lens. It enables us to distinguish between deeper analytical processing and surface-level heuristic responses, both of which are highly relevant to digital review environments where attention is limited and information is abundant.
To better understand how users process and respond to various review features, the Elaboration Likelihood Model (ELM) offers a compelling explanatory framework. First introduced by Petty et al. (1981), ELM posits that individuals evaluate persuasive messages through two distinct cognitive pathways: a central route and a peripheral route. The central route involves deep processing of message content, focusing on the quality, logic, and evidential support of the arguments. In contrast, the peripheral route relies on external or surface-level cues, such as the credibility of the source, visual presentation, or writing style, particularly when cognitive motivation or ability is limited (Cacioppo et al., 1986; Cheung et al., 2012). In the context of online reviews, both processing routes can operate simultaneously, depending on the user’s level of involvement and available cognitive resources.
Tourism eWOM presents a particularly fertile context for applying ELM, given the information-rich yet uncertain nature of travel decisions. Many users engage with reviews in a selective or time-constrained manner, often using heuristic shortcuts to guide judgments. In high-involvement scenarios, travelers may deeply engage with the text, analyzing argument clarity, narrative coherence, and relevance. In contrast, under lower-involvement conditions, peripheral cues such as review length, the number of likes, reviewer profile features, or the presence of location information become decisive in shaping impression formation and perceived trustworthiness.
Despite ELM’s wide adoption in consumer behavior research, its application to tourism review platforms remains relatively limited. Prior studies have primarily examined how eWOM influences purchase intentions or attitude change, often through experimental designs (Fan & Miao, 2012; Park & Kim, 2008). However, few studies have modeled how specific review attributes, operationalized as central or peripheral cues, affect user engagement outcomes such as likes. For example, readability, which determines how easily a review can be cognitively processed, has been largely overlooked despite its conceptual alignment with central-route elaboration. Peripheral cues such as title structure, geographic disclosure, or a reviewer’s prior contributions have also not been systematically theorized in terms of their visibility effects or social signaling functions.
This study addresses these gaps by extending the ELM framework to the context of social endorsement in tourism eWOM. By treating likes as a behavioral signal of approval, rather than merely a passive impression, the research connects the cognitive mechanisms of message processing with real-world user interaction metrics. It examines how central-route variables such as readability affect endorsement likelihood and how peripheral cues, including title length, location disclosure, and reviewer activity, serve as heuristics under conditions of low elaboration. In doing so, this approach builds a theoretical bridge between persuasion models and engagement behaviors on tourism review platforms. Moreover, it recognizes that online users are not merely passive readers but active participants whose feedback shapes the broader visibility and influence of user-generated content.
To this end, this study develops a conceptual model grounded in the Elaboration Likelihood Model that identifies both central and peripheral processing cues as predictors of peer endorsement, operationalized through the number of likes a review receives. This framework enables a structured examination of how users cognitively and heuristically engage with digital review content. By distinguishing between these two processing routes, the model offers a parsimonious yet comprehensive explanation of user behavior in tourism eWOM contexts.
Peripheral cues represent low-effort indicators that users rely on when time, attention, or motivation is limited. These cues often serve as heuristics for quick judgment without requiring in-depth content evaluation. In the context of online reviews, one such cue is the number of prior contributions made by the reviewer. Filieri et al. (2018) suggest that visible reputation signals, such as accumulated contributions, inform users’ assessments of source credibility. Similarly, geographic location disclosure enhances the perceived authenticity of the reviewer and reduces psychological distance from the reader, thus functioning as a social trust cue (Srivastava & Kalro, 2019). Title structure is another peripheral element; concise and well-crafted titles can improve scannability and attract user attention, which may increase the likelihood of receiving likes (Biswas et al., 2022).
Central cues, by contrast, demand more cognitive effort and involve deliberate evaluation of the message content. Among these, readability stands out as a key factor in determining whether a review is comprehensible and thus persuasive. Defined as the ease with which written content can be understood, readability is essential for user engagement. Previous studies (Baek et al., 2012; Reyes-Menendez et al., 2019) indicate that higher readability enhances persuasiveness by lowering cognitive barriers and improving processing fluency. Despite its conceptual alignment with central-route elaboration, readability has not been widely examined in tourism eWOM, which underscores the need for its inclusion in this study.
Based on this theoretical framework, four hypotheses are proposed.
H1. 
Reviewer experience, measured by the number of prior contributions, is expected to positively influence the number of likes, reflecting enhanced credibility.
H2. 
The presence of location information in the reviewer’s profile is posited to increase perceived transparency and authenticity, thereby boosting endorsement.
H3. 
Review title length is hypothesized to affect user engagement by influencing initial attention and content triage.
H4. 
Higher readability is anticipated to promote social approval, as clear and accessible content facilitates easier processing.

2.3. Wine Tourism and the Role of eWOM

Given its experiential and emotionally charged nature, wine tourism has emerged as a dynamic sub-sector within cultural and experiential tourism, offering travelers a blend of scenic exploration, gastronomic enjoyment, and cultural immersion rooted in local heritage and sustainability. No longer considered a niche activity, it now plays an integral role in regional tourism strategies, providing wineries with direct-to-consumer marketing opportunities and serving as a platform for brand storytelling (Santos et al., 2019). In the post-pandemic era, wine tourism has gained renewed appeal, aligning with travelers’ growing preference for low-density, open-air, and emotionally rewarding alternatives to mass tourism (Alebaki et al., 2022). Recognized by the UNWTO for its potential to promote sustainability and revitalize rural economies, wine tourism contributes significantly to national GDP and employment, estimated at EUR 30 billion annually and over 400,000 jobs in Germany alone (Tafel & Szolnoki, 2020). These emotionally resonant and sensorially rich experiences make wine tourism especially conducive to electronic word-of-mouth (eWOM), as travelers increasingly turn to peer narratives to assess authenticity, value, and destination appeal.
Despite its economic and cultural significance, wine tourism remains under-researched in the context of digital behavior, particularly concerning how travelers produce and evaluate online reviews. Most scholarly attention has focused on macro-level themes such as destination competitiveness, regional branding, and visitor segmentation. For instance, Ferreira and Hunter (2017) conducted a comparative study of wine tourism development in South Africa, while Karagiannis and Metaxas (2020) examined the marketing of wine routes in Greece. Conceptual reviews by Montella (2017) and Gómez et al. (2019) have synthesized developments in the field and identified key research trajectories.
However, few studies have systematically explored how digital feedback mechanisms, such as likes, endorsements, and perceived helpfulness, operate within the context of wine tourism. This is surprising given the increasing prominence of TripAdvisor as a space where travelers evaluate, share, and validate wine-related experiences. Travelers use these platforms not only to narrate their visits but also to construct and disseminate place-based meaning. As such, wine tourism offers a fertile context for examining how review characteristics influence peer validation and how eWOM mechanisms mediate the social construction of destination value.
Table 1 below summarizes key studies that have shaped the contemporary wine tourism literature. These include both empirical and conceptual contributions, selected based on their thematic relevance to wine tourism development, diversity in geographical focus, and representation of major scholarly trajectories in the field. The table aims to illustrate the dominant focus of prior research, particularly its emphasis on macro-level issues such as destination branding, segmentation, and regional strategies.

3. Methodology

This study adopts a quantitative approach to investigate the relationship between specific structural and contextual features of user-generated content and peer-based endorsement behaviors in the form of “likes.” Anchored in the Elaboration Likelihood Model, this research integrates automated web-based data acquisition, advanced text preprocessing, metric-based operationalization of linguistic features, and multivariate regression modeling. The overall approach was structured to ensure both empirical generalizability and computational transparency.

3.1. Data Collection

The dataset for this study was constructed using a Python-based crawler developed in version 3.9, leveraging libraries such as requests, BeautifulSoup, and Selenium for dynamic content parsing. Reviews were scraped from the top 100 most popular short wine-related travel experiences listed on TripAdvisor in the United States. Each review’s metadata, including review title, body text, user profile attributes (e.g., reviewer’s prior contributions and location visibility), and the total number of likes received, was extracted using a multi-layered DOM traversal strategy. To minimize systematic bias and platform detection, the crawler employed randomized user agent strings, session rotations, and asynchronous delay mechanisms. In total, 7942 valid reviews were collected and exported in both CSV and JSON formats for structured analysis.
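The crawler itself cannot be reproduced verbatim, but the sketch below illustrates the general collection pattern described above (randomized user agents, polite randomized delays, and DOM parsing with BeautifulSoup). The listing URL and all CSS selectors are hypothetical placeholders rather than TripAdvisor’s actual markup, and the Selenium layer for dynamically rendered content is omitted.

```python
import csv
import random
import time

import requests
from bs4 import BeautifulSoup

# Illustrative user-agent pool; the study also rotated sessions and used
# Selenium for dynamically rendered pages, both omitted here for brevity.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]


def fetch_page(url: str) -> BeautifulSoup:
    """Fetch one listing page with a randomized user agent and a polite delay."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    time.sleep(random.uniform(2.0, 5.0))  # randomized delay to reduce detection risk
    return BeautifulSoup(response.text, "html.parser")


def parse_reviews(soup: BeautifulSoup) -> list[dict]:
    """Extract review fields; every CSS selector below is a placeholder."""
    records = []
    for card in soup.select("div.review-card"):  # hypothetical review container
        records.append({
            "title": card.select_one(".review-title").get_text(strip=True),
            "body": card.select_one(".review-body").get_text(strip=True),
            "likes": int(card.select_one(".helpful-count").get_text(strip=True) or 0),
            "contributions": int(card.select_one(".contrib-count").get_text(strip=True) or 0),
            "location": card.select_one(".user-location") is not None,
        })
    return records


if __name__ == "__main__":
    soup = fetch_page("https://www.tripadvisor.com/Attraction_Review-example")  # placeholder URL
    rows = parse_reviews(soup)
    if rows:
        with open("reviews.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
```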

3.2. Variable Operationalization

To prepare the dataset for statistical modeling, raw review content and metadata were transformed into analytically tractable variables. To aid in comprehension, Figure 1 presents a representative example of a TripAdvisor review used in this study and visualizes the key analytical variables extracted for analysis, including title length, readability, reviewer’s prior contributions, location disclosure, and number of likes.
The readability of review texts was computed using the Automated Readability Index (ARI), a metric widely adopted in digital communication and eWOM studies to capture linguistic complexity and processing fluency (Cruz & Lee, 2016). This index is particularly suitable for large-scale digital corpora due to its character-count basis, offering lower error rates compared to syllable-based measures. The ARI is defined as
$$\mathrm{ARI} = 4.71 \times \frac{\text{Characters}}{\text{Words}} + 0.5 \times \frac{\text{Words}}{\text{Sentences}} - 21.43$$
Text parsing was conducted using the nltk library, where tokens were segmented via the Punkt tokenizer, and non-alphanumeric characters were excluded. Higher ARI values correspond to increased linguistic complexity, thereby capturing the cognitive elaboration required by the central route of information processing.
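As an illustration, the following minimal sketch computes the ARI for a single review text using nltk’s Punkt-based tokenizers, mirroring the preprocessing described above. The exact token-filtering rules used in the study are not reported, so the alphanumeric filter here is an assumption.

```python
import re

import nltk

# The Punkt models are required for sent_tokenize/word_tokenize;
# newer NLTK releases may additionally require the "punkt_tab" resource.
nltk.download("punkt", quiet=True)


def automated_readability_index(text: str) -> float:
    """Compute the ARI from character, word, and sentence counts."""
    sentences = nltk.sent_tokenize(text)
    # Keep alphanumeric tokens only, mirroring the exclusion of
    # non-alphanumeric characters described above (an assumed filter).
    words = [t for t in nltk.word_tokenize(text) if re.fullmatch(r"[A-Za-z0-9]+", t)]
    if not words or not sentences:
        return 0.0
    characters = sum(len(w) for w in words)
    return 4.71 * (characters / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43


print(automated_readability_index(
    "Lovely afternoon at the vineyard. The tasting flight was generous and the guide was excellent."
))
```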
Review title length was operationalized as the total number of alphabetic and numeric characters in the review title, following prior research on heuristic cues that impact user attention and review visibility (Ghasemaghaei et al., 2018). Location disclosure was treated as a binary indicator variable, where “1” represented public disclosure of the reviewer’s geographic location and “0” indicated either omission or suppression of this information on the platform. This identity cue has been shown to enhance message authenticity and psychological proximity in online review contexts (Cao et al., 2018). Reviewer’s prior contributions, a continuous variable representing the total number of previous reviews posted by the user, was extracted directly and verified via cross-matching with user profile links. This metric serves as a proxy for the reviewer’s source credibility, a commonly accepted peripheral cue in eWOM literature. To address skewness, reviewer’s prior contributions was log-transformed in robustness checks.
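A brief, hypothetical sketch of how these variables could be derived from the crawler’s CSV export is shown below; the column names (title, location, contributions) are illustrative and do not correspond to a published codebook.

```python
import numpy as np
import pandas as pd

# Hypothetical column names, assuming the CSV exported by the crawler sketch above.
df = pd.read_csv("reviews.csv")

# Title length: count of alphabetic and numeric characters in the review title.
df["title_length"] = df["title"].fillna("").str.count(r"[A-Za-z0-9]")

# Location disclosure: 1 if a reviewer location was visible, 0 otherwise
# (assumes the crawler stored a boolean indicator).
df["location_disclosure"] = df["location"].astype(bool).astype(int)

# Reviewer's prior contributions, plus the log transform used in robustness checks.
df["contributions"] = pd.to_numeric(df["contributions"], errors="coerce").fillna(0)
df["log_contributions"] = np.log1p(df["contributions"])
```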

3.3. Data Analysis

The processed dataset was imported into the R programming environment (version 4.3.1), where statistical analyses were performed using the tidyverse, lmtest, and sandwich packages. Descriptive statistics were first computed to examine the central tendency and dispersion of each variable. Distributions were visualized through histograms and density plots to identify skewness or kurtosis, and normality was tested using the Shapiro–Wilk test. Bivariate correlations were examined to detect multicollinearity, and all variance inflation factors (VIF) were found to be below the conservative threshold of 2.5.
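The study ran these diagnostics in R; purely to make the workflow concrete, the sketch below shows equivalent checks in Python (scipy and statsmodels) on an illustrative processed file with assumed column names.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import shapiro
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Illustrative file and column names; the study performed these checks in R.
df = pd.read_csv("reviews_processed.csv")
predictors = ["readability", "title_length", "contributions", "location_disclosure"]

# Shapiro-Wilk normality tests (scipy warns that p-values are approximate for n > 5000).
for col in predictors + ["likes"]:
    stat, p = shapiro(df[col])
    print(f"{col}: W = {stat:.3f}, p = {p:.4f}")

# Variance inflation factors for the predictor matrix (threshold 2.5 in the study).
X = sm.add_constant(df[predictors])
for i, name in enumerate(X.columns):
    print(name, round(variance_inflation_factor(X.values, i), 2))
```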
The core analytical model consisted of an Ordinary Least Squares (OLS) regression, where the number of likes received by each review served as the dependent variable. The model is formally specified as
$$\text{Likes}_i = \beta_0 + \beta_1 \cdot \text{Readability}_i + \beta_2 \cdot \text{TitleLength}_i + \beta_3 \cdot \text{Contributions}_i + \beta_4 \cdot \text{LocationDisclosure}_i + \varepsilon_i$$
In this specification, the dependent variable $\text{Likes}_i$ represents the number of likes received by the $i$th review, serving as a proxy for peer-based endorsement and perceived helpfulness. The variable $\text{Readability}_i$ denotes the ARI score of the review text, which captures linguistic complexity and aligns with central-route information processing. The predictor $\text{TitleLength}_i$ refers to the number of alphanumeric characters in the review title. The predictor $\text{Contributions}_i$ captures the total number of previous reviews posted by the reviewer, reflecting source credibility. The binary variable $\text{LocationDisclosure}_i$ indicates whether the reviewer publicly disclosed their geographic location. The term $\varepsilon_i$ is the error term, capturing random variance not explained by the explanatory variables.
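The model was estimated in R, as noted above; for illustration only, an equivalent specification can be written with Python’s statsmodels formula interface. The heteroskedasticity-robust (HC3) covariance shown here is an assumption motivated by the sandwich package mentioned earlier, not a documented choice of the authors.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative equivalent of the OLS specification above; the study itself
# fit the model in R (with lmtest/sandwich). Column names are assumptions.
df = pd.read_csv("reviews_processed.csv")

model = smf.ols(
    "likes ~ readability + title_length + contributions + location_disclosure",
    data=df,
).fit(cov_type="HC3")  # sandwich-type robust standard errors; an assumed choice

print(model.summary())
```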

4. Results

4.1. Descriptive Analysis

Descriptive statistics are summarized in Table 2. On average, a review received 0.492 likes, with a median of 0 and a maximum of 24. Reviewer contributions showed a highly right-skewed distribution, with a mean of 25.84, a median of 3, and a maximum of 7176. Title length ranged from 0 to 128 characters, with a mean of 24.86 and a standard deviation of 14.62. The readability index varied between 0.001 and 51.27, with values largely concentrated between 3 and 6, and a mean of 4.49. Location disclosure was relatively balanced, with 53.6% of users choosing to display their geographic identity.
To illustrate the distributions in greater detail, Table 3 categorizes the independent variables into frequency bands. More than 74% of reviewers had made fewer than 10 previous contributions. Approximately 83% of titles were under 50 characters, and over 98% of reviews had readability indices below 10. Roughly 53.7% of users disclosed their location.
To visualize the data structure, histograms for all four independent variables and the dependent variable (likes) are presented in Figure 2. These plots reveal high skewness in “likes” and “contributions,” suggesting that only a small number of reviews receive a high number of likes or are posted by prolific reviewers.
To examine initial associations between the dependent and independent variables, Pearson correlation coefficients were computed and are presented in Table 4. Reviewer contributions (r = 0.07, p = 0.002) and title length (r = 0.04, p = 0.037) were positively and significantly correlated with likes. Readability showed a small negative but non-significant correlation (r = –0.05, p = 0.118). Location disclosure, although binary, also displayed a moderate and statistically significant correlation with likes (r = 0.14, p < 0.001). These results suggest that while linear associations are generally modest, several predictors are statistically significant even at the bivariate level. Table 4 and Figure 3 summarize these correlations and their statistical significance.

4.2. Regression Analysis

To evaluate the hypothesized relationships, an Ordinary Least Squares (OLS) regression was conducted using the number of likes as the dependent variable. The model incorporated four predictors: readability index, title length, reviewer’s prior contributions, and location disclosure. The results are summarized in Table 5.
The results indicate that location disclosure is a strong positive predictor of likes, suggesting that displaying geographic identity enhances perceived authenticity. The readability index has a significant negative effect, indicating that less complex (i.e., easier-to-read) content is more likely to receive peer endorsement. Title length also positively affects likes, though the effect size is smaller. Contributions have a positive but weaker influence on likes, supporting their role as a credibility cue. These results support all four hypotheses, albeit with varying degrees of effect size.
The baseline OLS model (Table 5) explained approximately 28.1% of the variance in likes (R2 = 0.281, adjusted R2 = 0.276). This represents a relatively strong model fit compared to prior studies on online review helpfulness, which often report lower explanatory power (e.g., R2 in the range of 0.1–0.2; Ghose & Ipeirotis, 2010).
To further explore potential interaction effects, an extended model was estimated with interaction terms between readability index and title length and between contributions and location disclosure. Table 6 presents the updated regression estimates.
The interaction between readability and title length is statistically significant and negative, indicating that the negative effect of readability on likes becomes stronger as the title gets longer. This suggests a compounding cognitive cost when both components demand greater processing effort. Contributions remain a statistically significant positive predictor in this extended model.
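For readers who wish to reproduce the extended specification, the interaction terms reported in Table 6 can be added with the same illustrative statsmodels setup sketched in Section 3.3; variable names remain hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reviews_processed.csv")  # same illustrative frame as in Section 3.3

# "a * b" expands to a + b + a:b, adding both interaction terms from Table 6.
extended = smf.ols(
    "likes ~ readability * title_length + contributions * location_disclosure",
    data=df,
).fit(cov_type="HC3")

print(extended.params.filter(like=":"))  # the two interaction coefficients
```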

4.3. ANOVA Analysis

A Type II ANOVA was conducted to assess the overall contribution of each independent variable to the explained variance in likes. The results are presented in Table 7.
The ANOVA supports the regression findings by confirming that all four predictors explain a statistically significant portion of the variance in likes. The results affirm that both central-route and peripheral-route cues significantly predict peer endorsement (likes) for TripAdvisor wine tour reviews. Location disclosure and text readability emerged as the most robust predictors, with additional nuance uncovered through interaction modeling. These findings will be further contextualized and interpreted in the following section.

5. Discussion

The present study sought to examine how structural and linguistic features of online reviews predict social endorsement behavior, as measured by the number of “likes” on TripAdvisor wine tour pages. Grounded in the Elaboration Likelihood Model (ELM), the findings affirm that both central-route (readability) and peripheral-route (location disclosure, title length, and reviewer’s prior contributions) cues play significant roles in determining review popularity. This section interprets these findings, connects them with the existing literature, and discusses their theoretical and practical implications.
Among all predictors, location disclosure demonstrated the strongest positive effect on the number of likes. This aligns with previous findings that suggest users interpret geographic information as a cue for transparency, authenticity, and shared identity (Srivastava & Kalro, 2019; Filieri et al., 2018). In online environments, especially those involving intangible products like travel experiences, disclosing one’s location may serve as a proxy for social presence—a key determinant of trust in computer-mediated communication. This finding suggests that readers place greater value on reviews perceived to be authored by “real people in real places,” reinforcing the importance of identity cues in eWOM credibility judgments (Xie et al., 2016). This result partially contradicts earlier studies in general e-commerce contexts where reviewer location was found to have a marginal or non-significant effect. One possible explanation lies in the tourism-specific context: travel decisions are more experience-based and less standardized than retail purchases, increasing reliance on contextual signals such as reviewer origin.
Contrary to expectations, the readability index exhibited a statistically significant negative effect on likes, indicating that more complex reviews are less likely to be endorsed. This supports findings by Baek et al. (2012) and Singh et al. (2017), who observed that easier-to-read reviews enhance user engagement by reducing cognitive load. In the context of ELM, this suggests that even when users are capable of central-route processing, they prefer cognitively accessible messages, particularly in high-choice environments saturated with competing content (Moradi & Zihagh, 2022). Interestingly, this finding diverges from traditional media studies, where greater syntactic complexity often correlates with perceived credibility or authority (Yeboah-Banin et al., 2018). The difference may be attributable to the informal, peer-based nature of online review environments, where users prioritize relatability over formal authority. As such, high readability may function less as a central cue and more as a hybrid heuristic, blurring the boundary between ELM routes in digital peer contexts (Cheung et al., 2012).
Title length had a modest but positive effect on likes, consistent with studies that emphasize the “attention gateway” role of titles (Biswas et al., 2022). Longer titles may increase scannability and preview informativeness, which are critical in low-effort browsing conditions typical of mobile-based travel planning. This finding reinforces the peripheral-route hypothesis: users form early impressions based on visible surface-level cues before engaging with full content (Park & Kim, 2008). Yet, the interaction term between readability and title length revealed a subtle but important moderation effect: the negative influence of complex readability becomes more pronounced when the title is also long. This suggests a cumulative processing burden, as users may disengage entirely when both the headline and body require substantial effort, aligning with dual-task processing theories that propose simultaneous high-load inputs can trigger selective attention withdrawal.
Reviewers’ prior contributions, while positive in direction, showed a weaker effect than expected. Prior literature has treated reviewer history as a key component of source credibility (Chua & Banerjee, 2015), yet in this study, its influence was only marginally significant. One possible explanation is the plateauing effect of reputation: after a certain threshold, additional contributions may no longer enhance perceived reliability. Another interpretation concerns platform design. On TripAdvisor, reviewers’ prior contributions are visible but less prominently displayed compared to star ratings or profile photos. As such, unless users actively seek it, this cue may be underutilized. This finding echoes Chevalier and Mayzlin (2006), who argue that user attention is often anchored on the most salient cues, which limits the persuasive power of background metrics such as contribution volume.
The regression model explained approximately 28% of the variance in likes, a level moderately higher than most prior studies in the domain of online review helpfulness, which often report R2 values in the range of 0.1 to 0.2 (Ghose & Ipeirotis, 2010). This suggests that the selected predictors provide a meaningful explanatory framework. The relatively strong model performance may also be attributed to domain specificity: wine tourism reviews may have more consistent structural and rhetorical patterns compared to general travel or product reviews.

6. Implications

6.1. Theoretical Implications

First, this study develops the Elaboration Likelihood Model by demonstrating that central and peripheral cues do not operate in isolation but may interact to jointly influence user response. The observed interaction between readability and title length (β = –0.00063, p = 0.018) suggests that when both textual complexity and headline length are high, user engagement declines. Rather than functioning as independent persuasive routes, these cues may compound users’ cognitive effort and reduce the likelihood of elaboration. This interaction highlights the need to reconsider the assumption of route independence in ELM and supports more nuanced perspectives that account for cross-cue dynamics (Kruglanski & Thompson, 1999).
Second, this study redefines readability not as a classic central-route variable promoting elaboration, but rather as a filtering mechanism that deters engagement under cognitive strain. Unlike traditional applications of ELM that associate complex arguments with deeper persuasion, our results suggest that high linguistic complexity may backfire in user-generated contexts. In peer-to-peer review ecosystems, simplicity facilitates persuasion not through argument strength, but through reducing processing effort. This challenges conventional understandings of elaboration quality and raises important questions about how digital audiences redefine “effortful thinking” in participatory environments.
Third, the findings shed new light on the signaling function of identity-based cues in eWOM. Location disclosure, as the strongest predictor, suggests that personal transparency operates as a powerful heuristic of authenticity. This extends the traditional Elaboration Likelihood Model (ELM) classification of peripheral cues by suggesting that identity signals may function as contextually central cues in settings where trust and experiential relevance are critical. In travel reviews, where readers actively seek empathetic and experience-proximal guidance, such cues may function similarly to testimonial anchoring, thus broadening the ELM’s applicability to domains characterized by social co-presence and contextual resonance.
Fourth, the modest explanatory power of reviewers’ prior contributions problematizes longstanding assumptions about source credibility hierarchies. While reviewer expertise is often framed as a dominant peripheral cue (Cheung et al., 2012), its weak effect here suggests the need for a more nuanced credibility construct that accounts for visibility, salience, and platform-specific affordances. This invites theoretical refinement of the source credibility dimension within ELM-informed models, emphasizing that not all cues labeled “credible” translate into actual persuasive leverage, particularly when they compete with more intuitively processed signals such as geography or message clarity.

6.2. Practical Implications

This study provides practical insights for various stakeholders involved in digital travel platforms and content creation. For individual reviewers, the findings suggest that transparent identity presentation and writing clarity significantly enhance social endorsement. Specifically, disclosing one’s geographic location, arguably a low-effort action, emerged as the strongest predictor of likes. This underscores the value of contextual self-disclosure in fostering trust and relatability, which in turn boosts peer recognition. Platforms may consider nudging users to share such contextual cues through subtle interface design, while content strategists can tailor recommendation algorithms to prioritize reviews that exhibit both clarity and authenticity. This implies that users seeking broader influence in digital communities can benefit from selectively revealing contextual cues that humanize their content. Similarly, avoiding overly complex language enhances readability and engagement, especially in saturated online environments where cognitive ease becomes a prerequisite for interaction. Reviewers aiming to build reputation or visibility should prioritize approachable language and self-identification as low-cost strategies to boost audience response.
For digital content strategists and tourism marketers, the results underscore the importance of peripheral design elements in shaping user attention and interaction. The modest yet positive effect of title length suggests that optimizing headline construction can materially affect downstream engagement. Particularly in travel and wine-related contexts, where choices are emotionally driven and search-based, well-crafted titles serve as perceptual gateways. However, the observed interaction between title length and readability suggests that overly elaborate phrasing may diminish effectiveness. Strategists should therefore prioritize concise yet informative titles alongside accessible review content to minimize cognitive load.
For platform designers and UI/UX architects, these insights offer actionable guidelines for structuring user-generated content displays. The weak effect of reviewers’ prior contributions relative to location disclosure highlights that even valid credibility cues may go unnoticed unless they are saliently presented. This calls for redesigning review interfaces to foreground contextual signals (e.g., location and temporal proximity) rather than relying solely on cumulative metrics like post count. Moreover, implementing readability analytics or title-length indicators in real time could empower users to adjust their content for maximum reach, effectively gamifying clarity without compromising authenticity.
For the broader travel and wine tourism industry, this study provides empirical evidence on how digital trust is negotiated in peer communication environments. As location-based transparency and message simplicity drive endorsement, destination marketing organizations (DMOs) and tour operators may consider incentivizing user reviews that include personal context. For instance, encouraging reviewers to include their hometown or travel motivations can enhance the perceived authenticity and emotional resonance of reviews. Furthermore, training tour staff to facilitate post-trip reflections with clear prompts may generate more readable content, thereby improving the visibility and persuasive power of authentic testimonials across platforms. While the number of likes does not directly indicate business performance outcomes such as increased sales or winery visitation, it functions as a key signal of social validation that shapes content ranking and visibility on digital platforms. This increased exposure can influence consumer awareness and trust formation, both of which are essential precursors to conversion in the tourism decision-making process. As such, optimizing review characteristics that enhance peer endorsement remains a strategically relevant practice for wine tourism marketers, even in the absence of direct economic attribution.

7. Limitations and Future Research

While this study provides robust insights into the linguistic and structural predictors of peer endorsement, it is limited by its exclusive focus on textual variables within a single content modality (El-Said, 2020). Nonverbal cues such as reviewer profile images or attached photos, commonly present in real-world reviews, were not incorporated into the analysis. Future research could adopt a multimodal approach by integrating visual and temporal features to better capture the full spectrum of factors influencing engagement. Additionally, while the present study focused on wine tourism, replicating the model across diverse travel verticals (e.g., adventure and wellness) would further validate the generalizability of these findings.
Furthermore, the data were exclusively sourced from TripAdvisor, which, while being one of the most widely used and credible eWOM platforms in tourism research, may limit the generalizability of findings to other review ecosystems with different user demographics or engagement mechanisms (e.g., Yelp and Google Reviews). Future studies could adopt a multi-platform design to validate whether the observed patterns persist across different digital environments. Moreover, while this study uses likes as a proxy for peer endorsement, it does not directly reflect business outcomes such as revenue or visitation. Future research could explore the causal link between digital endorsement metrics and firm-level performance in wine tourism. Finally, as with all user-generated content, the reviews analyzed may reflect certain biases such as self-selection or emotional extremity, which were not explicitly controlled for. Future research could incorporate content-based filtering or sentiment analysis to further address this issue.

Author Contributions

Conceptualization, J.Z.; Methodology, J.Z.; Software, J.Z.; Validation, X.W.; Formal analysis, J.Z.; Investigation, X.W. and Y.M.; Resources, Y.M.; Data curation, J.Z.; Writing—original draft, J.Z.; Writing—review & editing, X.W.; Visualization, J.Z.; Supervision, X.W.; Project administration, X.W.; Funding acquisition, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Department of Education of Guangdong Province (grant number UICR0400019-23), the Department of Science and Technology of Guangdong Province (Guangdong and Hong Kong Universities “1+1+1” Joint Research Collaboration Scheme), and the Guangdong Planning Office of Philosophy and Social Science (grant number GD25YSG29).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in TripAdvisor at https://www.tripadvisor.com/.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Alebaki, M., Psimouli, M., & Kladou, S. (2022). Social media for wine tourism: The digital winescape of Cretan wineries in the era of COVID-19. In Global strategic management in the service industry: A perspective of the New Era (pp. 81–98). Emerald Publishing Limited.
2. Baek, H., Ahn, J., & Choi, Y. (2012). Helpfulness of online consumer reviews: Readers’ objectives and review cues. International Journal of Electronic Commerce, 17(2), 99–126.
3. Bahtar, A. Z., & Muda, M. (2016). The impact of User–Generated Content (UGC) on product reviews towards online purchasing—A conceptual framework. Procedia Economics and Finance, 37, 337–342.
4. Biswas, B., Sengupta, P., Kumar, A., Delen, D., & Gupta, S. (2022). A critical assessment of consumer reviews: A hybrid NLP-based methodology. Decision Support Systems, 159, 113799.
5. Black, H. G., & Kelley, S. W. (2009). A storytelling perspective on online customer reviews reporting service failure and recovery. Journal of Travel and Tourism Marketing, 26(1), 169–179.
6. Buttle, F. A. (1998). Word of mouth: Understanding and managing referral marketing. Journal of Strategic Marketing, 6(3), 241–254.
7. Cacioppo, J. T., Petty, R. E., Kao, C. F., & Rodriguez, R. (1986). Central and peripheral routes to persuasion: An individual difference perspective. Journal of Personality and Social Psychology, 51(5), 1032.
8. Cao, Z., Hui, K. L., & Xu, H. (2018). An economic analysis of peer disclosure in online social communities. Information Systems Research, 29(3), 546–566.
9. Cheng, X., & Zhou, M. (2010, August 24–26). Study on effect of eWOM: A literature review and suggestions for future research. 2010 International Conference on Management and Service Science (pp. 1–4), Wuhan, China.
10. Cheung, C. M. Y., Sia, C. L., & Kuan, K. K. (2012). Is this review believable? A study of factors affecting the credibility of online consumer reviews from an ELM perspective. Journal of the Association for Information Systems, 13(8), 2.
11. Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345–354.
12. Christou, E., & Chatzigeorgiou, C. (2020). Adoption of social media as distribution channels in tourism marketing: A qualitative analysis of consumers’ experiences. Journal of Tourism, Heritage & Services Marketing (JTHSM), 6(1), 25–32.
13. Chu, S. C., & Kim, Y. (2011). Determinants of consumer engagement in electronic word-of-mouth (eWOM) in social networking sites. International Journal of Advertising, 30(1), 47–75.
14. Chua, A. Y., & Banerjee, S. (2013). Reliability of reviews on the internet: The case of TripAdvisor. In World congress on engineering & computer science: International conference on internet and multimedia technologies (pp. 453–457). York.
15. Chua, A. Y., & Banerjee, S. (2015). Understanding review helpfulness as a function of reviewer reputation, review rating, and review depth. Journal of the Association for Information Science and Technology, 66(2), 354–362.
16. Colombini, D. C. (2015). Wine tourism in Italy. International Journal of Wine Research, 7(1), 29–35.
17. Coursaris, C., Osch, W., & Albini, A. (2017). What drives perceptions of review trustworthiness in electronic word-of-mouth: An experimental study of TripAdvisor. In Atas da conferência da associação portuguesa de sistemas de informação (Vol. 17, pp. 111–126). CAPSI 2017 Proceedings.
18. Cruz, R. A., & Lee, H. J. (2016). The effects of sentiment and readability on useful votes for customer reviews with count type review usefulness index. Journal of Intelligence and Information Systems, 22(1), 43–61.
19. Donthu, N., Kumar, S., Pandey, N., Pandey, N., & Mishra, A. (2021). Mapping the electronic word-of-mouth (eWOM) research: A systematic review and bibliometric analysis. Journal of Business Research, 135, 758–773.
20. El-Said, O. A. (2020). Impact of online reviews on hotel booking intention: The moderating role of brand image, star category, and price. Tourism Management Perspectives, 33, 100604.
21. Erkan, I., & Evans, C. (2018). Social media or shopping websites? The influence of eWOM on consumers’ online purchase intentions. Journal of Marketing Communications, 24(6), 617–632.
22. Fan, Y. W., & Miao, Y. F. (2012). Effect of electronic word-of-mouth on consumer purchase intention: The perspective of gender differences. International Journal of Electronic Business Management, 10(3), 175.
23. Ferreira, S. L., & Hunter, C. A. (2017). Wine tourism development in South Africa: A geographical analysis. Tourism Geographies, 19(5), 676–698.
24. Figueroa B, E., & Rotarou, E. S. (2018). Challenges and opportunities for the sustainable development of the wine tourism sector in Chile. Journal of Wine Research, 29(4), 243–264.
25. Filieri, R. (2016). What makes an online consumer review trustworthy? Annals of Tourism Research, 58, 46–64.
26. Filieri, R., Acikgoz, F., Ndou, V., & Dwivedi, Y. (2020). Is TripAdvisor still relevant? The influence of review credibility, review usefulness, and ease of use on consumers’ continuance intention. International Journal of Contemporary Hospitality Management, 33(1), 199–223.
27. Filieri, R., Hofacker, C. F., & Alguezaui, S. (2018). What makes information in online consumer reviews diagnostic over time? The role of review relevancy, factuality, currency, source credibility and ranking score. Computers in Human Behavior, 80, 122–131.
28. Filieri, R., & McLeay, F. (2014). E-WOM and accommodation: An analysis of the factors that influence travelers’ adoption of information from online reviews. Journal of Travel Research, 53(1), 44–57.
29. Gerdt, S. O., Wagner, E., & Schewe, G. (2019). The relationship between sustainability and customer satisfaction in hospitality: An explorative investigation using eWOM as a data source. Tourism Management, 74, 155–172.
30. Ghasemaghaei, M., Eslami, S. P., Deal, K., & Hassanein, K. (2018). Reviews’ length and sentiment as correlates of online reviews’ ratings. Internet Research, 28(3), 544–563.
31. Ghose, A., & Ipeirotis, P. G. (2010). Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and Data Engineering, 23(10), 1498–1512.
32. Gómez, M., Pratt, M. A., & Molina, A. (2019). Wine tourism research: A systematic review of 20 vintages from 1995 to 2014. Current Issues in Tourism, 22(18), 2211–2249.
33. Hall, C. M., Johnson, G., Cambourne, B., Macionis, N., Mitchell, R., & Sharples, L. (2009). Wine tourism: An introduction. In Wine tourism around the world (pp. 1–23). Routledge.
34. Huang, A. H., Chen, K., Yen, D. C., & Tran, T. P. (2015). A study of factors that contribute to online review helpfulness. Computers in Human Behavior, 48, 17–27.
35. Jeacle, I., & Carter, C. (2011). In TripAdvisor we trust: Rankings, calculative regimes and abstract systems. Accounting, Organizations and Society, 36(4–5), 293–309.
36. Jones, M. F., Singh, N., & Hsiung, Y. (2015). Determining the critical success factors of the wine tourism region of Napa from a supply perspective. International Journal of Tourism Research, 17(3), 261–271.
37. Karagiannis, D., & Metaxas, T. (2020). Sustainable wine tourism development: Case studies from the Greek region of Peloponnese. Sustainability, 12(12), 5223.
  38. Kruglanski, A. W., & Thompson, E. P. (1999). Persuasion by a single route: A view from the unimodel. Psychological Inquiry, 10(2), 83–109. [Google Scholar] [CrossRef]
  39. Lee, H. A., Law, R., & Murphy, J. (2011). Helpful reviewers in TripAdvisor, an online travel community. Journal of Travel & Tourism Marketing, 28(7), 675–688. [Google Scholar] [CrossRef]
  40. Li, Y., & Xie, Y. (2020). Is a picture worth a thousand words? An empirical study of image content and social media engagement. Journal of Marketing Research, 57(1), 1–19. [Google Scholar] [CrossRef]
  41. Litvin, S. W., Goldsmith, R. E., & Pan, B. (2008). Electronic word-of-mouth in hospitality and tourism management. Tourism Management, 29(3), 458–468. [Google Scholar] [CrossRef]
  42. Litvin, S. W., Goldsmith, R. E., & Pan, B. (2018). A retrospective view of electronic word-of-mouth in hospitality and tourism management. International Journal of Contemporary Hospitality Management, 30(1), 313–325. [Google Scholar] [CrossRef]
  43. López, M., & Sicilia, M. (2014). Determinants of E-WOM influence: The role of consumers’ internet experience. Journal of Theoretical and Applied Electronic Commerce Research, 9(1), 28–43. [Google Scholar] [CrossRef]
  44. Luo, T., Freeman, C., & Stefaniak, J. (2020). “Like, comment, and share”—Professional development through social media in higher education: A systematic review. Educational Technology Research and Development, 68(4), 1659–1683. [Google Scholar] [CrossRef]
  45. Meek, S., Wilk, V., & Lambert, C. (2021). A big data exploration of the informational and normative influences on the helpfulness of online restaurant reviews. Journal of Business Research, 125, 354–367. [Google Scholar] [CrossRef]
  46. Melián-González, S., Bulchand-Gidumal, J., & González López-Valcárcel, B. (2013). Online customer reviews of hotels: As participation increases, better evaluation is obtained. Cornell Hospitality Quarterly, 54(3), 274–283. [Google Scholar] [CrossRef]
  47. Meuter, M. L., McCabe, D. B., & Curran, J. M. (2013). Electronic word-of-mouth versus interpersonal word-of-mouth: Are all forms of word-of-mouth equally influential? Services Marketing Quarterly, 34(3), 240–256. [Google Scholar] [CrossRef]
  48. Molina, A., Gómez, M., González-Díaz, B., & Esteban, Á. (2015). Market segmentation in wine tourism: Strategies for wineries and destinations in Spain. Journal of Wine Research, 26(3), 192–224. [Google Scholar] [CrossRef]
  49. Molinillo, S., Fernández-Morales, A., Ximénez-de-Sandoval, J. L., & Coca-Stefaniak, A. (2016). Hotel assessment through social media–TripAdvisor as a case study. Tourism & Management Studies, 12(1), 15–24. [Google Scholar]
  50. Montella, M. M. (2017). Wine tourism and sustainability: A review. Sustainability, 9(1), 113. [Google Scholar] [CrossRef]
  51. Moradi, M., & Zihagh, F. (2022). A meta-analysis of the elaboration likelihood model in the electronic word of mouth literature. International Journal of Consumer Studies, 46(5), 1900–1918. [Google Scholar] [CrossRef]
  52. Park, D. H., & Kim, S. (2008). The effects of consumer knowledge on message processing of electronic word-of-mouth via online consumer reviews. Electronic Commerce Research and Applications, 7(4), 399–410. [Google Scholar] [CrossRef]
  53. Petty, R. E., Cacioppo, J. T., & Goldman, R. (1981). Personal involvement as a determinant of argument-based persuasion. Journal of Personality and Social Psychology, 41(5), 847. [Google Scholar] [CrossRef]
  54. Pyle, M. A. (2010). Word-of-mouth: Are we hearing what the consumer is saying? Advances in Consumer Research, 37, 341. [Google Scholar]
  55. Reyes-Menendez, A., Saura, J. R., & Martinez-Navalon, J. G. (2019). The impact of e-WOM on hotels management reputation: Exploring tripadvisor review credibility with the ELM model. IEEE Access, 7, 68868–68877. [Google Scholar] [CrossRef]
  56. Rita, P., Ramos, R., Borges-Tiago, M. T., & Rodrigues, D. (2022). Impact of the rating system on sentiment and tone of voice: A Booking.com and TripAdvisor comparison study. International Journal of Hospitality Management, 104, 103245. [Google Scholar] [CrossRef]
  57. Sann, R., Lai, P. C., & Chen, C. T. (2021). Review papers on eWOM: Prospects for hospitality industry. Anatolia, 32(2), 177–206. [Google Scholar] [CrossRef]
  58. Santos, V. R., Ramos, P., Almeida, N., & Santos-Pavón, E. (2019). Wine and wine tourism experience: A theoretical and conceptual review. Worldwide Hospitality and Tourism Themes, 11(6), 718–729. [Google Scholar] [CrossRef]
  59. Sayfuddin, A. T. M., & Chen, Y. (2021). The signaling and reputational effects of customer ratings on hotel revenues: Evidence from TripAdvisor. International Journal of Hospitality Management, 99, 103065. [Google Scholar] [CrossRef]
  60. Singh, J. P., Irani, S., Rana, N. P., Dwivedi, Y. K., Saumya, S., & Roy, P. K. (2017). Predicting the “helpfulness” of online consumer reviews. Journal of Business Research, 70, 346–355. [Google Scholar] [CrossRef]
  61. Srivastava, V., & Kalro, A. D. (2019). Enhancing the helpfulness of online consumer reviews: The role of latent (content) factors. Journal of Interactive Marketing, 48, 33–50. [Google Scholar] [CrossRef]
  62. Statista. (2022). Global wine tourism market size 2020–2030. Available online: https://www.statista.com/topics/8997/wine-tourism-in-european-countries/ (accessed on 15 January 2023).
  63. Tafel, M. C., & Szolnoki, G. (2020). Relevance and challenges of wine tourism in Germany: A winery operators’ perspective. International Journal of Wine Business Research, 33(1), 60–79. [Google Scholar] [CrossRef]
  64. Themba, G., & Mulala, M. (2013). Brand-related eWOM and its effects on purchase decisions: An empirical study of university of Botswana students. International Journal of Business and Management, 8(8), 31. [Google Scholar] [CrossRef]
  65. Torres, J. P., Barrera, J. I., Kunc, M., & Charters, S. (2021). The dynamics of wine tourism adoption in Chile. Journal of Business Research, 127, 474–485. [Google Scholar] [CrossRef]
  66. TripAdvisor. (2018). 24 insights to shape your Tripadvisor strategy. Tripadvisor Insights. Available online: https://www.tripadvisor.co.uk/TripAdvisorInsights/w710 (accessed on 18 March 2025).
  67. Turel, O., & Qahri-Saremi, H. (2024). Role of “likes” and “dislikes” in influencing user behaviors on social media. Journal of Management Information Systems, 41(2), 515–545. [Google Scholar] [CrossRef]
  68. Ukpabi, D. C., & Karjaluoto, H. (2018). What drives travelers’ adoption of user-generated content? A literature review. Tourism Management Perspectives, 28, 251–273. [Google Scholar] [CrossRef]
  69. Wang, X., Tang, L. R., & Kim, E. (2019). More than words: Do emotional content and linguistic style matching matter on restaurant review helpfulness? International Journal of Hospitality Management, 77, 438–447. [Google Scholar] [CrossRef]
  70. Westbrook, R. A. (1987). Product/consumption-based affective responses and postpurchase processes. Journal of Marketing Research, 24(3), 258–270. [Google Scholar] [CrossRef]
  71. Xie, K. L., Chen, C., & Wu, S. (2016). Online consumer review factors affecting offline hotel popularity: Evidence from TripAdvisor. Journal of Travel & Tourism Marketing, 33(2), 211–223. [Google Scholar]
  72. Yeboah-Banin, A. A., Fosu, M., & Tsegah, M. (2018). Linguistic complexity and second language advertising audiences: Is there a case for linguistic exclusion? Journal of Communication Inquiry, 42(1), 70–90. [Google Scholar] [CrossRef]
  73. Zhou, S., & Guo, B. (2017). The order effect on online review helpfulness: A social influence perspective. Decision Support Systems, 93, 77–87. [Google Scholar] [CrossRef]
Figure 1. Variables involved in data analysis after data processing.
Figure 2. Distributions of key variables.
Figure 3. Correlation matrix of variables.
Table 1. Summary of selected studies of wine tourism since 2015.

Study | Study Type | Purpose | Destination
Colombini (2015) | Case study | To trace the evolution of wine tourism in Italy since its inception in 1993. | Italy
Ferreira and Hunter (2017) | Case study | To examine how wine tourism has developed geographically and institutionally in South Africa. | South Africa
Karagiannis and Metaxas (2020) | Case study | To assess the impact and strategic significance of the Peloponnesian wine route initiative. | Greece
Jones et al. (2015) | Case study | To explore the main determinants behind Napa Valley’s emergence as a leading wine tourism destination. | Napa
Figueroa B and Rotarou (2018) | Case study | To evaluate the present state and structural challenges of Chile’s emerging wine tourism industry. | Chile
Molina et al. (2015) | Case study | To segment and profile different types of tourists who visit wineries in Spain. | Spain
Torres et al. (2021) | Case study | To analyze the primary enablers of wine tourism growth in Chile and their long-term dynamics. | Chile
Montella (2017) | Literature review | To synthesize current academic discourse on wine tourism and outline future research trajectories. | N/A
Gómez et al. (2019) | Literature review | To conduct a systematic review of wine tourism studies published between 1995 and 2014. | N/A
Santos et al. (2019) | Literature review | To develop a conceptual understanding of wine tourism experiences and suggest avenues for further theoretical inquiry. | N/A
Table 2. Summary of descriptive statistics.

Variable | Mean | Median | Std | Min | Max
likes | 0.492 | 0 | 0.941 | 0 | 24
contributions | 25.835 | 3 | 148.375 | 0 | 7176
title_length | 24.856 | 22 | 14.618 | 0 | 128
readability_index | 4.485 | 4.284 | 2.040 | 0.001 | 51.271
location disclosure | 0.536 | 1 | 0.499 | 0 | 1
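For readers who wish to reproduce a summary of this kind, the sketch below shows one way to compute the Table 2 statistics with pandas. It is a minimal illustration only: the DataFrame, its toy values, and the column names (likes, contributions, title_length, readability_index, location_disclosure) are assumptions, since the paper does not publish its processing code.

```python
import pandas as pd

# Toy stand-in for the 7942 scraped TripAdvisor reviews; values and column
# names are illustrative assumptions, not the authors' actual dataset.
df = pd.DataFrame({
    "likes": [0, 1, 0, 3, 0, 2, 1, 0],
    "contributions": [2, 15, 0, 120, 7, 250, 60, 180],
    "title_length": [12, 28, 9, 44, 21, 33, 18, 25],
    "readability_index": [3.2, 5.1, 2.8, 7.9, 4.4, 6.0, 3.9, 5.5],
    "location_disclosure": [1, 0, 1, 1, 0, 1, 0, 1],  # 1 = home location shown
})

# Mean, median, standard deviation, min and max per variable, as in Table 2.
summary = df.agg(["mean", "median", "std", "min", "max"]).T
print(summary.round(3))
```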
Table 3. Frequency distribution of review attributes.

Variable | Range | Count | Percentage
Contributions | 0–10 | 5905 | 74.40%
Contributions | 11–50 | 1308 | 16.50%
Contributions | 51–150 | 464 | 5.80%
Contributions | 151–200 | 67 | 0.80%
Contributions | >200 | 198 | 2.50%
Title Length (chars) | 0–10 | 878 | 11.10%
Title Length (chars) | 11–20 | 2930 | 36.90%
Title Length (chars) | 21–50 | 4080 | 51.30%
Title Length (chars) | 51–100 | 41 | 0.50%
Readability Index | 0–5 | 5432 | 68.40%
Readability Index | 6–10 | 2333 | 29.40%
Readability Index | >10 | 177 | 2.20%
Location Disclosure | Not Disclosed | 3667 | 46.20%
Location Disclosure | Disclosed | 4275 | 53.80%
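The banded counts in Table 3 amount to binning each variable into the reported ranges and tabulating frequencies. A minimal sketch for the contributions variable, using the same hypothetical schema as above (bin edges follow the ranges in the table):

```python
import pandas as pd

# Hypothetical contributions column; toy values only.
df = pd.DataFrame({"contributions": [2, 15, 0, 120, 7, 250, 60, 180]})

# Bin into the ranges reported in Table 3 and tabulate counts and percentages.
bins = [-1, 10, 50, 150, 200, float("inf")]
labels = ["0–10", "11–50", "51–150", "151–200", ">200"]
banded = pd.cut(df["contributions"], bins=bins, labels=labels)

counts = banded.value_counts().sort_index()
percentages = (counts / len(df) * 100).round(2)
print(pd.DataFrame({"Count": counts, "Percentage": percentages}))
```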
Table 4. Pearson correlation coefficients and significance tests among key variables.

Variable 1 | Variable 2 | Pearson’s r | p-Value | Significance
Likes | Contributions | 0.07 | 0.002 | Yes (p < 0.01)
Likes | Title Length | 0.04 | 0.037 | Yes (p < 0.05)
Likes | Readability | −0.05 | 0.118 | No
Likes | Location Disclosure | 0.14 | <0.001 | Yes (p < 0.001)
Contributions | Title Length | 0.03 | 0.092 | No
Contributions | Readability | −0.03 | 0.097 | No
Title Length | Readability | 0.01 | 0.618 | No
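Each row of Table 4 is a standard Pearson correlation test between two of the review-level variables. A hedged sketch with scipy, again on toy data with assumed column names, is shown below.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr

# Toy stand-in for the review data; column names are assumptions.
df = pd.DataFrame({
    "likes": [0, 1, 0, 3, 0, 2, 1, 0],
    "contributions": [2, 15, 0, 120, 7, 250, 60, 180],
    "title_length": [12, 28, 9, 44, 21, 33, 18, 25],
    "readability_index": [3.2, 5.1, 2.8, 7.9, 4.4, 6.0, 3.9, 5.5],
    "location_disclosure": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Pairwise Pearson r and two-sided p-values, mirroring the layout of Table 4.
for a, b in combinations(df.columns, 2):
    r, p = pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")
```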
Table 5. OLS regression coefficient estimates.

Predictor | Coefficient | Std. Error | z-Value | p-Value | 95% CI (Lower) | 95% CI (Upper)
Intercept | 0.395 | 0.028 | 14.34 | <0.001 *** | 0.341 | 0.449
Readability | −0.020 | 0.004 | −4.9 | <0.001 *** | −0.028 | −0.012
Title Length | 0.002 | 0.001 | 2.68 | 0.007 ** | 0.000 | 0.004
Contributions | 0.000 | 0.000 | 1.78 | 0.075 | −0.000 | 0.001
Location | 0.241 | 0.021 | 11.67 | <0.001 *** | 0.200 | 0.281
*** p < 0.001, ** p < 0.01.
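Table 5 reports a main-effects regression of likes on the four predictors. Assuming an ordinary least squares specification (as the table’s label suggests), a minimal statsmodels sketch on hypothetical data would be:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data with the assumed column names; substitute the real review table.
df = pd.DataFrame({
    "likes": [0, 1, 0, 3, 0, 2, 1, 0],
    "readability_index": [3.2, 5.1, 2.8, 7.9, 4.4, 6.0, 3.9, 5.5],
    "title_length": [12, 28, 9, 44, 21, 33, 18, 25],
    "contributions": [2, 15, 0, 120, 7, 250, 60, 180],
    "location_disclosure": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Main-effects model mirroring Table 5: coefficients, standard errors,
# test statistics, p-values, and 95% confidence intervals.
model = smf.ols(
    "likes ~ readability_index + title_length + contributions + location_disclosure",
    data=df,
).fit()
print(model.summary())
```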
Table 6. Extended regression model with interaction terms.

Predictor | Coefficient | Std. Error | p-Value
Intercept | 0.3028 | 0.0409 | <0.001
Readability Index | −0.0056 | 0.0069 | 0.416
Title Length | 0.005 | 0.0016 | 0.0019
Readability × Title Length | −0.0006 | 0.0003 | 0.0183
Contributions | 0.0075 | 0.0026 | 0.0038
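The extended model in Table 6 adds a readability × title-length product term, so that the marginal effect of textual complexity depends on title length. In formula notation this is a single interaction term; a sketch under the same toy-data assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data; only the columns needed for the interaction model (names assumed).
df = pd.DataFrame({
    "likes": [0, 1, 0, 3, 0, 2, 1, 0],
    "readability_index": [3.2, 5.1, 2.8, 7.9, 4.4, 6.0, 3.9, 5.5],
    "title_length": [12, 28, 9, 44, 21, 33, 18, 25],
    "contributions": [2, 15, 0, 120, 7, 250, 60, 180],
})

# "*" expands to both main effects plus the readability_index:title_length
# interaction, matching the predictors listed in Table 6.
interaction_model = smf.ols(
    "likes ~ readability_index * title_length + contributions",
    data=df,
).fit()
print(interaction_model.params)
```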
Table 7. ANOVA summary table.

Predictor | Sum Sq | Mean Sq | F Value | Pr(>F)
location_disclosed | 145.5 | 145.478 | 168.19 | <2.2 × 10⁻¹⁶
contributions | 16.9 | 16.894 | 19.53 | 1.00 × 10⁻⁵
title_length | 6.8 | 6.795 | 7.86 | 0.00508
readability_index | 14 | 14.011 | 16.2 | 5.76 × 10⁻⁵
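Table 7’s layout (Sum Sq, Mean Sq, F, Pr(>F)) resembles a sequential, R-style ANOVA decomposition of the fitted regression; whether Type I or another sum-of-squares type was used is not stated, so the typ argument below is an assumption. A sketch with statsmodels:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data with the assumed column names used throughout these sketches.
df = pd.DataFrame({
    "likes": [0, 1, 0, 3, 0, 2, 1, 0],
    "location_disclosed": [1, 0, 1, 1, 0, 1, 0, 1],
    "contributions": [2, 15, 0, 120, 7, 250, 60, 180],
    "title_length": [12, 28, 9, 44, 21, 33, 18, 25],
    "readability_index": [3.2, 5.1, 2.8, 7.9, 4.4, 6.0, 3.9, 5.5],
})

fit = smf.ols(
    "likes ~ location_disclosed + contributions + title_length + readability_index",
    data=df,
).fit()

# Sequential (Type I) sums of squares; typ=1 is an assumption chosen to
# match the Pr(>F) column style of Table 7.
print(sm.stats.anova_lm(fit, typ=1))
```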
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

