Article

Face of Cross-Dissimilarity: Role of Competitors’ Online Reviews Based on Semi-Supervised Textual Polarity Analysis

School of Economics and Management, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(5), 934; https://doi.org/10.3390/electronics14050934
Submission received: 1 February 2025 / Revised: 22 February 2025 / Accepted: 25 February 2025 / Published: 26 February 2025

Abstract

Existing online review research has not fully captured consumer purchasing behavior in complex decision-making environments, particularly in contexts involving multiple product comparisons and conflicting review perspectives. This study investigates how focal product purchase decisions are affected when consumers compare multiple products and face inconsistent information. Based on online review data from JD.com, we propose a semi-supervised deep learning model to analyze consumers’ sentiment polarity toward product attributes. The method establishes implicit relationships between labeled and unlabeled data through consistency regularization. Subsequently, we conceptualize three types of online review dissimilarity: rating-sentiment dissimilarity, cross-review dissimilarity, and brand dissimilarity, and employ regression models to examine the impact of competing products’ online reviews on focal product sales. The results indicate that annotating unlabeled data with pseudo-labels and using them for model training achieves more accurate sentiment classification than using labeled data alone. Moreover, positive (negative) sentiment attributes of competing products have a significant negative (positive) effect on focal product purchases, and online review dissimilarity moderates these spillover effects. Notably, the spillover effects are more pronounced when competing products are from the same brand rather than from different brands. The findings highlight the heterogeneous effects of positive and negative sentiment and provide a new perspective on dissimilarity, enriching the understanding of online review spillover effects while offering practical guidance for resource allocation decisions by companies and platforms.

1. Introduction

Online reviews have received considerable attention for their impact on consumers’ purchasing behaviors. When making a purchase decision online, consumers often find it difficult to rely solely on seller-provided information. Consequently, they turn to online reviews from other consumers for more detailed product information [1]. Recently, there has been growing interest in examining how online reviews affect sales [2]. For example, Wang et al. [3] investigated how online reviews affect the offline sales of high-involvement products. Liu et al. [4] studied how the dispersion of online reviews influences consumers’ purchasing decisions. However, in real-world scenarios, consumers typically assess several related products during a single shopping experience. This process, known as “market basket selection” [5], is a key factor influencing sales.
Our research primarily focuses on competing products. The term “spillover” was used by Borah et al. to describe the phenomenon in which negative evaluations of one product spill over into negative evaluations of another product [6]. When searching for a product online, consumers consider not only the reviews of the focal product (the one they intend to purchase) but also the reviews of its competitors [7]. Kwark et al. [7] explored how the online review ratings of related products can influence consumers’ purchase decisions for focal products through the spillover effect. Deng et al. [8] developed a fixed effects model to investigate the spillover effects of online review ratings. However, research on how the more detailed review content of competing products (such as sentiment polarity) affects the purchase of focal products remains scarce.
In online reviews, consumers’ expressions of product attributes often contain valuable opinion and sentiment information, which is important for companies to improve product quality and provide potential consumers with the required information [9,10]. With the expansion of online product categories, some platforms (e.g., JD.com) encourage consumers to evaluate the usage of products in various aspects. By analyzing different sentiment dimensions in online reviews, it is possible to reveal consumers’ perceptions of specific product attributes (e.g., quality, appearance, etc.), thereby reducing the uncertainty of product information [11]. However, consumers may have differing opinions about product attributes when writing online reviews. When multiple online reviews present conflicting evaluations of the same product, online review dissimilarity arises, meaning that the same attribute may be positively evaluated in one online review and negatively evaluated in another [12]. Therefore, it is imperative to explore the unique effects of positive and negative sentiment.
In addition, it is crucial to understand how contextual factors moderate the spillover effects of online reviews for competing products. Contextual factors refer to the associations between individual pieces of information within an information set, such as review dissimilarity. Online review dissimilarity refers to contradictions that emerge either within individual reviews or across multiple reviews along various dimensions [13]. While prior research has discussed the empirical evidence of the moderating role of contextual factors in online review ratings, differences in online review content have been neglected [12]. In a single online review, dissimilarity often manifests as two-sidedness, where both the advantages and disadvantages of a product are presented [12]. Compared to a single-sided online review, two-sided online reviews offer a broader range of perspectives, helping consumers assess more fully whether a product meets their needs [14]. For example, in online reviews of headphones, some consumers might describe the headphones as lightweight, while others might find them bulky. Although consumers may be more inclined to accept two-sided online reviews, we still have limited knowledge of how they respond to inconsistencies across multiple online reviews [13]. In addition, prospect theory posits that evaluators respond to positive and adverse information in distinct ways [15]. Therefore, considering the direction of online review dissimilarity helps identify consumers’ heterogeneous perceptions of review content.
This study aims to address the following research question: How do the attribute sentiment polarities and online review dissimilarity of competing products influence the sales of focal products?
To answer this question, we first employ a semi-supervised deep learning model to predict the sentiment polarity in online reviews. Then, we construct a spillover effect regression model and examine online review dissimilarity across three dimensions: rating-sentiment dissimilarity, which reflects the discrepancy between review ratings and textual sentiment within individual reviews; cross-online review dissimilarity, which manifests as significant variations in content characteristics across different reviews of the same product; and product brand dissimilarity, which captures the comparative differences between the online reviews of same-brand and different-brand competitors and those of the focal product. The findings indicate that sentiment attributes in competing products’ online reviews generate distinct spillover effects: positive attribute sentiment has a negative spillover effect on focal product sales, while negative attribute sentiment produces a positive spillover effect. In particular, when competing products exhibit significant differences in online reviews, their positive online reviews promote focal product sales, while their negative online reviews tend to suppress them. Furthermore, the spillover effects of online reviews are more pronounced between products of the same brand than between products of different brands. The contributions of this study are as follows:
  • Sentiment Analysis: We propose a semi-supervised deep learning approach for capturing attribute-specific sentiment polarity in online reviews. The approach can identify and quantify fine-grained sentiment features in online reviews, thereby revealing their impact on product sales.
  • Cross-dissimilarity: We develop a conceptual framework of online review dissimilarity across three dimensions. Unlike previous research focusing on brand effects [16], we propose and analyze how online review dissimilarity across same-brand and different-brand products influences focal product sales.
  • Spillover effect: We investigate the moderating effects of online review dissimilarity from the perspective of competing products, uncovering the differential spillover mechanisms of online review content across positive and negative contexts.
The structure of this paper is as follows: Section 2 reviews the literature on the spillover effect of online reviews and online review dissimilarity. The hypotheses are detailed in Section 3. In Section 4, we present the sentiment analysis and econometric model methods. Section 5 provides the sentiment analysis and empirical results. Section 6 summarizes the research and discusses the results, implications, and limitations.

2. Literature Review

2.1. Spillover Effect of Online Reviews

Online reviews reveal consumers’ attitudes and experiences regarding their purchased products [17]. According to consumer behavior theory, when individuals realize that their actions can benefit others, they tend to engage more actively in the decision-making process, which enhances the richness of online review content [18]. Some studies have explored the impact of online review characteristics on consumers’ purchase intentions or product sales. For example, Cai et al. pointed out that the readability of online reviews directly affects product sales, while the amount of information in the reviews plays a moderating role [19]. Zhai et al. utilized natural language processing techniques to examine the effect of online review content characteristics on the sales of remanufactured products [20].
The spillover effect describes how an event occurring in one context can influence individuals in another context [21]. When choosing a product, consumers typically compare multiple options. Since competing products often offer similar features, changes in one product may affect consumers’ perception and choice of other products [7,22]. Some studies have further explored the spillover effects between competitors. For example, Borah et al. investigated the impact of product recalls on competing brands by applying vector autoregression models [6]. Wu et al. investigated brand spillover effects, taking into account company and market characteristics [23]. Xu et al. examined the potential spillover effect in the context of room rentals on room-sharing platforms [24].
Some studies have examined the spillover effects of online review ratings [7]. For example, Deng et al. utilized an open dataset to build a two-factor fixed effects model, analyzing how restaurant online reviews influence hotel ratings through spillover effects [8]. Kwark et al. examined the spillover effects of the online review ratings of related products on purchases by analyzing clickstream data [7]. Although online review ratings provide consumers with important product information, they often fail to capture the real-life experience of using the product. Existing research generally indicates the significant role of online reviews in boosting product sales [18]. However, discussions on the sentiment spillover effect in online reviews remain insufficient, especially regarding product attributes, where related discussions are even more scarce. Therefore, this study aims to broaden research into the spillover effects of online review content, with a focus on examining the role and impact of product attributes and sentiment polarity.

2.2. Online Review Dissimilarity

Consumer cognition is influenced by contextual factors. The dissimilarity of online reviews may increase consumer uncertainty and reduce the credibility of reviews and products, thus affecting consumer purchasing decisions. Existing research has explored the phenomenon of online review dissimilarity in depth [25]. For example, Choi et al. indicated that highly inconsistent online review ratings can reduce consumers’ trust in online reviews [26]. Yin et al. further noted that lower rating dissimilarity helps enhance review credibility, thus enhancing the usefulness of online reviews [27]. In addition, the dissimilarity between online review content and ratings has been widely discussed in relevant studies [28]. For instance, Shan et al. conducted research on the discrepancy between ratings and sentiments in online reviews, exploring its significant role in detecting fake reviews [13]. Based on the Elaboration Likelihood Model (ELM), Wang et al. further analyzed the moderating effect of rating dissimilarity [12]. Eslami et al. showed that the dissimilarity between positive sentiment in reviews and ratings has a unique impact on sales [29].
Consumers typically browse product descriptions and quickly scan multiple reviews before delving into specific evaluations. This behavior underscores the importance of contextualizing information and the need to assess the value of individual information within the overall environment. However, existing research has overlooked the discrepancies between multiple reviews and the impact of such differences in competing products. Consumers’ preference for a product does not necessarily mean that they like all the features of the product. Cross-online review dissimilarity can exist at the attribute level [12]. Within a set of online reviews, a particular attribute may receive positive feedback in one review but negative feedback in another. According to prospect theory [30], people attach different levels of importance to positive versus negative information. Dissimilarities across online reviews are particularly prominent in deceptive communication, often raising consumer concerns about product quality [31]. Specifically, differences in online reviews between competing products may impact consumers’ trust and their reliance on reviews, thereby further influencing product sales and market performance. Therefore, this study delves into the impact of the multi-scenario dissimilarities in online reviews of competing products on focal product sales from the perspective of spillover effects.

3. Theoretical Background and Hypothesis Development

3.1. Theoretical Background

This study investigates the impact of dissimilarity in online reviews from competing products on focal product sales. Drawing upon two theoretical perspectives, Prospect Theory and Cognitive Dissonance Theory, we systematically examine the moderating effects of rating-sentiment dissimilarity, cross-online review dissimilarity, and competing brand dissimilarity in the context of positive and negative attributes.
Prospect theory suggests that individuals exhibit significant value judgment biases in processing positive and negative information [30]. This cognitive bias primarily manifests as a loss aversion effect, where consumers show higher sensitivity to negative information, requiring a greater amount of positive information to offset the impact of negative information of equal magnitude [1]. Based on this theoretical insight, this study distinguishes between positive and negative reviews of competing products to precisely identify the differential impact mechanisms of different types of online reviews.
Cognitive dissonance theory holds that individuals experience cognitive dissonance when they encounter information that contradicts or challenges their attitudes, values, or self-perceptions [32]. Individuals tend to select information that aligns with their decisions while avoiding information that contradicts their decisions [33]. However, the existing literature has not adequately explained how cognitive dissonance triggered by inconsistent online reviews of competing products influences consumer behavior.
By integrating these two theoretical frameworks, this study conducts an in-depth investigation of the spillover effects of dissimilarity on focal product sales across multi-scenarios, which not only enriches the application scenarios of these two theories but also provides new insights into understanding the underlying psychological mechanisms of how inconsistent information drives consumer behavior.

3.2. Hypothesis Development

3.2.1. Spillover Effect of Competing Products

Online reviews often exhibit clear sentiment polarity, reflecting consumers’ positive or negative opinions regarding product features and attributes [10]. If the product’s actual performance meets or exceeds expectations, consumers generally experience positive sentiment responses; conversely, negative sentiment feedback is likely to arise. Studies show that positive online reviews enable 87% of consumers to make quicker purchasing decisions, while negative online reviews prompt 80% of consumers to change their original choices [12]. For example, the study by Van Nguyen et al. [34] pointed out that positive (negative) online reviews promote (suppress) consumer demand. Jang et al. [35] found that online reviews expressing anger indirectly hinder product sales, whereas online reviews conveying pleasant sentiments promote sales growth. In summary, positive online reviews typically reflect high customer satisfaction with the product, while negative online reviews may leave a strong negative impression on consumers.
Competing products refer to those that consumers perceive as similar and substitutable [36]. In the purchasing process, consumers generally assess the utility of several products to select the one that best satisfies their needs. Therefore, the evaluation of competing products becomes a pivotal factor in their decision-making process. The spillover effect theory further explains how individuals’ attitudes and sentiment in one domain can influence their decisions or behaviors in another domain [37]. When consumers search for product information, signals conveyed by competing products affect their perception of the focal product’s quality [7]. The information from competing products can affect consumers’ assessment of the focal product, prompting them to engage in more cautious consideration during purchase decisions. Consequently, competition-related information tends to increase consumers’ cognitive load and drive them to comprehensively compare and analyze different products [9].
In a highly competitive market environment, consumers are exposed to a large amount of information about various products during purchasing. The research by Zhang et al. [38] has shown that positive online reviews regarding product attributes help boost product sales performance. However, the effect may be reversed when it comes to competing products. The research by Kamakura and Kang [39] indicated that demand fluctuations for competing products generally exhibit a negative correlation. When a product receives positive online reviews, consumer approval of that product increases, which in turn diminishes the appeal of other competing products to consumers [40]. Based on these findings, we infer that positive online reviews enhance the market competitiveness of the focal product and indirectly weaken the market share of competing products. Therefore, we put forward the following hypotheses:
Hypothesis 1a.
The positive sentiment attributes of competing products have a negative impact on the sales of the focal product.
Hypothesis 1b.
The negative sentiment attributes of competing products have a positive impact on the sales of the focal product.

3.2.2. The Role of Rating-Sentiment Dissimilarity

Online review ratings reflect the overall attitude of the consumer towards the product, while online review content serves as a channel for consumers to express their verbal behavior [41]. However, ratings alone are insufficient for consumers to make purchasing decisions. Consumers require more detailed information (i.e., review content) to help them verify the credibility of the ratings [13]. The consistency principle states that the components of the same online review (review content and rating) should be logically consistent and not contradict each other [42]. However, the study by Wu et al. has verified that the dissimilarity between ratings and review content is widespread across three real datasets, with over 40% of rating–content pairs showing dissimilarity in sentiment polarity [43].
Consumers typically expect to read online reviews that contain high ratings and positive emotions [43]. However, dissimilarity in review ratings often raises concerns about product quality and may reduce purchase intention [29]. Moreover, deceptive communication tends to be more contradictory and inconsistent than genuine communication [13]. Studies have pointed out that the dissimilarity between ratings and review content is more pronounced in fake reviews. For example, fake reviews may attract potential consumers’ attention through extreme review ratings (e.g., one-star or five-star), while the review content expresses neutral or opposite sentiment [13]. In addition, many scholars have emphasized the importance of online reviews in promoting product sales [44]. However, such contradictory information weakens the role of online reviews in the consumer decision-making process, thereby diminishing consumer trust in online reviews [45].
Given that the increasing dissimilarity between ratings and review content may have a negative impact on consumers [46], we propose that this dissimilarity could trigger a suppression effect, thereby altering consumers’ responses to both positive and negative sentiment. Furthermore, when competing products are involved, this effect may be reversed. Therefore, we hypothesize the following:
Hypothesis 2a.
Rating-sentiment dissimilarity positively moderates the effect of the positive sentiment attributes of competing products on sales of focal products.
Hypothesis 2b.
Rating-sentiment dissimilarity negatively moderates the effect of the negative sentiment attributes of competing products on sales of focal products.

3.2.3. The Role of Cross-Online Review Dissimilarity

Next, we focus on the impact of online review dissimilarity based on contextual background. Readers’ attention to review information largely depends on their mental schema, formed by prior online reviews [46]. Thus, the similarity of online review content affects consumers’ selective attention to review information, promotes deeper cognitive processing, and improves the fluency of information processing [47]. However, consumers’ overall preference for a product does not necessarily reflect their specific evaluations of its various attributes. Positive and negative opinions in online reviews tend to target different product attributes [13]. Dissimilarity appears when the same attribute simultaneously receives both positive and negative feedback across several online reviews [27]. Drawing on the research of Yin et al., we define cross-online review dissimilarity based on contextual background as the degree of divergence in opinions expressed across multiple online reviews [48].
Online review dissimilarity can lead consumers to experience greater cognitive dissonance [49]. Such dissimilarity raises consumers’ concerns about product quality, particularly when both positive and negative aspects are frequently mentioned [50]. Empirical research by Shan et al. found that review variance (termed “dissimilarity” in their study) is associated with the likelihood of fake online reviews [13]. Given the negative impact of review variance [51], we hypothesize that increased attribute variance among online review texts may produce an inhibitory effect, thereby altering the consumers’ responses to positive and negative sentiments.
Hypothesis 3a.
Cross-online review dissimilarity positively moderates the effect of the positive sentiment attributes of competing products on sales of focal products.
Hypothesis 3b.
Cross-online review dissimilarity negatively moderates the effect of the negative sentiment attributes of competing products on sales of focal products.

3.2.4. The Role of Product Brand Dissimilarity

Brand competitiveness represents an enterprise’s capacity to surpass competitors through building emotional attraction, fundamentally manifesting as consumers’ multidimensional evaluation of product attributes [52]. Janakiraman et al. found that there is a priori and dynamic spillover effect in perception among highly similar competing brands [53]. Aggarwal et al. further pointed out that the substitutability of competing products can enhance consumers’ tendency to choose higher-quality products from the available options and shift their preferences to other competing brands [54]. This indicates that brand competition is not only reflected in the quality and attributes of the products themselves but is also influenced by consumers’ perception of brand signals and their overall perception of other brands in the market.
When consumers purchase other products from the same brand, their prior product experiences can serve as diagnostic signals, further reinforcing the brand’s meaning [55]. This information can extend from one product to others within the same brand, leading to a strong consistency and correlation in consumers’ perceptions of product quality across products within the same brand [56]. Furthermore, the study by Song et al. [57] suggested that the online reviews of the focal brand may have a negative impact on competing brands, weakening consumers’ perceptions of those competing brands. This indicates that when consumers encounter positive online reviews of the focal brand, their views of competing brands may become more negative. In summary, we speculate that the sentiment attributes of competing products have a significantly different impact on purchase decisions for focal products across brands versus within the same brand. Therefore, we hypothesize the following:
Hypothesis 4a.
The positive product attributes of competing products within the same brand as the focal product have a greater impact on the purchase of the focal product than those from different brands.
Hypothesis 4b.
The negative product attributes of competing products within the same brand as the focal product have a greater impact on the purchase of the focal product than those from different brands.
To sum up, the conceptual framework is illustrated in Figure 1.

4. Methodology

First, we conducted a pre-experiment to gain a deeper understanding of consumer preferences and selection behaviors when browsing products. Next, we analyzed products and selected competing products. Then, we developed a sentiment analysis model based on the key product attributes consumers focus on in JD.com online reviews, aiming to capture the sentiment polarities expressed in those reviews. Finally, drawing on the sentiment analysis results and the measures of rating-sentiment dissimilarity, cross-online review dissimilarity, and product brand dissimilarity, we developed a regression model to investigate the specific impact of online reviews of both the focal product and its competitors on the focal product’s sales.

4.1. Focal and Competing Product Selection

We conducted a detailed survey on consumer preferences for browsing and purchasing related products during online shopping via the Credano.com platform. To ensure the completeness and reliability of the data, a total of 188 valid questionnaires were obtained after excluding incomplete and missing responses. By collecting and analyzing these data, we gained deeper insights into consumer habits and preferences when browsing products, particularly regarding which information sources and tools consumers tend to rely on when faced with a diverse range of product choices. In the first stage of the study, participants were asked to select three products of interest, with potential purchase intent, from a list of 20 different electronic products. These electronic products were selected from JD’s Gold List and included categories such as USB drives, headphones, and keyboards, each with distinct attribute features. In addition to product selection, participants were also asked to provide information about their primary methods of browsing related products. The preliminary test indicates that headphones were the most frequently selected product, accounting for 20.5% of the total choices. Additionally, consumers used several different methods when browsing related products: 35.5% searched for information directly through product searches, 39.4% relied on product ranking lists, 23.7% depended on platform-recommended products, and only 1.4% used recommendations from bloggers, friends, or social media. These results provide an important foundation for subsequent model design and consumer behavior research.
Consumers frequently come across similar products during their online shopping experiences [58]. Therefore, during the purchasing process, consumers pay attention to online reviews of the target (focal) product and take into account the online reviews of other similar (related) products [7]. Based on this, it is necessary to conduct a reasonable selection and analysis of similar products to accurately explore how the spillover effect of online reviews influences consumer purchasing behavior. Based on the survey results, we selected headphone product data from three sources to form an initial competing product set $Q = \{q_1, q_2, \ldots, q_\kappa\}$: direct product search (first ten pages), product ranking lists, and platform-recommended products (the recommendations on each product detail page). First, we crawled each product’s title and description, used TF-IDF to extract keywords, and manually reviewed them to determine the key product attributes. Next, we chose a focal product and calculated the attribute overlap degree between each other product and the focal product. Based on the overlap values, we selected the top $\kappa$ products with the highest overlap to constitute the competing product set for the focal product. The formula is as follows:
$$AOD_i = \frac{att_{Q_{same}}}{union}$$
where $AOD_i$ is the attribute overlap degree between focal product $i$ and another product, $att_{Q_{same}}$ is the number of the product’s attributes that are the same as the focal product’s, and $union$ is the total number of distinct attributes involved in the two products. Thus, we obtain a set of competing products for each focal product.
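To make the selection step concrete, the following Python sketch illustrates TF-IDF keyword extraction and the attribute overlap computation on toy data; the product records, attribute sets, and variable names are hypothetical illustrations, not our actual crawled data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

# Toy product records standing in for crawled titles/descriptions.
products = {
    "q1": "wireless headphones superb sound quality long battery life",
    "q2": "in-ear headphones comfortable fit balanced sound quality",
    "q3": "mechanical keyboard rgb backlight hot-swappable switches",
    "focal": "over-ear headphones sound quality battery life plush comfort",
}

# Step 1: TF-IDF keyword extraction (manual review of keywords would follow).
vec = TfidfVectorizer(ngram_range=(1, 2))
tfidf = vec.fit_transform(products.values())
terms = vec.get_feature_names_out()
focal_row = np.asarray(tfidf[3].todense()).ravel()
print("top focal keywords:", terms[focal_row.argsort()[::-1][:5]])

# Step 2: attribute overlap degree AOD = shared attributes / attribute union,
# with attribute sets assumed to come out of the manual review.
attrs = {
    "focal": {"sound quality", "battery life", "comfort"},
    "q1": {"sound quality", "battery life"},
    "q2": {"sound quality", "comfort"},
    "q3": {"backlight"},
}

def aod(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

kappa = 2  # keep the top-kappa overlapping products as competitors
scores = {p: aod(attrs["focal"], s) for p, s in attrs.items() if p != "focal"}
print(sorted(scores, key=scores.get, reverse=True)[:kappa])  # ['q1', 'q2']
```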

4.2. Sentiment Analysis

In this section, we provide a detailed discussion on constructing an aspect-level sentiment analysis model. Compared to overall sentiment analysis, aspect-level sentiment analysis offers finer granularity, focusing primarily on identifying the sentiment polarity associated with different attributes or aspects within the text [59]. For example, the sentence “Although the sound quality is excellent, the battery life is disappointing” involves two distinct attributes: “sound quality” and “battery life.” These two attributes exhibit positive and negative sentiment polarities, respectively. Thus, conducting an overall sentiment analysis of the sentence would fail to accurately capture the complexity of the sentiment expressed, as the sentiment polarities of the different attributes diverge.
Manually labeled data are typically costly and time-consuming to obtain, especially in tasks such as sentiment analysis, where the labeling process requires substantial human intervention. When labeled data are insufficient, supervised learning methods often exhibit a decline in performance and may overfit. To address this issue, we adopted a semi-supervised deep learning approach, leveraging a small amount of manually labeled data to iteratively predict pseudo-labels for the unlabeled data. This process expands the labeled dataset, enhancing the model’s prediction accuracy [60]. As shown in Figure 2, given limited labeled reviews $R^l$ with corresponding labels $Y^l$ under each aspect and a substantial amount of unlabeled reviews $R^u$, we initially estimated low-entropy labels for the unlabeled data. Then, based on these guessed labels, we mixed the labeled and unlabeled data to create a virtually unlimited number of augmented samples for model training.

4.2.1. Manually Labeled Data

We first manually labeled the product attribute categories mentioned in consumer reviews. To help potential consumers quickly understand the product experience conveyed in online reviews, the JD.com platform extracted frequently mentioned product attributes from many online reviews and displayed an overview of these attributes in the online review section. Based on this, we categorized the attributes of headphones into “workmanship, sound quality, comfort, battery life” and further explored the sentiment polarities related to these attributes. Figure 3 illustrates a specific example of the labeling process, where 1 represents positive sentiment, 3 represents neutral, 2 represents negative, and 0 indicates that the attribute is not mentioned. To enhance the validity of the labeled data, we performed preliminary data cleaning on the collected online review data. Additionally, the labeling process was independently conducted by two individuals with extensive online shopping experience, and only online reviews where both annotators agreed were included in the final training dataset.
We evaluated the model’s performance on 127,022 unlabeled data points with labeled data sizes of 6000 and 12,000. For the labeled dataset, we split the data into training, validation, and test sets following an 8:1:1 ratio. The training set consists of 4800 (or 9600) samples, while both the validation and test sets include 600 (or 1200) samples each. It is important to note that the fine-grained attribute sentiment classification in this study is a multi-label classification task, where the model must assess the sentiment polarities for four attributes (4 × 4 = 16 categories). Given the complexity added by the large number of categories, we trained a separate model for each attribute.
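As a concrete illustration of the labeling scheme and the 8:1:1 split, consider the following sketch; the example review and the repeated records are placeholders matching the 6000-label setting.

```python
import random

ATTRS = ["workmanship", "sound_quality", "comfort", "battery_life"]

# One labeled example under the coding of Figure 3
# (1 = positive, 2 = negative, 3 = neutral, 0 = not mentioned):
# [1, 1, 0, 2] -> workmanship and sound quality positive,
# comfort not mentioned, battery life negative.
labeled = [("Great build and sound, but the battery dies fast.",
            [1, 1, 0, 2])] * 6000  # stand-in for 6000 labeled reviews

random.seed(42)
random.shuffle(labeled)
n = len(labeled)
train = labeled[: int(0.8 * n)]
val = labeled[int(0.8 * n): int(0.9 * n)]
test = labeled[int(0.9 * n):]
print(len(train), len(val), len(test))  # 4800 600 600

# One model per attribute: slice out that attribute's 4-class target.
per_attr = {a: [(t, y[i]) for t, y in train] for i, a in enumerate(ATTRS)}
```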

4.2.2. Unlabeled Data Label Guessing

For unlabeled online reviews $R^u = \{r_1^u, \ldots, r_m^u\}$, we generated $K$ augmentations of each review text, namely $r_{i,k}^a, k \in [1, K]$. In practice, we used four augmentation strategies: dropping a fraction of words, replacing some words, changing the order of words, and back-translation (translating the original review text into another language and back). We then generated guessed labels for these augmentations. Specifically, we utilized a 12-layer BERT-base [61] as our encoder model to predict the labels under each aspect. The guessed label for an unlabeled review was obtained by taking the weighted average of the predicted results, ensuring that the model produces consistent labels across the different augmentations, which can be defined as follows:
$$y_i^u = \frac{1}{w_o + \sum_k w_k} \left( w_o\, f(r_i^u) + \sum_{k=1}^{K} w_k\, f(r_{i,k}^a) \right)$$
where $f(\cdot)$ is the label prediction model, and $w_o$ and $w_k$ are learnable weights controlling the contributions of augmentations of differing quality to the guessed label. We further applied a sharpening function with a temperature parameter $T$ to prevent the generated labels from being too uniform, which can be expressed as follows:
$$\hat{y}_i^u = \frac{(y_i^u)^{1/T}}{\left\| (y_i^u)^{1/T} \right\|_1}$$
where $\|\cdot\|_1$ is the $\ell_1$-norm and the generated label approaches a one-hot vector as $T \to 0$. In this manner, we obtain the guessed labels $Y^u$ for the unlabeled online reviews, as well as the augmented review texts $R^a = \{r_{i,k}^a\}$ with their guessed labels $Y^a$. Note that we considered two scenarios for generating guessed labels for the unlabeled data. In the first scenario, we predicted the sentiment for all aspects simultaneously, treating the problem as a multi-label classification task with a total of $A \times C$ categories, where $A$ and $C$ denote the number of aspects and the number of classes per aspect, respectively. In the second scenario, we simplified the task by decomposing the categories and training a separate model for each aspect, treating each model as a multi-class classification problem with $C$ classes.
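The sketch below shows our reading of the label-guessing step: generate simple augmentations, average the model’s predictions with weights, and sharpen with temperature $T$. The `model` callable (token ids in, class logits out), the word-drop augmenter, and the uniform default weights are simplifying assumptions.

```python
import random
import torch
import torch.nn.functional as F

def drop_words(tokens, frac=0.1):
    """One of the four augmentation strategies: randomly drop a fraction of words."""
    kept = [t for t in tokens if random.random() > frac]
    return kept or tokens

def guess_label(model, ids_orig, ids_augs, w_o=1.0, w_k=None, T=0.5):
    """ids_orig: (1, L) token ids; ids_augs: (K, L) token ids of K augmentations."""
    K = ids_augs.size(0)
    w_k = torch.ones(K) if w_k is None else w_k
    with torch.no_grad():
        p_orig = F.softmax(model(ids_orig), dim=-1).squeeze(0)  # (C,)
        p_augs = F.softmax(model(ids_augs), dim=-1)             # (K, C)
    # Weighted average over the original review and its augmentations.
    y_u = (w_o * p_orig + (w_k.unsqueeze(1) * p_augs).sum(0)) / (w_o + w_k.sum())
    # Sharpening: raise to 1/T and l1-normalize; T -> 0 approaches one-hot.
    y_sharp = y_u ** (1.0 / T)
    return y_sharp / y_sharp.sum()
```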

4.2.3. Aspect-Level Label Mixup

Given the limited labeled online reviews under each aspect and the guessed labels for the unlabeled online reviews, we interpolated samples in their hidden space, along with their corresponding aspect labels, to create a virtually unlimited number of new augmented training samples. In this way, our Mixup strategy leverages information from unlabeled online reviews while learning from labeled review texts, which facilitates mining implicit relations between sentences and prompts the model to maintain linearity between training examples; many previous studies have demonstrated its effectiveness [62]. In practice, we found that layers $\{3, 4, 5, 6, 7, 9, 12\}$ provided the most representational power, with each layer capturing distinct types of information, spanning from syntactic details to aspect-level semantic nuances. Thus, we chose the mixing layer $l$ from $L = \{7, 9, 12\}$ and drew two samples from the merged dataset $R = R^l \cup R^u \cup R^a$, $Y = Y^l \cup Y^u \cup Y^a$, namely $r_i, r_j \in R$ and $y_i, y_j \in Y$. We first computed the hidden representations $h_l$ at the $l$-th layer, calculated as follows:
$$h_l^i = f_l(h_{l-1}^i; \theta), \qquad h_l^j = f_l(h_{l-1}^j; \theta), \qquad l \in [1, L]$$
where $f_l(\cdot)$ denotes the $l$-th layer of the encoder network. The linear interpolations can be defined as follows:
$$\tilde{r} = \mathrm{Mix}(r_i, r_j) = \lambda h_l^i + (1 - \lambda) h_l^j, \qquad \tilde{y} = \mathrm{Mix}(y_i, y_j) = \lambda y_i + (1 - \lambda) y_j$$
where $\lambda \in [0, 1]$ is the mixing parameter.
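For intuition, here is a minimal hidden-space Mixup sketch using a HuggingFace `BertModel`: the two reviews’ hidden states are interpolated at one mixing layer, and the mix is pushed through the remaining encoder layers. The checkpoint name, the fixed `lam`, and the gradient-free encoding are illustrative simplifications, not our exact training implementation.

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-chinese")   # assumed checkpoint
bert = BertModel.from_pretrained("bert-base-chinese")

def mixup_encode(text_i, text_j, y_i, y_j, mix_layer=9, lam=0.6):
    enc = dict(return_tensors="pt", padding="max_length",
               max_length=64, truncation=True)
    ids_i = tok(text_i, **enc).input_ids
    ids_j = tok(text_j, **enc).input_ids
    with torch.no_grad():  # inference-style illustration; training keeps gradients
        h_i = bert(ids_i, output_hidden_states=True).hidden_states[mix_layer]
        h_j = bert(ids_j, output_hidden_states=True).hidden_states[mix_layer]
    # Interpolate hidden states at layer l, then finish the forward pass.
    h = lam * h_i + (1 - lam) * h_j
    for layer in bert.encoder.layer[mix_layer:]:
        h = layer(h)[0]
    y_mix = lam * y_i + (1 - lam) * y_j   # interpolate the label vectors too
    return h[:, 0], y_mix                 # [CLS] representation, mixed label
```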

4.2.4. Semi-Supervised Mixup Training

In training, we utilize the KL-divergence between the interpolated label and the model prediction as the training loss for the two samples:
$$\mathcal{L}_{\mathrm{Mix}} = \mathbb{E}_{r_i, r_j \in R}\ \mathrm{KL}\left( \tilde{y} \,\|\, f(\tilde{r}) \right)$$
Since $r_i, r_j$ are randomly sampled from $R$, the interpolated review texts can come from different combinations: Mixup among labeled data, Mixup between labeled and unlabeled data, and Mixup among unlabeled data. Thus, the loss can be divided into two types based on the interpolated reviews:
(1)
Supervised loss. When both samples come from the labeled data, namely $r_i, r_j \in R^l$, the training loss degenerates into a supervised loss.
(2)
Consistency loss. When the samples come from unlabeled or augmented data, namely $r_i, r_j \in R^u \cup R^a$, the KL-divergence can be seen as a consistency loss that enforces the model to predict the same labels as for the original data samples.
Meanwhile, we also define a self-training loss on the predicted labels of the unlabeled data and minimize it to encourage the model to produce confident labels, defined as follows:
$$\mathcal{L}_{\mathrm{Entro}} = \mathbb{E}_{r \in R^u} \max\left( 0,\ \gamma - \left\| \hat{y}^u \right\|_2 \right)$$
where $\|\cdot\|_2$ is the $\ell_2$-norm and $\gamma$ is the margin hyper-parameter. The overall objective function is:
$$\mathcal{L} = \mathcal{L}_{\mathrm{Mix}} + \mathcal{L}_{\mathrm{Entro}}$$
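The two losses can be written compactly as below; this is a minimal sketch under our notation, with the logits and probability tensors as placeholders.

```python
import torch
import torch.nn.functional as F

def mixup_loss(logits_mix, y_mix):
    """L_Mix: KL(y_mix || f(r_mix)) for interpolated samples."""
    return F.kl_div(F.log_softmax(logits_mix, dim=-1), y_mix,
                    reduction="batchmean")

def entropy_margin_loss(probs_u, gamma=0.7):
    """L_Entro: hinge on the l2-norm of predicted label vectors; peaky
    (confident) predictions have larger norm, so norms below gamma are
    penalized."""
    return torch.clamp(gamma - probs_u.norm(p=2, dim=-1), min=0).mean()

def total_loss(logits_mix, y_mix, probs_u):
    return mixup_loss(logits_mix, y_mix) + entropy_margin_loss(probs_u)
```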
With the trained semi-supervised sentiment classification model, we can obtain the aspect-level sentiment of the online reviews. Algorithm 1 summarizes the overall sentiment analysis procedure.
Algorithm 1 Online review aspect-level sentiment analysis process
1: Input: Online review set $R = R^l \cup R^u$, aspect-level sentiment labels $Y^l$
2: for $r_i \in R^u$ do
3:    Augment the unlabeled review text to obtain $r_{i,k}^a, k \in [1, K]$
4:    Predict labels for $r_{i,k}^a$ and add them to $Y^a$
5:    Predict labels for $r_i$, calculate $\hat{y}_i^u$, and add it to $Y^u$
6:    Minimize the entropy of the labels to obtain $\mathcal{L}_{\mathrm{Entro}}$
7: end for
8: Merge the labeled and unlabeled sets: $R = R^l \cup R^u \cup R^a$, $Y = Y^l \cup Y^u \cup Y^a$
9: for $r_i, r_j \in R$ and $y_i, y_j \in Y$ do
10:   Encode the two samples to obtain hidden representations $h_l^i, h_l^j$ at the $l$-th layer
11:   Run the Mixup training process to obtain $\mathcal{L}_{\mathrm{Mix}}$
12: end for
13: Minimize the objective function $\mathcal{L}$ to train the model
14: Output: Aspect-level sentiment scores $Y$

4.3. Variable Measurement

4.3.1. Online Review Dissimilarity Analysis

Rating-sentiment Dissimilarity. The rating-sentiment dissimilarity is calculated using the following equation [13]:
$$RS\_dissim_c = \left| z\_sentiment_c - z\_rating_c \right|$$
where $z\_sentiment_c$ and $z\_rating_c$ represent the $z$-scores of the sentiment and the rating of the $c$-th online review, respectively.
Cross-online Review Dissimilarity. When online reviews show significant differences in evaluating the same attribute, it creates uncertainty and complexity in consumer decision-making. The cross-online review dissimilarity is calculated as follows:
$$Cross\_dissim_t = -\left( p_t^{pos} \log p_t^{pos} + p_t^{neg} \log p_t^{neg} + p_t^{unref} \log p_t^{unref} \right)$$
where $p_t^{pos}$, $p_t^{neg}$, and $p_t^{unref}$ are the mean percentages of positive, negative, and unmentioned attributes, respectively, across all online reviews at time $t$.
Product Brand Dissimilarity. We incorporated brand dimension dissimilarity measures to analyze the effects of attribute sentiment expressions emanating from intra-brand and inter-brand competing products:
$$PB\_dissim_i^{sam} = \frac{1}{|Q_{same}|} \sum_{\kappa=1}^{|Q_{same}|} polarity\_att\_com^{sam}$$
$$PB\_dissim_i^{dif} = \frac{1}{|Q_{dif}|} \sum_{\kappa=1}^{|Q_{dif}|} polarity\_att\_com^{dif}$$
where $polarity\_att\_com^{sam}$ is the mean number of product attributes exhibiting sentiment polarity (positive or negative) in consumer-generated online reviews of competing products from the same brand as the focal product, and $polarity\_att\_com^{dif}$ is the corresponding mean for competing products from brands different from the focal product. The total number of competing products $Q$ for focal product $i$ is the sum of same-brand competitors $Q_{same}$ and different-brand competitors $Q_{dif}$, i.e., $Q = Q_{same} + Q_{dif}$.
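A small numpy sketch of the three measures, with toy arrays standing in for per-review sentiment scores, ratings, and attribute shares:

```python
import numpy as np

def rating_sentiment_dissim(sentiments, ratings):
    """RS_dissim: |z(sentiment) - z(rating)| per review."""
    z = lambda x: (x - x.mean()) / x.std()
    return np.abs(z(np.asarray(sentiments, float)) - z(np.asarray(ratings, float)))

def cross_review_dissim(p_pos, p_neg, p_unref):
    """Cross_dissim: entropy over mean shares of positive, negative,
    and unmentioned attributes (0 * log 0 treated as 0)."""
    ps = np.array([p_pos, p_neg, p_unref], dtype=float)
    ps = ps[ps > 0]
    return float(-(ps * np.log(ps)).sum())

def brand_dissim(polarity_counts):
    """PB_dissim: mean number of polarity-bearing attributes over a
    same-brand or different-brand competitor subset."""
    return float(np.mean(polarity_counts))

print(rating_sentiment_dissim([0.9, -0.5, 0.2], [5, 4, 1]))
print(cross_review_dissim(0.5, 0.3, 0.2))  # higher = more divergent opinions
```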

4.3.2. Data

We collected online review data for products on JD.com, a leading e-commerce platform in China. Since JD.com only provides product ranking lists, we used the third-party platform jingcanmou.com to obtain detailed sales data. To ensure the quality of the online reviews, we pre-processed the raw data before building the regression model. The specific pre-processing steps include removing pure symbols and duplicate texts, eliminating invalid characters from online reviews, and excluding products with few online reviews. Following these processing steps, the final dataset comprises 87 products and 127,022 valid online reviews from JD.com, collected between 15 October 2023 and 15 May 2024.

4.3.3. Variables

Dependent variable. This study examined the impact of the online reviews of competing products on the sales of focal products. The dependent variable is the daily sales of the focal product on JD.com.
Independent variables. In terms of independent variables, we applied sentiment analysis to convert the textual information of online reviews into the average number of product attributes expressing positive and negative sentiment in consumer reviews. To capture the impact of dissimilarity information in online reviews on consumer purchasing decisions, we introduced three variables reflecting online review dissimilarity.
Control variables. Several control variables were considered to account for factors that may influence sales. Specifically, we first took into account the length of the text and the number of online reviews, eliminating cases with excessively long or short texts and products with a low number of reviews. In addition, based on the aforementioned competing product selection criteria and platform display characteristics, most of the products analyzed in this study are bestsellers, making the impact of publication time negligible. Headphones are commonly used consumer goods and are highly sensitive to price and promotions, so we controlled for these factors.
Table 1 displays the descriptive statistics of the main variables. For example, the mean value of $Pos\_att$ (1.306) suggests that most online reviews mention at least one positive sentiment attribute. The mean of $Neg\_att$ is lower than that of $Pos\_att$, reflecting that positive online reviews are more common than negative ones. We then performed a correlation analysis of the variables. All calculated Variance Inflation Factor (VIF) values are below 3, suggesting no multicollinearity issues among the variables.
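The VIF screen can be reproduced along these lines; the random DataFrame is a stand-in for the actual regressor matrix.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((200, 4)),
                  columns=["Pos_att", "Neg_att", "Price", "Promotion"])
X = sm.add_constant(df)
vifs = {c: variance_inflation_factor(X.values, i)
        for i, c in enumerate(X.columns) if c != "const"}
print(vifs)  # values below 3 indicate no multicollinearity concern
```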

4.4. Empirical Model Specification

4.4.1. Spillover Effect Model

The sentiment expressions of competing products, particularly the positive or negative sentiments surrounding product attributes, may influence consumers’ purchasing decisions regarding the focal product through the dynamic mechanisms of market competition. To more comprehensively capture the spillover effects between different products within the competing environment, we integrated the sentiment variables of competing products’ attributes into the models. Given that consumers typically base their purchase decisions on historical online review data [10], we incorporated temporal lags for the relevant variables in the following equations:
$$Sales_{i,t} = \alpha_0 + \alpha_1 Pos\_att_{i,t-1} + \alpha_2 Pos\_att\_com_{i,t-1} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
$$Sales_{i,t} = \alpha_0 + \alpha_1 Neg\_att_{i,t-1} + \alpha_2 Neg\_att\_com_{i,t-1} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
where $Sales_{i,t}$ is the sales of product $i$ on day $t$. $Pos\_att_{i,t-1}$ and $Neg\_att_{i,t-1}$ denote the mean number of product attributes in online reviews for which consumers express positive and negative sentiment on day $t-1$, respectively. $Pos\_att\_com_{i,t-1}$ is the mean number of positive sentiment attributes of the competing products of focal product $i$ on day $t-1$, and $Neg\_att\_com_{i,t-1}$ is the corresponding mean number of negative sentiment attributes. For example, an online review analyzed by our sentiment analysis model may be represented as the structured string {1, 1, 0, 2}, yielding $Pos\_att\_com = 2$ (workmanship and sound quality) and $Neg\_att\_com = 1$ (battery life); that is, the attribute-level sentiment polarities are workmanship-positive, sound quality-positive, comfort-not mentioned, and battery life-negative. The control variables include $Price$ and $Promotion$. Specifically, $Price$ denotes the online selling price of the focal product, and $Promotion$ is measured by the percentage of promotional effort, used to assess the impact of promotions on the focal product within a specific period. $\alpha_0$ is the intercept term, $\mu_i$ captures unobserved individual heterogeneity, and $\varepsilon_{i,t}$ is the error term.
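A hedged sketch of how these spillover equations could be estimated as an entity fixed-effects panel with clustered standard errors using `linearmodels`; the file path and column names are hypothetical.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: one row per (product_id, day) with Sales, Pos_att,
# Pos_att_com, Price, and Promotion columns.
df = pd.read_csv("reviews_panel.csv")  # placeholder path
for col in ["Pos_att", "Pos_att_com"]:
    df[col + "_lag"] = df.groupby("product_id")[col].shift(1)  # day t-1 values
panel = df.dropna().set_index(["product_id", "day"])

mod = PanelOLS.from_formula(
    "Sales ~ 1 + Pos_att_lag + Pos_att_com_lag + Price + Promotion"
    " + EntityEffects", data=panel)
res = mod.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)  # the Neg_att specification is estimated analogously
```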

4.4.2. Moderating Effect Model

(1)
Rating-sentiment Dissimilarity
We explored the moderating effect of online review dissimilarity from competing products on the focal product’s sales. First, we introduced $RS\_dissim$, which measures the dissimilarity between online review ratings and sentiment expressions of competing products, and constructed the model as follows:
$$Sales_{i,t} = \alpha_0 + \alpha_1 Pos\_att_{i,t-1} + \alpha_2 Pos\_att\_com_{i,t-1} + \alpha_3 Pos\_att\_com_{i,t-1} \times RS\_dissim_{i,t-1} + \alpha_4 RS\_dissim_{i,t-1} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
$$Sales_{i,t} = \alpha_0 + \alpha_1 Neg\_att_{i,t-1} + \alpha_2 Neg\_att\_com_{i,t-1} + \alpha_3 Neg\_att\_com_{i,t-1} \times RS\_dissim_{i,t-1} + \alpha_4 RS\_dissim_{i,t-1} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
where $RS\_dissim_{i,t-1}$ is the mean rating-sentiment dissimilarity of competing products on day $t-1$. $Pos\_att\_com_{i,t-1} \times RS\_dissim_{i,t-1}$ is the interaction term between the positive attributes and rating-sentiment dissimilarity, and $Neg\_att\_com_{i,t-1} \times RS\_dissim_{i,t-1}$ is the interaction term between the negative attributes and rating-sentiment dissimilarity.
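Continuing the panel sketch above, the moderation specification simply adds the lagged dissimilarity and its interaction with the lagged competitor sentiment; the same pattern applies with $Cross\_dissim$ in place of $RS\_dissim$.

```python
from linearmodels.panel import PanelOLS

# `panel` as in the previous sketch, now assumed to carry an RS_dissim column.
panel["RS_dissim_lag"] = panel.groupby(level="product_id")["RS_dissim"].shift(1)
panel["PosXRS"] = panel["Pos_att_com_lag"] * panel["RS_dissim_lag"]

mod = PanelOLS.from_formula(
    "Sales ~ 1 + Pos_att_lag + Pos_att_com_lag + PosXRS + RS_dissim_lag"
    " + Price + Promotion + EntityEffects", data=panel.dropna())
print(mod.fit(cov_type="clustered", cluster_entity=True).summary)
```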
(2)
Cross-online Review Dissimilarity
Next, we examined the moderating effect of dissimilarity across multiple online reviews of competing products by adding interaction terms to Formulas (13) and (14). The model is constructed as follows:
$$Sales_{i,t} = \alpha_0 + \alpha_1 Pos\_att_{i,t-1} + \alpha_2 Pos\_att\_com_{i,t-1} + \alpha_3 Pos\_att\_com_{i,t-1} \times Cross\_dissim_{i,t-1} + \alpha_4 Cross\_dissim_{i,t-1} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
$$Sales_{i,t} = \alpha_0 + \alpha_1 Neg\_att_{i,t-1} + \alpha_2 Neg\_att\_com_{i,t-1} + \alpha_3 Neg\_att\_com_{i,t-1} \times Cross\_dissim_{i,t-1} + \alpha_4 Cross\_dissim_{i,t-1} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
where $Cross\_dissim_{i,t-1}$ is the mean cross-online review dissimilarity of competing products on day $t-1$. $Pos\_att\_com_{i,t-1} \times Cross\_dissim_{i,t-1}$ is the interaction term between the positive attributes and cross-online review dissimilarity, and $Neg\_att\_com_{i,t-1} \times Cross\_dissim_{i,t-1}$ is the interaction term between the negative attributes and cross-online review dissimilarity.
(3)
Product Brand Dissimilarity
To differentiate the spillover effects of competitors from the same versus different brands, we added the variables calculated by Formulas (11) and (12) and constructed the model as follows:
$$Sales_{i,t} = \alpha_0 + \alpha_1 Pos\_att_{i,t-1} + \alpha_2 Pos\_PB\_com_{i,t-1}^{sam} + \alpha_3 Pos\_PB\_com_{i,t-1}^{dif} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
$$Sales_{i,t} = \alpha_0 + \alpha_1 Neg\_att_{i,t-1} + \alpha_2 Neg\_PB\_com_{i,t-1}^{sam} + \alpha_3 Neg\_PB\_com_{i,t-1}^{dif} + \beta\, Controls_{i,t} + \mu_i + \varepsilon_{i,t}$$
where $Pos\_PB\_com_{i,t-1}^{sam}$ and $Neg\_PB\_com_{i,t-1}^{sam}$ are the mean numbers of positive and negative sentiment attributes, respectively, in the competing product set from the same brand as focal product $i$ on day $t-1$, and $Pos\_PB\_com_{i,t-1}^{dif}$ and $Neg\_PB\_com_{i,t-1}^{dif}$ are the corresponding means for competing products from different brands.

5. Results

In this section, we first provide a detailed presentation of the sentiment analysis results. Following this, we conduct an in-depth discussion of the empirical estimation results. In Section 5.3, robustness checks are conducted by introducing additional variables.

5.1. Analysis of Sentiment Model

Classification tasks. Note that aspect-level sentiment classification in our paper can be regarded as a multi-class classification problem over four product attributes (A1, workmanship; A2, sound quality; A3, comfort; A4, battery life). Intuitively, we can convert the problem into sixteen binary classification tasks, with each attribute corresponding to four classes (Negative, 0; Neutral, 1; Positive, 2; Not mentioned, 3). However, we found that decomposing the problem into four multi-class classification tasks, one per attribute, led to better performance. Toward this end, we implement four sentiment classifiers, where the $k$-th classifier predicts whether attribute A$k$ is negative, neutral, positive, or not mentioned.
Classification algorithms. We compare our aspect-level semi-supervised deep learning model with the following baselines: Random Forest (RF), Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), FastText, LSTM + Attention (LSTM with attention mechanism), and Bidirectional Encoder Representations from Transformers (BERT). We tune the hyperparameters of each baseline and our model via cross-validation; the ranges of hyperparameters tested in our paper are shown in Table 2.
Performance evaluation. Due to the large number of hyperparameters to tune, we adopt a cross-validation approach with separate validation and test sets. Specifically, we split the dataset into 80% for training, 10% for validation, and 10% for testing. Various models and hyperparameter combinations are trained on the training set and evaluated on the validation set. The optimal model and hyperparameters are selected based on the average accuracy across five experimental runs. We report the average accuracy, precision, recall, and F1 score evaluated on the test set.
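The reported metrics follow the standard definitions; a minimal sketch with placeholder predictions:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [2, 1, 0, 3, 2, 2]  # 4-class labels for one attribute
y_pred = [2, 1, 0, 3, 1, 2]

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"acc={acc:.3f} prec={prec:.3f} rec={rec:.3f} f1={f1:.3f}")
```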
Sentiment analysis results. In this section, we present the results of aspect-level sentiment analysis. We analyze the model’s performance using four metrics: accuracy, precision, recall, and F1 score. The detailed procedure can be found in Appendix A. Table 3 presents the results of attribute-based sentiment classification across 16 classes under two different numbers of data labels. Pretrain BERT and Finetune BERT denote the base pre-trained and fine-tuned BERT classification models, respectively. Experimental results show that the Pretrain BERT model performs worst across all evaluation metrics, with accuracy as low as 28.21%. This indicates that the Pretrain BERT model performs poorly in classification without fine-tuning. After fine-tuning, the accuracy of the Finetune BERT model improves significantly, reaching 82.32% (12,000 labels) and 68.30% (6000 labels), respectively. This demonstrates that fine-tuning allows BERT to learn richer features from the data. In addition, the FastText and LSTM + Attention models achieved accuracies of 80.74% (12,000) and 80.01% (12,000), respectively, indicating that the performance of these two models is relatively similar, though neither surpasses the Finetune BERT model. Therefore, the fine-tuned BERT model exhibits the best classification performance in this study.
In addition, we train classification models for four product attributes (A1, workmanship; A2, sound quality; A3, comfort; A4, battery life) and compare the performance of various machine learning and deep learning models. The selected models include commonly used ones, such as Random Forest, SVM, XGBoost, FastText, LSTM + Attention, and BERT. To streamline and emphasize the comparison results, we only present the accuracy of each model (the other metrics are similar). As shown in Table 4, training independent classification models for each attribute significantly improves performance compared to directly performing a 16-class classification task. Furthermore, Random Forest and SVM demonstrate comparatively weak performance, particularly for the A2 attribute (sound quality), where their accuracy falls below 80%. This indicates that traditional models have limitations in effectively capturing key text features, which leads to a noticeable decline in classification performance. In addition, compared to the Finetune BERT model, the semi-supervised learning method proposed in this study exhibits superior performance in attribute-based sentiment classification tasks. This method establishes implicit relationships between labeled and unlabeled online review data through consistency regularization, enabling it to achieve better sentiment classification results even with limited labeled data.

5.2. Analysis of Empirical Model

5.2.1. Spillover Effect

We estimate the model using fixed effects and apply robust t-statistics to address potential heteroskedasticity and clustered correlations within the error terms. The positive and negative sentiment polarities of online reviews have differential impacts on consumer purchasing behavior. As shown in Table 5, Models 1 and 2 present the estimated results for the sentiment polarity of competing product attributes. The coefficient of $Pos\_att\_com$ (−0.2173, p < 0.01) indicates a significant negative impact on focal product sales, while the coefficient of $Neg\_att\_com$ (0.1579, p < 0.1) is positive. These findings indicate that positive attribute sentiments in online reviews of competing products have a negative spillover effect on focal product sales, while negative attribute sentiments exhibit a positive spillover effect. Therefore, the results support Hypotheses 1a and 1b. Comparing the coefficients, the spillover effect of competing products’ negative sentiment attributes is relatively weaker than that of their positive sentiment attributes. Taking Model 6 as an example, we further analyzed the effects of the control variables. The coefficients for price and promotion are both negative and significant. For price, this aligns with the basic principles of market economics: an increase in price leads to a decrease in sales. Meanwhile, as a market incentive, promotions can effectively boost product sales growth in the short term.

5.2.2. Moderating Effect

Our analysis of Models 3 and 4 reveals significant moderating effects of rating-sentiment dissimilarity. Pos_att_com × RS_dissim (1.0748, p < 0.01) and Neg_att_com × RS_dissim (−0.9149, p < 0.01) exert opposite-signed impacts on product sales. When competing products exhibit high rating-sentiment dissimilarity in their online reviews, positive online reviews paradoxically decrease consumer trust in the claimed product advantages, ultimately benefiting focal product sales. Conversely, negative online reviews of competing products tend to suppress focal product sales under the same conditions. These findings support Hypotheses 2a and 2b.
Models 5 and 6 explore the moderating effect of cross-online review dissimilarity. The results reveal that Pos_att_com × Cross_dissim (2.8439, p < 0.01) is positive and significant, while Neg_att_com × Cross_dissim (−1.8612, p < 0.1) is negative and significant. Specifically, when competing products show substantial differences across their online reviews, positive reviews reduce potential consumers' trust in the corresponding product advantages, ultimately promoting focal product sales, whereas negative reviews of competing products tend to suppress focal product sales. These results support Hypotheses 3a and 3b, as formalized in the specification below.
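For clarity, the moderation specifications behind Models 3 to 6 can be written in the following illustrative generic form (shown for positive sentiment and rating-sentiment dissimilarity; the remaining interactions are analogous, and the exact functional form of the dependent variable is an assumption here):

$$
Sales_{it} = \beta_0 + \beta_1\, Pos\_att\_com_{it} + \beta_2\, RS\_dissim_{it} + \beta_3\, \big(Pos\_att\_com_{it} \times RS\_dissim_{it}\big) + \boldsymbol{\gamma}^{\top}\mathbf{X}_{it} + \mu_i + \varepsilon_{it}
$$

where $\mathbf{X}_{it}$ collects the controls (price and promotion), $\mu_i$ is the product fixed effect, and $\beta_3$ is the moderation coefficient, estimated at 1.0748 in Model 3.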
Next, Table 6 presents the results for product brand dissimilarity, distinguishing competing products of the same brand from those of different brands. Models 7 to 12 incorporate all explanatory variables for same-brand and different-brand competing products. Pos_PB_dissim_sam (−0.1471, p < 0.05) and Pos_PB_dissim_dif (−0.1243, p < 0.05) are negative and significant, whereas Neg_PB_dissim_sam (0.1275, p < 0.05) and Neg_PB_dissim_dif (0.1242, p < 0.05) are positive and significant. Moreover, comparing Models 11 and 12, the same-brand coefficients are larger in absolute value than the different-brand coefficients for both positive and negative sentiment. The results indicate that the brand of competing products has a significant impact on focal product sales. In addition, the spillover effects of sentiment attributes in competing products' online reviews (both positive and negative) are significantly stronger for same-brand products than for different-brand products, supporting Hypotheses 4a and 4b.

5.3. Robustness Checks

We conduct robustness checks to verify the reliability of our conclusions. Following Guo et al. [16], we add the control variable Number_brand, which counts the brands present in the focal product's competing product set. The estimation results in Table 7 provide additional validation of the robustness of our findings.

6. Discussion and Conclusions

6.1. Main Findings

This study examines the spillover effects of online reviews on focal product sales, with particular emphasis on the moderating roles of rating-sentiment dissimilarity, cross-online review dissimilarity, and product brand dissimilarity, using semi-supervised sentiment polarity analysis. The research reveals the following key findings:
First, our research reveals that positive attribute sentiment in competing products’ online reviews generates a negative spillover effect on focal product sales, while negative attribute sentiment produces a positive spillover effect. This finding aligns with Kwark et al. [7], further validating the cross-product spillover effects of online reviews from competing products. Coefficient comparison reveals that the spillover effect of negative sentiment attributes in competing products’ online reviews is weaker than that of positive sentiment attributes. This result resonates with the findings of Luo et al. [40], indicating that high-quality online reviews not only influence consumers’ choice tendencies but also prompt them to reassess their product preferences. As demonstrated in Sun et al.’s research [10], given the prevalence of positive online reviews in e-commerce environments, consumers tend to comprehensively evaluate the advantageous features of competing products when confronted with abundant positive feedback.
Second, the analysis demonstrates that the rating-sentiment dissimilarity and cross-online review dissimilarity of competing products moderate the spillover effects on the focal product’s sales. When competing products exhibit significant variations in online reviews, positive (negative) online reviews promote (suppress) the sales of the focal product. This is because discrepancies in evaluations of competing products increase consumers’ information uncertainty, which, in turn, affects their judgments of the relative advantages and disadvantages of the products and ultimately their purchase intention toward the focal product. This finding aligns with Yin et al. [27], who revealed that highly inconsistent online reviews diminish their reference value and significantly influence consumer decision-making. Moreover, cross-online review dissimilarity heightens consumers’ perceived product risk, thus affecting product sales. Notably, the moderating effect of cross-online review dissimilarity is stronger than that of rating-sentiment dissimilarity, suggesting that contradictions in specific review content have a more profound impact on consumer purchase decisions.
Third, the findings reveal a significant moderating role of product brand dissimilarity in the spillover effect of online reviews. Specifically, the spillover effects of sentiment attributes in online reviews from same-brand competing products are substantially stronger than those from competing products of a brand different from the focal product, corroborating Kwark et al.’s findings [7]. Product brand dissimilarity thus attenuates the spillover effect of sentiment attributes in competing products’ online reviews, underscoring the important role of brand positioning in shaping consumer evaluation and decision-making. When choosing among products of the same brand, consumers exhibit a stronger comparative tendency and are more sensitive to online review information. This has important implications for a company’s brand strategy and online review management.
Table 8 summarizes our findings alongside relevant prior research. Compared with other studies, this paper comprehensively incorporates and examines several types of dissimilarity in online reviews from the perspective of spillover effects, focusing in particular on their impact on consumer decision-making and behavior, and offers a more detailed analysis and discussion of these issues. Studying these dissimilarities yields a better understanding of their role and variation across different contexts.

6.2. Theoretical Implications

This study makes several critical contributions to the literature. First, thoroughly examining the spillover effect of attribute-specific sentiment in online reviews of competing products further enriches research on online reviews and user-generated content. While previous scholars have examined the impact of online review ratings on the sales of competing products [7], this study is the first to combine sentiment analysis with product attributes, specifically focusing on the spillover effects of competing products’ online reviews. Our research offers new evidence emphasizing the importance of accounting for spillover effects when analyzing product online reviews in highly competitive market environments. This facilitates a deeper understanding of the mechanisms by which spillover effects influence online product sales and extends the theoretical framework surrounding online product marketing strategies.
Second, this study deepens the academic understanding of information inconsistency by thoroughly examining three moderating effects (rating-sentiment dissimilarity, cross-online review dissimilarity, and brand dissimilarity) and emphasizing the crucial role of online review dissimilarity in spillover effects. Our findings reveal that online review dissimilarity produces significant inhibitory effects, enriching and extending existing consumer decision-making theories. Specifically, when reviewers express contradictory opinions about the same product attributes, such dissimilarity diminishes consumers’ confidence in online review information and subsequently affects their purchase decisions. Based on the limitations of existing research, this study suggests that future work should adopt a multi-dimensional perspective to measure new influencing factors and explore this phenomenon from a systematic cue-theoretic perspective.
Third, this study further investigates how brand dissimilarity (between same-brand and different-brand products) influences the spillover effects of online reviews between products. While existing research has demonstrated the significant impact of branding on product sales, studies on brand competition and the spillover effects of online reviews remain relatively scarce. The study finds that the spillover effects between same-brand products are significantly greater than those between different-brand products. This finding expands the literature on brand spillover effects and provides a new theoretical foundation for understanding the differential impact of online reviews on product sales within the same brand and across different brands.

6.3. Managerial Implications

We provide several practical implications regarding online reviews and product sales. Marketers increasingly view online reviews as influential marketing tools. First, our research shows that the sentiment polarities of product attributes in online reviews can spill over to other products, and each attribute carries a distinct sentiment dimension; when formulating marketing strategies, companies should therefore refine their evaluation of different sentiment attributes to capture consumer needs more accurately. Second, the study reveals that the spillover effect between competing products differs significantly between same-brand and different-brand pairs. Companies should fully understand the impact of brand perception and more effectively leverage the sentiment polarities of specific attributes in their marketing efforts. Third, when a reviewer’s evaluation of a product attribute differs significantly from those of other reviewers, providing detailed contextual information and supporting arguments is crucial. Moreover, incorporating warning messages can prevent review dissimilarities caused by user errors, enhance the reference value of online reviews, and effectively mitigate the inhibitory effect of online review dissimilarity on consumers’ purchase intentions. Finally, platforms should optimize the design of online review systems, allowing consumers to provide more detailed feedback on various product attributes and thereby enhancing the diversity of online reviews. Considering that positive online reviews often dominate consumer feedback, platforms can establish an open online review system that enables consumers to interact and verify the authenticity of published reviews, thus improving their credibility.

6.4. Limitations and Future Work

There are still some areas for improvement in this study. First, the research focuses on a single product category: headphones. Given that the effect of online reviews can vary greatly across product categories, future research should expand its scope to a broader range of products, such as smartphones and laptops, to provide businesses with more comprehensive marketing insights based on online reviews. Second, the analysis is limited to online reviews from the JD.com platform. Future research should consider spillover effects across multiple platforms and examine the combined impact of online reviews from different platforms on product sales. Lastly, online reviews encompass various features (such as reviewer characteristics) that can significantly affect sales. Because of the limitations of our data collection techniques, this study could not perform large-scale data acquisition. In the future, we aim to incorporate more relevant features to establish a more complete model of the impact of online reviews, with a particular focus on exploring spillover effects in greater depth.

Author Contributions

Conceptualization, S.S. and Y.Y.; methodology, S.S. and Y.Y.; software, Y.Y.; validation, Y.L.; investigation, Y.L.; data curation, Y.Y.; writing—original draft preparation, S.S. and Y.Y.; writing—review and editing, S.S.; visualization, Y.Y.; supervision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 72071010).

Data Availability Statement

The data of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

For each class $i$, $TP_i$ (true positives) is the number of samples of class $i$ correctly predicted as class $i$; $FP_i$ (false positives) is the number of samples of other classes incorrectly predicted as class $i$; $TN_i$ (true negatives) is the number of samples of other classes correctly predicted as not belonging to class $i$; and $FN_i$ (false negatives) is the number of samples of class $i$ incorrectly predicted as another class. Then,
$$\mathrm{Accuracy} = \frac{TP_i + TN_i}{TP_i + TN_i + FP_i + FN_i}$$

$$\mathrm{Precision}_i = \frac{TP_i}{TP_i + FP_i}$$

$$\mathrm{Recall}_i = \frac{TP_i}{TP_i + FN_i}$$

$$\mathrm{F1\ score}_i = \frac{2 \times \mathrm{Precision}_i \times \mathrm{Recall}_i}{\mathrm{Precision}_i + \mathrm{Recall}_i}$$

We compute the macro-average of these metrics: the per-class values are averaged over all classes to obtain the overall Precision, Recall, and F1 score, defined as follows:

$$\mathrm{Precision} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{Precision}_i$$

$$\mathrm{Recall} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{Recall}_i$$

$$\mathrm{F1\ score} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{F1\ score}_i$$
where N represents the number of classes.
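As a sanity check, the macro-averaged metrics above can be computed directly with scikit-learn; the toy labels below are purely illustrative:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 2, 1, 2, 0, 1]  # toy gold labels
y_pred = [0, 2, 1, 1, 0, 2]  # toy model predictions

accuracy = accuracy_score(y_true, y_pred)
# average="macro": compute precision/recall/F1 per class, then take the
# unweighted mean over classes, matching the Appendix A definitions.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"Accuracy={accuracy:.4f}  Precision={precision:.4f}  "
      f"Recall={recall:.4f}  F1={f1:.4f}")
```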

References

1. Pan, X.; He, S.; García-Zamora, D.; Wang, Y.; Martínez, L. A novel online reviews-based decision-making framework to manage rating and textual reviews. Expert Syst. Appl. 2025, 259, 125367.
2. Chen, C.D.; Ku, E.C. Diversified online review websites as accelerators for online impulsive buying: The moderating effect of price dispersion. J. Internet Commer. 2021, 20, 113–135.
3. Wang, S.; Lin, Y.; Zhu, G. Online reviews and high-involvement product sales: Evidence from offline sales in the Chinese automobile industry. Electron. Commer. Res. Appl. 2022, 57, 101231.
4. Liu, F.; Wei, H.Y.; Wang, X.Y.; Zhu, Z.Z.; Chen, H.P.A. The influence of online review dispersion on consumers’ purchase intention: The moderating role of dialectical thinking. J. Bus. Res. 2023, 165, 114058.
5. Russell, G.J.; Petersen, A. Analysis of cross-category dependence in market basket selection. J. Retail. 2000, 76, 367–392.
6. Borah, A.; Tellis, G.J. Halo (spillover) effects in social media: Do product recalls of one brand hurt or help rival brands? J. Mark. Res. 2016, 53, 143–160.
7. Kwark, Y.; Lee, G.M.; Pavlou, P.A.; Qiu, L. On the spillover effects of online product reviews on purchases: Evidence from clickstream data. Inf. Syst. Res. 2021, 32, 895–913.
8. Deng, F.M.; Gong, X.Y.; Luo, P.; Liang, X.D. The underestimated online clout of hotel location factors: Spillover effect of online restaurant ratings on hotel ratings. Curr. Issues Tour. 2023, 28, 70–78.
9. Hao, J.; Hao, X.; Tian, Y.Z.D. Effects of service attributes and competition on electronic word of mouth: An elaboration likelihood perspective. Inf. Technol. Manag. 2023, 24, 367–379.
10. Sun, B.; Kang, M.; Zhao, S.Y. How online reviews with different influencing factors affect the diffusion of new products. Int. J. Consum. Stud. 2023, 47, 1377–1396.
11. Liu, X.; Ren, P.; Xu, Z.; Xie, W. Evolutive multi-attribute decision making with online consumer reviews. Omega 2025, 131, 103225.
12. Wang, Y.; Ngai, E.W.; Li, K. The effect of review content richness on product review helpfulness: The moderating role of rating inconsistency. Electron. Commer. Res. Appl. 2023, 61, 101290.
13. Shan, G.; Zhou, L.; Zhang, D. From conflicts and confusion to doubts: Examining review inconsistency for fake review detection. Decis. Support Syst. 2021, 144, 113513.
14. Zhang, S.; Liu, W.; Zhang, T.; Han, W.; Zhu, Y. Harms of inconsistency: The impact of user-generated and marketing-generated photos on hotel booking intentions. Tour. Manag. Perspect. 2024, 51, 101249.
15. Long, X.; Nasiry, J. Prospect theory explains newsvendor behavior: The role of reference points. Manag. Sci. 2015, 61, 3009–3012.
16. Guo, Y.X.; Wang, F.F.; Xing, C.; Lu, X.L. Mining multi-brand characteristics from online reviews for competitive analysis: A brand joint model using latent Dirichlet allocation. Electron. Commer. Res. Appl. 2022, 53, 101141.
17. Jeong, E.; Li, X.; Kwon, A.; Park, S.; Li, Q.; Kim, J. A multimodal recommender system using deep learning techniques combining review texts and images. Appl. Sci. 2023, 14, 9206.
18. Duan, Y.; Liu, T.; Mao, Z. How online reviews and coupons affect sales and pricing: An empirical study based on e-commerce platform. J. Retail. Consum. Serv. 2022, 65, 102846.
19. Cai, X.; Cebollada, J.; Cortinas, M. Impact of seller- and buyer-created content on product sales in the electronic commerce platform: The role of informativeness, readability, multimedia richness, and extreme valence. J. Retail. Consum. Serv. 2023, 70, 103141.
20. Zhai, M.F.; Wang, X.Y.; Zhao, X.J. The importance of online customer reviews characteristics on remanufactured product sales: Evidence from the mobile phone market on Amazon.com. J. Retail. Consum. Serv. 2023, 77, 103677.
21. Elf, P.; Gatersleben, B.; Christie, I. Facilitating positive spillover effects: New insights from a mixed-methods approach exploring factors enabling people to live more sustainable lifestyles. Front. Psychol. 2019, 9, 2699.
22. Qian, Z.F.; Day, J.S.; Ignatius, J.; Dhamotharand, L.; Chai, J.W. Digital advertising spillover, online-exclusive product launches, and manufacturer-remanufacturer competition. Eur. J. Oper. Res. 2024, 313, 565–586.
23. Wu, X.; Zhang, F.; Zhou, Y. Brand spillover as a marketing strategy. Manag. Sci. 2022, 68, 5348–5363.
24. Xu, Y.K.; Nicolau, J.L.; Luo, P. Travelers’ reactions toward recommendations from neighboring rooms: Spillover effect on room bookings. Tour. Manag. 2022, 88, 104427.
25. Choi, H.S.; Leon, S. An empirical investigation of online review helpfulness: A big data perspective. Decis. Support Syst. 2020, 139, 113403.
26. Choi, J.; Yoo, S.H.; Lee, H. Two faces of review inconsistency: The respective effects of internal and external inconsistencies on job review helpfulness. Comput. Hum. Behav. 2023, 140, 107570.
27. Yin, D.Z.; Mitra, S.; Zhang, H. When do consumers value positive vs. negative reviews? An empirical investigation of confirmation bias in online word of mouth. Inf. Syst. Res. 2016, 27, 131–144.
28. Han, M.X. How does mobile device usage influence review helpfulness through consumer evaluation? Evidence from TripAdvisor. Decis. Support Syst. 2022, 153, 113682.
29. Eslami, S.P.; Ghasemaghaei, M. Effects of online review positiveness and review score inconsistency on sales: A comparison by product involvement. J. Retail. Consum. Serv. 2018, 45, 74–80.
30. Meng, J.; Weng, X.I. Can prospect theory explain the disposition effect? A new perspective on reference points. Manag. Sci. 2018, 64, 3331–3351.
31. Shaalan, Y.; Zhang, X.; Chan, J.; Salehi, M. Detecting singleton spams in reviews via learning deep anomalous temporal aspect-sentiment patterns. Data Min. Knowl. Discov. 2021, 35, 450–504.
32. Festinger, L. Cognitive dissonance. Sci. Am. 1962, 207, 82–106.
33. Zhang, X.; Zhang, X.; Liang, S.; Yang, Y.; Law, R. Infusing new insights: How do review novelty and inconsistency shape the usefulness of online travel reviews? Tour. Manag. 2023, 96, 104703.
34. Van Nguyen, T.; Zhou, L.; Chong, A.Y.L.; Li, B.; Pu, X. Predicting customer demand for remanufactured products: A data-mining approach. Eur. J. Oper. Res. 2020, 281, 543–558.
35. Jang, S.; Chung, J.; Rao, V.R. The importance of functional and emotional content in online consumer reviews for product sales: Evidence from the mobile gaming market. J. Bus. Res. 2021, 130, 583–593.
36. Shocker, A.D.; Bayus, B.L.; Kim, N. Product complements and substitutes in the real world: The relevance of “other products”. J. Mark. 2004, 68, 28–40.
37. Lee, D.W.; Hong, Y.C.; Seo, H.Y.; Yun, J.Y.; Lee, N. Different influence of negative and positive spillover between work and life on depression in a longitudinal study. Saf. Health Work 2021, 12, 377–383.
38. Zhang, H.; Chen, Z.; Chen, B.; Hu, B.; Li, M.; Yang, C.; Jiang, B. Complete quadruple extraction using a two-stage neural model for aspect-based sentiment analysis. Neurocomputing 2022, 492, 452–463.
39. Kamakura, W.A.; Kang, W. Chain-wide and store-level analysis for cross-category management. J. Retail. 2007, 83, 159–170.
40. Luo, X.; Zhang, J.; Gu, B.; Phang, C. Expert blogs and consumer perceptions of competing brands. MIS Q. 2017, 41, 371–395.
41. Jin, W.; Chen, Y.; Yang, S.; Zhou, S.; Jiang, H.; Wei, J. Personalized managerial response and negative inconsistent review helpfulness: The mediating effect of perceived response helpfulness. J. Retail. Consum. Serv. 2023, 74, 103398.
42. Van Kampen, H.S. The principle of consistency and the cause and function of behaviour. Behav. Process. 2019, 159, 42–54.
43. Wu, H.; Guo, G.; Yang, E.; Luo, Y.; Chu, Y.; Jiang, L.; Wang, X. PESI: Personalized explanation recommendation with sentiment inconsistency between ratings and reviews. Knowl. Based Syst. 2024, 283, 111133.
44. Nguyen, D.H.; de Leeuw, S.; Dullaert, W.E. Consumer behaviour and order fulfilment in online retailing: A systematic review. Int. J. Manag. Rev. 2018, 20, 255–276.
45. Smith, R.E.; Swinyard, W.R. Attitude-behavior consistency: The impact of product trial versus advertising. J. Mark. Res. 1983, 20, 257–267.
46. Choi, H.S. Do extraordinary claims require extraordinary evidence? Differential effect of trust cues on helpfulness by review extremity: An empirical study using big data. Eur. J. Inf. Syst. 2024, 33, 1–22.
47. Wang, S.; Karmakar, S.; Wang, F.; Pei, Y. Content dissimilarity and online review helpfulness: Contextual insights. J. Bus. Res. 2025, 187, 115068.
48. Yin, H.; Zheng, S.; Yeoh, W.; Ren, J. How online review richness impacts sales: An attribute substitution perspective. J. Assoc. Inf. Sci. Technol. 2021, 72, 901–917.
49. Yin, D.; Vreede, T.D.; Vreede, S.G.J.D. Decide now or later: Making sense of incoherence across online reviews. Inf. Syst. Res. 2023, 34, 1211–1227.
50. Zhang, R.; Yu, Z.; Yao, W. Navigating the complexities of online opinion formation: An insight into consumer cognitive heuristics. J. Retail. Consum. Serv. 2024, 81, 103966.
51. Wang, Y.; Ngai, E.W.T.; Li, K. Effects of sentiment quantity, dispersion, and dissimilarity on online review forwarding behavior: An empirical analysis. J. Retail. Consum. Serv. 2024, 81, 103978.
52. Kwon, O.; Singh, T.; Kim, S. The competing roles of variety seeking in new brand adoption. J. Retail. Consum. Serv. 2023, 72, 103283.
53. Janakiraman, R.; Sismeiro, C.; Dutta, S. Perception spillovers across competing brands: A disaggregate model of how and when. J. Mark. Res. 2009, 46, 467–481.
54. Aggarwal, P.; Vaidyanathan, R.; Venkatesh, A. Using lexical semantic analysis to derive online brand positions: An application to retail marketing research. J. Retail. 2009, 85, 145–158.
55. Gupta, S.; Gallear, D.; Rudd, J.; Foroudi, P. The impact of brand value on brand competitiveness. J. Bus. Res. 2020, 112, 210–222.
56. Voss, K.E.; Li, Y.Y.; Song, Y.S. Competing cues in brand alliance advertisements. J. Bus. Res. 2022, 149, 476–493.
57. Song, R.; Kim, H.; Lee, G.M.; Jang, S. Does deceptive marketing pay? The evolution of consumer sentiment surrounding a pseudo-product-harm crisis. J. Bus. Ethics 2019, 158, 743–761.
58. Hillen, J.; Fedoseeva, S. E-commerce and the end of price rigidity? J. Bus. Res. 2021, 125, 63–73.
59. Zhao, Y.H.; Zhang, L.Y.; Zeng, C.X.; Lu, W.R.; Chen, Y.D.; Fan, T. Construction of an aspect-level sentiment analysis model for online medical reviews. Inf. Process. Manag. 2023, 60, 103513.
60. Odaka, Y.; Kaneiwa, K. Block-segmentation vectors for arousal prediction using semi-supervised learning. Appl. Soft Comput. 2023, 142, 110327.
61. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, MN, USA, 2–7 June 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4171–4186.
62. Chen, J.; Yang, Z.; Yang, D. MixText: Linguistically-informed interpolation of hidden space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), Online, 5–10 July 2020; Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 2147–2157.
Figure 1. Conceptual framework.
Figure 2. Illustration of the semi-supervised sentiment analysis model.
Figure 3. An example of the online review from sentiment analysis.
Table 1. Descriptive statistics.

Variables | Minimum | Maximum | Mean | Std. Dev.
Sales | 3 | 42,457 | 123.47 | 509.67
Pos_att | 0.6 | 2.381 | 1.306 | 0.42
Neg_att | 0.01 | 0.347 | 0.153 | 0.091
Pos_att_com | 0.785 | 1.847 | 1.334 | 0.252
Neg_att_com | 0.063 | 0.332 | 0.147 | 0.044
RS_dissim | 0.396 | 1.468 | 0.957 | 0.274
Cross_dissim | 0.256 | 0.373 | 0.311 | 0.015
Pos_PB_dissim_sam | 0.662 | 2.382 | 1.295 | 0.362
Neg_PB_dissim_sam | 0 | 0.383 | 0.143 | 0.075
Pos_PB_dissim_dif | 0.844 | 1.849 | 1.282 | 0.232
Neg_PB_dissim_dif | 0.054 | 0.286 | 0.146 | 0.039
Price | 8 | 3249 | 161.52 | 156.27
Promotion | 0.369 | 1 | 0.809 | 0.105
Table 2. The range of hyperparameters.

Model | Hyperparameters and Their Tested Ranges
RF | max_depth = 1, 2, 4, 8, 16, 32, 64; estimators = 20, 30, 40, 50, 100
SVM | kernel = rbf, linear, poly, sigmoid
XGBoost | max_depth = 1, 2, 4, 8; estimators = 20, 30, 50, 100; subsample = 0.5, 0.7, 0.9, 1.0; gamma = 0, 0.1, 0.2, 0.4; colsample_bytree = 0.5, 0.7, 0.9, 1.0; min_child_weight = 1, 2, 3, 4, 5
FastText | dim = 512; word_ngrams = 1, 2, 3, 4; lr = [1 × 10⁻⁴, 1 × 10⁻²]
LSTM + Attention | dim = 512; num_layers = 1, 2, 3; dropout = [0.1, 0.5]; num_heads = 1, 2, 4, 8; lr = [1 × 10⁻⁴, 1 × 10⁻²]
BERT | bert-base-chinese; dropout = [0.1, 0.5]; lr = [1 × 10⁻⁵, 1 × 10⁻³]; weight_decay = [0, 0.1]
Ours | bert-base-chinese; mix-layer = [0, 3]; λ = [0.1, 1.0]; T = [0.5, 1.5]; γ = [0.5, 1.0]; lr = [1 × 10⁻⁴, 1 × 10⁻³]; dropout = [0.1, 0.5]
Table 3. The comparison result (%) of the aspect-level sentiment analysis (Scenario 1).

6000 labeled samples (5%):
Model | Accuracy | Precision | Recall | F1 Score
FastText | 64.92 | 75.39 | 77.43 | 76.20
LSTM + Attention | 63.03 | 75.47 | 74.18 | 74.72
Pretrain BERT | 28.21 | 36.12 | 52.14 | 42.58
Finetune BERT | 68.30 | 79.64 | 78.28 | 78.41

12,000 labeled samples (20%):
Model | Accuracy | Precision | Recall | F1 Score
FastText | 80.74 | 88.00 | 86.18 | 86.93
LSTM + Attention | 80.01 | 86.93 | 85.93 | 86.33
Pretrain BERT | 28.21 | 36.12 | 52.14 | 42.58
Finetune BERT | 82.32 | 89.34 | 87.31 | 88.13
Table 4. The comparison result (%) of the aspect-level sentiment analysis (Scenario 2).

1000 labeled samples per attribute (5%):
Model | A1 | A2 | A3 | A4
RF | 82.05 | 78.10 | 89.70 | 94.75
SVM | 83.45 | 75.80 | 89.80 | 94.85
XGBoost | 87.20 | 78.30 | 91.30 | 95.35
FastText | 84.70 | 81.00 | 90.65 | 94.90
LSTM + Attention | 87.60 | 80.40 | 89.65 | 95.40
BERT | 89.73 | 81.70 | 91.50 | 95.50
This study | 92.80 | 83.25 | 92.75 | 96.15

4000 labeled samples per attribute (20%):
Model | A1 | A2 | A3 | A4
RF | 89.30 | 76.20 | 91.40 | 96.30
SVM | 89.80 | 79.85 | 91.85 | 96.40
XGBoost | 91.75 | 81.90 | 91.25 | 96.65
FastText | 92.35 | 82.97 | 92.55 | 94.03
LSTM + Attention | 92.78 | 83.80 | 92.20 | 96.15
BERT | 92.91 | 85.80 | 92.69 | 96.70
This study | 93.80 | 87.66 | 93.15 | 97.35
Table 5. Analysis results of online review spillover effect and dissimilarity.

Variables | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6
Pos_att | 0.2916 *** (5.1) | – | 0.3231 *** (4.2) | – | 0.3637 *** (4.98) | –
Pos_att_com | −0.2173 *** (−3.17) | – | −0.7793 ** (−2.37) | – | −2.3445 *** (−4.19) | –
Neg_att | – | −0.1075 * (−1.69) | – | −0.1064 * (−1.70) | – | −0.1867 ** (−2.3)
Neg_att_com | – | 0.1579 * (1.77) | – | 0.5821 *** (3.22) | – | 1.8005 * (1.8)
RS_dissim | – | – | −1.5619 *** (−3.41) | −0.2342 (−1.19) | – | –
Cross_dissim | – | – | – | – | −1.2116 *** (−4.94) | −0.4374 ** (−2.23)
Moderating variables:
Pos_att_com × RS_dissim | – | – | 1.0748 *** (2.88) | – | – | –
Neg_att_com × RS_dissim | – | – | – | −0.9149 *** (−2.99) | – | –
Pos_att_com × Cross_dissim | – | – | – | – | 2.8439 *** (4.36) | –
Neg_att_com × Cross_dissim | – | – | – | – | – | −1.8612 * (−1.82)
Control variables:
Price | −0.1537 *** (−2.92) | −0.1661 * (−1.91) | −0.3954 ** (−2.37) | −0.186 * (−1.76) | −0.1554 (−1.36) | −0.3323 ** (−3.03)
Promotion | −0.1269 *** (−2.71) | −0.2556 *** (−4.58) | −0.1558 * (−2.24) | −0.027 (−1) | −0.1135 * (−2.92) | −0.2392 *** (−3.28)
N | 8892 | 8892 | 8892 | 8892 | 8892 | 8892
R-squared | 0.1475 | 0.1142 | 0.1695 | 0.1717 | 0.1946 | 0.2126
Note. Models 1–2 estimate spillover effects; Models 3–4 add rating-sentiment dissimilarity; Models 5–6 add cross-online review dissimilarity. Robust t-statistics in parentheses. * p < 0.1; ** p < 0.05; *** p < 0.01.
Table 6. Analysis results of product brand dissimilarity.

Variables | Model 7 | Model 8 | Model 9 | Model 10 | Model 11 | Model 12
Pos_att | 0.3414 *** (4.72) | – | 0.2728 *** (4.48) | – | 0.3546 *** (4.94) | –
Neg_att | – | −0.3666 *** (−5.82) | – | −0.2868 *** (−5.97) | – | −0.382 *** (−6.09)
Moderating variables:
Pos_PB_dissim_sam | −0.1259 * (−1.79) | – | – | – | −0.1471 ** (−2.09) | –
Neg_PB_dissim_sam | – | 0.1027 * (1.8) | – | – | – | 0.1275 ** (2.29)
Pos_PB_dissim_dif | – | – | −0.105 *** (−1.7) | – | −0.1243 ** (−2.01) | –
Neg_PB_dissim_dif | – | – | – | 0.0751 * (1.84) | – | 0.1242 ** (2.09)
Control variables:
Price | −0.1837 *** (−3.31) | −0.1352 *** (−2.96) | −0.1741 *** (−3.04) | −0.1256 ** (−2.52) | −0.1426 ** (−2.43) | −0.2056 *** (−3.1)
Promotion | −0.1014 * (−1.92) | −0.0759 * (−1.75) | −0.0914 * (−1.72) | −0.074 * (−1.7) | −0.0913 * (−1.74) | −0.256 *** (−4.65)
N | 8892 | 8892 | 8892 | 8892 | 8892 | 8892
R-squared | 0.2131 | 0.2427 | 0.2113 | 0.241 | 0.2367 | 0.2693
Note. Models 7–8 use same-brand competing products; Models 9–10 use different-brand competing products; Models 11–12 include both for comparison. Robust t-statistics in parentheses. * p < 0.1; ** p < 0.05; *** p < 0.01.
Table 7. Robustness checks.

Variables | Model I | Model II | Model III | Model IV
Pos_att_com | −0.7793 ** (−2.37) | −2.3445 *** (−4.19) | – | –
Neg_att_com | – | – | 0.6507 *** (3.59) | 1.14 ** (1.98)
RS_dissim | −1.5619 *** (−3.41) | – | −0.3663 * (−1.8) | –
Cross_dissim | – | −1.2116 *** (−4.94) | – | −0.5278 ** (−2.36)
Moderating variables:
Pos_att_com × RS_dissim | 1.0748 *** (2.88) | – | – | –
Pos_att_com × Cross_dissim | – | 2.8439 *** (4.36) | – | –
Neg_att_com × RS_dissim | – | – | −1.0368 * (−3.37) | –
Neg_att_com × Cross_dissim | – | – | – | −1.2326 ** (−2.00)
Control variables:
Price | −0.3954 ** (−2.37) | −0.1554 (−1.36) | −0.1408 (−1.33) | −0.3147 *** (−2.82)
Promotion | −0.1558 * (−2.24) | −0.1135 * (−2.92) | 0.0414 (1.2) | −0.2426 *** (−3.23)
Newly added variable:
Number_brand | −0.1682 ** (−2.29) | −0.0882 (−1.36) | −0.0772 ** (−2.21) | −0.0651 (−0.84)
N | 8892 | 8892 | 8892 | 8892
R-squared | 0.1695 | 0.1946 | 0.1953 | 0.2158
Note. Models I–II re-estimate the positive-sentiment specifications and Models III–IV the negative-sentiment specifications with the added Number_brand control. Robust t-statistics in parentheses. * p < 0.1; ** p < 0.05; *** p < 0.01.
Table 8. Summary of our findings with relevant prior research.

Study | Dissimilarity | Spillover Effect | Findings/Results
[7] | Not included | Review rating | Review ratings of related products influence focal product purchases.
[8] | Not included | Review rating | Review ratings have a positive spillover effect on hotels, decreasing with distance from the restaurant.
[27] | Review rating | Not included | Lower rating dissimilarity helps enhance review credibility.
[41] | Review rating-sentiment | Not included | Review dissimilarity (negative rating with positive text) influences how personalized managerial responses affect review helpfulness.
[47] | Review content | Not included | Content dissimilarity in online reviews, including differences in topic compared to product descriptions and previous reviews, affects perceived helpfulness.
[51] | Review sentiment | Not included | Positive dissimilarity amplifies the negative impact of positive emotions, while negative dissimilarity suppresses the positive impact of negative emotions.
This paper | Three types | Review sentiment | Sentiment attributes in online reviews of competing products generate distinct spillover effects; three types of review dissimilarity moderate these effects.