
12 March 2026

Review System Design and Sales: How Interface Visibility Moderates the Effect of Platform-Generated Default Reviews

1 School of Management, Seoul School of Integrated Sciences and Technologies (aSSIST University), Seoul 03767, Republic of Korea
2 Business School, Harbin Institute of Technology, Harbin 150001, China
3 School of Management, Harbin University of Commerce, Harbin 150001, China
* Author to whom correspondence should be addressed.

Abstract

Platform-generated default reviews are widespread in e-commerce, yet they are underexamined as a review system design feature. Using 1994 Taobao product snapshots from two windows (January–April 2025, n = 983; December 2025–January 2026, n = 1011) surrounding a gradual interface redesign that folded default reviews out of the default view, this study examines how the default review ratio relates to sales and whether reduced visibility moderates the association. Regression results show that higher default review ratios are associated with lower log sales prior to the redesign, while the negative association attenuates once default reviews are de-emphasised; conditional sales levels are also higher post-redesign. Because the rollout was gradual and the data are repeated cross-sectional snapshots, estimates are interpreted as differences in conditional associations across regimes. These patterns are robust to alternative specifications, additional controls, category-specific post shifts, and winsorization. Overall, the market impact of platform-generated review signals depends on interface visibility, highlighting an actionable governance lever for review system design.

1. Introduction

Online product reviews are a central information input in digital retailing, shaping consumers’ beliefs about product quality and transaction risk and, ultimately, market outcomes. A large empirical body of literature shows that review valence and volume are associated with product sales and demand shifts in online marketplaces [1,2]. At the same time, the economic value of reviews also creates incentives for strategic distortion, and researchers have documented various forms of review manipulation and fraud that can undermine the informativeness and credibility of review systems [3,4,5].
In this paper, we focus on an understudied but increasingly prevalent platform-side mechanism in Chinese e-commerce: default reviews. In many marketplaces, when buyers do not actively leave feedback within a specified period, the platform automatically records a standardized review entry—often accompanied by a generic sentence such as “this user did not fill in the evaluation content”—and assigns a five-star rating. Unlike fabricated or incentivized reviews that are intentionally authored to persuade, default reviews are generated by the platform’s rules as a by-product of user inaction. However, they may still distort the information environment by mechanically inflating ratings while contributing little diagnostic content. When default reviews account for a large share of a product’s review pool, consumers may infer that the displayed reputation is less representative of genuine experiences, interpret the review system as less transparent, and become more sceptical toward the product’s credibility [6,7,8].
A further complication is that the influence of default reviews should depend not only on their prevalence but also on their visibility in the review interface. A growing stream of research on salience and digital choice architecture shows that platform interface design and information presentation can systematically steer what users notice and process, thereby shaping judgments and choices [9,10,11,12,13]; ordering rules, salience cues, and default settings are prominent examples of such interface elements [13,14]. More broadly, research on algorithmic and interface governance highlights that platform-mediated information curation can change user trust and perceived fairness, even when the underlying information remains available somewhere in the system [15,16]. These insights suggest a simple but important implication for review systems: the same default review stock may exert very different market effects depending on whether it is displayed prominently or pushed out of the default viewing set.
Motivated by this logic, we examine how the default review ratio relates to product sales and whether this association changes when the platform alters the review interface in a way that reduces the salience of default reviews. Empirically, we exploit an interface transition on Taobao in which default reviews became increasingly folded/less visible in the review section while genuine user reviews were prioritized in the default display. Because the change was implemented progressively rather than at a single sharp cutoff, we adopt a pre–post window design: we compare products observed in an earlier window when default reviews were typically shown together with genuine reviews versus a later window when default reviews were largely folded from the default view. Using product-level secondary data collected via Python 3.12.9-based web scraping from publicly accessible Taobao product pages (platform-displayed sales at the time of scraping, current price, total review volume, and default review counts/ratio), we estimate interaction models that allow the marginal association between the default review ratio and sales to differ between the two periods while accounting for key covariates and product category fixed effects.
Building on the above motivation, this study pursues the following research objectives. Objective 1: Examine how the default review ratio is associated with product sales. Objective 2: Test whether reducing the salience/visibility of default reviews via an interface redesign attenuates the association in Objective 1. Objective 3: Whilst holding relevant covariates constant, assess whether sales levels differ between the post- and pre-redesign periods. These objectives guide the empirical design, the presentation of results, and the discussion throughout the paper.
This study contributes to the literature in three ways. First, it extends research on online reviews and manipulation by theorizing and empirically examining platform-generated default reviews as an interface-embedded distortion that operates even without persuasive user-generated text. Second, it connects review research with interface and choice architecture perspectives by showing why visibility and salience are central boundary conditions for when default reviews matter for market outcomes [17,18]. Third, it provides evidence from a large, real-world marketplace setting, offering practical implications for platform governance and review system design aimed at improving transparency and sustaining consumer trust [19,20].

2. Literature Review

To strengthen the theoretical grounding, we conducted a structured scoping review of prior work on (i) platform-generated or system-filled review signals and reporting bias, (ii) interface design, salience, and digital choice architecture. We synthesize representative evidence in each stream and use it to motivate the hypotheses tested in Section 3. Table 1 summarizes representative evidence across the two streams and highlights how each stream motivates our hypotheses and design.
Table 1. Structured scoping review mapping of key literature streams (representative evidence) and their relevance to this study.

2.1. Default Reviews

Default reviews refer to platform-generated evaluations that appear when consumers do not actively submit feedback within a specified time window. On major Chinese e-commerce platforms, the system typically auto-populates a generic text (e.g., “This user did not fill in the evaluation content”) alongside a default star rating, thereby adding entries to the review pool without voluntary consumer-generated content (as shown in Figure 1). Unlike fake or incentivised reviews that involve deliberate content fabrication, default reviews are produced through platform rules and thus constitute a distinct type of platform-generated review signal embedded in the review infrastructure [28,29,30].
Figure 1. Screenshots of users’ review sections on Taobao.com and JD.com mobile apps. Note: The screenshots are shown in the original Chinese-language interface, and the arrows in the figure provide English translations of the focal interface elements.
Although default reviews are often textless and low in diagnostic value, they still contribute to aggregate rating cues that consumers may use heuristically when evaluating products. Prior research has long established that online review signals affect sales and demand outcomes [1,31] and that consumers’ reliance on reviews depends on perceived informativeness and credibility [21,32]. Studies further highlight the roles of review presentation and source-related cues in shaping trust and persuasion in online environments [2,33,34]. More broadly, systematic evidence on review credibility identifies multiple antecedents that govern whether consumers regard review environments as trustworthy [22]. These streams imply that platform-generated rating artifacts—despite limited textual information—may still matter because they modify the structure and composition of the review pool, motivating research that explicitly isolates default reviews and their prevalence at the product level [35].
Importantly, the review pool consumers observe is shaped by reporting/selection processes and platform rules, implying that a mechanically generated “default positive” rule can systematically change the review pool composition even without additional persuasive content [6,7,8]. Accordingly, the default review ratio captures a platform-embedded component of the reputation signal that differs conceptually from voluntary eWOM. As summarized in Table 1 (Stream 1), this stream of work suggests that review signals affect demand, but the observed review pool is jointly determined by consumer reporting and platform mechanisms—motivating our focus on default-review prevalence as a product-level platform-generated signal.

2.2. Interface Design and the Salience of Review Information

A growing stream of research emphasises that the influence of online reviews depends not only on review content but also on how review information is structured and presented on platform interfaces. Design elements such as ordering rules, formatting, grouping, and filtering shape consumers’ attention allocation and determine which cues become accessible and salient during decision making [2,36,37]. Related work further shows that contextual framing and perceived transparency of platform mechanisms affect how consumers interpret review environments and form credibility judgments [23,38,39]. From a choice architecture perspective, subtle interface features can systematically steer user inference without changing underlying content, implying that the same review pool may carry different persuasive weight under different display regimes [9,40]. More broadly, research on algorithmic curation and interface-based influence highlights that platforms can shape user judgments through presentation and navigation structures rather than through direct persuasion [15,41]. Together, this literature suggests that review interface design is a consequential boundary condition for the effects of platform-generated review signals.
Figure 2 illustrates how platform interface choices can alter the default viewing set of review information. In the pre-redesign interface, default reviews are readily visible alongside voluntary reviews, whereas the post-redesign interface folds default reviews behind an additional navigation step, thereby increasing access costs and reducing salience in routine browsing. Prior research on digital nudging and default effects suggests that such visibility and ordering decisions systematically shape what users attend to and use as input in judgment and choice [9,14]. Related work on online credibility and platform transparency further indicates that presentation and perceived openness of platform mechanisms influence trust and the interpretation of review environments [23,38,39], and that interface-embedded curation can affect perceived fairness even when information remains technically available elsewhere in the system [15,16]. These insights motivate treating the redesign as a meaningful boundary condition for the market impact of default review prevalence.
Figure 2. Post-redesign screenshots of users’ review sections on Taobao.com mobile apps. Note: The screenshots are shown in the original Chinese-language interface, and the arrows in the figure provide English translations of the focal interface elements.
As summarized in Table 1 (Stream 2), interface choices that alter visibility and access costs shift which cues enter consumers’ default information set, making the Taobao redesign a theoretically meaningful boundary condition.
Overall, while eWOM research has established how user-generated review valence, volume, and textual diagnostics relate to demand, two gaps remain. Prior work pays limited attention to platform-generated artifacts (e.g., default reviews) that reshape the observable review pool, and it rarely links interface-driven visibility/salience to the impact of such artifacts in real marketplaces. These gaps motivate our focus on default review prevalence and the redesign that de-emphasizes default reviews.

3. Hypothesis Development and Conceptual Framework

To address Objectives 1–3, this study develops testable hypotheses and a conceptual framework. Building on the two literature streams summarized in Table 1, it links default review prevalence to sales outcomes and specifies when this association should weaken as interface visibility changes. Theoretically, the study integrates information-diagnosticity theory and a choice architecture perspective (operationalized in digital marketplaces as interface-level digital nudges) to explain both the baseline association and its boundary conditions.

3.1. Default Review Ratio and Sales

Online reviews serve as important external signals that help consumers infer product quality and transaction risk, thereby shaping purchase decisions and market outcomes [24,42]. Empirically, a large body of work documents that review valence, volume, and related review signals are associated with demand and sales across platforms and product categories [1,2,21]. This evidence, however, is largely grounded in settings where the review pool predominantly reflects voluntary consumer feedback, which is more likely to be perceived as diagnostic, experience-based, and authentic [33,43]. In such settings, consumers can treat review statistics as informative signals because they are presumed to aggregate dispersed, experience-based information. When this presumption weakens, the same statistics can become “noisy” indicators that require discounting, shifting attention toward other cues (e.g., price or seller attributes) or increasing decision delay.
Based on information-diagnosticity theory in eWOM research [44,45], reviews influence demand to the extent that they are perceived as informative, experience-based, and trustworthy inputs for inferring quality and transaction risk [24,42,46]. When the observable review pool is increasingly populated by platform-generated entries that are less diagnostic and produced through opaque rules [8], the credibility of the review environment may be discounted relative to settings dominated by voluntary feedback [33,47]. Crucially, low-diagnostic entries can dilute the informational value of the displayed pool: even if they mechanically add positive ratings, they contribute little incremental evidence about product performance, making it harder for consumers to separate truly high-quality products from those benefiting from rule-based inflation. Moreover, because platform rules and reporting/selection processes shape what is observable to consumers [6,7], a higher default review ratio can serve as a compositional cue that weakens perceived representativeness of genuine experiences, thereby lowering purchase propensity and, in aggregate, sales. Put differently, the default review ratio can be interpreted as an indicator of how much of the displayed reputation is “earned” through active feedback versus “filled” through system rules, and a higher share of the latter increases perceived uncertainty about the underlying experience distribution.
Taken together, the demand relevance of review signals [1,2,21] and the distinctive informational limitations and opacity of system-filled default reviews [8] imply that higher default review prevalence is more likely to undermine (rather than enhance) perceived credibility and purchase propensity in the marketplace context considered here. Therefore, we predict a negative association between the default review ratio and sales.
H1: 
The default review ratio is negatively associated with product sales.

3.2. Interface Redesign as a Boundary Condition

The influence of review signals depends not only on their existence in the system but also on whether they enter consumers’ attention and information set during evaluation. Research on digital choice environments shows that interface design elements—such as ordering, grouping, defaults, and visual salience—systematically shape attention allocation and what information becomes accessible and influential in decision making [9,25,26]. This boundary condition logic is consistent with evidence on digital nudging and credibility heuristics, which suggests that presentation and salience can steer how users interpret and use online information [13,23,24]. Based on the choice architecture framework, the “default view” effectively defines the baseline information set for many consumers, particularly under limited attention and time constraints [13,48]. Defaults and frictions (e.g., an extra click) can meaningfully change what is noticed and used, even when the underlying information remains available elsewhere [14,49]. In parallel, heuristic approaches to online credibility imply that users often rely on readily observable cues rather than exhaustively processing all available information, making salience a key determinant of which signals influence judgment [23].
Applied to default reviews, this perspective implies that the market implications of default review prevalence should be strongest when default reviews are salient in the default viewing set. Prior to the redesign, default and genuine reviews were mixed and shown in chronological order, making default reviews readily visible and more likely to be interpreted as cues of platform intervention. After the redesign, default reviews are folded/de-emphasised, and the default view primarily displays genuine reviews, reducing the visibility and accessibility of default reviews [50]. Even if default reviews remain available elsewhere in the system, reduced salience lowers the likelihood that consumers will notice, process, and incorporate them into judgment. As a result, the marginal association between default review prevalence and sales should attenuate in the post-redesign regime.
H2: 
The negative association between the default review ratio and sales is weaker in the post-redesign period than in the pre-redesign period.
Beyond moderating the marginal effect (slope) of default reviews, the redesign may also be associated with a shift in the overall information environment of the review interface. By prioritizing genuine reviews and improving the structure and perceived transparency of review information, the post-redesign interface can increase the perceived diagnosticity of available cues and facilitate trust formation, which may influence conversion efficiency at a broader level [25,27,39]. Holding relevant covariates constant (e.g., price, review volume, and product category differences), this improved default viewing environment implies a higher conditional baseline sales level in the post-redesign period.
From a choice architecture lens, interface governance can change not only which cues are weighted (a slope effect) but also the overall “quality” of the default informational environment. When the default interface foregrounds experience-based feedback and reduces the prominence of low-diagnostic artefacts, consumers may face lower cognitive costs in interpreting the review environment and may form evaluations more efficiently (i.e., less deliberation and less perceived ambiguity). In addition, research on transparency and algorithmic/interface communication suggests that clearer presentation and perceived openness of platform mechanisms can strengthen perceived trustworthiness and reduce scepticism toward automated or curated systems [15,16,51]. In a review context, a default view that highlights genuine reviews can therefore increase perceived interpretability and fairness of the information environment, which is conducive to trust formation and conversion. Related evidence also indicates that how review information is formatted and surfaced can shift how strongly consumers respond to review cues [25], and field-based evidence on word-of-mouth system implementation shows that design interventions in review environments can translate into conversion changes [27]. Taken together, even if the redesign does not “add” new information, it can improve the default interface’s diagnosticity and perceived transparency, implying higher conditional baseline sales in the post-redesign period.
H3: 
Holding relevant covariates constant, product sales are higher in the post-redesign period than in the pre-redesign period.
As illustrated in Figure 3, we summarize the conceptual framework and hypotheses.
Figure 3. Conceptual framework and hypothesized relationships.

4. Methodology

4.1. Research Design and Data

This study exploits a platform interface redesign related to the display and salience of default reviews as a quasi-exogenous change in the review information environment. Importantly, the redesign was rolled out gradually over approximately one year rather than implemented at a single known cutoff date. To mitigate contamination from partial rollout, we adopt a repeated cross-sectional pre–post design that compares an early “pre-redesign” window with a late “post-redesign” window when the redesign was largely completed while treating the intermediate months as a transition (washout) period and excluding them from the main analysis. The key identification intuition is that the redesign altered the visibility and accessibility of default reviews in the review interface, which may change the strength of the association between default review prevalence and sales.
The study compiled product-level secondary data by conducting Python 3.12.9-based web scraping of publicly available information from Taobao product pages. The dataset consists of 1994 product observations, with 983 observations collected in the pre-redesign window (1 January–15 April 2025) and 1011 observations collected in the post-redesign window (1 December 2025–1 January 2026). We operationalize the interface regime using a binary classification based on observable front-end elements of the review module at the time of access/screenshot (legacy interface = 0; redesigned interface = 1). A product page is coded as legacy if default reviews are visible in the main review stream by default (i.e., they appear alongside user-generated reviews without requiring additional clicks/tabs). A page is coded as redesigned if default reviews are folded, de-emphasized, or separated in the default view (e.g., they are moved under an additional tab/section, collapsed behind a filter, or require an extra click to display), thereby reducing their salience and accessibility in the default review browsing experience.
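For concreteness, this coding rule can be expressed as a small classifier over observable front-end elements of the review module. The sketch below is a minimal illustration; the boolean field names (default_reviews_separate_tab, default_reviews_collapsed, extra_click_required) are hypothetical stand-ins for the audit checklist, not the study's actual scraper schema.

```python
# Minimal sketch of the interface-regime coding rule (legacy = 0, redesigned = 1).
# Field names are illustrative assumptions, not the study's actual schema.

def code_interface_regime(page: dict) -> int:
    """Classify a product page's review interface regime.

    Legacy (0): default reviews appear in the main review stream without
    extra clicks or tabs. Redesigned (1): default reviews are folded,
    collapsed behind a filter, or moved under a separate tab/section.
    """
    folded = (
        page.get("default_reviews_separate_tab", False)
        or page.get("default_reviews_collapsed", False)
        or page.get("extra_click_required", False)
    )
    return 1 if folded else 0

# Example: a page whose default reviews sit behind an extra tab is coded 1.
print(code_interface_regime({"default_reviews_separate_tab": True}))  # -> 1
```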
To support this operational classification and document rollout completion, we conducted a manual audit of randomly sampled product pages within each sampling window. Table 2 reports the resulting interface patterns (percentages computed within each sampling window) and summarizes the coding rule used. Any remaining legacy interface exposure in the post window would attenuate estimated regime differences, rendering our main estimates conservative.
Table 2. Manual audit of interface regime and coding rule (legacy vs. redesigned).
In addition, where available, we recorded shop and product attributes that may affect demand and review generation, including shop type and product/service attributes (e.g., big brand indicator, free shipping, and promotion status).
Because the interface change was rolled out gradually without a known cutoff date, a regression discontinuity design is not feasible. Likewise, a standard difference-in-differences design would require a credible control group and multi-period data to assess parallel trends; our data consist of repeated cross-sectional snapshots from two non-overlapping windows rather than a product-level panel. Accordingly, we adopt a pre–post window design with an interaction (difference in slopes) specification, focusing on whether the conditional association between the default review ratio and sales differs across interface regimes while controlling for key covariates and category fixed effects.
All data used in this section are aggregated at the product level and are publicly visible on the platform; no personally identifiable information was collected. Standard data cleaning procedures were applied prior to analysis, including checking for missing values and ensuring consistent measurement across periods. Summary statistics by period are reported in Section 5.

4.2. Measures

Table 3 summarizes variable definitions, transformations, and data sources for all measures used in the main analyses and robustness checks. All variables are measured at the product-snapshot level and scraped from publicly accessible Taobao product pages, including platform-displayed sales, current price, total review count, and default review counts/ratio.
Table 3. Variable design and variable description.
Our dependent variable is ln_sales = ln(1 + sales). The focal explanatory variable is the default review ratio (default_ratio), and we define post based on the two sampling windows described in Section 4.1. Following prior work on online demand and review dynamics [1,2,18], we control for ln_price and ln_total_reviews in the baseline models and add shop/listing attributes (e.g., shop type, big_brand01, free_ship01, promo01) in extended specifications. Product category fixed effects are included in all main regressions.
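As an illustration of how the Table 3 measures can be derived from raw snapshots, the sketch below builds the transformed variables with pandas. The column names (sales, price, total_reviews, default_reviews, window) are assumptions for exposition, and log1p is used throughout to guard against zeros; the study's exact transforms follow Table 3.

```python
import numpy as np
import pandas as pd

def build_measures(df: pd.DataFrame) -> pd.DataFrame:
    """Construct the analysis variables from raw product snapshots."""
    out = df.copy()
    out["ln_sales"] = np.log1p(out["sales"])                 # ln(1 + sales)
    out["ln_price"] = np.log1p(out["price"])
    out["ln_total_reviews"] = np.log1p(out["total_reviews"])
    out["default_ratio"] = out["default_reviews"] / out["total_reviews"]
    out["post"] = (out["window"] == "post").astype(int)      # sampling-window dummy
    return out
```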

4.3. Empirical Strategy

To test our hypotheses, we estimate an interaction model that compares the association between the default review ratio and sales across the pre- and post-redesign windows. The baseline specification is:
ln(1 + sales_i) = β₀ + β₁ DefaultRatio_i + β₂ Post_i + β₃ (DefaultRatio_i × Post_i) + X_i′γ + α_c + ε_i
where ln(1 + sales_i) is the log-transformed sales outcome for product i, DefaultRatio_i is the default review ratio, and Post_i indicates whether the observation is from the post-redesign window. X_i is a vector of controls (log price and log total reviews in the baseline model), α_c denotes product category fixed effects, and ε_i is the error term.
This specification yields a straightforward interpretation. The coefficient β₁ captures the association between the default review ratio and sales in the pre-redesign period (testing H1). The coefficient β₃ captures how this association changes in the post-redesign period; the implied post-redesign slope is β₁ + β₃ (testing H2). Finally, β₂ captures the baseline difference in sales levels between the post- and pre-redesign windows after conditioning on observed covariates and category fixed effects (testing H3). In addition to reporting coefficient estimates, we present marginal effects and predicted values to visualise the moderation pattern across the observed range of default review ratios.
Because the data are product-level snapshots rather than a panel tracking identical products over time, our estimates should be interpreted as differences in conditional associations across periods. To reduce concerns that results are driven by heteroskedasticity, all regressions report heteroskedasticity-robust standard errors.
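The baseline specification can be estimated directly with statsmodels, as in the sketch below. Variable names follow the illustrative schema above, C(category) stands in for the product category fixed effects, and HC1 is one common heteroskedasticity-robust covariance choice (the paper does not specify the exact flavor). The final line computes the implied post-redesign slope β₁ + β₃ with a robust Wald test.

```python
import statsmodels.formula.api as smf

# Baseline interaction model: ln(1 + sales) on default_ratio, post, their
# interaction, controls, and category fixed effects, with robust SEs.
model = smf.ols(
    "ln_sales ~ default_ratio * post + ln_price + ln_total_reviews + C(category)",
    data=df,
).fit(cov_type="HC1")
print(model.summary())

# Implied post-redesign slope (beta1 + beta3) and its robust Wald test (H2).
print(model.t_test("default_ratio + default_ratio:post = 0"))
```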

4.4. Robustness Checks

We conduct a battery of robustness checks to assess whether the baseline findings are sensitive to inference procedures, model specification, sample composition, and variable operationalization. First, to evaluate the sensitivity of statistical inference, we re-estimate the baseline model using bootstrap standard errors (2000 replications) in addition to heteroskedasticity-robust standard errors. Second, to mitigate concerns about omitted variable bias, we progressively enrich the set of controls by adding shop- and product-related attributes that may affect demand and review generation (e.g., shop type, big brand indicator, free shipping, promotion status). Third, because product categories may experience differential baseline shifts across periods, we allow for category-specific post-period changes by interacting product category indicators with the post-redesign indicator (category × Post), thereby relaxing the assumption of a common post-shift across categories.
Fourth, to ensure that the results do not hinge on a single operationalization of default reviews, we replace the default review ratio with alternative measures such as the count of default reviews and transformed variants (e.g., log or logit transformations) and verify whether the moderation pattern remains consistent. Fifth, to reduce the influence of extreme outcomes in sales, we winsorize (or trim) the sales distribution and re-estimate the main specification. Finally, as an alternative estimator suitable for non-negative, right-skewed outcomes, we estimate Poisson pseudo-maximum likelihood (PPML) models with a log link and category fixed effects.
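Two of these checks translate directly into code. The sketch below, under the same illustrative schema, winsorizes sales before re-estimation and fits the PPML alternative as a Poisson GLM with a log link and robust standard errors; the 1st/99th percentile cutoffs are an assumed choice, since the paper does not report its exact winsorization thresholds.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# (1) Winsorize sales at assumed 1st/99th percentiles, then re-run the OLS model.
lo, hi = df["sales"].quantile([0.01, 0.99])
df["ln_sales_w"] = np.log1p(df["sales"].clip(lower=lo, upper=hi))
ols_w = smf.ols(
    "ln_sales_w ~ default_ratio * post + ln_price + ln_total_reviews + C(category)",
    data=df,
).fit(cov_type="HC1")

# (2) PPML: Poisson pseudo-maximum likelihood on sales levels (log link).
# Robust SEs keep inference valid without assuming equidispersion.
ppml = smf.glm(
    "sales ~ default_ratio * post + ln_price + ln_total_reviews + C(category)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")
print(ppml.summary())
```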

5. Data Analysis

We report the empirical results in the order of H1–H3 (Objectives 1–3).

5.1. Descriptive Statistics and Preliminary Patterns

Table 4 reports descriptive statistics by period. The default review ratio increases substantially from the pre-redesign window (mean = 0.268) to the post-redesign window (mean = 0.750), consistent with the platform’s interface change altering the prevalence/visibility of default reviews. Sales and several covariates (e.g., price and review volume) are right skewed, motivating the log transformations used in the main regressions.
Table 4. Descriptive statistics by period (pre vs. post).
The distributions of key categorical attributes (product category, shop type, big brand, free shipping, and promotion) are summarized in Appendix A (Table A1, Table A2 and Table A3).
As a preliminary check, two-sample t-tests (Appendix B, Table A4) show that the pre- and post-redesign samples differ significantly in sales, default review ratio, review volume, and price, motivating the inclusion of covariates and fixed effects in the regression analyses.
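The by-period summaries and t-tests can be reproduced along the lines of the sketch below (illustrative schema as above); Welch's unequal-variance t-test is assumed here, as the paper does not state whether pooled or unequal variances were used.

```python
from scipy import stats

cols = ["sales", "ln_sales", "default_ratio", "ln_total_reviews", "ln_price"]

# Per-period N, mean, and standard error (Table 4 / Table A4 layout).
summary = df.groupby("post")[cols].agg(["count", "mean", "sem"])
print(summary)

# Two-sample tests of pre vs. post differences (Appendix B, Table A4).
for c in cols:
    pre, post = df.loc[df["post"] == 0, c], df.loc[df["post"] == 1, c]
    t, p = stats.ttest_ind(pre, post, equal_var=False)  # Welch t-test (assumed)
    print(f"{c}: diff = {pre.mean() - post.mean():.3f}, t = {t:.2f}, p = {p:.4f}")
```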

5.2. Baseline Regression Results

Table 5 reports the baseline interaction model that tests whether the association between the default review ratio and sales differs across the pre- and post-redesign windows. Consistent with Objective 1 (H1), the default review ratio is negatively associated with log sales in the pre-redesign period (β₁ = −2.582, p < 0.01), indicating that products with a higher share of default reviews tend to exhibit lower sales after accounting for price, overall review volume, and product category fixed effects.
Table 5. Main regression results.
Consistent with Objective 2 (H2), the interaction term between the default review ratio and the post-redesign indicator is positive and statistically significant (β₃ = 1.046, p < 0.01). This pattern implies that the negative association between default review ratio and sales becomes weaker after the interface redesign. The implied slope of the default review ratio in the post-redesign period equals β₁ + β₃ = −1.536, suggesting attenuation rather than reversal of the negative relationship.
Finally, supporting Objective 3 (H3), the post-redesign indicator is positive and significant (β₂ = 1.447, p < 0.01), implying a systematic shift in baseline sales levels between the two periods after conditioning on covariates and category fixed effects.

5.3. Moderation Visualization

To facilitate interpretation of the interaction effect, Figure 4 visualizes the predicted values of ln(1 + sales) across the observed range of the default review ratio for the pre- and post-redesign periods based on the baseline specification (Table 5, Column 2). The central lines show the predicted values, while the upper and lower bounds indicate 95% confidence intervals. Predictions are computed holding continuous covariates at their means and incorporating product category fixed effects. The figure shows a clear moderation pattern: in the pre-redesign period, higher default review ratios are associated with a steeper decline in predicted log sales, whereas in the post-redesign period, the slope is noticeably flatter. This visualization is consistent with the positive interaction term in Table 5, indicating that the negative association between the default review ratio and sales is attenuated after the interface redesign. Moreover, because the interaction term is positive, the post–pre gap in predicted sales tends to widen as the default review ratio increases, reflecting the combined shift in intercept and slope implied by the model.
Figure 4. Predicted log sales by default review ratio before vs. after the interface redesign.
For completeness, the corresponding simple slope (marginal effect) estimates with 95% confidence intervals are reported in Appendix E (Figure A1).
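A Figure 4-style plot can be generated from the fitted baseline model as sketched below. Continuous covariates are held at their means; because predictions must be made for some category, the modal category is used for display (an assumed choice; averaging predictions over categories is an equally valid alternative).

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Predicted ln(1 + sales) across the observed default-ratio range, by period,
# with 95% confidence bands from the robust covariance of `model`.
grid = np.linspace(df["default_ratio"].min(), df["default_ratio"].max(), 50)
for post, label in [(0, "Pre-redesign"), (1, "Post-redesign")]:
    newdata = pd.DataFrame({
        "default_ratio": grid,
        "post": post,
        "ln_price": df["ln_price"].mean(),
        "ln_total_reviews": df["ln_total_reviews"].mean(),
        "category": df["category"].mode()[0],  # modal category for display
    })
    pred = model.get_prediction(newdata).summary_frame(alpha=0.05)
    plt.plot(grid, pred["mean"], label=label)
    plt.fill_between(grid, pred["mean_ci_lower"], pred["mean_ci_upper"], alpha=0.2)
plt.xlabel("Default review ratio")
plt.ylabel("Predicted ln(1 + sales)")
plt.legend()
plt.show()
```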

5.4. Results of Robustness Checks

To assess robustness of the baseline interaction results, Table 6 reports a set of specification checks. The key coefficients are stable when inference is based on bootstrap SEs (C1), when additional shop/product controls are included (C2), when allowing for category-specific post shifts (C3), and when winsorizing sales (C4): the default review ratio remains negative and significant, and the post × default_ratio term remains positive and significant, consistent with attenuation after the redesign. In contrast, the PPML model using sales levels (C5) yields a different sign for the interaction term. This divergence likely reflects differences in estimand and outcome scale: the log-linear OLS models (ln(1 + sales)) and PPML (sales in levels with a log link) do not target exactly the same quantity. The log transformation reduces the influence of extreme sales values and supports proportional interpretation, whereas PPML places more weight on level differences in the conditional mean of a non-negative, right-skewed outcome. With highly skewed sales (and potential distributional shifts between the pre and post windows), the interaction term can therefore vary across estimators. We thus treat PPML as complementary and rely on the consistent log-scale results (Table A5, Table A6, Table A7 and Table A8) for our main inference. The robustness specifications are fully reported in the main text; Appendix C (Table A5, Table A6, Table A7, Table A8 and Table A9) provides extended coefficient outputs to facilitate replication.
Table 6. Robustness checks for the baseline interaction model.
We also examined whether the results depend on how default review exposure is operationalized. As reported in Table 7, replacing the share-based default review ratio with alternative measures (e.g., ln(default review counts) or a logit-transformed ratio) preserves the negative pre-period association with sales, while the post-period attenuation pattern is more measure-dependent. This is plausible because count-based measures partly co-move with overall review volume (controlled by ln_total_reviews) and rescaling (e.g., logit) changes the functional form, which can affect interaction estimates. We therefore retain the share-based default_ratio as the focal measure in the main analysis, as it most directly captures the relative prevalence (and hence visibility) of default reviews in the review pool. Full coefficient outputs (including fixed-effects coefficients) are provided in Appendix D for transparency.
Table 7. Alternative operationalizations of default review exposure (robustness).
Overall, the negative pre-period association is robust across specifications, while moderation is most clearly captured by the share-based measure.

6. Discussion and Conclusions

6.1. Discussion

This study examines whether the prevalence of platform-generated default reviews is associated with product sales and whether this association depends on review interface design. Overall, default review prevalence is negatively associated with sales. This negative association attenuates after default reviews are folded/de-emphasized, and conditional sales levels are higher in the post-redesign window.
The results for H1 indicate a negative association between default review prevalence and sales in the pre-redesign regime, where default and genuine reviews were mixed and visible in the default view. In the broader eWOM literature, review signals have been repeatedly linked to demand and sales outcomes [1,2,52,53]. Our evidence indicates that, in addition to these commonly examined signals, the composition of the displayed review pool—which can be shaped by reporting/selection processes and platform rules [6,7]—is empirically associated with market performance in this setting. Theoretically, this supports the view that platform-generated artefacts can meaningfully shape the informational environment of eWOM.
Evidence consistent with H2 suggests that the negative association becomes significantly weaker after default reviews are folded/de-emphasized in the default view. Existing evidence shows that interface visibility and ordering shape attention allocation and salience in digital environments [10,11,54]; we extend this evidence to a marketplace setting involving platform-generated default reviews by showing an attenuation after the redesign. This identifies interface visibility/salience as a boundary condition for review-signal effects, refining eWOM-based demand inference by linking platform design to the strength of review-related associations.
The results for H3 indicate that conditional sales levels are higher in the post-redesign window. One plausible interpretation is that prioritizing genuine reviews and improving the structure of the default interface may enhance the overall diagnosticity of the review environment and facilitate more efficient decision making [11,55]. At the same time, because the redesign was rolled out gradually and the design is a repeated cross-sectional rather than a product-level panel, we treat this level difference as period- and regime-consistent evidence rather than a clean causal estimate of a redesign “lift”. Conceptually, the pattern is consistent with interface governance shifting baseline conversion efficiency.

6.2. Theoretical Implications

This study offers three theoretical contributions to the literature on online reviews, platform governance, and retailing outcomes.
First, this study extends review research beyond content-centric explanations by conceptualizing default reviews as a platform-generated, structural feature of review systems and empirically linking their prevalence to market performance. Prior research on eWOM has primarily emphasized review valence, volume, and textual diagnostics as drivers of demand or has focused on explicitly fraudulent review content; comparatively less attention has been paid to platform-generated artifacts that reshape the composition of the observable review pool [1,5,33,56]. Evidence for H1 suggests that platform-generated default reviews do not merely add “noise”; by altering review pool composition (the default review ratio), they create an observable artifact that is systematically associated with sales.
Second, interface visibility/salience is identified as a consequential boundary condition for platform-generated review signals. Rather than assuming that default review prevalence has a uniform market implication, the results for H2 indicate that its marginal association with sales is substantially weaker once default reviews are folded or de-emphasized in the default interface. This advances digital choice architecture and digital nudging perspectives by demonstrating—in a real marketplace setting—that interface governance can moderate how a platform-generated artifact translates into demand outcomes [9,10,11,12]. More broadly, this contribution highlights “default review governance” as a mechanism through which platforms shape the economic meaning of review signals—by altering what enters consumers’ baseline information set in routine browsing.
Third, the study extends interface governance and digital retailing theory by showing that review interface redesign may be associated not only with slope moderation but also with a shift in the conditional baseline level of marketplace performance (H3). Specifically, the positive post-period level difference is consistent with the idea that foregrounding genuine reviews and improving the structure of the default display can enhance the overall diagnosticity of the review environment and facilitate more efficient evaluation and conversion. Given the gradual rollout and repeated cross-sectional design, we interpret this pattern as regime-consistent evidence of an improved information environment rather than a clean causal redesign “lift” [11,55]; nevertheless, it underscores that interface design can matter for market outcomes even when underlying information remains available elsewhere in the system.

6.3. Practical Implications

The analysis yields actionable implications for platforms, sellers, and regulators concerned with the integrity and effectiveness of review systems. In practical terms, the redesign reduces the magnitude of the negative association between default review prevalence and sales by roughly 40% (Table 5, Col. 2), suggesting that folding/de-emphasizing default reviews can materially mitigate their market impact.
For platform operators, the results suggest that default reviews are not “neutral fillers”: when they constitute a sizable share of the review pool, they can be associated with poorer market outcomes. Interface-level governance—such as folding default reviews and foregrounding genuine feedback—appears to reduce the marginal influence of default review prevalence, indicating that relatively low-cost design changes can improve the informational environment without removing data from the system. Platforms should also consider clearer labelling and disclosure of default review generation rules to reduce ambiguity and suspicion.
For sellers and brands, managing the composition of feedback becomes important alongside managing average ratings. A high default review ratio may signal limited genuine engagement and can be linked to weaker sales performance. Firms should therefore prioritize post-purchase engagement strategies that increase authentic review contributions (e.g., service follow-ups and review facilitation compliant with platform policies) rather than relying on mechanically accumulated default ratings.
For policymakers and industry stakeholders, default reviews represent an under-discussed transparency issue: they may inflate rating signals while providing little diagnostic content. Beyond general disclosure, our results suggest several practical governance steps: (i) require explicit labelling of system-generated/default reviews in the default view and separate reporting of “voluntary reviews” versus “default reviews” in review summaries; (ii) mandate a standardized disclosure statement describing how default reviews are generated, counted, and displayed (including when they are folded/de-emphasized); and (iii) encourage auditable documentation of major review interface changes that materially alter review salience. These measures can strengthen consumer protection and improve the credibility and accountability of platform review ecosystems.

6.4. Conclusions

This study examines whether the prevalence of platform-generated default reviews is associated with product sales and whether this association depends on review interface design. Using repeated cross-sectional product snapshots collected before and after a gradual interface redesign, this study finds that higher default review prevalence is negatively associated with sales when default reviews are salient, but this negative association becomes significantly weaker once default reviews are folded/de-emphasized in the default view. Conditional sales levels are higher in the post-redesign window, though this level difference should be interpreted cautiously given the gradual rollout and repeated cross-sectional design. Overall, the findings underscore that the market impact of platform-generated review signals depends critically on interface visibility.

6.5. Limitations and Future Research

This study has several limitations that point to promising directions for future research. First, although the study exploits two sampling windows that correspond to largely pre- versus post-folding display regimes, the platform’s rollout was gradual rather than a single, sharply timed intervention. As a result, our repeated cross-sectional design cannot fully isolate the interface change from other contemporaneous shifts in the platform environment. Future work could leverage higher-frequency panel data, exploit quasi-experimental variation in rollout timing across categories, regions, or user cohorts, or combine platform logs (e.g., exposure and click data) to better identify the causal pathway from visibility to conversion.
Second, our data are product-level snapshots rather than a product–time panel, which limits the ability to control for unobserved time-invariant product heterogeneity through within product estimation. Collecting longitudinal data on the same SKUs across multiple time points would enable stronger fixed-effects designs and allow researchers to examine dynamic adjustment processes—such as whether sellers respond to interface changes by altering review management strategies.
Third, the moderation results are interpreted primarily under the log linear specification with robust inference. Alternative estimators using sales levels (e.g., PPML) reproduce the negative association between default reviews and sales but suggest that the magnitude and direction of the moderation may vary with outcome scaling and functional-form assumptions. Future work can clarify when and why the moderating effect is scale-dependent, for example, by modelling heterogeneity across product categories, price tiers, or demand distributions.
Finally, our empirical setting focuses on Chinese e-commerce platforms and default review mechanisms that may differ across markets. Comparative studies across platforms that adopt different default feedback rules or disclosure standards would improve external validity and inform platform governance on review transparency.

Author Contributions

Conceptualization, Y.L., P.Z. and D.H.; Methodology, Y.L., P.Z. and D.H.; Software, Y.L., Y.C. and W.L.; Validation, Y.L., Y.C. and W.L.; Formal analysis, Y.L., P.Z. and D.H.; Investigation, Y.L., Y.C. and W.L.; Resources, P.Z. and D.H.; Data curation, Y.L., Y.C. and W.L.; Writing—original draft, Y.L.; Writing—review & editing, P.Z., D.H., Y.C. and W.L.; Visualization, Y.L.; Supervision, P.Z. and D.H.; Project administration, P.Z. and D.H.; Funding acquisition, P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 71972060, and the Humanities and Social Science Fund of the Ministry of Education of China, grant number 24YJA630143.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Distribution of product categories by period (pre vs. post).

product_type_name     | Pre: Freq. (%) | Post: Freq. (%)
1 (food)              | 90 (47.9)      | 98 (52.1)
2 (toys)              | 118 (53.9)     | 101 (46.1)
3 (home appliances)   | 117 (52.0)     | 108 (48.0)
4 (electronics)       | 100 (49.8)     | 101 (50.2)
5 (home decor)        | 99 (50.0)      | 99 (50.0)
6 (books)             | 93 (50.0)      | 93 (50.0)
7 (clothing)          | 98 (50.3)      | 97 (49.7)
8 (pet products)      | 82 (42.3)      | 112 (57.7)
9 (grooming products) | 91 (47.2)      | 102 (52.8)
10 (sporting goods)   | 95 (48.7)      | 100 (51.3)

Note: percentages are computed within each category across the two periods.
Table A2. Distribution of shop types by period (pre vs. post).

shop_type_id             | Pre: Freq. (%) | Post: Freq. (%)
1 (Taobao store)         | 429 (48.4)     | 457 (51.6)
2 (Tmall store)          | 142 (50.0)     | 142 (50.0)
3 (Tmall flagship store) | 412 (50.0)     | 412 (50.0)
Table A3. Distribution of brand type attributes by period (pre vs. post).

Variable    | Pre (post = 0) | Post (post = 1) | Total
big_brand01 | 0.334          | 0.329           | 0.331
free_ship01 | 0.969          | 0.970           | 0.970
promo01     | 0.755          | 0.779           | 0.767

Appendix B

Table A4. T-test.

Variable         | Pre: N | Pre: Mean | Pre: SE | Post: N | Post: Mean | Post: SE | Difference (pre − post)
sales            | 983    | 438.334   | 20.038  | 1011    | 1582.002   | 60.326   | −1143.668 ***
ln_sales         | 983    | 5.247     | 0.051   | 1011    | 6.792      | 0.035    | −1.546 ***
default_ratio    | 983    | 0.268     | 0.004   | 1011    | 0.750      | 0.008    | −0.482 ***
ln_total_reviews | 983    | 4.765     | 0.042   | 1011    | 4.988      | 0.036    | −0.223 ***
ln_price         | 983    | 6.317     | 0.046   | 1011    | 3.597      | 0.046    | 2.721 ***

*** p < 0.01.

Appendix C

Table A5. Bootstrap standard errors.

Variable               | (1) ln_sales       | (2) ln_sales
default_ratio          | −2.582 *** (0.270) | −2.582 *** (0.273)
1.post                 | 1.447 *** (0.139)  | 1.447 *** (0.136)
1.post#c.default_ratio | 1.046 *** (0.305)  | 1.046 *** (0.304)
ln_price               | −0.143 *** (0.018) | −0.143 *** (0.018)
ln_total_reviews       | 0.777 *** (0.021)  | 0.777 *** (0.021)
Observations           | 1994               | 1994
R-squared              | 0.698              | 0.698

Standard errors in parentheses. Model includes product type fixed effects. Column (1) reports heteroskedasticity-robust SEs; Column (2) reports bootstrap SEs (2000 replications). *** p < 0.01.
Table A6. Additional controls.

Variable               | (1) ln_sales       | (2) ln_sales
default_ratio          | −2.582 *** (0.270) | −2.564 *** (0.270)
1.post                 | 1.447 *** (0.139)  | 1.459 *** (0.139)
1.post#c.default_ratio | 1.046 *** (0.305)  | 1.017 *** (0.305)
ln_price               | −0.143 *** (0.018) | −0.145 *** (0.018)
ln_total_reviews       | 0.777 *** (0.021)  | 0.778 *** (0.021)
Observations           | 1994               | 1994
R-squared              | 0.698              | 0.701

Standard errors in parentheses. Column (1) is the baseline model; Column (2) adds shop/product attribute controls. *** p < 0.01.
Table A7. Category-specific post shifts.

Variable               | (1) ln_sales       | (2) ln_sales
default_ratio          | −2.582 *** (0.270) | −2.646 *** (0.268)
1.post#c.default_ratio | 1.046 *** (0.305)  | 1.106 *** (0.308)
ln_price               | −0.143 *** (0.018) | −0.150 *** (0.023)
ln_total_reviews       | 0.777 *** (0.021)  | 0.772 *** (0.021)
Observations           | 1994               | 1994
R-squared              | 0.698              | 0.702

Standard errors in parentheses. Column (2) adds category × Post interactions (coefficients not shown). *** p < 0.01.
Table A8. Winsorized sales.

Variable               | (1) ln_sales       | (2) ln_sales
default_ratio          | −2.582 *** (0.270) | −2.589 *** (0.270)
1.post                 | 1.447 *** (0.139)  | 1.439 *** (0.138)
1.post#c.default_ratio | 1.046 *** (0.305)  | 1.062 *** (0.305)
ln_price               | −0.143 *** (0.018) | −0.142 *** (0.018)
ln_total_reviews       | 0.777 *** (0.021)  | 0.774 *** (0.021)
Observations           | 1994               | 1994
R-squared              | 0.698              | 0.699

Standard errors in parentheses. Column (2) uses winsorized sales. *** p < 0.01.
Table A9. Alternative estimator (PPML).

Variable               | (1) ln_sales (OLS) | (2) sales (PPML)
default_ratio          | −2.582 *** (0.270) | −1.060 *** (0.227)
1.post                 | 1.447 *** (0.139)  | 1.820 *** (0.123)
1.post#c.default_ratio | 1.046 *** (0.305)  | −0.690 *** (0.259)
ln_price               | −0.143 *** (0.018) | −0.191 *** (0.020)
ln_total_reviews       | 0.777 *** (0.021)  | 0.641 *** (0.029)
Observations           | 1994               | 1994
R-squared              | 0.698              | —

Standard errors in parentheses. *** p < 0.01.

Appendix D

Table A10. Alternative measures of default reviews.

Variable                    | (1) ln_sales       | (2) ln_sales       | (3) ln_sales
default_ratio               | −2.582 *** (0.270) |                    |
1.post#c.default_ratio      | 1.046 *** (0.305)  |                    |
ln_default_reviews          |                    | −0.117 * (0.063)   |
1.post#c.ln_default_reviews |                    | −0.282 *** (0.033) |
dr_logit                    |                    |                    | −0.254 *** (0.048)
1.post#c.dr_logit           |                    |                    | 0.071 (0.052)
1.post                      | 1.447 *** (0.139)  | 2.441 *** (0.170)  | 1.579 *** (0.087)
ln_price                    | −0.143 *** (0.018) | −0.113 *** (0.018) | −0.135 *** (0.018)
ln_total_reviews            | 0.777 *** (0.021)  | 1.037 *** (0.063)  | 0.788 *** (0.023)
2.product_type_name         | 0.153 * (0.078)    | 0.172 ** (0.079)   | 0.153 * (0.078)
3.product_type_name         | 0.242 ** (0.097)   | 0.160 (0.101)      | 0.226 ** (0.096)
4.product_type_name         | 0.091 (0.094)      | 0.054 (0.099)      | 0.069 (0.096)
5.product_type_name         | −0.129 (0.099)     | −0.183 * (0.104)   | −0.117 (0.102)
6.product_type_name         | −0.005 (0.093)     | −0.039 (0.093)     | −0.033 (0.094)
7.product_type_name         | 0.216 ** (0.096)   | 0.247 ** (0.097)   | 0.200 ** (0.097)
8.product_type_name         | 0.006 (0.079)      | −0.072 (0.082)     | 0.020 (0.080)
9.product_type_name         | 0.151 * (0.090)    | 0.124 (0.092)      | 0.153 * (0.090)
10.product_type_name        | 0.330 *** (0.088)  | 0.298 *** (0.089)  | 0.331 *** (0.090)
_cons                       | 3.031 *** (0.211)  | 1.332 *** (0.227)  | 1.974 *** (0.206)
Observations                | 1994               | 1994               | 1994
R-squared                   | 0.698              | 0.681              | 0.694

Standard errors in parentheses. Base levels (0.post, the 0.post interactions, and 1.product_type_name) are normalized to zero and omitted. * p < 0.10, ** p < 0.05, *** p < 0.01.

Appendix E

Figure A1. Simple slope (marginal effect) estimates of default review ratio on log sales by period.

References

1. Chevalier, J.A.; Mayzlin, D. The Effect of Word of Mouth on Sales: Online Book Reviews. J. Mark. Res. 2006, 43, 345–354.
2. Forman, C.; Ghose, A.; Wiesenfeld, B. Examining the relationship between reviews and sales: The role of reviewer identity disclosure. Inf. Syst. Res. 2008, 19, 291–313.
3. Hu, N.; Bose, I.; Koh, N.S.; Liu, L. Manipulation of online reviews: An analysis of ratings, readability, and sentiments. Decis. Support Syst. 2012, 52, 674–684.
4. Luca, M.; Zervas, G. Fake it till you make it: Reputation, competition, and Yelp review fraud. Manag. Sci. 2016, 62, 3412–3427.
5. Mayzlin, D.; Dover, Y.; Chevalier, J.A. Promotional reviews: An empirical investigation of online review manipulation. Am. Econ. Rev. 2014, 104, 2421–2455.
6. Dellarocas, C.; Wood, C.A. The sound of silence in online feedback: Estimating trading risks in the presence of reporting bias. Manag. Sci. 2008, 54, 460–476.
7. Hu, N.; Pavlou, P.A.; Zhang, J. On self-selection biases in online product reviews. MIS Q. 2017, 41, 449–471.
8. An, H.; Li, W.; Yu, Y.; Wang, Z. Biases in online reviews: The default positive review rule and the conditional rebate strategy. Internet Res. 2025, Epub ahead of print.
9. Maslowska, E.; Segijn, C.M.; Vakeel, K.A.; Viswanathan, V. How consumers attend to online reviews: An eye-tracking and network analysis approach. Int. J. Advert. 2019, 39, 282–306.
10. Reeck, C.; Posner, N.A.; Mrkva, K.; Johnson, E.J. Nudging app adoption: Choice architecture facilitates consumer uptake of mobile apps. J. Mark. 2023, 87, 510–527.
11. Häubl, G.; Trifts, V. Consumer decision making in online shopping environments: The effects of interactive decision aids. Mark. Sci. 2000, 19, 4–21.
12. Bauer, J.M.; Bergstrøm, R.; Foss-Madsen, R. Are you sure, you want a cookie? The effects of choice architecture on users’ decisions about sharing private online data. Comput. Hum. Behav. 2021, 120, 106729.
13. Weinmann, M.; Schneider, C.; vom Brocke, J. Digital nudging: Guiding online user choices through interface design. Bus. Inf. Syst. Eng. 2016, 58, 433–436.
14. Jachimowicz, J.M.; Duncan, S.; Weber, E.U.; Johnson, E.J. When and Why Defaults Influence Decisions: A Meta-Analysis of Default Effects. Behav. Public Policy 2019, 3, 159–186.
15. Eslami, M.; Krishna Kumaran, S.R.; Sandvig, C.; Karahalios, K. Communicating algorithmic process in online behavioral advertising. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, QC, Canada, 21–27 April 2018; pp. 1–28.
16. Grimmelikhuijsen, S.; Jilke, S.; Olsen, A.L.; Tummers, L. Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Adm. Rev. 2023, 83, 241–262.
17. Johnson, E.J.; Bellman, S.; Lohse, G.L. Defaults, framing and privacy: Why opting in–opting out. Mark. Lett. 2002, 13, 5–15.
18. Dennis, A.R.; Yuan, L.; Feng, X.; Webb, E.; Hsieh, C.J. Digital nudging: Numeric and semantic priming in e-commerce. J. Manag. Inf. Syst. 2020, 37, 39–65.
19. Aysolmaz, B.; Müller, R.; Meacham, D. The public perceptions of algorithmic decision-making systems: Results from a large-scale survey. Telemat. Inform. 2023, 79, 101954.
20. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126.
21. Zhu, F.; Zhang, X. Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics. J. Mark. 2010, 74, 133–148.
22. Pooja, K.; Upadhyaya, P. What makes an online review credible? A systematic review of the literature and future research directions. Manag. Rev. Q. 2024, 74, 627–659.
23. Metzger, M.J.; Flanagin, A.J.; Medders, R.B. Social and heuristic approaches to credibility evaluation online. J. Commun. 2010, 60, 413–439.
24. Maslowska, E.; Malthouse, E.C.; Viswanathan, V. Do customer reviews drive purchase decisions? The moderating roles of review exposure and price. Decis. Support Syst. 2017, 98, 1–9.
25. Camilleri, A.R. The presentation format of review score information influences consumer preferences through the attribution of outlier reviews. J. Interact. Mark. 2017, 39, 1–14.
26. Alzate, M.; Arce-Urriza, M.; Cebollada, J. Is review visibility fostering helpful votes? The role of review rank and review characteristics in the adoption of information. Comput. Hum. Behav. 2024, 153, 108088.
27. Huang, N.; Sun, T.; Chen, P.; Golden, J.M. Word-of-mouth system implementation and customer conversion: A randomized field experiment. Inf. Syst. Res. 2019, 30, 805–818.
28. Lappas, T.; Sabnis, G.; Valkanas, G. The Impact of Fake Reviews on Online Visibility: A Vulnerability Assessment of the Hotel Industry. Inf. Syst. Res. 2016, 27, 940–961.
29. He, S.; Hollenbeck, B.; Proserpio, D. The Market for Fake Reviews. Mark. Sci. 2022, 41, 896–921.
30. Zhuang, M.; Cui, G.; Peng, L. Manufactured opinions: The effect of manipulating online product reviews. J. Bus. Res. 2018, 87, 24–35.
31. Cui, G.; Lui, H.-K.; Guo, X. The effect of online consumer reviews on new product sales. Int. J. Electron. Commer. 2012, 17, 39–58.
32. Filieri, R.; McLeay, F. E-WOM and accommodation: An analysis of the factors that influence travelers’ adoption of information from online reviews. J. Travel Res. 2014, 53, 44–57.
33. Mudambi, S.M.; Schuff, D. What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Q. 2010, 34, 185–200.
34. Filieri, R.; Alguezaui, S.; McLeay, F. Why do travelers trust TripAdvisor? Antecedents of trust towards consumer generated media and its influence on recommendation adoption and word of mouth. Tour. Manag. 2015, 51, 174–185.
35. Ansari, S.; Gupta, S. Review manipulation: Literature review, and future research agenda. Pac. Asia J. Assoc. Inf. Syst. 2021, 13, 97–121.
36. Kwon, W.; Lee, M.; Back, K.-J.; Lee, K.Y. Assessing restaurant review helpfulness through big data: Dual-process and social influence theory. J. Hosp. Tour. Technol. 2021, 12, 177–195.
37. Lee, S.; Lee, S.; Baek, H. Does the dispersion of online review ratings affect review helpfulness? Comput. Hum. Behav. 2021, 117, 106670.
38. Shan, Y. How credible are online product reviews? The effects of self-generated and system-generated cues on source credibility evaluation. Comput. Hum. Behav. 2016, 55, 633–641.
39. He, J.; Wang, X.S.; Vandenbosch, M.B.; Nault, B.R. Revealed preference in online reviews: Purchase verification in the tablet market. Decis. Support Syst. 2020, 132, 113281.
40. Sunstein, C.R. Choosing Not to Choose: Understanding the Value of Choice; Oxford University Press: New York, NY, USA, 2015; pp. 1–219.
41. Gray, C.M.; Kou, Y.; Battles, B.; Hoggatt, J.; Toombs, A.L. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, QC, Canada, 21–27 April 2018; pp. 1–14.
42. Pavlou, P.A.; Dimoka, A. The nature and role of feedback text comments in online marketplaces: Implications for trust building, price premiums, and seller differentiation. Inf. Syst. Res. 2006, 17, 392–414.
43. Harrison-Walker, L.J.; Jiang, Y. Suspicion of online product reviews as fake: Cues and consequences. J. Bus. Res. 2023, 160, 113780.
44. Feldman, J.M.; Lynch, J.G. Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior. J. Appl. Psychol. 1988, 73, 421–435.
45. Herr, P.M.; Kardes, F.R.; Kim, J. Effects of word-of-mouth and product-attribute information on persuasion: An accessibility-diagnosticity perspective. J. Consum. Res. 1991, 17, 454–462.
46. Filieri, R. What makes online reviews helpful? A diagnosticity-adoption framework to explain informational and normative influences in e-WOM. J. Bus. Res. 2015, 68, 1261–1270.
47. Sun, X.; Han, M.; Feng, J. Helpfulness of online reviews: Examining review informativeness and classification thresholds by search products and experience products. Decis. Support Syst. 2019, 124, 113099.
48. Thaler, R.H.; Sunstein, C.R.; Balz, J.P. Choice architecture. In The Behavioral Foundations of Public Policy; Shafir, E., Ed.; Princeton University Press: Princeton, NJ, USA, 2013; pp. 428–439.
49. Donkers, B.; Dellaert, B.G.; Waisman, R.M.; Häubl, G. Preference dynamics in sequential consumer choice with defaults. J. Mark. Res. 2020, 57, 1096–1112.
50. Gu, X.; Cao, J.; Fang, Y. Review Manipulation and Filtering on Digital Platforms. Inf. Syst. Res. 2025, Epub ahead of print.
51. Ochmann, J.; Michels, L.; Tiefenbeck, V.; Maier, C.; Laumer, S. Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting. Inf. Syst. J. 2024, 34, 384–414.
52. Floyd, K.; Freling, R.; Alhoqail, S.; Cho, H.Y.; Freling, T. How Online Product Reviews Affect Retail Sales: A Meta-analysis. J. Retail. 2014, 90, 217–232.
53. Babić Rosario, A.; Sotgiu, F.; de Valck, K.; Bijmolt, T.H.A. The Effect of Electronic Word of Mouth on Sales: A Meta-Analytic Review of Platform, Product, and Metric Factors. J. Mark. Res. 2016, 53, 297–318.
54. Rackowitz, L.; Haampland, O. The sound of salience: How platform design impacts consumption. Inf. Econ. Policy 2025, 71, 101144.
55. Gutt, D.; Neumann, J.; Zimmermann, S.; Kundisch, D.; Chen, J. Design of Review Systems—A Strategic Instrument to Shape Online Reviewing Behavior and Economic Outcomes. J. Strateg. Inf. Syst. 2019, 28, 104–117.
56. Archak, N.; Ghose, A.; Ipeirotis, P.G. Deriving the pricing power of product features by mining consumer reviews. Manag. Sci. 2011, 57, 1485–1509.
