Article

The Role of Product Type in Online Review Generation and Perception: Implications for Consumer Decision-Making

1 Department of International Trade, Konkuk University, Seoul 05029, Republic of Korea
2 Department of Business & Finance Education, College of Education, Kongju National University, Gongju 32588, Republic of Korea
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 135; https://doi.org/10.3390/jtaer20020135
Submission received: 19 April 2025 / Revised: 19 May 2025 / Accepted: 5 June 2025 / Published: 6 June 2025

Abstract

Product type plays a critical role in shaping how consumers generate, perceive, and utilize online reviews in decision-making. While previous studies have examined various review features, this study highlights the distinct effects of product classification—search goods, experience goods, and credence goods—on both review generation and perceived helpfulness. Drawing on Product Classification Theory, as well as Self-Determination Theory and the Theory of Planned Behavior, we analyze how review characteristics such as text length and photo inclusion vary across product types and influence consumer perceptions. Using a large-scale dataset of verified Amazon reviews, we find that consumers are more likely to produce longer and more visually rich reviews for search and experience goods than for credence goods, which are harder to evaluate and, thus, elicit less elaborate content. In terms of review helpfulness, reviews for experience goods are rated as more helpful than those for credence goods, while those for search goods are seen as less helpful. Furthermore, review length significantly boosts helpfulness for search goods, while photo inclusion enhances it for experience goods. These findings contribute to review effectiveness research by emphasizing the moderating role of product type and offering actionable insights for e-commerce platforms to improve review design and consumer decision-making.

1. Introduction

Online marketplaces have reshaped the global retail environment, with platforms like Amazon.com becoming central to consumer purchasing behavior. As of 2023, Amazon accounted for approximately 38% of U.S. e-commerce sales and hosted over 12 million products, making it one of the most dominant online marketplaces worldwide [1]. The continued growth of digital commerce has increased consumer dependence on online sources of information, especially in contexts where physical inspection of products is not feasible. This challenge is rooted in information asymmetry—a condition in which the seller holds more or better information about a product than the buyer [2]. In online shopping environments, consumers are often unable to directly verify product quality, usage experience, or performance, leading to uncertainty in decision-making [3].
This study focuses specifically on product reviews collected from Amazon.com, one of the largest and most influential online marketplaces. We analyze consumer-generated reviews across three representative product types—search goods, experience goods, and credence goods—to examine how product characteristics shape both review behavior and perceived helpfulness. Each of these product types represents a distinct level of evaluability and information asymmetry, which influence not only how consumers experience products but also how they write and interpret reviews. A better understanding of these distinctions helps explain variations in review content and effectiveness across product categories, which is critical for both theory and practice in digital commerce.
To further understand how and why consumers generate different types of reviews, this study draws on Self-Determination Theory (SDT) and the Theory of Planned Behavior (TPB), which together explain motivational and attitudinal drivers of user engagement in review writing. These psychological frameworks help explain the varying levels of cognitive and emotional effort invested in review formats depending on product characteristics. Unlike brand-specific online stores, where the seller controls both the content and the purchase environment, online marketplaces like Amazon bring together multiple independent sellers and brands, creating a heterogeneous and decentralized review ecosystem. This distinction is important because consumer reviews in online marketplaces are shaped not only by product experience but also by diverse seller behaviors and platform-level dynamics.
Among these informational cues, online product reviews play a particularly critical role in reducing uncertainty and facilitating consumer decision-making. According to Chen et al. [4], 93% of consumers rely on online reviews before making a purchase, highlighting their central role in e-commerce. Consumers are influenced by reviews when assessing product quality, evaluating seller credibility, and managing purchasing risks in the absence of direct inspection [5,6]. However, the form and structure of user-generated content vary widely—from brief ratings to detailed narratives or image-embedded posts [7]. This variability raises key questions about which reviews are more likely to be generated and perceived as helpful and under what conditions such differences arise.
One important condition that may explain this variability is the nature of the product being reviewed [8]. Products differ in how easily their quality can be evaluated prior to or after purchase, leading to the classic typology of search, experience, and credence goods [9,10]. For example, search goods such as USB drives can be assessed through specifications before purchase, experience goods like clothing require personal use, and credence goods such as dietary supplements often remain difficult to evaluate even post-consumption. This variation in evaluability reflects differing levels of information asymmetry between buyers and sellers, where one party has more or better information than the other [11]. In online markets, consumers face higher information asymmetry because they cannot directly inspect products, making them heavily reliant on peer reviews to close the knowledge gap.
Product type plays a central role in shaping how consumers evaluate offerings and how they rely on reviews to mitigate uncertainty [8,12,13]. The classic typology of search, experience, and credence goods [9,10] provides a foundational framework for categorizing products based on the ease with which quality can be assessed. Search goods are characterized by attributes that can be evaluated prior to purchase through objective information such as specifications and features [14]. Experience goods require personal use to assess quality or satisfaction, such as taste, fit, or usability [15]. Credence goods, by contrast, involve characteristics that cannot be fully verified even after consumption—such as nutritional benefits or technical effectiveness—making consumers more dependent on expert or peer validation [16,17]. These fundamental differences in information asymmetry may not only influence how consumers seek and interpret reviews but also how they generate them [15,18,19].
While prior research has examined review features such as valence, length, or sentiment [5,20], relatively few studies have systematically investigated how these features interact with product type. In particular, existing models tend to treat reviews as uniform signals, often overlooking the moderating role of product classification in shaping review behavior and perceived helpfulness. This results in a limited understanding of how consumer-generated reviews vary across product categories and why certain formats are more effective in specific contexts.
To address this theoretical gap, we adopt a two-stage framework that first explores how product type influences review generation (i.e., the inclusion of ratings, text, and photos) and second, how review features interact with product type to affect perceived review helpfulness. This study is grounded in Product Classification Theory, which categorizes goods into search, experience, and credence types based on their evaluability. To explain the psychological mechanisms behind review generation, we draw on two complementary frameworks: Self-Determination Theory (SDT) [21], which emphasizes intrinsic and extrinsic motivation, and the Theory of Planned Behavior (TPB) [22], which links attitudes, norms, and perceived control to behavioral intentions. By focusing on these theories, we provide a coherent explanation of how product characteristics influence not only consumers’ motivation to write reviews, but also the perceived effectiveness of different review formats:
How do product types shape the way reviews are generated (in terms of format and depth)?
How do consumers interpret and evaluate the helpfulness of different review formats depending on product type?
By integrating these theoretical perspectives, we argue that product characteristics fundamentally shape both review generation and the effectiveness of specific review formats. Using a large-scale dataset of verified reviews across search, experience, and credence goods, this study empirically tests hypotheses linking product type to review behavior and perceived review helpfulness.
In doing so, this paper contributes to the literature by bridging review analytics with Product Classification Theory, offering a theory-driven explanation of how consumer-generated content functions across different product domains. It also provides practical insights for platforms aiming to improve the quality and utility of reviews by tailoring prompts, filters, or visual layouts to fit product types.

2. Literature Review and Hypothesis Development

2.1. Theoretical Foundation: Product Classification Theory and Review Behavior

Product Classification Theory provides the core theoretical foundation for this study. This typology categorizes products based on the degree to which their quality can be evaluated before or after purchase—resulting in the classic classification of search, experience, and credence goods [9,10]. Search goods (e.g., USB drives) can be evaluated through specifications prior to purchase; experience goods (e.g., clothing or cosmetics) require usage to assess satisfaction. Recent research on cross-platform user-generated content (UGC) further illustrates that for experience goods—such as fashion or cosmetics—consumers often integrate traditional customer reviews with influencer or vlogger content to form impressions and guide purchase decisions [23]. This finding underscores the importance of emotionally expressive and narrative-rich content in shaping consumer perceptions of experience-based products.
On the other hand, credence goods (e.g., supplements or technical services) remain difficult to evaluate even post-consumption [16,17]. For such goods, the lack of observable outcomes intensifies reliance on peer-generated content to compensate for information gaps. For example, recent studies on digital fashion emphasize how intangible or unverifiable attributes (e.g., virtual garments or metaverse-based products) lead consumers to depend more heavily on social proof and community-driven signals when forming evaluations [24].
These product types represent fundamentally different levels of evaluability and information asymmetry, which influence how consumers generate and interpret reviews [15,18,19]. In online marketplaces—where physical product inspection is not possible—these asymmetries are exacerbated, making user-generated reviews a crucial source of decision-support information. Thus, this classification not only explains variability in consumer experiences but also serves as the theoretical anchor for understanding differences in review behavior and perceived helpfulness across product types.
To explain why consumers generate certain types of reviews and how such reviews are later perceived and evaluated by others, we draw on two complementary behavioral frameworks. The Theory of Planned Behavior (TPB) posits that intention to perform a behavior is determined by attitudes toward the behavior, perceived social norms, and perceived behavioral control [22]. In the context of review writing, a favorable attitude toward helping others, the expectation that peers value one’s contribution, and confidence in one’s ability to articulate product experiences all combine to influence whether and how elaborately a consumer reviews a product. These factors are themselves shaped by product evaluability: consumers feel most confident and in control when assessing search goods, leading to more detailed, structured reviews.
Self-Determination Theory (SDT) provides a further layer of explanation by distinguishing between intrinsic and extrinsic motivations for behavior [21]. Intrinsic motivations (e.g., altruism, enjoyment of self-expression) drive consumers to craft narrative-rich or multimedia-enhanced reviews, particularly when products evoke strong sensory or emotional responses. Extrinsic motivations (e.g., social validation, reciprocity) also encourage users to invest effort in their reviews. For example, experience goods often elicit richer narratives and visual content because the act of sharing an emotional or sensory experience satisfies intrinsic desires for self-expression. Conversely, a review of search goods tends to emphasize analytical detail and context, reflecting a utility-driven motive to inform others.
By integrating TPB’s emphasis on behavioral intention with SDT’s focus on motivation, this study develops an account of review generation behavior. Together, those theories allow us to predict and interpret why consumers produce longer text, include photos, or limit their contributions depending on the product type and their individual motivational profiles. Likewise, the perceived helpfulness of reviews may vary by product type, as the review’s relevance and informativeness depend on the evaluability of the product. Prior studies have found that detailed reviews are more helpful for search goods [25], narrative and expressive reviews resonate with experience goods [26,27], and reviewer credibility plays a critical role in credence goods [28,29].
Despite these insights, few studies offer a unified model that systematically examines how review format and evaluation vary across product types. This study addresses this gap by incorporating Product Classification Theory as the overarching framework and SDT and TPB as psychological lenses to understand review generation behavior. The next section further elaborates on review feature characteristics and their expected variation across product types.

2.2. Online Review Behavior and Review Features

Online reviews are a primary means through which consumers communicate their product experiences and assist others in making informed choices [30,31]. Rather than treating review generation as a uniform behavior, this study focuses on specific review features—such as the inclusion of detailed text and photos—that reflect different levels of cognitive and emotional engagement. These features serve as behavioral manifestations of review efforts, shaped by product characteristics and user motivation.
Review format and richness significantly affect their perceived helpfulness. Rating-only reviews offer quick feedback but are often criticized for lacking explanatory power [32]. Text reviews provide context, reasoning, and emotional expression, helping readers interpret product attributes more deeply [33]. Reviews combining text and photos further enhance credibility by offering visual confirmation, which is especially important for subjective or quality-sensitive attributes [34].
The concept of perceived helpfulness refers to the extent to which a review aids others’ purchase decisions [35]. Helpfulness is amplified when reviews are specific, emotionally engaging, or aligned with the reader’s own decision context [36]. Yet, these effects may not be uniform across product types. For instance, extensive narrative content might add value for experience goods but be redundant for straightforward search goods. Similarly, for credence goods, reviewer expertise or trust signals may outweigh emotional tone or verbosity [16].
In summary, the intersection of product type and review characteristics has been underexplored in the prior literature. This study addresses this gap by examining how product typology influences review format choices (e.g., text length, visual content) and how these features, in turn, affect perceived helpfulness among other consumers. Helpfulness is typically assessed through user evaluations, such as helpfulness votes, and reflects the informational and persuasive quality of the review [37]. A growing body of literature has identified multiple review characteristics that affect perceived helpfulness, including content length, emotional tone, specificity, structure, and the presence of multimedia.
Longer and more detailed reviews are generally perceived as more helpful, as they tend to provide richer product-related insights and practical guidance [5,38]. However, excessively lengthy reviews may become counterproductive if they lack focus or clarity [39]. Reviews that include both positive and negative aspects of a product, often called balanced reviews, are considered more credible and helpful because they reflect impartiality and provide a more complete evaluation [40]. Emotional expressiveness can further enhance perceived helpfulness by increasing relatability and reader engagement, particularly for affective or hedonic products [41]. In contrast, for search goods like electronics or appliances, which are primarily evaluated based on functional attributes, emotional tone may play a lesser role in determining review helpfulness.
The inclusion of photos or videos has also been found to elevate review helpfulness by providing visual verification of product quality, usage, or context, especially in categories where appearance or fit plays a central role [33,34]. In addition, reviews written by verified buyers or individuals with higher reviewer status (e.g., those with many helpful votes) are often perceived as more trustworthy, enhancing the perceived value of their content [42,43].
Despite these well-documented factors, existing studies have largely treated review helpfulness in a product-agnostic manner. Few have investigated how the effectiveness of specific review features varies depending on product type. For instance, while visual content may enhance review helpfulness for experience goods like apparel or cosmetics, it may add limited value for search goods whose attributes are clearly stated in product listings. Conversely, for credence goods—such as dietary supplements or educational services—the credibility of the reviewer and the depth of experiential insight may matter more than the inclusion of multimedia or emotional tone [16,28]. These observations suggest that product characteristics may serve as important moderators of review helpfulness, yet empirical evidence on this interaction remains limited.
Accordingly, this study builds on prior findings by systematically investigating how product type interacts with review length and photo inclusion to influence perceived helpfulness. By focusing on these specific review features, we aim to offer a more granular understanding of what makes reviews effective in varying product contexts.

2.3. Hypothesis Development

2.3.1. Review Generation for Different Product Types

Product characteristics, as structured by Product Classification Theory [9,10], fundamentally influence how consumers evaluate and express their experiences through online reviews. Search goods offer objective, verifiable attributes that can be evaluated prior to purchase (e.g., electronics or office supplies), experience goods require post-purchase usage to assess quality (e.g., apparel or restaurants), and credence goods remain difficult to evaluate even after consumption (e.g., dietary supplements or legal services). These differences in evaluability shape the effort and format consumers choose when generating reviews [5,44].
Drawing on the Theory of Planned Behavior (TPB) [22], consumers are more likely to generate detailed reviews—including descriptive text and photos—when they hold strong evaluative attitudes, perceive reviewing as socially expected (subjective norms), and feel confident in their ability to assess the product (perceived behavioral control) [45]. Self-Determination Theory (SDT) [46] further explains that consumers who experience greater certainty and emotional involvement are more likely to invest cognitive and emotional effort in writing richer reviews [47,48].
Search goods are characterized by clearly defined attributes, which increase consumers’ confidence in evaluating product quality. When consumers perceive low uncertainty and high control, they are more inclined to provide comprehensive, structured reviews that include both explanatory text and visual content [5].
In contrast, credence goods present higher levels of evaluative ambiguity due to the latent nature of their benefits. Because consumers may lack expertise or confidence in making judgments—even after use—they may hesitate to provide detailed or visually supported reviews [49]. As a result, reviews for credence goods are often less elaborate and narrower in scope [50].
H1a. 
Consumers are more likely to generate reviews with photos or detailed text for search goods than for credence goods.
Experience goods can be evaluated based on post-purchase consumption and often evoke emotional or sensory responses. These affective experiences tend to increase intrinsic motivation to share expressive and illustrative reviews, aligning with SDT’s emphasis on emotional engagement as a driver of behavior [21,43]. Compared to credence goods, the immediacy and personal relevance of experience goods foster more narrative-rich and visually detailed reviews.
H1b. 
Consumers are more likely to generate reviews with photos or detailed text for experience goods than for credence goods.

2.3.2. Perceived Helpfulness of Reviews by Product Type

The perceived helpfulness of online reviews is not uniform across product categories. According to Product Classification Theory [9,10], the evaluability of a product—whether it is a search, experience, or credence good—shapes how consumers interpret and rely on user-generated reviews. Reviews are particularly valued when they reduce uncertainty and enhance decision confidence [51], but the type of uncertainty consumers face varies depending on the product category.
Search goods are characterized by attributes that can be objectively verified prior to purchase, such as specifications or technical features [5,9]. While this lowers consumers’ reliance on peer input, reviews that provide novel or user-contextualized insights beyond the product listing can still be useful. However, reviews that simply restate product facts are often seen as redundant and less helpful [5]. The inherent low uncertainty of search goods limits the perceived informational gain from reviews, thus reducing their overall helpfulness compared to more ambiguous product types.
Credence goods, in contrast, are difficult to evaluate even after consumption [10], such as dietary supplements or technical services. Due to their unverifiable nature, consumers face high uncertainty and often depend on socially constructed cues like reviewer credibility, review specificity, or verification status to reduce perceived risk [49,51]. From the lens of the Theory of Planned Behavior, the greater uncertainty encourages consumers to engage in central-route processing, placing higher value on detailed and informative reviews [52]. Thus, reviews for credence goods are more likely to be perceived as helpful because they compensate for the lack of direct product evaluability.
H2a. 
Consumers perceive reviews for search goods as less helpful than reviews for credence goods.
Experience goods, such as apparel or restaurants, are evaluated through post-consumption experiences. These products involve subjective, sensory, or emotional attributes that consumers can judge only by personal use, so rich narratives and vivid examples directly inform prospective buyers about fit, texture, or enjoyment [53]. According to Self-Determination Theory, such emotional involvement enhances intrinsic motivation, encouraging consumers to produce expressive and illustrative reviews [46]. These narrative and visual elements—photos, metaphors, and emotional detail—resonate with readers facing similar experiential uncertainty, thereby increasing perceived review helpfulness [51,52]. By contrast, the lack of observable outcomes in credence goods often limits such affective engagement, leading to less relatable or vivid reviews. Thus, experience goods tend to elicit review formats that are perceived as more helpful by others.
H2b. 
Consumers are more likely to perceive reviews for experience goods as more helpful than those for credence goods.

2.3.3. Perceived Helpfulness of Review Characteristics

Consumers tend to perceive reviews as more helpful when they include factual, specific, and contextually relevant information. However, the value attributed to specific review features—such as length or the inclusion of photos—varies depending on the product type and the consumer’s motivation to process the content.
For search goods, which are defined by clearly measurable attributes [9], consumers often rely on objective product specifications to form evaluations. In this context, longer reviews that elaborate on product performance or highlight discrepancies between expectations and actual experience serve to complement factual knowledge and reduce residual uncertainty. When consumers feel confident in their ability to assess such products, as suggested by TPB [22], they are more likely to appreciate detailed evaluations that align with their analytic processing style. Therefore, review length functions as a proxy for informativeness and credibility.
H3a. 
For search goods, longer reviews are more likely to be perceived as helpful.
In contrast, experience goods are assessed based on post-consumption impressions, often involving sensory or emotional responses. These evaluations are more subjective and harder to articulate solely through text. According to the Self-Determination Theory [46], emotionally engaged consumers are intrinsically motivated to share expressive and personally meaningful content, including images that capture nuanced aspects of product experience. Photos serve not only as visual evidence but also as affective cues that help other consumers relate to the reviewer’s experience. Thus, for experience goods, the inclusion of photos increases relatability and perceived helpfulness.
H3b. 
For experience goods, the inclusion of photos increases the perceived helpfulness of reviews.
Building on the preceding theoretical foundations and hypotheses, the following section presents the overall research framework that integrates the two analytical stages—review generation and review consumption.

2.4. Research Framework

As shown in Figure 1, this study is structured around two interrelated stages: review generation and review consumption.
Stage 1: Review Generation—This stage examines how different product types—search goods, experience goods, and credence goods—influence the format of reviews consumers generate, such as the inclusion of longer text or photos. This stage is theoretically grounded in Product Classification Theory, which explains how product evaluability shapes review behavior and is supported by Self-Determination Theory (SDT) and the Theory of Planned Behavior (TPB). These behavioral theories provide insights into the psychological motivations—both intrinsic and extrinsic—that influence consumers’ decision to engage in richer review content.
Stage 2: Review Consumption—This stage investigates how consumers perceive the helpfulness of reviews based on both product type and review characteristics. It evaluates whether consumers find certain types of reviews—such as those with longer text or visual elements—more helpful depending on the product category. This stage is grounded in Product Classification Theory, with Self-Determination Theory and Theory of Planned Behavior offering psychological explanations for how review richness and evaluability influence consumer judgment.

3. Data and Methodology

3.1. Methodology

This study adopts a quantitative research design to examine how product type influences both the generation and perceived helpfulness of consumer reviews. We employed a two-stage analytical approach using multivariate regression models. In the first stage, we investigated review generation behavior using linear regression models. In the second stage, we analyzed review helpfulness using both linear regression and generalized linear models (GLMs).
To examine the impact of product type on consumers’ review generation behavior, we employed linear regression models. Credence goods were used as the reference category to compare against search goods and experience goods, enabling us to analyze how these product types influence the inclusion of photos and the length of text in consumer reviews. Linear regression has been widely used in review behavior research to estimate the relationship between product attributes and review content variables due to its interpretability and ability to isolate the effect of categorical predictors [53,54].
All statistical analyses were conducted using Python 3.13 (pandas, statsmodels) and Stata 17.0. To ensure robustness, logarithmic transformations were applied to the number of photos and the word count of reviews, allowing us to account for skewed distributions [54]. Three sets of variables were utilized to confirm the relationship between product types and the dependent variables. Control variables, including product price, yearly dummies, and monthly dummies, were incorporated into the models to account for the potential effects of pricing and temporal trends on review generation. Due to the large number of time-related variables, the results for these variables are omitted from the table for clarity. The regression model is specified as follows:
Y = β₀ + β₁·sg + β₂·eg + β₃·rating + β₄·rating² + β₅·expos + β₆·exneg + β₇·price + β₈·yearly dummy + β₉·monthly dummy + ε,
where Y represents the dependent variable (logarithmic values of photo count or word count for a review), sg and eg represent binary variables for search goods and experience goods, respectively, rating is the product rating, rating2 is the squared product rating, expos and exneg represent extreme positive and negative ratings, price is the product price (to control for price effects), monthly dummy and yearly dummy control for time effects, and ϵ is the error term.
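For concreteness, this Stage 1 specification can be estimated with the tools listed above (pandas and statsmodels). The sketch below is a minimal illustration rather than the authors' actual script; the file name and the column names (log_wc, sg, eg, rating, expos, exneg, price, year, month) are hypothetical placeholders.

```python
# Minimal sketch of the Stage 1 review-generation model (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("amazon_reviews.csv")  # assumed: one row per verified review

# Dependent variable: log word count; an analogous model replaces it with log photo count.
stage1_wc = smf.ols(
    "log_wc ~ sg + eg + rating + I(rating**2) + expos + exneg"
    " + price + C(year) + C(month)",
    data=reviews,
).fit()

print(stage1_wc.summary())
```

Credence goods are captured by the omitted category: when both sg and eg equal zero, the intercept and time controls describe the credence-good baseline.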
To examine how product types affect the perceived helpfulness of reviews, we test hypotheses H2a, H2b, H3a, and H3b using linear regression models for Models (1), (2), and (3), and generalized linear models (GLM) for Models (4), (5), and (6). GLM is employed to model the perceived helpfulness variable, which is a non-negative, right-skewed count-like measure (number of helpful votes), following the prior literature in eWOM and review analytics that utilize log-link functions for such outcomes [55,56,57]. By employing both linear regression models and GLMs, the analysis ensures robust results and a comprehensive understanding of how product types influence review behavior and helpfulness. The dependent variable in this analysis is log helpful, representing the perceived helpfulness of a review. A logarithmic transformation is applied to normalize the distribution of perceived helpfulness and reduce right skewness in the data.
Y = β₀ + β₁·sg + β₂·eg + β₃·rating + β₄·rating² + β₅·expos + β₆·exneg + β₇·log photo + β₈·log wc + β₉·sg_photo + β₁₀·eg_photo + β₁₁·sg_wc + β₁₂·eg_wc + β₁₃·price + β₁₄·yearly dummy + β₁₅·monthly dummy + ε,
where the variables “log photo” and “log wc” are the logarithmic transformations of the number of photos and word count in the review, respectively. “sg_photo,” “eg_photo,” “sg_wc,” and “eg_wc” indicate the interaction terms between product type and review characteristics (photos and word count). The remaining variables are consistent with the previous formulas. This formula reflects the relationships examined in the regression models for the perceived helpfulness of reviews, including both linear and interaction effects.
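The Stage 2 models can be sketched in the same way. The snippet below is illustrative only: the column names are hypothetical, the OLS line corresponds to the log-helpfulness models, and the GLM uses a Poisson family (whose canonical log link matches the log-link approach described above) as one plausible choice for the non-negative helpful-vote count; the paper does not prescribe this exact family.

```python
# Illustrative Stage 2 helpfulness models (hypothetical column names).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

reviews = pd.read_csv("amazon_reviews.csv")  # assumed: same review-level data as Stage 1

rhs = (
    "sg + eg + rating + I(rating**2) + expos + exneg"
    " + log_photo + log_wc"
    " + sg:log_photo + eg:log_photo + sg:log_wc + eg:log_wc"
    " + price + C(year) + C(month)"
)

# Models (1)-(3): linear regression on the log-transformed helpfulness score.
ols_help = smf.ols("log_helpful ~ " + rhs, data=reviews).fit()

# Models (4)-(6): GLM with a log link on raw helpful votes (Poisson family assumed here).
glm_help = smf.glm("helpful_votes ~ " + rhs, data=reviews,
                   family=sm.families.Poisson()).fit()

print(ols_help.summary())
print(glm_help.summary())
```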

3.2. Data Collection and Variable Description

This study utilizes a dataset of 23,250 verified consumer reviews collected from Amazon.com, covering 142 individual products across nine product categories. These categories are classified into three product types—search goods, experience goods, and credence goods—based on established definitions regarding consumers’ ability to evaluate product quality either before, during, or after consumption [9,10]. Each product type is represented by three categories: search goods include Laser Printer_HP, Tablets, and Cell Phones; experience goods comprise BB Facial Creams, Nintendo Switch Games, and LEGO Toys; and credence goods include LMNT Electrolytes, Norton 360 Premium, and Activity/Wellness Monitors. These categories were selected based on their theoretical relevance and frequent use in prior studies of consumer review behavior (e.g., [5,12,58,59,60]). Products were chosen based on review volume and diversity to ensure sufficient data representation across product types. In regression analyses, credence goods serve as the reference category due to their high evaluation uncertainty, making them a theoretically meaningful baseline for comparison.
Table 1 presents the definitions of the key variables used in the analysis. Consumer satisfaction is measured using a continuous rating scale from 1 to 5 (“Rating”), with an additional squared term (“Rating2”) included to capture potential non-linear effects. Following Mayzlin et al. [61], two binary variables—“expos” (extremely positive) for 5-star reviews and “exneg” (extremely negative) for 1- to 2-star reviews—are constructed to reflect sentiment extremity, allowing the analysis to assess how review polarity influences content characteristics and perceived helpfulness.
To measure review format richness, this study includes the log-transformed number of photos (“log photo”) and the log-transformed word count (“log wc”) as proxies for visual and textual elaboration. Interaction terms between product type and these format variables (e.g., “Search Goods × log photo,” “Experience Goods × log wc”) are included to explore whether the perceived helpfulness of visual and textual cues depends on the type of product. In addition, the logarithmic transformation of the helpfulness score (“log helpful”) is used to normalize the distribution of review helpfulness, which is operationalized based on the number of helpful votes a review receives.
Additional control variables include “product price,” which may influence consumer expectations and evaluation behavior [62], as well as “monthly” and “yearly” dummies to account for temporal patterns in review activity. Together, these variables provide a comprehensive foundation for testing how product types and review features jointly shape both review generation behavior and perceived helpfulness.
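As a rough illustration of how the variables in Table 1 could be derived from raw review data, the sketch below uses pandas; the category labels follow Section 3.2, but the raw column names and the use of log1p (to accommodate zero photo counts and helpful votes) are assumptions rather than the authors' documented procedure.

```python
# Illustrative construction of the key variables (hypothetical raw column names).
import numpy as np
import pandas as pd

raw = pd.read_csv("amazon_reviews_raw.csv")  # assumed raw export of scraped reviews

# Product-type dummies, with credence goods left as the reference category.
search_cats = {"Laser Printer_HP", "Tablets", "Cell Phones"}
experience_cats = {"BB Facial Creams", "Nintendo Switch Games", "LEGO Toys"}
raw["sg"] = raw["category"].isin(search_cats).astype(int)
raw["eg"] = raw["category"].isin(experience_cats).astype(int)

# Rating terms and sentiment-extremity dummies (expos = 5 stars, exneg = 1-2 stars).
raw["rating2"] = raw["rating"] ** 2
raw["expos"] = (raw["rating"] == 5).astype(int)
raw["exneg"] = (raw["rating"] <= 2).astype(int)

# Log-transformed review-format and helpfulness measures.
raw["log_photo"] = np.log1p(raw["photo_count"])
raw["log_wc"] = np.log1p(raw["word_count"])
raw["log_helpful"] = np.log1p(raw["helpful_votes"])

# Interaction terms between product type and review format.
raw["sg_photo"] = raw["sg"] * raw["log_photo"]
raw["eg_photo"] = raw["eg"] * raw["log_photo"]
raw["sg_wc"] = raw["sg"] * raw["log_wc"]
raw["eg_wc"] = raw["eg"] * raw["log_wc"]
```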

3.3. Descriptive Statistics

Table 2 presents summary statistics based on 23,250 reviews across the selected product categories. The dataset covers all three product types, with 24.4% of reviews classified as search goods, 42.6% as experience goods, and 33.0% as credence goods. This distribution ensures that variation in consumer behavior can be effectively analyzed across different product typologies.
The average product rating is 3.39 (SD = 1.60), indicating a general tendency toward positive evaluations. The squared rating variable (Rating2) has a mean of 14.03, capturing potential non-linear relationships in sentiment. Reviews with extreme sentiment are common, with 39.1% of reviews rated as highly positive (5 stars) and 33.1% rated as highly negative (1–2 stars), highlighting the prominence of polarized consumer feedback in shaping review dynamics.
In terms of review content, the mean log-transformed photo count is 0.079 (SD = 0.364), suggesting that photo inclusion is infrequent but non-negligible. The average log-transformed word count is 3.29 (SD = 1.13), indicating considerable variation in review length and textual richness. The average log-transformed helpfulness score is 0.541 (SD = 0.81), reflecting a skewed but measurable distribution of perceived review usefulness based on helpful votes.
Product price, used as a control variable, ranges from USD 6.99 to USD 2399, with an average price of USD 150.37. Since more expensive products may prompt more elaborate reviews, price is included to control for heterogeneity in perceived value and risk. Monthly and yearly dummy variables are also incorporated to account for seasonal and temporal trends in review behavior.
These descriptive statistics establish the foundation for the multivariate analyses that follow, providing insight into the baseline characteristics of review generation and perceived helpfulness across different product types.
Table 3 presents the correlation matrix for the 15 primary variables in this study, providing insights into their interrelationships. Beyond expected associations (e.g., between squared and original rating variables), several noteworthy and theoretically meaningful correlations emerge. For instance, a strong positive correlation is observed between (1) sg and (15) price (0.578), suggesting that search goods in this dataset tend to be higher-priced items. Similarly, (9) log_wc and (10) log_helpful show a moderate-to-strong correlation (0.431), indicating that longer reviews are more likely to be perceived as helpful by other consumers. A moderate correlation between (3) cg and (14) eg_wc (−0.548) suggests an inverse relationship between credence goods and the textual richness of reviews for experience goods. These findings provide empirical support for the proposed links between product type, review richness, and perceived helpfulness and justify the use of these variables in subsequent regression analyses.
Strong negative correlations are also observed, such as between (4) rating and (7) exneg (−0.902) and between (5) rating2 and (7) exneg (−0.849). These results highlight the inverse relationship between positive ratings and the occurrence of extremely negative reviews. Such patterns suggest that as ratings increase, the likelihood of extremely negative reviews decreases significantly.
A further moderate correlation is observed between (2) eg and (3) cg (−0.605), suggesting some degree of association between the experience and credence good categories. Additionally, beyond its association with (9) log_wc noted above, (10) log_helpful shows weak but significant positive correlations with several other variables, indicating that longer and richer reviews tend to receive more helpfulness votes.
Weak correlations are evident for variables such as (1) sg and (4) rating (−0.026) and (5) rating2 and (8) log_photo (−0.017). These results suggest minimal association between product types and ratings, as well as between squared ratings and the inclusion of photos in reviews, implying that these variables may operate independently.
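Summaries of this kind are straightforward to reproduce once the variables above are constructed; the short sketch below (hypothetical DataFrame and column names, mirroring the 15 variables in Table 3) computes the descriptive statistics and the pairwise correlation matrix.

```python
# Sketch of the Table 2 descriptive statistics and Table 3 correlation matrix.
import pandas as pd

reviews = pd.read_csv("amazon_reviews_clean.csv")  # assumed: constructed variables

cols = ["sg", "eg", "cg", "rating", "rating2", "expos", "exneg",
        "log_photo", "log_wc", "log_helpful",
        "sg_photo", "eg_photo", "sg_wc", "eg_wc", "price"]

descriptives = reviews[cols].describe().T[["mean", "std", "min", "max"]]
corr_matrix = reviews[cols].corr().round(3)

print(descriptives)
print(corr_matrix)
```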

4. Results

4.1. Review Generation

Table 4 presents the results of regression analysis, using credence goods as the reference group, to examine how product type influences photo generation and text length in consumer reviews. The analysis reveals notable differences between product types, with consumers showing distinct tendencies to generate different types of reviews based on product characteristics. In models (1) and (4), the average product rating is used as a linear variable, while models (2) and (5) include the squared term of the rating to explore potential non-linear relationships between review generation and the rating given by consumers. Additionally, models (3) and (6) investigate the impact of extreme reviews (i.e., very positive or negative reviews) on review generation.
For search goods, all models show positive and statistically significant coefficients (0.043, p-value < 0.01; 0.043, p-value < 0.01; 0.043, p-value < 0.01; 0.484, p-value < 0.01; 0.491, p-value < 0.01; and 0.486, p-value < 0.01, respectively), indicating that consumers are more likely to generate longer text-based and photo-inclusive reviews for search goods compared to credence goods. This suggests that the clarity and measurable attributes of search goods simplify evaluation, thereby encouraging the creation of detailed reviews. These findings support H1a, demonstrating that search goods prompt more detailed review generation due to their objective characteristics.
Experience goods, such as cosmetics or video games, also show a positive relationship with the generation of longer reviews. Except for model (4), where the relationship is not statistically significant, all other models demonstrate positive and significant coefficients (0.054, p-value < 0.01; 0.053, p-value < 0.01; 0.054, p-value < 0.01; 0.022, not significant; 0.042, p-value < 0.05; 0.039, p-value < 0.05, respectively). These results indicate that while experience goods require post-consumption evaluation, their emotional engagement and subjective nature still facilitate the generation of longer text and photo reviews compared to credence goods. This supports H1b, indicating that experience goods prompt more detailed reviews compared to credence goods.
These results underscore the influence of product types on review generation behavior. Search goods, due to their clarity and ease of evaluation, encourage consumers to produce more detailed text and photo reviews. Although experience goods require post-use evaluation, their subjective and emotionally engaging characteristics make them easier to review compared to credence goods, which demand long-term use or expert validation. Therefore, consumers are more likely to generate longer text and photo reviews for experience goods than for credence goods.

4.2. Perceived Helpfulness of Reviews

In Table 5, the coefficients for search goods (sg.) are consistently negative and statistically significant across all models, indicating that reviews for search goods are perceived as less helpful than those for credence goods. The negative coefficients (−0.133, p-value < 0.01; −0.136, p-value < 0.01; −0.137, p-value < 0.01; −0.133, p-value < 0.01; −0.136, p-value < 0.01; −0.137, p-value < 0.01, respectively) highlight a significant and robust relationship. These results support H2a, suggesting that consumers find reviews for search products less helpful due to their repetitive nature, which often provides little new insight beyond the product’s description.
For experience goods (eg.), the coefficients are positive and statistically significant across all models (0.068, p-value < 0.05; 0.059, p-value < 0.10; 0.060, p-value < 0.10; 0.068, p-value < 0.05; 0.059, p-value < 0.10; 0.060, p-value < 0.10, respectively). These results suggest that consumers perceive reviews for experience goods as more helpful than those for credence goods. The positive relationship indicates that reviews for experience products provide emotional and subjective insights that help consumers assess the product, making them more useful and relatable. This supports H2b, as experience goods tend to evoke personal evaluations that contribute to the perceived helpfulness of the review.
To test H3a, we focus on the interaction between Search Goods and Log Word Count (sg_wc), which examines how the length of the review affects reviews for search goods. The coefficients for “sg_wc” are consistently positive and statistically significant across all models (0.096, p-value < 0.01; 0.095, p-value < 0.01; 0.097, p-value < 0.01; 0.096, p-value < 0.01; 0.095, p-value < 0.01; and 0.097, p-value < 0.01, respectively). This indicates that longer reviews for search products are perceived as more helpful. As search products are typically evaluated based on measurable, objective attributes, detailed reviews provide more information on these attributes, helping consumers make better-informed purchasing decisions. This finding emphasizes the importance of thorough, detailed reviews for search products to reduce uncertainty and guide others in making more confident decisions.
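To make this interaction concrete, the linear specification in Section 3.1 (Models (1)–(3)) implies the following marginal effect of review length, where β₈ is the main effect of log wc and β₁₁ is the sg_wc interaction:

∂(log helpful)/∂(log wc) = β₈ + β₁₁·sg,

so the slope for a search-good review is β₈ + 0.096 under Model (1), compared with β₈ alone for the credence-good baseline; the interaction coefficient therefore captures the additional payoff of textual detail specific to search goods.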
For H3b, we examine the interaction between experience goods and log photo (eg_photo), which explores how the inclusion of photos in the review affects reviews for experience goods. The coefficient for “eg_photo” is consistently positive and statistically significant across all models (0.143, p-value < 0.01; 0.137, p-value < 0.01; 0.138, p-value < 0.01; 0.143, p-value < 0.01; 0.137, p-value < 0.01; and 0.138, p-value < 0.01, respectively). This indicates that the inclusion of photos significantly enhances the perceived helpfulness of reviews for experience goods. Since experience products are often evaluated based on subjective, personal experiences, photos help convey emotional or esthetic aspects that text alone cannot fully capture. The positive coefficients highlight that photos provide tangible evidence that helps potential buyers connect with the reviewer’s experience, making the review more relatable and useful. This finding underscores the importance of including photos in reviews for experience products, as they offer a more authentic, vivid sense of the product’s qualities, thereby enhancing the perceived helpfulness of the review and improving the purchasing process.
Notably, the interaction between search goods and photo inclusion (sg_photo) was not statistically significant across all models. This suggests that, unlike text length, the addition of visual content does not meaningfully increase perceived helpfulness for search goods, likely because such products are already evaluated using clearly defined functional criteria. In contrast, the interaction between experience goods and word count (eg_wc) was significant only at the 5% level (p-value < 0.05), indicating that text-based elaboration still contributes to helpfulness evaluations for experience goods, although its impact may be weaker than visual cues.

5. Discussion and Implications

5.1. Discussion

This study examined how product types influence both the generation of online reviews and their perceived helpfulness, drawing on a large-scale dataset spanning search, experience, and credence goods. The findings reveal that consumers are more likely to generate reviews with detailed text and photos for search and experience goods compared to credence goods. Search goods, characterized by measurable and objective attributes, facilitate confident evaluation, while experience goods evoke emotional and sensory feedback—both conditions that motivate consumers to produce richer reviews. By contrast, the intangible nature of credence goods discourages elaborate reviewing due to lower perceived evaluative certainty.
These findings are consistent with previous research suggesting that products with low evaluative ambiguity—such as search goods—enable more detailed consumer input [5] and that emotionally charged products stimulate expressive review formats [53]. Furthermore, our results support Geuens et al. [63], who found that hedonic or affective products often elicit richer reviews due to emotional involvement. In contrast, the low observability of credence goods aligns with previous findings indicating limited consumer confidence in sharing such product evaluations [64].
In terms of perceived helpfulness, our finding that reviews for search goods were consistently rated as less helpful than those for credence goods agrees with findings by Willemsen et al. [43], who argue that review redundancy diminishes perceived informativeness when product specifications are already well understood. On the other hand, reviews for experience goods were perceived as more helpful than those for credence goods, reflecting the value of emotional and practical detail in evaluating affective or sensory-driven products—echoing insights by Sen and Lerman [49].
Further, the moderating role of the review format was confirmed. For search goods, longer reviews were perceived as more helpful, providing cognitive detail aligned with the product’s objective nature. For experience goods, the inclusion of photos significantly enhanced perceived helpfulness, offering visual context and emotional depth. These findings demonstrate that review format affects perceived usefulness differently depending on product typology, with cognitive cues favored for functionally evaluable products and affective cues for experientially evaluated ones.
Unlike the prior literature that often treats review helpfulness in a product-agnostic manner, this study adds value by highlighting how the interplay between product type and review richness shapes consumer judgment. These results extend the understanding of eWOM by contextualizing review behavior within product evaluability frameworks, offering insights for marketers and platform designers aiming to enhance the effectiveness of user-generated content.

5.2. Theoretical Implications

This study advances theory by positioning Product Classification Theory as a foundational framework to explain heterogeneity in both online review generation and perceived helpfulness. Unlike prior work that often treats user-generated content in a product-agnostic manner, our findings emphasize that the evaluability of the product—whether it is a search, experience, or credence good—shapes not only what type of content is generated but also how it is interpreted by others. This reinforces the central tenet of Product Classification Theory—that differences in information asymmetry across product types fundamentally condition consumer behavior in digital contexts.
By demonstrating that consumers produce and assess reviews differently based on product category, this study contributes to a more differentiated theoretical understanding of online word-of-mouth. For search goods, clearly defined attributes facilitate fact-based, detailed reviews that are evaluated for specificity and structure. Experience goods, which rely on affective and sensory feedback, are best complemented by emotionally expressive and photo-enriched reviews. Credence goods, whose qualities are difficult to verify even post-consumption, require credibility cues such as expertise or balanced narratives to be considered helpful. These insights align directly with the review behavior distinctions posited by Product Classification Theory and offer empirical elaboration of its digital-age implications.
In support of this core framework, we also draw on Self-Determination Theory (SDT) and the Theory of Planned Behavior (TPB) to explain the motivational underpinnings of review generation. SDT helps clarify why consumers might invest effort into producing longer or multimedia-rich reviews—motivated by intrinsic satisfaction (e.g., helping others) or extrinsic rewards (e.g., recognition). TPB extends this by situating review behavior within a broader context of attitudes, norms, and perceived behavioral control. Our findings that review richness varies across product types suggest that these motivations are not uniformly activated but are instead shaped by how evaluable or ambiguous a product is.
Finally, this study contributes to the broader literature on uncertainty reduction in digital commerce. It shows that consumers adjust their reliance on different review features based on the informational gaps associated with each product type. This extends current models of consumer decision-making by illustrating that review helpfulness is not a universal construct but rather a context-sensitive outcome mediated by product characteristics and content alignment. These insights carry implications not only for consumer behavior theory but also for the design of review platforms and content algorithms, which may benefit from tailoring review presentations and prioritization strategies based on product category.
In summary, by integrating Product Classification Theory as the central explanatory model and situating SDT and TPB as complementary lenses, this study offers a layered theoretical contribution that captures both the structural and psychological dimensions of review behavior in online marketplaces.

5.3. Practical Implications

This study offers practical implications tailored specifically to online marketplaces, where peer-generated content plays a central role in consumer decision-making. Grounded in Product Classification Theory and supported by behavioral insights from Self-Determination Theory and the Theory of Planned Behavior, the findings underscore the importance of aligning review system design with product type and information asymmetry levels.
For online platforms and marketplace operators, the results suggest the need for differentiated review solicitation and display strategies. For search goods, platforms should prioritize structured review prompts that encourage factual comparisons and usage scenarios, reinforcing the functional attributes consumers care about. For experience goods, platforms can enhance engagement and perceived value by promoting photo-based, emotionally expressive reviews that resonate with consumers’ sensory experiences. For credence goods, where evaluation remains difficult post-consumption, platforms should highlight credibility signals such as verified purchase badges, reviewer history, expert endorsements, or long-term follow-up reviews. Algorithmic curation of “most helpful” reviews should be calibrated to emphasize review features that best match each product category’s evaluability.
For sellers on online marketplaces, adapting review activation strategies to product types can serve as a competitive advantage. In categories dominated by search goods, sellers should encourage consumers to elaborate on performance in specific use contexts to differentiate from similar listings. For experience goods, visual storytelling through consumer-generated images or videos can improve product appeal and trust. For credence goods, sellers may enhance review usefulness by featuring testimonials from credible users, linking to third-party certifications, or providing expert-generated Q&A content within the review section.
For consumers, understanding that the helpfulness of reviews is product-dependent can lead to more informed choices. Search goods benefit from concise, comparative insights; experience goods require affective and visual cues; and credence goods demand credible, context-rich feedback. Consumers who contribute reviews can maximize their impact by tailoring content to the product type—e.g., including both emotional and practical information for experience goods or documenting sustained usage for credence goods. Doing so not only supports fellow buyers but also improves the integrity and utility of the marketplace review ecosystem.

6. Conclusions

This study explored how product types—categorized as search, experience, and credence goods—influence both the generation and perceived helpfulness of consumer reviews in online marketplaces. Drawing on Product Classification Theory, along with supporting insights from Self-Determination Theory and the Theory of Planned Behavior, the findings highlight that consumer review behavior varies systematically depending on the degree of evaluability associated with each product type. Reviews for search goods tended to be fact-oriented and concise, experience goods elicited more expressive and visual content, while credence goods benefited from credibility-enhancing cues such as reviewer expertise or balanced narratives.
These results contribute to the literature by offering a product-sensitive framework for understanding online word-of-mouth behavior. Rather than treating user-generated reviews as universally effective, the study emphasizes that the usefulness and structure of reviews are context-dependent. This nuanced approach advances the theoretical understanding of digital content consumption and enriches current models of consumer decision-making in environments characterized by information asymmetry.
In practical terms, the findings provide guidance for online platform operators, sellers, and consumers. Digital marketplaces can enhance review systems by tailoring prompts, ranking algorithms, and display features to match the nature of the product. Sellers may also benefit from review strategies aligned with product type, such as emphasizing photos for experience goods or highlighting expertise for credence goods. Consumers, in turn, can make more informed decisions by interpreting reviews in relation to product-specific cues.

7. Limitations and Future Research

While this study offers valuable insights into how product types shape consumer review generation and perceived helpfulness, several limitations point to promising directions for future research.
One limitation is that the analysis centers on consumer behavior without considering how businesses or platforms interpret and utilize review helpfulness. Future studies could explore how review platforms and sellers assess helpfulness metrics by product type and how these assessments influence the design of review systems or content-promotion algorithms. Such investigations could enhance the effectiveness of review filtering, prioritization, and prompting strategies.
Another limitation is the exclusive reliance on secondary data from Amazon. While the platform provides rich and diverse review content, its user base may not represent broader global or platform-specific behaviors. Future research could replicate this study across other e-commerce platforms or in culturally diverse markets to assess the generalizability of the findings and identify possible cultural or contextual differences in review practices.
Additionally, this study treats product type as a categorical variable without exploring intra-category heterogeneity. Even within a single product type, characteristics such as brand reputation, price tier, or product complexity may influence how reviews are written and interpreted. Future work could incorporate more granular analyses—such as topic modeling or sentiment detection—to capture variation in review content within product categories. This would offer a more detailed view of how specific product features interact with consumer-generated content.
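As an illustration of the kind of intra-category analysis proposed here, the short Python sketch below applies topic modeling to a handful of made-up review texts. The library choice (scikit-learn's LatentDirichletAllocation), the toy corpus, and all parameters are assumptions for demonstration only; sentiment scoring could be added in a similar way with a lexicon-based scorer.

```python
# Minimal topic-modeling sketch: bag-of-words features plus LDA to surface
# sub-themes within a single product category.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "The blender is powerful and easy to clean, great for smoothies.",
    "Arrived late and the motor died after two weeks, very disappointed.",
    "Sound quality is excellent but the battery drains quickly.",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the five highest-weighted terms for each topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top_terms)}")
```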
By addressing these limitations, future research can enrich our understanding of the multifaceted dynamics between product characteristics and review behavior. Expanding the theoretical scope and empirical coverage will also help inform platform design, content moderation strategies, and more tailored consumer engagement approaches.

Author Contributions

Conceptualization, J.M.K. and H.D.; methodology, J.M.K.; software, H.D.; validation, J.M.K., H.D., and K.K.-c.P.; formal analysis, J.M.K.; investigation, H.D.; resources, J.M.K.; data curation, H.D.; writing—original draft preparation, H.D.; writing—review and editing, J.M.K. and K.K.-c.P.; visualization, K.K.-c.P.; supervision, J.M.K.; project administration, J.M.K.; funding acquisition, J.M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study were collected privately by the authors. However, the authors are willing to share the data upon reasonable request, provided it aligns with ethical and privacy guidelines.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Research Framework.
Table 1. Definitions of the variables.

| Variable | Description |
|---|---|
| Search Goods (sg.) | Search goods are products evaluated before purchase using available information. |
| Experience Goods (eg.) | Experience goods are products evaluated after consumption based on personal experience. |
| Credence Goods (cg.) | Credence goods are products that are difficult to evaluate, requiring expert validation or long-term use, and rely on online word-of-mouth (eWOM). In this study, they are used as the reference category for binary classification. |
| Rating | Continuous variable representing the overall product rating (1–5). |
| Rating2 | Squared value of the rating, capturing non-linear relationships. |
| Extremely Positive Rating (expos) | Indicates reviews with a 5-star rating (extremely positive). |
| Extremely Negative Rating (exneg) | Indicates reviews with a 1–2 star rating (extremely negative). |
| log photo | Logarithmic value of the number of photos in a review, representing visual content. |
| log word count (log wc) | Logarithmic value of the word count in a review, representing text length. |
| log helpfulness (log helpful) | Logarithmic value of the helpfulness score, typically based on the number of helpful votes received by a review. |
| Search Goods × log photo (sg. photo) | Interaction between 'Search Goods' and 'log photo'. |
| Experience Goods × log photo (eg. photo) | Interaction between 'Experience Goods' and 'log photo'. |
| Search Goods × log word count (sg. wc) | Interaction between 'Search Goods' and 'log word count'. |
| Experience Goods × log word count (eg. wc) | Interaction between 'Experience Goods' and 'log word count'. |
| Product Price (price) | Price of the product, used to measure its impact on consumer consumption and decision-making. |
| Monthly_dummy | Month when the review was posted (1–12). |
| Yearly_dummy | Year when the review was posted (2008–2024). |
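For readers who wish to mirror the variable construction in Table 1, the following Python sketch shows one plausible way to derive the dummies, log transforms, and interaction terms from raw review fields. The column names and the use of log(1 + x) to handle zero counts are assumptions; the study's actual preprocessing may differ.

```python
import numpy as np
import pandas as pd

# Hypothetical raw review data; column names are illustrative only.
df = pd.DataFrame({
    "product_type":  ["search", "experience", "credence"],
    "rating":        [5, 3, 1],
    "photos":        [2, 0, 1],
    "text":          ["Fast and accurate thermometer", "Tastes okay", "Seems to work"],
    "helpful_votes": [10, 0, 3],
    "price":         [19.99, 7.50, 24.00],
})

# Product-type dummies, with credence goods as the omitted reference category.
df["sg"] = (df["product_type"] == "search").astype(int)
df["eg"] = (df["product_type"] == "experience").astype(int)

# Rating terms and extreme-rating indicators.
df["rating2"] = df["rating"] ** 2
df["expos"] = (df["rating"] == 5).astype(int)
df["exneg"] = (df["rating"] <= 2).astype(int)

# Log transforms; log(1 + x) is assumed here to accommodate zero counts.
df["log_photo"] = np.log1p(df["photos"])
df["log_wc"] = np.log1p(df["text"].str.split().str.len())
df["log_helpful"] = np.log1p(df["helpful_votes"])

# Interaction terms between product type and review characteristics.
df["sg_photo"] = df["sg"] * df["log_photo"]
df["eg_photo"] = df["eg"] * df["log_photo"]
df["sg_wc"] = df["sg"] * df["log_wc"]
df["eg_wc"] = df["eg"] * df["log_wc"]
```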
Table 2. Summary statistics of the primary variables.

| Variable | Obs | Mean | Std. Dev. | Min | Max |
|---|---|---|---|---|---|
| sg | 23,250 | 0.244 | 0.43 | 0 | 1 |
| eg | 23,250 | 0.426 | 0.495 | 0 | 1 |
| cg | 23,250 | 0.33 | 0.47 | 0 | 1 |
| rating | 23,250 | 3.386 | 1.602 | 1 | 5 |
| rating2 | 23,250 | 14.028 | 9.969 | 1 | 25 |
| expos | 23,250 | 0.391 | 0.488 | 0 | 1 |
| exneg | 23,250 | 0.331 | 0.471 | 0 | 1 |
| log photo | 23,250 | 0.079 | 0.364 | 0 | 4.127 |
| log wc | 23,250 | 3.287 | 1.131 | 0 | 7.41 |
| log helpful | 23,250 | 0.541 | 0.810 | 0 | 6.396 |
| sg photo | 23,250 | 0.029 | 0.225 | 0 | 4.127 |
| eg photo | 23,250 | 0.036 | 0.249 | 0 | 3.258 |
| sg wc | 23,250 | 0.884 | 1.662 | 0 | 7.209 |
| eg wc | 23,250 | 1.342 | 1.717 | 0 | 7.41 |
| price | 23,250 | 150.373 | 259.225 | 6.99 | 2399 |
| yearly dummy | 23,250 | 2021.769 | 2.385 | 2008 | 2024 |
| monthly dummy | 23,250 | 6.253 | 3.341 | 1 | 12 |
Table 3. Correlation matrix of the primary variables.

| | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13) | (14) | (15) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (1) sg | 1.00 | | | | | | | | | | | | | | |
| (2) eg | −0.490 | 1.00 | | | | | | | | | | | | | |
| (3) cg | −0.398 | −0.605 | 1.00 | | | | | | | | | | | | |
| (4) rating | −0.026 | 0.138 | −0.121 | 1.00 | | | | | | | | | | | |
| (5) rating2 | −0.012 | 0.134 | −0.130 | 0.987 | 1.00 | | | | | | | | | | |
| (6) expos | 0.023 | 0.106 | −0.132 | 0.808 | 0.882 | 1.00 | | | | | | | | | |
| (7) exneg | 0.043 | −0.124 | 0.091 | −0.902 | −0.849 | −0.564 | 1.00 | | | | | | | | |
| (8) log photo | 0.058 | 0.010 | −0.064 | −0.023 | −0.017 | −0.005 | 0.024 | 1.00 | | | | | | | |
| (9) log wc | 0.169 | −0.106 | −0.043 | −0.208 | −0.225 | −0.233 | 0.160 | 0.135 | 1.00 | | | | | | |
| (10) log helpful | 0.165 | −0.001 | −0.150 | −0.135 | −0.126 | −0.101 | 0.116 | 0.151 | 0.431 | 1.00 | | | | | |
| (11) sg photo | 0.223 | −0.109 | −0.089 | 0.013 | 0.016 | 0.015 | −0.014 | 0.600 | 0.129 | 0.113 | 1.00 | | | | |
| (12) eg photo | −0.082 | 0.167 | −0.101 | −0.022 | −0.016 | −0.002 | 0.025 | 0.666 | 0.052 | 0.107 | 0.018 | 1.00 | | | |
| (13) sg wc | 0.937 | −0.459 | −0.373 | −0.066 | −0.056 | −0.026 | 0.072 | 0.093 | 0.339 | 0.246 | 0.272 | 0.076 | 1.00 | | |
| (14) eg wc | −0.444 | 0.907 | −0.548 | 0.077 | 0.067 | 0.034 | −0.079 | 0.041 | 0.174 | 0.108 | 0.099 | 0.197 | 0.416 | 1.00 | |
| (15) price | 0.578 | −0.195 | −0.323 | 0.086 | 0.102 | 0.128 | −0.052 | 0.062 | 0.061 | 0.136 | 0.161 | 0.026 | 0.534 | 0.193 | 1.00 |

Note: The bolded numbers indicate values less than 0.01.
Table 4. Review generation by product type.

| Variables | (1) log_photo | (2) log_photo | (3) log_photo | (4) log_wc | (5) log_wc | (6) log_wc |
|---|---|---|---|---|---|---|
| sg. | 0.043 *** (0.001) | 0.043 *** (0.001) | 0.043 *** (0.001) | 0.484 *** (0.001) | 0.491 *** (0.001) | 0.486 *** (0.001) |
| eg. | 0.054 *** (0.001) | 0.053 *** (0.001) | 0.054 *** (0.001) | 0.022 (0.198) | 0.042 ** (0.015) | 0.039 ** (0.023) |
| rating | −0.008 *** (0.001) | −0.033 *** (0.001) | −0.019 *** (0.001) | −0.141 *** (0.001) | 0.436 *** (0.001) | 0.006 (0.748) |
| rating2 | Not included | 0.004 *** (0.004) | Not included | Not included | −0.094 *** (0.001) | Not included |
| expos | Not included | Not included | 0.023 ** (0.032) | Not included | Not included | −0.510 *** (0.001) |
| exneg | Not included | Not included | −0.021 (0.120) | Not included | Not included | 0.086 ** (0.045) |
| price | 0.001 *** (0.001) | 0.001 *** (0.001) | 0.001 *** (0.001) | 0.001 ** (0.032) | 0.001 (0.530) | 0.001 (0.415) |
| yearly dummy | Included | Included | Included | Included | Included | Included |
| monthly dummy | Included | Included | Included | Included | Included | Included |
| Constant | −0.029 *** (0.001) | 0.004 (0.781) | 0.017 (0.476) | 5.772 *** (0.001) | 3.450 *** (0.001) | 5.165 *** (0.001) |
| Observations | 23,250 | 23,250 | 23,250 | 23,250 | 23,250 | 23,250 |
| R-squared | 1.3% | 1.3% | 1.3% | 7.9% | 9.7% | 9.3% |

Robust p-values in parentheses. *** p < 0.01, ** p < 0.05.
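The specifications in Table 4 are linear models of review generation (photo use and text length) regressed on product-type dummies, rating terms, price, and time dummies, reported with robust p-values. The sketch below shows how comparable models could be estimated with statsmodels on synthetic stand-in data; it is illustrative only, and the data-generating process and HC1 covariance choice are assumptions rather than the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data so the sketch runs end to end; the real analysis
# would use the 23,250 review-level observations described in Table 1.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sg": rng.integers(0, 2, n),
    "rating": rng.integers(1, 6, n),
    "price": rng.uniform(7, 2400, n),
    "year": rng.integers(2008, 2025, n),
    "month": rng.integers(1, 13, n),
    "log_photo": np.abs(rng.normal(0.1, 0.3, n)),
    "log_wc": np.abs(rng.normal(3.3, 1.1, n)),
})
# Experience-good dummy; credence goods remain the omitted reference category.
df["eg"] = ((df["sg"] == 0) & (rng.random(n) < 0.6)).astype(int)

# Columns (1) and (4) of Table 4: OLS with year/month dummies and
# heteroskedasticity-robust (HC1) standard errors.
m_photo = smf.ols("log_photo ~ sg + eg + rating + price + C(year) + C(month)",
                  data=df).fit(cov_type="HC1")
m_wc = smf.ols("log_wc ~ sg + eg + rating + price + C(year) + C(month)",
               data=df).fit(cov_type="HC1")
print(m_photo.summary())
print(m_wc.summary())
```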
Table 5. Regression analysis of perceived helpfulness of reviews.

| Variables | (1) log_helpful | (2) log_helpful | (3) log_helpful | (4) log_helpful | (5) log_helpful | (6) log_helpful |
|---|---|---|---|---|---|---|
| sg | −0.133 *** (0.002) | −0.136 *** (0.001) | −0.137 *** (0.001) | −0.133 *** (0.002) | −0.136 *** (0.001) | −0.137 *** (0.001) |
| eg | 0.068 ** (0.031) | 0.059 * (0.062) | 0.060 * (0.056) | 0.068 ** (0.031) | 0.059 * (0.062) | 0.060 * (0.056) |
| rating | −0.036 *** (0.001) | −0.264 *** (0.001) | −0.126 *** (0.001) | −0.036 *** (0.001) | −0.264 *** (0.001) | −0.126 *** (0.001) |
| rating2 | Not included | 0.037 *** (0.001) | Not included | Not included | 0.037 *** (0.001) | Not included |
| expos | Not included | Not included | 0.195 *** (0.001) | Not included | Not included | 0.195 *** (0.001) |
| exneg | Not included | Not included | −0.164 *** (0.001) | Not included | Not included | −0.164 *** (0.001) |
| log photo | 0.113 *** (0.001) | 0.108 *** (0.001) | 0.112 *** (0.001) | 0.113 *** (0.001) | 0.108 *** (0.001) | 0.112 *** (0.001) |
| log wc | 0.245 *** (0.001) | 0.253 *** (0.001) | 0.249 *** (0.001) | 0.245 *** (0.001) | 0.253 *** (0.001) | 0.249 *** (0.001) |
| sg photo | 0.002 (0.969) | 0.005 (0.914) | 0.002 (0.973) | 0.002 (0.969) | 0.005 (0.914) | 0.001 (0.973) |
| eg photo | 0.143 *** (0.001) | 0.137 *** (0.002) | 0.138 *** (0.001) | 0.143 *** (0.001) | 0.137 *** (0.002) | 0.138 *** (0.002) |
| sg wc | 0.096 *** (0.001) | 0.095 *** (0.001) | 0.097 *** (0.001) | 0.096 *** (0.001) | 0.095 *** (0.001) | 0.097 *** (0.001) |
| eg wc | 0.027 ** (0.016) | 0.0275 ** (0.013) | 0.028 ** (0.012) | 0.027 ** (0.016) | 0.028 ** (0.013) | 0.028 ** (0.011) |
| price | 0.001 *** (0.001) | 0.001 *** (0.001) | 0.001 *** (0.001) | 0.001 *** (0.001) | 0.001 *** (0.001) | 0.001 *** (0.001) |
| yearly dummy | Included | Included | Included | Included | Included | Included |
| monthly dummy | Included | Included | Included | Included | Included | Included |
| Constant | −0.668 *** (0.001) | −0.388 *** (0.001) | −0.321 *** (0.001) | −0.347 *** (0.001) | −0.133 *** (0.001) | −0.080 (0.142) |
| Observations | 23,250 | 23,250 | 23,250 | 23,250 | 23,250 | 23,250 |
| R-squared | 23.4% | 23.9% | 23.7% | - | - | - |
| AIC | - | - | - | 2.154507 | 2.147648 | 2.15117 |

Robust p-values in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1.
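The helpfulness models in Table 5 add review characteristics and their interactions with product type, with AIC reported to compare specifications (4) to (6). The sketch below estimates a comparable specification on synthetic data; the variable names, HC1 robust covariance choice, and data-generating process are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data so the sketch is self-contained; the real analysis
# uses the review-level variables defined in Table 1.
rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "sg": rng.integers(0, 2, n),
    "rating": rng.integers(1, 6, n),
    "price": rng.uniform(7, 2400, n),
    "year": rng.integers(2008, 2025, n),
    "month": rng.integers(1, 13, n),
    "log_photo": np.abs(rng.normal(0.1, 0.3, n)),
    "log_wc": np.abs(rng.normal(3.3, 1.1, n)),
    "log_helpful": np.abs(rng.normal(0.5, 0.8, n)),
})
df["eg"] = ((df["sg"] == 0) & (rng.random(n) < 0.6)).astype(int)

# Column (1) of Table 5: main effects plus product-type interactions with
# photo use and review length; the ':' syntax adds the interaction terms.
spec = ("log_helpful ~ sg + eg + rating + log_photo + log_wc"
        " + sg:log_photo + eg:log_photo + sg:log_wc + eg:log_wc"
        " + price + C(year) + C(month)")
model = smf.ols(spec, data=df).fit(cov_type="HC1")
print(model.summary())
print("AIC:", model.aic)  # Table 5 reports AIC to compare specifications (4)-(6)
```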