Article

Online Review Helpfulness and Information Overload: The Roles of Text, Image, and Video Elements

1 School of Finance and Economics, University of Sanya, Sanya 572022, China
2 College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China
3 School of Management, Wuhan Institute of Technology, Wuhan 430205, China
* Authors to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2024, 19(2), 1243-1266; https://doi.org/10.3390/jtaer19020064
Submission received: 19 March 2024 / Revised: 21 May 2024 / Accepted: 22 May 2024 / Published: 29 May 2024

Abstract
Online reviews have become an important source of information for consumers, significantly influencing their purchasing decisions. However, the abundance and variety of review formats, especially the mix of text, image, and video elements, can lead to information overload and hinder effective decision-making. This study investigates how different review formats and their combinations affect the perceived helpfulness of reviews. We develop a comprehensive framework to analyze the interactions between text, image, and video elements and their impact on review helpfulness. We collect and code 8693 online reviews from JingDong Mall across six product categories, covering both experience products and search products, and use multiple regression analysis to test our hypotheses. Our results show that textual review elements significantly increase review helpfulness; however, their effectiveness decreases as the amount of information increases, indicating cognitive overload. Text elements are more likely to contribute to information overload, whereas visual elements such as images and videos generally do not when text, image, and video elements coexist. Imagery components have a minimal effect on review helpfulness. Video elements tend to be relatively short, which may be insufficient to convey useful information. We also find that the impact of review formats varies between experience products and search products, and that star ratings moderate the alignment of textual or imagery components with consumer expectations. We conclude that the hybrid of text, image, and video elements in online reviews plays a crucial role in shaping consumer decision-making and information overload.
Our research contributes to the literature on online reviews and information overload while providing practical implications for online retailers, review platforms, and consumers to optimize review formats, star ratings, and product types to facilitate informed purchase decisions.

1. Introduction

Online reviews have become a vital source of information for consumers in the digital era, as they offer rich and diverse insights into product attributes and user experiences. As a result, they have a significant impact on product sales [1,2]. With the advancement of e-commerce technologies, online reviews have also incorporated various content elements, such as text, images, and videos, to enhance their expressiveness and persuasiveness. While these content elements may increase richness and reduce uncertainty for consumers, they may also create information overload and cognitive burden, especially when they come from multiple sources and carry conflicting or inconsistent messages [3,4]. This backdrop sets the stage for our investigation into the dynamics of online review content elements and their impact on consumer perceptions and behaviors. A central inquiry emerges: Does the inclusion of a wider variety of review content elements necessarily enhance their perceived helpfulness to consumers, or is there a quantitative relationship indicating an optimal balance? This question probes the complex interplay between the richness of review content elements and their utility, suggesting that beyond a certain threshold, additional information may not only cease to add value but could also detract from the overall utility of the reviews.
Previous research has highlighted the moderating role of product type, classified as either sensory or non-sensory, in the relationship between review content elements and perceived helpfulness [5]. Sensory products are those with attributes that can only be evaluated through direct experiences, such as taste, smell, or touch, while non-sensory products are those with attributes that can be evaluated through indirect information, such as functionality, durability, or reliability [6]. Empirical studies have demonstrated that the length of textual reviews has a stronger positive effect on review helpfulness for sensory products compared to non-sensory products, as consumers seek more detailed information to reduce the uncertainty associated with sensory attributes [7]. However, the extant literature reveals a gap in understanding how alternative review elements, particularly the hybrid of text, image, and video elements within a single review, interact with product types to influence review helpfulness. Given their capacity to convey richer and more vivid descriptions compared to text alone, images and videos are posited to be especially beneficial for reviews of experiential products, providing consumers with a more immersive insight into the product experience [7,8]. Moreover, prior research has primarily relied on controlled laboratory experiments to investigate the differential impact of review formats across product types, leaving a notable scarcity of empirical investigations conducted within the authentic context of online shopping environments where hybrid review elements coexist and interact [9].
This research gap underscores the need for a more nuanced understanding of how product types might jointly modulate the effects of diverse review elements on review helpfulness in real-world e-commerce settings. The interplay between various content elements and their combined influence on perceived helpfulness across different product categories remains largely unexplored. Consequently, we articulate our second research inquiry as follows: How do various content elements interact with each other, and how do different product types moderate the effects of these presentation formats on review helpfulness amidst the coexistence of multiple review content elements in the digital marketplace?
By addressing this question, we aim to contribute to the existing body of knowledge by providing a more comprehensive understanding of the complex dynamics between review content elements, product types, and perceived helpfulness in the context of authentic online shopping environments. This research endeavor holds significant implications for both theory and practice, as it seeks to unravel the intricate relationships between various presentation formats and their effectiveness in aiding consumer decision-making across different product types.
Star ratings, which serve as numerical representations of consumers’ overall evaluations, range from one to five stars and play a pivotal role in shaping the helpfulness of online reviews. These ratings, as encapsulated in prior research, interact synergistically with other review presentation formats—such as photographs and textual descriptions—to modulate perceived review helpfulness. Filieri et al. [10] illuminate the nuanced dynamics between extreme star ratings and the inclusion of reviewer-generated photographs, elucidating that reviews featuring extreme ratings coupled with photographs are perceived as more helpful. This effect is particularly pronounced in contexts requiring visual validation, such as hotel reviews. Further extending this inquiry, Filieri et al. [11] discern that reviews characterized by extremely negative ratings garner higher helpfulness evaluations when they are not only lengthy but also exhibit readability. Nonetheless, the burgeoning prevalence of video reviews on digital platforms introduces a novel dimension to the discourse on review helpfulness. Video reviews, by providing a rich tapestry of visual and auditory information, stand to offer a more immersive evaluative experience than their textual or photographic counterparts. However, this richness potentially exacts a cognitive toll on consumers, necessitating heightened attentional resources for processing and integration [8]. The evolving landscape of online reviews, increasingly characterized by a confluence of multiple content elements—including text, images, and videos—calls for a refined understanding of how star ratings influence the efficacy of these diverse formats in aiding consumer evaluations. In light of the foregoing considerations, our inquiry pivots to the third critical research question: How do star ratings moderate the effects of hybrid content elements, encompassing text, images, and videos, on the perceived helpfulness of online reviews?
This question seeks to unravel the complex interplay between quantitative evaluations encapsulated by star ratings and the qualitative richness afforded by different content elements.
Therefore, it is essential to understand how hybrid content elements affect the perceived helpfulness and information overload of online reviews, and how this effect is moderated by other factors, such as product types, review ratings, and other content characteristics in the authentic context of online shopping environments. In this study, we construct and empirically validate a theoretical framework grounded in the principles of information overload theory and dual-coding theory. Employing a comprehensive and varied dataset from JingDong Mall, a leading online retail platform in China, accessed on 14 and 15 January 2024, we investigate how text, images, and videos in online reviews jointly influence review helpfulness, while accounting for variables such as product type, review ratings, and response to review usefulness. We also conduct several robustness checks using different model specifications and measurement methods. We find that while longer textual reviews initially enhance perceived helpfulness, their effectiveness diminishes beyond a certain length due to cognitive overload. Text elements are more likely to contribute to information overload, while visual elements such as images and videos generally do not when text, image, and video elements coexist. Short videos fail to provide sufficient information, whereas longer videos become more helpful once they exceed a specific duration. The number of images in a review does not significantly affect its helpfulness. Additionally, our findings indicate that product types and star ratings moderate the effects of these review elements. Experience products benefit more from detailed reviews across all formats, while search products require fewer elements. High star ratings particularly enhance the helpfulness of textual and image reviews.
These insights suggest that platforms should carefully balance content richness and conciseness, tailoring review formats to specific product types to optimize consumer decision-making and satisfaction.
Our investigation focuses on determining whether increasing the diversity of review content elements (text, images, and videos) enhances perceived helpfulness or if there is an optimal combination where additional information detracts from utility. This problem is significant given the rapid growth of content-rich online reviews and the potential for information overload impacting decision quality. The remainder of this paper is structured as follows: Section 2 provides a literature review. Section 3 lays out the theoretical foundation and develops the hypotheses. Section 4 explains the research methodology, including data collection, variables, and analysis methods. Section 5 presents the empirical results of the statistical analysis and discusses additional robustness checks performed. Section 6 offers an in-depth discussion of key findings related to image quantity and presentation formats. Finally, Section 7 summarizes theoretical and practical implications, limitations, and future research directions.

2. Literature Review

2.1. Elements of Online Review Content and Online Review Helpfulness

Online reviews serve as a crucial information source for consumers seeking to make well-informed purchasing decisions. They offer diverse information, including details on product attributes, quality, performance, usage, satisfaction, and social–emotional cues like opinions, ratings, emotions, and interactions [12]. Additionally, online reviews can shape consumer attitudes, beliefs, preferences, and behaviors, including trust, confidence, satisfaction, loyalty, and purchasing intentions [12,13]. However, they also present a challenge to consumers, who must sift through and process vast amounts of information from numerous sources. Online reviews can be multimodal, including text, images, and videos, and displayed on platforms with various design elements, such as color, font, and layout [14,15,16]. The combination of different modalities constitutes the unique presentation form of each review, markedly influencing the efficiency and efficacy with which consumers gather and interpret information. Varied modalities within reviews convey distinct types of information, triggering diverse cognitive and emotional responses among consumers [17].
In the current e-commerce context, many platforms enable consumers to submit reviews in different formats. In this study, we consider a text-based review as a review that contains textual elements, an image-based review as a review that contains image elements, and a video-based review as a review that contains video elements. A review may contain one or more of these elements in combination. Presentation formats, characterized by the inclusion of different elements, may differently affect consumer information processing and decision-making, conveying varied information types and eliciting diverse cognitive and emotional responses [18,19].
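The element-based definitions above can be illustrated with a short coding sketch; the record structure and field names here are hypothetical illustrations, not the platform's actual data format:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    """A hypothetical scraped review record (field names are illustrative)."""
    text: str = ""
    image_urls: list = field(default_factory=list)
    video_seconds: float = 0.0

def code_review(r: Review) -> dict:
    """Code a review into element indicators and load measures.

    A review counts as text-, image-, or video-based whenever it contains
    that element; the three indicators are not mutually exclusive.
    """
    return {
        "has_text": len(r.text.strip()) > 0,
        "has_image": len(r.image_urls) > 0,
        "has_video": r.video_seconds > 0,
        "word_count": len(r.text.split()),
        "image_count": len(r.image_urls),
        "video_duration": r.video_seconds,
    }

# A review combining text and image elements but no video
r = Review("Great battery life, vivid screen.", ["img1.jpg"], 0.0)
print(code_review(r))
```

Because a single review may carry any combination of the three indicators, coding them separately lets a model estimate both individual and interactive effects of the elements.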
Review helpfulness serves as a key indicator of online review quality and value, mirroring consumer evaluations and judgments [8,20]. Prior research has examined the effects of presentation format variables, such as using one or two presentation formats, on the effectiveness of review helpfulness and their relative impact [8,21,22,23]. However, these studies mainly focused on the direct effects of presentation format variables, overlooking the combination effect of different presentation formats. Given that these presentation formats originate from a single source—the platform—it is important to consider their interdependence in the analysis [24]. For example, the effectiveness of image and text elements within videos may depend on the contextual interaction between modalities, highlighting the role of their common source in the analytical discourse. Moreover, when information is presented in multiple modalities, consumers may use different cognitive processing channels simultaneously, requiring a synthesis process to form a coherent understanding [25]. This implies the existence of trade-offs that deserve attention. Therefore, we aim to explore the individual and interactive effects of content elements on review helpfulness and to elucidate their roles and relationships in the context of multimodal review presentations.

2.2. Information Overload in Online Consumer Decisions

Information overload, pivotal in online decision-making, is defined by an abundance of information surpassing consumer processing capacity, resulting in adverse decision outcomes [26,27]. This phenomenon affects both cognitive and emotional aspects of consumers, influencing their decision quality and confidence and impacting their information search and evaluation strategies. Numerous studies have explored the dynamics between review quantity, perceived quality, and consumer satisfaction, revealing that information overload and decision difficulty serve as central mediators [28,29]. Their findings suggest that improved perceived quality of reviews may alleviate information overload, particularly with greater review engagement, stressing the importance of balancing information quantity and utility.
Previous studies have shown that consumers rely on a limited set of reviews for decision-making, and that review sentiment plays a critical role. Jabr and Rahman [30] further explore the role of top reviews, those reviews highlighted or featured by online platforms due to factors such as high helpfulness ratings from the community, extreme valence, recency, and verification of purchase, in alleviating information overload in contexts with a large number of reviews. They find that the effectiveness of top reviews in reducing overload depends on their number and consistency with the overall review sentiment. Furner and Zinko [31] examine the effects of information overload on trust and purchase intentions on various digital platforms and discover that while a moderate amount of information can enhance trust and purchase intentions, excessive information can lead to decreased trust and purchase intentions due to information overload. This effect is more pronounced on mobile platforms, suggesting that the smaller screen size and navigation difficulties contribute to a quicker onset of information overload compared to web environments. Townsend and Kahn [19] enhance our understanding of the influence of information presentation formats, especially the visual preference heuristic. They show that although visual presentation increases perceived variety, it also increases choice complexity and overload.
In the context of online consumer decision-making, information overload is a multifaceted phenomenon that extends beyond the mere quantity of information available. Eppler and Mengis [26] emphasize that the quality, relevance, and presentation of information also play crucial roles in determining the occurrence and severity of information overload. They argue that irrelevant, ambiguous, or inaccurate information can exacerbate the cognitive burden on consumers, hindering their ability to effectively process and utilize the available information. Additionally, different elements of information content may affect consumers’ information processing capabilities differently, thereby influencing the perception of information overload.
Textual information usually requires word-by-word reading and comprehension, and it may contain detailed descriptions and complex information. Images and video information provide a more intuitive and vivid expression of information, which can help consumers quickly understand product features or user experience. However, image and video information may also introduce additional information, such as background, color, motion, and other details, that require extra cognitive processing from consumers.
When textual, image, and video information coexist, they may complement each other and provide more comprehensive information, but they may also cause a dramatic increase in information volume, exceeding the processing capacity of consumers. Furthermore, different consumers may have different preferences and abilities for processing information, and some consumers may be more adept at processing textual information, while others may favor visual information. Therefore, the effects of information presentation formats and combinations of different review content elements on information overload are complex and variable.
In our examination of the multimodal elements of online reviews and their impact on information overload and cognitive burden, we focus particularly on the interplay of text, image, and video elements across different product types and rating levels. To comprehensively understand the key findings and research gaps in the existing literature, we conducted a review of significant studies that influence the perceived helpfulness of online reviews.
Table 1 summarizes the key literature on online review helpfulness and information overload. This table highlights the major findings from previous studies and identifies the research gaps that warrant further investigation. This literature review aids in delineating the current limitations in the field and provides a theoretical foundation and direction for our study.

3. Theoretical Background and Hypotheses

3.1. Cognitive Load Theory and Dual-Coding Theory

Cognitive load theory (CLT) is a framework that explains how the human cognitive system processes and stores information in working memory and long-term memory [32]. According to CLT, working memory has a limited capacity and can only handle a few elements of information at a time, while long-term memory has a virtually unlimited capacity and can store complex schemas of information [33,34]. Therefore, learning occurs when new information is processed in working memory and integrated with existing schemas in long-term memory. However, learning can be hindered by cognitive overload, which occurs when the amount and complexity of information exceed the working memory capacity. CLT suggests that cognitive load can be classified into three types: intrinsic, extraneous, and germane. Intrinsic cognitive load refers to the inherent difficulty of the learning material, which depends on the learner’s prior knowledge and the number of interacting elements. Extraneous cognitive load refers to the additional load imposed by the instructional design, such as the presentation format, the organization, and the clarity of the information. Germane cognitive load refers to the load devoted to constructing and automating schemas, which facilitates learning and transfer [35]. CLT proposes that effective instruction should aim to reduce extraneous cognitive load, optimize intrinsic cognitive load, and foster germane cognitive load.
Dual-coding theory (DCT) is a framework that explains how the human cognitive system represents and processes verbal and nonverbal information [36]. According to DCT, there are two independent but interconnected systems for encoding and retrieving information: the verbal system and the nonverbal system. The verbal system deals with linguistic information, such as words and sentences, while the nonverbal system deals with imagery information, such as pictures and sounds. DCT suggests that information can be encoded and retrieved through either system or both systems simultaneously, resulting in different modes of representation and processing. DCT proposes that dual coding, which involves the use of both verbal and nonverbal information, enhances learning and memory, as it creates multiple memory traces and increases the chances of recall [37]. Consequently, textual, visual, and video elements within reviews may concurrently influence the helpfulness of the review. Based on these two theories, we propose the following hypotheses regarding the effects of varied content elements, information load, star ratings, and product type on review helpfulness:
Hypothesis 1 (H1).
Distinct review content elements (text, image, and video) can increase review helpfulness by reducing extraneous cognitive load and enhancing dual coding.

3.2. Information Overload and Underload in Hybrid Content Elements

Information load theory is a cognitive theory that explains how individuals process information in decision-making situations. According to this theory, individuals have a limited capacity to process information and tend to seek an optimal level of information load that matches their information needs and processing abilities [26]. When the information load is too high, individuals may experience information overload, which can impair decision quality and satisfaction due to the excess complexity and volume of information [38]. Conversely, when the information load is too low, individuals may suffer from information underload, lacking sufficient or relevant information to make informed decisions, which increases uncertainty and dissatisfaction. Both information overload and underload can have negative consequences for consumers, such as lower satisfaction, confidence, trust, and purchase intention.
Information load theory has been applied to various domains, such as marketing, accounting, management, and e-commerce. In the context of online reviews, information load theory can help to understand how different review content elements affect the amount and complexity of information that consumers receive, and how this influences their perception of review helpfulness. The review content elements can influence the degree of information overload and underload experienced by consumers, as they determine the quantity and quality of information provided by the reviews. Different review elements have different characteristics, such as richness, vividness, interactivity, and modality, that can affect the information load and cognitive processing of consumers. Text-based reviews convey verbal information, which requires more cognitive effort to interpret and analyze, but can provide more detailed and nuanced information about product attributes and experience [39]. Image-based reviews convey visual information, which is processed faster and more easily by the brain, but can provide less comprehensive and reliable information about product quality and performance [21]. Video-based reviews combine verbal, visual, and auditory information, which can create a more engaging and immersive experience, but can also impose more cognitive load and require more attention [40]. User-generated reviews serve as a crucial conduit of product-related information, reflecting buyers’ experiences and quality appraisals. A pervasive assumption is that the more detailed a review, the richer and more useful the information conveyed [12]. Nonetheless, emerging evidence indicates that this correlation might not be purely linear, especially when exploring the dimensions of text, image, and video reviews [24,41].
For text, factors such as complexity, readability, word frequency, and non-literalness can influence the information load [42,43]. For images, perceptual complexity, visual complexity, and visual features might dictate the information load [44,45,46]. For videos, additional elements, such as the number of frames, motion information, visuospatial sketchpad, and cognitive engagement, can also impact the information load [47,48]. When text, images, and videos interplay, the fusion of their features can also affect the overall information load [49,50].
In the realm of textual reviews, length often signifies perceived expertise, with longer reviews signaling a wealth of information. However, considering the constraints of human cognitive capacity, an overabundance of information can induce cognitive overload, thereby diminishing the review’s value. Consequently, there seems to be an optimal review length that maximizes perceived helpfulness, beyond which the benefits wane. To measure the information load of text-based reviews, we use the word count as a proxy, following previous studies [12,41]. We expect that the word count of text-based reviews will have a positive effect on review helpfulness up to a certain point, after which the effect will become negative due to information overload. Therefore, we propose the following hypothesis:
Hypothesis 2a (H2a).
The relationship between the word count of text-based reviews and review helpfulness is inverted U-shaped, such that review helpfulness increases with word count to an optimal point and then decreases with information overload.
For image-based reviews, the number of images can indicate the amount of visual information provided by the reviews [51,52]. However, more pictures do not necessarily mean more information, as some pictures may be redundant, irrelevant, or low-quality [51]. Thus, the effect of image count on review helpfulness may not be linear, but rather inverted U-shaped. When the image count is too low, consumers may face information underload, as they do not have enough visual evidence to verify the product quality and performance. In such cases, increasing the number of pictures can enhance review helpfulness by providing additional relevant information. However, as the image count continues to rise, consumers may face information overload, as they have to process more visual information and filter out the noise. Therefore, there may be an optimal image count that balances the information load and maximizes the review helpfulness. To measure the information load of image-based reviews, we use the image count as a proxy. Therefore, we hypothesize that the image count of image-based reviews will have a positive effect on review helpfulness up to a certain point, after which the effect will become negative due to information overload. Hence, we propose the following hypothesis:
Hypothesis 2b (H2b).
The relationship between the image count of image-based reviews and review helpfulness is inverted U-shaped, such that review helpfulness increases with image count to an optimal point and then decreases with information overload.
For video-based reviews, the duration of the videos can reflect the amount of verbal, visual, and auditory information provided by the reviews [52]. Similar to text-based reviews, longer videos may convey more information and expertise, but they may also impose more cognitive load and require more time and attention from consumers. Therefore, the effect of video duration on review helpfulness may also be inverted U-shaped. When the video duration is too short, consumers may not receive enough information to make informed decisions. When the video duration is too long, consumers may lose interest or become overwhelmed by the information [38,39]. Therefore, there may be an optimal video duration that balances the information load and maximizes the review helpfulness. To measure the information load of video-based reviews, we use the video duration as a proxy, since longer videos contain more information for consumers to process. We expect that the video duration of video-based reviews will have a positive effect on review helpfulness up to a certain point, after which the effect will become negative due to information overload. Therefore, we propose the following hypothesis:
Hypothesis 2c (H2c).
The relationship between the video duration of video-based reviews and review helpfulness is inverted U-shaped, such that review helpfulness increases with video duration up to an optimal point and then decreases with information overload.
To test H2a–H2c, we include a quadratic term for each presentation format measure (word count, image count, and video duration) in our regression models, as these terms capture the nonlinear effects of information load on review helpfulness.
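As a minimal sketch of this specification, the following simulation (with illustrative coefficient values, not our actual estimates) shows how an inverted U-shaped relation is detected: fit linear and quadratic terms, check that the linear coefficient is positive and the quadratic coefficient is negative, and recover the turning point as −b₁/(2b₂):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: helpfulness peaks at 0.04/(2*0.0001) = 200 words,
# consistent with an inverted U plus noise (coefficients are illustrative).
word_count = rng.uniform(5, 500, size=2000)
helpfulness = 0.04 * word_count - 0.0001 * word_count**2 + rng.normal(0, 1, 2000)

# Design matrix with intercept, linear, and quadratic terms
X = np.column_stack([np.ones_like(word_count), word_count, word_count**2])
beta, *_ = np.linalg.lstsq(X, helpfulness, rcond=None)
b0, b1, b2 = beta

# Inverted U requires b1 > 0 and b2 < 0; the peak lies at -b1/(2*b2)
turning_point = -b1 / (2 * b2)
print(f"linear={b1:.4f}, quadratic={b2:.6f}, peak near {turning_point:.0f} words")
```

The same check applies to image count and video duration; in practice, a formal test of the inverted U also verifies that the estimated turning point falls within the observed data range.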

3.3. Product Types

Product type plays a pivotal role in shaping consumers’ perceptions and assessments of online reviews. Different product types, with their unique attributes, risks, and uncertainties, cater to the varied information needs and preferences of consumers [53,54]. Historically, products have been categorized into multiple types, such as search vs. experience products [55,56], durable vs. nondurable [45], and hedonic vs. utilitarian [53]. The search and experience product dichotomy, introduced by Nelson, remains particularly influential [57]. It differentiates products based on the ease and cost of assessing their quality pre- and post-purchase.
Search products, like books and electronics, have qualities that are ascertainable prior to purchase [58,59,60]. In contrast, experience products (e.g., movies, hotels) reveal their true quality only after purchase or consumption. Search products typically carry less risk and uncertainty, possessing objective, verifiable attributes. Conversely, experience products, with their subjective qualities, present higher uncertainty and risk. Previous research has yielded mixed results regarding the impact of product type on the helpfulness of online reviews. Some suggest the greater helpfulness of reviews for experience products due to their ability to mitigate uncertainty [12], while others argue for the supremacy of search products in this context, citing their reliance on objective information [42,43]. In light of the unique needs of experience products, we propose the following revised hypotheses:
Hypothesis 3a (H3a).
The impact of text-based reviews on perceived helpfulness is moderated by product types, with a greater benefit for experience products. This stems from text reviews’ ability to provide detailed narratives about unobservable, experiential attributes.
Hypothesis 3b (H3b).
Image-based reviews exert a more significant impact on the perceived helpfulness of experience products than search products.
Hypothesis 3c (H3c).
Video-based reviews are more effective in enhancing the helpfulness perception of experience products compared to search products.

3.4. Star Ratings

Star ratings, an integral component of online reviews, serve as a rapid and intuitive indicator of product quality and consumer satisfaction. Typically ranging from one to five stars, they encapsulate a succinct evaluation of a consumer’s experience with a product. Within the digital commerce landscape, star ratings exert a significant influence on consumer perceptions and decision-making. They are often the first element a reader notices in an online review, and they shape consumer expectations of the product even before engagement with the detailed review narrative. High star ratings can attract potential purchasers by fostering trust and a perception of superior quality, while low ratings may deter purchases by signaling potential deficiencies or dissatisfaction. Star ratings also interact with the accompanying review content. For instance, a high star rating coupled with an elaborate text review can strengthen the review’s credibility; prior work finds that overall retailer product ratings rise with review length [59]. Conversely, the same high rating paired with a terse or ambiguous review may provoke skepticism regarding its authenticity. Similarly, the combination of star ratings with image or video reviews can either amplify or diminish the perceived utility of the review, depending on the congruence between the visual content and the assigned rating. Star ratings frequently act as a heuristic for consumers, especially in scenarios characterized by an abundance of choices or information overload.
In contexts of information saturation, consumers may increasingly rely on star ratings for expedited decision-making, potentially eclipsing the nuanced review content. To elucidate the role of star ratings in relation to the perceived helpfulness of online reviews and their presentation formats, we articulate the following hypotheses:
Hypothesis 4a (H4a).
The perceived helpfulness of text-based reviews is moderated by their associated star ratings, with higher ratings hypothesized to enhance this helpfulness.
Hypothesis 4b (H4b).
The influence of image-based reviews on perceived helpfulness is shaped by star ratings, with congruence between high ratings and visual content being posited to amplify this helpfulness.
Hypothesis 4c (H4c).
In video-based reviews, the interaction between star ratings and content is anticipated to be significant, with high ratings aligned with video content hypothesized to elevate perceived helpfulness.
These hypotheses are designed to clarify the complex relationship between star ratings and various review formats, highlighting their contribution to the overall perception of online reviews. Based on the above theoretical background and hypotheses, we develop a research model that depicts the relationships among presentation format, information load, product type, star ratings, and review helpfulness. The research model is shown in Figure 1.

4. Research Methodology

4.1. Data Collection

We used a Python web scraping tool to collect data from JingDong Mall, one of China’s largest online retail platforms, retrieving 8693 online reviews of six products on 14 and 15 January 2024. These products represented two types of products, experience products and search products, following Nelson’s categorization [38,39]. Experience products, such as the SK-II Star Luxury Skincare Experience Set, Wuliangye 8th Generation 52 Degrees Strong Aroma Chinese Spirit, and He Feng Yu Men’s Perfume Gift Box, are those whose quality can only be assessed after consumption. Search products, like the Huawei P40 (5G) 8G+128G, Canon PowerShot G7, and Lenovo Xiaoxin 16 2023 Ultra-thin, i5-13500H 16G 512G Standard Edition IPS Full Screen, have attributes that can be evaluated before purchase.
For each online review, we collected the following data: (1) star ratings by the reviewer (1 to 5); (2) total number of likes or thumbs up for the review; (3) word count of the review, or zero if the review is textless; (4) number of pictures in the review, or zero if the review is imageless; (5) video duration of the review, or zero if the review is videoless; (6) product category, corresponding to one of the six products; (7) number of responses to the review; (8) product type, coded as one for experience products and zero for search products. These variables indicate the online review elements that may influence consumer perception.
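The eight fields above can be captured in a small record type. The sketch below is a hypothetical structure for one scraped review; the field names are our own illustrative choices, not JingDong Mall's actual API fields.

```python
from dataclasses import dataclass

@dataclass
class Review:
    # Field names are illustrative; they mirror items (1)-(8) above.
    star_rating: int      # (1) reviewer's rating, 1-5
    helpful_votes: int    # (2) thumbs-up count
    word_count: int       # (3) 0 if the review is textless
    image_count: int      # (4) 0 if the review is imageless
    video_seconds: float  # (5) 0 if the review is videoless
    category: str         # (6) one of the six products
    responses: int        # (7) number of replies to the review
    is_experience: int    # (8) 1 = experience product, 0 = search product

sample = Review(5, 12, 87, 3, 0.0, "beauty", 2, 1)
```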
We classified these reviews as positive (4–5 stars), neutral (2–3 stars), or negative (1 star), following JingDong Mall’s evaluation management rules. These ratings represent consumer insights based on their shopping experiences and product quality perceptions. We also gathered reviews without helpfulness votes, which indicate no consumer feedback on their quality. We collected reviews in the platform’s recommended order, as reviews ranked this way tend to have more words, helpfulness votes, responses, and a greater variety of presentation formats, which aligns with our research objectives. Table 2 summarizes the data collection process and the descriptive statistics of the data.
As shown in Table 2, all 8693 online reviews contain textual elements (100% of the reviews). Among them, 5545 reviews contain imagery elements (63.794%), and 1034 reviews contain video elements (11.895%). To further explore the composition of review elements, we tally the combinations of the three element types (text, image, and video) across reviews. We find that roughly half of the reviews consist of text and images, while the rest are composed of text only or of hybrid review content elements (text, images, and videos), with proportions of 36.121% and 11.895%, respectively (Figure 2). This indicates that the mixed use of multiple review content elements is prevalent on JingDong Mall.

4.2. Variables

Our dependent variable is ‘Review Helpfulness Votes’ (Helpfulness), defined as the total count of people who clicked the thumbs-up button, thereby endorsing the review’s helpfulness. This variable reflects the extent to which other consumers find a review useful for their purchase decisions; a higher count indicates a higher level of helpfulness.
The independent variables are related to the review presentation format, which consists of three content elements: text, image, and video. For text, we use the word count (RLength) of the review as a proxy for the information provided by the reviewer. We expect that longer reviews will be more helpful than shorter ones, as they can convey more details and insights about the product. For images, we use the number of images (NImage) included in the review as a proxy for the visual appeal and credibility of the review. We expect that reviews with more pictures will be more helpful than those with fewer or no pictures, as they can enhance the consumer’s understanding and trust of the product. For videos, we use the duration of videos (DVideo) attached to the review as a proxy for the interactivity and engagement of the review. We expect that reviews with longer videos will be more helpful than those with shorter or no videos, as they can provide more vivid and dynamic demonstrations of the product.
We also include some control variables that may affect the review helpfulness, such as the star ratings (SRating) of the review, the product types (PType) that contain experience products and search products, the product category, and the number of responses (RResponse). The star ratings of a review serve as an indicator of the reviewer’s level of satisfaction or dissatisfaction with the product. This sentiment can significantly shape the tone and overall perception conveyed in the review. In our dataset, star ratings are represented on a scale of 1 to 5, with 1-star ratings signifying an extremely negative attitude towards the product and 5-star ratings reflecting a highly positive experience. The product types capture the difference in the information asymmetry and uncertainty between the two types of products, which may affect the consumer’s reliance on online reviews. We expect that reviews of experience products will be more helpful than those of search products, as they can provide more valuable information that is difficult to obtain before purchase. The product category controls for the variation in product characteristics and consumer preferences across different categories, such as beauty, liquor, perfume, mobile phone, camera, and laptop. The number of responses is measured by how many messages other consumers or readers left under a review, which may affect the review’s popularity and visibility. We expect that reviews with more responses will be more helpful than those with fewer or no responses, as they can indicate more interest and engagement from other consumers. Table 3 shows the summary statistics of the main variables below.

4.3. Analysis Method

To test our hypotheses and estimate the effects of hybrid review content elements on review helpfulness, we employed a multilevel mixed-effects generalized linear model (meglm) using Stata software. This method can handle binomially distributed count data with random effects at multiple levels. We fit the model to the data collected from JingDong Mall. To improve the interpretability of our findings, we z-standardize all variables so that regression coefficients can be compared on a common standard-deviation scale.
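Z-standardization subtracts each variable's mean and divides by its standard deviation, so a coefficient reads as the effect of a one-standard-deviation change in the predictor. A minimal sketch (our own helper, using the sample standard deviation):

```python
import statistics

def z_standardize(xs):
    """Center to mean 0 and scale to sample SD 1, so that coefficients for
    predictors in different units (words, images, seconds) are comparable
    in standard-deviation terms."""
    mu = statistics.fmean(xs)
    sd = statistics.stdev(xs)  # sample standard deviation
    return [(x - mu) / sd for x in xs]

z = z_standardize([10, 20, 30, 40, 50])
```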
We specified five models of meglm regression, each with a different set of independent variables. The baseline model (Model 1) included only the control variables, such as the star ratings, the product type, and the number of responses. The second model (Model 2) added the main effects of the three types of review elements. The third model (Model 3) added the quadratic terms of the three types of reviews, to capture the possible nonlinear effects of information load. The fourth model (Model 4) added the interaction terms of product types and the three types of reviews, to examine the moderating role of product types. The fifth model (Model 5) added the interaction terms of star ratings and the three types of reviews, to investigate the moderating role of star ratings. By comparing the five models, we could assess the incremental contribution of each dimension of the review presentation format to the review helpfulness.
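The nesting of the five specifications can be made explicit by composing blocks of terms incrementally. The sketch below uses patsy-style formula strings with the variable names from Section 4.2; the composition is our own illustration, not the authors' Stata code.

```python
# Build Models 1-5 by adding one block of terms at a time.
controls = "SRating + PType + RResponse + C(Category)"
main_fx  = "RLength + NImage + DVideo"
quad_fx  = "I(RLength**2) + I(NImage**2) + I(DVideo**2)"
ptype_ix = "PType:RLength + PType:NImage + PType:DVideo"
srate_ix = "SRating:RLength + SRating:NImage + SRating:DVideo"

blocks = [controls, main_fx, quad_fx, ptype_ix, srate_ix]
models = {i + 1: " + ".join(blocks[: i + 1]) for i in range(5)}
```

Comparing `models[1]` through `models[5]` then isolates the incremental contribution of each block, as described above.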
We followed the approach of Lutz et al. [41] and Yin et al. [60] to model the number of helpful votes, as a binomial variable with probability parameters and trials. The baseline model was as follows:
$$\mathrm{Logit}(\theta) = \beta_0 + \beta_1\,\mathrm{SRating} + \beta_2\,\mathrm{PType} + \beta_3\,\mathrm{RResponse} + \beta_4\,\mathrm{RLength} + \beta_5\,\mathrm{NImage} + \beta_6\,\mathrm{DVideo} + \beta_7\,\mathrm{RLength}^2 + \beta_8\,\mathrm{NImage}^2 + \beta_9\,\mathrm{DVideo}^2 + \beta_{10}\,\mathrm{PType}\times\mathrm{RLength} + \beta_{11}\,\mathrm{PType}\times\mathrm{NImage} + \beta_{12}\,\mathrm{PType}\times\mathrm{DVideo} + \alpha_C + \alpha_P + \varepsilon$$
$$\mathrm{RHVotes} \sim \mathrm{Binomial}(\mathrm{RVotes},\ \theta)$$
Here, $\beta_0$ is the intercept, $\alpha_C$ and $\alpha_P$ are random intercepts for product category and product type, respectively, and $\varepsilon$ is the error term. The baseline specification contains three interaction terms (PType × RLength, PType × NImage, PType × DVideo) to test hypotheses H3a, H3b, and H3c; additional interaction terms are incorporated to test the remaining hypotheses.

5. Results and Robustness Checks

5.1. Main Results

We present the results of our multilevel mixed-effects generalized linear regressions in Table 4, which shows the coefficients, the standard errors, the significance levels, the log-likelihood, and the AIC of each model.
The results of Model 1 show that only the number of responses has a significant positive effect (β = 1.14453, p < 0.01) on the review helpfulness votes, indicating that more popular reviews are more helpful. These regression results suggest that the model needs additional explanatory variables to account for the variation in review helpfulness votes.
The results of Model 2 show that the review content element variables also have significant effects on the review helpfulness votes. The word count, the number of images, and the video duration all have significant positive effects (β = 0.2520958, p < 0.01; β = 0.6736385, p < 0.01; β = 0.2474221, p < 0.01, respectively), indicating that longer and more interactive reviews are more helpful than shorter and less interactive ones. Meanwhile, the control variable star ratings turns negative and significant (β = −0.5604907, p < 0.01). As more variables are added to the models, the significance levels remain largely unchanged. However, according to Model 5, the number of images has no significant effect, contrary to our expectations. This means that more visual reviews are not necessarily more helpful than less visual ones, in line with Li and Ensafjoo [17] and Li and Xie [51]. This may be due to the low quality, irrelevance, or redundancy of some images, or to the high cognitive load and interaction effects caused by other types of information elements. These findings partially support Hypothesis H1, which posits that distinct review content elements can augment review helpfulness by mitigating extraneous cognitive load and leveraging dual coding. However, the limited capacity of human cognition, especially when confronted with multiple modalities of information, suggests a preference for textual and video elements over images in online review contexts. This preference indicates that while text and video reviews significantly contribute to information processing and decision-making, the role of images remains nuanced, potentially limited by factors such as quality and relevance. Therefore, our results support the view of Townsend and Kahn [19] that while visual presentation can be appealing and effective for small choice sets, verbal presentation may be more advantageous for making informed decisions in larger assortments.
We further investigate potential nonlinear effects of review content elements on review helpfulness by incorporating squared terms of the variables in Model 3. The results in Model 5 reveal that both the squared term of word count (RLength²) and that of video duration (DVideo²) have significant impacts on review helpfulness (β = −0.3036335, p < 0.01; β = 0.1194758, p < 0.01, respectively). These findings suggest diminishing returns for word count and increasing returns for video duration in terms of their influence on perceived review helpfulness. The inverted U-shaped relationship between word count and review helpfulness, as hypothesized in H2a, is supported by our analysis. This implies that beyond a certain threshold, increasing the length of text-based reviews may not enhance their helpfulness and could even reduce it. For video-based reviews, by contrast, our results indicate that short videos may not convey enough useful information; the helpfulness of video-based reviews becomes evident only once the video duration surpasses a certain threshold. It is worth noting that JingDong Mall currently imposes a 15 s limit on user-uploaded videos, which is short compared to other social media platforms such as Twitter and Instagram; longer videos provide more space to convey a complete message or story, which may be necessary for some products or services. Interestingly, the squared term of the number of images (NImage²) becomes insignificant in Model 5, suggesting that the hypothesized inverted U-shaped relationship between image count and review helpfulness (H2b) is not supported in the presence of multiple content elements. This finding further reinforces our earlier observation that image elements have a limited influence on review helpfulness when various types of review content elements are considered simultaneously.
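The peak of such an inverted U sits at the vertex of the fitted quadratic: for standardized helpfulness h(x) = β₁x + β₂x², the turning point is x* = −β₁/(2β₂). The helper below illustrates the calculation with placeholder coefficients, since a valid turning point requires the linear and quadratic estimates from the same model.

```python
def turning_point(b_linear: float, b_quad: float) -> float:
    """Vertex of h(x) = b_linear*x + b_quad*x**2; with b_quad < 0 this is
    the standardized word count at which helpfulness peaks."""
    if b_quad == 0:
        raise ValueError("no turning point for a purely linear effect")
    return -b_linear / (2.0 * b_quad)

# Placeholder coefficients for illustration, not a jointly estimated pair
# from the paper's tables:
peak = turning_point(0.25, -0.30)
```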
Model 4 reveals that product types significantly influence review helpfulness votes, even after accounting for other variables. The product type dummy variables exhibit varying signs and magnitudes, suggesting varying levels of review helpfulness across products. For instance, reviews of experience products, like the SK-II [New Year Coupon] Star Luxury Skincare Experience Set, Wuliangye 8th Generation 52 Degrees Strong Aroma Chinese Spirit, and He Feng Yu Men’s Perfume Gift Box, show positive effects, whereas reviews of search products, such as the Huawei P40 (5G) 8G+128G, Canon PowerShot G7, and Lenovo Xiaoxin 16 2023 Ultra-thin, i5-13500H 16G 512G Standard Edition with IPS Full Screen, show negative effects. With the inclusion of additional variables in Model 5, these effects remain significant (p < 0.01) and retain their signs. These findings imply that the impact of the review presentation format varies across product types. Additionally, these results corroborate hypotheses H3a, H3b, and H3c, positing that product types moderate the effect of text-, image-, and video-based reviews on review helpfulness.
The results of Model 5 indicate significant interaction effects between review length and star ratings on review helpfulness votes, even after accounting for other variables. The interaction terms exhibit varied signs and magnitudes: the interaction between review length and star ratings (SRating × RLength) shows a significant positive effect (β = 0.1185116, p < 0.01). This corroborates hypothesis H4a, which posits that the perceived helpfulness of text-based reviews is moderated by their associated star ratings; specifically, higher star ratings amplify the perceived helpfulness of detailed text reviews. Additionally, the interaction between the number of images and the review rating (SRating × NImage) reveals a positive effect (β = 0.1526409, p < 0.05), suggesting that higher star ratings boost the perceived helpfulness of image-based reviews. This outcome supports hypothesis H4b, asserting that congruence between high star ratings and visual content in the image review element enhances perceived helpfulness. However, the interaction between video duration and review rating (SRating × DVideo) is insignificant, suggesting that the impact of video-based reviews on helpfulness is not moderated by review ratings. This finding contradicts hypothesis H4c, which posits that high star ratings aligned with comprehensive, congruent video content would increase the perceived helpfulness of video reviews.
Our final model emphasizes several crucial insights. Textual reviews enhance perceived helpfulness initially, but their effectiveness diminishes with length, confirming an inverted U-shaped relationship. Similarly, video reviews are more helpful only after surpassing a certain duration, whereas additional images do not necessarily increase helpfulness. These results indicate that excessive information can cause cognitive overload, reducing the utility of reviews. Thus, our findings emphasize the need to balance content richness and cognitive load in online reviews. Moreover, product types significantly moderate the impact of review elements on perceived helpfulness. Experience products consistently benefit from detailed information across all review formats, while search products do not. Additionally, we identify significant interaction effects between star ratings and review elements, with higher star ratings amplifying the helpfulness of text- and image-based reviews. However, this effect is not observed in video reviews. These insights underscore the nuanced roles of review content and product types in shaping consumer perceptions.

5.2. Robustness Checks

To test the robustness of our results, we perform some additional analyses using different model specifications and measurement methods. We report the main findings of these analyses in Table 5 and discuss them below.
First, we incorporate the interaction term between responses to reviews and product types, inspired by Cui and Wang [22], who suggested that product types may moderate the effect of review responses. The regression results (Model 6) show that the interaction term (RResponse × PType) has a significant positive effect on review helpfulness (β = 0.4568665, p < 0.01), indicating that the impact of response count on review helpfulness differs by product type. Specifically, for experience products such as beauty and perfume items, more responses are associated with higher helpfulness, while for search products such as electronics, more responses are associated with lower helpfulness. This may be because the number of responses signals the expertise and credibility of reviewers, especially for experience products. The coefficients of the other variables remain relatively stable in Model 6, confirming the robustness of Model 5.
Second, we replace the PType variable with six dummy variables for the specific product categories, to analyze differences in review helpfulness among product categories in more detail (Model 7). We find that the coefficients of DVideo and DVideo² change in direction or significance, while the coefficients of the other variables remain relatively stable. The results are consistent with our final model, confirming its robustness.
Third, to further validate the robustness of our findings, we conducted an additional analysis using the number of likes for each review as a more granular measure of review helpfulness (Model 8). We employed an ordered logistic regression to model the ordinal structure of the number of likes, which can range from low to high values [61,62]. This approach estimates a series of threshold parameters that represent the boundaries between adjacent levels of the outcome variable, assuming a continuous latent variable whose mapping to the observed ordered categories is determined by these thresholds. The ordered logistic model accounts for the inherent ranking information in the like counts and allows us to interpret the effect of predictors on incremental changes in the helpfulness level, as reflected by obtaining additional likes. However, comparing the model fit indicators reveals that this approach has poorer explanatory power than the final model: the AIC of Model 8 is substantially higher than that of Model 5 (13,315.79 vs. 6494.367), and the log-likelihood of Model 8 is markedly lower than that of Model 5 (−6566.896 vs. −3229.1835). Moreover, the significance and directionality of all predictor variables except NImage and PType × RLength remain largely consistent with Model 5, lending support to the stability of our findings.
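The two fit statistics are linked by the identity AIC = 2k − 2·logL, where k is the number of estimated parameters. Plugging the reported log-likelihoods into this identity reproduces the reported AIC values when k = 18 (Model 5) and k = 91 (Model 8); these parameter counts are implied by the reported figures rather than stated explicitly.

```python
def aic(log_likelihood: float, k: int) -> float:
    """Akaike information criterion; lower values indicate a better
    fit-complexity trade-off."""
    return 2 * k - 2 * log_likelihood

# Reported log-likelihoods with implied parameter counts:
aic_model5 = aic(-3229.1835, 18)   # reproduces 6494.367
aic_model8 = aic(-6566.896, 91)    # reproduces 13,315.792
```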

6. Discussion

6.1. Impact of Image Quantity on Other Content Elements

In this part of our study, we extend our exploration to the domain of online review helpfulness, with a focus on the role of image quantity within the mix of text and video reviews. This examination is critical for delineating the nuanced impact of various review content elements on information overload in an online retail context.
Using Model 5 as a base, we integrated a novel perspective by considering text length and video duration as potential moderators of the impact of image quantity. Through multilevel logistic regression analysis, we sought to uncover the interplay among these multifaceted review elements. The results are summarized in Table 6. The regression results (Model 9) show that both RLength × NImage and DVideo × NImage are significantly negative. When we include RResponse × PType again in the model (Model 10), the two interaction terms remain significantly negative, while the coefficients of the other variables stay relatively stable. This implies that adding more images to longer text reviews or longer video reviews may reduce their helpfulness.
The findings of this analysis reveal a pivotal and somewhat unexpected trend: the presence of text and video review elements exerts a negative moderating effect on the utility of increasing the number of images in online reviews. Specifically, where reviews already contain extensive text or detailed video content, adding more images does not necessarily enhance the review’s usefulness; on the contrary, it can lead to information overload. When substantial information is already provided through text and video, additional images may not only be superfluous but may also distract consumers, impeding their ability to distill valuable information from the review. While previous research has suggested that images can mitigate the negative effects of improper text length in reviews (Zinko et al. [21]), our analysis reveals a more nuanced picture: when reviews already contain extensive textual or video content, additional images may fail to enhance usefulness and can even contribute to information overload. This apparent discrepancy can be attributed mainly to the fact that the impact of images on review helpfulness likely depends on the overall information load of the review. As Zinko et al. [21] note, images can serve as a useful supplement when textual information is insufficient. However, our study suggests that there may be a tipping point beyond which additional images become counterproductive, particularly when the review already contains a wealth of information in the form of text and video.
These insights bear significant implications for designing and managing online retail platforms’ review systems. They underscore the necessity for a delicate balance between the quantity and quality of varied information presentation formats. For instance, in reviews that are already rich in textual and video content, the encouragement to upload numerous images might be redundant. In contrast, for reviews with brief texts and no video content, the inclusion of a moderate number of high-quality images might be more beneficial in enhancing the richness and usefulness of the information presented.
Furthermore, this study highlights that the effectiveness of a review is not solely dependent on the quantity of information it contains but also significantly on the type of information and the consumers’ capacity to process this information. In an environment where multiple types of reviews are prevalent, consumers may prefer reviews that convey information most efficiently. This insight calls for platform designers to thoughtfully optimize the structure and content of reviews, aiming to reduce redundancy and prevent information overload.
In conclusion, our research emphasizes the need for ongoing investigations, particularly in understanding the diverse information preferences across various product categories and consumer demographics. Recognizing these differences is vital for developing more personalized and effective online review systems. Our findings suggest that in contexts where text, image, and video reviews coexist, increasing the number of images does not automatically translate to enhanced review helpfulness. In fact, excessive imagery could lead to information overload, thereby diminishing the overall quality and usefulness of reviews. Accordingly, we advocate for online retail platforms to be judicious in the use of content elements, particularly when reviews already contain extensive textual or video content. By carefully curating the mix of information formats and ensuring the relevance and quality of images, retailers can help consumers navigate the wealth of available information and make more informed purchase decisions.

6.2. Impact of Presentation Formats on Review Helpfulness

In this part of our study, we venture into the intricate domain of online review systems, examining the influence of presentation formats—text, image, and video—on the perceived helpfulness of reviews in the multimodal era. This exploration is pivotal as the choice of presentation format itself represents a decision-making dilemma for consumers engaged in online shopping.
Drawing on cognitive load and dual-coding theories, our study confirms that in a context where multiple review formats are available, text reviews significantly enhance review helpfulness, while adding more visual elements does not necessarily increase review helpfulness. This finding diverges from traditional research that often compares presentation formats in isolation, typically through controlled experiments or by analyzing data that consider formats in a pairwise manner.
Our empirical analysis reveals that while textual content contributes positively to review helpfulness, its benefits are subject to a threshold. Beyond this threshold, additional information contributes to cognitive overload, thereby diminishing the perceived helpfulness of the review. This suggests a nuanced approach to review content, favoring conciseness and relevance over sheer volume of information. When image-based and video-based reviews are also considered, the hybrid presentation formats exhibit complex interactions in their effects on review helpfulness.
Model 4 and Model 5 of our analysis further highlight the moderating roles of product type and review ratings on the helpfulness of reviews. The interaction terms are significant and consistent in direction: experience products benefit more from the various review content elements than search products. The impact of product type on review helpfulness is nonetheless complicated, calling for further research. The interaction between star ratings and review content also plays a significant role in shaping perceptions of helpfulness. High star ratings enhance the impact of detailed textual and imagery reviews, but do not complement video reviews to the same extent. This suggests that extreme ratings interact significantly with text and image elements, but not with video elements. This finding complements the work of Filieri et al. [11], which showed that several review and reviewer characteristics moderate the effect of extreme ratings.
Considering these insights, we propose that online retailers and platform operators consider strategies that align with a nuanced understanding of presentation formats. Encouraging concise and informative reviews, tailoring presentation format recommendations based on product type, educating consumers on the interplay between star ratings and review content, and implementing user interface designs that facilitate easy navigation and processing of review information are all strategies that can enhance the effectiveness of review systems.
Our research underscores the importance of considering the coexistence of multiple presentation formats and their collective impact on review helpfulness. By deepening our understanding of these relationships, we can contribute to the development of more effective and user-friendly e-commerce platforms that support informed consumer choices and enhance overall satisfaction.

7. Conclusions

Our study elucidates critical insights into optimizing online review formats to enhance perceived helpfulness while mitigating cognitive overload. We confirmed an inverted U-shaped relationship for textual reviews, where increased length initially boosts helpfulness but eventually leads to cognitive overload and diminished utility. Video reviews’ effectiveness is contingent on surpassing a certain duration, emphasizing the need for adequate length to convey comprehensive information. Contrary to expectations, additional images do not significantly enhance helpfulness, highlighting that quality supersedes quantity. Our findings also reveal that product types and star ratings significantly modulate the impact of review content. Experience products benefit more from detailed reviews across all formats, while high star ratings amplify the effectiveness of text and image reviews but not videos. These findings provide a nuanced understanding of balancing content richness and cognitive load in online reviews, facilitating improved consumer decision-making and satisfaction.

7.1. Theoretical Contributions

This study makes three key theoretical contributions. First, we advance the understanding of online review systems by examining the interplay between text, image, and video elements and their joint influence on review helpfulness in authentic online shopping environments. Unlike prior studies that often consider these elements in isolation, we treat presentation formats as combinations of text, image, and video elements, extending research on review presentation formats to multimodal review analysis. Our work reflects the authentic context in which multiple review content elements exist simultaneously. Second, by integrating cognitive load theory and dual-coding theory into our regression models, and including a quadratic term for each presentation format (word count, image count, and video duration), we demonstrate that additional review content may exhibit diminishing returns and eventually contribute to information overload. Our findings reveal that certain review content elements exhibit significant interaction effects, suggesting that the relationship between review content and helpfulness is more complex than previously understood. Third, we elucidate the nuanced moderating effects of product type and star ratings in shaping the effectiveness of varied content elements. Our results show that experience products benefit more than search products from consistent review content elements. Additionally, we show that the alignment of star ratings with review content plays a pivotal role, with different review elements exerting different moderating effects on review helpfulness. Overall, our theorization and empirical examination yield a refined understanding of review systems, accounting for the complex interdependencies between presentation formats, product types, and rating mechanisms. These insights pave the way for more effective review platform design.
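To make the quadratic specification concrete, the turning point implied by a linear and a squared coefficient is −b₁/(2b₂). The sketch below is ours, not the authors' code; it plugs in the Model 5 estimates for RLength and DVideo from Table 4 and assumes, as in the paper's models, that the predictors are on a standardized scale:

```python
def quadratic_turning_point(b_linear: float, b_quadratic: float) -> float:
    """Vertex of y = b_linear * x + b_quadratic * x**2, i.e. -b1 / (2 * b2)."""
    if b_quadratic == 0:
        raise ValueError("no turning point for a purely linear effect")
    return -b_linear / (2 * b_quadratic)

# Model 5 coefficients from Table 4 (standardized predictor scale assumed).
text_peak = quadratic_turning_point(0.4747597, -0.3036335)   # inverted U: maximum
video_dip = quadratic_turning_point(-0.1634889, 0.1194758)   # U shape: minimum

print(f"Text helpfulness peaks around {text_peak:.3f} (standardized length)")
print(f"Video helpfulness starts rising past {video_dip:.3f} (standardized duration)")
```

The negative squared term for RLength places a maximum inside the observed range (the inverted U), while the positive squared term for DVideo places a minimum, consistent with the finding that videos help only beyond a certain duration.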

7.2. Practical Contributions

The findings of this study provide several actionable insights for online retailers and platform operators aiming to optimize their review systems and enhance consumer decision-making processes.
Our research underscores the importance of carefully balancing the length and richness of textual content in reviews. While consumers value detailed reviews, there is a threshold beyond which additional information leads to cognitive overload, diminishing the perceived usefulness of the review. Platforms should guide users toward crafting detailed yet concise reviews. This can be achieved through character or word count recommendations, particularly for products such as electronics or mobile phones, where less voluminous content proves more effective.
The utility of video reviews becomes evident only when the content exceeds a certain duration. Short videos often fail to provide sufficient information, while longer videos enhance review helpfulness. Platforms should encourage users to upload videos that are comprehensive enough to convey vital details without being unnecessarily long. Guidelines or incentives for optimal video length can help in maximizing the informative value of video reviews, especially for products where a visual demonstration is crucial, such as beauty items or gadgets.
Although the number of images in a review does not significantly enhance perceived helpfulness, the quality and relevance of images are crucial. Platforms should focus on the quality of images rather than quantity, promoting the inclusion of high-resolution, relevant images that effectively showcase the product’s features. Tools that assist users in uploading clearer and more relevant images can enhance the visual appeal and credibility of reviews without contributing to information overload.
The ideal review format varies with product type and star ratings, with experience products like Chinese spirits and beauty items benefiting more from the extensive use of text, images, and videos. In contrast, search products require less elaborate reviews. Moreover, high star ratings bolster the helpfulness of text and image reviews more than lower ratings. Customizing review prompts and guidelines based on product category can help in aligning the review format with consumer expectations and needs.
In environments where text, image, and video reviews coexist, text reviews are more likely to cause information overload. Platforms can develop intelligent algorithms to identify and highlight the most informative and helpful reviews across different formats. These algorithms should consider factors such as review length, image quality, video duration, and user engagement metrics. By prominently displaying exemplary reviews, platforms can enhance the overall review consumption experience and help consumers quickly access valuable insights, improving their shopping experience and satisfaction.
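As a hedged illustration of such an algorithm (not a feature of any specific platform), a minimal composite scoring function might combine the factors listed above; all weights, the `ideal_words` target, and the `min_video` threshold are hypothetical parameters chosen for this sketch:

```python
from dataclasses import dataclass

@dataclass
class Review:
    word_count: int
    image_count: int
    video_seconds: float
    helpful_votes: int

def review_score(r: Review, ideal_words: int = 120, min_video: float = 10.0) -> float:
    """Toy composite score: reward moderate text length (penalizing overload),
    cap image credit (quality over quantity), require a minimum video duration,
    and add a small engagement term."""
    text = max(0.0, 1.0 - abs(r.word_count - ideal_words) / ideal_words)
    images = min(r.image_count, 3) / 3           # extra images earn no extra credit
    video = 1.0 if r.video_seconds >= min_video else 0.0
    votes = min(r.helpful_votes, 10) / 10
    return 0.5 * text + 0.2 * images + 0.2 * video + 0.1 * votes

reviews = [Review(120, 2, 15.0, 8), Review(480, 0, 0.0, 1), Review(60, 9, 3.0, 2)]
best = max(reviews, key=review_score)  # the moderate-length multimodal review wins
```

In practice the weights would be learned from helpfulness-vote data rather than fixed by hand; the point is only that text length enters non-monotonically while image credit is capped.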
Given the nuanced modulation between different review presentation formats and their associated information burdens, it is imperative for platforms to incentivize reviewers to employ the most suitable review formats. Text reviews should be encouraged for in-depth analyses, while images and videos can be promoted for demonstrating specific features or usage scenarios. Tailored incentives, such as points or badges for high-quality reviews in the preferred format, can motivate users to provide valuable and balanced content.
By understanding and leveraging these dynamics, online retailers and platform managers can significantly enhance the effectiveness and helpfulness of reviews. This approach facilitates better consumer decision-making and satisfaction, as users can more easily navigate through the wealth of available information and find the most relevant and trustworthy reviews. Ultimately, this fosters loyalty and trust in the platform, as consumers appreciate the efforts made to curate and present reviews in a manner that optimizes their shopping experience and supports their purchase decisions.

7.3. Limitations and Future Research

Despite the strengths of our study, it is not without limitations. Our research is based on data from a single online retail platform in China, which may limit the generalizability of our findings. The patterns of use and customs of internet users can vary significantly across different markets, influenced by factors such as local culture, demographics, and technological infrastructure. For instance, consumers in developed economies with higher levels of internet penetration and e-commerce adoption may exhibit distinct behaviors and preferences compared to those in emerging markets. Moreover, the design features and functionalities of online review systems can differ across platforms, shaping user interactions and perceptions. Therefore, future research could replicate this study in different cultural contexts and with data from multiple platforms to enhance the robustness and external validity of the findings. Conducting cross-cultural comparisons can provide valuable insights into how sociocultural factors moderate the effects of review content elements on perceived helpfulness. Additionally, examining data from diverse online retail platforms, such as those catering to specific product categories or geographies, can provide a more comprehensive understanding of the phenomenon.
Moreover, as online shopping environments and consumer behaviors continue to evolve, there is a need for ongoing research to explore new content elements or combinations and their effects on consumer decision-making [63]. Our study focuses on the quantity of information in reviews but does not account for the quality of the information provided. Fake reviews or spam reviews can have a significant negative impact on review quality. With the advent of the AI-generated content (AIGC) era, a large amount of machine-generated text, image, and video information may pose new challenges to the authenticity and credibility of online reviews [64]. Future research could investigate how to effectively identify and filter out low-quality or fraudulent content, and how the presence of such content may affect consumer perceptions and behaviors.
Furthermore, there is an opportunity for future research to refine the measurement of information quantity across different review elements. Our study employs proxy variables to capture the amount of information conveyed through text, image, and video components. However, given the unique characteristics and informational properties of each content element, there may be alternative measurement approaches that more precisely capture the depth and richness of the information provided. Future studies could explore novel quantification techniques that account for the specific features and attributes of different review elements and investigate optimal information quantity for different review elements, therefore enabling a more nuanced understanding of the informational value of review elements.
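As one hedged example of an alternative quantification technique, token-level Shannon entropy distinguishes repetitive padding from genuinely varied content in a way a raw word count cannot; this is our sketch, not a measure used in the paper:

```python
import math
from collections import Counter

def word_count(text: str) -> int:
    """Proxy of the kind used in our analyses: raw token count."""
    return len(text.split())

def token_entropy(text: str) -> float:
    """Shannon entropy (bits) of the token distribution: repetitive
    padding scores near zero even when the word count is high."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in Counter(tokens).values())

padded = "good good good good good good"
varied = "battery lasts two days and the camera is sharp"
# Similar word counts, very different informational richness:
print(word_count(padded), token_entropy(padded))   # high count, zero entropy
print(word_count(varied), token_entropy(varied))   # high count, high entropy
```

Analogous richness measures could be defined for images (e.g., visual complexity features) and videos (e.g., scene changes per second), each requiring its own validation.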

Author Contributions

Conceptualization, L.W.; data curation, L.W.; funding acquisition, L.C.; methodology, L.W., G.C., J.H. and L.C.; software, L.W. and L.C.; supervision, J.H. and L.C.; validation, G.C.; writing—original draft, L.W., G.C., J.H. and L.C.; writing—review and editing, L.W., G.C., J.H. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 72102171), Humanities and Social Sciences Youth Foundation, Ministry of Education of the People’s Republic of China (Grant No. 21YJC630006), and Hainan Province Philosophy and Social Science Planning Project (Grant No. HNSK(YB)20-44).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alzate, M.; Arce-Urriza, M.; Cebollada, J. Online Reviews and Product Sales: The Role of Review Visibility. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 638–669. [Google Scholar] [CrossRef]
  2. Zhai, M. The Importance of Online Customer Reviews Characteristics on Remanufactured Product Sales: Evidence from the Mobile Phone Market on Amazon.Com. J. Retail. Consum. Serv. 2024, 77, 103677. [Google Scholar] [CrossRef]
  3. Xia, L.; Bechwati, N.N. Word of Mouse: The Role of Cognitive Personalization in Online Consumer Reviews. J. Interact. Advert. 2008, 9, 3–13. [Google Scholar] [CrossRef]
  4. Wang, Y.; Ngai, E.W.T.; Li, K. The Effect of Review Content Richness on Product Review Helpfulness: The Moderating Role of Rating Inconsistency. Electron. Commer. Res. Appl. 2023, 61, 101290. [Google Scholar] [CrossRef]
  5. Li, H.; Wang, C.R.; Meng, F.; Zhang, Z. Making Restaurant Reviews Useful and/or Enjoyable? The Impacts of Temporal, Explanatory, and Sensory Cues. Int. J. Hosp. Manag. 2019, 83, 257–265. [Google Scholar] [CrossRef]
  6. Pauwels, K.; Leeflang, P.S.H.; Teerling, M.L.; Huizingh, K.R.E. Does Online Information Drive Offline Revenues? Only for Specific Products and Consumer Segments! J. Retail. 2011, 87, 1–17. [Google Scholar] [CrossRef]
  7. Li, C.; Liu, Y.; Du, R. The Effects of Review Presentation Formats on Consumers’ Purchase Intention. J. Glob. Inf. Manag. 2021, 29, 1–20. [Google Scholar] [CrossRef]
  8. Xu, P.; Chen, L.; Santhanam, R. Will Video Be the next Generation of E-Commerce Product Reviews? Presentation Format and the Role of Product Type. Decis. Support Syst. 2015, 73, 85–96. [Google Scholar] [CrossRef]
  9. Choi, H.S.; Leon, S. An Empirical Investigation of Online Review Helpfulness: A Big Data Perspective. Decis. Support Syst. 2020, 139, 113403. [Google Scholar] [CrossRef]
  10. Filieri, R.; Raguseo, E.; Vitari, C. When Are Extreme Ratings More Helpful? Empirical Evidence on the Moderating Effects of Review Characteristics and Product Type. Comput. Hum. Behav. 2018, 88, 134–142. [Google Scholar] [CrossRef]
  11. Filieri, R.; Raguseo, E.; Vitari, C. What Moderates the Influence of Extremely Negative Ratings? The Role of Review and Reviewer Characteristics. Int. J. Hosp. Manag. 2019, 77, 333–341. [Google Scholar] [CrossRef]
  12. Mudambi, S.M.; Schuff, D. Research Note: What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.Com. MIS Q. 2010, 34, 185–200. [Google Scholar] [CrossRef]
  13. Chevalier, J.A.; Mayzlin, D. The Effect of Word of Mouth on Sales: Online Book Reviews. J. Mark. Res. 2006, 43, 345–354. [Google Scholar] [CrossRef]
  14. Godes, D.; Mayzlin, D. Firm-Created Word-of-Mouth Communication: Evidence from a Field Test. Mark. Sci. 2009, 28, 721–739. [Google Scholar] [CrossRef]
  15. Jiang, Z.; Benbasat, I. The Effects of Presentation Formats and Task Complexity on Online Consumers’ Product Understanding. MIS Q. 2007, 31, 475–500. [Google Scholar] [CrossRef]
  16. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef]
  17. Li, J.; Ensafjoo, M. Media Format Matters: User Engagement with Audio, Text and Video Tweets. J. Radio Audio Media 2024, 1–21. [Google Scholar] [CrossRef]
  18. Bettman, J.R.; Kakkar, P. Effects of Information Presentation Format on Consumer Information Acquisition Strategies. J. Consum. Res. 1977, 3, 233–240. [Google Scholar] [CrossRef]
  19. Townsend, C.; Kahn, B.E. The “Visual Preference Heuristic”: The Influence of Visual versus Verbal Depiction on Assortment Processing, Perceived Variety, and Choice Overload. J. Consum. Res. 2014, 40, 993–1015. [Google Scholar] [CrossRef]
  20. Ganguly, B.; Sengupta, P.; Biswas, B. What Are the Significant Determinants of Helpfulness of Online Review? An Exploration Across Product-types. J. Retail. Consum. Serv. 2024, 78, 103748. [Google Scholar] [CrossRef]
  21. Zinko, R.; Stolk, P.; Furner, Z.; Almond, B. A Picture Is Worth a Thousand Words: How Images Influence Information Quality and Information Load in Online Reviews. Electron. Mark. 2020, 30, 775–789. [Google Scholar] [CrossRef]
  22. Cui, Y.; Wang, X. Investigating the Role of Review Presentation Format in Affecting the Helpfulness of Online Reviews. Electron. Commer. Res. 2022, 1–20. [Google Scholar] [CrossRef]
  23. Ceylan, G.; Diehl, K.; Proserpio, D. Words Meet Photos: When and Why Photos Increase Review Helpfulness. J. Mark. Res. 2024, 61, 5–26. [Google Scholar] [CrossRef]
  24. Grewal, R.; Gupta, S.; Hamilton, R. Marketing Insights from Multimedia Data: Text, Image, Audio, and Video. J. Mark. Res. 2021, 58, 1025–1033. [Google Scholar] [CrossRef]
  25. Tavassoli, N.T.; Lee, Y.H. The Differential Interaction of Auditory and Visual Advertising Elements with Chinese and English. J. Mark. Res. 2003, 40, 468–480. [Google Scholar] [CrossRef]
  26. Eppler, M.J.; Mengis, J. The Concept of Information Overload: A Review of Literature from Organization Science, Accounting, Marketing, MIS, and Related Disciplines. Inf. Soc. 2004, 20, 325–344. [Google Scholar] [CrossRef]
  27. Glatz, T.; Lippold, M.A. Is More Information Always Better? Associations among Parents’ Online Information Searching, Information Overload, and Self-Efficacy. Int. J. Behav. Dev. 2023, 47, 444–453. [Google Scholar] [CrossRef]
  28. Jacoby, J.; Speller, D.E.; Kohn, C.A. Brand Choice Behavior as a Function of Information Load. J. Mark. Res. 1974, 11, 63–69. [Google Scholar] [CrossRef]
  29. Hu, H.; Krishen, A.S. When Is Enough, Enough? Investigating Product Reviews and Information Overload from a Consumer Empowerment Perspective. J. Bus. Res. 2019, 100, 27–37. [Google Scholar] [CrossRef]
  30. Jabr, W.; Rahman, M. Online Reviews and Information Overload: The Role of Selective, Parsimonious, and Concordant Top Reviews. MIS Q. 2022, 46, 1517–1550. [Google Scholar] [CrossRef]
  31. Furner, C.P.; Zinko, R.A. The Influence of Information Overload on the Development of Trust and Purchase Intention Based on Online Product Reviews in a Mobile vs. Web Environment: An Empirical Investigation. Electron. Mark. 2017, 27, 211–224. [Google Scholar] [CrossRef]
  32. Sweller, J. Cognitive Load Theory, Learning Difficulty, and Instructional Design. Learn. Instr. 1994, 4, 295–312. [Google Scholar] [CrossRef]
  33. Wirzberger, M.; Esmaeili Bijarsari, S.; Rey, G.D. Embedded Interruptions and Task Complexity Influence Schema-Related Cognitive Load Progression in an Abstract Learning Task. Acta Psychol. 2017, 179, 30–41. [Google Scholar] [CrossRef] [PubMed]
  34. Loaiza, V.M.; Oftinger, A.-L.; Camos, V. How Does Working Memory Promote Traces in Episodic Memory? J. Cogn. 2023, 6, 4. [Google Scholar] [CrossRef] [PubMed]
  35. Zukić, M.; Đapo, N.; Husremović, D. Construct and Predictive Validity of an Instrument for Measuring Intrinsic, Extraneous and Germane Cognitive Load. Univers. J. Psychol. 2016, 4, 242–248. [Google Scholar] [CrossRef]
  36. Paas, F.; Renkl, A.; Sweller, J. Cognitive Load Theory and Instructional Design: Recent Developments. Educ. Psychol. 2003, 38, 1–4. [Google Scholar] [CrossRef]
  37. Paivio, A. Imagery and Verbal Processes; Psychology Press: New York, NY, USA, 1979. [Google Scholar]
  38. Lee, A.Y.; Aaker, J.L. Bringing the Frame into Focus: The Influence of Regulatory Fit on Processing Fluency and Persuasion. J. Pers. Soc. Psychol. 2004, 86, 205–218. [Google Scholar] [CrossRef] [PubMed]
  39. Dang, A.; Nichols, B.S. The Effects of Size Referents in User-generated Photos on Online Review Helpfulness. J. Consum. Behav. 2023, 23, 1493–1511. [Google Scholar] [CrossRef]
  40. Chen, T.; Rao, R.R. Audio-Visual Integration in Multimodal Communication. Proc. IEEE 1998, 86, 837–852. [Google Scholar] [CrossRef]
  41. Lutz, B.; Pröllochs, N.; Neumann, D. Are Longer Reviews Always More Helpful? Disentangling the Interplay between Review Length and Line of Argumentation. J. Bus. Res. 2022, 144, 888–901. [Google Scholar] [CrossRef]
  42. Kemper, S.; Herman, R.E. Age Differences in Memory-Load Interference Effects in Syntactic Processing. J. Gerontol. Ser. B. 2006, 61, 327–332. [Google Scholar] [CrossRef]
  43. Laposhina, A.N.; Lebedeva, M.Y.; Khenis, A.A.B. Word Frequency and Text Complexity: An Eye-tracking Study of Young Russian Readers. Russ. J. Linguist. 2022, 26, 493–514. [Google Scholar] [CrossRef]
  44. Ullman, S.; Vidal-Naquet, M.; Sali, E. Visual Features of Intermediate Complexity and Their Use in Classification. Nat. Neurosci. 2002, 5, 682–687. [Google Scholar] [CrossRef]
  45. Ghose, A.; Ipeirotis, P.G. Estimating the Helpfulness and Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics. IEEE Trans. Knowl. Data Eng. 2011, 23, 1498–1512. [Google Scholar] [CrossRef]
  46. Sun, L.; Yamasaki, T.; Aizawa, K. Photo Aesthetic Quality Estimation Using Visual Complexity Features. Multimed. Tools Appl. 2018, 77, 5189–5213. [Google Scholar] [CrossRef]
  47. Hughes, C.; Costley, J.; Lange, C. The Effects of Multimedia Video Lectures on Extraneous Load. Distance Educ. 2019, 40, 54–75. [Google Scholar] [CrossRef]
  48. Lackmann, S.; Léger, P.-M.; Charland, P.; Aubé, C.; Talbot, J. The Influence of Video Format on Engagement and Performance in Online Learning. Brain Sci. 2021, 11, 128. [Google Scholar] [CrossRef]
  49. Kübler, R.V.; Lobschat, L.; Welke, L.; van der Meij, H. The Effect of Review Images on Review Helpfulness: A Contingency Approach. J. Retail. 2023, 100, 5–23. [Google Scholar] [CrossRef]
  50. Yang, Y.; Wang, Y.; Zhao, J. Effect of User-Generated Image on Review Helpfulness: Perspectives from Object Detection. Electron. Commer. Res. Appl. 2023, 57, 101232. [Google Scholar] [CrossRef]
  51. Li, Y.; Xie, Y. Is a Picture Worth a Thousand Words? An Empirical Study of Image Content and Social Media Engagement. J. Mark. Res. 2020, 57, 1–19. [Google Scholar] [CrossRef]
  52. Song, J.; Tang, T.; Hu, G. The Duration Threshold of Video Content Observation: An Experimental Investigation of Visual Perception Efficiency. Comput. Sci. Inf. Syst. 2023, 20, 879–892. [Google Scholar] [CrossRef]
  53. Dhar, R.; Wertenbroch, K. Consumer Choice between Hedonic and Utilitarian Goods. J. Mark. Res. 2000, 37, 60–71. [Google Scholar] [CrossRef]
  54. Chen, Y.; Xie, J. Online Consumer Review: Word-of-Mouth as a New Element of Marketing Communication Mix. Manag. Sci. 2008, 54, 477–491. [Google Scholar] [CrossRef]
  55. Nelson, P. Information and Consumer Behavior. J. Polit. Econ. 1970, 78, 311–329. [Google Scholar] [CrossRef]
  56. Nelson, P. Advertising as Information. J. Polit. Econ. 1974, 82, 729–754. [Google Scholar] [CrossRef]
  57. Park, D.-H.; Lee, J.; Han, I. The Effect of On-Line Consumer Reviews on Consumer Purchasing Intention: The Moderating Role of Involvement. Int. J. Electron. Commer. 2007, 11, 125–148. [Google Scholar] [CrossRef]
  58. Koh, N.S.; Hu, N.; Clemons, E.K. Do Online Reviews Reflect a Product’s True Perceived Quality? An Investigation of Online Movie Reviews across Cultures. Electron. Commer. Res. Appl. 2010, 9, 374–385. [Google Scholar] [CrossRef]
  59. Altab, H.M.; Mu, Y.; Sajjad, H.M.; Frimpong, A.N.K.; Frempong, M.F.; Adu-Yeboah, S.S. Understanding Online Consumer Textual Reviews and Rating: Review Length with Moderated Multiple Regression Analysis Approach. Sage Open 2022, 12, 2158244022110480. [Google Scholar] [CrossRef]
  60. Yin, D.; Mitra, S.; Zhang, H. Research Note—When Do Consumers Value Positive vs. Negative Reviews? An Empirical Investigation of Confirmation Bias in Online Word of Mouth. Inf. Syst. Res. 2016, 27, 131–144. [Google Scholar] [CrossRef]
  61. Horton, N.J. Multilevel and Longitudinal Modeling Using Stata. Am. Stat. 2006, 60, 293–294. [Google Scholar] [CrossRef]
  62. Li, B.; Lingsma, H.F.; Steyerberg, E.W.; Lesaffre, E. Logistic Random Effects Regression Models: A Comparison of Statistical Packages for Binary and Ordinal Outcomes. BMC Med. Res. Methodol. 2011, 11, 77. [Google Scholar] [CrossRef] [PubMed]
  63. Chen, L.; Shen, H.; Liu, Q.; Rao, C.; Li, J.; Goh, M. Joint Optimization on Green Investment and Contract Design for Sustainable Supply Chains with Fairness Concern. Ann. Oper. Res. 2024, 1–39. [Google Scholar] [CrossRef]
  64. Liu, Q.; Ma, Y.; Chen, L.; Pedrycz, W.; Skibniewski, M.; Chen, Z. Artificial Intelligence for Production, Operations and Logistics Management in Modular Construction Industry: A Systematic Literature Review. Inf. Fusion 2024, 109, 102423. [Google Scholar] [CrossRef]
Figure 1. Research model.
Figure 2. Components of review elements.
Table 1. The key literature on online review helpfulness and information overload.
| Reference | Year | Key Findings | Gaps Identified |
| --- | --- | --- | --- |
| Mudambi and Schuff [12] | 2010 | Review depth affects helpfulness ratings; complex dynamics between review length and perceived helpfulness | Did not explore multimodal content (text, images, video) |
| Chevalier and Mayzlin [13] | 2006 | Online book reviews influence sales significantly | Limited to books; did not consider the mix of media elements in reviews |
| Townsend and Kahn [19] | 2014 | Visual depictions can increase perceived variety but may overload and complicate decision-making | Need to incorporate mixed media elements beyond visual and verbal formats |
| Xu et al. [8] | 2015 | Presentation format and product type significantly affect the impact of reviews on purchase intention | Need to explore other combinations of review content elements in real-world settings |
| Zinko et al. [21] | 2020 | Images in reviews can balance out the effects of lengthy textual content | Did not fully explore video or interactive elements in reviews |
| Cui and Wang [22] | 2022 | Presentation format affects review helpfulness, influenced by word count and response count | Need to explore the authentic context of online shopping environments where hybrid review elements coexist and interact |
| Ceylan et al. [23] | 2024 | Greater similarity between review text and photos increases review helpfulness due to ease of processing | Did not explore the effect of video or interactive elements in reviews |
| Jabr and Rahman [30] | 2022 | Top reviews play a crucial role in mitigating information overload, with effectiveness varying by review volume, parsimony, concordance, and product popularity | Did not specifically address the impact of multimedia elements (images, videos) on online review helpfulness |
| Furner and Zinko [31] | 2017 | Information overload negatively affects trust and purchase intentions in online product reviews | Did not explore the impact of various multimedia elements (such as images or videos) in reviews |
Table 2. Products Details.
| Product | Product Type | Textual Elements | Imagery Elements | Video Elements |
| --- | --- | --- | --- | --- |
| SK-II [New Year Coupon] Star Luxury Skincare Experience Set | Experience | 1615 | 993 | 28 |
| Wuliangye 8th Generation 52 Degrees Strong Aroma Chinese Spirit | Experience | 982 | 710 | 202 |
| He Feng Yu Men's Perfume Gift Box | Experience | 2459 | 1057 | 283 |
| Huawei P40 (5G) 8G+128G | Search | 1089 | 850 | 113 |
| Canon PowerShot G7 | Search | 1450 | 910 | 8 |
| Lenovo Xiaoxin 16 2023 Ultra-thin 16, i5-13500H 16G 512G Standard Edition IPS Full Screen | Search | 1098 | 1025 | 409 |
| Total | | 8693 | 5545 | 1034 |
Table 3. Summary statistics.
| Variable | Mean | S.D. | Min | Max |
| --- | --- | --- | --- | --- |
| RLength | 62.02519 | 51.28203 | 0 | 500 |
| NImage | 2.017255 | 1.999983 | 0 | 9 |
| DVideo | 1.158089 | 3.309312 | 0 | 15 |
| PType | 0.5816174 | 0.4933220 | 0 | 1 |
| SRating | 3.817554 | 1.660865 | 1 | 5 |
| RResponse | 0.8141033 | 1.969498 | 0 | 96 |
Table 4. Main meglm regression results.
| Variable | Model 1 (Baseline Model) | Model 2 (Effects of Three Types of Reviews) | Model 3 (Quadratic Terms of Three Types of Reviews) | Model 4 (Interaction Terms of Product Types and Three Types of Reviews) | Model 5 (Interaction Terms of Star Ratings and Three Types of Reviews) |
| --- | --- | --- | --- | --- | --- |
| SRating | −0.0094512 (0.0353704) | −0.5604907 *** (0.0461091) | −0.5148615 *** (0.0508109) | −0.5857195 *** (0.0519262) | −0.4930427 *** (0.0656872) |
| PType | −0.374899 (0.668288) | −0.1652225 (0.8420286) | −0.1223541 (0.8428844) | −0.37136 (0.7070186) | −0.4278527 (0.7197488) |
| RResponse | 1.14453 *** (0.0747517) | 1.034944 *** (0.0794695) | 1.04114 *** (0.0798164) | 1.17875 *** (0.0832794) | 1.183912 *** (0.0836325) |
| RLength | - | 0.2520958 *** (0.0321143) | 0.5600407 *** (0.0767043) | 0.4586954 *** (0.0859834) | 0.4747597 *** (0.0836756) |
| NImage | - | 0.6736385 *** (0.0431683) | 0.3077441 *** (0.1127373) | 0.1773168 (0.1170098) | 0.1779609 (0.1182195) |
| DVideo | - | 0.2474221 *** (0.032336) | 0.2244095 *** (0.0355803) | −0.1295235 ** (0.0540558) | −0.1634889 ** (0.0665812) |
| RLength² | - | - | −0.3179691 *** (0.0732998) | −0.2828878 *** (0.0741595) | −0.3036335 *** (0.0718357) |
| NImage² | - | - | 0.287492 *** (0.0920753) | 0.2015993 ** (0.0907878) | 0.1133691 (0.0984913) |
| DVideo² | - | - | 0.0686756 * (0.0354769) | 0.1198493 *** (0.036711) | 0.1194758 *** (0.0368086) |
| PType × RLength | - | - | - | 0.1725752 *** (0.0664521) | 0.1962015 *** (0.0649207) |
| PType × NImage | - | - | - | 0.3376643 *** (0.0708082) | 0.3732293 *** (0.0714518) |
| PType × DVideo | - | - | - | 0.704002 *** (0.0708605) | 0.7325866 *** (0.0718867) |
| SRating × RLength | - | - | - | - | 0.1185116 *** (0.0306011) |
| SRating × NImage | - | - | - | - | 0.1526409 ** (0.0685903) |
| SRating × DVideo | - | - | - | - | 0.0299873 (0.0610099) |
| Intercept | −1.352547 *** (0.4727249) | −1.587855 *** (0.5957433) | −1.61709 *** (0.5963793) | −1.344825 *** (0.5002534) | −1.456717 *** (0.5108506) |
| Category-specific random effects | ✓ | ✓ | ✓ | ✓ | ✓ |
| Product-type-specific random effects | ✓ | ✓ | ✓ | ✓ | ✓ |
| Observations | 8693 | 8693 | 8693 | 8693 | 8693 |
| AIC | 7293.222 | 6733.375 | 6709.319 | 6510.865 | 6494.367 |
| Log-likelihood | −3640.611 | −3357.688 | −3334.8877 | −3239.7858 | −3229.1835 |

Note: ***, **, and * represent the significance levels of 1%, 5%, and 10%, respectively. The checkmarks (✓) denote the inclusion of the corresponding random effects terms at the product category and product type levels, allowing the intercept to vary randomly across these levels to account for heterogeneity in review helpfulness votes. The same notation is used consistently in all tables.
Table 5. Meglm regression for robustness checks.
| Variable | Model 6 (Interaction Term of Response and Product Type) | Model 7 (Dummy Product Categories) | Model 8 (Ordered Logistic Regression) |
| --- | --- | --- | --- |
| SRating | −0.480597 *** (0.0652739) | −0.4588905 *** (0.0661166) | −0.5965664 *** (0.0551167) |
| PType | −0.4164621 (0.7217275) | −0.2068887 (0.2341358) | −0.6700705 (0.8038127) |
| RResponse | 1.001158 *** (0.1003105) | 1.058021 *** (0.0810086) | 0.7270809 *** (0.0413358) |
| RLength | 0.4953353 *** (0.0839767) | 0.8273235 *** (0.1191163) | 0.5629864 *** (0.0757548) |
| NImage | 0.1836417 (0.1176959) | 0.1160616 (0.1346521) | 0.3174738 *** (0.1077916) |
| DVideo | −0.1625694 ** (0.0661902) | 0.7428473 *** (0.135011) | −0.1758015 *** (0.0588212) |
| RLength² | −0.3162264 *** (0.0720411) | −0.3468706 *** (0.0749603) | −0.3038904 *** (0.0650162) |
| NImage² | 0.0939385 (0.0981905) | 0.1825292 (0.1034378) | 0.0664214 (0.0845221) |
| DVideo² | 0.1194046 *** (0.0367394) | 0.0496003 (0.0359453) | 0.1183577 *** (0.0358613) |
| PType × RLength | 0.1974928 *** (0.0649661) | −0.0740211 *** (0.023857) | 0.0369258 (0.060756) |
| PType × NImage | 0.4266552 *** (0.0738749) | 0.0699344 *** (0.025906) | 0.3915435 *** (0.0635056) |
| PType × DVideo | 0.7424756 *** (0.0719275) | −0.1071241 *** (0.027712) | 0.6770692 *** (0.0671368) |
| SRating × RLength | 0.1192885 *** (0.0305453) | 0.0582222 * (0.0308609) | 0.1327962 *** (0.0267572) |
| SRating × NImage | 0.1720769 ** (0.0681431) | 0.1228174 ** (0.0701131) | 0.2547917 *** (0.0590973) |
| SRating × DVideo | 0.0247345 (0.0609282) | −0.0864279 (0.0642071) | −0.0146173 (0.0494329) |
| RResponse × PType | 0.4568665 *** (0.1579479) | - | - |
| Intercept | −1.475673 *** (0.5122107) | −1.0055 (0.9121967) | - |
| Category-specific random effects | ✓ | ✓ | ✓ |
| Product-type-specific random effects | ✓ | ✓ | ✓ |
| Observations | 8693 | 8693 | 8693 |
| AIC | 6488.005 | 6689.169 | 13,315.79 |
| Log-likelihood | −3225.003 | −3326.584 | −6566.896 |
Note: We used a multilevel ordinal logistic regression (meologit) for Model 8. Instead of traditional intercept terms, this model has a series of threshold cut points that reflect the nature of ordinal logistic regression. These cut points define the boundaries between the ordered categories of the dependent variable, based on an underlying continuous latent variable. Each cut point (cut1, cut2, ..., cut74) indicates a transition between adjacent ordinal outcomes. Therefore, the table does not show a single global intercept term, consistent with the ordinal logit model specification.
Table 6. Meglm regression for image quantity and its moderators.
| Variable | Model 9 (Image Quantity and Its Moderators) | Model 10 (Interaction Term of Response and Product Type) |
| --- | --- | --- |
| SRating | −0.4837298 *** (0.0661919) | −0.4695552 *** (0.0657944) |
| PType | −0.4281081 (0.7240757) | −0.4162143 (0.726257) |
| RResponse | 1.189342 *** (0.0837727) | 0.9989255 *** (0.1001109) |
| RLength | 0.4637037 *** (0.0824031) | 0.4847465 *** (0.0827195) |
| NImage | 0.1110063 (0.1205665) | 0.1149707 (0.1199841) |
| DVideo | −0.112208 (0.0687552) | −0.1096264 (0.0683078) |
| RLength² | −0.2850631 *** (0.0695438) | −0.2969254 *** (0.0698416) |
| NImage² | 0.2035386 ** (0.1026454) | 0.1851388 * (0.1022554) |
| DVideo² | 0.1231488 *** (0.0364154) | 0.1233512 *** (0.0363315) |
| PType × RLength | 0.2059165 *** (0.0638397) | 0.207092 *** (0.063874) |
| PType × NImage | 0.3712098 *** (0.0710646) | 0.4272585 *** (0.0735465) |
| PType × DVideo | 0.7476366 *** (0.0730458) | 0.7594635 *** (0.0731057) |
| SRating × RLength | 0.1674511 *** (0.0363951) | 0.1696441 *** (0.0363623) |
| SRating × NImage | 0.1484898 ** (0.0687052) | 0.1690226 ** (0.0682775) |
| SRating × DVideo | 0.1009002 (0.0683825) | 0.0991415 (0.0682961) |
| RLength × NImage | −0.0826407 ** (0.0339903) | −0.0845091 ** (0.0338529) |
| DVideo × NImage | −0.0983329 ** (0.0418905) | −0.102918 ** (0.0419894) |
| RResponse × PType | - | 0.4779957 *** (0.1582596) |
| Intercept | −1.424818 *** (0.5139615) | −1.443889 *** (0.5154571) |
| Category-specific random effects | ✓ | ✓ |
| Product-type-specific random effects | ✓ | ✓ |
| Observations | 8693 | 8693 |
| AIC | 6487.624 | 6480.507 |
| Log-likelihood | −3223.812 | −3219.253 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, L.; Che, G.; Hu, J.; Chen, L. Online Review Helpfulness and Information Overload: The Roles of Text, Image, and Video Elements. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 1243-1266. https://doi.org/10.3390/jtaer19020064

