Article

The Effects of E-Commerce Recommendation System Transparency on Consumer Trust: Exploring Parallel Multiple Mediators and a Moderator

1 School of Economics and Management, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 Information and Library Center, Chongqing Medical and Pharmaceutical College, Chongqing 401331, China
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2024, 19(4), 2630-2649; https://doi.org/10.3390/jtaer19040126
Submission received: 30 June 2024 / Revised: 15 September 2024 / Accepted: 19 September 2024 / Published: 1 October 2024

Abstract

Recommendation systems are used in various fields of e-commerce and can bring many benefits to consumers, but consumers' trust in recommendation systems (CTRS) is often lacking. Recommendation system transparency (RST) is an important factor that affects CTRS. Applying a three-layered trust model, this paper discusses the influence of RST on CTRS in the e-commerce domain, demonstrating the mediating roles of perceived effectiveness and discomfort and the moderating role of consumers' domain knowledge. We recruited 500 participants for an online hypothetical scenario experiment. The results show that consumers' perceived effectiveness and discomfort mediate the relationship between RST and CTRS. Specifically, RST (vs. non-transparency) leads to higher perceived effectiveness (which promotes CTRS) and lower levels of discomfort (which inhibits CTRS), in turn increasing CTRS. Domain knowledge positively moderates the positive impact of RST on perceived effectiveness, while negatively moderating the negative impact of RST on discomfort. Further, gender has a negative impact on CTRS when consumers purchase experience products, but there is no effect when they purchase search products.

1. Introduction

With the development of e-commerce, a large amount of product information leads to information overload. Recommendation systems can alleviate information overload and help consumers make purchasing decisions more easily [1,2]. However, consumers are often reluctant to adopt a recommendation system, largely due to a lack of trust in those systems [3]. Here, we focus on how to improve consumers’ trust in recommendation systems (CTRS).
CTRS is influenced by a number of factors, including the perceived accuracy and relevance of the offered recommendations, the perceived fairness of the system, and the degree of transparency regarding how the recommendations are generated [4,5,6]. Recommendation system transparency (RST) refers to the disclosure of the (partial) reasoning process behind the recommendation mechanism, explaining how the system operates [7]. RST is thus an important factor that affects CTRS [8,9].
In academia, there is controversy about the impact of RST on CTRS, with three main viewpoints: (1) Positive correlation. For example, in a music recommendation system, users preferred recommendations that they considered transparent and had more confidence in such recommendations [10]. (2) Negative correlation. Disclosing too many details about the internal logic of the system may lead to information overload, confusion, and low levels of perceived understanding, reducing user trust in and acceptance of the system [11,12]. (3) Non-correlation. Cramer et al.'s study of recommendation systems in cultural heritage did not find a positive effect of transparency on trust in the system, although transparency increased the acceptance of recommendations [13]. This earlier research shows that the evidence on the relationship between RST and CTRS remains inconsistent, and more research is needed to explore it fully.
In the corporate world, some online platforms have tried, or are trying, to improve RST, such as Twitter [14], Instagram, and Facebook [15]; however, in the field of e-commerce, few companies have tried to increase user trust by making their recommendation systems more transparent. Therefore, we examine whether and how RST can improve CTRS in the e-commerce domain.
We suggest that the reason for the inconsistent impact of RST on CTRS is that most empirical studies only consider the direct effects of RST on CTRS, without considering its potential indirect effects and boundary conditions, resulting in two research gaps. The first gap (indirect effects) stems from the lack of research on the mediators of the relationship between RST and CTRS. Zhao et al. found that users' perceived understanding of an online shopping advice-giving system (AGS) can mediate the relationship between subjective AGS transparency and users' trust in the AGS [16]. Cramer et al. investigated the effects of recommendation system transparency on user trust in the cultural heritage domain, using perceived competence and perceived understanding as mediators [13]. However, both studies lacked an emotional perspective, making it difficult to fully explain the impact of system transparency on human trust in the system. The second gap (boundary conditions) stems from the lack of empirical study of the moderators of that relationship. To our knowledge, no empirical study has addressed such moderators.
Based on Hoff and Bashir's three-layer trust model for automated systems [17], we developed a conceptual model that addresses these gaps. In the dynamic learned trust layer, we argue that RST (a design feature) indirectly affects CTRS through the mediating role of the consumer's perception of the recommendation system's performance. For perceived performance, we use the cognitive perspective of effectiveness and the affective perspective of discomfort as mediators. The influence of the initial learned trust layer occurs before use of the recommendation system and can therefore be considered a boundary condition for the learned layer; accordingly, we use consumers' domain knowledge of recommendation systems as a moderator in the model. Next, we consider the dispositional and situational trust layers, which are prior conditions of the learned trust layer [17] and can serve as boundary conditions for the process of consumers using recommendation systems. We use consumer gender, age, and income (dispositional trust layer) as control variables for the model and test the model in scenarios of purchasing different types of products (situational trust layer).
Compared to previous studies, this paper offers a model of parallel multiple mediating effects with further moderating effects by analyzing the indirect effects (perceived effectiveness has a positive impact on CTRS, and discomfort has a negative impact on CTRS) and boundary conditions (domain knowledge as moderator and the different product-type scenarios), thus providing a more detailed theoretical explanation of the relationship between RST and CTRS. We also extend Hoff and Bashir’s three-layer model of trust by combining it with the cognitive and affective aspects of trust and applying the extended approach to gain a better understanding of recommendation systems in the e-commerce domain.

2. Theoretical Background and Research Hypotheses

2.1. CTRS

Resnick and Varian introduced the concept of a "recommendation system" to the e-commerce field. They defined an "e-commerce recommendation system" as a decision support system that provides product information and suggestions to users via e-commerce websites, helping users make product and purchase decisions by simulating a salesperson who guides users through the purchasing process successfully [18].
The definitions of trust in automated systems have been used to define trust in AI systems [19]. Recommendation systems are one application of AI systems; thus, the definition of trust in a recommendation system can be derived from the meaning of automation trust. Applying Mayer et al.'s definition of trust in organizational relationships [20], Lee and See defined automation trust as "the attitude that an agent will help achieve personal goals in situations of uncertainty and vulnerability" [21]. Drawing on Marsh and Dibben's framework [22], Hoff and Bashir classified this concept into dispositional, situational, and learned trust [17]. Dispositional trust is the tendency to trust automation and is influenced by individual characteristics (e.g., age, gender, etc.). Situational trust depends on the specific interaction situation and is influenced by external variables (e.g., task type) and internal variables (e.g., mood). Learned trust is the user's evaluation of a system based on past experience or current interaction, pre-existing knowledge, and the perception of that system's performance. Design features indirectly influence learned trust through perceived system performance [17].
As shown in Figure 1, the learned trust layer is divided into dynamic learned trust and initial learned trust. The system's design features (such as transparency) indirectly influence trust via the mediating role of perceived system performance during use [17]. The formation of such trust involves both thinking and feeling [21] and is influenced by both cognitive and emotional dimensions [23]. Therefore, for perceived performance, we use the cognitive perspective of effectiveness and the affective perspective of discomfort as mediators. We thus locate the effect of RST on CTRS in the dynamic learned layer, where perceived effectiveness and discomfort capture consumers' perceptions of the recommendation system's performance and serve as mediators. The effects of the initial learned layer occur before use of the recommendation system and can thus be considered boundary conditions for the learned trust layer. We hypothesize that consumers' domain knowledge of recommendation systems (an initial learned layer factor) moderates the relationships between RST and perceived effectiveness and between RST and discomfort.
For the dispositional trust layer, consumers’ personal characteristics, such as gender, age, and culture, will also affect consumers’ trust in the system; therefore, we use consumers’ gender, age, and income as control variables for the model.
For the situational trust layer, according to the previous research, different types of products will affect consumers’ choices and attitudes [24]. We thus verified whether the mechanism of the impact of RST on CTRS changes when purchasing different types of products.

2.2. RST and CTRS

RST refers to the disclosure of the (partial) reasoning process behind the recommendation mechanism to explain how the system operates [7]. Highly transparent systems articulate the goals of these systems, the purposes for collecting data from users [25], and the rationale for system outputs [13,26,27]. Several researchers have argued that the trustworthiness of a recommendation system should be especially considered when assessing its quality [28,29]. Fully transparent systems allow consumers to understand how the system works and explain the system's choices and behaviors. A better understanding of a system helps users decide whether they can trust that system [21] and influences their attitudes toward the system [30]. Providing explanations for the results of a recommendation system—such as providing the reasons behind recommendations as textual explanations—can improve system transparency [7,10], thereby increasing consumer trust in the system [7,31,32] and, subsequently, its use.
The previous literature has shown that transparency is a way to increase trust [33]. Diakopoulos and Koliska’s research demonstrated that system transparency is considered a key influencing factor for user trust in a system [27]. Swearingen found that users prefer recommendations that they can consider transparent and will thus have more confidence in them [10]. Therefore, we propose the following hypothesis:
H1: 
RST positively relates to CTRS.

2.3. The Mediating Effect of Consumers’ Perceived Effectiveness of Recommendation Systems

Perceived effectiveness is defined as "the extent to which a person believes that using a particular system will improve their job performance" [34]. The effectiveness of a recommendation system thus depends on the accuracy of its recommendation algorithms [35]. Many scholars have shown that algorithmic judgments can be more accurate than human judgments, and this is now widely accepted [36]. As one application of AI, recommendation systems are likewise recognized as effective. Hingston's research shows that providing appropriate explanations for a recommendation system can improve that system's effectiveness [37], and beliefs about the effectiveness of a technology are a fundamental determinant of whether a system is ultimately adopted [34]. Indeed, the perceived effectiveness of a recommendation system increases consumer satisfaction [38] and trust [35].
We argue that when a recommendation system becomes more transparent, it discloses more information about its algorithm's decision-making process, which consumers can use to better evaluate the system's effectiveness; higher perceived effectiveness, in turn, can increase CTRS. Therefore, we propose the following two hypotheses:
H2a: 
RST positively relates to consumers’ perceived effectiveness of the system;
H2b: 
Consumers’ perceived effectiveness of the system positively relates to CTRS.
In addition to the direct impact of perceived effectiveness, prior research has also noted its mediating role in various fields [39,40]. For example, Yu and Li found that employees' perceptions of AI transparency and AI effectiveness had a chain mediating effect between AI decision transparency and employee trust in AI [41]. Perceived consumer effectiveness and environmental values can play a chain mediating role between income quality and organic food purchase intention [42]. We argue that RST (vs. non-transparency) leads to higher perceived effectiveness and thereby generates higher CTRS. Therefore, we propose the following hypothesis:
H2c: 
Consumers’ perceived effectiveness of the system mediates the relationship between RST and CTRS.

2.4. The Mediating Effect of Consumers’ Discomfort with Recommendation Systems

User discomfort is defined as a lack of control over technology and a feeling of being overwhelmed by it [43]. Users' negative responses to information technology are mainly of two types, namely psychological and behavioral responses [44]. Research has primarily focused on consumer resistance from the perspective of psychological responses, that is, the actions individuals take in opposition to situations in which they perceive themselves to be compelled [45]. If personalized recommendations interfere with user autonomy during the decision-making process, they may be counterproductive and lead to psychological resistance [46]. For example, increasing the autonomy of the algorithm without explaining that increase means less human involvement; consumers then have a low sense of control over the algorithm, which may cause discomfort [47].
In this paper, we regard the lack of transparency in recommendation systems as a main cause of consumers' algorithmic aversion and discomfort. When consumers encounter a recommendation system, they often view it as a "black box" and have no idea how it operates, let alone whether it has misused, or will misuse, their private data. As a result, consumers experience psychological discomfort when confronted with an e-commerce recommendation system, which in turn makes it difficult for them to trust that system and the recommendations it makes. Therefore, when a recommendation system becomes more transparent, consumer discomfort with that system will decrease. We thus propose the following hypotheses:
H3a: 
RST negatively relates to consumers’ discomfort;
H3b: 
Consumers’ discomfort negatively relates to CTRS.
Additionally, studies have tested the mediating effects of discomfort. For example, El Barachi et al. tested the mediating role of discomfort in the relationship between residents’ technological readiness and their willingness to continue using smart city services [48]. Yu and Li found that discomfort mediates the relationship between artificial intelligence transparency and employee trust [41]. We thus argue that RST (vs. non-transparency) leads to lower levels of consumers’ discomfort and, thereby, generates higher CTRS, and we propose the following hypothesis:
H3c: 
Consumers’ discomfort mediates the relationship between RST and CTRS.

2.5. Domain Knowledge

There is an aversion to new technologies such as algorithms [36], and one of the major reasons for this aversion is people's false expectations of new technologies [49], which can lead to a bias against technology in general [50]. Solutions to this problem include developing people's domain knowledge, which requires training not only in their specific professional domain but also in how to interact with algorithmic tools, how to interpret statistical outputs, and how to appreciate the usefulness and utility of these decision aids [42,51].
Domain knowledge refers to a person’s professional knowledge in a specific field. In consumer behavior research, domain knowledge is believed to strengthen the foundation and development of trust [52] and mitigate uncertainty in economic activity [53,54]. In addition, domain knowledge has been shown to affect users’ reliance and trust in these intelligent systems [17,55]. For example, Wang and Yin found that offering more domain knowledge and providing explanations enhanced users’ understanding of the system and reduced perceived uncertainty [56]. Users with more domain knowledge will have a higher intention to use the conversational recommendation system [57]. Users’ existing knowledge of an automated system also affects their initial trust in that system [17].
Therefore, we suggest that people with different levels of domain knowledge about such systems may have different attitudes toward recommendation systems. Domain knowledge may be a potential moderator of the relationships between RST and perceived effectiveness and between RST and discomfort. When confronted with a transparent recommendation system, consumers with more domain knowledge (vs. those with less) will perceive the system's effectiveness more strongly, experience less discomfort with it, and thereby trust it more. Therefore, we propose the following two hypotheses:
H4a: 
Consumers’ domain knowledge of a recommendation system moderates the relationship between RST and consumers’ perceived effectiveness. That is, for consumers with high domain knowledge (vs. low knowledge), an increase in RST leads to a higher perceived effectiveness of that system;
H4b: 
Consumers’ domain knowledge of a recommendation system moderates the relationship between RST and consumers’ discomfort. That is, for consumers with high domain knowledge (vs. low), an increase in RST leads to lower levels of discomfort with that system.

2.6. Product Types

Researchers have shown that the type of product affects consumers' purchasing choices [24]. Representative product types include utilitarian/hedonic products [58], search/experience products [59], public/private products [60], etc. Different types of information are required when evaluating different products [61]. Based on Nelson's theory of search and experience goods [62], search products are products whose quality can be measured based on objective characteristics, while experience products are products whose evaluation relies more on subjective experience and personal taste. Researchers have found that consumer behavior also varies depending on product type (search or experience) when using recommendation systems [59,63].
In an online shopping scenario, consumers can access information about products on the web and thus measure the quality of search products. For experience products, consumers are more likely to judge the quality of those products from a subjective emotional aspect. Therefore, we argue that consumers have different reactions when purchasing different types of products (search vs. experience). Thus, we propose the following hypothesis:
H5: 
In the cases of purchasing search (vs. experience) products, there will be differences in the impact of RST on CTRS.

3. Methodology

3.1. Procedure and Manipulation

Formal experiments were conducted from 18 April 2022 to 20 May 2022 using an online experimental method. The first part involved the recommendation system's definition and a scenario description. Based on previous studies [6], we chose a Bluetooth headset as the search product and a T-shirt as the experience product. We used four paragraphs describing the scenarios (search/experience product × transparent/non-transparent system) as the stimuli, and the participants were randomly assigned to one of the four conditions. To enhance the rigor of the research, all scenarios had the same content except for the differences in the product descriptions and the transparency of the system, and the word counts of the four text descriptions were similar. After reading the scenario description, the participants answered scenario questions on comprehensibility and authenticity. Two questions were used to ensure that all participants fully understood the hypothetical scenario, a common practice in transparency/AI/algorithm scenario experiments [64,65,66]. The second part included scales for perceived transparency, perceived effectiveness, discomfort, and trust, along with an attention check question. In the final part, the participants' demographic information (gender, age, education, and income) and domain knowledge were collected.
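As a concrete illustration, the following Python sketch (hypothetical; not the platform's actual assignment logic) shows the 2 (product type: search vs. experience) × 2 (RST: transparent vs. non-transparent) between-subjects random assignment described above.

```python
# Hypothetical sketch of random assignment to the four experimental conditions.
import itertools
import numpy as np

conditions = list(itertools.product(["search", "experience"],
                                    ["transparent", "non-transparent"]))
rng = np.random.default_rng(42)
assignments = rng.choice(len(conditions), size=500)   # one condition index per participant

for i, (product, transparency) in enumerate(conditions):
    print(product, transparency, int((assignments == i).sum()))
```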

3.2. The Sample

We recruited participants online from Credamo (a platform similar to MTurk; data collected via Credamo have previously been accepted by journals [67,68,69]) to complete the online experiment and questionnaire. We screened the questionnaires strictly, eliminating participants who failed the screening questions, whose answers were patterned or contradictory, or whose completion time was either too short or too long. A total of 500 valid questionnaires were collected, with 124, 125, 125, and 126 participants in the four groups. The sample size was determined by an a priori power analysis performed using G*Power with the following settings: two-tailed; effect size: 0.5; alpha: 0.05; and power: 0.95. The required sample size per group, as calculated by G*Power, was 105, so the sample size of this study met the requirements [70].
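For readers without G*Power, the same a priori calculation can be reproduced in Python; this is only an illustrative equivalent of the reported settings, not the authors' original analysis.

```python
# Reproduce the reported a priori power analysis: two-tailed
# independent-samples t-test, effect size d = 0.5, alpha = 0.05, power = 0.95.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.95, alternative='two-sided')
print(round(n_per_group))  # ~105 participants per group, matching the G*Power result
```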
Table 1 shows the demographics of the participants included in the formal experiment. Females accounted for 65%. All participants were over 18 years old, with the majority between 18 and 25 (M = 26.79, SD = 6.90). The monthly income of most participants was below CNY 2000 (M = 6492, SD = 5216.55). The highest education level for most of the participants was a Bachelor’s degree (70%).
Several factors explain this age and gender distribution. First, according to an industry report on the demographics of online shoppers, women make up a higher percentage of online shoppers than men, and with the popularity of the Internet and the optimization of e-commerce platforms, middle-aged and elderly people have begun to try and accept online shopping [71]. Second, in empirical studies of e-commerce consumers, the age and gender distributions of participants are similar to those in our study: there are more women than men [72,73], and the ages of participants are concentrated between 18 and 35 years old [74]. Therefore, the sample in our study is appropriate. Third, we used Credamo to recruit participants; gender and age were randomized and these data were not omitted, so participants aged 40–45, 46–50, and 50 or older were included in this study, although their proportions are very low. In addition, demographics such as age, gender, and income were used as control variables in the model (see Section 4.3), so we can control for the impact of these factors.

3.3. Measures

Since the experiment was carried out in China, we adopted back-translation techniques to translate the scales, and the original items were modified according to the research background. All constructs were measured on 7-point Likert scales ranging from very inconsistent (1) to very consistent (7). The scale of perceived transparency was adapted from Zhao et al. [16], the scale of trust from Höddinghaus et al. [64], the scales of effectiveness and discomfort from Castelo et al. [75], and the scale of domain knowledge from Zhou et al. [76]. The term "artificial intelligence system" in the original items was changed to "recommendation system". The measurement items for each variable are shown in Table 2.

4. Results

4.1. Reliability and Validity

SPSS 23 was used to test data reliability. Cronbach's α values for perceived transparency, trust, perceived effectiveness, discomfort, and domain knowledge were 0.971, 0.908, 0.862, 0.916, and 0.878, respectively. LISREL 8.80 was then used to test data validity. The CFA results showed that the five-factor model (perceived transparency, trust, perceived effectiveness, discomfort, and domain knowledge) was a good fit: χ2 = 240.180, df = 80, χ2/df = 3.002, RMSEA = 0.063, NNFI = 0.985, CFI = 0.988, IFI = 0.988, GFI = 0.940, and AGFI = 0.910. Moreover, the Harman single-factor model fit poorly (χ2 = 3354.039, df = 90, χ2/df = 37.267, RMSEA = 0.270, NNFI = 0.723, CFI = 0.763, IFI = 0.763, GFI = 0.527, and AGFI = 0.370), indicating that common method bias was not serious. The standardized factor loadings of the items ranged from 0.754 to 0.973 and were all highly significant (p < 0.001), indicating good convergent validity. The results of the confirmatory factor analysis and the reliability analysis are shown in Table 3. In addition, as shown in Table 4, none of the 95% confidence intervals for the factor correlations (correlation coefficient ± two standard errors) contained 1 or −1, indicating good discriminant validity.
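As an illustration of the reliability statistic reported above, a minimal Python sketch of Cronbach's α follows; the authors used SPSS 23, so this is only an assumed equivalent, and the column names and generated data are hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct; rows = respondents, columns = scale items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Usage with hypothetical 7-point Likert responses for a three-item trust scale:
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(500, 1))                       # shared "true" level per respondent
noise = rng.integers(-1, 2, size=(500, 3))                     # small per-item deviations
trust_items = pd.DataFrame(np.clip(base + noise, 1, 7),
                           columns=["trust_1", "trust_2", "trust_3"])
print(round(cronbach_alpha(trust_items), 3))
```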

4.2. Testing the Manipulation of AI Transparency

To check the success of the RST manipulation, we used consumers' perceived transparency as a manipulation check. For the search product, an independent samples t-test showed a significant difference in perceived transparency between the groups (t = −33.410, p < 0.001, Mlow = 1.801, Mhigh = 5.576). For the experience product, an independent samples t-test also showed a significant difference between the groups (t = −39.133, p < 0.001, Mlow = 1.731, Mhigh = 5.675). These results show that, for both product types, the textual descriptions produced the intended differences in the participants' perceptions of transparency; the transparency manipulation was therefore successful.
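The manipulation check is an ordinary independent-samples t-test; a minimal sketch follows, using synthetic data in place of the collected responses (the column names and generated values are hypothetical).

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-in for one product condition: 125 non-transparent, 125 transparent.
df = pd.DataFrame({
    "transparent": np.repeat([0, 1], 125),
    "perceived_transparency": np.concatenate([rng.normal(1.8, 0.8, 125),
                                              rng.normal(5.6, 0.8, 125)]),
})
low = df.loc[df["transparent"] == 0, "perceived_transparency"]
high = df.loc[df["transparent"] == 1, "perceived_transparency"]
t, p = stats.ttest_ind(low, high)   # compare group means on perceived transparency
print(f"t = {t:.3f}, p = {p:.3g}, M_low = {low.mean():.3f}, M_high = {high.mean():.3f}")
```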

4.3. Testing the Hypotheses

To test the mediating effects of perceived effectiveness and discomfort, model 4 in the PROCESS macro for SPSS developed by Hayes [77] was used for the regression analysis. Trust was the dependent variable; RST (0 = non-transparency, 1 = transparency) was the independent variable; perceived effectiveness and discomfort were the mediators; and gender, log of age, and log of income were the control variables.
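For readers unfamiliar with PROCESS model 4, the sketch below outlines the same logic in Python: OLS path estimates plus percentile-bootstrap confidence intervals for each indirect effect. It is an assumed, simplified equivalent, not the macro itself: the variable names and synthetic data are hypothetical, and the control variables (gender, log age, log income) are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 250
rst = rng.integers(0, 2, n)                                   # 0 = non-transparent, 1 = transparent
eff = 1.1 * rst + rng.normal(0, 1, n)                         # perceived effectiveness (mediator 1)
disc = -1.0 * rst + rng.normal(0, 1, n)                       # discomfort (mediator 2)
trust = 0.5 * eff - 0.2 * disc + 0.3 * rst + rng.normal(0, 1, n)   # CTRS
df = pd.DataFrame({"rst": rst, "eff": eff, "disc": disc, "trust": trust})

def indirect_effects(d):
    # a-paths: RST -> each mediator; b-paths: mediators -> trust controlling for RST.
    a1 = smf.ols("eff ~ rst", d).fit().params["rst"]
    a2 = smf.ols("disc ~ rst", d).fit().params["rst"]
    fit = smf.ols("trust ~ rst + eff + disc", d).fit()
    return a1 * fit.params["eff"], a2 * fit.params["disc"]

boot = np.array([indirect_effects(df.sample(len(df), replace=True)) for _ in range(2000)])
print("effectiveness indirect 95% CI:", np.percentile(boot[:, 0], [2.5, 97.5]))
print("discomfort indirect 95% CI:  ", np.percentile(boot[:, 1], [2.5, 97.5]))
```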
As shown in Table 5, in the case of purchasing a search product, RST had a significant positive effect on CTRS (β = 0.764, p < 0.001); thus, H1 is supported. RST had a significant positive effect on consumers' perceived effectiveness (β = 1.120, p < 0.001), so H2a is supported. Consumers' perceived effectiveness had a significant positive effect on CTRS (β = 0.534, p < 0.001), so H2b is supported. RST had a significant negative impact on consumers' discomfort (β = −1.069, p < 0.001), so H3a is supported. Consumers' discomfort had a significant negative impact on CTRS (β = −0.174, p < 0.001); thus, H3b is supported.
As shown in Table 6, the confidence interval of the mediating effect of perceived effectiveness was [0.435, 0.777] and did not include 0; the confidence interval of the mediating effect of discomfort was [0.073, 0.327] and did not include 0. Therefore, perceived effectiveness and discomfort played partial mediating roles between RST and CTRS, and H2c and H3c are supported.
As shown in Table 7, in the case of purchasing an experience product, RST had a significant positive effect on CTRS (β = 0.870, p < 0.001), so H1 is supported. RST had a significant positive effect on consumers' perceived effectiveness (β = 1.168, p < 0.001), so H2a is supported. Consumers' perceived effectiveness had a significant positive effect on CTRS (β = 0.479, p < 0.001), so H2b is supported. RST had a significant negative impact on consumers' discomfort (β = −1.109, p < 0.001), so H3a is supported. Consumers' discomfort had a significant negative impact on CTRS (β = −0.306, p < 0.001); thus, H3b is supported.
As shown in Table 8, the confidence interval of the mediating effect of perceived effectiveness was [0.417, 0.722] and did not include 0; the confidence interval of the mediating effect of discomfort was [0.209, 0.478] and did not include 0. Therefore, perceived effectiveness and discomfort played partial mediating roles between RST and CTRS, and so H2c and H3c are supported.
To test the moderating effect of domain knowledge, model 7 in the PROCESS macro for SPSS developed by Hayes [77] was used for the regression analysis. Trust was the dependent variable; RST (0 = non-transparency, 1 = transparency) was the independent variable; perceived effectiveness and discomfort were the mediators; domain knowledge was the moderator; and gender, log of age, and log of income were the control variables.
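The first stage of PROCESS model 7 amounts to regressing each mediator on RST, the moderator, and their interaction. A minimal, assumed Python equivalent is sketched below with hypothetical synthetic data; the control variables are again omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 250
rst = rng.integers(0, 2, n)                                    # 0 = non-transparent, 1 = transparent
know = rng.normal(0, 1, n)                                     # hypothetical (mean-centered) domain knowledge
eff = 1.1 * rst + 0.35 * rst * know + rng.normal(0, 1, n)      # perceived effectiveness
disc = -1.0 * rst - 0.50 * rst * know + rng.normal(0, 1, n)    # discomfort
df = pd.DataFrame({"rst": rst, "know": know, "eff": eff, "disc": disc})

eff_fit = smf.ols("eff ~ rst * know", df).fit()    # the rst:know coefficient corresponds to H4a
disc_fit = smf.ols("disc ~ rst * know", df).fit()  # the rst:know coefficient corresponds to H4b
print(eff_fit.params["rst:know"], disc_fit.params["rst:know"])
```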
As shown in Table 9, when purchasing a search product, the cross-term of RST and domain knowledge had a significantly positive effect on consumers’ perceived effectiveness (β = 0.348, p < 0.01), and H4a is supported. The cross-term of RST and domain knowledge had a significantly negative impact on consumers’ discomfort (β = −0.540, p < 0.01), so H4b is supported.
As shown in Table 10, in the case of purchasing an experience product, the cross-term of RST and domain knowledge had a significantly positive effect on consumers’ perceived effectiveness (β = 0.389, p < 0.01), so H4a is supported. The cross-term of RST and domain knowledge had a significantly negative impact on consumers’ discomfort (β = −0.418, p < 0.01), so H4b is also supported.
As can be seen from these results, the mediating and moderating effects do not differ between buying search products and buying experience products; thus, H5 is not supported. However, gender (a control variable) had a negative effect on trust when buying experience products but no effect on trust when buying search products. These results are shown in Figure 2 below.

5. Discussion

This paper addresses three research questions. First, how does RST affect CTRS through perceived effectiveness and discomfort? Second, does consumers' domain knowledge about recommendation systems moderate the relationships between RST and perceived effectiveness and discomfort? Third, do the relationships between RST, consumers' perceived effectiveness, discomfort, and CTRS differ when purchasing different products? Through the empirical test, H1, H2a, H2b, H2c, H3a, H3b, H3c, H4a, and H4b are supported, while H5 is not supported. These specific results are discussed below.
First, in the dynamic learned trust layer, RST (vs. non-transparency) produces higher CTRS (H1) and perceived effectiveness (H2a) and lower levels of discomfort (H3a). This finding is partially different from previous research [41], in which Yu and Li demonstrated that AI transparency increases employee discomfort with an AI system. Consumers' perceived effectiveness of a recommendation system has a positive impact on CTRS (H2b), and consumers' discomfort with a recommendation system has a negative impact on CTRS (H3b). These results substantiate prior findings in the literature [41,75]. We further confirmed the parallel multiple mediation effect, that is, one mediating path has a positive effect on CTRS and the other has a negative effect on CTRS. Since the mediating effect of perceived effectiveness (H2c) is greater than the mediating effect of discomfort (H3c), the total effect of RST on CTRS is positive. This result shows that even though there is discomfort when using the recommendation system, consumers attach more importance to the system's performance, and the perceived effectiveness of the recommendation system offsets the negative impact of discomfort; thus, consumers tend to trust the recommendation system.
Second, in the initial learned trust layer, consumers' domain knowledge of the recommendation system moderates the relationships between RST and perceived effectiveness and discomfort. Domain knowledge positively moderates the positive effect of RST on perceived effectiveness and negatively moderates the negative effect of RST on discomfort. This result suggests that when RST increases, consumers with high domain knowledge (vs. low knowledge) are better able to perceive the effectiveness of recommendation systems and experience lower levels of discomfort. This is consistent with previous findings [57] that users with higher domain knowledge have a more positive attitude toward these systems.
Third, in the situational trust layer, there is no difference in the relationships between RST, consumers' perceived effectiveness, discomfort, and CTRS when purchasing different products. This finding contradicts earlier results reported in the literature [66,78]: consumers perceived lower trusting beliefs in the context of experience products than search products [59], and perceived usefulness was more strongly affected for the search product than for the experience product [79]. However, Acharya et al. also found that the impact of product type on consumer behavior is not significant [6]. Following their reasoning, we argue that the nature of the online shopping environment provides a possible explanation for the results of the current study: recommendation systems are used in online shopping scenarios where experience products are less likely to be experienced before purchase, so consumers cannot actually perceive the difference between search and experience products [80,81,82].
Fourth, in the dispositional trust layer, we found a negative impact of gender on CTRS. When purchasing experience products, female consumers have lower trust in a recommendation system, while there is no difference between the genders when purchasing search products.
The above results indicate that the factors in the three-layer trust model exert cross-layer influences on trust.

5.1. Theoretical Implications

The theoretical implications of this study are as follows. First, our research contributes to the consumer psychology literature by proposing and confirming the impact of RST on CTRS in the e-commerce shopping environment. Although previous researchers have noticed the impact of RST on user trust, they have mainly focused on non-shopping scenarios such as music and cultural heritage recommendation systems [10,13]. In shopping scenarios, consumers need to pay, so it is all the more necessary to increase trust in the recommendation system and reduce uncertainty. To our knowledge, our paper is the first empirical study to examine the impact of RST on CTRS in the context of e-commerce. Our research enriches the theoretical framework of RST; this expansion not only enhances our understanding of consumer behavior in the context of e-commerce but also contributes to the theoretical foundation of future consumer psychology research in this context.
Second, this paper deepens the understanding of the mediating mechanism between RST and CTRS. Most studies mainly focused on direct effects, resulting in a lack of research on mediators [10]. Moreover, the effects of transparency on trust have primarily been studied from a cognitive perspective, while research from an emotional perspective is still lacking [83]. The formation of trust involves both thinking and feeling [21], and is influenced by both cognitive and emotional dimensions [23]. Therefore, the analysis from only one aspect of cognition or emotion is not comprehensive. We found two mediating paths between RST and CTRS from both cognitive and emotional perspectives: RST (vs. non-transparency) leads to higher perceived effectiveness (which positively relates to CTRS) and lower levels of discomfort (which negatively relates to CTRS). Therefore, the cumulative effect of the influence of these two mediating paths may help to explain the inconsistent conclusions of previous studies on the relationship between RST and CTRS.
Third, this paper extends the study of boundary conditions for the relationship between RST and CTRS. We found that consumers’ domain knowledge has a moderating effect on consumers’ attitudes toward a recommendation system. Previous studies have shown that users’ existing knowledge of automation systems can affect their attitudes toward a system [78,84], and our results reveal the specific mechanisms of this effect. In particular, when consumers have a better understanding of the recommendation system, it is easier for them to understand the meaning of the transparent recommendation system’s algorithm; in turn, this understanding enhances consumers’ perceived effectiveness of the recommendation system and reduces their discomfort, leading to higher CTRS. We did not find any evidence that differences in product types directly lead to these differences in CTRS but we found an interaction between individual consumer characteristics and product types. Specifically, we found that gender has a negative effect on trust when consumers purchase experience products, while there is no difference when consumers purchase search products. Overall, these findings add new evidence to help reconcile the inconsistency of existing empirical studies on the relationship between RST and CTRS.
Fourth, we apply Hoff and Bashir's three-layered trust model of automation [17] to recommendation systems in the e-commerce domain, making two key extensions. On the one hand, we modify the learned layer by substituting consumer discomfort and perceived effectiveness of the recommendation system for the original automation system performance. A recommendation system is a kind of artificial intelligence system, and compared with an automated system, users' evaluations of a recommendation system involve more aspects; the original system performance measure is suitable for automation systems, while the discomfort and perceived effectiveness measures are more suitable for recommendation systems [75]. On the other hand, we explore the cross-layer connections between factors in the different layers, such as how the interaction of consumer gender (the dispositional layer) and product type (the situational layer) affects CTRS (the learned layer). As such, our framework provides a new perspective for future CTRS research.

5.2. Practical Implications

For recommendation system designers, improving the transparency of the system should be an important goal in the design process. Specifically, this goal can be achieved through the following measures: Firstly, user interface design—designers can integrate functionality into the user interface to explain the working principle of the recommendation system. For example, providing clear algorithm explanations, model descriptions, and visualizations of the recommendation process can help users understand the basis and mechanisms of the recommendations. This not only enhances RST but also further strengthens CTRS by increasing consumers' perceived effectiveness of the recommendation system. Secondly, data privacy transparency—clearly demonstrating how recommendation systems collect, use, and protect personal information is another key measure to improve CTRS. Providing an easily understandable privacy policy and data usage instructions can significantly reduce consumers' discomfort with recommendation systems, thereby increasing CTRS.
For e-commerce companies, the following strategies can help increase the frequency of consumers using recommendation systems: Firstly, during regular promotional activities, e-commerce companies can introduce consumers to the working principles and advantages of recommendation systems through online seminars, blog articles, or infographics. For example, explaining how recommendation systems generate personalized recommendations based on consumer behavior and preferences can enhance consumers’ perceived effectiveness of a recommendation system. Meanwhile, introducing how the system securely processes user information can alleviate users’ concerns about privacy and reduce their discomfort. Secondly, e-commerce companies should segment consumer groups based on their characteristics. We found that gender has a negative effect on trust when consumers purchase experience products. Therefore, for different types of products, companies should consider the impact of differences in gender, age, and other characteristics on trust and develop targeted strategies to enhance consumers’ trust. This segmentation not only helps optimize marketing strategies but also enhances consumers’ purchasing experience, ultimately achieving higher market competitiveness.
The results of this study also provide valuable insights for governments and regulatory agencies: Firstly, formulate policies and regulations—governments and regulatory agencies can introduce regulations requiring companies to provide transparent algorithm details and data processing instructions in their recommendation systems. Establishing transparency standards can promote fair competition within the industry, enhance the credibility of recommendation systems, and protect consumer rights. Secondly, promote industry standards—encourage standardization organizations within the industry to develop best practices for transparency and privacy protection, and encourage companies to follow these standards when designing recommendation systems, thereby enhancing transparency and consumer trust throughout the industry.

5.3. Limitations and Suggestions for Future Research

This study has the following limitations. First, it adopts online scenario experiments that do not fully simulate real e-commerce shopping, which may affect the participants' true perception of the recommendation system. Future research can improve the accuracy of these experimental results by using a field experiment in which participants make real e-commerce purchases.
Secondly, consumers’ domain knowledge of a recommendation system is derived from self-evaluation and is not objective. Future research should examine this variable using more objective methods.
Third, the design of recommendation systems involves many factors. In addition, perceptions of transparency and trust may differ for basic data, such as recency, frequency, and monetary value (RFM), versus detailed data, such as clickstream data. Therefore, future research can benefit from testing other factors to assess their impact on recommendation systems, such as the level of detail of input data, recommendations (content), personalized UI recommendations (layout), and so on.
Fourth, the mediation results of this study show that the mediating effect of perceived effectiveness is greater than that of discomfort; however, the reasons for this finding have not been explored. Future research can conduct further experiments on this mediation model to explore its mechanisms.

6. Conclusions

This paper explores how RST affects CTRS in the e-commerce domain. Based on the three-layer trust model, we constructed a research model to investigate how RST affects consumers' perceived effectiveness, discomfort, and CTRS, as well as the moderating effect of consumers' domain knowledge, when purchasing search and experience products. We used online scenario experiments to simulate consumers' online shopping and recruited 500 participants. The empirical tests found that RST (vs. non-transparency) leads to higher perceived effectiveness (which promotes CTRS) and lower levels of discomfort (which inhibits CTRS). Consumers with higher (vs. lower) domain knowledge thus have more positive attitudes toward a transparent recommendation system. This model holds when purchasing both search and experience products.

Author Contributions

Conceptualization, Y.L. and X.D.; methodology, Y.L. and X.D.; software, X.D.; validation, X.D.; formal analysis, X.D.; investigation, Y.L. and X.D.; resources, Y.L. and X.H.; data curation, X.D.; writing—original draft preparation, X.D.; writing—review and editing, Y.L. and J.L.; visualization, X.D.; supervision, Y.L.; project administration, Y.L. and J.L.; funding acquisition, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chongqing Education Science Planning Project “Research on student learning behavior analysis and teaching innovation in the virtual and real fusion environment” from Chongqing Academy of Education Science, grant number 2019-GX-009.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. (The data are not publicly available due to privacy restrictions).

Acknowledgments

The authors gratefully acknowledge the financial support from the Chongqing Education Science Planning Project "Research on student learning behavior analysis and teaching innovation in the virtual and real fusion environment" (No. 2019-GX-009), Chongqing Academy of Education Science.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, Q.; Zhang, X.; Zhang, L.; Zhao, Y. The interaction effects of information cascades, word of mouth and recommendation systems on online reading behavior: An empirical investigation. Electron. Commer. Res. 2019, 19, 521–547. [Google Scholar] [CrossRef]
  2. Alamdari, P.M.; Navimipour, N.J.; Hosseinzadeh, M.; Safaei, A.A.; Darwesh, A. A systematic study on the recommender systems in the E-commerce. IEEE Access 2020, 8, 115694–115716. [Google Scholar] [CrossRef]
  3. Tharwat, M.E.A.A.; Jacob, D.W.; Fudzee, M.F.M.; Kasim, S.; Ramli, A.A.; Lubis, M. The role of trust to enhance the recommendation system based on social network. Int. J. Adv. Sci. Eng. Inf. Technol. 2020, 10, 1387–1395. [Google Scholar] [CrossRef]
  4. Wang, S.; Zhang, X.; Wang, Y.; Ricci, F. Trustworthy recommender systems. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–20. [Google Scholar] [CrossRef]
  5. Wang, Y.; Ma, W.; Zhang, M.; Liu, Y.; Ma, S. A survey on the fairness of recommender systems. ACM Trans. Inf. Syst. 2023, 41, 1–43. [Google Scholar] [CrossRef]
  6. Acharya, N.; Sassenberg, A.M.; Soar, J. Consumers’ Behavioural Intentions to Reuse Recommender Systems: Assessing the Effects of Trust Propensity, Trusting Beliefs and Perceived Usefulness. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 55–78. [Google Scholar] [CrossRef]
  7. Tintarev, N.; Masthoff, J. Explaining Recommendations: Design and Evaluation. In Recommender Systems Handbook; Springer: Berlin/Heidelberg, Germany, 2015; pp. 353–382. [Google Scholar] [CrossRef]
  8. Gedikli, F.; Jannach, D.; Ge, M. How should I explain? A comparison of different explanation types for recommender systems. Int. J. Hum.-Comput. Stud. 2014, 72, 367–382. [Google Scholar] [CrossRef]
  9. Wang, Y.; Liu, W.; Yao, M. Which recommendation system do you trust the most? Exploring the impact of perceived anthropomorphism on recommendation system trust, choice confidence, and information disclosure. New Media Soc. 2024. [Google Scholar] [CrossRef]
  10. Sinha, R.; Swearingen, K. The Role of Transparency in Recommender Systems. In CHI’02 Extended Abstracts on Human Factors in Computing Systems; ACM Press: New York, NY, USA, 2002; pp. 830–831. [Google Scholar] [CrossRef]
  11. Tintarev, N.; Masthoff, J. A Survey of Explanations in Recommender Systems. In Proceedings of the 2007 IEEE 23rd International Conference on Data Engineering Workshop, Istanbul, Turkey, 17–20 April 2007; pp. 801–810. [Google Scholar] [CrossRef]
  12. Balog, K.; Radlinski, F.; Arakelyan, S. Transparent, Scrutable and Explainable User Models for Personalized Recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 265–274. [Google Scholar] [CrossRef]
  13. Cramer, H.; Evers, V.; Ramlal, S.; Van Someren, M.; Rutledge, L.; Stash, N.; Aroyo, L.; Wielinga, B. The effects of transparency on trust in and acceptance of a content-based art recommender. User Model. User-Adapt. Interact. 2008, 18, 455–496. [Google Scholar] [CrossRef]
  14. Twitter Reveals Code Showing Why Tweets Pop-Up. 2023. Available online: https://www.dw.com/en/twitter-reveals-code-showing-why-tweets-pop-up/a-65204619 (accessed on 1 May 2024).
  15. Understanding Social Media Recommendation Algorithms. 2023. Available online: https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms (accessed on 1 May 2024).
  16. Zhao, R.; Benbasat, I.; Cavusoglu, H. Do Users Always Want to Know More? Investigating the Relationship Between System Transparency and Users’ Trust in Advice-Giving Systems. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, 8–14 June 2019. [Google Scholar]
  17. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. J. Hum. Factors Ergon. Soc. 2015, 57, 407–434. [Google Scholar] [CrossRef]
  18. Resnick, P.; Varian, H.R. Recommender systems. Commun. ACM 1997, 40, 56–58. [Google Scholar] [CrossRef]
  19. Li, Y.; Zhou, X.; Jiang, X.; Fan, F.; Song, B. How service robots’ human-like appearance impacts consumer trust: A study across diverse cultures and service settings. Int. J. Contemp. Hosp. Manag. 2024, 36, 3151–3167. [Google Scholar] [CrossRef]
  20. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  21. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
  22. Marsh, S.; Dibben, M.R. The role of trust in information science and technology. Annu. Rev. Inf. Sci. Technol. 2003, 37, 465–498. [Google Scholar] [CrossRef]
  23. Johnson, D.; Grayson, K. Cognitive and Affective Trust in Service Relationships. J. Bus. Res. 2005, 58, 500–507. [Google Scholar] [CrossRef]
  24. Wien, A.H.; Peluso, A.M. Influence of human versus AI recommenders: The roles of product type and cognitive processes. J. Bus. Res. 2021, 137, 13–27. [Google Scholar] [CrossRef]
  25. Hedbom, H.; Pulls, T.; Hansen, M. Transparency Tools. In Privacy and Identity Management for Life; Springer: Berlin/Heidelberg, Germany, 2011; pp. 135–153. [Google Scholar] [CrossRef]
  26. Zouave, E.T.; Marquenie, T. An Inconvenient Truth: Algorithmic Transparency & Accountability in Criminal Intelligence Profiling. In Proceedings of the 2017 European Intelligence and Security Informatics Conference (EISIC), Athens, Greece, 11–13 September 2017; pp. 17–23. [Google Scholar] [CrossRef]
  27. Diakopoulos, N.; Koliska, M. Algorithmic transparency in the news media. Digit. J. 2017, 5, 809–828. [Google Scholar] [CrossRef]
  28. Berkovsky, S.; Taib, R.; Conway, D. How to Recommend?: User Trust Factors in Movie Recommender Systems. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, Limassol, Cyprus, 13–16 March 2017; pp. 287–300. [Google Scholar] [CrossRef]
  29. Pearl, P.; Chen, L. Trust-inspiring explanation interfaces for recommender systems. Knowl.-Based Syst. 2007, 20, 542–556. [Google Scholar] [CrossRef]
  30. Alpert, S.R.; Karat, J.; Karat, C.; Brodie, C.; Vergo, J.G. User attitudes regarding a user-adaptive e-Commerce web site. User Model. User-Adapt. Interact. 2003, 13, 373–396. [Google Scholar] [CrossRef]
  31. Felfernig, A.; Gula, B. An Empirical Study on Consumer Behavior in the Interaction with Knowledge-based Recommender Applications. In Proceedings of the 8th IEEE International Conference on E-Commerce Technology and The 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services (CEC/EEE’ 06), San Francisco, CA, USA, 26–29 June 2006; pp. 159–169. [Google Scholar] [CrossRef]
  32. Herlocker, J.L.; Konstan, J.A.; Riedl, J. Explaining Collaborative Filtering Recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, Philadelphia, PA, USA, 2–6 December 2000; pp. 241–250. [Google Scholar] [CrossRef]
  33. Walker, K.L. Surrendering Information through the Looking Glass: Transparency, Trust, and Protection. J. Public Policy Mark. 2016, 35, 144–158. [Google Scholar] [CrossRef]
  34. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 318–339. Available online: http://misq.org/perceived-usefulness-perceived-ease-of-use-and-user-acceptance-of-information-technology.html (accessed on 1 May 2024). [CrossRef]
  35. Tintarev, N.; Masthoff, J. Evaluating the effectiveness of explanations for recommender systems: Methodological issues and empirical studies on the impact of personalization. User Model. User-Adapt. Interact. 2012, 22, 399–439. [Google Scholar] [CrossRef]
  36. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef] [PubMed]
  37. Hingston, M. User Friendly Recommender Systems. Master’s Thesis, Sydney University, Sydney, Australia, 2006. [Google Scholar]
  38. Knijnenburg, B.P.; Willemsen, M.C.; Gantner, Z.; Soncu, H.; Newell, C. Explaining the user experience of recommender systems. User Model. User-Adapt. Interact. 2012, 22, 441–504. [Google Scholar] [CrossRef]
  39. Hartanto, D.; Dalle, J.; Akrim, A.; Anisah, H.U. Perceived effectiveness of e-governance as an underlying mechanism between good governance and public trust: A case of Indonesia. Digit. Policy. Regul. Gov. 2021, 23, 598–616. [Google Scholar] [CrossRef]
  40. Song, H.; Kim, T.; Kim, J.; Ahn, D.; Kang, Y. Effectiveness of VR crane training with head-mounted display: Double mediation of presence and perceived usefulness. Autom. Constr. 2021, 122, 103506. [Google Scholar] [CrossRef]
  41. Yu, L.; Li, Y. Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort. Behav. Sci. 2022, 12, 127. [Google Scholar] [CrossRef]
  42. Zheng, Q.; Wen, X.; Xiu, X.; Chen, Q. Income Quality and Organic Food Purchase Intention: The Chain Mediating Role of Environmental Value, Perceived Consumer Effectiveness. SAGE Open 2023, 13. [Google Scholar] [CrossRef]
  43. Parasuraman, A.; Colby, C.L. An updated and streamlined technology readiness index: TRI 2.0. J. Serv. Res. 2015, 18, 59–74. [Google Scholar] [CrossRef]
  44. Ma, X.; Sun, Y.; Guo, X.; Lai, K.H.; Vogel, D. Understanding users’ negative responses to recommendation algorithms in short-video platforms: A perspective based on the Stressor-Strain-Outcome (SSO) framework. Electron. Mark. 2022, 32, 41–58. [Google Scholar] [CrossRef]
  45. Tucker, C.E. Social networks, personalized advertising, and privacy controls. J. Mark. Res. 2014, 51, 546–562. [Google Scholar] [CrossRef]
  46. He, X.; Liu, Q.; Jung, S. The Impact of Recommendation System on User Satisfaction: A Moderated Mediation Approach. J. Theor. Appl. Electron. Commer. Res. 2024, 19, 448–466. [Google Scholar] [CrossRef]
  47. Murphy, J.; Hofacker, C.; Gretzel, U. Dawning of the age of robots in hospitality and tourism: Challenges for teaching and research. Eur. J. Tour. Res. 2017, 15, 104–111. [Google Scholar] [CrossRef]
  48. El Barachi, M.; Salim, T.A.; Nyadzayo, M.W.; Mathew, S.; Badewi, A.; Amankwah-Amoah, J. The relationship between citizen readiness and the intention to continuously use smart city services: Mediating effects of satisfaction and discomfort. Technol. Soc. 2022, 71. [Google Scholar] [CrossRef]
  49. Burton, J.W.; Stein, M.K.; Jensen, T.B. A systematic review of algorithm aversion in augmented decision making. J. Behav. Decis. Mak. 2020, 33, 220–239. [Google Scholar] [CrossRef]
  50. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Manag. Sci. 2018, 64, 1155–1170. [Google Scholar] [CrossRef]
  51. Kuncel, N.R. Some New (and Old) Suggestions for improving personnel selection. Ind. Organ. Psychol. 2008, 1, 343–346. [Google Scholar] [CrossRef]
  52. Han, J.; Seo, Y.; Ko, E. Staging luxury experiences for understanding sustainable fashion consumption: A balance theory application. J. Bus. Res. 2017, 74, 162–167. [Google Scholar] [CrossRef]
  53. Nuttavuthisit, K.; Thøgersen, J. The importance of consumer trust for the emergence of a market for green products: The case of organic food. J. Bus. Ethics 2017, 140, 323–337. [Google Scholar] [CrossRef]
  54. Hassan, L.; Shaw, D.; Shiu, E.; Walsh, G.; Parry, S. Uncertainty in ethical consumer choice: A conceptual model. J. Consum. Behav. 2013, 12, 182–193. [Google Scholar] [CrossRef]
  55. Sanchez, J.; Rogers, W.A.; Fisk, A.D.; Rovira, E. Understanding reliance on automation: Effects of error type, error distribution, age and experience. Theor. Issues Ergon. Sci. 2014, 15, 134–160. [Google Scholar] [CrossRef] [PubMed]
  56. Wang, X.; Yin, M. Are Explanations Helpful? A Comparative Study of the Effects of Explanations in AI-Assisted Decision-Making. In Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, 14–17 April 2021; pp. 318–328. [Google Scholar] [CrossRef]
  57. Cai, W.; Jin, Y.; Chen, L. Impacts of Personal Characteristics on User Trust in Conversational Recommender Systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–14. [Google Scholar] [CrossRef]
  58. Chiou, J.S.; Ting, C.C. Will you spend more money and time on internet shopping when the product and situation are right? Comput. Hum. Behav. 2011, 27, 203–208. [Google Scholar] [CrossRef]
  59. Ashraf, M.; Ismawati Jaafar, N.; Sulaiman, A. System- vs. consumer-generated recommendations: Affective and social-psychological effects on purchase intention. Behav. Inf. Technol. 2019, 38, 1259–1272. [Google Scholar] [CrossRef]
  60. Jiang, X.; Wu, J.; Yao, Q.; Yang, D. You deserve it! Why do consumers prefer arrogant brands? Nankai Bus. Rev. Int. 2022, 26, 1–24. [Google Scholar] [CrossRef]
  61. Mudambi, S.M.; Schuff, D. Research note: What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Q. 2010, 34, 185–200. [Google Scholar] [CrossRef]
  62. Nelson, P. Information and consumer behavior. J. Political Econ. 1970, 78, 311–316. [Google Scholar] [CrossRef]
  63. Senecal, S.; Nantel, J. The influence of online product recommendations on consumers’ online choices. J. Retail. 2004, 80, 159–169. [Google Scholar] [CrossRef]
  64. Höddinghaus, M.; Sondern, D.; Hertel, G. The Automation of Leadership Functions: Would People Trust Decision Algorithms? Comput. Hum. Behav. 2021, 116, 106635. [Google Scholar] [CrossRef]
  65. Kim, T.; Hinds, P. Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction. In Proceedings of the ROMAN 2006-the 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 80–85. [Google Scholar] [CrossRef]
  66. Ötting, S.K.; Maier, G.W. The importance of procedural justice in human–machine interactions: Intelligent systems as new decision agents in organizations. Comput. Hum. Behav. 2018, 89, 27–39. [Google Scholar] [CrossRef]
  67. Hu, J.; Ma, X.; Xu, X.; Liu, Y. Treat for affection? Customers’ differentiated responses to pro-customer deviance. Tour. Manag. 2022, 93, 104619. [Google Scholar] [CrossRef]
  68. Huang, M.; Ju, D.; Yam, K.C.; Liu, S.; Qin, X.; Tian, G. Employee Humor Can Shield Them from Abusive Supervision. J. Bus. Ethics 2023, 186, 407–424. [Google Scholar] [CrossRef]
  69. Zhang, Q.; Wang, X.-H.; Nerstad, C.G.; Ren, H.; Gao, R. Motivational Climates, Work Passion, and Behavioral Consequences. J. Organ. Behav. 2022, 43, 1579–1597. [Google Scholar] [CrossRef]
  70. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A Flexible Statistical Power Analysis Program for the Social, Behavioral, and Biomedical Sciences. Behav. Res. Methods 2007, 39, 175–191. [Google Scholar] [CrossRef]
  71. Insight into the Chinese E-Commerce Platform Market in 2022. 2022. Available online: https://www.cbndata.com/report/3076/detail?isReading=report&page=1 (accessed on 1 May 2024).
  72. Molinillo, S.; Aguilar-Illescas, R.; Anaya-Sánchez, R.; Liébana-Cabanillas, F. Social commerce website design, perceived value and loyalty behavior intentions: The moderating roles of gender, age and frequency of use. J. Retail. Consum. Serv. 2021, 63, 102404. [Google Scholar] [CrossRef]
  73. Zhang, L.; Shao, Z.; Li, X.; Feng, Y. Gamification and online impulse buying: The moderating effect of gender and age. Int. J. Inf. Manag. 2021, 61, 102267. [Google Scholar] [CrossRef]
  74. Zhou, M.; Huang, J.; Wu, K.; Huang, X.; Kong, N.; Campy, K.S. Characterizing Chinese consumers’ intention to use live e-commerce shopping. Technol. Soc. 2021, 67, 101767. [Google Scholar] [CrossRef]
  75. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-Dependent Algorithm Aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  76. Zhou, L.; Wang, W.; Xu, J.D.; Liu, T.; Gu, J. Perceived information transparency in B2C e-commerce: An empirical investigation. Inf. Manag. 2018, 55, 912–927. [Google Scholar] [CrossRef]
  77. Hayes, A.F. An Index and Test of Linear Moderated Mediation. Multivar. Behav. Res. 2015, 50, 1–22. [Google Scholar] [CrossRef]
  78. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  79. Benlian, A.; Titah, R.; Hess, T. Differential Effects of Provider Recommendations and Consumer Reviews in E-Commerce Transactions: An Experimental Study. J. Manag. Inform. Syst. 2012, 29, 237–272. [Google Scholar] [CrossRef]
  80. Wang, Y.Y.; Luse, A.; Townsend, A.M.; Mennecke, B.E. Understanding the moderating roles of types of recommender systems and products on customer behavioral intention to use recommender systems. Inf. Syst. E-Bus. Manag. 2015, 13, 769–799. [Google Scholar] [CrossRef]
  81. Liu, Z.; Lei, S.H.; Guo, Y.L.; Zhou, Z.A. The interaction effect of online review language style and product type on consumers’ purchase intentions. Palgrave Commun. 2020, 6, 11. [Google Scholar] [CrossRef]
  82. van der Heijden, H. User acceptance of hedonic information systems. MIS Q. 2004, 28, 695–704. Available online: http://misq.org/user-acceptance-of-hedonic-information-systems.html (accessed on 1 May 2024). [CrossRef]
  83. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  84. Allen, R.; Choudhury, P. Algorithm-Augmented Work and Domain Experience: The Countervailing Forces of Ability and Aversion. Organ Sci. 2022, 33, 149–169. [Google Scholar] [CrossRef]
Figure 1. Conceptual model.
Figure 2. Results of this study. ** p < 0.01, *** p < 0.001. For conciseness, only the significant control variable (gender) is labeled in the figure.
Table 1. Demographics of participants.

Item | Category | Frequency | Percentage (%)
Gender | Male | 177 | 35.4
 | Female | 323 | 64.5
Age | 18–25 years old | 250 | 50.0
 | 26–30 years old | 122 | 24.4
 | 31–35 years old | 77 | 15.4
 | 36–40 years old | 26 | 5.2
 | 40–45 years old | 9 | 1.8
 | 46–50 years old | 7 | 1.4
 | More than 50 years old | 9 | 1.8
Income | Below CNY 2000 | 109 | 21.8
 | CNY 2001–4000 | 96 | 19.2
 | CNY 4001–6000 | 64 | 12.8
 | CNY 6001–8000 | 69 | 13.8
 | CNY 8001–10,000 | 71 | 14.2
 | CNY 10,001–12,000 | 20 | 4.0
 | CNY 12,001–14,000 | 20 | 4.0
 | CNY 14,001–16,000 | 14 | 2.8
 | CNY 16,001–18,000 | 10 | 2.0
 | CNY 18,001–20,000 | 10 | 2.0
 | Above CNY 20,000 | 17 | 3.4
Education | Senior high school and below | 9 | 1.8
 | Junior college | 42 | 8.4
 | Undergraduate course | 352 | 70.4
 | Graduate students (including MBA/MPA, etc.) | 86 | 17.2
 | PhD or above | 11 | 2.2
Table 2. Measurement scales.

Constructs | Items | References
Perceived transparency | I can access a great deal of information that explains how the recommendation system works. | [16]
 | I can see plenty of information about the recommendation system’s inner logic. |
 | I feel that the amount of available information regarding the recommendation system’s reasoning is large. |
Trust | I would heavily rely on this recommendation system. | [64]
 | I would trust this recommendation system completely. |
 | I would feel comfortable relying on this recommendation system. |
Effectiveness | I think this recommendation system does a better job than humans. | [75]
 | I think this recommendation system is useful. |
 | I think this recommendation system can do well. |
Discomfort | This recommendation system makes me feel uncomfortable. | [75]
 | This recommendation system makes me feel resistant. |
 | This recommendation system makes me feel unsettled. |
Domain knowledge | I know a lot about recommendation systems. | [76]
 | Among my circle of friends, I am one of the “experts” on recommendation systems. |
 | Compared to most other people, I know more about recommendation systems. |
Table 3. Reliability and validity.

Constructs | Items | Standardized Factor Loading (λ) | t-Value | Cronbach’s α | CR | AVE
Perceived transparency | TRA01 | 0.960 | 29.056 | 0.971 | 0.971 | 0.919
 | TRA02 | 0.942 | 28.116
 | TRA03 | 0.973 | 29.811
Trust | TRU01 | 0.901 | 25.621 | 0.908 | 0.911 | 0.773
 | TRU02 | 0.853 | 23.443
 | TRU03 | 0.883 | 24.770
Effectiveness | EFF01 | 0.754 | 19.125 | 0.862 | 0.868 | 0.688
 | EFF02 | 0.883 | 24.236
 | EFF03 | 0.846 | 22.663
Discomfort | DIS01 | 0.894 | 25.108 | 0.916 | 0.918 | 0.789
 | DIS02 | 0.909 | 25.790
 | DIS03 | 0.861 | 23.643
Domain knowledge | KNO01 | 0.785 | 20.077 | 0.878 | 0.881 | 0.713
 | KNO02 | 0.861 | 22.887
 | KNO03 | 0.884 | 23.789
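For readers who wish to verify the convergent validity statistics, the CR and AVE values in Table 3 can be recomputed directly from the standardized loadings. The Python snippet below is a minimal illustrative sketch, not the authors’ analysis code; it assumes the standard formulas CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / n, with the loadings copied from the table.

```python
# Illustrative sketch: recomputing composite reliability (CR) and average
# variance extracted (AVE) from the standardized loadings in Table 3.
loadings = {
    "Perceived transparency": [0.960, 0.942, 0.973],
    "Trust": [0.901, 0.853, 0.883],
    "Effectiveness": [0.754, 0.883, 0.846],
    "Discomfort": [0.894, 0.909, 0.861],
    "Domain knowledge": [0.785, 0.861, 0.884],
}

for construct, lam in loadings.items():
    sum_lam = sum(lam)                      # sum of loadings
    sum_lam_sq = sum(x ** 2 for x in lam)   # sum of squared loadings
    sum_err = sum(1 - x ** 2 for x in lam)  # sum of error variances
    cr = sum_lam ** 2 / (sum_lam ** 2 + sum_err)
    ave = sum_lam_sq / len(lam)
    print(f"{construct}: CR = {cr:.3f}, AVE = {ave:.3f}")
```

Running this reproduces the CR and AVE columns of Table 3 to three decimals (e.g., CR = 0.911 and AVE = 0.773 for Trust).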
Table 4. Means, standard deviations, correlation coefficient matrix, and standard errors of all variables.

Constructs | Mean | SD | 1 | 2 | 3 | 4 | 5
1. Perceived transparency | 3.809 | 2.226 | 1 | 0.699 *** | 0.556 *** | −0.422 *** | 0.370 ***
2. Trust | 4.248 | 1.314 | 0.750 *** (0.022) | 1 | 0.745 *** | −0.639 *** | 0.468 ***
3. Effectiveness | 4.981 | 1.084 | 0.597 *** (0.033) | 0.837 *** (0.019) | 1 | −0.622 *** | 0.298 ***
4. Discomfort | 3.037 | 1.360 | −0.446 *** (0.038) | −0.706 *** (0.027) | −0.702 *** (0.028) | 1 | −0.223 ***
5. Domain knowledge | 4.258 | 1.175 | 0.385 *** (0.041) | 0.508 *** (0.038) | 0.309 *** (0.046) | −0.241 *** (0.047) | 1
Note: values below the diagonal are the inter-factor correlation coefficients from the LISREL output, with standard errors in parentheses; *** p < 0.001.
Table 5. Mediation analysis for search product.

Dependent Variable | Independent Variable | β | SE | t | LLCI | ULCI | R² | F
Effectiveness | Constant | 3.394 *** | 0.925 | 3.670 | 1.573 | 5.216 | 0.270 | 22.561
 | Transparency | 1.120 *** | 0.122 | 9.176 | 0.879 | 1.360
 | Gender | 0.039 | 0.133 | 0.292 | −0.223 | 0.301
 | Ln (Age) | −0.038 | 0.317 | −0.12 | −0.662 | 0.586
 | Ln (Income) | 0.113 | 0.081 | 1.398 | −0.046 | 0.271
Discomfort | Constant | 5.097 *** | 1.179 | 4.322 | 2.774 | 7.420 | 0.169 | 12.391
 | Transparency | −1.069 *** | 0.156 | −6.871 | −1.376 | −0.763
 | Gender | −0.025 | 0.169 | −0.146 | −0.359 | 0.309
 | Ln (Age) | −0.550 | 0.404 | −1.363 | −1.346 | 0.245
 | Ln (Income) | 0.034 | 0.103 | 0.331 | −0.168 | 0.236
Trust | Constant | −0.608 | 0.820 | −0.741 | −2.223 | 1.007 | 0.676 | 83.962
 | Transparency | 0.764 *** | 0.110 | 6.975 | 0.548 | 0.980
 | Effectiveness | 0.534 *** | 0.061 | 8.701 | 0.413 | 0.655
 | Discomfort | −0.174 *** | 0.048 | −3.624 | −0.269 | −0.080
 | Gender | 0.059 | 0.102 | 0.580 | −0.142 | 0.261
 | Ln (Age) | 0.367 | 0.245 | 1.495 | −0.117 | 0.851
 | Ln (Income) | 0.120 | 0.063 | 1.920 | −0.003 | 0.243
Note: *** p < 0.001. LLCI/ULCI are the lower and upper bounds of the 95% confidence interval.
Table 6. Mediating effect test for search product.

 | Effect | Boot SE | LLCI | ULCI
Total effect | 1.549 | 0.126 | 1.301 | 1.797
Direct effect | 0.764 | 0.110 | 0.548 | 0.980
Mediating effect (total) | 0.785 | 0.093 | 0.612 | 0.968
Transparency → Effectiveness → Trust | 0.598 | 0.088 | 0.435 | 0.777
Transparency → Discomfort → Trust | 0.187 | 0.064 | 0.073 | 0.327
(C1) Discomfort − Effectiveness | −0.412 | 0.122 | −0.651 | −0.167
Table 7. Mediation analysis for experience product.

Dependent Variable | Independent Variable | β | SE | t | LLCI | ULCI | R² | F
Effectiveness | Constant | 5.056 *** | 0.885 | 5.710 | 3.312 | 6.800 | 0.315 | 28.290
 | Transparency | 1.168 *** | 0.110 | 10.618 | 1.168 | 0.110
 | Gender | −0.027 | 0.122 | −0.222 | −0.267 | 0.213
 | Ln (Age) | −0.167 | 0.331 | −0.506 | −0.818 | 0.484
 | Ln (Income) | −0.029 | 0.073 | −0.395 | −0.174 | 0.116
Discomfort | Constant | 6.437 *** | 1.207 | 5.331 | 4.059 | 8.816 | 0.195 | 14.875
 | Transparency | −1.090 *** | 0.150 | −7.268 | −1.385 | −0.795
 | Gender | −0.160 * | 0.166 | −0.963 | −0.487 | 0.167
 | Ln (Age) | −0.946 | 0.451 | −2.099 | −1.834 | −0.059
 | Ln (Income) | 0.038 | 0.100 | 0.378 | −0.159 | 0.235
Trust | Constant | 2.925 ** | 0.875 | 3.344 | 1.202 | 4.648 | 0.716 | 102.746
 | Transparency | 0.870 *** | 0.110 | 7.928 | 0.654 | 1.087
 | Effectiveness | 0.479 *** | 0.059 | 8.168 | 0.364 | 0.595
 | Discomfort | −0.306 *** | 0.043 | −7.115 | −0.391 | −0.222
 | Gender | −0.304 ** | 0.100 | −3.042 | −0.501 | −0.107
 | Ln (Age) | −0.351 | 0.274 | −1.277 | −0.891 | 0.190
 | Ln (Income) | 0.080 | 0.060 | 1.330 | −0.038 | 0.198
Note: * p < 0.05, ** p < 0.01, *** p < 0.001. LLCI/ULCI are the lower and upper bounds of the 95% confidence interval.
Table 8. Mediating effect test for experience product.

 | Effect | Boot SE | LLCI | ULCI
Total effect | 1.764 | 0.123 | 1.522 | 2.007
Direct effect | 0.870 | 0.110 | 0.654 | 1.087
Mediating effect (total) | 0.894 | 0.096 | 0.717 | 1.088
Transparency → Effectiveness → Trust | 0.560 | 0.076 | 0.417 | 0.722
Transparency → Discomfort → Trust | 0.334 | 0.068 | 0.209 | 0.478
(C1) Discomfort − Effectiveness | −0.226 | 0.109 | −0.442 | −0.012
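The point estimates in Tables 6 and 8 follow arithmetically from the regression coefficients in Tables 5 and 7: each indirect effect is the product of its a-path (transparency → mediator) and b-path (mediator → trust), the total effect is the direct effect plus the sum of the two indirect effects, and the contrast C1 is the difference between the two indirect effects. The Python sketch below only reproduces these point estimates; it is not the authors’ PROCESS output, and the bootstrap confidence intervals in the tables would require the raw data.

```python
# Illustrative sketch: point estimates of Tables 6 and 8 derived from the
# coefficients reported in Tables 5 and 7 (bootstrap CIs omitted).
coefs = {
    # a1: Transparency -> Effectiveness, a2: Transparency -> Discomfort
    # b1: Effectiveness -> Trust,        b2: Discomfort -> Trust
    # c_prime: direct effect of Transparency on Trust
    "search":     dict(a1=1.120, a2=-1.069, b1=0.534, b2=-0.174, c_prime=0.764),
    "experience": dict(a1=1.168, a2=-1.090, b1=0.479, b2=-0.306, c_prime=0.870),
}

for product, c in coefs.items():
    ind_effectiveness = c["a1"] * c["b1"]      # Transparency -> Effectiveness -> Trust
    ind_discomfort = c["a2"] * c["b2"]         # Transparency -> Discomfort -> Trust
    total_indirect = ind_effectiveness + ind_discomfort
    total_effect = c["c_prime"] + total_indirect
    contrast_c1 = ind_discomfort - ind_effectiveness   # (C1) Discomfort - Effectiveness
    print(product,
          round(ind_effectiveness, 3), round(ind_discomfort, 3),
          round(total_indirect, 3), round(total_effect, 3), round(contrast_c1, 3))
```

The recomputed values match the tabled estimates up to rounding (e.g., 0.186 vs. the reported 0.187 for the discomfort path in the search-product condition).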
Table 9. The results of the moderating effect for search product.

Dependent Variable | Independent Variable | β | SE | t | LLCI | ULCI | R² | F
Effectiveness | Constant | 4.777 *** | 0.909 | 5.257 | 2.987 | 6.567 | 0.331 | 19.984
 | Transparency | 1.035 *** | 0.121 | 8.562 | 0.797 | 1.273
 | Domain knowledge | 0.154 ** | 0.052 | 2.976 | 0.052 | 0.256
 | Transparency × Domain knowledge | 0.348 ** | 0.099 | 3.498 | 0.152 | 0.543
 | Gender | 0.058 | 0.129 | 0.445 | −0.197 | 0.312
 | Ln (Age) | −0.215 | 0.307 | −0.702 | −0.820 | 0.389
 | Ln (Income) | 0.077 | 0.078 | 0.984 | −0.077 | 0.231
Discomfort | Constant | 3.769 ** | 1.162 | 3.243 | 1.480 | 6.058 | 0.234 | 12.350
 | Transparency | −1.021 *** | 0.155 | −6.602 | −1.325 | −0.716
 | Domain knowledge | −0.094 | 0.066 | −1.415 | −0.224 | 0.037
 | Transparency × Domain knowledge | −0.540 *** | 0.127 | −4.252 | −0.791 | −0.290
 | Gender | −0.005 | 0.165 | −0.030 | −0.331 | 0.321
 | Ln (Age) | −0.359 | 0.392 | −0.915 | −1.131 | 0.414
 | Ln (Income) | 0.061 | 0.100 | 0.613 | −0.135 | 0.258
Trust | Constant | −0.224 | 0.824 | −0.272 | −1.847 | 1.398 | 0.676 | 83.962
 | Transparency | 0.764 *** | 0.110 | 6.975 | 0.548 | 0.980
 | Effectiveness | 0.534 *** | 0.061 | 8.701 | 0.413 | 0.655
 | Discomfort | −0.174 *** | 0.048 | −3.624 | −0.269 | −0.080
 | Gender | 0.059 | 0.102 | 0.580 | −0.142 | 0.261
 | Ln (Age) | 0.367 | 0.245 | 1.495 | −0.117 | 0.851
 | Ln (Income) | 0.120 | 0.063 | 1.920 | −0.003 | 0.243
Note: ** p < 0.01, *** p < 0.001. LLCI/ULCI are the lower and upper bounds of the 95% confidence interval. The confidence intervals in the Trust equation, which were misaligned in the original layout, have been restored to match the identical coefficients, standard errors, and t-values reported in Table 5.
Table 10. The results of the moderating effect for experience product.

Dependent Variable | Independent Variable | β | SE | t | LLCI | ULCI | R² | F
Effectiveness | Constant | 5.918 *** | 0.851 | 6.957 | 4.242 | 7.594 | 0.378 | 24.679
 | Transparency | 1.062 *** | 0.113 | 9.396 | 0.839 | 1.285
 | Domain knowledge | 0.131 ** | 0.050 | 2.643 | 0.033 | 0.229
 | Transparency × Domain knowledge | 0.389 *** | 0.093 | 4.188 | 0.206 | 0.572
 | Gender | 0.063 | 0.119 | 0.530 | −0.171 | 0.296
 | Ln (Age) | −0.162 | 0.318 | −0.508 | −0.788 | 0.465
 | Ln (Income) | −0.082 | 0.073 | −1.112 | −0.226 | 0.063
Discomfort | Constant | 5.604 *** | 1.183 | 4.737 | 3.274 | 7.934 | 0.239 | 12.785
 | Transparency | −0.982 *** | 0.157 | −6.248 | −1.292 | −0.672
 | Domain knowledge | −0.134 | 0.069 | −1.945 | −0.270 | 0.002
 | Transparency × Domain knowledge | −0.418 ** | 0.129 | −3.234 | −0.672 | −0.163
 | Gender | −0.254 | 0.165 | −1.541 | −0.579 | 0.071
 | Ln (Age) | −0.949 | 0.442 | −2.145 | −1.820 | −0.077
 | Ln (Income) | 0.091 | 0.102 | 0.896 | −0.110 | 0.293
Trust | Constant | 3.362 *** | 0.881 | 3.817 | 1.627 | 5.097 | 0.716 | 102.746
 | Transparency | 0.870 *** | 0.110 | 7.928 | 0.654 | 1.087
 | Effectiveness | 0.479 *** | 0.059 | 8.168 | 0.364 | 0.595
 | Discomfort | −0.306 *** | 0.043 | −7.115 | −0.391 | −0.222
 | Gender | −0.304 ** | 0.100 | −3.042 | −0.501 | −0.107
 | Ln (Age) | −0.351 | 0.274 | −1.277 | −0.891 | 0.190
 | Ln (Income) | 0.080 | 0.060 | 1.330 | −0.038 | 0.198
Note: ** p < 0.01, *** p < 0.001. LLCI/ULCI are the lower and upper bounds of the 95% confidence interval.
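Because domain knowledge moderates only the mediator equations, conditional indirect effects implied by Tables 9 and 10 can be sketched from the reported coefficients. The Python snippet below is an illustrative sketch, not the authors’ code, and rests on two assumptions that are not stated in the tables: that domain knowledge was mean-centered (so ±1 SD corresponds to w = ±1.175, the pooled SD from Table 4) and that the index of moderated mediation is the interaction coefficient multiplied by the b-path, as in Hayes [77]. Bootstrap confidence intervals would again require the raw data.

```python
# Illustrative sketch (assumptions noted above): conditional indirect effects
# of transparency on trust via effectiveness for the search product (Table 9).
a1 = 1.035    # Transparency -> Effectiveness
a3 = 0.348    # Transparency x Domain knowledge -> Effectiveness
b1 = 0.534    # Effectiveness -> Trust
sd_w = 1.175  # SD of domain knowledge (Table 4, pooled sample; assumed centering)

def conditional_indirect(w):
    """Indirect effect via effectiveness when domain knowledge deviates by w from its mean."""
    return (a1 + a3 * w) * b1

index_of_moderated_mediation = a3 * b1       # ~0.186 per unit of domain knowledge
low_knowledge = conditional_indirect(-sd_w)  # ~0.334 at 1 SD below the mean
high_knowledge = conditional_indirect(+sd_w) # ~0.771 at 1 SD above the mean
print(index_of_moderated_mediation, low_knowledge, high_knowledge)
```

The same arithmetic applies to the discomfort path and to the experience-product coefficients in Table 10.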
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
