Article

An Extended Framework for Recovering From Trust Breakdowns in Online Community Settings

Ruchdee Binmad, Mingchu Li, Zhen Wang, Nakema Deonauth and Chettupally Anil Carie
1 School of Software Technology, Dalian University of Technology, Dalian 116023, China
2 Faculty of Management Sciences, Prince of Songkla University, Hatyai, Songkhla 90110, Thailand
3 School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Future Internet 2017, 9(3), 36; https://doi.org/10.3390/fi9030036
Submission received: 8 June 2017 / Revised: 1 July 2017 / Accepted: 12 July 2017 / Published: 17 July 2017

Abstract: The violation of trust as a result of interactions that do not proceed as expected raises the question of whether broken trust can be recovered. Clearly, trust recovery is more complex than trust initialization and maintenance: it requires a more elaborate mechanism to explore the different factors that cause the decline of trust and to identify the individuals directly and indirectly affected by the violation. In this study, an extended framework for recovering trust is presented. Aside from evaluating whether there is potential for recovery based on the outcome of a forgiveness mechanism, the framework encourages cooperation between the interacting parties after a trust violation through incentive mechanisms. Furthermore, a number of experiments are conducted to validate the applicability of the framework, and the findings show that an e-marketplace incorporating the proposed framework trades more efficiently, especially in long-term interactions.

1. Introduction

The evolution of the Internet has provided a variety of modalities of interaction, allowing a diverse range of artificial software agents to cooperate, negotiate, delegate and so on [1]. These new online interaction channels contribute to the growth of e-marketplaces, facilitating the exchange of goods between individual parties. Moreover, successful e-business transactions crucially depend on establishing, maintaining and managing trust in an online setting [2,3].
Many computational trust models use reputation information to enhance the level of trust among their members. Reputation information can serve as a guideline for assessing possible interactions and for selecting and understanding the behaviours of trading partners [4]. Most reputation systems derive the reputation of interacting parties from historical information, either reported through each participant’s direct observation or obtained as indirect feedback from others [5,6]. In practice, however, past experience information about a party is sometimes unavailable. For example, when a new agent is deployed for the first time, historical information about its behaviour does not yet exist. As a result, it is hard to decide whether the newly deployed agent is fraudulent or trustworthy, and it is also difficult for the agent to compete with existing agents. This is known as the reputation bootstrapping problem, and many studies have proposed different approaches to solve it.
For example, a mathematical model based on probability theory has been proposed to identify the relationship between trust, reputation and reciprocity of agents in dyadic interactions [7]. To initialize the trust value of an agent, the notion of a stereotype reputation has been introduced based on information sharing among agents [8,9]. In [10], an initial trust value is calculated from defective transaction rates submitted by consumer agents; if no consumer agents exist for a specific provider agent, or the collected defective transaction rates are insufficient, the default trust is the average of all existing consumer agents’ reputations. A more advanced method is proposed by Skopik et al. [11], in which trust mirroring and trust teleportation based on profile similarities are used to assign an agent’s initial reputation value. In a service-oriented environment, Wu et al. [12] apply an artificial neural network (ANN) to find correlations between the features and performance of existing services in order to establish a tentative reputation for an unknown service. Similarly, Yahyaoui et al. [13] bootstrap the trust of web services by matching the observation sequence of a web service and a user, modelled as a hidden Markov model, against pre-defined trust patterns. In a peer-to-peer (P2P) environment, Javanmardi et al. [14] introduce a reputation model based on fuzzy theory to compute a peer’s trust value in a semantic P2P grid where nodes are grouped according to the semantic similarities between their services.
However, when trust is violated as a result of interactions that do not proceed as expected, the question arises of whether the broken trust can be recovered in a manner analogous to trust initialization for newly entering agents [15]. Clearly, recovering trust is more difficult than bootstrapping trust, as a violation may cause the transgressor’s trust to plunge below its initial level and lead to a negative state of motivation [16]. As suggested in [17,18], trust recovery requires a more complex mechanism to explore the different factors that cause the decline of trust, and it can also be used to heal the initial negative motivations of those involved. Undoubtedly, some trust violations, for example intentional fraud or highly severe offences, do not deserve to be repaired. Violations caused by an unintentional error or a mistake, on the other hand, should be eligible for a trust recovery mechanism. As such, an understanding of the factors that are applicable to trust recovery is essential when dealing with a trust crisis. In addition, in the context of online communities, agents are self-organized and connected to each other, forming communities according to their common interests. Therefore, identifying the individuals in these communities who are directly and indirectly affected by a trust violation, in order to obtain sources of trust recovery information, is also an important process.
In this study, we strengthen the concept and computational model of the forgiveness mechanism in our previous work [19] by incorporating two types of incentive mechanisms, namely financial and reputational incentives, to encourage cooperation among the involved participants. The rest of the paper is structured as follows. Section 2 presents related work on various trust recovery mechanisms. Section 3 is dedicated to our extended framework. A series of experiments and results are provided in Section 4. In the last section, we conclude our study and outline possible future work.

2. Related Work

To facilitate the repair of trust breakdowns, the best-known online auction site, eBay, uses a feature called feedback revision (formerly mutual feedback withdrawal) [20]. Since there are many reasons for buyers to leave negative feedback, eBay’s feedback revision gives the seller an opportunity to identify, apologize for, and solve the buyer’s problem. Once the buyer is satisfied with the seller’s response, the seller sends a request asking the buyer to revise their negative feedback. A study of the German eBay website by Abeler et al. [21] confirms that an apology, the cheapest tool, works more effectively than monetary compensation strategies in inducing consumers to withdraw their neutral and negative evaluations.
To allow untrustworthy agents to rebuild their reputation, two main prosocial motivations are required: forgiveness and regret. A combinatorial framework of trust, reputation and forgiveness, called the DigitalBlush System, is proposed by Vasalou et al. [22]. The system is inspired by human forgiveness, and uses expressions of shame and embarrassment to elicit potential forgiveness from others in the society. In more detail, the offender’s natural reactions to shame and embarrassment (i.e., the blush) can prompt sympathy or forgiveness from the victim. However, misinterpreted emotional signals can be more problematic than not using them at all. In subsequent works by Vasalou et al. [23,24], when trust breaks down, the trustworthiness of the offender is assessed by identifying a number of motivation constituents [23]. If the result is positive, the victim is presented with those motivation constituents to consider before reassigning a reputation value to the offender. In particular, this intervention mechanism intends to alleviate the victim’s negative attributions, while at the same time it aims to prevent the unintentional or infrequent offender from receiving an unfair judgement. In [24], trust repair in one-off online interactions is investigated through an experiment showing that systems designed to stimulate forgiveness can restore a victim’s trust in the offender. In [25,26], attribution models are proposed for identifying the causes of trust violation and rebuilding damaged trust through reparative actions based on emotion and motivation.
The concept of regret has been proposed by Marsh et al. [27] as a cognitive inconsistency. Regret can occur on the side of the truster, the trustee, or both. A truster feels regret because a positive trust decision was betrayed by a trustee; in other words, a truster’s regret occurs when their expectations of the interaction with a trustee are violated and the corresponding betrayal produces severe damage to trust. A trustee feels regret because a negative trust decision was erroneous; that is, a trustee expresses regret for what they have done, whether it was a wrongdoing or not. Both the truster and the trustee can feel regret over a missed opportunity, for what they did not do. Forgiveness and regret are treated as implementable properties for formalizing trust in a computational model.
The forgiveness factor has been used as an extension of classical reputation models [28,29]. It takes an optimistic view of reconciliation, based on the observation that individuals are more likely to forgive someone who committed an offence that seems distant, rather than close, in time. In other words, an agent should always forgive after a sufficiently long time has passed without any interaction; moreover, the agent should then reset its partner’s reputation to the initial value or increase it towards the highest possible value. Choi et al. [18] analyse the implications of different trust violations and reconciliation tactics based on an agent-based simulation model. The reconciliation tactics needed vary depending on the severity of the offence. In particular, highly effective reconciliation efforts always produce better outcomes in rebuilding trust, but also incur a higher cost. Trust-repairing strategies are proposed by Chen et al. [17] to induce positive moods in consumers (as the victims). E-vendors (as the offenders) should respond to negative events by initiating various trust-repairing strategies, e.g., apology, adequate information and financial compensation, to change the emotional state of the consumers. The results of this study show that informational repair, e.g., clarifying facts and/or updating information, is the most effective strategy for dealing with negative feelings, especially for online consumers in Taiwan.
Most existing approaches consider sources of information only from dyadic interactions [24,27]; our approach combines information from both dyadic and community interactions. Furthermore, some approaches apply only one condition (e.g., regret or forgetting) when evaluating the transgressor’s forgiveness value [27,28,29]. In contrast, our forgiveness mechanism combines five different factors to ensure that a potential transgressor will engage successfully in future transactions. Moreover, to the best of our knowledge, most existing approaches do not provide mechanisms to encourage future interactions after a reconciliation process, whereas our approach promotes future interactions through incentive mechanisms.
There is also an argument about whether forgetting should be part of a reconciliation process when a transgression occurs. For example, Ambrose et al. [30] state that the social benefits of issuing forgiveness can be fully obtained only if the transgression can be forgotten. This is in line with the study by Bishop et al. [31], which finds a strong relation between forgiving and forgetting and explores different approaches to dealing with information the transgressor wishes to be forgotten on the Web, e.g., controlling information diffusion by revoking access, or hiding the wrong deed by creating false information or flooding with large amounts of similar information. However, in contrast to these studies, Vasalou et al. [32] and Exline et al. [33] hold that a trust violation cannot be completely forgotten, so the transgressor receives a new trust value for which full recovery is impractical. Our study adopts the same position.

3. Extended Framework

In our previous work [19], a forgiveness mechanism is defined as a function of five positive motivations: intent, history, apology, severity and importance. Intent refers to the extent to which a trust violation is more or less forgivable depending on the victim’s attribution of the transgressor’s intention. History is one of the key components of trust rebuilding and can increase the tendency towards the victim’s forgiveness. An apology is expected either from the transgressor him/herself, as an interpersonal apology, from his/her organization, as a corporate apology, or from both. Severity acts similarly to intent: the greater the severity of the trust violation, the less positive the judgement, whereas forgiveness is more likely to be offered when the violation is perceived as less severe. Importance refers to the competence of the transgressor to provide a product or service to the victim or other members of the community. If the need for the product or service provided by the transgressor is high, forgiveness tends to be granted in order to fulfil the transaction’s requirement and avoid a lack of available service providers, even though the outcomes of future transactions may not be maximized.
Table 1 presents a mapping between the positive motivations and the three sources of forgiveness. As can be seen from the table, our forgiveness mechanism is an aggregation function of both subjective and objective views.
The subjective view ($F_v$) is a forgiveness value computed from the point of view of the victimized individual:
$$F_v = PM_{hi} + PM_{ap} + PM_{im} - PM_{in} - PM_{se} \quad (1)$$
The objective views are forgiveness values assessed at the community level, i.e., from the victim community and the transgressor community, respectively:
$$F_{vc} = \sum_{i \in vc,\, i \neq v} \left( PM_{hi}(i) - PM_{se}(i) \right) + PM_{ap} + PM_{im}, \quad (2)$$
$$F_{tc} = PM_{ap} + PM_{im} - \sum_{i \in tc} PM_{se}(i) \quad (3)$$
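To make the aggregation concrete, the following minimal Java sketch (ours, not the implementation used in [19]; the PositiveMotivations container and the example values are illustrative assumptions) computes $F_v$, $F_{vc}$ and $F_{tc}$ directly from Equations (1)–(3), treating each positive motivation as a scalar score.

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative container for the five positive motivations reported by one evaluator. */
class PositiveMotivations {
    final double intent, history, apology, severity, importance;

    PositiveMotivations(double intent, double history, double apology,
                        double severity, double importance) {
        this.intent = intent;
        this.history = history;
        this.apology = apology;
        this.severity = severity;
        this.importance = importance;
    }
}

public class ForgivenessAggregation {

    /** Equation (1): subjective forgiveness from the victim's point of view. */
    static double victimForgiveness(PositiveMotivations pm) {
        return pm.history + pm.apology + pm.importance - pm.intent - pm.severity;
    }

    /** Equation (2): forgiveness aggregated over the victim community (members other
     *  than the victim), plus the shared apology and importance terms. */
    static double victimCommunityForgiveness(List<PositiveMotivations> members,
                                             double apology, double importance) {
        double sum = 0.0;
        for (PositiveMotivations pm : members) {      // i in vc, i != v
            sum += pm.history - pm.severity;
        }
        return sum + apology + importance;
    }

    /** Equation (3): forgiveness assessed by the transgressor community. */
    static double transgressorCommunityForgiveness(List<Double> memberSeverities,
                                                   double apology, double importance) {
        double sum = 0.0;
        for (double severity : memberSeverities) {
            sum += severity;
        }
        return apology + importance - sum;
    }

    public static void main(String[] args) {
        PositiveMotivations victim = new PositiveMotivations(0.1, 0.6, 0.5, 0.3, 0.4);
        double fv = victimForgiveness(victim);
        double fvc = victimCommunityForgiveness(
                Arrays.asList(new PositiveMotivations(0.0, 0.4, 0.5, 0.2, 0.4)), 0.5, 0.4);
        double ftc = transgressorCommunityForgiveness(Arrays.asList(0.3, 0.2), 0.5, 0.4);
        System.out.printf("F_v = %.2f, F_vc = %.2f, F_tc = %.2f%n", fv, fvc, ftc);
    }
}
```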
The following subsections describe how we extend our forgiveness mechanism, i.e., by stimulating cooperation between the victim and a potential transgressor through an incentive mechanism, and by defining a range for the new reputation value, the Zone of Forgivability.

3.1. Incentive Mechanism

In order to encourage a victim and members of victim community to cooperate with a transgressor in future transactions, we propose two types of incentives:
  • Financial incentive. A reparative action in the form of monetary compensation provided by the transgressor can be a primary functional recovery effort in a business environment [34]. In addition, reducing the product or service price can make the transgressor more attractive and more competitive [35]. In this study, the financial incentive takes the form of a one-time discount on the product price offered by the transgressor to the victim and to the victim community members. The value of the discount differs depending on the forgiveness value contributed by the victim and by the victim community members. For the victim, the discount is calculated as:

    $$Disc_v = \omega_v F_v(x, y, Y) \times P_t(y), \quad (4)$$

    where $F_v(x, y, Y)$ is the forgiveness value evaluated by the victim x for the violation committed by the transgressor y, who is a member of the community Y; $\omega_v$ is a weight factor for the forgiveness value provided by the victim x; and $P_t(y)$ is the product or service price offered by the transgressor y. We calculate the discount for members of the victim community by assuming that each member contributes equally to the forgiveness assessment:

    $$Disc_{vc} = \frac{\omega_{vc} F_{vc}(X, y, Y)}{N_{m_{vc}}} \times P_t(y), \quad (5)$$

    where $F_{vc}(X, y, Y)$ is the forgiveness value aggregated from the members of the victim community X for the violation committed by the transgressor y, who is a member of the community Y; $\omega_{vc}$ is a weight factor for the forgiveness value provided by the victim community X; and $N_{m_{vc}}$ is the number of victim community members.
  • Reputational incentive. The transgressor has an obvious incentive to be trustworthy in order to rebuild its reputation. A one-time price discount, as reparative monetary compensation, can motivate cooperation and thereby increase the transgressor’s reputation. We evaluate the reputational incentive from cooperation between the transgressor and the victim according to the expression:

    $$RI_v = \begin{cases} \dfrac{Disc_v}{P_t^{max}} \times (1 - R_{old}^t) & \text{if } R_{old}^t \geq 0 \\[4pt] \dfrac{Disc_v}{P_t^{max}} \times (1 + R_{old}^t) & \text{if } R_{old}^t < 0 \end{cases} \quad (6)$$

    where $P_t^{max}$ is the maximum price of the product/service offered by the transgressor and $R_{old}^t$ is the transgressor’s reputation after trust is violated, i.e., before considering forgiveness. The reputational incentive from cooperation between the transgressor and the victim community members is aggregated as follows:

    $$RI_{vc} = \begin{cases} \dfrac{Disc_{vc}}{P_t^{max}} \times N_{m_{vc}} \times (1 - R_{old}^t) & \text{if } R_{old}^t \geq 0 \\[4pt] \dfrac{Disc_{vc}}{P_t^{max}} \times N_{m_{vc}} \times (1 + R_{old}^t) & \text{if } R_{old}^t < 0 \end{cases} \quad (7)$$
Therefore, the total reputational incentive the transgressor can obtain from cooperating, by providing financial incentives to the victim and the victim community members, is:
$$RI_{total} = RI_v + RI_{vc} \quad (8)$$
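As a rough illustration of Equations (4)–(8), the following Java sketch computes the discounts and reputational incentives. It is not the authors' implementation; in particular, the reading of Equation (7) in which the per-member incentive is summed over the $N_{m_{vc}}$ community members, and all numeric values in the main method apart from the Table 2 weight factors, are our assumptions.

```java
public class IncentiveMechanism {

    /** Equation (4): one-time discount offered to the victim. */
    static double victimDiscount(double weightV, double forgivenessV, double price) {
        return weightV * forgivenessV * price;
    }

    /** Equation (5): per-member discount offered to each victim community member. */
    static double communityMemberDiscount(double weightVc, double forgivenessVc,
                                          int communitySize, double price) {
        return (weightVc * forgivenessVc / communitySize) * price;
    }

    /** Equations (6) and (7) share the same factor: how much room the old reputation
     *  leaves for improvement (or how deep in distrust it already is). */
    static double reputationalFactor(double oldReputation) {
        return (oldReputation >= 0) ? (1.0 - oldReputation) : (1.0 + oldReputation);
    }

    /** Equation (6): reputational incentive from cooperating with the victim. */
    static double victimReputationalIncentive(double discV, double maxPrice, double oldReputation) {
        return (discV / maxPrice) * reputationalFactor(oldReputation);
    }

    /** Equation (7), read here as the per-member incentive summed over all community members. */
    static double communityReputationalIncentive(double discVc, double maxPrice,
                                                 int communitySize, double oldReputation) {
        return (discVc / maxPrice) * communitySize * reputationalFactor(oldReputation);
    }

    public static void main(String[] args) {
        double price = 150.0, maxPrice = 200.0, oldReputation = -0.2;   // hypothetical values
        double discV = victimDiscount(0.5, 0.8, price);                 // omega_v = 0.5 (Table 2)
        double discVc = communityMemberDiscount(0.3, 1.2, 10, price);   // omega_vc = 0.3 (Table 2)
        double riTotal = victimReputationalIncentive(discV, maxPrice, oldReputation)
                + communityReputationalIncentive(discVc, maxPrice, 10, oldReputation); // Eq. (8)
        System.out.printf("Disc_v = %.2f, Disc_vc = %.2f, RI_total = %.3f%n", discV, discVc, riTotal);
    }
}
```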

3.2. Zone of Forgivability

Thus far, we have identified the transgressor’s forgiveness value evaluated by the victim, the victim community and the transgressor community, together with the reputational incentive expected from future transactions in which financial incentives serve as the reparative mechanism. The transgressor’s new reputation value is simply the sum of these two values and the transgressor’s existing reputation value before considering forgiveness:
$$R_{new}^t = R_{old}^t + F_{total}(t) + RI_{total} \quad (9)$$
The result of Equation (9) should have boundary values, reflecting the fact that a violation that is forgiven should not be completely forgotten [32,33]. Marsh et al. [27] introduce the concept of the Limit of Forgivability as a minimum baseline trust value for determining whether a transgressor is worth entering into redemption strategies with. However, the concept does not state how large the boundary values after granting forgiveness should be, which means the transgressor’s trust could be fully reinstated.
In this study, the boundary values are indicated by the Zone of Forgivability shown in Figure 1. Specifically, the minimum boundary value of forgivability is determined as:
$$Min_{forgivability} = Max_{untrust} + F_{threshold}, \quad (10)$$
where $Max_{untrust}$ is the reputation value of a maximally untrustworthy service provider (which is −1) and $F_{threshold}$ is a constant forgiveness threshold. A transgressor is considered a potential candidate for recovery if the aggregation of its existing reputation value and the forgiveness assessment exceeds the minimum value of forgivability, that is, $R_{old}^t + F_{total}(t) \geq Min_{forgivability}$. Note that the cooperation threshold is used as the maximum value of forgivability, which means that if the calculated reputation value exceeds this maximum boundary, the cooperation threshold is used as the new reputation:
$$R_{new}^t = \begin{cases} R_{new}^t & \text{if } R_{new}^t < C_{threshold} \\ C_{threshold} & \text{if } R_{new}^t \geq C_{threshold} \end{cases} \quad (11)$$
where $C_{threshold}$ is a constant cooperation threshold. It is worth noting that the Zone of Forgivability still prevents a potential transgressor from being directly selected for future transactions when $R_{new}^t$ is less than the cooperation threshold. However, the transgressor is not completely rejected; rather, the value serves as a baseline for interacting counterparts to incorporate additional information, e.g., cost and quality, into their decision-making [36].
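The following Java sketch is an illustrative reading of Equations (9)–(11): it screens a recovered reputation against the minimum boundary of forgivability and caps it at the cooperation threshold. The threshold constants are taken from Table 2 (their signs as printed there are an assumption), and the values in the main method are hypothetical.

```java
public class ZoneOfForgivability {

    static final double MAX_UNTRUST = -1.0;   // reputation of a maximally untrustworthy provider
    static final double F_THRESHOLD = 0.5;    // forgivability threshold (Table 2, assumed sign)
    static final double C_THRESHOLD = 0.5;    // cooperation threshold (Table 2, assumed sign)

    /** Equation (10): minimum boundary value of the Zone of Forgivability. */
    static double minForgivability() {
        return MAX_UNTRUST + F_THRESHOLD;
    }

    /** A transgressor qualifies for recovery when R_old + F_total >= Min_forgivability. */
    static boolean isPotentialTransgressor(double oldReputation, double totalForgiveness) {
        return oldReputation + totalForgiveness >= minForgivability();
    }

    /** Equation (9) combined with Equation (11): the recovered reputation is capped
     *  at the cooperation threshold so that full recovery is never granted. */
    static double newReputation(double oldReputation, double totalForgiveness,
                                double totalReputationalIncentive) {
        double rNew = oldReputation + totalForgiveness + totalReputationalIncentive;
        return Math.min(rNew, C_THRESHOLD);
    }

    public static void main(String[] args) {
        double rOld = -0.3, fTotal = 0.4, riTotal = 0.6;   // hypothetical values
        if (isPotentialTransgressor(rOld, fTotal)) {
            System.out.printf("new reputation = %.2f%n", newReputation(rOld, fTotal, riTotal));
        } else {
            System.out.println("transgressor not eligible for recovery");
        }
    }
}
```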

4. Experiments and Results

We have conducted a number of experiments to validate the applicability of the proposed framework based on forgiveness and incentive mechanisms. We use the Java programming environment to simulate the transacting agents and their interactions in e-marketplaces. The main purpose of our experiments is to evaluate the efficiency of the simulated e-marketplaces by comparing the implementation of our framework with three other approaches:
  • ALLD: the e-marketplace with a defective attitude and punishment mechanism. This type of marketplace does not consider forgiveness as a part of the online transaction process. Transactions take place only when consumers trade with trustworthy service providers.
  • ALLC: the e-marketplace with a cooperative attitude that always forgives a service provider who violates a transactional agreement. In this type of marketplace, we apply the concept of forgiveness based on quiescence time proposed by [29]. In other words, following the defective transaction, if the current transaction is successful or not carried out after a sufficient quiescence period, the reputation of the transgressive service provider will be slightly increased.
  • Griffiths: the e-marketplace based on the model proposed by Griffiths [36]. This type of marketplace is more competitive but, like ALLD, does not implement forgiveness. However, transactions can still take place when an interacting service provider is untrustworthy, by switching to another trustworthy service provider selected according to some discrete or probabilistic values; in our study, the service provider with the highest reputation is chosen.

4.1. Experimental Setting

In our simulated e-marketplaces, service providers and consumers interact over the same kind of product with homogeneous acceptable quality. At the outset, the numbers of service providers and consumers are 50 and 100, respectively. Each service provider has the same number of products, 1000 units, and only one unit is traded in each transaction. Service providers and consumers value the product differently: the price is randomly distributed between 100 and 200 for service providers and between 200 and 250 for consumers.
In each time period, transactions are carried out by randomly matching service providers and consumers. In other words, there is a maximum of 100 transactions in each time period, in which some service providers may be chosen for trading more than once and some may not be chosen at all. The reputation value of a service provider lies in the range between −1 and 1 and is updated after each completed transaction as follows:
$$R_{sp}^t = \frac{\sum_{i=0}^{t} \Gamma_{sp}^i \times R_{sp}^i}{\sum_{i=0}^{t} \Gamma_{sp}^i}, \quad (12)$$
where $\Gamma_{sp}$ is the service provider’s reputation fading factor. As the recent reputation value is more important than the older one, we simply set $\Gamma_{sp}^{t=1} = 0.8$ and $\Gamma_{sp}^{t=0} = 0.2$. We define a trustworthy service provider as an agent with a positive reputation value ($R_{sp} > 0$), whereas an untrustworthy service provider is an agent with a non-positive reputation value ($R_{sp} \leq 0$). All service providers are bootstrapped by initially assigning a reputation value of 0.5.
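A minimal Java sketch of this update rule is given below. It assumes, as one reading of Equation (12) with the two fading factors above, that the new reputation is a normalized weighted average of the previous reputation (weight 0.2) and the latest transaction rating (weight 0.8), applied recursively after every transaction; the example ratings are hypothetical.

```java
public class ReputationUpdate {

    // Fading factors from Section 4.1 and Table 2: the latest outcome is weighted 0.8,
    // the previous reputation 0.2.
    static final double GAMMA_PREVIOUS = 0.2;
    static final double GAMMA_CURRENT = 0.8;

    /** Equation (12) under the two-term reading: a normalized weighted average of the
     *  previous reputation and the latest transaction rating, both in [-1, 1]. */
    static double update(double previousReputation, double latestRating) {
        return (GAMMA_PREVIOUS * previousReputation + GAMMA_CURRENT * latestRating)
                / (GAMMA_PREVIOUS + GAMMA_CURRENT);
    }

    public static void main(String[] args) {
        double reputation = 0.5;                // bootstrapped initial reputation
        double[] ratings = {1.0, 1.0, -1.0};    // hypothetical transaction outcomes
        for (double rating : ratings) {
            reputation = update(reputation, rating);
            System.out.printf("updated reputation = %.3f%n", reputation);
        }
    }
}
```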
After each transaction, we calculate the consumer’s utility according to the law of diminishing marginal utility in economics [37], using an exponential function. The utility is proportional to the last updated reputation of the interacting service provider and is defined as follows:
$$U_c^t = \begin{cases} U_{max} & \text{if } R_{sp}^t = 1 \\ U_{max} \times (1 - e^{-\lambda R_{sp}^t}) & \text{if } 0 \leq R_{sp}^t < 1 \\ U_{max} \times (e^{\lambda R_{sp}^t} - 1) & \text{if } -1 < R_{sp}^t < 0 \\ U_{min} & \text{if } R_{sp}^t = -1 \end{cases} \quad (13)$$
where $\lambda$ is a constant scaling factor controlling how quickly utility increases or decreases, set to 5 in our experiments, and $U_{max}$ and $U_{min}$ are the maximum and minimum utility a consumer can gain in each transaction, assumed to be 1000 and −1000, respectively. Additionally, a maximum of five service providers are randomly marked as defective in every time period. A transaction between a service provider and a consumer is carried out if it satisfies one of the following conditions:
  • The service provider is trustworthy ($R_{sp} > 0$) and its number of products is greater than 0.
  • The service provider is defective but still has a positive reputation ($R_{sp} > 0$) and its number of products is greater than 0.
  • The service provider is untrustworthy ($R_{sp} \leq 0$), its number of products is greater than 0, and the number of forgiveness interventions (denoted $\eta$) is not more than 2, i.e., each untrustworthy service provider can be granted forgiveness only twice.
More specifically, conditions (1) and (2) apply to all simulated e-marketplaces (our framework, ALLD, ALLC and Griffiths), while condition (3) applies only to the e-marketplace incorporating our proposed framework. All parameter settings used in the experiments are summarized in Table 2.
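To illustrate Equation (13), the following Java sketch maps a provider's reputation to consumer utility using the parameter values from Table 2. The sign reconstruction of the negative branch is our assumption, chosen so that utility falls smoothly towards $U_{min}$ as the reputation approaches −1.

```java
public class ConsumerUtility {

    static final double U_MAX = 1000.0;   // maximum utility per transaction
    static final double U_MIN = -1000.0;  // minimum utility per transaction
    static final double LAMBDA = 5.0;     // scaling factor lambda from Section 4.1

    /** Equation (13): maps the provider's reputation in [-1, 1] to consumer utility,
     *  with diminishing marginal utility on both the positive and the negative side. */
    static double utility(double reputation) {
        if (reputation >= 1.0) {
            return U_MAX;
        }
        if (reputation <= -1.0) {
            return U_MIN;
        }
        if (reputation >= 0.0) {
            return U_MAX * (1.0 - Math.exp(-LAMBDA * reputation));
        }
        // -1 < reputation < 0: assumed sign so that utility decreases towards U_MIN
        return U_MAX * (Math.exp(LAMBDA * reputation) - 1.0);
    }

    public static void main(String[] args) {
        for (double r : new double[]{-1.0, -0.5, 0.0, 0.5, 1.0}) {
            System.out.printf("R_sp = %+.1f  ->  utility = %+9.2f%n", r, utility(r));
        }
    }
}
```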

4.2. Experiment 1: Robustness Evaluation

The purpose of this experiment is to examine how robustly the e-marketplace can operate as the number of untrustworthy service providers continuously increases over time and new service providers are not allowed to join the marketplace. The number of service providers is initially set to 50, all of them trustworthy. At the end of each time period, however, some defective service providers become untrustworthy and are forced to leave the marketplace. As a result, the number of untrustworthy service providers continuously increases and ultimately leads to the failure of the e-marketplace due to a lack of trustworthy service providers.
In the first sub-experiment, we compare the average reputation values of service providers in the different e-marketplaces. The average reputation value is calculated only from the reputations of trustworthy service providers. As shown in Figure 2a, before the collapse of the e-marketplaces, the e-marketplace incorporating our framework maintains a more stable average reputation value and a longer period of interaction than all other e-marketplaces. The average reputation values in ALLD and Griffiths nosedive to the lowest value much more quickly, about three times faster than in ALLC and four times faster than in our framework.
In the second sub-experiment, we examine the number of market transactions generated in different time periods. The results are shown in Figure 2b. At the beginning, all e-marketplaces successfully generate market transactions, as all service providers are trustworthy. Over time, as the number of untrustworthy service providers increases, the transaction volume in ALLD drops much more quickly, while ALLC and the e-marketplace incorporating our framework decrease at a slower pace. A considerably different result can be observed in Griffiths, where the maximum number of market transactions is generated in each time step as long as trustworthy service providers remain available to consumers; after that, there is a sudden drop in transaction volume, as only defective service providers remain in the e-marketplace.
In the third sub-experiment, we compare the numbers of untrustworthy service providers as time progresses. As can be seen in Figure 3a, the number of untrustworthy service providers increases steadily until trustworthy service providers are no longer available to consumers. However, the rate of increase in the e-marketplace incorporating our framework is much lower than in ALLD, Griffiths and ALLC. In other words, when new service providers are not allowed to join, our framework enables the e-marketplace to operate more efficiently than the other approaches.
In the fourth sub-experiment, we compare the utilities obtained by all consumers at different time points, calculated using Equation (13). As depicted in Figure 3b, the consumers in ALLC and in the e-marketplace incorporating our framework gain higher utility because of the cooperative setting, with conditions in our framework and without conditions in ALLC. The consumers in Griffiths suffer a sudden drop in utility, similar to the transaction volume result in Figure 2b, due to the rapid decline in the number of trustworthy service providers.

4.3. Experiment 2: Comparison of Dynamic E-Marketplaces

The purpose of this experiment is to investigate the efficiency of the proposed framework by evaluating the average reputation values, total market transactions, market transactions at different time points, the number of untrustworthy service providers and the utilities obtained at different time points in dynamic e-marketplaces. Initially, the e-marketplaces have the same number of trustworthy service providers as in Experiment 1 (i.e., 50). In this experiment, however, we allow new service providers to join the e-marketplace at the end of each time period. The number of newly joined service providers is equal to the number of untrustworthy service providers leaving the e-marketplace in that period. As a consequence, the number of trustworthy service providers remains constant over the course of the experiment. For simplicity, the results of each sub-experiment are analysed after 1500 time periods.
In the first sub-experiment, we compare the average reputation values of service providers in the different e-marketplaces. As shown in Figure 4, the average reputation values in the e-marketplace incorporating our framework are slightly higher than in ALLD and ALLC, and almost equivalent to those in Griffiths. All e-marketplaces maintain high average reputation values throughout the entire experiment, although implementing different reputation approaches clearly results in different outcomes.
In the second sub-experiment, the total transactions and the transaction volumes in different time periods are measured. In Figure 5a, ALLC has the highest total number of transactions of all the e-marketplaces as a result of its optimistic setting. Interestingly, the total transaction volume of Griffiths increases sharply and then remains nearly constant after time period 720, as the ability to switch to other trustworthy service providers makes the majority of them run out of products. The rate of increase in the total number of transactions in the e-marketplace incorporating our framework is greater than in ALLD and Griffiths (after time period 1050). Figure 5b compares the transaction volumes generated by the different e-marketplaces in different time periods. Again, we notice a sudden drop in the transaction volume generated in Griffiths, similar to the results in Figure 2b. The reason for this drop-off, however, is not a lack of trustworthy service providers but rather a lack of available products offered by trustworthy service providers. Conversely, the decreasing number of market transactions in ALLD, ALLC and the e-marketplace incorporating our framework is mainly due to the increasing number of untrustworthy service providers. Clearly, for an e-marketplace with limited inventory, our framework is capable of dealing with a constant number of defective service providers, helping the e-marketplace to operate and generate market transactions better than the other approaches.
In the third sub-experiment, the numbers of untrustworthy service providers in the different e-marketplaces are compared. As can be seen in Figure 6a, the e-marketplace incorporating our framework and ALLC have the lowest numbers of untrustworthy service providers, as these e-marketplaces benefit from the forgiveness mechanism. Interestingly, the number of untrustworthy service providers in Griffiths far exceeds that in all other e-marketplaces, owing to its rate of product consumption. In other words, the ability to switch to other trustworthy service providers quickly reduces the number of service providers with available products and at the same time increases the tendency of defective service providers to become untrustworthy.
In the fourth sub-experiment, the utilities of each consumer, calculated according to Equation (13), are aggregated and compared. Figure 6b confirms the results of the second sub-experiment and can be interpreted in the same manner as Figure 5a,b.

5. Conclusions and Future Work

In this study, we have presented an extended framework that augments the forgiveness mechanism with incentive mechanisms encouraging the victim and the members of the victim community to cooperate with the transgressor in future transactions. We propose two types of incentives: a financial incentive, as a reparative action provided by the transgressor, and a reputational incentive for the transgressor to cooperate in order to rebuild trust. The outcome of the proposed framework can be used to decide whether there is potential for recovery after a trust violation. However, even though a violation of norms can be forgiven, it should not be completely forgotten; therefore, the Zone of Forgivability is introduced, indicating the minimum and maximum boundary values of forgivability. We further conducted a series of experiments to evaluate the effectiveness of the proposed framework through a comparison of different simulated e-marketplaces. The findings show that incorporating the proposed framework makes the e-marketplace more efficient, especially in long-term interactions, as a result of collaboration with potential counterparts whose trust is conditionally recovered after a violation.
Several issues are worth addressing in future work. For example, risk assessment is necessary when allowing the trust of untrustworthy counterparts to be recovered, especially in risky environments; incorporating risk assessment into the framework could make the trust recovery process more robust. Another issue concerns the incentive mechanism as a reparative action: apart from monetary compensation, it would be interesting to let the individuals affected by a trust violation negotiate other attributes such as quality, warranties and delivery time. Moreover, it may be of interest to validate the effectiveness of our approach by extending the experiments to more varied, dynamic e-marketplace environments.

Acknowledgments

This work is supported by the Natural Science Foundation of China (NSFC) under grant numbers 61572095 and 61272173.

Author Contributions

Ruchdee Binmad and Mingchu Li conceived and designed the framework; Ruchdee Binmad performed the experiments; Ruchdee Binmad, Mingchu Li, Zhen Wang, Nakema Deonauth, and Chettupally Anil Carie analyzed the experimental results; Ruchdee Binmad and Nakema Deonauth wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Falcone, R.; Castelfranchi, C. Socio-Cognitive Model of Trust. In Encyclopedia of Information Science and Technology; IGI Publishing: Hershey, PA, USA, 2005; pp. 2534–2538.
  2. Braynov, S.; Sandholm, T. Contracting with uncertain level of trust. Comput. Intell. 2002, 18, 501–514.
  3. Jøsang, A.; Ismail, R.; Boyd, C. A survey of trust and reputation systems for online service provision. Decis. Support Syst. 2007, 43, 618–644.
  4. Giardini, F.; Conte, R.; Paolucci, M. Reputation. In Simulating Social Complexity: A Handbook; Edmonds, B., Meyer, R., Eds.; Springer: Berlin, Germany, 2013; Chapter 15; pp. 365–399.
  5. Wang, Y.; Vassileva, J. Toward trust and reputation based web service selection: A survey. Int. Trans. Syst. Sci. Appl. 2007, 3, 118–132.
  6. Tavakolifard, M.; Almeroth, K.C. Social computing: An intersection of recommender systems, trust/reputation systems, and social networks. IEEE Netw. 2012, 26, 53–58.
  7. Mui, L. Computational Models of Trust and Reputation: Agents, Evolutionary Games, and Social Networks. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2002.
  8. Burnett, C.; Norman, T.; Sycara, K. Bootstrapping Trust Evaluations through Stereotypes. In Proceedings of the 9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2010), Toronto, ON, Canada, 10–14 May 2010; pp. 241–248.
  9. Liu, X.; Datta, A.; Rzadca, K. Trust beyond reputation: A computational trust model based on stereotypes. Electron. Commer. Res. Appl. 2013, 12, 24–39.
  10. Malik, Z.; Bouguettaya, A. Reputation bootstrapping for trust establishment among web services. IEEE Internet Comput. 2009, 13, 40–47.
  11. Skopik, F.; Schall, D.; Dustdar, S. Start trusting strangers? Bootstrapping and prediction of trust. In Web Information Systems Engineering—WISE 2009; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5802, pp. 275–289.
  12. Wu, Q.; Zhu, Q.; Li, P. A neural network based reputation bootstrapping approach for service selection. Enterp. Inf. Syst. 2015, 9, 768–784.
  13. Yahyaoui, H.; Zhioua, S. Bootstrapping trust of Web services based on trust patterns and Hidden Markov Models. Knowl. Inf. Syst. 2012, 37, 389–416.
  14. Javanmardi, S.; Shojafar, M.; Shariatmadari, S.; Ahrabi, S.S. FR TRUST: A fuzzy reputation based model for trust management in semantic P2P grids. Int. J. Grid Util. Comput. 2014, 6, 57–66.
  15. Ramchurn, S.D.; Huynh, D.; Jennings, N.R. Trust in multi-agent systems. Knowl. Eng. Rev. 2004, 19, 1–25.
  16. Kim, P.H.; Dirks, K.T.; Cooper, C.D.; Ferrin, D.L. When more blame is better than less: The implications of internal vs. external attributions for the repair of trust after a competence- vs. integrity-based trust violation. Organ. Behav. Hum. Decis. Process. 2006, 99, 49–65.
  17. Chen, Y.H.; Wu, J.J.; Chang, H.T. Examining the mediating effect of positive moods on trust repair in e-commerce. Internet Res. 2013, 23, 355–371.
  18. Choi, J.; Nazareth, D.L. Repairing trust in an e-commerce and security context: An agent-based modeling approach. Inf. Manag. Comput. Secur. 2014, 22, 490–512.
  19. Binmad, R.; Li, M. Improving the efficiency of an online marketplace by incorporating forgiveness mechanism. ACM Trans. Internet Technol. 2017, 17.
  20. eBay. All about Feedback. Available online: http://pages.ebay.com/help/feedback/allaboutfeedback.html (accessed on 14 June 2015).
  21. Abeler, J.; Calaki, J.; Andree, K.; Basek, C. The power of apology. Econ. Lett. 2010, 107, 233–235.
  22. Vasalou, A.; Pitt, J. Reinventing Forgiveness: A Formal Investigation of Moral Facilitation. In Proceedings of the 3rd International Conference on Trust Management, Paris, France, 23–26 May 2005; Volume 3477, pp. 146–160.
  23. Vasalou, A.; Pitt, J.; Piolle, G. From Theory to Practice: Forgiveness as a Mechanism to Repair Conflicts in CMC. In Trust Management; Springer: Berlin, Germany, 2006; Volume 3986, pp. 397–411.
  24. Vasalou, A.; Hopfensitz, A.; Pitt, J.V. In praise of forgiveness: Ways for repairing trust breakdowns in one-off online interactions. Int. J. Hum. Comput. Stud. 2008, 66, 466–480.
  25. Bansal, G.; Zahedi, F.M. Trust violation and repair: The information privacy perspective. Decis. Support Syst. 2015, 71, 62–77.
  26. Tomlinson, E.C.; Mayer, R.C. The role of causal attribution dimensions in trust repair. Acad. Manag. Rev. 2009, 34, 85–104.
  27. Marsh, S.; Briggs, P. Examining Trust, Forgiveness and Regret as Computational Concepts. In Computing with Social Trust; Golbeck, J., Ed.; Human-Computer Interaction Series; Springer: Berlin, Germany, 2009; pp. 9–43.
  28. Burete, R.; Bădică, A.; Bădică, C. Reputation Model with Forgiveness Factor for Semi-Competitive E-Business Agent Societies. In Networked Digital Technologies; Springer: Berlin, Germany, 2010; pp. 402–416.
  29. Burete, R.; Bădică, A.; Bădică, C.; Moraru, F. Enhanced reputation model with forgiveness for e-business agents. Int. J. Agent Technol. Syst. 2011, 3, 11–26.
  30. Ambrose, M.L.; Friess, N.; Matre, J.V. Seeking digital redemption: The future of forgiveness in the Internet age. Santa Clara Comput. High Technol. Law J. 2012, 29, 99–163.
  31. Bishop, M.; Butler, K.; Gates, C.; Greenspan, S. Forgive and Forget: Return to Obscurity. In Proceedings of the 2013 New Security Paradigms Workshop, Banff, AB, Canada, 9–12 September 2013; pp. 1–9.
  32. Vasalou, A.; Riegelsberger, J.; Joinson, A. The Application of Forgiveness in Social System Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 225–228.
  33. Exline, J.J.; Worthington, E.L.; Hill, P.; McCullough, M.E. Forgiveness and justice: A research agenda for social and personality psychology. Personal. Soc. Psychol. Rev. 2003, 7, 337–348.
  34. Xie, Y.; Peng, S. How to repair customer trust after negative publicity: The roles of competence, integrity, benevolence, and forgiveness. Psychol. Mark. 2009, 26, 572–589.
  35. Burnett, C.; Norman, T.; Sycara, K. Trust Decision-Making in Multi-Agent Systems. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, 16–22 July 2011; pp. 115–120.
  36. Griffiths, N. A Fuzzy Approach to Reasoning with Trust, Distrust and Insufficient Trust. In Cooperative Information Agents X; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2006; pp. 360–374.
  37. McCulloch, J.H. The Austrian theory of the marginal use and of ordinal marginal utility. J. Econ. 1977, 37, 249–280.
Figure 1. The Zone of Forgivability.
Figure 2. (a) Average reputation values; (b) Transaction volumes at different time periods.
Figure 3. (a) Number of untrustworthy service providers; (b) Utility obtained at different time points.
Figure 4. Average reputation of each e-marketplace.
Figure 5. (a) Total transactions; (b) Transaction volumes at different time periods.
Figure 6. (a) Number of untrustworthy service providers; (b) Utility obtained at different time periods.
Table 1. Positive motivations for evaluating forgiveness from the viewpoints of different forgiveness sources [19].

Positive Motivations (PM) | Victim (F_v) | Victim Community (F_vc) | Transgressor Community (F_tc)
Intent (PM_in)            | ✓            |                         |
History (PM_hi)           | ✓            | ✓                       |
Apology (PM_ap)           | ✓            | ✓                       | ✓
Severity (PM_se)          | ✓            | ✓                       | ✓
Importance (PM_im)        | ✓            | ✓                       | ✓
Table 2. Summary of experimental parameters.

Parameter | Value
Number of service providers | 50
Number of consumers | 100
Number of products per service provider | 1000
Service providers' price of product | [100, 200]
Consumers' price of product | [200, 250]
Reputation value of service providers | [−1, 1]
Initial reputation value of service providers | 0.5
Service providers' reputation fading factor (Γ_sp^{t=0}) | 0.2
Service providers' reputation fading factor (Γ_sp^{t=1}) | 0.8
Consumers' utility scaling factor (λ) | 5
Consumers' utility | [−1000, 1000]
Maximum number of defective service providers per time period | 5
Number of forgiveness interventions (η) | 2
Forgivability threshold (F_threshold) | 0.5
Cooperation threshold (C_threshold) | 0.5
Forgiveness contribution weight factors (ω_v, ω_vc) | 0.5, 0.3

