# An Adaptive Dempster-Shafer Theory of Evidence Based Trust Model in Multiagent Systems


## Abstract


## 1. Introduction

## 2. Related Work

**consistency**. Consistency reflects the degree of similarity between a piece of evidence and the other evidence available. It is especially helpful for agents whose trust and certainty are unknown. However, most of the articles we reviewed paid little attention to consistency for trust estimation in MASs.

## 3. Dempster-Shafer Theory of Evidence

**Definition 1.**

**Definition 2.**

**Definition 3.**

**Definition 4.**

**Definition 5.**

**Definition 6.**

**Example 1.**

1. ${m}_{1}(Cat)=0.6$, ${m}_{1}(Dog)=0.3$, ${m}_{1}(Cat,Dog)=0.1$;
2. ${m}_{2}(Cat)=0.4$, ${m}_{2}(Dog)=0.4$, ${m}_{2}(Cat,Dog)=0.2$;
3. ${m}_{3}(Cat)=0.1$, ${m}_{3}(Dog)=0.4$, ${m}_{3}(Cat,Dog)=0.5$.

- $\overrightarrow{{m}_{1}}=\{Cat,Dog,(Cat,Dog)\}=(0.6,0.3,0.1)$. Similarly, $\overrightarrow{{m}_{2}}=(0.4,0.4,0.2)$, $\overrightarrow{{m}_{3}}=(0.1,0.4,0.5);$
- $\mathrm{D}=\begin{array}{c}Cat\\ Dog\\ \left(Cat,Dog\right)\end{array}\begin{array}{c}\begin{array}{ccc}Cat& Dog& \left(Cat,Dog\right)\end{array}\\ \left[\begin{array}{ccc}1& 0& 0.5\\ 0& 1& 0.5\\ 0.5& 0.5& 1\end{array}\right]\end{array}$
- $(\overrightarrow{{m}_{1}}-\overrightarrow{{m}_{2}})=(0.2,-0.1,-0.1);$
- $d({m}_{1},{m}_{2})=\sqrt{\frac{1}{2}{(\overrightarrow{{m}_{1}}-\overrightarrow{{m}_{2}})}^{T}D(\overrightarrow{{m}_{1}}-\overrightarrow{{m}_{2}})}=0.158$. In a similar way, $d({m}_{1},{m}_{3})=0.361$ and $d({m}_{2},{m}_{3})=0.212.$
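The distance calculation above is straightforward to verify programmatically. A minimal Python sketch of the Jousselme distance (the function name is ours):

```python
import math

# Mass vectors over the focal elements, ordered as (Cat, Dog, {Cat, Dog}).
m1 = [0.6, 0.3, 0.1]
m2 = [0.4, 0.4, 0.2]
m3 = [0.1, 0.4, 0.5]

# Jousselme similarity matrix: D[A][B] = |A ∩ B| / |A ∪ B|.
D = [
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 0.5],
    [0.5, 0.5, 1.0],
]

def jousselme_distance(ma, mb):
    """d(ma, mb) = sqrt(0.5 * (ma - mb)^T D (ma - mb))."""
    diff = [a - b for a, b in zip(ma, mb)]
    quad = sum(diff[i] * D[i][j] * diff[j]
               for i in range(len(diff)) for j in range(len(diff)))
    return math.sqrt(0.5 * quad)

print(round(jousselme_distance(m1, m2), 3))  # 0.158
print(round(jousselme_distance(m1, m3), 3))  # 0.361
print(round(jousselme_distance(m2, m3), 3))  # 0.212
```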

- $Pl\_{P}_{m}(Cat)=\frac{P{l}_{m}(Cat)}{{\sum}_{x\in \Omega}P{l}_{m}(x)}=\frac{0.6+0.1}{0.6+0.1+0.3+0.1}=\frac{7}{11}$. In the same manner, $Pl\_{P}_{m}(Dog)=\frac{4}{11}$;
- Thus, $H({m}_{1})=-\frac{7}{11}\log_{2}(\frac{7}{11})-\frac{4}{11}\log_{2}(\frac{4}{11})+0.6\ast \log_{2}(|Cat|)+0.3\ast \log_{2}(|Dog|)+0.1\ast \log_{2}(|(Cat,Dog)|)=1.046$. In the same way, $H({m}_{2})=1.2$ and $H({m}_{3})=1.47$. As we can see, the testimony given by the third passer-by is the most vague and uncertain.
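The entropy above combines the Shannon entropy of the plausibility transform with a non-specificity term $\sum_{A} m(A)\log_{2}|A|$. A minimal Python sketch that reproduces the three values (function names are ours):

```python
import math

def entropy(m, frame):
    """Entropy of a mass function m: Shannon entropy of the plausibility
    transform plus the non-specificity term sum_A m(A) * log2|A|."""
    # Pl(x) = total mass of the focal elements containing x
    pl = {x: sum(mass for A, mass in m.items() if x in A) for x in frame}
    total = sum(pl.values())
    shannon = -sum((p / total) * math.log2(p / total)
                   for p in pl.values() if p > 0)
    nonspecificity = sum(mass * math.log2(len(A)) for A, mass in m.items())
    return shannon + nonspecificity

frame = {"Cat", "Dog"}
m1 = {frozenset({"Cat"}): 0.6, frozenset({"Dog"}): 0.3, frozenset({"Cat", "Dog"}): 0.1}
m2 = {frozenset({"Cat"}): 0.4, frozenset({"Dog"}): 0.4, frozenset({"Cat", "Dog"}): 0.2}
m3 = {frozenset({"Cat"}): 0.1, frozenset({"Dog"}): 0.4, frozenset({"Cat", "Dog"}): 0.5}

print(round(entropy(m1, frame), 3))  # 1.046
print(round(entropy(m2, frame), 3))  # 1.2
print(round(entropy(m3, frame), 3))  # 1.471
```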

## 4. Proposed Trust Model

#### 4.1. Direct Trust

#### 4.2. Indirect Reputation

- Some agents might be dishonest;
- A group of agents can conspire to be deceptive;
- Some agents might have incorrect or insufficient information;
- Agents may be uncertain about their provided information;
- Agents might be used for deception.

- The estimated trust from the witness’ experience: This aspect could be modeled with the same method as direct trust. The framework of the Dempster-Shafer theory of evidence is adopted to capture its direct trust from the perspective of the information provider (witness).
- Evidence consistency: As explained, both witnesses and service providers might present misleading information. For instance, the client ${c}_{i}$ trusts the witness ${w}_{k}$, but ${w}_{k}$ does not have sufficient testimony about the service provider ${s}_{j}$, or ${w}_{k}$ deliberately presents false information because of a conflict of interest. Under those circumstances, the quality of the received information matters greatly. Conflict is used to identify the differences between pieces of evidence, and conversely, the similarities between the received pieces of evidence can be used to judge the quality of the provided information.
- The credibility of witnesses: This factor is measured by a real number in the range $[0,1]$. We initialize it to a fixed number indicating how much the witness can be trusted, and it is then updated after each interaction according to the interactive feedback. A credibility close to 1 indicates that the witness is reliable; otherwise, the witness is of low trustworthiness, and interaction with it should be avoided. In this way, a group of agents no longer has the opportunity to conspire to deceive.
- Certainty of a witness: This factor plays a large role in interaction-based trust estimation in MASs. For instance, agent A may trust agent B completely, yet it is questionable to rely on B's report about agent C if B itself is uncertain of its evidence. In this paper, we regard certainty as a property of the evidence, and we extract it from the received evidence from the entropy perspective. We assume that evidence that comes from a trusted witness, is of good quality, and carries high certainty makes a greater contribution to trust estimation.
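The consistency factor above can be made concrete. As an illustrative sketch only (the testimonies and function names below are hypothetical), assume each witness's report about a provider is a mass vector over $(T, nT, \{T,nT\})$, $Dis$ is the mean Jousselme distance from one witness's evidence to the others', and $Sim = 1 - Dis$:

```python
import math

# Jousselme similarity matrix over focal elements ordered (T, nT, {T, nT}):
# D[A][B] = |A ∩ B| / |A ∪ B|.
D = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.5, 0.5, 1.0]]

def distance(ma, mb):
    # Jousselme distance between two mass vectors.
    diff = [a - b for a, b in zip(ma, mb)]
    quad = sum(diff[i] * D[i][j] * diff[j] for i in range(3) for j in range(3))
    return math.sqrt(0.5 * quad)

def similarity(k, reports):
    # Sim(m_k): 1 minus the mean distance to the other witnesses' evidence.
    others = [m for i, m in enumerate(reports) if i != k]
    return 1.0 - sum(distance(reports[k], m) for m in others) / len(others)

# Hypothetical testimonies about one provider from three witnesses.
reports = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]
```

The outlier (witness 2, who alone reports distrust) receives the lowest similarity score, so its testimony is discounted when judging evidence quality.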

#### 4.2.1. Evidence Consistency

#### 4.2.2. Credibility of Individual Witnesses

#### 4.2.3. Model Certainty

- First, certainty decreases as the extent of conflict in the evidence increases (we measure conflict through the ratio of positive to negative observations, namely, trust versus distrust). Evidence certainty is therefore lowest when positive and negative interactions are balanced.
- Second, certainty decreases as uncertain information increases. That is to say, evidence certainty increases if $m(T,nT)$ decreases.
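Both properties can be checked with a small sketch. The mapping from observation counts $(r,s)$ to a mass function and the normalization $Cer = 1 - H(m)/2$ below are our illustrative assumptions, not the paper's exact formulas (those are defined in Section 4.2.3); $H$ is the plausibility-transform entropy of Section 3, which lies in $[0,2]$ on $\Omega =\{T,nT\}$:

```python
import math

def mass_from_counts(r, s):
    # Hypothetical mapping: one extra unit of mass on the whole frame
    # encodes remaining ignorance, so m({T, nT}) shrinks with experience.
    n = r + s + 1
    return {"T": r / n, "nT": s / n, "TnT": 1 / n}

def certainty(r, s):
    m = mass_from_counts(r, s)
    pl = [m["T"] + m["TnT"], m["nT"] + m["TnT"]]   # plausibilities of T, nT
    probs = [p / sum(pl) for p in pl]              # plausibility transform
    # entropy = Shannon part + m({T, nT}) * log2(2)
    h = -sum(p * math.log2(p) for p in probs if p > 0) + m["TnT"]
    return 1 - h / 2   # H lies in [0, 2] on a two-element frame

# Property 1: with the conflict ratio fixed, certainty grows with experience.
assert certainty(10, 10) > certainty(2, 2)
# Property 2: with experience fixed, certainty is lowest under maximal conflict.
assert certainty(2, 2) < certainty(3, 1) < certainty(4, 0)
```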

#### 4.2.4. Model Indirect Reputation

#### 4.3. Model Overall Trust

#### 4.4. Update Credibility

- If the interactive feedback is positive:
  - **case 1:** if $Pl\_{P}_{m}(T)\ge 0.5$, then $Cred{({c}_{i},{w}_{k})}_{t+1}=Cred{({c}_{i},{w}_{k})}_{t}\ast (1+\upsilon {D}_{k})$;
  - **case 2:** if $Pl\_{P}_{m}(T)<0.5$, then $Cred{({c}_{i},{w}_{k})}_{t+1}=Cred{({c}_{i},{w}_{k})}_{t}\ast (1-\upsilon {D}_{k})$.
- If the interactive feedback is negative:
  - **case 3:** if $Pl\_{P}_{m}(T)\ge 0.5$, then $Cred{({c}_{i},{w}_{k})}_{t+1}=Cred{({c}_{i},{w}_{k})}_{t}\ast (1-\upsilon {D}_{k})$;
  - **case 4:** if $Pl\_{P}_{m}(T)<0.5$, then $Cred{({c}_{i},{w}_{k})}_{t+1}=Cred{({c}_{i},{w}_{k})}_{t}\ast (1+\upsilon {D}_{k})$.
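The four cases reduce to one rule: credibility rises when the testimony's implied prediction matches the feedback and falls otherwise. A minimal Python sketch (function and argument names are ours; $\upsilon$ and ${D}_{k}$ are the step size and adjustment factor of Section 4.4):

```python
def update_credibility(cred_t, pl_p_T, positive_feedback, upsilon, d_k):
    # Cases 1 and 4: the testimony is confirmed, so credibility grows;
    # cases 2 and 3: the testimony is contradicted, so credibility shrinks.
    testimony_positive = pl_p_T >= 0.5
    if testimony_positive == positive_feedback:
        return cred_t * (1 + upsilon * d_k)
    return cred_t * (1 - upsilon * d_k)

# A witness that predicted trust (Pl_P(T) = 0.8) and was confirmed (case 1):
print(update_credibility(0.5, 0.8, True, 0.1, 1.0))   # ≈ 0.55
```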

#### 4.5. Incentives

- If the interactive feedback matches the testimony, then $$C{R}_{i}=C{R}_{i}[1+Cred({c}_{i},{w}_{k})\ast Cer({c}_{i},{w}_{k})]$$
- If the interactive feedback does not match the testimony, then $$C{R}_{i}=C{R}_{i}[1-Cred({c}_{i},{w}_{k})\ast Cer({c}_{i},{w}_{k})]$$
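Both branches share the form $C{R}_{i}\leftarrow C{R}_{i}(1\pm Cred\ast Cer)$; a one-function sketch (the function and argument names are ours):

```python
def update_incentive(cr_i, cred, cer, matched):
    # Reward witnesses whose testimony matched the interactive feedback,
    # penalize the rest, scaled by credibility times certainty.
    factor = 1 + cred * cer if matched else 1 - cred * cer
    return cr_i * factor
```

With $Cred = 0.8$ and $Cer = 0.5$, a matching testimony scales $C{R}_{i}$ by 1.4 and a mismatch by 0.6, so honest, confident witnesses gain standing fastest.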

## 5. Simulation

#### 5.1. Evidence Generation

#### 5.2. Manage Trust Certainty with Entropy

#### 5.2.1. Certainty Rises with Increasing Experiences under Fixed Conflict

#### 5.2.2. Certainty Falls with Increasing Conflict under Fixed Experience

#### 5.2.3. Comparison and Discussion

#### 5.3. The Overall Trust Model

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Liau, C.J. Belief, information acquisition, and trust in multi-agent systems—A modal logic formulation. Artif. Intell. **2003**, 149, 31–60.
- Jøsang, A.; Ismail, R. The beta reputation system. In Proceedings of the 15th Bled Electronic Commerce Conference, Bled, Slovenia, 17–19 June 2002; Volume 5, pp. 2502–2511.
- Burnett, C.; Oren, N. Sub-delegation and trust. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Valencia, Spain, 4–8 June 2012; Volume 3, pp. 1359–1360.
- Yu, H.; Shen, Z.; Leung, C.; Miao, C.; Lesser, V.R. A survey of multi-agent trust management systems. IEEE Access **2013**, 1, 35–50.
- Jøsang, A.; Ismail, R.; Boyd, C. A survey of trust and reputation systems for online service provision. Decis. Support Syst. **2007**, 43, 618–644.
- Falcone, R.; Pezzulo, G.; Castelfranchi, C.; Calvi, G. Why a cognitive trustier performs better: Simulating trust-based contract nets. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, New York, NY, USA, 19–23 July 2004; IEEE Computer Society: Washington, DC, USA, 2004; Volume 3, pp. 1394–1395.
- Peng, M.; Xu, Z.; Pan, S.; Li, R.; Mao, T. AgentTMS: A MAS Trust Model based on Agent Social Relationship. JCP **2012**, 7, 1535–1542.
- Ferrario, A.; Loi, M.; Viganò, E. In AI We Trust Incrementally: A Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Philos. Technol. **2020**, 33, 523–539.
- Teacy, W.L.; Patel, J.; Jennings, N.R.; Luck, M. Travos: Trust and reputation in the context of inaccurate information sources. Auton. Agents Multi-Agent Syst. **2006**, 12, 183–198.
- Jiang, S.; Zhang, J.; Ong, Y.S. An evolutionary model for constructing robust trust networks. In Proceedings of the 2013 International Conference on Autonomous Agents and Multiagent Systems, Saint Paul, MN, USA, 6–10 May 2013; pp. 813–820.
- Teacy, W.L.; Luck, M.; Rogers, A.; Jennings, N.R. An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling. Artif. Intell. **2012**, 193, 149–185.
- Parhizkar, E.; Nikravan, M.H.; Zilles, S. Indirect Trust Is Simple to Establish. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 3216–3222.
- Marsh, S.P. Formalising Trust as a Computational Concept. Ph.D. Thesis, Ontario Tech University, Oshawa, ON, Canada, 1994.
- Reagle, J.M. Trust in a Cryptographic Economy and Digital Security Deposits: Protocols and Policies. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1996.
- Basheer, G.S.; Ahmad, M.S.; Tang, A.Y.; Graf, S. Certainty, trust and evidence: Towards an integrative model of confidence in multi-agent systems. Comput. Hum. Behav. **2015**, 45, 307–315.
- Muller, G.; Vercouter, L.; Boissier, O. Towards a general definition of trust and its application to openness in MAS. In Proceedings of the AAMAS-2003 Workshop on Deception, Fraud and Trust, Melbourne, Australia, 14–18 July 2003.
- Feng, R.; Xu, X.; Zhou, X.; Wan, J. A trust evaluation algorithm for wireless sensor networks based on node behaviors and D-S evidence theory. Sensors **2011**, 11, 1345–1360.
- Urena, R.; Kou, G.; Dong, Y.; Chiclana, F.; Herrera-Viedma, E. A review on trust propagation and opinion dynamics in social networks and group decision making frameworks. Inf. Sci. **2019**, 478, 461–475.
- Cheng, M.; Yin, C.; Zhang, J.; Nazarian, S.; Deshmukh, J.; Bogdan, P. A general trust framework for multi-agent systems. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, Online, 3–7 May 2021; pp. 332–340.
- Wang, Y.; Singh, M.P. Formal Trust Model for Multiagent Systems. IJCAI **2007**, 7, 1551–1556.
- Fung, C.J.; Zhang, J.; Aib, I.; Boutaba, R. Dirichlet-based trust management for effective collaborative intrusion detection networks. IEEE Trans. Netw. Serv. Manag. **2011**, 8, 79–91.
- Parhizkar, E.; Nikravan, M.H.; Holte, R.C.; Zilles, S. Combining Direct Trust and Indirect Trust in Multi-Agent Systems. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 11–17 July 2020.
- Fortino, G.; Fotia, L.; Messina, F.; Rosaci, D.; Sarné, G.M. Trust and reputation in the internet of things: State-of-the-art and research challenges. IEEE Access **2020**, 8, 60117–60125.
- Shehada, D.; Yeun, C.Y.; Zemerly, M.J.; Al-Qutayri, M.; Al-Hammadi, Y.; Hu, J. A new adaptive trust and reputation model for mobile agent systems. J. Netw. Comput. Appl. **2018**, 124, 33–43.
- Berenji, H.R. Treatment of uncertainty in artificial intelligence. Mach. Intell. Auton. Aerosp. Syst. **1988**, 115, 233–247.
- Barber, K.S.; Fullam, K.; Kim, J. Challenges for trust, fraud and deception research in multi-agent systems. In Workshop on Deception, Fraud and Trust in Agent Societies; Springer: Berlin/Heidelberg, Germany, 2002; pp. 8–14.
- Yu, B.; Singh, M.P. Distributed reputation management for electronic commerce. Comput. Intell. **2002**, 18, 535–549.
- Yu, B.; Singh, M.P. Detecting deception in reputation management. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, 14–18 July 2003; ACM: New York, NY, USA, 2003; pp. 73–80.
- Yu, B.; Kallurkar, S.; Flo, R. A Dempster-Shafer approach to provenance-aware trust assessment. In Proceedings of the 2008 International Symposium on Collaborative Technologies and Systems, Irvine, CA, USA, 19–23 May 2008; pp. 383–390.
- Zuo, Y.; Liu, J. A reputation-based model for mobile agent migration for information search and retrieval. Int. J. Inf. Manag. **2017**, 37, 357–366.
- Ramchurn, S.; Sierra, C.; Godó, L.; Jennings, N.R. A computational trust model for multi-agent interactions based on confidence and reputation. In Proceedings of the 6th International Workshop of Deception, Fraud and Trust in Agent Societies, Melbourne, Australia, 1 January 2003.
- Das, A.; Islam, M.M. SecuredTrust: A dynamic trust computation model for secured communication in multiagent systems. IEEE Trans. Dependable Secur. Comput. **2011**, 9, 261–274.
- Bilgin, A.; Dooley, J.; Whittington, L.; Hagras, H.; Henson, M.; Wagner, C.; Malibari, A.; Al-Ghamdi, A.; Alhaddad, M.J.; Alghazzawi, D. Dynamic profile-selection for zslices based type-2 fuzzy agents controlling multi-user ambient intelligent environments. In Proceedings of the 2012 IEEE International Conference on Fuzzy Systems, Brisbane, Australia, 10–15 June 2012; pp. 1–8.
- Fraser, M. How to Measure Anything: Finding the Value of "Intangibles" in Business. People Strategy **2011**, 34, 58–60.
- Huynh, T.D.; Jennings, N.R.; Shadbolt, N. Developing an integrated trust and reputation model for open multi-agent systems. In Proceedings of the 7th International Workshop on Trust in Agent Societies, New York, NY, USA, 1 January 2004.
- Noorian, Z.; Ulieru, M. The state of the art in trust and reputation systems: A framework for comparison. J. Theor. Appl. Electron. Commer. Res. **2010**, 5, 97–117.
- Yu, H.; Shen, Z.; Miao, C.; An, B.; Leung, C. Filtering trust opinions through reinforcement learning. Decis. Support Syst. **2014**, 66, 102–113.
- Rishwaraj, G.; Ponnambalam, S.; Kiong, L.C. An efficient trust estimation model for multi-agent systems using temporal difference learning. Neural Comput. Appl. **2017**, 28, 461–474.
- Dempster, A.P. Upper and Lower Probabilities Induced by a Multivalued Mapping. Ann. Math. Stat. **1967**, 38, 325–339.
- Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
- Sun, L.; Srivastava, R.P.; Mock, T.J. An information systems security risk assessment model under the Dempster-Shafer theory of belief functions. J. Manag. Inf. Syst. **2006**, 22, 109–142.
- Li, Z.; Wen, G.; Xie, N. An approach to fuzzy soft sets in decision making based on grey relational analysis and Dempster-Shafer theory of evidence: An application in medical diagnosis. Artif. Intell. Med. **2015**, 64, 161–171.
- Liu, M.; Chen, S. SAR target configuration recognition based on the Dempster-Shafer theory and sparse representation using a new classification criterion. Int. J. Remote Sens. **2019**, 40, 4604–4622.
- Wang, K. A New Multi-Sensor Target Recognition Framework based on Dempster-Shafer Evidence Theory. Int. J. Perform. Eng. **2018**, 14, 1224–1233.
- Jousselme, A.L.; Grenier, D.; Bossé, É. A new distance between two bodies of evidence. Inf. Fusion **2001**, 2, 91–101.
- Wen, C.; Wang, Y.; Xu, X. Fuzzy information fusion algorithm of fault diagnosis based on similarity measure of evidence. In International Symposium on Neural Networks; Springer: Berlin/Heidelberg, Germany, 2008; pp. 506–515.
- Ristic, B.; Smets, P. The TBM global distance measure for the association of uncertain combat ID declarations. Inf. Fusion **2006**, 7, 276–284.
- Cuzzolin, F. A geometric approach to the theory of evidence. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. **2008**, 38, 522–534.
- Harmanec, D.; Klir, G.J. Measuring total uncertainty in Dempster-Shafer theory: A novel approach. Int. J. Gen. Syst. **1994**, 22, 405–419.
- Yao, K.; Ke, H. Entropy operator for membership function of uncertain set. Appl. Math. Comput. **2014**, 242, 898–906.
- Lesne, A. Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci. **2014**, 24, e240311.
- Höhle, U. Entropy with respect to plausibility measures. In Proceedings of the 12th IEEE International Symposium on Multiple Valued Logic, Paris, France, 25–27 May 1982.
- Smets, P. Information content of an evidence. Int. J. Man-Mach. Stud. **1983**, 19, 33–43.
- Yager, R.R. Entropy and specificity in a mathematical theory of evidence. Int. J. Gen. Syst. **1983**, 9, 249–260.
- Nguyen, H.T. On entropy of random sets and possibility distributions. Anal. Fuzzy Inf. **1987**, 1, 145–156.
- Dubois, D.; Prade, H. Properties of measures of information in evidence and possibility theories. Fuzzy Sets Syst. **1987**, 24, 161–182.
- Deng, Y. Deng entropy. Chaos Solitons Fractals **2016**, 91, 549–553.
- Özkan, K. Comparing Shannon entropy with Deng entropy and improved Deng entropy for measuring biodiversity when a priori data is not clear. Forestist **2018**, 68, 136–140.
- Cui, H.; Liu, Q.; Zhang, J.; Kang, B. An improved Deng entropy and its application in pattern recognition. IEEE Access **2019**, 7, 18284–18292.
- Jiroušek, R.; Shenoy, P.P. A new definition of entropy of belief functions in the Dempster-Shafer theory. Int. J. Approx. Reason. **2018**, 92, 49–65.
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. **1948**, 27, 379–423.
- Cobb, B.R.; Shenoy, P.P. On the plausibility transformation method for translating belief function models to probability models. Int. J. Approx. Reason. **2006**, 41, 314–330.
- Yu, B.; Singh, M.P.; Sycara, K. Developing trust in large-scale peer-to-peer systems. In Proceedings of the IEEE First Symposium on Multi-Agent Security and Survivability, Drexel, PA, USA, 30–31 August 2004; pp. 1–10.
- Deng, Y. A threat assessment model under uncertain environment. Math. Probl. Eng. **2015**, 2015, 878024.
- Jøsang, A. A subjective metric of authentication. In European Symposium on Research in Computer Security; Springer: Berlin/Heidelberg, Germany, 1998; pp. 329–344.
- Wang, Y.; Singh, M.P. Trust via Evidence Combination: A Mathematical Approach Based on Certainty; Technical Report; Department of Computer Science, North Carolina State University: Raleigh, NC, USA, 2006.

**Figure 3.** The entropy when the frame of discernment is $\Omega =\{T,nT\}$. As shown, the entropy lies in the range $[0,2]$.

**Figure 5.** The recent five-round cumulative absolute error, obtained in different interactive rounds of changing performance, under different probabilities: (**a**) 10 rounds and the probability equals 0.1 or 0.9; (**b**) 100 rounds and the probability equals 0.1 or 0.9; (**c**) 10 rounds and the probability equals 0.3 or 0.7; (**d**) 100 rounds and the probability equals 0.3 or 0.7.

**Figure 6.** Evidence certainty increases with the amount of interaction $(r+s)$ when the satisfaction degree is fixed at 0.5; x-axis: amount of interaction; y-axis: certainty degree.

**Figure 7.** Evidence certainty varies with satisfaction when the amount of interaction is fixed at 20, and a satisfaction degree of 0.5 leads to the lowest certainty.

**Figure 9.** Change in average credibility for each information-sharing agent when five agents change their performance.

**Figure 10.** Change in average credibility for each information-sharing agent when one agent changes its performance; the change process is as defined in [29].

**Figure 11.** Change in average credibility for each information-sharing agent when five agents change their performance; the change process is as defined in [29].

| Factor | Definition | Why the Factor Is Influential |
|---|---|---|
| Uncertainty (Direct trust) | Uncertainty caused by fading, randomness, incompleteness, etc. | To ensure trust accuracy in dynamic MASs |
| Consistency (Confidence) | The degree of similarity with third-party testimony | To identify high-quality evidence and avoid sudden changes |
| Credibility (Confidence) | The credibility degree of third-party witnesses | To distinguish trusted witnesses and avoid group deception |
| Certainty (Confidence) | The certainty degree of third-party testimony | To capture the certainty of testimony |
| Motivation | Inspire witnesses to be honest | To ensure the system runs correctly |

| Symbol | Meaning |
|---|---|
| ${c}_{i}$ | The resource customer agent ${c}_{i}$ |
| ${s}_{j}$ | The resource supplier agent ${s}_{j}$ |
| ${w}_{k}$ | The information provider (witness) ${w}_{k}$ |
| $Scu{r}_{ij}(t)$ | The evaluation of the $t$th interaction |
| ${S}_{ij}(t)$ | The satisfaction degree before the $t$th interaction |
| ${\mu}_{ij}(t)$ | The relationship between evaluation and current satisfaction |
| $f{q}_{ij}(R)$ | The uncertainty factor |
| $\Omega =\{T,nT\}$ | The frame of discernment |
| ${m}_{ij}^{d}$ | The direct evidence |
| ${m}_{ij}^{ind}$ | The indirect evidence |
| $Conf({c}_{i},{w}_{k})$ | The confidence degree of ${w}_{k}$ from the perspective of ${c}_{i}$ |
| $Dis({m}_{kj}^{ind})$ | The distance of ${m}_{kj}^{ind}$ within the group of $L$ witnesses |
| $Sim({m}_{kj}^{ind})$ | The similarity of ${m}_{kj}^{ind}$ within the group of $L$ witnesses |
| $Cred({c}_{i},{w}_{k})$ | The credibility value of ${w}_{k}$ from the perspective of ${c}_{i}$ |
| $Cer({m}_{kj})$ | The certainty of the evidence provided by ${w}_{k}$ |

**Table 3.** Certainty computed by different approaches for different satisfactory degrees with the amount of interactions fixed at 4.

| Approach | $(4,0)$ | $(3,1)$ | $(2,2)$ | $(1,3)$ | $(0,4)$ |
|---|---|---|---|---|---|
| Yu and Singh [27] | 0 | 0 | 0 | 0 | 0 |
| Jøsang et al. [65] | $0.8$ | $0.8$ | $0.8$ | $0.8$ | $0.8$ |
| Wang and Singh [20,66] | $0.54$ | $0.35$ | $0.29$ | $0.35$ | $0.54$ |
| Proposed entropy-based approach | $0.69$ | $0.62$ | $0.59$ | $0.62$ | $0.69$ |

| Parameter | Value | Explanation |
|---|---|---|
| $Wnum$ | 10 | Number of cloud manufacturing client agents |
| $Pnum$ | 25 | Number of cloud manufacturing provider agents |
| $Snum$ | 7 | Number of selected agents to share information |
| $Cnum$ | 3 | Number of selected provider agents to conduct interactions |
| $Cn1$ | 30 | Round from which some agents become malicious |
| $Cn2$ | 70 | Round from which the malicious agents return to being honest |
| Rounds | 200 | Total number of rounds |


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite


Wang, N.; Wei, D.
An Adaptive Dempster-Shafer Theory of Evidence Based Trust Model in Multiagent Systems. *Appl. Sci.* **2022**, *12*, 7633.
https://doi.org/10.3390/app12157633
