A Lightweight Trust Mechanism with Attack Detection for IoT

In this paper, we propose a lightweight and adaptable trust mechanism for trust evaluation among Internet of Things devices, considering challenges such as limited device resources and trust attacks. Firstly, we propose a trust evaluation approach based on Bayesian statistics and Jøsang’s belief model to quantify a device’s trustworthiness, where evaluators can freely initialize and update trust data with feedback from multiple sources, avoiding the bias of a single message source. It balances the accuracy of estimations against algorithm complexity. Secondly, considering that a trust estimation should reflect a device’s latest status, we propose a forgetting algorithm to ensure that trust estimations sensitively perceive changes in device status. Compared with conventional methods, it can set its parameters automatically to achieve good performance. Finally, to prevent trust attacks from misleading evaluators, we propose a tango algorithm to curb trust attacks and a hypothesis testing-based trust attack detection mechanism. We corroborate the proposed trust mechanism’s performance with simulation; the results indicate that even when challenged by many colluding attackers that can exploit different trust attacks in combination, it produces relatively accurate trust estimations, gradually excludes attackers, and quickly restores trust estimations for normal devices.


Introduction
The Internet of Things (IoT) is a network framework merging the physical domain and the virtual domain through the Internet [1]. IoT devices can collect information, process data, and interact with other connected members automatically. Security issues are major concerns persistent throughout the development of IoT. In IoT paradigms requiring device cooperation, a device may not have the capacity and integrity to complete most assignments and behave in the interest of most participants. Trust management is responsible for building and maintaining a profile of a device's trustworthiness in a network to ensure that most devices are trustworthy. It is crucial for applications depending on collaboration among IoT devices to guarantee user experience [2]. In this section, readers can interpret trust as a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action [3]. Trust mechanisms for issues in the traditional security field may not acclimatize well to IoT applications due to the following technical characteristics of IoT [4]:

• Popular IoT paradigms are heterogeneous, where devices have varying capabilities and communicate with various protocols. As a result, it is challenging to create a trust mechanism that can apply to different applications via easy adaptation.

• IoT devices usually possess limited computing power and memory. A practical trust mechanism must balance the accuracy of trust estimations and algorithm complexity.

• IoT devices are numerous and ubiquitous. A trust mechanism should be scalable to remain efficient when the number of devices grows in a network.

• Mobile IoT devices such as smartphones and intelligent vehicles are dynamic, frequently joining and leaving networks. This complicates maintaining their profiles of trustworthiness for trust mechanisms.
Apart from these challenges, malicious devices can mislead evaluators into producing incorrect trust estimations by launching trust attacks, which should also be a consideration for trust mechanisms [5]. The roles played by contemporary devices are more and more complex. The social IoT is a paradigm of this trend, where researchers introduce concepts from human social networks to study relations automatically built among devices whose roles can shift between requester and server [6]. This may render trust attacks more attainable and profitable. For example, malicious attackers collude to exaggerate their conspirators and slander other devices, aiming at a monopoly over service provision in a network. On the other hand, the social IoT facilitates communication among devices with different views, which is very helpful in locating trust attackers.
Researchers have proposed various trust mechanisms for different IoT networks, most of which adopt distributed architectures due to the quantity and ubiquity of IoT devices. Usually, a trust mechanism ultimately quantifies a device's trustworthiness with a trust estimation derived from a data fusion process. Data fusion, as the core function of a trust mechanism, is responsible for utilizing a batch of descriptions or metrics of a device's behavior with different times and sources to evaluate this device's trustworthiness. Bayesian inference and Dempster-Shafer theory [7] are widely used approaches for data fusion, applicable to different networks such as sensor networks and ad hoc networks [8][9][10][11][12][13]. Supported by related statistical principles, the former can accomplish data fusion through simple computing. Analogous to human reasoning, the latter permits expressing the extent of uncertainty related to an event rather than entirely recognizing or denying its authenticity. This property is useful when acquired information about an event is temporarily scarce. They can work in conjunction: the former processes data gathered from direct observation, and the latter processes data provided by other devices [8,10]. For similar reasons to those for adopting Dempster-Shafer theory, some research chooses fuzzy logic [14] or cloud theory [15] for data fusion. It is also common to construct experience-based formulas for data fusion so that a trust mechanism designed for a particular application fully considers the characteristics peculiar to this application [16][17][18][19][20]. For example, Chen et al. propose a trust mechanism for social IoT systems where data fusion produces three universal metrics related to a device's honesty, cooperativeness, and community interest. These metrics then feed into an application-specific formula to compute a trust estimation [16].
However, the above summarized technical characteristics of IoT bring the following common challenges that remain to be addressed in many existing trust mechanisms, regardless of whether their data fusion employs theoretical or empirical foundations. Firstly, a trust mechanism designed to solve specific trust problems of several applications is hard to suit to other applications via adaptation, although it is feasible to propose a universal trust mechanism [10]. Secondly, trust mechanisms employing distributed architectures and asking devices to share trust data cannot efficiently manage many devices due to their limited storage and communication capabilities. Thirdly, trust mechanisms often assume that devices can guarantee service quality, which does not apply to applications having inherent uncertainty. For example, interactions may occasionally fail due to noise interference in communication channels [15].
Moreover, the explanation of how parameters related to data fusion determine trust estimations is not detailed in many existing trust mechanisms, leading to an undesirable dependency on operational experience with trial and error in the deployment phase. This problem is not unique to experience-based data fusion; it can also occur in theory-based data fusion. A trust mechanism with theory-based data fusion may require extra parameters beyond the underlying theories to provide special features. For example, to give newer information more weight in data fusion, the trust mechanisms using Bayesian inference proposed in [8,10] utilize an aging factor and a penalty factor, respectively. Bayesian inference alone cannot provide this feature, since evidence collected in different periods is used equivalently to update the prior distribution. A poorly explicable parameter may limit a trust mechanism's performance. For example, in [16], the presented simulation indicates that the variation in the proposed trust mechanism's estimations is not significant when altering the parameter related to external recommendations. Some research proposes cryptography-based reliable approaches that can protect the integrity and privacy of data for healthcare [21] and vehicular [22] applications. However, devices competent in quickly performing encryption and decryption operations, such as the cloud servers and intelligent vehicles in this research, are not generally deployed in the IoT field.
Finally, solutions to trust attacks are often absent in existing trust mechanisms. In this paper, trust attacks refer to the following attacks on service defined in [5]: on-off attacks (OOAs), bad-mouthing attacks (BMAs), ballot-stuffing attacks (BSAs), discrimination attacks (DAs), self-promoting (SP) attacks, value imbalance exploitation (VIE), Sybil attacks (SAs), and newcomer attacks (NCAs). Although the following research gives analyses of how its data fusion mitigates influences from trust attacks or methods to identify trust attackers, several vulnerabilities remain. It may be accurate to assume that attackers are bad service providers in the case of faulty devices [8], but this assumption is no longer relevant for devices having more functions nowadays. The behavior of an attacker launching a DA may differ in front of two observers. Comparison-based methods against BMAs, like the ones in [13,16,23], may cause the observers to misjudge each other. The fluctuation in trust estimations alone cannot serve as an indication of trust attacks [24] because attacks are not the only cause of such fluctuation. Moreover, there is a lack of discussion surrounding DAs, collusion among malicious devices, and the ability to launch multiple trust attacks. Table 1 lists the protection against trust attacks explicitly stated in the related literature on trust mechanisms.
Fog computing [26] has been a popular technique for addressing IoT security issues [4]. In recent research employing fog computing to design trust mechanisms [27][28][29], a fog node serving as a forwarder or analyzer receives data from and disseminates results to devices. There is research [23][24][25][30][31][32] aimed at building bidirectional trust between devices and fog nodes for the case where a device needs to choose a service provider from nearby fog nodes. A fog node can complete tasks challenging for conventional techniques, such as processing big data from numerous devices, managing mobile devices, and responding with low latency [33]. Although fog computing is a handy tool for researchers, it cannot directly solve the three summarized problems, as they remain in this research.
Additionally, most trust mechanisms proposed in the literature derive a device's trust estimation from two sources: direct information gathered during interactions with this device and indirect information about this device provided by other devices. Ganeriwal et al. proposed a structure consisting of two modules: the watchdog module and the reputation system module. The former receives data from a sensor during each interaction and outputs a metric of the credibility of these data using an outlier detection technique. The latter takes these metrics and external evaluations of this sensor to output a metric of whether this sensor is faulty and delivers incorrect data [8]. This structure facilitates improvement in a trust mechanism's adaptability: the watchdog module processes direct information, and the reputation system module processes indirect information. For example, in addition to outlier detection, the watchdog module can utilize a weighted summation-based method [28] or machine learning to generate its metric.
Given these considerations, we propose a Bayesian trust mechanism (BTM), which emphasizes the reputation system module. BTM does not rely on any specific IoT technique and takes simple input to address the challenge of heterogeneity. It evaluates devices and identifies trust attackers by listening to feedback from diversified sources, under two common assumptions: first, devices frequently communicate with each other; second, normal devices are in the majority. These are our contributions in detail:

• This paper proposes a new trust estimation approach by adapting data structures and algorithms used in the beta reputation system (BRS). Designed for e-commerce trust issues, BRS has a feedback integration feature that combines Bayesian statistics and Jøsang's belief model, derived from Dempster-Shafer theory, to let data fusion fully utilize feedback from different sources [34]. This feature enables BRS to produce more accurate trust estimations, defined from a probabilistic perspective, to quantify an IoT device's trustworthiness. In contrast to previous research utilizing the two techniques, the data fusion of BTM enables the following novel and practical features: trust estimations that are universal, accurate, and resilient to trust attacks; efficient detection of various trust attacks; an option to combine fog computing as an optimization technique to address the challenges of scalability and dynamics; and a parameter setting explicable by probability theory.

• Based on the above trust evaluation, this paper proposes an automatic forgetting algorithm that gives more weight to newer interaction results and feedback when computing trust estimations. It ensures that an IoT device's trust estimation reflects the device's current status in time, retards OOAs, and expedites the elimination of adverse influences from trust attacks. In contrast to conventional forgetting algorithms, this algorithm can adjust this weight automatically to achieve good performance.

These two contributions form the trust evaluation module of BTM, which is less restricted by the heterogeneity of IoT and balances the accuracy of trust estimations and algorithm complexity. For convenient notation reference in subsequent sections, Table 2 lists all the notations used in BTM. (This paper continues to use the notation method in [34]. When a superscript and a subscript appear simultaneously, the former indicates an evaluator, the latter indicates an evaluatee, and a second subscript indicates a position in a sequence. Sometimes they are omitted for simplicity if there is no ambiguity.)

[Table 2, partially recovered: g⁻¹, the inverse mapping of g; ∆rep^i_j, an increment of rep^i_j, i.e., new evidence gathered from recent interactions with device j; evi^j_k, all external evidence of device k provided by device j; λ, the forgetting factor in the conventional form of forgetting algorithms in current research (Section 2.4); q, the evidence queue of device j; φ, the capacity of q; M^i and N^i, the two matrices in which evaluator i saves external evidence; a very tiny significance level used to identify reckless BMAs, BSAs, or VIE; η, a threshold such that evaluator i judges device j as suspicious if ω < γ₁ happens more than η times in a check.]

Materials and Methods
In this section, we elaborate on how BTM functions in the following sequence: its system model; its basic trust evaluation approach, in which, given two probabilistic definitions of trust and reputation, the evaluator regards direct interaction results as evidence to perform Bayesian inference; its feedback integration mechanism, where Jøsang's belief model enables the evaluator to utilize external evidence from other devices as feedback in Bayesian inference; its forgetting mechanism; and its trust attack-handling module.

System Model
In BTM, devices are not necessarily homogeneous. The watchdog module generates a Boolean value representing whether the device is satisfied with the server's behavior during an interaction; each device determines the design of this module according to its specific requirements. The reputation system module, which includes all algorithms proposed in this paper, takes input from the watchdog module and feedback from other devices to produce trust estimations. BTM offers two feedback integration algorithms, providing two optional trust evaluation styles for the same network; their trust estimations are virtually identical given the same input. Figure 1 illustrates BTM's architecture from the view of evaluator i. In the purely distributed style, each device is equipped with the two modules and undertakes trust management on behalf of itself. Furthermore, devices need to share local trust data to ensure the accuracy of trust estimations. If a device has accomplished at least one interaction with device i lately, it is a neighbor of device i. When device i initiates contact with device k, it initializes the trust data of device k based on its current knowledge. Then, it requests the trust data of device k from all its neighbors to perform feedback integration. In the figure, two colluding malicious devices x and y try to mislead device i into producing trust estimations more favorable to them through trust attacks. As a neighbor of devices i and k, device j satisfies device i's request. Meanwhile, the two attackers always return fake trust data adverse to device k. Attacker x also discriminates against device k by ignoring its requests or providing poor service. BTM should help device i resolve the confusion over why devices k and x criticize each other when both appear to be good neighbors.
In the core style, a common device only has a watchdog module and directly submits its input as evaluations of other devices to a neighbor equipped with both modules. This neighbor is responsible for the whole network's trust management and disseminates trust estimations as the sole evaluator; it is elected by devices or designated by managers beforehand. A device can send an evaluation after each interaction or merge multiple evaluations into one before reporting to this evaluator. The evaluator periodically checks whether each device functions well by demanding service or self-check reports. This process is unnecessary if the evaluator can receive adequate evaluations from neighbors that can guarantee their service quality, thanks to a property of BTM's feedback integration.
An application selects the better style according to its conditions. The main difference between the two styles is the determinant of scalability: in the former, it hinges on the average storage and communication abilities of most devices, while in the latter, it mainly depends on the storage and computing abilities of the sole evaluator. In the latter case, it is easier to accommodate more devices by merely strengthening this evaluator. Moreover, if devices merge evaluations and keep the frequency of interactions unchanged, the sole evaluator invokes the feedback integration algorithm no more often than in the purely distributed style. Given these considerations, and since the research of elections among devices is not involved here, we adopt the core style and assume that the sole evaluator is a fog node. A fog node is flexible to deploy and has more resources than devices to execute algorithms. Typically managed by a large organization such as a company, it is also more reliable [26].
BTM forbids a device from sending an evaluation of itself, which precludes SP attacks. BTM is expected to accurately and efficiently distinguish malicious devices that can use the following modeled trust attacks in combination:

• OOAs: attackers periodically suspend attacks to avoid being noticed;

• BMAs: attackers always send negative evaluations after interactions;

• BSAs: attackers always send positive evaluations after interactions;

• DAs: attackers treat other devices with a discriminatory attitude, providing victims with nothing or terrible service;

• VIE: attackers always send evaluations contrary to reality after interactions.

Trust Evaluation Based on Direct Observation
Since direct observation is the most fundamental approach to trust evaluation, our study starts by abstracting the definition of trust given in Section 1 from the perspective of probability theory, introducing Bayesian statistics to process results from direct interactions. In BTM, a trust value quantifies a device's trustworthiness and is derived from a reputation vector storing digested information from all previous interactions. Bayesian statistics enables initializing reputation vectors freely and updating them iteratively.
Given that device j accomplishes an assigned task with probability p_j, we define device i's trust in device j, denoted t^i_j, as its estimation of the probability of receiving satisfying service from device j in the next interaction. It is desirable for device i that t^i_j approximates p_j. In daily life, building trust by synthesizing what people have said and done in the past is deemed reasonable. In this kind of reputation-based trust model, reputation can be regarded as a random variable that is a function of previous behavior, and trust is a function of reputation. The two steps can be formalized as

rep^i_j = f(b^i_{j1}, b^i_{j2}, . . . , b^i_{jn}),  t^i_j = g(rep^i_j),  (1)

where b^i_{jn} represents the behavior of device j observed by device i in the nth interaction, described by a random variable or a group of random variables. Updating the reputation given a new behavior is more convenient than updating the trust because the reputation can serve as a data structure containing digested information, while the trust's form can be more intelligible for people.
Traditionally, a device can qualitatively describe the other side's behavior in each interaction with a Boolean value. Such a value can serve as an instance of a random variable abiding by a binomial distribution B(1, θ), where θ represents an independent trial's success rate unknown to this device. In Bayesian statistics, a device can refer to acquired subjective knowledge to estimate a parameter with a few samples, called evidence, and update the result iteratively. For B(1, θ), θ's prior distribution is a beta distribution:

f(θ; α, β) = θ^(α−1) (1 − θ)^(β−1) / Beta(α, β),  (2)

where α and β are hyperparameters set beforehand according to the domain knowledge of θ, and the denominator is a beta function denoted by Beta(α, β). Note that Beta(1, 1) is identical to U(0, 1), where θ is uniformly distributed over [0, 1]; it is a reasonable prior distribution when the knowledge of θ is scarce. Given evidence data = (x_1, x_2, . . . , x_n) including r successful attempts and s unsuccessful attempts, the posterior distribution is obtained using Bayes' theorem, characterized by a conditional probability density function:

f(θ | data) = θ^(α+r−1) (1 − θ)^(β+s−1) / Beta(α + r, β + s).  (3)

Equation (3) is also the prior distribution in the next estimation. Rather than a posterior distribution giving all probabilities of an unknown parameter's values, it is more common to output the expected value of this distribution in Bayesian parameter estimation. The expected value of (2) is α/(α + β). Given (1), BTM represents device j's reputation at device i as rep^i_j = (α^i_j, β^i_j, r^i_j, s^i_j) and represents t^i_j as

t^i_j = (α^i_j + r^i_j) / (α^i_j + β^i_j + r^i_j + s^i_j),

where r^i_j = ∑_{k=1}^{n} b^i_{jk} and s^i_j = n − r^i_j. Because a greater α or β brings about less variation in the trust when r or s changes, device i can increase α^i_j and β^i_j if it has confidence in its knowledge of device j. As the evaluator should set α and β during the initialization of reputations, BTM does not suggest any operation on r and s without evidence-based reasons. Note that the feedback integration and forgetting mechanisms introduced in the following content do not change the fact that a trust is a parameter estimation in nature, on which BTM relies to handle heterogeneity and inherent uncertainty. The presented simulation will confirm that inherent uncertainty cannot mislead evaluators into misjudging normal devices even in the presence of trust attacks. In the following content, trust values and reputation vectors in BTM are abbreviated to trusts and reputations.
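As a concrete illustration, the reputation update and trust computation described above can be sketched in a few lines of Python; the class and field names are ours, introduced only for this example:

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    alpha: float      # subjective prior pseudo-count of successes
    beta: float       # subjective prior pseudo-count of failures
    r: float = 0.0    # observed successful interactions
    s: float = 0.0    # observed unsuccessful interactions

    def update(self, satisfied: bool) -> None:
        # Each interaction outcome is one Bernoulli sample of evidence.
        if satisfied:
            self.r += 1.0
        else:
            self.s += 1.0

    def trust(self) -> float:
        # Expected value of the posterior Beta(alpha + r, beta + s).
        return (self.alpha + self.r) / (self.alpha + self.beta + self.r + self.s)

rep = Reputation(alpha=1.0, beta=1.0)   # Beta(1, 1) = U(0, 1): no prior knowledge
for outcome in (True, True, False, True):
    rep.update(outcome)
print(rep.trust())  # (1 + 3) / (2 + 4) ≈ 0.667
```

Note how a larger prior mass α + β would pull the estimate toward α/(α + β) and damp the effect of each new observation, matching the remark above about confident initialization.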

Feedback Integration
Feedback integration enables updating reputations using external evidence contained in evaluations from other devices to expedite the acquisition of accurate trusts. It also retards DAs by synthesizing evaluations of a device from diversified views. Derived from the combination of Jøsang's belief model with Bayesian statistics and formalized with group theory, BTM's feedback integration can serve as a more accurate extension of BRS [34]. As illustrated in Section 2.1, BTM includes two feedback integration algorithms, providing two trust evaluation styles that produce virtually identical trusts; which is better hinges on the application. We also compare these algorithms with their counterpart in BRS. Note that BTM does not adopt the common practice of computing a global trust estimation by weight-summing direct and indirect trust estimations, as in [10,28]. In BTM, when an evaluator receives a piece of feedback, it directly digests this feedback's influence on a device's trust into this device's reputation.

Derivation of Feedback Integration
An evaluation's effect should be proportional to the source's trustworthiness, which is practicable by circulating the opinion defined in Jøsang's belief model. Device i's opinion about device j is o^i_j = (b^i_j, d^i_j, u^i_j), where b^i_j, d^i_j, u^i_j ∈ [0, 1] and b^i_j + d^i_j + u^i_j = 1. b^i_j is the probability of a statement from device j being true in device i's view, and d^i_j is the probability of this statement being false. The sum of b^i_j and d^i_j is not bound to be unity, and u^i_j expresses device i's uncertainty about this statement. In other words, they are belief, disbelief, and uncertainty. Device j sends o^j_k as its evaluation of device k to device i. Device i processes o^j_k using an operation called belief discounting [34], which produces

o^{i:j}_k = (b^i_j b^j_k, b^i_j d^j_k, d^i_j + u^i_j + b^i_j u^j_k).

This process can be represented as a binary operation ⊗ upon the opinion set U_o = {(b, d, u) | b, d, u ∈ [0, 1], b + d + u = 1}; (U_o, ⊗) is a monoid with the identity element (1, 0, 0). On the other hand, the updating of reputations using evidence can be represented as a binary operation ⊕ upon a subset of the reputation set U_r = {(c_α, c_β, r, s) | r ≥ 0, s ≥ 0}, where α and β are constants: given two reputations a, b ∈ U_r, a ⊕ b = (c_α, c_β, a.r + b.r, a.s + b.s), where '.' denotes fetching a scalar in a vector. (U_r, ⊕) is a commutative monoid, and its commutativity ensures no exception when simply adding positive and negative cases to merge evidence.
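A minimal sketch of the belief-discounting operation and its identity element, assuming opinions are represented as plain (b, d, u) tuples:

```python
def discount(o_ij, o_jk):
    # Josang's belief discounting: device i scales device j's opinion
    # about device k by how strongly i believes statements from j.
    b_ij, d_ij, u_ij = o_ij
    b_jk, d_jk, u_jk = o_jk
    return (b_ij * b_jk,
            b_ij * d_jk,
            d_ij + u_ij + b_ij * u_jk)

identity = (1.0, 0.0, 0.0)   # full belief in the source: nothing is discounted
o_jk = (0.7, 0.1, 0.2)
assert discount(identity, o_jk) == o_jk
# The result is always a valid opinion: its components still sum to one.
b, d, u = discount((0.5, 0.3, 0.2), o_jk)
print(round(b + d + u, 10))  # 1.0
```

The closure under the (b, d, u)-sum constraint is what makes (U_o, ⊗) a monoid: discounting an opinion always yields another opinion.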
In BTM, o^i_j is determined by rep^i_j with a function from U_r to U_o defined in (6). It is a bijection, and the inverse function is (7). Algorithm 1 describes how device i integrates rep^j_k as an evaluation using these two equations. Equation (8) directly gives the result of the belief discounting. This algorithm precludes SP attacks because a sender cannot provide an evaluation of itself. Note that input parameters' original values change when they are altered in BTM's algorithms.
Note that r^j_k suffers more discounting when the subjective parameters related to device j increase. Moreover, when α^i_k = α^j_k and β^i_k = β^j_k, so that rep^i_j is comparable with rep^i_k, r^i_j = ∞ is the only way to exempt r^j_k from discounting. Algorithm 1 is suitable for the purely distributed style, where devices should periodically share their reputation data for the sake of feedback integration. Evaluator i prepares two reputations for device j: one comprises only evidence from interactions, while the other synthesizes both direct and discounted external evidence. The former is the base for the latter and is provided to other devices as the evaluation of device j. The latter is the base for t^i_j and for discounting evaluations from device j. Note that when evaluator i has integrated an old rep^j_k, it needs to compute the latter reputation from scratch if it wants to update with a newer rep^j_k.
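The effect of the subjective parameters on discounting can be illustrated with a sketch. The mapping below is an assumed form of the reputation-to-opinion function, in which the prior mass α + β supplies the uncertainty component; the paper's exact Equation (6) may differ in detail:

```python
def rep_to_opinion(alpha, beta, r, s):
    # Assumed mapping from a reputation to an opinion: belief comes from
    # positive evidence, disbelief from negative evidence, and uncertainty
    # from the subjective prior mass alpha + beta.
    total = r + s + alpha + beta
    return (r / total, s / total, (alpha + beta) / total)

# Larger subjective parameters for the sender mean a smaller belief b_ij
# and hence stronger discounting of the evidence it forwards; only
# r -> infinity would drive b_ij to 1 and exempt forwarded evidence
# from discounting.
b_small_prior, _, _ = rep_to_opinion(1, 1, 8, 0)   # b = 8/10
b_large_prior, _, _ = rep_to_opinion(4, 4, 8, 0)   # b = 8/16
assert b_small_prior > b_large_prior
```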

Incremental Feedback Integration
With the above practice, the devices' average storage and communication abilities for saving and sharing reputations determine the maximum number of members BTM can manage. Adapted from Algorithm 1 according to f(x + ∆x) ≈ f(x) + f′(x)∆x, Algorithm 2 concentrates all trust management tasks in a network on a sole evaluator. Imposing minimal trust management burdens on common devices and not requiring an evaluation's duplicates to be sent to different receivers, Algorithm 2 can extend BTM's scalability simply by strengthening the sole evaluator. Moreover, it can update a reputation iteratively with new evaluations rather than from scratch and can endow the evaluator with a global view for estimating devices and detecting trust attacks. Algorithm 2 applies to applications where cooperative devices have differential performance, such as smart home applications managing smart appliances using a smartphone or wireless router. Even if composed of homogeneous or dynamic devices like intelligent vehicles, applications can also adopt Algorithm 2 with the help of fog nodes [14,33].
In Algorithm 2, ∆rep from the device substitutes for rep as the evaluation, where ∆rep is the increment of rep: restricting α and β to be constants, it is the evidence gathered from recent interactions with a device since the last evaluation was sent. α and β are fixed to unity in common devices because they are no longer deeply involved in the details of generating trusts. Evaluator i cannot know r^j_k and s^j_k directly from ∆rep; therefore, they are saved in the vector evi^j_k. The disc function discounts ∆rep, where α^j_k + β^j_k is replaced by two. Note that ∆rep^j_k is a direct observation result in device j, while an evaluation needs to be discounted in fog node i. In BTM, ∆rep^j_k is called direct evidence in the former case and feedback in the latter case. Additionally, (9) is the equation for integrating positive feedback in BRS:

r^{i:j}_k = 2 r^i_j r^j_k / ((s^i_j + 2)(r^j_k + s^j_k + 2) + 2 r^i_j).  (9)

To enable the free initialization of a device's reputation even without evidence, BTM separates α and β from r and s when representing this reputation and alters the mapping from reputations to opinions, resulting in the difference between (8) and (9). In the elementary form of providing feedback in BRS, the sender evaluates an agent's performance in a transaction with a pair (r, s), where r + s = w and w is a weight for normalization. The evaluator discounts this pair using (9) [34]. However, r^{i:j}_k is a concave function of r^j_k.
However, evaluator i cannot directly know from this pair the r^j_k related to all previous transactions. The sender should add the pair of the new transaction to the pair of previous transactions and send this sum as the evaluation. Note that, as the evaluation to be integrated in Algorithm 1, rep^j_k includes all evidence of previous interactions between devices j and k. This concavity provides some resistance against BMAs and BSAs, as Figure 2 shows, where rep^i_j = (1, 1, 8, 0), rep^i_k = (1, 1, 8, 0), and rep^j_k = (1, 1, 0, 0) at the outset; s^j_k increases by 1, and device j sends rep^j_k and ∆rep^j_k to device i per round. The following sections assume that α = β = 1 initially and that there is a fog node running Algorithm 2.
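The concavity can be checked numerically. The sketch below implements the BRS discounting of Equation (9) for both evidence counts, in a scenario mirroring Figure 2, where device j keeps reporting one more failure of device k per round:

```python
def brs_discount(r_ij, s_ij, r_jk, s_jk):
    # BRS belief discounting of forwarded evidence, applying the form of
    # Equation (9) to both the positive and the negative count.
    denom = (s_ij + 2.0) * (r_jk + s_jk + 2.0) + 2.0 * r_ij
    return (2.0 * r_ij * r_jk / denom, 2.0 * r_ij * s_jk / denom)

# Device i's evidence about j is (r, s) = (8, 0); j reports ever more
# failures of k.  The discounted negative count grows concavely, so
# reckless repeated bad-mouthing yields diminishing returns.
gains = []
prev = 0.0
for s_jk in range(1, 6):
    _, s_disc = brs_discount(8.0, 0.0, 0.0, float(s_jk))
    gains.append(s_disc - prev)
    prev = s_disc
assert all(a > b for a, b in zip(gains, gains[1:]))  # strictly diminishing
```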

Forgetting Algorithm
In Section 2.3, integrating direct evidence and discounted feedback in any order leads to the same reputation since (U_r, ⊕) is commutative. However, a device does not necessarily behave smoothly; it may break down due to fatal defects or shift to attack mode due to an OOA. A forgetting algorithm lets newer collected data carry more weight in trust evaluations to ensure that a device's trust estimation reflects its latest status in time. If the target value tar_n after the nth interaction is derived from a statistic stat_n of the nth interaction and previous statistics, a common forgetting form like the one used in BRS [16,28,34] is

tar_n = stat_n + λ · tar_{n−1},

which uses a forgetting factor λ < 1.
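The exponential weighting implied by the conventional form tar_n = stat_n + λ · tar_{n−1} can be made explicit with a short sketch:

```python
def forget(stats, lam):
    # Conventional forgetting: tar_n = stat_n + lam * tar_{n-1}, so the
    # statistic from m rounds ago ends up weighted by lam ** m.
    tar = 0.0
    for stat in stats:
        tar = stat + lam * tar
    return tar

# With lam = 0.5, the oldest of three statistics is weighted by 0.5 ** 2:
print(forget([1.0, 0.0, 0.0], 0.5))  # 0.25
print(forget([0.0, 0.0, 1.0], 0.5))  # 1.0 (the newest statistic keeps full weight)
```

The drawback motivating the next algorithm is that λ must be tuned by hand for each deployment.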
As the first example of utilizing the separated subjective and objective parameters in reputations, we propose Algorithm 3, which can achieve the same forgetting by automatically adjusting these parameters. Its idea is that, given α and β embodying subjective information related to the trust, evaluator i stores direct evidence of device j in a queue q = (∆rep_0, ∆rep_1, . . . , ∆rep_{n−1}) containing at most n pieces of evidence. The smaller the evidence's subscript is, the older it is. When new evidence arrives at a full queue, the oldest evidence is discarded and becomes the experience used to update α and β. Evaluator i also merges discounted feedback into the element at the queue's rear. Using a single queue containing the two kinds of evidence reduces the algorithm's complexity and memory footprint with negligible deviations. In Algorithm 3, pop(q, x) lets the oldest element quit q and gives its value to x. q's capacity φ varies with circumstances: a larger φ reduces the standard deviation of trusts but requires more memory space. The evaluator saves feedback in two two-dimensional arrays represented by two matrices M and N, where M[j][k] denotes the element in the jth row and kth column of M; given the two matrices, evi^j_k is available to the evaluator, and indexes and serial numbers start at zero. The for-loop updates elements where device j is an evaluation sender in M^i and N^i because q^i_j gives r^i_j an upper bound; without this operation, the evaluation's effect would indefinitely decline according to the analysis of the concavity of (8) in Section 2.3. v is a random variable abiding by U(0, 1), used to choose which matrix to update because the order of the arrival of feedback is not recorded.
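The queue mechanics can be sketched as follows. The rule by which evicted evidence updates α and β is our assumption for illustration (attenuated absorption with a hypothetical decay factor); the exact update in Algorithm 3 is more involved:

```python
from collections import deque

def absorb_oldest(q, alpha, beta, capacity, new_evi, decay=0.5):
    # Bounded evidence queue: when full, the oldest evidence increment is
    # popped and absorbed, attenuated, into the subjective parameters
    # (alpha, beta) instead of simply vanishing.  The decay factor and
    # absorption rule are hypothetical, not the paper's exact update.
    if len(q) == capacity:
        old_r, old_s = q.popleft()
        alpha += decay * old_r
        beta += decay * old_s
    q.append(new_evi)
    return alpha, beta

q = deque()
alpha, beta = 1.0, 1.0
for evi in [(1, 0), (1, 0), (0, 1), (1, 0)]:   # (r, s) increments per round
    alpha, beta = absorb_oldest(q, alpha, beta, capacity=3, new_evi=evi)
assert len(q) == 3    # the sliding window never exceeds its capacity
assert alpha > 1.0    # evicted evidence survives, attenuated, in the prior
```

Because recent evidence sits in the queue at full weight while older evidence is compressed into the prior, newer data dominates the trust estimate without a hand-tuned λ.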
As explained in Section 2.3, the evaluator prepares two kinds of reputations for a device in the purely distributed style. Therefore, the evaluator also maintains the two queues of these reputations and updates them simultaneously in Algorithm 3. Moreover, when the evaluator sends out a reputation as an evaluation, it can append the corresponding queue to this reputation. The receiver then merges this queue into its own. In this way, the sender and the receiver forget the same evidence at about the same time.
Input: ∆rep^i_j, rep^i_j, q^i_j, φ, device number n, M^i, N^i
1: if the size of q^i_j = φ then
2:     pop(q^i_j, x)

Module against Trust Attacks
Algorithms 2 and 3 cannot guarantee the accuracy of trusts in the face of trust attacks. In this section, we first analyze the abilities and limitations of BTM's feedback integration against trust attacks to clarify the aims of BTM's trust attack handling module. This module consists of a tango algorithm that curbs BMAs by adapting Algorithm 2 and a hypothesis testing-based trust attack detection mechanism against BMAs, DAs, BSAs, and VIE.

Influences of Trust Attacks and Tango Algorithm
For DAs, Algorithm 2 synthesizes feedback from different perspectives to render them unprofitable. For BMAs and BSAs, a reckless attacker sends a lot of fake feedback in a short time, which is inefficient due to the concavity of (8), illustrated in Figure 2. A patient attacker sends fake evaluations with an inconspicuous frequency for long-term gains, which works because Algorithm 2 does not check the authenticity of feedback.
Applying the principle that it takes two to tango to curb BMAs, Algorithm 4 is an adaptation of Algorithm 2. It divides blame between the two sides when processing negative feedback, where the side with higher trust is given more ∆s to criticize the other side. Assuming that most interactions between normal devices succeed, Algorithm 4 renders BMAs lose-lose with O(1) extra computation, making an independent BMA attacker's trust decline continuously. Algorithm 1 can be adapted with the same idea as Algorithm 4.
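A minimal sketch of the blame-splitting idea (the proportional split rule here is a plausible illustration, not the paper's exact Algorithm 4):

```python
def tango_update(trust_i, trust_j, delta):
    """Sketch of 'it takes two to tango': on negative feedback, the
    penalty delta is divided between both sides in proportion to trust,
    so the higher-trust side shifts more blame onto the other but still
    absorbs part of the penalty. An attacker repeatedly criticizing
    normal devices thus pays with its own trust, making BMAs lose-lose."""
    total = trust_i + trust_j
    share_j = trust_i / total   # higher-trust critic shifts more blame to j
    share_i = trust_j / total   # but still absorbs part of the penalty
    return trust_i - delta * share_i, trust_j - delta * share_j
```

The update is O(1), matching the negligible extra cost claimed for Algorithm 4.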

Trust Attack Detection
Algorithm 4 mitigates trust attacks when normal devices are in the majority, tolerating some malicious devices. For harsher circumstances, we propose a trust attack detection mechanism to identify attackers, which works in parallel with Algorithm 4 because the latter can filter subjects for the former.
BTM saves feedback in M and N for feedback integration. These matrices also correspond to directed graphs in graph theory: if device j sends criticism of device k, there is an edge from node j to node k whose weight is N[j][k]. DAs and BMAs can cause abnormal in-degrees and out-degrees in N, respectively, while BSAs can cause an abnormal out-degree in M. This is an outlier detection problem and has universal solutions [35]. For example, the local outlier factor (LOF) algorithm [36] can check the degrees of n nodes with O(n²) time complexity.
BTM uses a new approach quicker than LOF to detect these anomalies. With BTM's feedback integration, a device's trust is a parameter estimation that is hard to manipulate using trust attacks. If M^i[j][k] = m and N^i[j][k] = n, device j reports that device k succeeded m times and failed n times in recent interactions. Hypothesis testing can check the authenticity of this report; its idea is that a small-probability event is unlikely to happen in a single trial. Using a p-value method, the null hypothesis is that device j honestly sends feedback, the test statistics are M^i[j][k] and N^i[j][k], and the corresponding p-value, denoted by ω, is the tail probability of observing at least n failures in m + n interactions given device k's trust t^i_k: ω = Σ_{x=n}^{m+n} C(m+n, x)(1 − t^i_k)^x (t^i_k)^{m+n−x}. If the null hypothesis is true, ω should not be less than a significance level, denoted by γ, such as 0.05. In Algorithm 5, against BMAs, BSAs, and VIE, if t^i_j < ζ, evaluator i calculates ω along the jth row. γ_1 is for patient attackers, tolerating a frequency of rejected null hypotheses no greater than η. γ_2 is very small and is for reckless attackers. This algorithm can check a single node with O(n) time complexity. Note that Algorithm 3 keeps m + n hovering around φ.
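Under the assumption that honestly reported counts should be a plausible binomial outcome given the target's trust, the check can be sketched as follows (helper names are illustrative):

```python
from math import comb

def p_value(m, n, trust):
    """If device j honestly reports m successes and n failures for a
    device whose estimated success probability is `trust`, the failure
    count should not be an extreme binomial outcome. Returns the
    upper-tail probability of at least n failures in m + n trials."""
    total = m + n
    fail_p = 1.0 - trust
    return sum(comb(total, x) * fail_p**x * (1 - fail_p)**(total - x)
               for x in range(n, total + 1))

def suspicious(m, n, trust, gamma=0.05):
    """Reject the null hypothesis (honest feedback) when the p-value
    falls below the significance level gamma."""
    return p_value(m, n, trust) < gamma
```

For a device with trust 0.8, reporting 8 failures out of 10 interactions is an extreme outcome and is flagged, while 2 failures out of 10 is not.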
The DA detection algorithm (Algorithm 6) is obtained by adapting Algorithm 5: it calculates ω along a column and removes γ_2. Although the purely distributed style does not need M and N for feedback integration, it can introduce the two matrices to use Algorithm 5. Their updating is simple: when device i receives rep^j_k from device j, it sets the elements in the jth row and kth column of M^i and N^i according to rep^j_k.
Algorithm 5. Input: subject's serial number j, all reputations, device number n, threshold ζ, significance levels γ_1 and γ_2, tolerance η, M^i, N^i. Output: a Boolean value indicating whether the subject is suspicious.
Algorithm 6. Input: subject's serial number j, all reputations, device number n, threshold ζ, significance level γ_1, tolerance η, M^i, N^i. Output: a Boolean value indicating whether the subject is suspicious.

Results
In this section, we corroborate by simulation that, even when challenged by high-intensity trust attacks, BTM can accurately estimate normal devices and identify malicious devices for applications with inherent uncertainty. Note that this paper adopts the core trust evaluation style and assumes that the sole evaluator is a fog node. The platform is a host computer with an AMD Ryzen 5700G, 16 GB RAM, and Windows 11 Home edition. The program is written in C++ and compiled with MSVC (19.29.30137, x64).

Design, Trust Attack Tactics, and Metrics
Devices and fog nodes are simulated using independent threads whose execution sequences and results are unpredictable. To simulate the inherent uncertainty caused by various adverse factors with different sources, such as physical environments and networks, an interaction between two devices ends successfully with a probability of 0.8. There are three initial device numbers, denoted by n_0: 10, 20, and 50. A device uniformly chooses a server from the other devices and sleeps for 1 millisecond after evaluating this interaction. The number of requests is limited to 20n_0, representing a device's lifespan. When a device expires, it becomes inaccessible, and the fog node archives its trust data. n denotes the number of active devices. The fog node also forces a suspicious device to expire in advance to remove it. The fog node periodically performs a series of operations: requesting service from each device, digesting received feedback, applying the two trust attack detection algorithms to each device, and adjusting the interval so that it can receive about n² pieces of feedback before the next round.
When created, attackers first act normally within several requests to build credible profiles, which is a simple OOA tactic. An independent attacker's latency ranges within [0, 1/4] of 20n_0. A colluding attacker's latency is 5n_0 to maximize the impact of attacks. There are three types of attackers. A fox is an independent BMA attacker that sends negative feedback after an interaction. A miser is a colluding DA attacker that rejects requests not coming from a conspirator or fog node. A hybrid is a stronger miser that can also launch BMAs and BSAs: it sends positive feedback after an interaction when the server is a conspirator and negative feedback otherwise. Table 3 lists the parameter values related to the simulation setting and BTM's algorithms. The fog node judges a device as suspicious if its trust is below 0.5 or if it fails to pass the trust attack detection.
Due to the diversity of underlying theories in their design, there is no widely accepted benchmark for comparing the performance of trust mechanisms. For the aspect of identifying trust attackers, we borrow five metrics of classifiers in machine learning: precision is the proportion of true positives in all positives; recall is the proportion of true positives in all attackers; specificity is the proportion of true negatives in all normal devices; accuracy is the proportion of true positives and true negatives in all devices; and the F1 score is the harmonic mean of precision and recall. In addition, the average deviation is the average absolute difference between a normal device's final trust and 0.8; the average attacker trust is the average of the attackers' final trusts; and the check count is the frequency of checking a device whose trust is less than ζ.
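From the confusion-matrix counts (positives meaning devices flagged as attackers), the five borrowed classifier metrics can be computed as a quick sketch:

```python
def classifier_metrics(tp, fp, tn, fn):
    """Standard machine-learning classifier metrics as used above:
    tp = attackers correctly flagged, fp = normal devices wrongly flagged,
    tn = normal devices correctly cleared, fn = attackers missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # attackers caught
    specificity = tn / (tn + fp)             # normal devices cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, accuracy, f1
```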
The data presented below are averaged over 2000 repeated trials. Precision, recall, F1 score, and average attacker trust are not relevant when all devices are normal. The fog node records the average trusts of normal devices every 0.5 s when n_0 = 50.

Presentation
Tables 4 and 5 present BTM's performance as an identifier when challenged by foxes and hybrids. We omit the data of misers because they are similar to those of foxes. Figures 3-5 record how the average trusts of normal devices change over time when challenged by foxes, misers, and hybrids. We interpret these data at the end of this subsection.
In Table 4, the columns of F1 score and accuracy indicate that BTM can thoroughly and correctly distinguish independent BMA attackers from normal devices. This detection ability performs best in recall and strengthens with the number of devices, while the proportion of attackers mainly determines its computing cost. In Figure 3, the curve of 0% foxes shows that occasional interaction failures and the side effect of Algorithm 4 cause a deviation in the normal devices' average trust, whose order of magnitude is 0.01. Foxes shift to attack mode one after another, leading to the decline of the other four curves. The normal devices' average trust is still higher than ζ in the worst case, mainly due to the mathematical basis of the feedback integration. Moreover, Algorithm 4 makes foxes pay a price in their own trust for criticizing other devices, rendering their feedback less persuasive. According to the column of average attacker trust in Table 4, a fox's trust drops faster than a normal device's, letting Algorithm 5 check the former earlier. Therefore, the advantage of Algorithm 4 far outweighs its side effect. Algorithm 3 gradually restores the normal devices' trust as the proportion of foxes decreases. Because the accuracy and F1 score are almost equal to one, the normal devices' trust can fully recover from BMAs. Moreover, the recovery rates of the four curves are similar. This result corroborates that BTM is robust and resilient against independent BMAs. In Figure 4, colluding DAs from misers can amplify the side effect of Algorithm 4.
Still, the normal devices' average trust is higher than ζ and fully recovers in the worst case. The misers' trust also drops faster than the normal devices'. This result corroborates that misers cannot overcome normal devices. Table 5 indicates the upper limit of BTM's protection against trust attacks. When making up half of all devices, hybrids can occupy small networks at a casualty rate of about 1/4, but their casualties become very heavy when n_0 = 50. In Figure 5, the trends of the latter three curves differ from Figures 3 and 4 because BTM cannot guarantee specificity in these cases. When the proportion of hybrids is 30%, the fog node misjudges several normal devices and fully restores the trusts of the remaining ones. Note that the normal devices' average trust also includes the misjudged ones.
These data also indicate that normal and malicious devices cannot coexist, and they support an optimization method. In practice, when devices that can guarantee service quality are sufficient, the fog node does not need to interact with a device to examine its status; instead, it regularly performs imaginary checks that always return successful results to maintain the effect of feedback from devices. We call this token mode; it can further counteract the side effect of Algorithm 4.

Comparison with Existing Research
In this subsection, we compare the performance of BTM with a reliable trust computing mechanism (RTCM) [28] and trust-based service management (TBSM) [16]. In both RTCM and TBSM, the forms of forgetting are conventional and similar to (10), and the global trust estimation is the weighted sum of the direct trust estimation from observation and the indirect trust estimation from feedback. RTCM features the utilization of feedback from multiple sources, where a fog node synthesizes direct trust estimations from devices and computes indirect trust estimations; a device requests indirect trust estimations from the fog node when it needs to compute global trust estimations. TBSM features comprehensive protection against trust attacks. As discussed in the Introduction, existing trust mechanisms share a common issue in dealing with trust attacks. The following simulation illustrates BTM's advantage of reaching a good trade-off between protection against BMAs and OOAs.
The setting of this simulation is identical to Table 3 except that the interaction success rate is 1 and ζ = 0, turning off the trust attack detection. There are five devices, where devices 0, 1, and 2 are normal, and devices 3 and 4 are attackers. The program records trust estimations every 10 milliseconds, averaged over 2000 trials.
In the first case, devices 3 and 4 are colluding foxes that launch BMAs against the other three devices. From the view of device 0, Figure 6 records the average global trust estimations of devices 1 and 2 in the three trust mechanisms, and Figure 7 records the average global trust estimations of attackers 3 and 4. They indicate that, in RTCM, attackers can mislead the evaluator into producing trust estimations more favorable to them via BMAs, as the fog node does not check the authenticity of feedback from devices. On the contrary, BMAs hardly influence TBSM because it ignores most feedback from the two attackers. TBSM's idea against BMAs can be applied to Algorithm 1: evaluator i calculates (t^i_k − t^j_k)/t^i_k when it receives rep^j_k and ignores rep^j_k if the result exceeds 0.5. However, this idea can lead to misjudgment when DAs exist, which happens in the following case. The performance of BTM lies between RTCM and TBSM, where BMAs cannot bring attackers extra benefits.
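TBSM's filtering idea described here can be sketched as follows (the function name is hypothetical):

```python
def accept_feedback(t_ik: float, t_jk: float, threshold: float = 0.5) -> bool:
    """Sketch of TBSM's defense against BMAs applied to Algorithm 1:
    evaluator i compares its own estimate t_ik of device k with the
    received estimate t_jk and ignores the feedback when the relative
    deviation (t_ik - t_jk) / t_ik exceeds the threshold.
    Returns True when the feedback should be accepted."""
    return (t_ik - t_jk) / t_ik <= threshold
```

This also shows why DAs cause misjudgment: a legitimately low estimate t_jk reported by a DA victim looks like a BMA to evaluator i and gets discarded.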
In the second case, devices 3 and 4 are misers. They launch DAs against devices 1 and 2 while pretending to be normal in front of device 0. Figures 8 and 9 record the trust estimations corresponding to Figures 6 and 7. They indicate that RTCM performs best: it decreases the attackers' trust estimations without influencing normal devices. TBSM is unaware of the existence of DAs because device 0 wrongly regards feedback from devices 1 and 2 as BMAs. The performance of BTM lies between RTCM and TBSM due to the side effect of Algorithm 4, where DAs likewise cannot bring attackers extra benefits.

Discussion
The presented simulation results corroborate that BTM's idea of rendering trust estimations universal and accurate is feasible: assuming that devices frequently communicate with each other and that most of them are normal, the evaluator quantifies a device's trustworthiness with a strictly probabilistic value, whose updating utilizes direct and external evidence under the guidance of Bayesian statistics. As for the issue of trust attacks, BTM's feedback integration mechanism based on Jøsang's belief model features listening to multiple message sources and adopts the principle that it takes two to tango. Therefore, it can mitigate their harm and turn their profit negative when attackers are in the minority. For example, when the proportion of hybrids is 20%, BTM's performance in accuracy and average attacker trust means that attackers' trust estimations drop below 0.5 at such a drastic rate that trust attack detection is unnecessary. In environments with high-intensity attacks, the importance of BTM's trust attack detection mechanism becomes more evident. These results also confirm that, even if simultaneously influenced by inherent uncertainty and trust attacks, BTM can prevent the latter from misleading evaluators into misjudging normal devices. In addition, this paper introduces fog computing to improve BTM's scalability. As one motivation for the emergence of fog computing, it helps manage dynamic devices [33] and handle NCAs. For example, when a fog node meets an unacquainted intelligent vehicle, it can request related trust data from a cloud center or a nearby fog node using this car's identifier. There remains room for improving BTM's security. A valuable research direction is solving the susceptibility to SAs. SA attackers can forge fake identities to spread misinformation whose sources appear different. As a result, they can circumvent the second assumption on which BTM depends and successfully launch trust attacks such as BMAs and BSAs.

Conclusions
This paper proposes BTM, a lightweight, adaptable, and universal trust mechanism for IoT devices and an enhanced edition of BRS providing more accurate trust estimations and better protection against trust attacks. Based on Bayesian statistics and Jøsang's belief model, an evaluator updates a device's reputation vector using direct interaction results and external feedback, and a device's trust estimation comes from its reputation vector. This process can preclude SP attacks and employ fog computing as an optimization technique to address the challenges of managing numerous or dynamic devices. BTM's forgetting algorithm automatically sets its parameters and ensures that a trust estimation reflects a device's latest status, retards OOAs, and expedites the elimination of influences from trust attacks. BTM's tango algorithm curbs BMAs with negligible side effects and extra computation by dividing the blame for a failed interaction between the two sides during the processing of feedback from different devices. BTM's trust attack detection, based on its trust estimations and hypothesis testing, can identify BMAs, BSAs, DAs, and VIE. The simulation results corroborate that BTM can deal with colluding attackers combining BMAs, BSAs, and DAs as long as most devices are normal.

Figure 1. BTM's architecture with two optional trust evaluation styles, purely distributed style and core style, illustrated from the view of evaluator i.

Figure 2. Trust of the victim per round in different feedback integration modes when consecutive criticisms are met.

Figures 6-9.
Figure 6. Average global trust estimations of devices 1 and 2 in RTCM, TBSM, and BTM, from the view of device 0, recorded every 10 milliseconds. The forgetting factor is 0.5, and the parameter of indirect trust is 0.5 in RTCM; they are 0.3 and 0.1 in TBSM. φ = 5 and ζ = 0 in BTM.

Table 1. Coverage of trust attacks in existing trust mechanisms.
Table 2. Test statistic of hypothesis testing related to trust attack detection.

Table 3. Simulation and algorithm parameters.

Table 4. Data of the fox, rounded to five decimal places.

Table 5. Data of the hybrid, rounded to five decimal places.
Average trusts of normal devices per 0.5 s, n_0 = 50; all attackers are foxes.
Table 6 presents BTM's performance against hybrids in this mode.

Table 6. Data of the hybrid in token mode, rounded to five decimal places.