3.2.1. The Process of Direct Trust Value Calculation
To overcome the obstacle of threshold setting in the threshold limitation approach, which relies on the network administrator’s a priori knowledge of the particular network, we assign a cooperation probability to each value through the outlier detection method when calculating the direct trust values for DSR, CSR, DRR, CRR, ECR, and DA, and then use the Bayesian Beta method to derive the direct trust values.
In contrast, the forwarding-related metrics, DFR and CFR, represent fundamentally different types of evidence. Forwarding is not a continuous measure but a discrete binary event: a packet is either forwarded successfully or it is not. Consequently, it is neither meaningful nor necessary to compute a probabilistic cooperation value based on distributional assumptions. Instead, forwarding trust is modelled using a binary assignment, where successful forwarding (observed through listening or ACK mechanisms) is assigned a cooperation value of 1 and unsuccessful forwarding is assigned a value of 0. These binary outcomes are then directly incorporated into the Bayesian Beta method. The process of direct trust value calculation is shown in Figure 4.
The evaluation is conducted over a defined period of time using the following framework for each metric. At the start of an evaluation period, we initialize the Beta distribution parameters for each metric: the count of cooperative interactions (α) and the count of non-cooperative interactions (β). These two parameters form the basis for calculating trust values. The initial state for all nodes and all metrics is set to zero (α = 0, β = 0), indicating no prior history.
Next, the node collects trust metric datasets for each metric. The cooperation probability (p) is calculated using a Gaussian distribution model based on the collected trust metric datasets. This allows the node to determine the likelihood that its neighbours are behaving cooperatively, and p is used to update the α and β parameters. The direct trust value (T) is computed from α and β as the expected value of the Beta distribution at the end of the period, representing the trustworthiness of a node in a specific metric.
3.2.2. The Method of Cooperation Probability Calculation
Multiple studies, through experiments or simulations, have verified that in WSNs where node behaviour is independent, homogeneous, and static, the traffic of nodes and the collected sample data approximately follow a Gaussian distribution when the number of nodes is large enough [
37,
38,
39]. Therefore, we adopt a cooperation probability calculation method based on a Gaussian distribution.
The packet size is a common feature in traffic analysis and intrusion detection [
40]. However, it is not included in the proposed trust model because DoS attacks typically manifest through abnormal transmission rates and forwarding behaviours rather than size anomalies [
14]. This design choice reduces the computational overhead while maintaining detection effectiveness. Let us assume that the evaluation node is node i and that the set of all its neighbouring nodes being evaluated is represented as N_i. In an evaluation period, node i monitors its neighbours to build datasets for calculating the cooperation probabilities. The specific data collected for each metric is outlined in Table 1.
This method assumes that the data follows a Gaussian distribution, meaning most data points are concentrated around the mean, and the probability of data points appearing decreases as they deviate further from the mean. Assume the data sample set collected by the sensor node is X = {x_1, x_2, …, x_n}. First, it is necessary to calculate the mean μ and standard deviation σ of the dataset as shown in Equations (1) and (2), which are the basic parameters required for calculating the Gaussian distribution. For each data point x_k, the probability density value f(x_k) can be calculated using the Gaussian distribution probability density function as shown in Equation (3), where 1/(σ√(2π)) is the normalization constant of the Gaussian distribution, ensuring the total probability is 1, and f(x_k) is the probability density value for each data point, reflecting the degree of deviation of the data point from the mean. f(x_k) is not the actual probability but the probability density value, indicating the relative frequency of occurrence near that point. The higher the density, the closer the data point is to the centre (mean), and the higher the probability of occurrence; the lower the density, the further it is from the mean, possibly indicating an outlier. The curve of a Gaussian distribution probability density function is shown in Figure 5. In this figure, the mean (μ) is 50 and the standard deviation (σ) is 10.
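For illustration, the computations behind Equations (1)–(3) can be sketched as follows (a sketch under the assumption that the population standard deviation is used; the function names are ours):

```python
import math

def gaussian_params(samples):
    """Mean and (population) standard deviation of the collected dataset,
    as in Equations (1) and (2)."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    return mu, sigma

def gaussian_pdf(x, mu, sigma):
    """Gaussian probability density of Equation (3); the leading factor
    1 / (sigma * sqrt(2 * pi)) is the normalization constant."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return norm * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```

A data point far from the mean yields a much smaller density value than one near it, which is what the outlier interpretation above relies on.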
Then, we define the cooperation probability of a data point as the ratio between its probability density and that of the mean, as shown in Equation (4), where P(x_k) represents the cooperation probability of x_k. The function curve of our cooperation probability is shown in Figure 6. In this figure, the values of the parameters μ and σ are the same as in Figure 5. From this figure, we can see that the range of the cooperation probability is from 0 to 1. The closer the value is to the mean, the closer its cooperation probability is to 1; the further away from the mean, the closer its cooperation probability is to 0.
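Because the normalization constant cancels when dividing by the density at the mean, the cooperation probability of Equation (4) reduces to a single exponential; a minimal sketch:

```python
import math

def cooperation_probability(x, mu, sigma):
    """Ratio of the Gaussian density at x to the density at the mean mu.
    The normalization constant cancels, leaving exp(-(x - mu)^2 / (2 sigma^2)),
    which equals 1 at the mean and approaches 0 far from it."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# With mu = 50 and sigma = 10, the values used in Figures 5 and 6:
p_center = cooperation_probability(50.0, 50.0, 10.0)   # exactly 1.0
p_outlier = cooperation_probability(80.0, 50.0, 10.0)  # exp(-4.5), a likely outlier
```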
Since a single node j sends multiple data packets (n_j packets, where n_j varies per node), a unique method is used to compute its overall DA cooperation probability. The final DA cooperation probability for a node is the arithmetic mean of the cooperation probabilities calculated for each individual data packet it sent during the period, as shown in Equation (5), where P_DA(j) represents the DA cooperation probability of node j, P_DA(x_k) represents the cooperation probability of the DA for the kth data item transmitted by node j, and n_j represents the total number of items sent by node j during the evaluation period.
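Equation (5) is a plain arithmetic mean over the per-packet values; a minimal sketch (the function name is ours):

```python
def da_cooperation_probability(packet_probs):
    """Mean of the per-packet DA cooperation probabilities of one node
    over an evaluation period, as in Equation (5)."""
    return sum(packet_probs) / len(packet_probs)
```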
3.2.3. The Method of Direct Trust Value Calculation
The direct trust value is then computed using the classical Bayesian Beta approach. Ganeriwal et al. [
11,
41] provided a detailed investigation into how Bayesian formulations and Beta distributions can be applied to trust modelling within WSNs. They extended the cooperation metric from a binary rating to an interval rating, meaning it is no longer simply cooperative or non-cooperative but a probability. They applied the Dirichlet process and derived that the parameter update formulas for binary ratings are similarly applicable to interval ratings. After a single transaction, if the assigned cooperation probability is p, the Beta parameters are updated as shown in Equation (6).
From the viewpoint of node i, the parameters α and β characterize the cooperative and non-cooperative behaviours observed between nodes i and j at the initiation of a transaction. Similarly, α′ and β′ characterize the cooperative and non-cooperative behaviours of nodes i and j as observed by node i upon the completion of a single transaction. The trust score of node j computed from node i’s perspective is the expected value of the Beta distribution, which can be easily computed, as shown in Equation (7).
We consider the behaviour of sending or receiving packets within an evaluation period as a single transaction. By substituting Equation (6) into Equation (7), we can derive the method for calculating the direct trust values for DSR, CSR, DRR, CRR, ECR, and DA at the end of an evaluation period, as shown in Equation (8), for each metric m, where m ranges over the six behavioural metrics (DSR, CSR, DRR, CRR, ECR, and DA). α_m and β_m represent the count of cooperative interactions and the count of non-cooperative interactions of metric m of node j. p_m represents the assigned cooperation probability for node j with metric m; this probability is calculated based on observed behaviours and is used to update the α_m and β_m parameters. T_m represents the direct trust value of node j in metric m; this value is computed from α_m and β_m at the end of the period. The relationship between these parameters for all six metrics is summarized in Table 2.
3.2.4. The Specific Case of DFR and CFR
The direct trust values for DFR and CFR are also calculated using the classical Bayesian Beta method, as shown in Equation (7). However, unlike the other trust metrics, there is no need to calculate the cooperation probability of the dataset via Equation (4). Instead, the cooperation probability is defined as a binary rating s, which is either 1 or 0. When it is confirmed through monitoring or ACK methods that node j has successfully forwarded a packet, s is assigned a value of 1; otherwise, it is assigned a value of 0. At the end of the evaluation period, the Beta parameters are updated, as shown in Equation (9), where n represents the total number of data or control packets that node j either successfully forwarded or failed to forward during the evaluation period. For each forwarding action, a cooperation rating is assigned: s_k = 1 if the packet was forwarded successfully, and s_k = 0 otherwise. These ratings form the set S = {s_1, s_2, s_3, …, s_n}, with each s_k ∈ {0, 1}. By substituting Equation (9) into Equation (7), the formula for the direct trust values of DFR and CFR at the end of an evaluation period is obtained, as in Equation (10), where T_DFR and T_CFR represent the direct trust values of DFR and CFR for node j, respectively. The parameters α_DFR, β_DFR, α_CFR, and β_CFR represent the counts of cooperative and non-cooperative forwarding actions for data and control packets, respectively, at the beginning of an evaluation period. For initialization, all Beta parameters are set to zero.
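A minimal sketch of the binary forwarding update (Equation (9)) followed by the Beta expected value (the (α + 1)/(α + β + 2) form is an assumption of this sketch):

```python
def forwarding_trust(ratings, alpha0=0.0, beta0=0.0):
    """ratings: list of binary cooperation ratings s_k in {0, 1}.
    Successes are added to alpha and failures to beta, then the trust
    value is the expected value of the resulting Beta distribution."""
    successes = sum(ratings)
    alpha = alpha0 + successes
    beta = beta0 + len(ratings) - successes
    return (alpha + 1.0) / (alpha + beta + 2.0)

# Three successful forwards and one drop observed in one period:
t_dfr = forwarding_trust([1, 1, 0, 1])
```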
3.2.5. Calculating the Combined Trust Value
When calculating the combined trust value, we use a method that combines the reciprocal method and entropy-based method [
42] to assign weights to the direct trust values of each trust metric.
By taking the reciprocal of the direct trust values, we assign higher weights to lower trust values, accelerating the decline of combined trust for anomalous nodes. This helps achieve the goal of quickly identifying malicious nodes. We form a dataset of all direct trust values, organized according to the eight trust metrics (DSR, CSR, DRR, CRR, DFR, CFR, ECR, and DA) and represented as T = {T_1, T_2, …, T_8}. The weighting scheme for each trust metric, illustrated in Equation (11), is derived by applying the reciprocal function to each trust value. w_r represents the weighting factor assigned to the direct trust value from node i to node j for the rth trust metric; the value of r ranges from 1 to 8. ε is a very small number serving to prevent division by zero.
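A minimal sketch of the reciprocal weighting of Equation (11) (normalizing the weights to unit sum is an assumption of this sketch):

```python
def reciprocal_weights(trust_values, eps=1e-6):
    """Weights proportional to 1 / (T_r + eps), normalized to sum to 1.
    Lower trust values receive higher weights, so a single anomalous
    metric pulls the combined trust value down quickly; eps prevents
    division by zero."""
    raw = [1.0 / (t + eps) for t in trust_values]
    total = sum(raw)
    return [w / total for w in raw]
```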
Building on the reciprocal method, we further use the entropy-based method to adjust the final weights. Entropy-based weighting is often interpreted as reducing the weight of highly uncertain information sources. In contrast, our design adopts a security-oriented interpretation: when a specific trust metric exhibits high volatility within a neighbourhood, this volatility frequently reflects a stronger discriminative power for identifying abnormal behaviour. Thus, after reciprocal weighting penalizes low trust values, entropy is used to amplify metrics that better separate benign from malicious behaviours. By calculating the “volatility” (entropy) of each metric, we automatically identify which metrics are more important in the network scenario and accordingly increase the weights of these trust values. This allows the trust evaluation to automatically adapt to different network or attack scenarios. For example, in scenarios where the attack affects transmission rates, such as alarm systems, the trust value weights of communication metrics are automatically strengthened. In scenarios where the attack aims to affect DA, such as medical monitoring systems, the impact of DA metrics is automatically amplified.
Shannon’s information theory defines entropy as a fundamental metric for quantifying uncertainty in a system [42]. If a random variable X has a set of possible values {x_1, x_2, …, x_n} with a corresponding probability distribution P = {p_1, p_2, …, p_n}, then the entropy H(X) is defined as shown in Equation (12), where H(X) represents the entropy of the random variable X; p_k represents the probability of the event x_k; and b represents the base of the logarithm, typically 2 (for bits), e (for natural entropy, in nats), or 10. The more uniform the probability distribution, the greater the entropy, and the higher the uncertainty of the system. The choice of the logarithmic base in the entropy calculation significantly affects the sensitivity and discrimination of the resulting weights. In this study, we choose base 2 for the entropy function, as it offers higher sensitivity to probability imbalance. To illustrate the effect of the logarithm base, consider a simple two-outcome probability distribution P. The entropy values computed with different bases are shown in Table 3.
Among these three entropy values, the binary (base-2) entropy value is the largest, indicating that it is more sensitive to the degree of value fluctuation. Furthermore, we calculate the information entropy with bases 2, e, and 10 for the probability distributions (0.8, 0.2), (0.6, 0.4), and (0.5, 0.5), respectively, and perform a comparative analysis, as shown in Table 4. From Table 4, we can see that the difference in entropy values across the probability distributions is the largest when the base is 2. Therefore, by using base 2 for the entropy function, our trust model can more clearly distinguish which trust metrics are more important. For normalization, we use the factor 1/log₂ n, where n is the number of values in the distribution, which ensures that the entropy values are scaled to the [0, 1] interval.
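The effect of the logarithm base can be verified numerically; a minimal sketch comparing bases 2, e, and 10 on the three distributions discussed above:

```python
import math

def entropy(probs, base):
    """Shannon entropy of a discrete distribution, as in Equation (12)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Spread of entropy values between the most and least imbalanced
# distributions, for each base; base 2 gives the widest spread.
dists = [(0.8, 0.2), (0.6, 0.4), (0.5, 0.5)]
spread = {b: entropy(dists[-1], b) - entropy(dists[0], b)
          for b in (2, math.e, 10)}
```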
First, we ‘normalize’ each trust metric to form a pseudo-probability distribution, so that it can be used as an input for information entropy to measure the fluctuation of each metric. Suppose the dataset of the rth trust metric collected by node i from all its neighbouring nodes is D_r = {d_1, d_2, …, d_N}. We normalize this trust metric as shown in Equation (13), where N denotes the count of immediate neighbours of node i and P_r = {p_1, p_2, …, p_N} is the pseudo-probability distribution corresponding to D_r.
To quantify the uncertainty in the rth trust metric, we employ Equation (12) to compute its entropy and normalize it as shown in Equation (14), where H_r represents the normalized entropy of the rth metric calculated by node i within its neighbourhood. The smaller the H_r, the greater the variation, and the more important the metric, which should be given a higher weight. The entropy weight factor calculation method is shown in Equation (15), where ω_r represents the entropy weight factor for the rth metric calculated by node i within its neighbourhood.
3.2.7. Updating the Local Trust Value at the Node
We adopt an asymmetric update strategy in which trust values drop rapidly upon detecting abnormal behaviour and recover gradually during normal operation. This design follows common principles in trust management for WSNs, where rapid decay improves responsiveness to potential attacks while slow recovery prevents malicious nodes from regaining trust too quickly [
11]. Previous research showed that the unified decay–recovery strategy provides a stable performance across these behaviours without requiring attack-specific tuning [
21,
43].
By weighting the historical and current combined trust values with an aging factor, we derive the local trust value. To achieve a rapid decline in trust when a node exhibits malicious behaviour and a slow recovery after the node returns to normal behaviour, we use a dynamically and automatically adjusted aging factor. Based on the logistic function [44], we perform a nonlinear transformation to convert the change from the historical to the current combined trust value into an aging factor. We introduce explicit parameters for the slope (k) and midpoint (x₀), making the behaviour of the aging factor more transparent and tunable, as shown in Equation (18), where t denotes the current moment, C(t − Δt) and C(t) denote the combined trust values from the previous and current evaluation periods, respectively, Δt denotes the evaluation period, and λ(t) denotes the aging factor.
As indicated by the formula, a higher current combined trust value relative to the historical value results in an aging factor close to 1, while a lower value leads it closer to 0.
Then, the combined trust value updated through the aging factor is used as the local trust value, as shown in Equation (19), where L(t) represents the local trust value. When a node exhibits malicious behaviour, the current combined trust value decreases, the aging factor becomes smaller, and the weight of the current combined trust value increases, causing the local trust value to decline rapidly. Conversely, when the node returns to normal behaviour, the current combined trust value increases, the aging factor becomes larger, and the weight of the historical combined trust value increases, causing the local trust value to rise slowly.
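A minimal sketch of the aging factor and the local trust update (the logistic form and the parameter defaults k = 10 and x0 = 0 are illustrative assumptions of this sketch):

```python
import math

def aging_factor(c_prev, c_curr, k=10.0, x0=0.0):
    """Logistic mapping of the trust change (c_curr - c_prev) into (0, 1);
    k is the slope and x0 the midpoint."""
    return 1.0 / (1.0 + math.exp(-k * ((c_curr - c_prev) - x0)))

def local_trust(c_prev, c_curr, k=10.0, x0=0.0):
    """A small aging factor shifts the weight onto the (lower) current
    value, so trust drops fast; a large factor keeps the weight on the
    historical value, so trust recovers slowly."""
    lam = aging_factor(c_prev, c_curr, k, x0)
    return lam * c_prev + (1.0 - lam) * c_curr

fast_drop = local_trust(0.9, 0.3)  # stays close to the low current value
slow_rise = local_trust(0.3, 0.9)  # stays close to the low historical value
```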
3.2.8. Reporting the Local Trust Value
Each node needs to periodically send its local trust values for all its neighbours to the CH and the controller for trust aggregation. To avoid introducing an additional transmission overhead, we use the method from ETMRM [
10] for reporting local trust values. Instead of generating separate trust packets, each node embeds the trust values into the SDN-WISE report messages that are sent to the controller as part of the standard control-plane operation. As in the native SDN-WISE architecture, the cluster head forwards the report messages from all nodes in the cluster to the controller, enabling centralized trust aggregation. SDN-WISE report messages use a compact binary format optimized for resource-constrained sensor nodes.
Figure 7 shows the message structure used in this work.
To ensure that trust dissemination remains lightweight, the real-valued trust scores computed in the previous section (ranging from 0 to 1 and represented as 4-byte floating-point numbers) are converted into 1-byte unsigned integers in the range of [0, 100], following the approach in ETMRM [
10], as shown in Equation (20).
This compression significantly reduces the size of the embedded trust data and enables the inclusion of multiple trust values without increasing the overall packet size beyond SDN-WISE’s normal control-message budget. Consequently, the trust-reporting mechanism integrates seamlessly with existing SDN-WISE control-plane messages and incurs negligible communication costs.
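A minimal sketch of the float-to-byte quantization described above (the function names are ours):

```python
def encode_trust(t):
    """Map a trust value in [0.0, 1.0] to a 1-byte integer in [0, 100],
    clamping out-of-range inputs."""
    return min(100, max(0, round(t * 100)))

def decode_trust(b):
    """Recover an approximate trust value from the 1-byte encoding."""
    return b / 100.0
```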