Applied Sciences
  • Article
  • Open Access

29 May 2025

A Biologically Inspired Trust Model for Open Multi-Agent Systems That Is Resilient to Rapid Performance Fluctuations

School of Science & Technology, Hellenic Open University, 26335 Patra, Greece
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Innovative Artificial Intelligence Methods, Tools and Methodologies to Address Challenging Real-World Problems

Abstract

Trust management provides an alternative solution for securing open, dynamic, and distributed multi-agent systems, where conventional cryptographic methods prove to be impractical. However, existing trust models face challenges such as agent mobility, which causes agents to lose accumulated trust when moving across networks; changing behaviors, where previously reliable agents may degrade over time; and the cold start problem, which hinders the evaluation of newly introduced agents due to a lack of prior data. To address these issues, we introduced a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy. Despite these advantages, prior evaluations revealed the limitations of our model in adapting to provider population changes and continuous performance fluctuations. This study proposes a novel algorithm, incorporating a self-classification mechanism for providers to detect performance drops that are potentially harmful for service consumers. The simulation results demonstrate that the new algorithm outperforms its original version and FIRE, a well-known trust and reputation model, particularly in handling dynamic trustee behavior. While FIRE remains competitive under extreme environmental changes, the proposed algorithm demonstrates greater adaptability across various conditions. In contrast to existing trust modeling research, this study conducts a comprehensive evaluation of our model using widely recognized trust model criteria, assessing its resilience against common trust-related attacks while identifying strengths, weaknesses, and potential countermeasures. Finally, several key directions for future research are proposed.

1. Introduction

Conventional cryptographic methods, such as the use of certificates, digital signatures, and Public Key Infrastructure (PKI), depend on a static, centralized authority for certificate verification. However, this approach may be impractical or insecure in open, dynamic, and highly distributed networks [1]. Additionally, entities in such networks may have limited resources, making it difficult for them to handle the computational demands of cryptographic protocols [2]. To overcome these challenges, trust management provides an alternative approach, enabling entities to gather reliable and accurate information from their surrounding network.
Various methods for evaluating trust and reputation have been developed for real-world distributed networks that can be considered multi-agent systems (MASs), such as peer-to-peer (P2P) networks, online marketplaces, pervasive computing, Smart Grids, the Internet of Things (IoT), and many more. However, existing trust management approaches still face major challenges. In open MASs, agents frequently join and leave the system, making it difficult for most trust models to adapt [3]. Additionally, assigning trust values in the absence of evidence and detecting dynamic behavioral changes remain unresolved issues [4]. Trust and reputation models, which rely on data-driven methods, often struggle when assessing new agents with no prior interactions. This issue, known as the cold start problem, commonly arises in dynamic groups where agents may have interacted with previous neighbors but lack connections to newly introduced agents. Moreover, in dynamic environments, agents’ behaviors can change rapidly, necessitating that consumer agents recognize these changes to select reliable providers. The challenge in trust assessment lies in the fact that behavioral changes occur at varying speeds and times, making conventional methods, such as forgetting old data at a fixed rate, ineffective [4].
To address these challenges, we previously introduced a novel trust model for open MASs, inspired by synaptic plasticity, the process that enables neurons in the human brain to form structured groups known as assemblies, which is why we named our model “Create Assemblies (CA)”. CA’s distinguishing characteristic is that, unlike traditional trust models where the trustor selects a trustee, CA allows the trustee to determine whether it has the necessary skills to complete a given task. To briefly describe the CA approach, we note that activities are initiated by a service requester, the trustor, which broadcasts a request message that includes task details such as its category and specific requirements. Upon receiving the request, potential trustees (i.e., service providers within the service requester’s vicinity) store the message locally and establish a connection with the trustor if one does not already exist. Each connection is assigned a weight, a value between 0 and 1, representing the trust level or the probability of the trustee successfully completing the task. In the CA model, trust update is event-driven. After the task’s completion, the trustor provides performance feedback, which the trustee uses to adjust the connection weight—increasing it for successful execution and decreasing it otherwise.
The CA approach offers several key advantages in open and highly dynamic MASs due to its unique design, where the trustor does not select a trustee, and all trust-related data concerning the trustee are stored locally within the trustee. First, since each trustee retains its own trust information, these data can be readily accessed and utilized across different applications or systems the agent joins. This provides a major advantage in handling mobility, an ongoing challenge in MASs [5]. By allowing a new service provider to estimate its performance for a given task based on past executions in other systems, the CA model effectively addresses the cold start problem. Second, in conventional trust models, trustors must collect and exchange extensive trust information (e.g., recommendations) before making a selection, leading to increased communication overhead. This is particularly problematic in resource-constrained networks, where communication is more expensive than computation [2]. Sato and Sugawara explained in [6] that task allocation in such settings is a complex combinatorial problem, requiring multiple message exchanges. In contrast, the CA model reduces this burden by limiting communication to request messages from service requesters and feedback messages containing performance ratings, significantly minimizing communication time and overhead. Third, since agents in the CA model do not share trust information, this approach is inherently more resistant to disinformation tactics, such as false or dishonest recommendations, which are common in other trust-based approaches. However, as we will discuss later, there are still some potential vulnerabilities that need to be addressed. Finally, privacy concerns often discourage agents from sharing trust-related data in traditional models. Since CA does not rely on recommendations, agents do not have to disclose how they evaluate others’ services, thereby preserving their privacy.
In our previous work [7], we proposed a CA algorithm as a solution for managing the constant entry and exit of agents in open MASs, as well as their dynamic behaviors. Through an extensive comparison between CA and FIRE, a well-known trust model for open MASs, we found the following:
  • CA demonstrated stable performance under various environmental changes;
  • CA’s main strength was its resilience to fluctuations in the consumer population (unlike FIRE, CA does not depend on consumers for witness reputation, making it more effective in such scenarios);
  • FIRE was more resilient to changes in the provider population (since CA relies on providers’ self-assessment of their capabilities, newly introduced providers—who lack prior experience—must learn from scratch);
  • Among all environmental changes tested, the most detrimental to both models was frequent shifts in provider behavior, particularly when providers switched performance profiles, where FIRE exhibited greater resilience than CA.
Motivated by these findings, this work begins with a semi-formal analysis aimed at identifying potential modifications to the CA algorithm that could enhance its performance when dealing with dynamic trustees’ profiles. Based on this analysis, we introduce an improved version of the CA algorithm with a key modification: after providing a service, a service provider re-evaluates its performance, and if it falls below a predefined threshold, it classifies itself as a bad provider. This modification ensures that each provider maintains an up-to-date evaluation of its own performance, allowing for the immediate detection of performance drops that could harm consumers. Next, we conduct a series of simulation experiments to compare the performance of the updated CA algorithm against the previous version. The results demonstrate that the new version outperforms the original CA algorithm under all tested environmental conditions. In particular, when trustees change performance profiles, the improved CA algorithm surpasses both FIRE and its earlier version. We then present a comprehensive evaluation of the CA model based on thirteen widely recognized trust model criteria from the literature. While the model satisfies essential criteria such as decentralization, subjectivity, context awareness, dynamicity, availability, integrity, and transparency, further research and enhancements are identified. In this work, we make the assumption of willing and honest agents, but we acknowledge that our solution should be made generalizable to more realistic settings, where dishonesty and unwillingness are common conditions among agents’ societies. To this end, we examine CA’s resilience against common trust-related attacks, highlighting its strengths and limitations while proposing potential countermeasures. To our knowledge, addressing both openness and dishonesty together is novel.
This thorough assessment not only underscores the model’s strengths but also reflects a commitment to ongoing improvement, establishing it as a promising foundation for future trust management solutions in critical applications, such as computing resources, energy sharing, and environmental sensing—particularly in highly dynamic and distributed environments where security and privacy are paramount. The rest of this paper is organized as follows. Section 2 reviews several trust management protocols related to our work. Section 3 provides background information concerning the CA and FIRE models, as well as the original CA algorithm. In Section 4, we conduct a semi-formal analysis of the CA algorithm’s behavior, leading to the introduction of a new version that avoids unwarranted task executions. Section 5 outlines the experimental setup and the methodology for the simulation experiments, with the results presented in Section 6. Section 7 offers a comprehensive evaluation of the CA model, based on widely accepted evaluation criteria, while Section 8 examines its robustness against common trust-related attacks. Finally, Section 9 concludes our work and suggests potential directions for future work.

3. Background

In this section, we first outline the key features of FIRE and CA, the two trust models evaluated in our simulation experiments, and explain the rationale behind selecting FIRE as a reference model for comparison. We then give an overview of the previous version of the CA algorithm, which incorporates a mechanism for handling dynamic trustee profiles.

3.1. FIRE

Huynh et al. introduced the FIRE model in [27], naming it after “fides” (Latin for “trust”) and “reputation”. We selected FIRE as a reference model for comparison with CA because it is a well-established trust and reputation model for open MASs that, like CA, adopts a decentralized approach. Additionally, FIRE represents the traditional trust management approach, where trustors select trustees, providing a contrast to the CA model, where trustees are not chosen by trustors. It consists of four key modules:
  • Interaction Trust (IT): Evaluates an agent’s trustworthiness based on its past interactions with the evaluator.
  • Witness Reputation (WR): Assesses the target agent’s trustworthiness using feedback from witnesses—other agents that have previously interacted with it.
  • Role-based Trust (RT): Determines trustworthiness based on role-based relationships with the target agent, incorporating domain knowledge such as norms and regulations.
  • Certified Reputation (CR): Relies on third-party references stored by the target agent, which can be accessed on demand to assess trustworthiness.
IT is the most reliable trust information source, as it directly reflects the evaluator’s satisfaction. However, if the evaluator has no prior interactions with the target agent, FIRE cannot utilize the IT module and must rely on the other three modules, primarily WR. When a large number of witnesses leave the system, though, WR becomes ineffective, forcing FIRE to depend mainly on the CR module for trust assessments. Yet CR is not highly reliable, as trustees may selectively store only favorable third-party references, leading to the overestimation of their performance.
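To make the interplay of these four sources more concrete, the following minimal Python sketch illustrates one plausible way of combining whatever component values are available into a single score via a weighted average. The component weights and the helper name composite_trust are illustrative assumptions on our part; the exact weighting scheme and aggregation formula used by FIRE are specified in [27].

# Illustrative only: combine available FIRE components by a weighted average.
# The weights below are placeholders, not FIRE's actual parameter values.
COMPONENT_WEIGHTS = {"IT": 2.0, "WR": 1.0, "RT": 0.5, "CR": 0.5}

def composite_trust(components):
    # components: dict mapping "IT"/"WR"/"RT"/"CR" to a trust value,
    # or None when that source is unavailable (e.g., no direct interactions).
    weighted_sum, total_weight = 0.0, 0.0
    for name, value in components.items():
        if value is None:
            continue
        weighted_sum += COMPONENT_WEIGHTS[name] * value
        total_weight += COMPONENT_WEIGHTS[name]
    return None if total_weight == 0 else weighted_sum / total_weight

# A trustor with no direct interactions falls back on the remaining sources:
print(composite_trust({"IT": None, "WR": 0.6, "RT": None, "CR": 0.8}))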

3.2. CA Model

CA is a biologically inspired computational trust model, drawing on concepts from biology, particularly synaptic plasticity in the human brain and the formation of assemblies of neurons. In our previous work [28], we provided a detailed explanation of synaptic plasticity and how it is applied in our model.
An MAS is used to represent a social environment where agents communicate and collaborate to execute tasks. In [28], we formally defined the key concepts necessary for describing our model, but here, we summarize the most essential ones. A trustor is an agent that defines a task and broadcasts a request message containing all relevant details, including the following: (i) the task category (type of work to be performed) and (ii) a set of requirements, as specified by the trustor.
In the CA approach, the trustee, rather than the trustor, decides whether to engage in an interaction and perform a given task. When a trustee receives a request message, it establishes and maintains a connection with a weight w ∈ [0, 1], representing the strength of this connection. This weight reflects the probability of successful task completion and is updated by the trustee, based on the performance feedback provided by the trustor. If the trustee successfully completes the task, the weight is increased according to the following:
w = min(1, w + α(1 − w)).  (1)
If the trustee fails, the weight decreases as follows:
w = max(0, w − β(1 − w)),  (2)
where α and β are positive parameters controlling the rate of increase and decrease, respectively.
The trustee decides whether to accept a task request by comparing the connection weight with a predefined Threshold ∈ (0, 1). If the weight meets or exceeds this threshold (w ≥ Threshold), the trustee proceeds with task execution.
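A minimal Python sketch of this update-and-accept logic is given below, using the parameter values assumed later in our experiments (α = β = 0.1, Threshold = 0.5); the function names update_weight and accepts are our own illustration.

ALPHA, BETA, THRESHOLD = 0.1, 0.1, 0.5   # values used in our experiments

def update_weight(w, success):
    # Equation (1) on success, Equation (2) on failure.
    if success:
        return min(1.0, w + ALPHA * (1.0 - w))
    return max(0.0, w - BETA * (1.0 - w))

def accepts(w):
    # The trustee performs the task only if the connection weight
    # meets or exceeds the predefined threshold.
    return w >= THRESHOLD

w = 0.5                         # default initial trust
w = update_weight(w, False)     # a failed execution lowers the weight to 0.45
print(round(w, 2), accepts(w))  # 0.45 False: the trustee will decline next time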
Table 1 summarizes the main differences between FIRE and CA across several core dimensions of trust modeling, including trust evaluation methods, trust update mechanisms, and trust storage approaches.
Table 1. Comparison between FIRE and CA trust models.

3.3. The CA Algorithm Used to Handle Dynamic Trustee Profiles

In this subsection, we present the original version of the CA algorithm, as introduced in [7], including a brief description and the pseudocode, for the sake of completeness and to facilitate comparison with the updated version, presented in Section 4.2 of this work.
When a trustor identifies a new task to be executed by a trustee, it broadcasts a request message containing relevant task details (lines 2–3). Upon receiving this request, each potential trustee stores it in a list and establishes a new connection with the requesting trustor if one does not already exist (lines 4–12). Lines 6–11 address the cold start problem, which arises when a trustee lacks prior experience with a specific task and thus cannot assess its own capability to complete it. In the version shown in Algorithm 1, the trustee leverages prior knowledge gained from performing the same task for other trustors. Specifically, line 7 checks whether agent i has previously interacted with other agents (denoted by ~ j ) for the same task. If such similar connections exist, the agent will use the average weight of those connections to initialize the new one. If no prior experience exists (i.e., the trustee has never performed the task for any trustor), the connection weight is initialized to a default value of 0.5 (line 10), as was performed in the original algorithm proposed in [28].
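As an illustration of this initialization step (lines 6–11), the short Python sketch below reuses the agent's experience with other trustors for the same task and otherwise falls back to the default trust of 0.5; the dictionary layout and function name are our own and do not appear in the original pseudocode.

DEFAULT_TRUST = 0.5

def init_connection_weight(connections, trustor, task):
    # connections: dict mapping (trustor, task) -> weight, held locally by agent i.
    similar = [w for (j, t), w in connections.items()
               if t == task and j != trustor]
    if similar:                              # prior experience with other trustors
        return sum(similar) / len(similar)   # average of similar connection weights
    return DEFAULT_TRUST                     # cold start: default initial trust

connections = {("C1", "taskA"): 0.45, ("C2", "taskA"): 0.40}
print(init_connection_weight(connections, "C3", "taskA"))  # 0.425 (average)
print(init_connection_weight(connections, "C3", "taskB"))  # 0.5 (default)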
At each time step, each trustee reviews the tasks stored in that list and attempts to execute the task with the highest connection weight, provided that the task remains available and the weight meets or exceeds a predefined threshold (lines 13–21). Lines 14–16 are part of the agent’s decision-making process before attempting to perform a task. They serve as a sequence of filters that ensure the task is worth pursuing, feasible, and backed by sufficient trust. The check in line 14—“task is not visible or not done yet”—captures scenarios where the agent must move within a range to personally verify the task’s completion. In some cases, other agents might provide that information. Line 15 ensures the task is physically accessible, is not already being performed by another agent, and that the agent has the resources to perform it. Finally, line 16 introduces a trust-based filter. Even if a task is available and the agent is able to approach and undertake it, the agent proceeds only if the trust level with the task requester for the specific task meets or exceeds a predefined threshold. This promotes cautious behavior by ensuring that the agent only commits to tasks where sufficient trust exists in the cooperation.
If task execution is successful, the trustee increases the connection weight; otherwise, it decreases it (lines 22–26). This CA algorithm also accounts for dynamic trustee profiles as follows (lines 27–31). If a connection’s weight falls below the threshold, the trustee interprets this as an indication of its incapability to complete the task successfully and stops attempting it to save resources. However, if the trustee performs well on an easier task, it may infer that it has likely learned how to execute more complex tasks within the same category. In such cases, it increases the connection weight of those previously failed tasks to the threshold value, allowing itself an opportunity to attempt them again in the future.
To improve the understanding of Algorithm 1, Table 2 summarizes the main symbols and their meanings.
Table 2. Notation table.
Algorithm 1: CA v2, for agent i
1: while True do
# --- broadcast a request message when a new task is perceived ---
2:   when perceived a new task = (c, r)
3:     broadcast message m = (request, i, task)
# --- Receive/store a request message and initialize a new connection---
4:   when received a new message m = (request, j, task)
5:     add m to message list M
6:     if no existing connection co = (i, j, _, task) then
7:       if there are similar connections co’ = (i,~j,_,task) from i to other agents for the same task then
8:         create co = (i, j, avg_w, task), where avg_w = average of all weights for (i, ~j, _, task)
9:       else
10:          create co = (i, j, 0.5, task) # initialize weight to default initial trust 0.5
11:     end if
12:   end if
# --- Select and Attempt task ---
13:   select m = (request, j, task) from M such that co = (i, j, w, task) has the highest weight among all (i, k, w’, task)
14:   if task is not visible or not done yet then
15:     if canAccessAndUndertake(task) then
16:       if w ≥ Threshold then
17:         (result, performance) ← performTask(task)
18:       end if
19:     end if
20:   end if
21:   delete m from M
# --- Update connection weight based on result ---
22:   if result = success then
23:     strengthen co using Equation (1)
24:   else
25:     weaken co using Equation (2)
26:   end if
# --- Dynamic profile update ---
27:   for all failed connections co’ = (i, j, w’, task’) where w’ < Threshold and task’ = (c, r’) with r’ > r do
28:     if performance ≥ minSuccessfulPerformance(task’) then
29:       w’ ← Threshold # Give another chance on harder tasks
30:     end if
31:   end for
32: end while

4. Enhancing Performance by Avoiding Unwarranted Task Executions

In this section, we begin with a semi-formal analysis of the CA algorithm to identify potential modifications that could enhance its performance. Following this, we introduce an improved CA algorithm designed to detect and prevent unwarranted task executions.

4.1. A Semi-Formal Analysis of the CA Algorithm to Identify and Avoid Unwarranted Task Executions

To identify potential improvements to the performance of the CA algorithm, we conducted the following semi-formal analysis. While this analysis incorporates formal elements such as assumptions, definitions, propositions, and a proof by induction, it does not constitute a strictly formal analysis in the traditional mathematical or computer science sense. Instead, it represents a blend of formal reasoning and empirical evaluation for the following reasons:
  • Although induction is used to support Proposition 1, many conclusions rely more on empirical observation than on rigorous formal logic;
  • The conclusions are drawn from specific experimental conditions (e.g., Threshold = 0.5, α = β = 0.1), whereas a purely formal analysis would typically strive to derive results that hold regardless of particular parameter values;
  • Several conclusions are derived from example scenarios rather than universally valid logical proofs.
In our testbed, each consumer requesting the service at a certain performance level waits for a period equal to WT for the service to be provided by a provider in its operational range (a nearby provider). If the requested task = (service_ID, performance_level), where performance_level ∈ {PERFECT, GOOD, OK, BAD, WORST}, is not performed, either because there are no nearby providers or because none of the nearby providers are willing to perform the task, then the consumer requests the service at the next lower performance level, assuming that any consumer can manage with a lower-quality service.
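The sketch below captures this cascading request behavior; the numeric performance-level constants follow Table 6 (PERFECT = 10, GOOD = 5, OK = 0, BAD = −5, WORST = −10), while the broadcast and wait helpers are placeholders standing in for the testbed machinery.

# A consumer asks for the highest quality first; if the task is not served
# within WT, it retries at the next lower performance level.
PERFORMANCE_LEVELS = [("PERFECT", 10), ("GOOD", 5), ("OK", 0),
                      ("BAD", -5), ("WORST", -10)]

def request_service(consumer_id, service_id, broadcast, wait_for_result, wt):
    for level_name, _level_value in PERFORMANCE_LEVELS:
        task = (service_id, level_name)
        broadcast(("request", consumer_id, task))    # message to nearby providers
        result = wait_for_result(task, timeout=wt)   # wait at most WT for service
        if result is not None:                       # some provider served the task
            return result
    return None  # no provider was willing or able, even at the lowest level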
Assumption 1.
WT is large enough so that if there is a capable provider in the operational range of the consumer, then the task will be successfully executed.
Assumption 2.
The only reason a nearby provider may be unwilling to perform a task is that the weight of the relevant connection is less than the threshold value. We assume that there are no other reasons, i.e., the providers are not selfish or malicious. In other words, we assume that the providers are always honest and comply with the implemented protocols, having no intention to harm the trustors.
Since every rational consumer aims to maximize its profit, the sequence of tasks that it may request until a provider is found to provide the service is as follows: task1, task2, task3, task4, and task5. In Table 3, the requirements of each task are specified.
Table 3. Explanation of requirements.
Definition 1.
Given that Cj is a consumer requesting the successful execution of task_i, i ∈ {1, …, 5}, Pk is a provider in the operational range of Cj receiving a request from Cj for the execution of task_i, and Pk does not yet have a connection for Cj and task_i, we define Pk as having learned its incapability to perform task_i (meaning that it cannot always perform task_i successfully) when it initializes the weight of a new connection for Cj and task_i to a value smaller than the threshold value, which will result in not executing task_i for Cj.
Since only bad and intermittent providers have negative performances (causing damage to the consumers), we analyze how bad providers learn their own capabilities in each of the five tasks, aiming to identify ways to reduce task executions with negative performances and thus improve the performance of the CA algorithm.
We consider a system of four consumers, C1, C2, C3, and C4, and only one provider: the bad provider BP1. Each consumer has the provider BP1 in its operational range. The provider has no knowledge of its capabilities in performing tasks, meaning that it has not formed any connections yet.
Phase A: Provider BP1 learns its incapability in performing task1.
Assume that C1 requests the execution of task1 by broadcasting the message m1 = (request, C1, task1). According to the CA algorithm, when BP1 receives message m1, it will create the connection co1 = (BP1, C1, 0.5, task1), initializing its weight to the value 0.5. Since the condition w ≥ Threshold is satisfied, BP1 will perform task1, but it will fail because its performance range is [−10, 0], and task1 requires a performance equal to 10. After it fails, BP1 will decrease the weight of co1 to the value w = 0.45, by using the equation w = w − β(1 − w), where β = 0.1 in our experiments.
Now, let C2 require the execution of task1 by sending the message m2 = (request, C2, task1). When BP1 receives the message m2, and because it already has the connection co1, it will create the connection co2 = (BP1, C2, 0.45, task1), initializing its weight to the average of the weights of the connections it has with other consumers for task1. In this case, average = w_co1/1 = 0.45, where w_co1 denotes the weight of connection co1. Because the weight of co2 is less than the threshold value, BP1 will decide not to perform task1 for consumer C2.
Assumption 3.
For simplicity, we assume that BP1 will not change its ability to provide task1, so that the weights of all connections will remain constant over time.
Now, we can prove by induction the following proposition.
Proposition 1.
Given Assumption 3, every new connection that BP1 creates for task1 will be initialized to the average of the weights of the connections it has already created for task1, which will remain constant and equal to 0.45.
The proof is provided in Appendix A. By Proposition 1 and Definition 1, it follows that BP1, in a single trial, has learned that it cannot successfully perform task1. We can generalize this to every bad provider, arriving at the following conclusion.
Conclusion 1.
Given our experimental conditions (i.e., Threshold = 0.5, α = β = 0.1), every bad provider needs only one trial to learn that it cannot successfully perform task1.
Phase B: Provider BP1 learns its incapability in performing task2.
Following the same analysis as in Phase A, we can conclude as follows.
Conclusion 2.
Given our experimental conditions (i.e., Threshold = 0.5, α = β = 0.1), every bad provider needs only one trial to learn that it cannot successfully perform task2.
Phase C: Provider BP1 learns its incapability in performing task3.
Assume that C1 requests the execution of task3 by broadcasting the message m1 = (request, C1, task3). When BP1 receives message m1, it will create the connection co1 = (BP1, C1, 0.5, task3), initializing its weight to the value 0.5. Since the condition w ≥ Threshold is satisfied, BP1 will perform task3. Given its performance range of [−10, 0] and the task’s requirement that performance must be greater than or equal to zero in order to be successful, BP1 has a small probability of achieving a performance equal to zero and thus a successful execution of task3.
So, consider a scenario where the performance of BP1 on task3 on its first trial is 0. After it succeeds, it will increase the weight of co1 to the value w = 0.55, by using the equation w = w + α(1 − w), where α = 0.1 in our experiments. Now, let C2 request the execution of task3 by broadcasting the message m2 = (request, C2, task3). When BP1 receives message m2, and because it already has the connection co1, it will calculate the average weight of existing connections as average = w_co1/1 = 0.55. Then, it will create the connection co2 = (BP1, C2, 0.55, task3), initializing its weight to the average just calculated. Since the condition w ≥ Threshold is satisfied, BP1 will execute task3 for C2. Suppose that BP1 fails this time and decreases the weight of co2 to the value w = 0.505. The scenario continues with two consecutive failed executions of task3 for consumers C3 and C4. In Table 4, we can see the average weights of the connections after each task execution.
Table 4. Evolution of average connection weights after each execution of task3 for consumers C1 to C4.
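The recurrence behind Table 4 can be checked with a few lines of Python under the stated experimental conditions (α = β = 0.1, Threshold = 0.5): one successful execution for C1 followed by consecutive failures, with each new connection initialized to the running average of the existing weights.

ALPHA, BETA, THRESHOLD = 0.1, 0.1, 0.5

def execute(initial_w, success):
    # Equation (1) on success, Equation (2) on failure.
    if success:
        return min(1.0, initial_w + ALPHA * (1.0 - initial_w))
    return max(0.0, initial_w - BETA * (1.0 - initial_w))

weights = [execute(0.5, True)]           # C1: default weight 0.5, success -> 0.55
for consumer in ("C2", "C3", "C4"):      # consecutive failures
    avg = sum(weights) / len(weights)    # new connection starts at the average
    weights.append(execute(avg, False))
    print(consumer, round(sum(weights) / len(weights), 4))
# The average falls below the 0.5 threshold only after the third failure (C4).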
In the scenario above, BP1 needed three consecutive failed executions after the successful first execution of task3 to learn (according to Definition 1) that it is not capable of always performing this task successfully, because the average weight of its connections for task3 is now less than the threshold value. This leads us to the following more general conclusion.
Conclusion 3.
A bad provider that successfully executes task3 on its first trial requires a number of consecutive failed executions to learn that it cannot always execute this task successfully.
Phase D: Provider BP1 learns its incapability in performing task4.
Since BP1 has a performance range of [−10, 0] and task4 requires a performance ≥ −5 to be successful, BP1 has a good probability of being successful in its first trial of task4. If we repeat the analysis of Phase C, we are led to the following conclusion.
Conclusion 4.
A bad provider that successfully executes task4 on its first trial requires a number of consecutive failed executions to learn that it cannot always execute this task successfully.
Phase E: Provider BP1 learns its capability in performing task5.
Since BP1 has a performance range of [−10, 0] and task5 is successfully executed if performance ≥ −10, BP1 will always execute this task successfully.
Conclusion 5.
A bad provider will always execute task5 successfully.
Ideally, we would prefer that a bad provider not provide the service at all, because its poor performance harms the consumer, but providing the service at least once is required to assess the provider’s capabilities. However, the previous analysis demonstrates that the CA algorithm allows the bad provider to provide the service multiple times. Despite its negative performance on task1, BP1 delivered a negative performance in Phase B as well. A more intelligent agent could take its earlier negative performance into account and decide not to execute task2. Furthermore, in both Phase C and Phase D, a successful first execution of the task is followed by a series of consecutive failed executions before the provider learns its incapability, which we would like to avoid.
To this end, in the following section, we propose an improved CA algorithm designed to detect bad providers early on and prevent them from damaging the consumers with their negative performances.

4.2. The Proposed CA Algorithm for the Early Detection of Bad Providers

In the proposed algorithm for the early detection of bad providers, each provider maintains a task-specific self-assessment mechanism through the i.bad_tasks map, which is initialized with a default value of false for each task (line 1). This means that when a provider is created, it implicitly assumes it is not bad for any task it has not yet performed; there is no need to explicitly initialize entries for unknown tasks. After performing a task, the provider re-evaluates its performance (lines 25–29). If the performance is less than or equal to zero, it considers itself a bad provider for that specific task by setting i.bad_tasks[task] ← true; otherwise, it maintains or resets the value to false. This enables rapid, task-specific reassessment, allowing the provider to dynamically adapt its trustworthiness profile based on its most recent outcomes.
In lines 7–14 of the proposed algorithm, this self-assessment is used to initialize the trust weight of a new connection. If the provider believes it is bad (i.bad_tasks[task] = true) and the task requires a performance level of PERFECT, GOOD, or OK, the connection is initialized with a trust weight of 0.45 (line 9). Otherwise, the algorithm falls back to the original initialization logic (lines 10–13), which either averages existing weights from similar connections or sets a default trust of 0.5, as described in Section 3.3 for the previous version of the algorithm (Algorithm 1).
In lines 15–19, if the connection already exists, the provider re-evaluates whether it should adjust the weight based on its current self-assessment. If the provider considers itself bad and the task requires a performance level ≥ 0 (i.e., PERFECT, GOOD, or OK), it lowers the weight of the existing connection to 0.45, reinforcing a cautious stance toward engaging in the task.
This modification enhances the context-awareness and reliability of the decision-making process by ensuring that each provider is consistently informed by its own recent experience when evaluating and responding to task requests.
Core functionalities from Algorithm 1—such as broadcasting task requests, handling incoming messages, task selection and execution, connection weight updates, and dynamic profile adjustments—are preserved without modification in Algorithm 2.
Algorithm 2: CA v3, for agent i
  # --- Initialize task-specific self-assessment memory ---
1: define i.bad_tasks as a map with default value false # assumes good unless proven otherwise
2: while True do
# --- Broadcast a request message when a new task is perceived ---
3:   when perceived a new task = (c, r)
4:     broadcast message m = (request, i, task)
# --- Receive/store a request message and initialize a new connection ---
5:   when received a new message m = (request, j, task)
6:     add m to message list M
# --- Initialize a new connection ---
7:     if no existing connection co = (i, j, _, task) then
8:       if i.bad_tasks[task] = true and task.performance_level ∈ {PERFECT, GOOD, OK} then
9:         create co = (i, j, 0.45, task) # cautious trust level for task-specific bad assessment
10:       else if there are similar connections co’ = (i, ~j, _, task) from i to other agents for the same task then
11:         create co = (i, j, avg_w, task), where avg_w = average of all weights for (i, ~j, _, task)
12:       else
13:         create co = (i, j, 0.5, task) # initialize to default trust
14:       end if
15:     else #if connection co = (i, j, _, task) exists
# --- modify an existing connection if certain conditions hold---
16:       if i.bad_tasks[task] = true and task.performance_level ∈ {PERFECT, GOOD, OK} then
17:         modify co = (i, j, 0.45, task)
18:       end if
19:     end if
# --- Select and Attempt task ---
20:   select m = (request, j, task) from M such that co = (i, j, w, task) has the highest weight among all (i, k, w’, task)
21:   if task is not visible or not done yet then
22:     if canAccessAndUndertake(task) then
23:       if w ≥ Threshold then
24:         (result, performance) ← performTask(task)
# --- Re-evaluate task-specific self-assessment based on latest performance ---
25:         if performance ≤ 0 then
26:           i.bad_tasks[task] ← true
27:         else
28:           i.bad_tasks[task] ← false
29:         end if
30:       end if
31:     end if
32:   end if
33:   delete m from M
# --- Update connection weight based on result ---
34:   if result = success then
35:     strengthen co using Equation (1)
36:   else
37:     weaken co using Equation (2)
38:   end if
# --- Dynamic profile update ---
39:   for all failed connections co’ = (i, j, w’, task’) where w’ < Threshold and task’ = (c, r’) with r’ > r do
40:     if performance ≥ minSuccessfulPerformance(task’) then
41:       w’ ← Threshold  # Give another chance on harder tasks
42:     end if
43:   end for
44: end while
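For readers who prefer an executable form, the fragment below is a minimal Python rendering of what Algorithm 2 adds over Algorithm 1: the bad_tasks self-assessment map, the cautious 0.45 initialization, and the post-execution re-evaluation. Message handling, task selection, and the weight-update and dynamic-profile steps are omitted, and the class layout is our own illustration rather than the testbed implementation.

from collections import defaultdict

CAUTIOUS_TRUST, DEFAULT_TRUST = 0.45, 0.5
HIGH_LEVELS = {"PERFECT", "GOOD", "OK"}   # levels requiring performance >= 0

class Provider:
    def __init__(self):
        self.bad_tasks = defaultdict(lambda: False)   # line 1: assume good by default
        self.connections = {}                         # (trustor, task) -> weight

    def init_or_adjust_connection(self, trustor, task, level):
        key = (trustor, task)
        if key not in self.connections:                         # lines 7-14
            if self.bad_tasks[task] and level in HIGH_LEVELS:
                self.connections[key] = CAUTIOUS_TRUST          # line 9
            else:
                similar = [w for (j, t), w in self.connections.items()
                           if t == task and j != trustor]
                self.connections[key] = (sum(similar) / len(similar)
                                         if similar else DEFAULT_TRUST)
        elif self.bad_tasks[task] and level in HIGH_LEVELS:     # lines 15-19
            self.connections[key] = CAUTIOUS_TRUST

    def reassess(self, task, performance):                      # lines 25-29
        self.bad_tasks[task] = performance <= 0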

5. Experimental Setup and Methodology

5.1. The Testbed

To test the performance of the revised algorithm, we performed an extensive simulation-based experimentation on a testbed based on the one described in [27].
The environment of the testbed consists of agents that either provide services (referred to as providers or trustees) or use these services (referred to as service requesters, consumers, or trustors). For simplicity, we assume all providers offer the same service, i.e., there exists only one type of task. The agents are randomly distributed within a spherical world with a radius of 1.0. The agent’s radius of operation ( r 0 ) represents its capacity to interact with others (e.g., available bandwidth), and it is uniform across all agents, set to half the radius of the spherical world. Each agent has acquaintances, which are other agents located within its operational radius.
Provider performance varies and determines the utility gain (UG) for consumers during interactions. There are four types of providers, as defined in [27]: good, ordinary, intermittent, and bad. Except for intermittent providers, each type has a mean performance level μp, with the actual performance following a normal distribution around this mean. Table 5 shows the values of μp and the associated standard deviation σp for each provider type. Intermittent providers perform randomly within the range [PL_BAD, PL_GOOD]. The radius of operation of a provider also represents the range within which it can offer services without a loss of quality. If a consumer is outside this range, the service quality deteriorates linearly based on the distance, but the final performance remains within [−10, +10] and corresponds to the utility the consumer gains from the interaction.
Table 5. Profiles of provider agents (performance constants defined in Table 6).
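Under our reading of this setup, a provider's delivered performance could be sampled as in the sketch below: a Gaussian draw around μp for the fixed profiles, a uniform draw in [PL_BAD, PL_GOOD] for intermittent providers, and a linear quality penalty with distance, clipped to [−10, +10]. The (μp, σp) pairs and the degradation rate are placeholders standing in for Table 5 and the testbed's exact degradation formula.

import random

PL_GOOD, PL_BAD = 5, -5                   # performance-level constants (Table 6)
# Placeholder (mu_p, sigma_p) pairs standing in for Table 5:
PROFILES = {"good": (7.0, 1.0), "ordinary": (2.0, 1.0), "bad": (-5.0, 2.0)}

def delivered_performance(profile, distance, radius_of_operation):
    if profile == "intermittent":
        perf = random.uniform(PL_BAD, PL_GOOD)    # random within [PL_BAD, PL_GOOD]
    else:
        mu, sigma = PROFILES[profile]
        perf = random.gauss(mu, sigma)            # normal around the mean performance
    if distance > radius_of_operation:            # consumer outside the service range:
        perf -= (distance - radius_of_operation)  # assumed linear quality degradation
    return max(-10.0, min(10.0, perf))            # final performance in [-10, +10]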
The simulations in the testbed are conducted in rounds. As in real life, consumers do not require services in every round. When a consumer is created, its probability of requiring a service (activity level α) is selected randomly. There are no limitations on the number of agents that can participate in a round. If a consumer needs a service in a round, the request is always made within that round. The round number marks the time for any event.
Consumers fall into one of three categories: (a) those using FIRE, (b) those using the old version of the CA algorithm, or (c) those using the new version of the CA algorithm. If a consumer requires a service in a round, it locates all nearby providers. FIRE consumers select a provider following the four-step process outlined in [27]. After choosing a provider, they use the service, gain utility, and rate the service based on the UG they received. This rating is recorded for future trust assessments. The provider is also informed of the rating and may keep it for future interactions.
CA consumers do not choose a provider. Instead, they send a request message to all nearby providers specifying the required service quality. Table 6 lists five performance levels that define the possible service qualities. CA consumers first request service at the highest quality (PERFECT). After a predetermined waiting time (WT), any CA consumer still unserved sends a new request for a lower-performance-level service (GOOD). This process continues until the lowest service level is reached or all consumers are served. When a provider receives a request, it stores it locally and applies the CA algorithm (CA_OLD or CA_NEW, depending on the consumer group it belongs to). WT is a parameter that defines the maximum time allowed for all requested services in a round to be provided.
Table 6. Performance level constants.
We assume that any consumer can manage with a lower-quality service. This assumption does not raise an issue of unfair comparison between FIRE and CA, since it also applies to consumers using FIRE. For instance, they may end up selecting a provider—potentially the only available option—whose service ultimately delivers the lowest performance level (WORST).
Agents can enter and exit the open MAS at any time, which is simulated by replacing a number of randomly selected agents with new ones. The number of agents added or removed after each round varies but must remain within certain percentage limits of the total population. The parameters pCPC and pPPC define these population change limits for consumers and providers, respectively. The characteristics of new agents are randomly determined, but the proportions of provider types and consumer groups are maintained.
When an agent changes location, it affects both its own situation and its interactions with others. The location is specified using polar coordinates (r, φ, θ), and the agent’s position is updated by adding random angular changes Δφ and Δθ to φ and θ. Δφ and Δθ are chosen randomly from the range [−ΔΦ, +ΔΦ]. Consumers and providers change their locations with probabilities pCLC and pPLC, respectively.
A provider’s performance μ can also change by a random amount Δμ within the range [−M, +M] with a probability of pμC in each round. Additionally, with a probability of pProfileSwitch, a provider may switch to a different profile after each round.
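A condensed sketch of these per-round dynamics is shown below; the parameter names mirror those in the text (pPLC, ΔΦ, pμC, M, pProfileSwitch), while the provider object and its attributes are illustrative.

import random

def end_of_round_dynamics(provider, p_plc, delta_phi, p_mu_c, m_max,
                          p_profile_switch, profiles):
    # Location change: random angular offsets drawn from [-delta_phi, +delta_phi].
    if random.random() < p_plc:
        provider.phi += random.uniform(-delta_phi, +delta_phi)
        provider.theta += random.uniform(-delta_phi, +delta_phi)
    # Performance drift: shift the mean performance by an amount in [-M, +M].
    if random.random() < p_mu_c:
        provider.mu += random.uniform(-m_max, +m_max)
    # Profile switch: adopt a different performance profile.
    if random.random() < p_profile_switch:
        provider.profile = random.choice([p for p in profiles if p != provider.profile])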

5.2. Experimental Methodology

In our experiments, we compare the performance of the following three consumer groups:
  • FIRE: consumer agents use the FIRE algorithm.
  • CA_OLD: consumers use the previous version of the CA algorithm.
  • CA_NEW: consumers use the new version of the CA algorithm.
To ensure accuracy and minimize random noise, we conduct multiple independent simulation runs for each consumer group. The Number of Simulation Independent Runs (NSIR) varies per experiment to achieve statistically significant results; the exact NSIR values are displayed in the graphs illustrating the experimental results.
The effectiveness of each algorithm in identifying trustworthy provider agents is measured by the utility gain (UG) achieved by consumer agents during simulations. Throughout each simulation run, the testbed records the UG for each consumer interaction, along with the algorithm used (FIRE, CA_OLD, or CA_NEW).
After completing all simulation runs, we calculate the average UG for each interaction per consumer group. We then apply a two-sample t-test for mean comparison [29] with a 95% confidence level to compare the average UG between the following:
  • CA_OLD and CA_NEW;
  • FIRE and CA_NEW.
Each experiment’s results are displayed using two two-axis graphs: one comparing CA_OLD and CA_NEW and another comparing FIRE and CA_NEW. In each graph, the left y-axis represents the UG means for consumer groups per interaction, while the right y-axis displays the performance rankings produced by the UG mean comparison using the t-test. The ranking is denoted with the prefix “R” (e.g., R.CA), where a higher rank indicates superior performance. If two groups share the same rank, their performance difference is statistically insignificant. For instance, in Figure 1a, at the 17th interaction (x-axis), consumer agents in the CA_NEW group achieve an average UG of 6.15 (left y-axis), and according to the t-test ranking, the CA_NEW group holds a rank of 2 (right y-axis).
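The ranking step can be sketched with SciPy's two-sample t-test as follows: for each interaction index, the UG samples of two consumer groups are compared at the 95% confidence level, and distinct ranks are assigned only when the difference is statistically significant. The array layout and the rank convention are illustrative assumptions.

import numpy as np
from scipy import stats

def rank_groups(ug_a, ug_b, alpha=0.05):
    # ug_a, ug_b: arrays of shape (runs, interactions) holding the UG obtained
    # by each consumer group per independent run. Equal ranks at an interaction
    # mean the difference between the two groups is not statistically significant.
    ranks_a, ranks_b = [], []
    for i in range(ug_a.shape[1]):
        _t_stat, p_value = stats.ttest_ind(ug_a[:, i], ug_b[:, i])
        if p_value >= alpha:
            ranks_a.append(1); ranks_b.append(1)     # no significant difference
        elif ug_a[:, i].mean() > ug_b[:, i].mean():
            ranks_a.append(2); ranks_b.append(1)     # group A significantly better
        else:
            ranks_a.append(1); ranks_b.append(2)     # group B significantly better
    return np.array(ranks_a), np.array(ranks_b)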
Figure 1. (a) A performance comparison of CA_NEW and CA_OLD, when providers change performance with 10% probability per round: CA_NEW achieves better performance, especially in early interactions. (b) A performance comparison of CA_NEW and FIRE, when providers change performance with 10% probability per round: CA_NEW excels in early interactions (except for the first interaction), while FIRE adapts and slightly outperforms over time.
All experiments use a “typical” provider population, as defined in [27], consisting of 50% beneficial providers (yielding positive UG) and 50% harmful providers (yielding negative UG, including intermittent providers).
To maintain consistency, we use the same experimental values as in [27], detailed in Table 7. Additionally, the default parameters for FIRE and CA are presented in Table 8 and Table 9, respectively.
Table 7. Experimental variables.
Table 8. FIRE’s default parameters.
Table 9. CA’s default parameters.

6. Simulation Results

This section presents the results of fourteen experiments evaluating the performance of the updated CA algorithm (CA_NEW) in comparison to both CA_OLD and FIRE. Each subsection examines different environmental conditions. The first two experiments (Section 6.1) concern scenarios where service provider performance fluctuates over time. Experiment 3 (Section 6.2) is conducted in a static environment. Experiments 4–8 (Section 6.3) test the impact of a gradually changing provider population increasing up to 30%. Experiments 9–11 (Section 6.4) examine the effects of a consumer population change up to 10%. Experiments 12 and 13 (Section 6.5) explore changes in the locations of consumers and providers. Experiment 14 (Section 6.6) evaluates performance where all dynamic factors change simultaneously. Section 6.7 summarizes the findings of all experiments.

6.1. The Performance of the New CA Algorithm in Dynamic Trustee Profiles

This section presents the results of two experiments, in which we compare the performance of the updated CA algorithm (CA_NEW) with the performance of the old version (CA_OLD) and the performance of FIRE, when the service providers’ performance varies over time.
Experiment 1. A provider may alter its average performance at a maximum of 1.0 UG unit with a probability of 10% in every round (pμC = 0.10, M = 1.0). The results are shown in Figure 1. Figure 1a shows that CA_NEW generally outperforms CA_OLD, with the most significant improvement observed in the initial interactions. Figure 1b shows that CA_NEW, except for the first interaction, outperforms FIRE in the initial interactions. However, FIRE adapts as the number of interactions increases and eventually achieves slightly better performance.
Experiment 2. A provider may switch to a different performance profile with a probability of 2% in every round (pProfileSwitch = 0.02). The results are depicted in Figure 2. Figure 2a demonstrates that CA_NEW consistently outperforms CA_OLD in all interactions, with an average difference of 2 UG units. Figure 2b shows that CA_NEW performs better than FIRE in all interactions except the first one.
Figure 2. (a) A performance comparison of CA_NEW and CA_OLD when providers switch performance profiles with 2% probability per round: CA_NEW consistently outperforms CA_OLD by an average of 2 UG units. (b) A performance comparison of CA_NEW and FIRE when providers switch performance profiles with 2% probability per round: CA_NEW outperforms FIRE in all interactions except for the first one.

6.2. The Performance of the New CA Algorithm in the Static Setting

This subsection presents the results of Experiment 3, which evaluates the performance of the updated CA algorithm (CA_NEW) in a static environment without any dynamic factors. The findings are depicted in Figure 3. Figure 3a demonstrates that CA_NEW performs better than CA_OLD in the initial interactions. Figure 3b shows that CA_NEW surpasses FIRE in all interactions, except for the first one.
Figure 3. (a) A performance comparison of CA_NEW and CA_OLD in a static environment: CA_NEW achieves better performance in early interactions. (b) A performance comparison of CA_NEW and FIRE in a static environment: CA_NEW outperforms FIRE in all interactions except for the first.

6.3. The Performance of the New CA Algorithm in Provider Population Changes

This section evaluates the performance of the updated CA algorithm (CA_NEW) under conditions where the provider population gradually fluctuates up to 30%, through a series of five experiments:
Experiment 4. The provider population changes at a maximum of 2% in every round (pPPC = 0.02). The results are shown in Figure 4.
Figure 4. (a) A performance comparison of CA_NEW and CA_OLD when the provider population changes with 2% probability per round: CA_NEW outperforms in all interactions. (b) A performance comparison of CA_NEW and FIRE when the provider population changes with 2% probability per round.
Experiment 5. The provider population changes at a maximum of 5% in every round (pPPC = 0.05). The results are shown in Figure 5.
Figure 5. (a) A performance comparison of CA_NEW and CA_OLD when the provider population changes with 5% probability per round: CA_NEW outperforms in all interactions. (b) A performance comparison of CA_NEW and FIRE when the provider population changes with 5% probability per round.
Experiment 6. The provider population changes at a maximum of 10% in every round (pPPC = 0.10). The results are shown in Figure 6.
Figure 6. (a) A performance comparison of CA_NEW and CA_OLD when the provider population changes with 10% probability per round: CA_NEW outperforms in all interactions. (b) A performance comparison of CA_NEW and FIRE when the provider population changes with 10% probability per round.
Experiment 7. The provider population changes at a maximum of 20% in every round (pPPC = 0.20). The results are shown in Figure 7.
Figure 7. A performance comparison of CA_NEW and FIRE when the provider population changes with 20% probability per round.
Experiment 8. The provider population changes at a maximum of 30% in every round (pPPC = 0.30). The results are shown in Figure 8.
Figure 8. A performance comparison of CA_NEW and FIRE when the provider population changes with 30% probability per round.
In Experiments 4–6, we compare the performance of CA_NEW to both CA_OLD and FIRE. Figure 4a, Figure 5a and Figure 6a show that CA_NEW significantly outperforms CA_OLD, achieving higher UG across all interactions. Figure 4b, Figure 5b and Figure 6b reveal that as the provider population change rate rises from 2% to 10%, CA_NEW maintains better performance than FIRE.
Figure 9a,b demonstrate that CA_NEW is more resilient than CA_OLD to changes in the provider population. Specifically, when the provider population change rate increases from 2% to 10%, CA_OLD’s performance drops by 4 UG units (Figure 9a), whereas CA_NEW’s performance drops by only 2 UG units (Figure 9b).
Figure 9. (a) Performance comparison of CA_OLD when provider population changes with probability of 2%, 5%, and 10%; performance drops by 4 UG units. (b) Performance comparison of CA_NEW when provider population changes with probability of 2%, 5%, and 10%; performance drops by 2 UG units. (c) Performance comparison of FIRE when provider population changes with probability of 2%, 5%, and 10%; FIRE is more resilient than CA to this environmental change.
Figure 9b,c indicate that CA_NEW’s performance drops more sharply than FIRE’s, suggesting that FIRE is more resilient to this environmental change. This trend raises the expectation that FIRE will eventually surpass CA_NEW at a higher provider population change rate. To verify this hypothesis, we conducted Experiments 7 and 8. The results, shown in Figure 7 and Figure 8, confirm that when the provider population change rate reaches 30%, FIRE outperforms CA_NEW.
As discussed in Section 4.2 of our previous work [7], FIRE demonstrates greater resilience to changes in the provider population compared to CA. This is primarily due to FIRE’s continuous adaptation over time, which enables it to maintain or even improve its performance. In these experiments, consumers rely on witness reputation, which gradually increases as witnesses (other consumers) remain in the system, contributing to FIRE’s stability. In contrast, CA heavily depends on the provider’s knowledge of their capabilities, and when the provider population changes, newcomers must learn from scratch, leading to a more significant decline in performance. While CA_NEW shows an improvement over CA_OLD in scenarios involving provider population change, the fundamental differences in the underlying mechanisms still allow FIRE to outperform CA_NEW under these conditions.

6.4. The Performance of the New CA Algorithm in Consumer Population Changes

This section evaluates the performance of the updated CA algorithm (CA_NEW) in comparison to both CA_OLD and FIRE under conditions where the consumer population gradually fluctuates by up to 10%, through a series of three experiments.
Experiment 9. The consumer population changes at a maximum of 2% in every round (pCPC = 0.02). The results are shown in Figure 10.
Figure 10. (a) Performance comparison of CA_NEW and CA_OLD when consumer population changes with probability of 2%. (b) Performance comparison of CA_NEW and FIRE when consumer population changes with probability of 2%.
Experiment 10. The consumer population changes at a maximum of 5% in every round (pCPC = 0.05). The results are shown in Figure 11.
Figure 11. (a) Performance comparison of CA_NEW and CA_OLD when consumer population changes with probability of 5%. (b) Performance comparison of CA_NEW and FIRE when consumer population changes with probability of 5%.
Experiment 11. The consumer population changes at a maximum of 10% in every round (pCPC = 0.10). The results are shown in Figure 12.
Figure 12. (a) Performance comparison of CA_NEW and CA_OLD when consumer population changes with probability of 10%. (b) Performance comparison of CA_NEW and FIRE when consumer population changes with probability of 10%.
Figure 10a, Figure 11a and Figure 12a illustrate that CA_NEW outperforms CA_OLD, generally achieving a higher UG in the initial interactions. Figure 10b, Figure 11b and Figure 12b demonstrate that CA_NEW consistently performs better than FIRE throughout all interactions under this environmental change.
Figure 13a,b show that in the first interactions, both CA_OLD and CA_NEW improve their performance as the consumer population change rate increases from 2% to 10%. Nevertheless, Figure 12a reveals that when CPC = 10%, CA_OLD slightly surpasses CA_NEW in the first interaction. A possible explanation for this is that when CPC = 10%, the average UG of the first interaction is influenced by a larger number of newcomer agents who have their first interaction in later simulation rounds. During these rounds, service providers have established more connections and can more accurately evaluate their ability to provide the service using CA_OLD, resulting in a higher UG for service consumers. This observation led us to hypothesize that CA_OLD, compared to CA_NEW, is a more suitable choice for service providers that have remained in the system longer and have gained more knowledge about their service-providing capabilities, assuming their capabilities remain unchanged over time.
Figure 13. (a) Performance comparison of CA_NEW in consumer population changes of 2%, 5%, and 10%. (b) Performance comparison of CA_OLD in consumer population changes of 2%, 5%, and 10%.

6.5. The Performance of the New CA Algorithm in Consumer and Provider Location Changes

In this subsection, we evaluate the performance of the updated CA algorithm (CA_NEW) in comparison to both CA_OLD and FIRE under conditions where consumers and providers change locations, by conducting the following two experiments.
Experiment 12. A consumer may move to a new location on the spherical world at a maximum angular distance of π/20 with a probability of 0.10 in every round (pCLC = 0.10, ΔΦ = π/20). The results are shown in Figure 14.
Figure 14. (a) A performance comparison of CA_NEW and CA_OLD when consumers change locations with a probability of 10% per round. (b) A performance comparison of CA_NEW and FIRE when consumers change locations with a probability of 10% per round.
Experiment 13. A provider may move to a new location on the spherical world at a maximum angular distance of π/20 with a probability of 0.10 in every round (pPLC = 0.10, ΔΦ = π/20). The results are shown in Figure 15.
Figure 15. (a) A performance comparison of CA_NEW and CA_OLD when providers change locations with a probability of 10% per round. (b) A performance comparison of CA_NEW and FIRE when providers change locations with a probability of 10% per round.
Figure 14a and Figure 15a indicate that CA_NEW demonstrates improved performance in the initial interactions compared to CA_OLD under both environmental changes. Figure 14b and Figure 15b show that CA_NEW outperforms FIRE in both experiments, except for the first interaction, where FIRE performs better.

6.6. The Performance of the New CA Algorithm Under the Effects of All Dynamic Factors

In this subsection, we report the results of the final experiment, which evaluates the performance of CA_NEW under the combined influence of all dynamic factors.
Experiment 14. A provider may alter its average performance at a maximum of 1.0 UG unit with a probability of 10% in every round ($p_{\mu C} = 0.10$, $M = 1.0$). A provider may switch into a different performance profile with a probability of 2% in every round ($p_{ProfileSwitch} = 0.02$). The provider population changes at a maximum of 2% in every round ($p_{PPC} = 0.02$). The consumer population changes at a maximum of 5% in every round ($p_{CPC} = 0.05$). A consumer may move to a new location on the spherical world at a maximum angular distance of $\pi/20$ with a probability of 0.10 in every round ($p_{CLC} = 0.10$, $\Delta\Phi = \pi/20$). A provider may move to a new location on the spherical world at a maximum angular distance of $\pi/20$ with a probability of 0.10 in every round ($p_{PLC} = 0.10$, $\Delta\Phi = \pi/20$).
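For readability, the combined parameter set of Experiment 14 can be summarized as a configuration sketch; the dictionary below uses illustrative key names rather than our simulator's actual identifiers.

```python
import math

# Illustrative configuration for Experiment 14 (hypothetical key names,
# not those of our simulator).
experiment_14 = {
    "p_mu_C": 0.10,             # prob. a provider shifts its mean performance per round
    "M": 1.0,                   # maximum shift of the mean performance (UG units)
    "p_ProfileSwitch": 0.02,    # prob. a provider switches performance profile per round
    "p_PPC": 0.02,              # maximum provider population change per round
    "p_CPC": 0.05,              # maximum consumer population change per round
    "p_CLC": 0.10,              # prob. a consumer changes location per round
    "p_PLC": 0.10,              # prob. a provider changes location per round
    "delta_phi": math.pi / 20,  # maximum angular distance of a location change
}
```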
The results, shown in Figure 16, demonstrate that CA_NEW consistently outperforms both CA_OLD and FIRE across all interactions.
Figure 16. (a) Performance comparison of CA_NEW and CA_OLD when all dynamic factors are in effect: CA_NEW outperforms. (b) Performance comparison of CA_NEW and FIRE when all dynamic factors are in effect: CA_NEW outperforms.

6.7. An Overview of the Results

The simulation results reveal that the updated CA algorithm (CA_NEW) demonstrates superior performance over its predecessor (CA_OLD) and the FIRE model under various environmental conditions:
  • Dynamic Trustee Profiles: CA_NEW outperforms CA_OLD across all interactions and shows resilience in handling provider performance fluctuations. While FIRE adapts more quickly in some cases, CA_NEW remains very competitive.
  • Static Environment: CA_NEW surpasses CA_OLD in the initial interactions and consistently outperforms FIRE, except in the first interaction.
  • Provider Population Changes: CA_NEW is more resilient than CA_OLD when provider population fluctuations increase up to 10%, maintaining better performance. However, as changes reach 30%, FIRE eventually outperforms CA_NEW, indicating FIRE’s resilience in this environmental change.
  • Consumer Population Changes: CA_NEW generally achieves a higher UG than CA_OLD, though at high levels of consumer population changes, CA_OLD performs slightly better in the first interaction, suggesting that CA_OLD may be better for old service providers with stable capabilities. CA_NEW consistently outperforms FIRE under consumer population changes.
  • Consumer and Provider Location Changes: CA_NEW shows improved performance over CA_OLD in initial interactions and outperforms FIRE in all interactions, except for the first one.
  • Combined Dynamic Factors: When all dynamic factors are in effect, CA_NEW maintains superior performance over both CA_OLD and FIRE across all interactions, demonstrating its robustness in complex environments.
Overall, CA_NEW is a significant improvement over CA_OLD, offering better resilience and adaptability, although there are indications that CA_OLD may be a better choice for long-standing service providers whose behavior remains stable. While FIRE exhibits some advantages under extreme environmental changes, CA_NEW remains highly competitive across diverse scenarios. Building on previous research [30], where we examined how trustors can identify environmental dynamics and select the optimal trust model (CA or FIRE) to maximize utility, a natural direction for future work is to explore how trustees can recognize environmental changes and assess their own level of self-awareness to determine whether CA_OLD or CA_NEW would yield the best performance. Adopting such a comprehensive reinforcement learning (RL) approach could enhance adaptability and effectiveness across a wide range of scenarios.

7. Towards a Comprehensive Evaluation of the CA Model

In [31], Wang et al. introduce quality of trust (QoT) to characterize the quality of trust models and propose several evaluation criteria, including subjectivity, dynamicity, context awareness, privacy preservation, scalability, robustness, overhead, explainability, and user acceptance. However, they highlight the absence of a uniform standard. Indeed, Fotia et al. in [32], focusing on edge-based IoT systems, provide a different set of evaluation criteria (accuracy, security, availability, reliability, heterogeneity, lightweightness, flexibility, and scalability), referring to them as trust management requirements, whereas in [33], integrity, computability, and flexibility are identified as the key properties of trust models.
To evaluate the CA model comprehensively, we selected thirteen trust model criteria synthesized from diverse but widely cited publications [31,32,33]. These criteria capture both technical properties (e.g., robustness, scalability, overhead) and human-centric qualities (e.g., explainability, user acceptance). Our selection reflects recurring themes across the literature and the specific demands of open, dynamic MAS environments. While all thirteen criteria are important, their relative priority may vary with application needs. For example, scalability and overhead are critical in IoT settings with resource constraints, whereas explainability and user acceptance are more relevant in human–agent collaboration scenarios. The remainder of this section evaluates the CA model against these thirteen criteria.
Decentralization. Centralized cloud-based trust assessment has significant drawbacks, including a single point of failure, scalability issues, and susceptibility to data misuse and manipulation by the companies that own cloud servers. As a response, decentralized–distributed solutions are becoming the norm in trust management [34]. The CA model follows the decentralized approach by allowing each trustee to compute trust independently, eliminating reliance on a central authority.
Subjectivity. Trust is inherently subjective, as different service requesters may have different service requirements in different contexts, affecting their perception of the trustworthiness of the same service provider [33]. Therefore, trust models should take into account the trustor’s subjective opinion [31]. CA satisfies the subjectivity requirement: the service requester defines and broadcasts the task and its requirements, and after the task’s execution it rates the service provider’s performance; based on this feedback, the service provider decides whether the task was successfully executed and modifies the weight of the relevant connection accordingly, as illustrated in the sketch below.
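The following minimal sketch illustrates this feedback loop with hypothetical names and a simplified update rule; it is not the exact CA algorithm.

```python
# Minimal sketch of subjective, feedback-driven weight updates (hypothetical names).
class Connection:
    def __init__(self, requester_id: str, task: str, weight: float = 0.45):
        self.requester_id = requester_id
        self.task = task
        self.weight = weight  # provider's locally stored trust value for this pairing

def update_weight(conn: Connection, rating: float, success_threshold: float = 0.5,
                  step: float = 0.05) -> None:
    """Strengthen the connection if the requester judged the task successful,
    weaken it otherwise (illustrative rule, not the exact CA update)."""
    if rating >= success_threshold:
        conn.weight = min(1.0, conn.weight + step)
    else:
        conn.weight = max(0.0, conn.weight - step)

conn = Connection("consumer_42", "task_1")
update_weight(conn, rating=0.8)   # positive feedback strengthens the connection
update_weight(conn, rating=0.2)   # negative feedback weakens it
```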
Context Awareness. Since trust is context-dependent, a service provider may be trustworthy for one task but unreliable for another. The context typically refers to the task type, but it can also refer to an objective or execution environment with specific requirements [31]. CA ensures context awareness by allowing trustees to maintain distinct trust connections for different tasks requested by the same trustor.
Dynamicity. Trust is inherently dynamic, continuously evolving in response to events, environmental changes, and resource fluctuations. Recent trust information is much more important than historical information [31]. The CA model satisfies the dynamicity requirement, since trust update is event-driven, taking place after each task execution, making it well suited for dynamic environments, where agents may frequently join, leave, or alter their behavior, as evidenced by the experiments in this study.
Availability. A trust model should ensure that all entities and services remain updated [35] and fully available, even in the face of attacks or resource constraints [32]. In the CA approach, the service requester does not assign a task to a specific service provider but instead broadcasts the task to all nearby service providers. This ensures that even if a particular service provider is unavailable due to resource limitations (e.g., battery depletion), another service provider that receives the request can still complete the task. Consequently, our approach effectively meets the availability requirement.
Integrity and Transparency. All transaction data and interaction outcomes should be fully recorded, efficiently stored, and easily retrieved [33]. Trust values should be accessible to all authorized network nodes whenever they need to assess the trustworthiness of any device in the system [34]. In the CA approach, trust information is stored locally by the trustee in the form of connections (trust relationships). This ensures that trust information remains available even when entities cross application boundaries or transition between systems. Additionally, any trustee can readily access its own trust information to evaluate its reliability for a given task, thereby fulfilling the integrity and transparency requirement.
Scalability. Since scalability is linked to processing load and time, a trust model must efficiently manage large-scale networks while maintaining stable performance, regardless of network size, and function properly when devices are added or removed [31]. Large-scale networks require increased communication and higher storage capacity, meaning that trust models must adapt to the growing number of nodes and interactions [26,35,36]. However, many existing trust management algorithms struggle to scale effectively in massively distributed systems [22]. The CA model is designed for highly dynamic environments, ensuring strong performance despite continuous population changes. Previous research [7] has experimentally demonstrated CA’s resilience to fluctuations in consumer populations. In this study, we present simulation experiments showing that the updated CA algorithm has significantly improved resilience, even when provider populations change. Additionally, since agents in the CA approach do not exchange trust information, the model avoids scalability issues related to agent communication. However, we have yet to evaluate how CA scales with an increasing number of nodes.
Overhead. A trust model should be simple and lightweight, as calculating trust scores may be impractical for IoT devices with limited computing power [26]. It is essential to ensure that a trust model does not excessively consume a device’s resources [21]. Both computational and storage overhead must be considered [31], as devices typically have constrained processing and storage capabilities and must prioritize their primary tasks over trust evaluation. Additionally, excessive computational overhead can hinder real-time trust assessments, negatively impacting time-sensitive applications. A trust model’s efficiency can be analyzed using big O notation for time and space complexity. The CA model is considered simple and lightweight, since its algorithm avoids complex mathematical computations. We provide a sketch of a computational complexity analysis of Algorithm 2 in Appendix A.2, which outlines the core reasoning behind the CA model’s efficiency in big O notation. However, a detailed analysis (full formal proof) of its time and space complexity remains to be conducted. Since all trust information is stored locally by the trustees, further research is needed to evaluate the CA model’s storage overhead and explore potential optimizations to minimize it.
Accuracy. A trust model must effectively identify and prevent malicious entities by ensuring precise classification [26]. It should achieve a high level of accuracy, meaning that the computed trust value closely reflects the true value (ground truth) [32]. Several studies [1,10,37,38] assess proposed trust models using metrics such as Precision, Recall, F1-score, accuracy, False Positive Rate (FPR), and True Positive Rate (TPR). However, the accuracy of the CA model has not yet been evaluated using these metrics.
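Such a future assessment could reuse standard classification metrics; the sketch below (with illustrative labels, where 1 denotes a malicious provider) computes Precision, Recall/TPR, F1-score, FPR, and accuracy from binary predictions.

```python
# Standard classification metrics for a future accuracy evaluation of CA
# (illustrative; 1 = malicious, 0 = benign).
def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # also the TPR
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    accuracy = (tp + tn) / len(y_true) if y_true else 0.0
    return {"precision": precision, "recall/TPR": recall, "f1": f1,
            "FPR": fpr, "accuracy": accuracy}

print(classification_metrics([1, 0, 1, 0, 1], [1, 0, 0, 0, 1]))
```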
Robustness. Trust models must withstand both anomalous behavior caused by sensor malfunctions [36] and trust-related attacks from insider attackers who deliberately act maliciously for personal gain or to disrupt system performance [26,39]. In heterogeneous networks, constant device connectivity and weak interoperability between network domains create numerous opportunities for malicious activities [31]. Traditional cryptographic and authentication methods are insufficient against insider attacks, where attackers possess valid cryptographic keys [36]. Existing trust models can only mitigate certain types of attacks, but none can fully defend against all threats [25]. Therefore, developing algorithms capable of detecting a broad range of malicious activity patterns is crucial [26]. In the following section, we examine common trust-related attacks from the literature and evaluate the CA model’s effectiveness in countering them.
Privacy Preservation. Ensuring a high level of privacy through data encryption is essential during trust assessment to prevent sensitive information leaks [34]. A trust model must safeguard both identity privacy (IP) and data privacy (DP) [31]. Specifically, feedback and interaction data must remain protected throughout all stages of data management, including collection, transmission, storage, fusion, and analysis. Additionally, identity details such as names and addresses should be shielded from unauthorized access, as linking trust information to real identities can lead to serious risks, such as Distributed Denial of Service (DDoS) attacks [35]. In the CA approach, while entities generally do not exchange trust information (e.g., recommendations), interaction feedback is transmitted from the trustor to the trustee, allowing the trustee to assess and locally store its own trustworthiness in the form of connections. Therefore, protecting the trustor’s identity and ensuring data privacy during transmission, storage, and analysis are critical considerations in the CA approach. Exploring the integration of blockchain technology to enhance privacy protection could be a valuable future research direction.
Explainability. Trust models should be capable of providing clear and understandable explanations for their results. The ability to analyze and justify decisions is essential for enhancing user trust, compliance, and acceptance [40]. Additionally, it is important to clarify the processing logic and how trust metrics influence trust evaluations [31]. In this study, we made two key contributions to improve the explainability of our model. First, we conducted a semi-formal analysis to identify potential modifications to the CA algorithm’s processing logic that could enhance its performance. Then, we carried out a series of simulation experiments to demonstrate the effectiveness of these improvements.
User Acceptance. The acceptance of a trust model depends on factors such as Quality of Service (QoS), quality of experience (QoE), and individual user preferences [31]. It can be assessed by gathering user feedback through questionnaires. However, the user acceptance of the CA model has not yet been evaluated.
Overall, the CA model satisfies decentralization, subjectivity, integrity, and transparency, but several aspects require further enhancement and evaluation. Future research should focus on scalability, as the model’s performance in large-scale environments with a high number of nodes remains untested. Additionally, while trust data are stored locally, their impact on system overhead and storage efficiency needs further investigation. Accuracy assessment using standard metrics like Precision, Recall, and the F1-score is also necessary to validate its reliability. In terms of robustness, although the model resists false recommendations, its resilience against insider attacks requires deeper analysis. Privacy preservation remains an open challenge, particularly in safeguarding trustor identity and feedback transmission, which could benefit from encryption or blockchain-based solutions. Finally, user acceptance has yet to be assessed, making it essential to evaluate the model’s adoption based on Quality of Service (QoS) and quality of experience (QoE). Addressing these challenges will strengthen the CA model’s effectiveness and applicability in dynamic environments. To the best of our knowledge, no other models have undergone such a comprehensive evaluation.

8. Trust-Related Attacks

In this section, we examine the CA model’s resilience against the most common trust-related attacks identified in various research studies [2,16,21,22,23,25,31,34,35,36,41,42]. A malevolent node is typically defined as a socially uncooperative entity that consistently seeks to disrupt system operations [36]. Its primary objective is to provide low-quality services to conserve its own resources while still benefiting from the services offered by other nodes in the system. Malicious nodes can employ various trust-related attack strategies, each designed to evade detection through different deceptive tactics [25].
Malicious with Everyone (ME). In this attack, a node consistently provides low-quality services or misleading recommendations, regardless of the requester. This is one of the most fundamental types of attacks. To counter the ME attack, the CA approach could incorporate a contract-theoretic incentive mechanism. This mechanism would reward honest service providers with utility gains while penalizing dishonest service providers with utility losses, similarly to the approach in [19], by awarding or deducting credit coins.
Bad-mouthing Attack (BMA). In this attack, one or more malicious nodes deliberately provide bad recommendations to damage the reputation of a well-behaved node, reducing its chances of being selected as a service provider. However, in the CA approach, agents do not exchange trust information through recommendations. Since nodes cannot act as recommenders, our approach is inherently immune to BMA. Most studies assume that service requesters are honest, but as noted in [17], service requesters can also be “ill-intended” or “dishonest”, deliberately giving low ratings to a service provider despite receiving good service. Additionally, some service requesters may be “amateur” and incapable of accurately assessing service quality. This represents a specific type of bad-mouthing attack, against which our approach is also resilient. In the CA model, service providers have the autonomy to accept or reject service requests, allowing them to maximize their profits while avoiding dishonest or amateur service requesters. If a service requester unfairly assigns a low rating to a high-quality service, the service provider will respond by decreasing the weight of its connection with that service requester, reducing the likelihood of future interactions. This serves as a built-in mechanism for penalizing dishonest service requesters.
Ballot Stuffing (BSA) or Good-mouthing Attack. This attack occurs when one or more malicious recommenders (a collusion attack) falsely provide positive feedback for a low-quality service provider to boost its reputation and increase its chances of being selected, ultimately disadvantaging legitimate, high-quality service providers. Since the CA approach does not involve nodes exchanging recommendations, it is inherently resistant to any BSA carried out by recommender nodes, which is common in other trust models. However, a specific variation of this attack can occur when an ill-intended or amateur service requester assigns a service provider a higher rating than it actually deserves [17]. In this scenario, while the service provider benefits from an inflated rating, it also suffers by misjudging its actual service capability. Since the CA algorithm requires the service provider to compute the average weight of its existing connections to assess its performance, an inaccurate self-evaluation could lead to financial losses when dealing with honest service requesters. Thus, service providers have a strong incentive to identify and avoid dishonest or inexperienced service requesters. To mitigate this risk, we could implement a mechanism similar to the Tlocal and Tglobal value comparison from [17] or the Internal Similarity concept from [23], where the service provider would evaluate the consistency of a given rating by comparing it to the average or median weights of connections with other service providers for the same service. Alternatively, a contract-theoretic incentive mechanism, as in [18], could deter dishonest service requesters by requiring them to pay a disproportionately high amount for low-quality services received. Additionally, integrating credit quotas (coins) as proposed in [19] could further discourage good-mouthing attacks. The choice of mitigation strategy would depend on the specific application environment in which the CA approach is implemented.
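A possible consistency check, loosely inspired by the Internal Similarity concept [23], is sketched below; the function name, tolerance value, and decision rule are our own illustrative choices, not part of the current CA algorithm.

```python
# Illustrative rating-consistency check for suspected good-mouthing
# (hypothetical names and threshold; not part of the current CA algorithm).
from statistics import median

def rating_is_suspicious(new_rating: float, same_task_weights: list,
                         tolerance: float = 0.3) -> bool:
    """Flag a rating that deviates from the median weight of the provider's
    existing connections for the same task by more than `tolerance`."""
    if not same_task_weights:
        return False  # nothing to compare against yet
    return abs(new_rating - median(same_task_weights)) > tolerance

# A 0.95 rating looks suspicious when existing weights for the task hover around 0.4.
print(rating_is_suspicious(0.95, [0.40, 0.45, 0.38]))  # True
```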
Self-promoting Attack. In this attack, a malicious node falsely provides positive recommendations about itself to increase its chances of being chosen as a service provider. Once selected, it then delivers poor services or exploits the network. This type of attack is especially effective against nodes that have not previously interacted with the attacker. In the CA approach, service providers do not provide recommendations about themselves, and service requesters do not select service providers. As a result, traditional self-promotion tactics are ineffective. However, a dishonest service provider could still choose to provide a service despite knowing it would harm the service requester. To prevent such behavior, it is crucial to incorporate a penalty mechanism that imposes a financial loss on dishonest service providers. Potential solutions include the fee charge concept from [17], the contract-theoretic mechanism from [18], or an incentive mechanism using credit quotas, such as TrustCoin [19]. The choice of mechanism would depend on the specific application scenario.
Opportunistic Service Attack (OSA). This attack occurs when a node manipulates its behavior based on its reputation. When its reputation declines, it offers high-quality services, but once its reputation improves, it provides poor services. This strategy allows the node to sustain a sufficient level of trust to continue being chosen as a service provider. However, in the CA approach, OSA does not enhance a malicious node’s chances of being selected as a service provider since trustors do not choose trustees. Consequently, our approach remains resilient against opportunistic service attacks.
Sybil Attack (SA). This attack occurs when a malicious node generates multiple fake identities to manipulate the reputation of a target node unfairly by providing various ratings. In the CA approach, a malicious service requester could use this tactic to hinder a target service provider’s ability to accurately assess its service capability, deplete its resources, and reduce its chances of being selected as a service provider. To counter this attack, we can apply the same mechanisms proposed for addressing the good-mouthing attack. Specifically, an approach based on the Internal Similarity concept [23] or a contract-theoretic mechanism based on incentives [18] could either deter attackers from carrying out their attacks or assist the service provider in identifying and isolating malicious service requesters.
Whitewashing Attack (WA), also known as a Re-entry Attack, occurs when a malicious node abandons the system after its reputation drops below a certain threshold and then re-enters with a new identity to erase its negative history and reset its reputation to a default value. In the CA approach, this attack is ineffective because changing a service provider’s identity does not impact its likelihood of being selected as a service provider, as trustors do not choose trustees.
On–Off Attack (OOA), also called a Random Attack, is one of the most challenging attacks to detect. In this attack, a malicious node alternates its behavior between good (ON) and bad (OFF) in an unpredictable manner, restoring trust just before launching another attack. During the ON phase, the node builds a positive reputation, which it later exploits for malicious activities. The CA approach is resilient against OOA since trustors do not select trustees, meaning a high reputation does not increase a node’s chances of being chosen as a service provider.
Discrimination or Selective Misbehavior Attack. This kind of attack occurs when an attacker selectively manipulates its behavior by providing legitimate services for certain network tasks while acting maliciously against others. To mitigate this attack in the CA approach, a penalty mechanism should be implemented to impose financial consequences on dishonest service providers. Depending on the application scenario, this could involve the “fee charge concept” [17], a contract-theoretic mechanism [18], or an incentive mechanism using credit quotas like in TrustCoin [19]. One form of this attack is selfish behavior, where service providers prioritize easier tasks that require less effort. Rational agents would evaluate each task based on expected utility—calculating the probability of success multiplied by the reward—and choose accordingly. To counter this in the CA approach, it is crucial to implement incentives that encourage capable agents to take on more challenging tasks. For example, a contract-theoretic mechanism can ensure that highly skilled agents prefer difficult tasks with higher expected utility.
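The expected-utility reasoning described above can be written in a few lines; the success probabilities, rewards, and effort costs below are illustrative values of our own, not parameters of the CA model.

```python
# Expected-utility task selection by a rational provider (illustrative values only).
def expected_utility(success_probability: float, reward: float, effort_cost: float) -> float:
    return success_probability * reward - effort_cost

# Without incentives, the easy task dominates and a selfish provider picks it.
baseline = {
    "easy_task": expected_utility(0.95, reward=1.0, effort_cost=0.1),  # 0.85
    "hard_task": expected_utility(0.60, reward=1.2, effort_cost=0.5),  # 0.22
}
# A contract-theoretic reward schedule can raise the payment for the hard task so that,
# for a capable agent, its expected utility dominates.
with_incentive = dict(baseline,
                      hard_task=expected_utility(0.60, reward=3.0, effort_cost=0.5))  # 1.30
print(max(baseline, key=baseline.get), max(with_incentive, key=with_incentive.get))
# -> easy_task hard_task
```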
Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks aim to disrupt a service by overwhelming it with excessive traffic or resource-intensive requests. While a DoS attack originates from a single attacker, a DDoS attack involves multiple compromised devices working together to target a single system, making it more difficult to counter. In the CA approach, these attacks remain a potential threat. Therefore, future research should focus on developing effective mechanisms to mitigate DoS and DDoS attacks.
Storage Attack occurs when a malicious node manipulates stored feedback data by deleting, modifying, or injecting fake information. In the CA approach, an attacker may attempt to alter or remove the locally stored connections of a trustee. To counter this threat, blockchain technology can be utilized as a safeguard to ensure data integrity and prevent tampering.

9. Conclusions and Future Work

Various approaches have been developed to assess trust and reputation in real-world MASs, such as peer-to-peer (P2P) networks, online marketplaces, pervasive computing, Smart Grids, and the Internet of Things (IoT). However, existing trust models face significant challenges, including agent mobility, dynamic behavioral changes, the continuous entry and exit of agents, and the cold start problem.
To address these issues, we introduced the Create Assemblies (CA) model, inspired by synaptic plasticity in the human brain, where trustees (service providers) can evaluate their own capabilities and locally store trust information, allowing for improved agent mobility handling, reduced communication overhead, resilience to disinformation, and enhanced privacy.
Previous work [7] comparing CA with FIRE, a well-known trust model, revealed that CA adapts well to consumer population fluctuations but is less resilient to provider population changes and continuous performance shifts. This work built on these findings and used a semi-formal analysis to identify performance pitfalls, which were then addressed by allowing service providers to self-assess whether their performance falls below a certain threshold, thereby ensuring faster reaction and better adaptability in dynamic environments.
The simulation results confirm that CA_NEW outperforms the original CA_OLD, with greatly improved resilience and adaptability, although CA_OLD may still be preferable in scenarios of consumer population change where long-standing service providers maintain stable performance. While FIRE has certain advantages under extreme environmental changes, CA_NEW remains highly competitive across a wide variety of environmental conditions. Building on prior research [30], where we explored how trustors can detect environmental dynamics and select the optimal trust model (CA or FIRE) to maximize utility, we identify a natural direction for future work: exploring how trustees can detect the dynamics of their environment and their own level of self-awareness in order to choose between CA_OLD and CA_NEW for optimal performance.
This paper also analyzed CA with respect to established evaluation criteria for trust models and discussed its resilience to most well-known trust-related attacks, proposing countermeasures for dishonest behaviors. While the CA model meets key criteria such as decentralization, subjectivity, context awareness, dynamicity, availability, integrity, and transparency, it requires further research and improvements. Table 10 summarizes the CA model’s evaluation across all thirteen criteria, providing a concise overview of its current strengths and areas for future enhancement. The model’s holistic evaluation not only highlights its strengths but also demonstrates a commitment to continuous refinement, positioning it as a highly promising foundation for future trust management solutions.
Table 10. Summary of CA model evaluation based on thirteen criteria.

Author Contributions

Conceptualization, Z.L. and D.K.; methodology, Z.L. and D.K.; software, Z.L.; validation, Z.L. and D.K.; formal analysis, Z.L. and D.K.; investigation, Z.L. and D.K.; resources, Z.L.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L. and D.K.; visualization, Z.L. and D.K.; supervision, D.K.; project administration, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The research data will be made available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1

Proof of Proposition 1.
It suffices to show by mathematical induction that the $n$-th connection for $task_1$ will be initialized to the value of 0.45.
Base case ($n = 2$). We know that $BP_1$ has initialized its second connection, for $co_2$, to the value 0.45.
Induction hypothesis: Assume that Proposition 1 is correct for the $k$-th connection that $BP_1$ will create for $task_1$.
Induction step: We will show that Proposition 1 holds for the $(k+1)$-th connection that $BP_1$ will create for $task_1$.
$BP_1$ will calculate the average of its $k$ connections for $task_1$ as follows:
$$\mathit{average} = \frac{w_{co_1} + w_{co_2} + \cdots + w_{co_{k-1}} + w_{co_k}}{k}, \tag{A1}$$
where $w_{co_n}$ denotes the weight of the $n$-th connection of $BP_1$ for $task_1$.
From the induction hypothesis, we have the following:
$$w_{co_k} = 0.45, \tag{A2}$$
and the average of the weights of the first $k-1$ connections is equal to 0.45:
$$\frac{w_{co_1} + w_{co_2} + \cdots + w_{co_{k-1}}}{k-1} = 0.45 \;\Rightarrow\; w_{co_1} + w_{co_2} + \cdots + w_{co_{k-1}} = (k-1) \cdot 0.45. \tag{A3}$$
From (A1)–(A3), we have the following: $\mathit{average} = \dfrac{(k-1) \cdot 0.45 + 0.45}{k} = 0.45$. □
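A quick numerical check of Proposition 1 (an illustrative script, not part of the model) confirms that when each new connection is initialized to the current average of the existing weights, and those weights are still at their initial values as assumed in the proposition's scenario, the average remains 0.45.

```python
# Numerical check of Proposition 1 (illustrative): starting from a first connection
# weight of 0.45 and initializing every new connection to the current average of
# the existing (unchanged) weights, the average never drifts from 0.45.
weights = [0.45]                                 # first connection for task_1
for _ in range(10):                              # create ten further connections
    weights.append(sum(weights) / len(weights))  # initialize to the current average
assert abs(sum(weights) / len(weights) - 0.45) < 1e-12
print(weights[-1])  # 0.45
```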

Appendix A.2

Sketch of Computational Complexity Analysis of Algorithm 2. This sketch highlights the dominant cost contributors. A full formal proof would require additional assumptions and details regarding data structures and implementation models.
Time Complexity Analysis. We analyze the time complexity per iteration of the main loop of Algorithm 2, focusing on dominant, input-dependent operations. Constant time steps and scalar updates are noted but not emphasized in the final complexity.
Notation:
  • $t$: Number of distinct known tasks $(c, r)$ stored or processed by the agent, where
    • $c$ is a task category;
    • $r$ is a set of requirements.
  • $n$: Number of agents in the system.
  • $q$: Number of incoming messages in the agent’s message list.
  • $m$: Number of connections currently held by the agent. In the worst case, $m = O(n \cdot t)$.
Step-by-Step Breakdown
1. Initialization (line 1):
  • Creating an empty structure: $O(1)$.
2. Message Handling and Connection Setup (lines 3–19):
  • Broadcast to all agents (line 4): $O(n)$.
  • Append message (line 6): $O(1)$.
  • Connection handling (lines 7–14 or 15–19):
    • Worst case (lines 7–14):
      • Linear search in connections: $O(m)$.
      • Find similar connections and average weights: $O(m)$.
    • Total: $O(m)$.
    • Alternative case (lines 15–19): $O(1)$.
  Overall block cost: $O(n + m)$.
3. Select and Attempt Task (lines 20–33):
  • For each message ($q$ in total), find the corresponding connection (worst-case linear search): $O(q \cdot m)$.
  • Other steps (task status, threshold check, performance evaluation): each is $O(1)$.
  • Message deletion (search + shift in list): $O(q)$.
  Total for this block: $O(q \cdot m + q) = O(q \cdot m)$.
4. Update Connection Weight (lines 34–38): Direct updates and conditionals: $O(1)$.
5. Dynamic Profile Update (lines 39–43):
  • Iterate and filter all connections: $O(m)$.
  • Search through task list: $O(t)$.
  Total: $O(m + t)$.
Total Time Complexity: $O(n + m + q \cdot m + m + t) = O(n + q \cdot m + t) = O(q \cdot n \cdot t)$, since, as stated, in the worst case, $m = O(n \cdot t)$.
Space Complexity. We estimate the memory requirements per agent, focusing on the key data structures and how they scale with input parameters:
  • $i.bad\_tasks$: Map from tasks to Booleans: $O(t)$.
  • Message list $M$: Up to $q$ messages: $O(q)$.
  • Connection list: At most one per known task per agent: $O(n \cdot t)$.
  • Local scalars and flags: $O(1)$.
Total Space Complexity: $O(q + n \cdot t)$.
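The per-agent storage terms above can be pictured with the following sketch; the field names are hypothetical and simply mirror the notation of this appendix.

```python
from dataclasses import dataclass, field

# Per-agent storage sketch mirroring the space-complexity terms above
# (hypothetical field names).
@dataclass
class AgentState:
    bad_tasks: dict = field(default_factory=dict)    # task -> bool, O(t)
    messages: list = field(default_factory=list)     # incoming messages, O(q)
    connections: list = field(default_factory=list)  # one per (agent, task) pair, O(n * t)
    acceptance_threshold: float = 0.5                # local scalar, O(1)

# Total per-agent space: O(q + n * t), dominated by the connection list.
```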

References

  1. Fabi, A.K.; Thampi, S.M. A psychology-inspired trust model for emergency message transmission on the Internet of Vehicles (IoV). Int. J. Comput. Appl. 2020, 44, 480–490. [Google Scholar] [CrossRef]
  2. Jabeen, F.; Khan, M.K.; Hameed, S.; Almogren, A. Adaptive and survivable trust management for Internet of Things systems. IET Inf. Secur. 2021, 15, 375–394. [Google Scholar] [CrossRef]
  3. Hattab, S.; Lejouad Chaari, W. A generic model for representing openness in multi-agent systems. Knowl. Eng. Rev. 2021, 36, e3. [Google Scholar] [CrossRef]
  4. Player, C.; Griffiths, N. Improving trust and reputation assessment with dynamic behaviour. Knowl. Eng. Rev. 2020, 35, e29. [Google Scholar] [CrossRef]
  5. Jelenc, D. Toward unified trust and reputation messaging in ubiquitous systems. Ann. Telecommun. 2021, 76, 119–130. [Google Scholar] [CrossRef]
  6. Sato, K.; Sugawara, T. Multi-Agent Task Allocation Based on Reciprocal Trust in Distributed Environments. In Agents and Multi-Agent Systems: Technologies and Applications; Jezic, G., Chen-Burger, J., Kusek, M., Sperka, R., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2021; Smart Innovation, Systems and Technologies; Volume 241. [Google Scholar] [CrossRef]
  7. Lygizou, Z.; Kalles, D. A biologically inspired computational trust model for open multi-agent systems which is resilient to trustor population changes. In Proceedings of the 13th Hellenic Conference on Artificial Intelligence (SETN ′24), Athens, Greece, 11–13 September 2024; Article No. 29. pp. 1–9. [Google Scholar] [CrossRef]
  8. Samuel, O.; Javaid, N.; Khalid, A.; Imran, M.; Nasser, N. A Trust Management System for Multi-Agent System in Smart Grids Using Blockchain Technology. In Proceedings of the 2020 IEEE Global Communications Conference (GLOBECOM 2020), Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  9. Khalid, R.; Samuel, O.; Javaid, N.; Aldegheishem, A.; Shafiq, M.; Alrajeh, N. A Secure Trust Method for Multi-Agent System in Smart Grids Using Blockchain. IEEE Access 2021, 9, 59848–59859. [Google Scholar] [CrossRef]
  10. Bahutair, M.; Bouguettaya, A.; Neiat, A.G. Multi-Perspective Trust Management Framework for Crowdsourced IoT Services. IEEE Trans. Serv. Comput. 2022, 15, 2396–2409. [Google Scholar] [CrossRef]
  11. Meyerson, D.; Weick, K.E.; Kramer, R.M. Swift trust and temporary groups. In Trust in Organizations: Frontiers of Theory and Research; Kramer, R.M., Tyler, T.R., Eds.; Sage Publications, Inc.: Thousand Oaks, CA, USA, 1996; pp. 166–195. [Google Scholar] [CrossRef]
  12. Wang, J.; Jing, X.; Yan, Z.; Fu, Y.; Pedrycz, W.; Yang, L.T. A Survey on Trust Evaluation Based on Machine Learning. ACM Comput. Surv. 2020, 53, 1–36. [Google Scholar] [CrossRef]
  13. Ahmad, I.; Yau, K.-L.A.; Keoh, S.L. A Hybrid Reinforcement Learning-Based Trust Model for 5G Networks. In Proceedings of the 2020 IEEE Conference on Application, Information and Network Security (AINS), Kota Kinabalu, Malaysia, 17–19 November 2020; pp. 20–25. [Google Scholar] [CrossRef]
  14. Kolomvatsos, K.; Kalouda, M.; Papadopoulou, P.; Hadjieftymiades, S. Fuzzy trust modeling for pervasive computing applications. J. Data Intell. 2021, 2, 101–115. [Google Scholar] [CrossRef]
  15. Wang, E.K.; Chen, C.M.; Zhao, D.; Zhang, N.; Kumari, S. A dynamic trust model in Internet of Things. Soft Comput. 2020, 24, 5773–5782. [Google Scholar] [CrossRef]
  16. Latif, R. ConTrust: A Novel Context-Dependent Trust Management Model in Social Internet of Things. IEEE Access 2022, 10, 46526–46537. [Google Scholar] [CrossRef]
  17. Alam, S.; Zardari, S.; Shamsi, J.A. Blockchain-Based Trust and Reputation Management in SIoT. Electronics 2022, 11, 3871. [Google Scholar] [CrossRef]
  18. Fragkos, G.; Minwalla, C.; Plusquellic, J.; Tsiropoulou, E.E. Local Trust in Internet of Things Based on Contract Theory. Sensors 2022, 22, 2393. [Google Scholar] [CrossRef]
  19. Pan, Q.; Wu, J.; Li, J.; Yang, W.; Guan, Z. Blockchain and AI Empowered Trust-Information-Centric Network for Beyond 5G. IEEE Netw. 2020, 34, 38–45. [Google Scholar] [CrossRef]
  20. Muhammad, S.; Umar, M.M.; Khan, S.; Alrajeh, N.A.; Mohammed, E.A. Honesty-Based Social Technique to Enhance Cooperation in Social Internet of Things. Appl. Sci. 2023, 13, 2778. [Google Scholar] [CrossRef]
  21. Ali, S.E.; Tariq, N.; Khan, F.A.; Ashraf, M.; Abdul, W.; Saleem, K. BFT-IoMT: A Blockchain-Based Trust Mechanism to Mitigate Sybil Attack Using Fuzzy Logic in the Internet of Medical Things. Sensors 2023, 23, 4265. [Google Scholar] [CrossRef]
  22. Kouicem, D.E.; Imine, Y.; Bouabdallah, A.; Lakhlef, H. Decentralized Blockchain-Based Trust Management Protocol for the Internet of Things. IEEE Trans. Dependable Secur. Comput. 2022, 19, 1292–1306. [Google Scholar] [CrossRef]
  23. Ouechtati, H.; Nadia, B.A.; Lamjed, B.S. A fuzzy logic-based model for filtering dishonest recommendations in the Social Internet of Things. J. Ambient. Intell. Hum. Comput. 2023, 14, 6181–6200. [Google Scholar] [CrossRef]
  24. Mianji, E.M.; Muntean, G.-M.; Tal, I. Trust and Reputation Management for Data Trading in Vehicular Edge Computing: A DRL-Based Approach. In Proceedings of the 2024 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Toronto, ON, Canada, 19–21 June 2024; pp. 1–7. [Google Scholar] [CrossRef]
  25. Marche, C.; Nitti, M. Trust-Related Attacks and Their Detection: A Trust Management Model for the Social IoT. IEEE Trans. Netw. Serv. Manag. 2021, 18, 3297–3308. [Google Scholar] [CrossRef]
  26. Kumari, S.; Kumar, S.M.D.; Venugopal, K.R. Trust Management in Social Internet of Things: Challenges and Future Directions. Int. J. Com. Dig. Syst. 2023, 14, 899–920. [Google Scholar] [CrossRef]
  27. Huynh, T.D.; Jennings, N.R.; Shadbolt, N.R. An integrated trust and reputation model for open multi-agent systems. Auton. Agents Multi-Agent Syst. 2006, 13, 119–154. [Google Scholar] [CrossRef]
  28. Lygizou, Z.; Kalles, D. A Biologically Inspired Computational Trust Model based on the Perspective of the Trustee. In Proceedings of the 12th Hellenic Conference on Artificial Intelligence (SETN ′22), Corfu, Greece, 7–9 September 2022; Article No. 7. pp. 1–10. [Google Scholar] [CrossRef]
  29. Cohen, P. Empirical Methods for Artificial Intelligence; The MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  30. Lygizou, Z.; Kalles, D. Using Deep Q-Learning to Dynamically Toggle Between Push/Pull Actions in Computational Trust Mechanisms. Mach. Learn. Knowl. Extr. 2024, 6, 1413–1438. [Google Scholar] [CrossRef]
  31. Wang, J.; Yan, Z.; Wang, H.; Li, T.; Pedrycz, W. A Survey on Trust Models in Heterogeneous Networks. IEEE Commun. Surv. Tutor. 2022, 24, 2127–2162. [Google Scholar] [CrossRef]
  32. Fotia, L.; Delicato, F.; Fortino, G. Trust in Edge-Based Internet of Things Architectures: State of the Art and Research Challenges. ACM Comput. Surv. 2023, 55, 1–34. [Google Scholar] [CrossRef]
  33. Wei, L.; Wu, J.; Long, C. Enhancing Trust Management via Blockchain in Social Internet of Things. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 159–164. [Google Scholar] [CrossRef]
  34. Amiri-Zarandi, M.; Dara, R.A. Blockchain-based Trust Management in Social Internet of Things. In Proceedings of the 2020 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Calgary, AB, Canada, 17–22 August 2020; pp. 49–54. [Google Scholar] [CrossRef]
  35. Hankare, P.; Babar, S.; Mahalle, P. Trust Management Approach for Detection of Malicious Devices in SIoT. Teh. Glas. 2021, 15, 43–50. [Google Scholar] [CrossRef]
  36. Talbi, S.; Bouabdallah, A. Interest-based trust management scheme for social internet of things. J. Ambient. Intell. Hum. Comput. 2020, 11, 1129–1140. [Google Scholar] [CrossRef]
  37. Sagar, S.; Mahmood, A.; Sheng, Q.Z.; Zhang, W.E. Trust Computational Heuristic for Social Internet of Things: A Machine Learning-based Approach. In Proceedings of the 2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  38. Sagar, S.; Mahmood, A.; Sheng, M.; Zaib, M.; Zhang, W. Towards a Machine Learning-driven Trust Evaluation Model for Social Internet of Things: A Time-aware Approach. In Proceedings of the 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous ′20), Darmstadt, Germany, 7–9 December 2020; pp. 283–290. [Google Scholar] [CrossRef]
  39. Aalibagi, S.; Mahyar, H.; Movaghar, A.; Stanley, H.E. A Matrix Factorization Model for Hellinger-Based Trust Management in Social Internet of Things. IEEE Trans. Dependable Secur. Comput. 2022, 19, 2274–2285. [Google Scholar] [CrossRef]
  40. Ullah, F.; Salam, A.; Amin, F.; Khan, I.A.; Ahmed, J.; Alam Zaib, S.; Choi, G.S. Deep Trust: A Novel Framework for Dynamic Trust and Reputation Management in the Internet of Things (IoT)-Based Networks. IEEE Access 2024, 12, 87407–87419. [Google Scholar] [CrossRef]
  41. Alemneh, E.; Senouci, S.-M.; Brunet, P.; Tegegne, T. A Two-Way Trust Management System for Fog Computing. Future Gener. Comput. Syst. 2020, 106, 206–220. [Google Scholar] [CrossRef]
  42. Tu, Z.; Zhou, H.; Li, K.; Song, H.; Yang, Y. A Blockchain-Based Trust and Reputation Model with Dynamic Evaluation Mechanism for IoT. Comput. Netw. 2022, 218, 109404. [Google Scholar] [CrossRef]
