Article

A Biologically Inspired Trust Model for Open Multi-Agent Systems That Is Resilient to Rapid Performance Fluctuations

School of Science & Technology, Hellenic Open University, 26 335 Patra, Greece
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(11), 6125; https://doi.org/10.3390/app15116125
Submission received: 17 April 2025 / Revised: 23 May 2025 / Accepted: 26 May 2025 / Published: 29 May 2025

Abstract

Trust management provides an alternative solution for securing open, dynamic, and distributed multi-agent systems, where conventional cryptographic methods prove to be impractical. However, existing trust models face challenges such as agent mobility, which causes agents to lose accumulated trust when moving across networks; changing behaviors, where previously reliable agents may degrade over time; and the cold start problem, which hinders the evaluation of newly introduced agents due to a lack of prior data. To address these issues, we introduced a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy. Despite these advantages, prior evaluations revealed the limitations of our model in adapting to provider population changes and continuous performance fluctuations. This study proposes a novel algorithm, incorporating a self-classification mechanism for providers to detect performance drops that are potentially harmful for service consumers. The simulation results demonstrate that the new algorithm outperforms its original version and FIRE, a well-known trust and reputation model, particularly in handling dynamic trustee behavior. While FIRE remains competitive under extreme environmental changes, the proposed algorithm demonstrates greater adaptability across various conditions. In contrast to existing trust modeling research, this study conducts a comprehensive evaluation of our model using widely recognized trust model criteria, assessing its resilience against common trust-related attacks while identifying strengths, weaknesses, and potential countermeasures. Finally, several key directions for future research are proposed.

1. Introduction

Conventional cryptographic methods, such as the use of certificates, digital signatures, and Public Key Infrastructure (PKI), depend on a static, centralized authority for certificate verification. However, this approach may be impractical or insecure in open, dynamic, and highly distributed networks [1]. Additionally, entities in such networks may have limited resources, making it difficult for them to handle the computational demands of cryptographic protocols [2]. To overcome these challenges, trust management provides an alternative approach, enabling entities to gather reliable and accurate information from their surrounding network.
Various methods for evaluating trust and reputation have been developed for real-world distributed networks that can be considered multi-agent systems (MASs), such as peer-to-peer (P2P) networks, online marketplaces, pervasive computing, Smart Grids, the Internet of Things (IoT), and many more. However, existing trust management approaches still face major challenges. In open MASs, agents frequently join and leave the system, making it difficult for most trust models to adapt [3]. Additionally, assigning trust values in the absence of evidence and detecting dynamic behavioral changes remain unresolved issues [4]. Trust and reputation models, which rely on data-driven methods, often struggle when assessing new agents with no prior interactions. This issue, known as the cold start problem, commonly arises in dynamic groups where agents may have interacted with previous neighbors but lack connections to newly introduced agents. Moreover, in dynamic environments, agents’ behaviors can change rapidly, necessitating that consumer agents recognize these changes to select reliable providers. The challenge in trust assessment lies in the fact that behavioral changes occur at varying speeds and times, making conventional methods, such as forgetting old data at a fixed rate, ineffective [4].
To address these challenges, we previously introduced a novel trust model for open MASs, inspired by synaptic plasticity, the process that enables neurons in the human brain to form structured groups known as assemblies, which is why we named our model “Create Assemblies (CA)”. CA’s distinguishing characteristic is that, unlike traditional trust models where the trustor selects a trustee, CA allows the trustee to determine whether it has the necessary skills to complete a given task. To briefly describe the CA approach, we note that activities are initiated by a service requester, the trustor, which broadcasts a request message that includes task details such as its category and specific requirements. Upon receiving the request, potential trustees (i.e., service providers within the service requester’s vicinity) store the message locally and establish a connection with the trustor if one does not already exist. Each connection is assigned a weight, a value between 0 and 1, representing the trust level or the probability of the trustee successfully completing the task. In the CA model, trust update is event-driven. After the task’s completion, the trustor provides performance feedback, which the trustee uses to adjust the connection weight—increasing it for successful execution and decreasing it otherwise.
The CA approach offers several key advantages in open and highly dynamic MASs due to its unique design, where the trustor does not select a trustee, and all trust-related data concerning the trustee are stored locally within the trustee. First, since each trustee retains its own trust information, these data can be readily accessed and utilized across different applications or systems the agent joins. This provides a major advantage in handling mobility, an ongoing challenge in MASs [5]. By allowing a new service provider to estimate its performance for a given task based on past executions in other systems, the CA model effectively addresses the cold start problem. Second, in conventional trust models, trustors must collect and exchange extensive trust information (e.g., recommendations) before making a selection, leading to increased communication overhead. This is particularly problematic in resource-constrained networks, where communication is more expensive than computation [2]. Sato and Sugawara explained in [6] that task allocation in such settings is a complex combinatorial problem, requiring multiple message exchanges. In contrast, the CA model reduces this burden by limiting communication to request messages from service requesters and feedback messages containing performance ratings, significantly minimizing communication time and overhead. Third, since agents in the CA model do not share trust information, this approach is inherently more resistant to disinformation tactics, such as false or dishonest recommendations, which are common in other trust-based approaches. However, as we will discuss later, there are still some potential vulnerabilities that need to be addressed. Finally, privacy concerns often discourage agents from sharing trust-related data in traditional models. Since CA does not rely on recommendations, agents do not have to disclose how they evaluate others’ services, thereby preserving their privacy.
In our previous work [7], we proposed a CA algorithm as a solution for managing the constant entry and exit of agents in open MASs, as well as their dynamic behaviors. Through an extensive comparison between CA and FIRE, a well-known trust model for open MASs, we found the following:
  • CA demonstrated stable performance under various environmental changes;
  • CA’s main strength was its resilience to fluctuations in the consumer population (unlike FIRE, CA does not depend on consumers for witness reputation, making it more effective in such scenarios);
  • FIRE was more resilient to changes in the provider population (since CA relies on providers’ self-assessment of their capabilities, newly introduced providers—who lack prior experience—must learn from scratch);
  • Among all environmental changes tested, the most detrimental to both models was frequent shifts in provider behavior, particularly when providers switched performance profiles, where FIRE exhibited greater resilience than CA.
Motivated by these findings, this work begins with a semi-formal analysis aimed at identifying potential modifications to the CA algorithm that could enhance its performance when dealing with dynamic trustees’ profiles. Based on this analysis, we introduce an improved version of the CA algorithm with a key modification: after providing a service, a service provider re-evaluates its performance, and if it falls below a predefined threshold, it classifies itself as a bad provider. This modification ensures that each provider maintains an up-to-date evaluation of its own performance, allowing for the immediate detection of performance drops that could harm consumers. Next, we conduct a series of simulation experiments to compare the performance of the updated CA algorithm against the previous version. The results demonstrate that the new version outperforms the original CA algorithm under all tested environmental conditions. In particular, when trustees change performance profiles, the improved CA algorithm surpasses both FIRE and its earlier version. We then present a comprehensive evaluation of the CA model based on thirteen widely recognized trust model criteria from the literature. While the model satisfies essential criteria such as decentralization, subjectivity, context awareness, dynamicity, availability, integrity, and transparency, further research and enhancements are identified. In this work, we make the assumption of willing and honest agents, but we acknowledge that our solution should be made generalizable to more realistic settings, where dishonesty and unwillingness are common conditions among agents’ societies. To this end, we examine CA’s resilience against common trust-related attacks, highlighting its strengths and limitations while proposing potential countermeasures. To our knowledge, addressing both openness and dishonesty together is novel.
This thorough assessment not only underscores the model’s strengths but also reflects a commitment to ongoing improvement, establishing it as a promising foundation for future trust management solutions in critical applications, such as computing resources, energy sharing, and environmental sensing—particularly in highly dynamic and distributed environments where security and privacy are paramount. The rest of this paper is organized as follows. Section 2 reviews several trust management protocols related to our work. Section 3 provides background information concerning the CA and FIRE models, as well as the original CA algorithm. In Section 4, we conduct a semi-formal analysis of the CA algorithm’s behavior, leading to the introduction of a new version that avoids unwarranted task executions. Section 5 outlines the experimental setup and the methodology for the simulation experiments, with the results presented in Section 6. Section 7 offers a comprehensive evaluation of the CA model, based on widely accepted evaluation criteria, while Section 8 examines its robustness against common trust-related attacks. Finally, Section 9 concludes our work and suggests potential directions for future work.

2. Related Work

In this section, we review various trust management protocols relevant to our work. First, we identify IoT service crowdsourcing as a suitable domain for implementing trustee-oriented models, like CA. We then explore different methods proposed to address the cold start problem and agent mobility, highlighting how our approach tackles these challenges. Additionally, we review existing solutions for handling agents’ dynamically changing behaviors, discuss their limitations, and explain how the CA model effectively addresses this issue. Next, we review two representative trust management protocols which, like CA, assign service providers the responsibility of assessing their ability to fulfill a requested task and deciding whether to accept or reject service requests, comparing their similarities and differences with our approach. Lastly, we emphasize that when service requesters lose control over selecting service providers, a mechanism is required to encourage honest service provision. To strengthen the CA model’s robustness against trust-related attacks, we review several trust management protocols with promising mechanisms that could be integrated into our approach.

2.1. Trust Models for IoT Service Crowdsourcing

The unique design of the CA model makes it well suited for IoT service crowdsourcing, where IoT devices offer services to nearby devices, known as crowdsourced IoT services. These services can include computing resources, energy sharing, and environmental sensing. For instance, in energy-sharing services, IoT devices acting as service providers can wirelessly transfer energy to nearby devices with low battery levels, which act as service consumers. Two key characteristics of IoT platforms necessitate specialized trust management frameworks, like the CA model; these are device volatility (IoT devices have a short lifespan, frequently joining and leaving the network) and context-aware trust (trustworthiness in IoT environments is influenced by various factors, such as the owner’s reputation, the operating system, and the device manufacturer).
In [8], Samuel et al. presented a Blockchain-Based Trust Management System (BTMS) for energy trading, integrating blockchain, trust management, and MASs to enhance security and reliability. The system is structured into four layers: a Blockchain Layer, which securely stores and encrypts trust values and feedback; Aggregator Layer, where coalitions of agents are managed by consensus-elected aggregators that assess trust and update blockchain records; Prosumer Layer, consisting of energy-producing and -consuming agents that trade energy while exchanging trust data with aggregators; and a Physical Layer, which includes distributed energy resources such as solar panels, wind turbines, and energy storage. The trust evaluation process incorporates direct and indirect assessments using a weighted average method, while multi-source feedback from aggregators is stored on the blockchain. Trust credibility is determined based on trust distortion, consistency, and reliability, ensuring accurate evaluation and identifying dishonest agents. Additionally, each agent is assigned a token, known as a Balanced Trust Incentive (BTI) unit, which is used to interact with other agents. BTI increases with honest behavior and decreases when dishonesty is detected, fostering a transparent and trustworthy energy trading environment.
Khalid et al. [9] build upon the Blockchain-Based Trust Management System (BTMS) introduced by Samuel et al. [8], addressing unresolved challenges related to agent-to-agent cooperation and privacy in trust management within MASs. Khalid et al. identify that while the original BTMS effectively integrates blockchain with trust management to enhance security and reliability in energy trading, it lacks mechanisms to (i) foster agent-to-agent cooperation, as in dynamic and decentralized environments, agents may behave selfishly or maliciously, undermining collective system performance; and (ii) ensure privacy preservation, as agents are concerned about their privacy and security, making it inappropriate for them to reveal trust information to others. To address these issues, Khalid et al. leverage game theory, treating trust evaluation as a repeated game to encourage effective agent cooperation. In this framework, agents that refuse to cooperate face penalties in subsequent rounds, as others will also decline cooperation. A Tit-3-For-Tat (T3FT) strategy is employed, immediately penalizing noncooperative agents, with the punishment duration depending on their level of cooperation. Defaulter agents may regain their trust after cooperating for three consecutive plays. Afterwards, their cheating behavior is forgiven. To ensure secure and verifiable trust evaluation, a Publicly Verifiable Secret Sharing (PVSS) mechanism is implemented. It distributes sensitive information, such as cryptographic keys, among three entities: dealers (blockchain miners) who allocate shares, participants who store shares, and a combiner (aggregator) who reconstructs the secret based on a defined threshold. PVSS enhances security by enabling the public verification of share validity, deterring dishonest actions. Additionally, a Proof-of-Cooperation (PoC) consensus protocol is introduced within a consortium blockchain network to govern miner selection and block validation while fostering agent cooperation. However, this study focuses only on two trust-related attacks: bad-mouthing and on–off.
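To make the repeated-game idea more concrete, the following Python fragment sketches a Tit-3-For-Tat-style decision rule as described above. It is an illustration of the strategy, not a re-implementation of [9]; the class and attribute names are ours, and the simplification that forgiveness always follows exactly three consecutive cooperative plays is an assumption.

```python
class Tit3ForTat:
    """Illustrative T3FT-style strategy: punish defectors immediately and
    forgive them after three consecutive cooperative plays (sketch only)."""

    def __init__(self):
        self.punishing = {}     # agent_id -> True while the agent is being punished
        self.coop_streak = {}   # agent_id -> consecutive cooperative plays observed

    def observe(self, agent_id, cooperated):
        if cooperated:
            self.coop_streak[agent_id] = self.coop_streak.get(agent_id, 0) + 1
            if self.coop_streak[agent_id] >= 3:
                self.punishing[agent_id] = False   # cheating behavior is forgiven
        else:
            self.coop_streak[agent_id] = 0
            self.punishing[agent_id] = True        # start punishing immediately

    def will_cooperate_with(self, agent_id):
        return not self.punishing.get(agent_id, False)
```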
To assess context-dependent trust in dynamic IoT services, Bahutair et al. [10] propose a perspective-based trust management framework, which evaluates trust through three key perspectives: the Owner perspective (determined by social relationships and locality), the Device perspective (based on device reputation, which includes attributes like the manufacturer and operating system), and the Service perspective (focuses on service reliability, measuring performance and its impact on trust). These attributes are processed using a machine learning-based algorithm to build a trust model for crowdsourced IoT services. The authors highlight that in some IoT crowdsourcing applications—especially energy-sharing services—service reliability is a more critical factor than privacy. Additionally, social network-based trust models alone may be insufficient for evaluating trust between IoT service providers and consumers.
The CA model primarily relies on service reliability to assess the trustworthiness of service providers; thus, it can be widely used in IoT service crowdsourcing applications.

2.2. The Cold Start Problem

Trust and reputation models typically perform poorly when dealing with newcomer service providers. This challenge, known as the cold start problem, arises because service requesters have difficulty assessing the trustworthiness of newcomer service providers with whom they have no previous interactions but are now connected to. Stereotyping, first introduced by Meyerson et al. in [11], operates on the idea that agents with similar observable characteristics are likely to behave similarly. This approach helps improve trust assessments for newcomers or agents with no prior interaction history. More recently, the cold start problem has been addressed using machine learning, where trust models predict a newcomer’s trustworthiness [12]. Additionally, roles within virtual organizations and social relationships among entities can serve as sources of trust information [5], particularly when direct or indirect trust data are unavailable. Another approach involves assigning an initial trust value until the agent can gather direct experience. This value may represent the average performance or be inferred from interactions with other agents [4].
The CA algorithm applies this principle to mitigate the cold start problem. When a service provider receives a request to complete a specific task for a particular service requester for the first time, it establishes a connection and assigns an initial weight based on the average of existing weights for that task (from interactions with other service requesters). In essence, the CA algorithm allows the trustee to leverage prior experience from performing the same task for other trustors. If no prior knowledge is available, the algorithm sets the initial weight to a predetermined value, representing a baseline performance level.
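As an illustration of this initialization rule, the sketch below assumes that a trustee keys its connections by (requester, task) pairs and uses 0.5 as the baseline value mentioned above; the class and method names are illustrative, not part of the CA specification.

```python
DEFAULT_WEIGHT = 0.5  # baseline performance level when no prior knowledge exists

class TrusteeConnections:
    def __init__(self):
        # (requester_id, task) -> connection weight in [0, 1]
        self.weights = {}

    def initial_weight(self, requester_id, task):
        """First request from requester_id for this task: use the average weight
        of existing connections for the same task with other requesters,
        otherwise fall back to the default baseline."""
        same_task = [w for (r, t), w in self.weights.items()
                     if t == task and r != requester_id]
        return sum(same_task) / len(same_task) if same_task else DEFAULT_WEIGHT

    def ensure_connection(self, requester_id, task):
        if (requester_id, task) not in self.weights:
            self.weights[(requester_id, task)] = self.initial_weight(requester_id, task)
        return self.weights[(requester_id, task)]
```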

2.3. Agent Mobility

A major concern with trust and reputation systems is their lack of interoperability across applications, known as the agent mobility problem. When entities move between different applications, their trust and reputation data do not follow them; instead, this information remains isolated, requiring entities to rebuild their reputation in each new application. Enabling the transfer of trust and reputation data across applications remains an open challenge.
To address this issue, Jelenc [5] proposed a general framework to enable the exchange of trust and reputation information. This framework establishes a set of messages and a protocol that allows trust and reputation systems to request ratings, provide responses, and communicate errors. Additionally, to streamline message creation, they developed a grammar for generating queries from strings and introduced a procedure for parsing these query messages.
In [2], Jabeen et al. proposed a hybrid trust management framework which enables trust evaluation between entities using both centralized and distributed approaches, specifically addressing agent mobility. The network is divided into multiple sub-networks to improve management efficiency and scalability. Each sub-network contains a resource-rich fog node, known as the kingpin node, responsible for sharing service provider reputation scores upon request. For global reputation assessment, kingpin nodes aggregate reputation scores and forward them to the cloud. When a service provider moves to a different sub-network, it can present the ID of the kingpin from its previous sub-network. The new kingpin can then retrieve the service provider’s reputation from the cloud using this ID.
In the CA approach, however, a service provider retains all trust-related information concerning itself in the form of stored connections. This allows the service provider to directly access its own trust data and manage mobility by calculating its trustworthiness as a global reputation.
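As a simple illustration of this point (the exact aggregation shown here is an assumption, not taken from [7,28]), a provider could summarize its locally stored connection weights into a single portable score as follows:

```python
def global_reputation(weights, service=None):
    """Summarize a provider's locally stored connection weights into one score.
    `weights` maps (requester_id, task) -> weight; if `service` is given, only
    connections for that task are considered. The averaging rule is illustrative."""
    values = [w for (_, task), w in weights.items()
              if service is None or task == service]
    return sum(values) / len(values) if values else None
```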

2.4. Dynamic Changes in Agents’ Behaviors

In dynamic environments, agents’ behavior can change rapidly, making it essential for consumer agents to detect these changes when using trust and reputation models to select reliable provider agents. Behavioral changes may stem from malicious intent, but they can also result from resource limitations, such as reduced battery power [2]. In MASs, dynamic behavior can be influenced by various factors, including seasonal variations, device malfunctions, malicious actions, network congestion, and more [4]. Some changes occur randomly and unpredictably, while others follow cyclical patterns. Additionally, group dynamics can contribute to behavioral changes, as relationships between agents evolve due to changing motivations or external influences, such as environmental changes.
An example of agents exhibiting continuously changing behavior is found in Fifth-Generation (5G) networks, where forwarding entities are responsible for relaying packets along a route. These entities may drop packets, leading to a reduced packet delivery rate. Their behavior is inherently dynamic, as they can switch between malicious and legitimate states at any time [13]. Reinforcement Learning (RL) allows legitimate forwarding entities to learn about the behavior of other potential forwarders. However, malicious entities can also exploit RL to strategically drop packets while avoiding detection, thereby manipulating the learning environment of legitimate entities. To address this issue, Ahmad et al. propose in [13] a hybrid trust model for secure routing in 5G networks. In this model, Q-learning enables distributed legitimate entities to evaluate the trustworthiness of neighboring forwarding entities. Additionally, using network-wide trust data and RL, a central authority can decide whether legitimate entities should continue or halt their learning process, depending on the proportion of legitimate to malicious entities in the network. However, the effectiveness of this model has not been experimentally validated.
To handle dynamic behavior, current trust and reputation models continue to utilize methods like sliding windows and forgetting factors [4]. A sliding window of size n preserves only the n most recent experiences, based on the assumption that these interactions best reflect an agent’s current behavior, while older records are discarded as they become less relevant. A forgetting factor gradually decreases the influence of past interactions over time, retaining all instances but assigning greater weight to more recent experiences. The trust model for pervasive computing applications proposed by Kolomvatsos et al. in [14] is an instance of the forgetting factor approach, as it progressively reduces the influence of past interactions through referral age in social trust calculations. It integrates both personal experience and reputation using fuzzy logic (FL) principles. The model consists of three subsystems, with two of the subsystems responsible for evaluating an entity’s trust level based on individual experiences and referrals. To address the dynamic nature of trust, the calculation of social trust depends on two key factors: the referral age and the referrer reputation. Each referral is timestamped, and its influence on social trust decreases over time. Once it surpasses a predefined expiration point, it is no longer considered. The credibility of the referrer determines how much weight their referral carries in the final trust calculation. The system employs a central authority to manage storage and access to referrals. However, this introduces a single point of failure, which can be a limitation. After determining social and individual trust, each intelligent agent uses a weighted sum method to compute the final trust value. The third FL subsystem is responsible for determining the weight assigned to social trust, using as inputs the total number of social referrals and the total number of past individual interactions.
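For reference, the two generic mechanisms above can be expressed in a few lines. This is a textbook-style sketch rather than the specific formulation of [14], and the parameter names and the 0/1 outcome encoding are illustrative.

```python
from collections import deque

def sliding_window_trust(outcomes, n):
    """Trust as the mean of the n most recent outcomes (1 = success, 0 = failure);
    older records are simply discarded."""
    recent = deque(outcomes, maxlen=n)   # keeps only the last n experiences
    return sum(recent) / len(recent) if recent else None

def forgetting_factor_trust(outcomes, lam=0.9):
    """Trust as a weighted mean in which an outcome of age k is discounted by lam**k,
    so all instances are retained but recent experiences carry more weight."""
    if not outcomes:
        return None
    weights = [lam ** age for age in range(len(outcomes) - 1, -1, -1)]
    return sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)
```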
Assessing trust in MASs is particularly challenging due to the dynamic nature of agent behavior, which can change at varying speeds and times. This variability makes traditional approaches, such as forgetting old data at a constant rate and sliding windows, ineffective [4]. To address this issue, Player and Griffiths in [4] propose RaPTaR, a framework that sits on top of existing trust and reputation models to detect and adapt to behavior changes within agent groups. RaPTaR enhances trust algorithms by supplying past experience data that have been statistically evaluated to reflect an agent’s current behavior. The system identifies behavioral shifts by analyzing the outcome patterns of agent groups using the Kolmogorov–Smirnov (K-S) statistical test. Additionally, it records transitions between behavior changes to recognize recurring patterns, which can be leveraged for future predictions. By exploiting repetitive behavior, RaPTaR improves trust assessment in dynamic environments. The experimental results indicate that while RaPTaR effectively detects and responds to random behavioral changes, its accuracy remains limited and can be improved.
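The change-detection step underlying RaPTaR can be illustrated with the two-sample Kolmogorov–Smirnov test available in SciPy. The snippet below shows only that statistical step, under our own significance-level assumption; it is not a re-implementation of RaPTaR.

```python
from scipy.stats import ks_2samp

def behavior_changed(old_observations, new_observations, alpha=0.05):
    """Flag a behavioral shift when the two-sample K-S test rejects the hypothesis
    that the old and new outcome samples come from the same distribution."""
    if len(old_observations) < 2 or len(new_observations) < 2:
        return False  # not enough evidence to decide either way
    statistic, p_value = ks_2samp(old_observations, new_observations)
    return p_value < alpha
```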
Reputation-based trust models are widely used in distributed networks, but they encounter two major challenges: the transmitted reputation may not accurately reflect an entity’s current trustworthiness, and over-reliance on reputation makes the system vulnerable to collusion, where malicious entities conspire to fabricate trust values. One possible solution is trust prediction, which estimates an entity’s current trustworthiness based on past interactions, behavioral history, and other objective factors. Traditional time-based prediction methods fall into two categories. One is the complete arithmetic mean, which computes the average of all past data to predict the next trust value. The other method is mean shift, which prioritizes recent data, ignoring long-term history. A more refined approach is exponential smoothing, which balances both methods by incorporating past data while assigning diminishing weight as time elapses. However, pure exponential smoothing struggles with highly volatile data, especially in cases where a service provider is hijacked by adversaries, causing an abrupt shift from high trust to deception. In such situations, exponential smoothing alone can lead to significant deviations in trust estimation. To address this issue, Wang et al. in [15] propose a dynamic trust model that integrates direct and indirect trust computation with trust prediction. Their approach first applies exponential smoothing to compute a trend curve and then utilizes a Markov chain model to adjust for fluctuations, improving prediction accuracy. This is a refined method for handling abrupt behavioral shifts, such as when an entity transitions from trustworthy to malicious.
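For readers unfamiliar with the prediction methods contrasted above, the following sketch shows plain exponential smoothing, the building block that [15] starts from; the Markov-chain correction of their model is not reproduced here, and the smoothing parameter value is illustrative.

```python
def exponential_smoothing_forecast(history, gamma=0.3):
    """Predict the next trust value from past observations (oldest first).
    gamma near 1 behaves like 'mean shift' (recent data dominate);
    gamma near 0 behaves like the complete arithmetic mean (long-run average)."""
    if not history:
        return None
    forecast = history[0]
    for observed in history[1:]:
        forecast = gamma * observed + (1 - gamma) * forecast
    return forecast
```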
In the proposed CA algorithm, after delivering a service, a service provider reassesses its performance. If the performance falls below a predefined threshold, the service provider categorizes itself as a bad provider. This ensures that each provider continuously maintains an updated assessment of its capabilities, enabling the prompt detection of performance declines that could negatively impact consumers.
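A minimal sketch of this self-classification step is given below. The rule itself (classify as a bad provider when performance drops below the threshold) follows the description above, while the attribute names and the way the threshold is supplied are assumptions made for illustration.

```python
def reassess_after_service(provider, task, observed_performance, perf_threshold):
    """After delivering a service, the provider re-evaluates itself for this task.
    If performance falls below perf_threshold, it classifies itself as a bad
    provider for that task; otherwise any earlier bad classification is lifted."""
    provider.last_performance[task] = observed_performance   # assumed dict attribute
    if observed_performance < perf_threshold:
        provider.bad_provider_for.add(task)                  # assumed set attribute
    else:
        provider.bad_provider_for.discard(task)              # performance recovered
    return task not in provider.bad_provider_for
```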

2.5. Assessing Trust from the Trustee’s Perspective

Similarly to the CA approach, several studies [16,17] place the responsibility on service providers to determine whether they can complete a requested task and assess the potential benefits before deciding whether to accept or decline service requests. Latif in [16] introduces ConTrust, a context-aware trust model for the Social Internet of Things (SIoT), designed to help service consumers select reliable providers for task assignments. In this approach, the requester broadcasts the task, specifying its requirements, and providers evaluate their ability to fulfill the request along with the potential benefits. If a provider is willing to take on the task, it responds to the requester, who then assesses the trustworthiness of the candidates and selects a specific provider. Trustworthiness in ConTrust is measured using a weighted sum of three factors: the service provider’s capability, commitment, and job satisfaction feedback. The CA model shares similarities with ConTrust in that it is also context-aware, as trust evaluation considers the task type and requirements within the specific environment where the service is provided. Additionally, the consumer broadcasts a request, and each provider evaluates its own capability to complete the task. However, the CA model differs in a fundamental way: service consumers do not select providers. Instead, trust assessment is handled by the provider itself, based on consumer feedback.
A key issue with trust and reputation systems that rely solely on distributed infrastructures is that each entity only retains information about a small fraction of the entire network. As a result, knowledge about the number of available service providers is limited to local information, reducing the likelihood of selecting highly trustworthy providers [2]. This limitation does not apply to the CA approach. In CA, the request message is broadcasted to all service providers within the consumer agent’s vicinity. However, unreliable providers, who are aware of their own limitations, will opt out of executing tasks they are not capable of handling. Consequently, this self-selection process increases the probability of obtaining service from highly trustworthy service providers.
When a service requester no longer has control over selecting service providers, and instead, the service providers themselves decide whether to offer a service, a mechanism is needed to incentivize honest service provision. Additionally, most trust and reputation models assume that service requesters act honestly, which is often unrealistic in agent-based systems. To address these issues, Alam et al. proposed in [17] a blockchain-based, two-way reputation system for the SIoT, incorporating a penalty mechanism for both dishonest service providers and service requesters. This approach evaluates a service provider’s local trust, global trust, and reputation by considering both social trust and Quality of Service (QoS) factors, such as availability, accuracy, Cruciality, Responsiveness, and cooperation. After receiving a service, service requesters must provide feedback by rating the service providers. However, relying on a single cumulative rating has drawbacks: (i) it does not clarify the evaluation criteria used for assessing service providers; (ii) dishonest service requesters can intentionally lower a service provider’s rating, even if the service is good; and (iii) inexperienced service requesters may struggle to provide meaningful feedback. To overcome these challenges, the authors propose a “two-stage parameterized feedback” system. Tlocal (local trust) is calculated in two phases: pre-service avail and post-service avail. Comparing Tlocal and Tglobal (global trust) helps detect suspicious or dishonest service requesters, who are then classified into three categories: suspicious, temporarily banned (can only request certain services), and permanently banned (prohibited from requesting any service). For service providers, a penalty mechanism using a fee charge system imposes monetary losses on dishonest providers. Service providers are categorized into three status lists based on their reputation: the white list (highest service fees and selection priority), gray list (moderate reputation, lower fees), and black list (least trustworthy, low selection probability). Each feedback update adjusts a service provider’s trust and reputation value, and service providers can accept or decline service requests. The approach also allows service requesters to choose service providers based on cost preference—for example, a service requester aiming to reduce expenses may prefer a service provider from the gray list rather than the white list. The experimental results show that this method is resilient to various trust-related attacks, including on–off, Discriminatory, opportunistic service, Selective Behavior, bad-mouthing, and ballot stuffing attacks. However, this study does not address the Whitewashing Attack. The proposed mechanisms offer valuable insights that could serve as a foundation for adapting the CA approach to more realistic environments where dishonest behavior is prevalent.

2.6. Mechanisms for Promoting Honest Behavior

Besides [17], several other studies [18,19,20,21,22,23,24] propose mechanisms aimed at promoting honest behavior among agents during their interactions. These mechanisms serve as a basis for formulating a customized approach within the CA framework to counter various trust-related attacks, which we explore further in Section 8.
A key requirement in large, dynamic, and heterogeneous networks—where node participation is constantly changing—is a mechanism for establishing a secure, authenticated channel between any two participating nodes to exchange sensitive information. Fragkos et al. present in [18] a Bayesian trust model that probabilistically estimates node trust scores in an IoT environment, even in the absence of complete information. The authors propose a contract-theoretic incentive mechanism to build trust between devices in an ad hoc manner by leveraging locally adjacent nodes. Each IoT node independently stores and advertises its absolute trust score. The proposed effort–reward model motivates selected nodes to accurately report their trust scores and actively contribute to the authentication process, with rewards aligned to their actual trust levels. Unlike traditional blockchain-based solutions, this approach does not depend on cryptographic primitives or a central authority, and the final consensus is localized rather than being a universally shared state across all nodes in the system.
To enhance content trust, Pan et al. introduced in [19] TrustCoin, a smart trust evaluation framework for Information-Centric Networks (ICNs) in Beyond Fifth-Generation (B5G) networks, leveraging consortium blockchain technology. In this scheme, each TrustCoin user—whether a publisher/producer or a subscriber/consumer—is assigned credit quotas (i.e., coins) that reflect its reputation and serve as a measure of trust. A higher coin balance indicates greater credibility. Users must first register to obtain a legal identity and initial credit coins before they can request to publish content or report potentially malicious data. When content is published, a checking server verifies the publisher’s credit balance on the blockchain to ensure it meets a predefined threshold. For content sharing, edge nodes authenticate user identities, while Deep Reinforcement Learning (DRL) is employed to assess content credibility. TrustCoin incorporates an incentive mechanism that encourages users to proactively publish trustworthy content. The system updates the credit coins of publishers and subscribers based on the content credibility determined by DRL-driven evaluations. Higher credibility results in rewards for publishers, reinforcing trustworthy behavior, while subscribers may face penalties. Conversely, if a publisher shares low-credibility content, they incur a penalty, while the subscriber is rewarded for correctly identifying and reporting it.
In IoT networks, connected nodes may exhibit reluctance to forward packets in order to conserve resources such as battery life, energy, bandwidth, and memory. To address this issue, Muhammad et al. propose in [20] a trust-based approach called HBST, which fosters cooperation by forming a credible community based on honesty. Unlike conventional methods that permanently remove selfish nodes from the network, HBST instead isolates them in a separate domain, preventing interactions with honest nodes while offering them an opportunity to improve their behavior and re-enter the network. The proposed model consists of two phases. In the first phase, credible nodes—those with a sufficient community reputation—are selected from the main network to form a “credible community”. A node’s reputation is determined by its honesty level, assessed using metrics such as interaction frequency, credibility, and community engagement. In the second phase, a social selection process appoints two leaders from this credible community based on factors such as seniority, cooperative behavior, and energy levels. The node with the highest reputation weight is designated as the Selected Community Head (SCH), while the second highest becomes the Selected Monitoring Head (SMH). These leaders play key roles in mitigating selfish behavior: the SCH is responsible for encouraging selfish nodes to participate and can impose penalties on reported offenders, while the SMH monitors node behavior and flags selfish activity to the SCH. Selfish nodes are initially isolated in a separate domain, barring communication with the rest of the network. If a selfish node repeatedly fails to improve, it may face expulsion—a strict penalty. However, unlike permanent removal, HBST allows selfish nodes to rejoin the main community by improving their honesty level beyond a predefined threshold. In cases of persistent selfish behavior, a warning message is broadcasted to neighboring communities, instructing them to cease communication with the offending node as a severe consequence.
In the Internet of Medical Things (IoMT), numerous smart health monitoring devices communicate and transmit data for analysis and real-time decision-making. Secure communication among these devices is essential to ensure timely and accurate patient data processing. However, establishing reliable communication in large-scale IoMT networks is both time-intensive and energy-demanding. To address this challenge, Ali et al. propose in [21] BFT-IoMT, a distributed, energy-efficient, fuzzy logic-based, blockchain-enabled, and fog-assisted trust management framework. This framework employs a cluster-based trust evaluation mechanism to detect and isolate Sybil nodes. The process begins with a topology lookup module, which identifies network topology whenever an IoMT device joins or leaves the network. Next, clusters and Cluster Heads (CHs) are formed, with CHs registered on the blockchain. Each IoMT node’s trust-related parameters are collected and analyzed by a trust calculator module. The computed trust scores are then stored on the blockchain. If a node’s trust score falls below a predefined threshold, it is flagged as a malicious (Sybil) node and isolated from the network. The final decision on whether a node is malicious or benign is then broadcasted across the IoMT system. The fog-assisted trust framework is designed to enhance network throughput while reducing latency, energy consumption, and communication overhead. The use of fuzzy logic—which efficiently handles ambiguous and uncertain data, common in healthcare applications—improves the computational power and effectiveness of the decentralized trust management system. However, the proposed protocol assumes that CHs are inherently trusted nodes, which is an impractical assumption in most IoT environments.
Kouicem et al. in [22] aim to develop a fully distributed and scalable trust management protocol, enabling IoT devices to assess the trustworthiness of any service provider on the Internet without relying on pre-trusted entities. They introduce a decentralized trust management protocol for the Internet of Everything (IoE), leveraging blockchain technology and the fog computing paradigm. In this system, powerful fog nodes maintain the blockchain, relieving lightweight IoT devices of the burden of trust data storage and intensive computations. This approach optimizes resource usage, bandwidth efficiency, and scalability. By utilizing blockchain, the protocol provides a global perspective on the trustworthiness of each service provider within the network. A key feature of the proposed solution is its fine-grained trust evaluation mechanism—IoT devices receive recommendations about service providers not only based on the requested service but also according to a set of specific requirements that the providers can fulfill. Additionally, blockchain technology enhances the protocol’s adaptability to high-mobility scenarios. Through experiments, the authors demonstrate the resilience and robustness of their approach against various malicious attacks, including self-promotion, bad-mouthing, ballot stuffing, opportunistic service, and on–off attacks. They further validate their results through a theoretical analysis of trust value convergence under different attack scenarios. To mitigate the impact of cooperative attacks, the authors propose an online countermeasure algorithm that analyzes the recommendation history recorded on the blockchain. This real-time algorithm is executed whenever fog nodes compute trust recommendations. Each fog node aggregates all recommendations for a particular IoT service provider and evaluates the minimum and maximum recommendation values. If the difference exceeds a predefined threshold, the provider is flagged as potentially engaging in a cooperative attack, and its recommendations are disregarded. The proposed protocol takes advantage of blockchain technology’s strengths but also inherits its drawbacks. One major limitation is high resource consumption, as miners require substantial computing power to achieve consensus.
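The online countermeasure described above reduces to a spread check over the recommendations recorded for a provider. The following fragment is an illustrative rendering of that check; the threshold value and the data layout are assumptions rather than details taken from [22].

```python
def flag_cooperative_attack(recommendations, spread_threshold=0.5):
    """recommendations: recommendation values recorded for one service provider.
    If the gap between the most and least favourable recommendations exceeds the
    threshold, the provider is flagged and its recommendations are disregarded."""
    if not recommendations:
        return False
    return max(recommendations) - min(recommendations) > spread_threshold
```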
Ouechtati et al. introduced in [23] a fuzzy logic-based model to filter dishonest recommendations in the SIoT. The model evaluates recommendations based on factors such as recommendation values, the sender’s location coordinates, the time of submission, and social relationships. The proposed approach detects Sybil attacks by applying fuzzy classification to received recommendations, considering their Cosine Distance and temporal proximity. The underlying assumption is that recommendations that are similar in content, closely timed, and sent from nearby locations are likely generated by the same attacker conducting a Sybil attack. Once Sybil recommendations are identified, the remaining recommendations are classified based on the existing social relationships between the senders and the recommended object. These relationships include the Ownership Object Relationship (OOR), Co-Location Relationship (C-LOR), Co-Work Relationship (C-WOR), and Social Object Relationship (SOR). To further assess recommendation reliability, the model evaluates Internal Similarity (IS), which measures how closely each recommendation aligns with the median value of trusted witness objects within the same community. Simultaneously, it calculates the Degree of Social Relationship (DSR), which quantifies the strength of connections between the senders and the recommended object. The DSR is determined by the number of past transactions between objects, with greater interaction frequency indicating higher trustworthiness. Additionally, recommendations from specific communities, such as the OOR, carry more weight in the evaluation. The IS and DSR metrics play a crucial role in detecting good-mouthing and bad-mouthing attacks. If both the IS and DSR values are very low, the recommendation is deemed unreliable, suggesting that the sender may be attempting to manipulate trust through either a good-mouthing or bad-mouthing attack.
In Vehicular Edge Computing (VEC), edge servers may request data from Autonomous Vehicles (AVs) to support intelligent applications such as Intelligent Transportation Systems (ITSs). However, economic concerns, including power consumption, create challenges in integrating data sharing within the dynamic VEC network. A promising alternative is shifting from data sharing to data trading, which incentivizes AVs to exchange their data for rewards. However, integrating data trading into edge servers introduces additional concerns related to security, privacy, and trust. To address these challenges, Mianji et al. propose in [24] a novel reputation management system for data trading that utilizes DRL. Their approach, called the Dynamic Selection of Trusted Sellers using Deep Deterministic Policy Gradient (DSTSDPG), dynamically adjusts a credibility score threshold to identify the most reliable data sellers among the available AVs. The proposed system follows a hierarchical network architecture consisting of three levels: the Vehicle level (Autonomous Vehicles), Edge level (Roadside Units (RSUs) or edge servers), and Cloud level (a central cloud server). Within each cluster, a designated VEC server retrieves credibility values from the cloud and uses multiple parameters to calculate an AV’s credibility score using the DRL-based approach. Based on this score, the edge server selects the most trustworthy AV for data transactions. The computed scores are uploaded to the cloud and can be shared with other edge servers when an AV moves to a different cluster, addressing the issue of mobility. However, since this method relies on a centralized architecture involving both cloud and edge servers, it inherits typical limitations such as the risk of a single point of failure. For credibility score calculation, the proposed approach considers three key factors: historical reputation (past interactions influence trustworthiness), familiarity (higher values indicate that an edge server has more prior knowledge of an AV), and freshness (recent interactions are weighted more heavily). The proposed trust and reputation model is built on subjective logic, and the credibility score is derived from a weighted sum of familiarity and freshness. However, the authors do not clarify how the weights are determined.
Existing trustworthiness models can only detect a subset of known attacks, but none can defend against all types [25]. This is because attack patterns are highly diverse, with malicious nodes strategically exploiting weaknesses in trust algorithms to evade detection. To address this challenge, Marche and Nitti examine in [25] all trust-related attacks documented in the literature that can impact IoT systems. They propose a decentralized trust management model that leverages a machine learning algorithm and introduces three novel parameters: goodness, usefulness, and perseverance scores. These scores enable the model to continuously learn, adapt, and effectively identify and counteract a wide range of malicious attacks. Therefore, developing algorithms capable of detecting diverse malicious activity patterns is crucial [26]. To this end, in Section 8, we further discuss how various mechanisms can be integrated into the CA approach to enhance its resilience against existing trust-related attacks.

3. Background

In this section, we begin by outlining the key features of FIRE and CA, the two trust models evaluated in our simulation experiments, explaining the rationale behind selecting FIRE as a reference model for comparison, followed by an overview of the previous version of the CA algorithm, which incorporates a mechanism for handling dynamic trustee profiles.

3.1. FIRE

Huynh et al. introduced the FIRE model in [27], naming it after “fides” (Latin for “trust”) and “reputation”. We selected FIRE as a reference model for comparison with CA because it is a well-established trust and reputation model for open MASs that, like CA, adopts a decentralized approach. Additionally, FIRE represents the traditional trust management approach, where trustors select trustees, providing a contrast to the CA model, where trustees are not chosen by trustors. FIRE consists of four key modules:
  • Interaction Trust (IT): Evaluates an agent’s trustworthiness based on its past interactions with the evaluator.
  • Witness Reputation (WR): Assesses the target agent’s trustworthiness using feedback from witnesses—other agents that have previously interacted with it.
  • Role-based Trust (RT): Determines trustworthiness based on role-based relationships with the target agent, incorporating domain knowledge such as norms and regulations.
  • Certified Reputation (CR): Relies on third-party references stored by the target agent, which can be accessed on demand to assess trustworthiness.
IT is the most reliable trust information source, as it directly reflects the evaluator’s satisfaction. However, if the evaluator has no prior interactions with the target agent, FIRE cannot utilize the IT module and must rely on the other three modules, primarily WR. However, when a large number of witnesses leave the system, WR becomes ineffective, forcing FIRE to depend mainly on the CR module for trust assessments. Yet, CR is not highly reliable, as trustees may selectively store only favorable third-party references, leading to the overestimation of their performance.

3.2. CA Model

CA is a biologically inspired computational trust model, drawing on synaptic plasticity in the human brain and the formation of assemblies of neurons. In our previous work [28], we provided a detailed explanation of synaptic plasticity and how it is applied in our model.
An MAS is used to represent a social environment where agents communicate and collaborate to execute tasks. In [28], we formally defined the key concepts necessary for describing our model, but here, we summarize the most essential ones. A trustor is an agent that defines a task and broadcasts a request message containing all relevant details, including the following: (i) the task category (type of work to be performed) and (ii) a set of requirements, as specified by the trustor.
In the CA approach, the trustee, rather than the trustor, decides whether to engage in an interaction and perform a given task. When a trustee receives a request message, it establishes and maintains a connection with a weight w ∈ [0, 1], representing the strength of this connection. This weight reflects the probability of successful task completion and is updated by the trustee, based on the performance feedback provided by the trustor. If the trustee successfully completes the task, the weight is increased according to the following:
w = min(1, w + α(1 − w)). (1)
If the trustee fails, the weight decreases as follows:
w = max(0, w − β(1 − w)), (2)
where α and β are positive parameters controlling the rate of increase and decrease, respectively.
The trustee decides whether to accept a task request by comparing the connection weight with a predefined Threshold ∈ [0, 1]. If the weight meets or exceeds this threshold (w ≥ Threshold), the trustee proceeds with task execution.
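Equations (1) and (2) and the acceptance rule translate directly into code. The sketch below uses Threshold = 0.5 and α = β = 0.1 purely as default values (these are the experimental settings referred to in Section 4.1), not as part of the model definition.

```python
def update_weight(w, success, alpha=0.1, beta=0.1):
    """Apply Equation (1) after a successful task execution and Equation (2) after a failure."""
    if success:
        return min(1.0, w + alpha * (1.0 - w))
    return max(0.0, w - beta * (1.0 - w))

def accepts_task(w, threshold=0.5):
    """The trustee executes the task only if the connection weight reaches the threshold."""
    return w >= threshold
```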
Table 1 summarizes the main differences between FIRE and CA across several core dimensions of trust modeling, including trust evaluation methods, trust update mechanisms, and trust storage approaches.

3.3. The CA Algorithm Used to Handle Dynamic Trustee Profiles

In this subsection, we present the original version of the CA algorithm, as introduced in [7], including a brief description and the pseudocode, for the sake of completeness and to facilitate comparison with the updated version, presented in Section 4.2 of this work.
When a trustor identifies a new task to be executed by a trustee, it broadcasts a request message containing relevant task details (lines 2–3). Upon receiving this request, each potential trustee stores it in a list and establishes a new connection with the requesting trustor if one does not already exist (lines 4–12). Lines 6–11 address the cold start problem, which arises when a trustee lacks prior experience with a specific task and thus cannot assess its own capability to complete it. In the version shown in Algorithm 1, the trustee leverages prior knowledge gained from performing the same task for other trustors. Specifically, line 7 checks whether agent i has previously interacted with other agents (denoted by ~j) for the same task. If such similar connections exist, the agent will use the average weight of those connections to initialize the new one. If no prior experience exists (i.e., the trustee has never performed the task for any trustor), the connection weight is initialized to a default value of 0.5 (line 10), as was done in the original algorithm proposed in [28].
At each time step, each trustee reviews the tasks stored in that list and attempts to execute the task with the highest connection weight, provided that the task remains available and the weight is not below a predefined threshold (lines 13–21). Lines 14–16 are part of the agent’s decision-making process before attempting to perform a task. They serve as a sequence of filters that ensure the task is worth pursuing, feasible, and assigned with sufficient trust. The check in line 14—“task is not visible or not done yet”—captures scenarios where the agent must move within range to personally verify the task’s completion. In some cases, other agents might provide that information. Line 15 ensures that the task is physically accessible, that it is not already being performed by another agent, and that the agent has the resources to perform it. Finally, line 16 introduces a trust-based filter. Even if a task is available and the agent is able to approach and undertake it, the agent proceeds only if the trust level with the task requester for the specific task meets or exceeds a predefined threshold. This promotes cautious behavior by ensuring that the agent only commits to tasks where sufficient trust exists in the cooperation.
If task execution is successful, the trustee increases the connection weight; otherwise, it decreases it (lines 22–26). This CA algorithm also accounts for dynamic trustee profiles as follows (lines 27–31). If a connection’s weight falls below the threshold, the trustee interprets this as an indication of its incapability to complete the task successfully and stops attempting it to save resources. However, if the trustee performs well on an easier task, it may infer that it has likely learned how to execute more complex tasks within the same category. In such cases, it increases the connection weight of those previously failed tasks to the threshold value, allowing itself an opportunity to attempt them again in the future.
To improve the understanding of Algorithm 1, Table 2 summarizes the main symbols and their meanings.
Algorithm 1: CA v2, for agent i
1: while True do
# --- broadcast a request message when a new task is perceived ---
2:   when perceived a new task = (c, r)
3:     broadcast message m = (request, i, task)
# --- Receive/store a request message and initialize a new connection---
4:   when received a new message m = (request, j, task)
5:     add m to message list M
6:     if no existing connection co = (i, j, _, task) then
7:       if there are similar connections co’ = (i,~j,_,task) from i to other agents for the same task then
8:         create co = (i, j, avg_w, task), where avg_w = average of all weights for (i, ~j, _, task)
9:       else
10:          create co = (i, j, 0.5, task) # initialize weight to default initial trust 0.5
11:     end if
12:   end if
# --- Select and Attempt task ---
13:   select m = (request, j, task) from M such that co = (i, j, w, task) has the highest weight among all (i, k, w’, task)
14:   if task is not visible or not done yet then
15:     if canAccessAndUndertake(task) then
16:       if w ≥ Threshold then
17:         (result, performance) ← performTask(task)
18:       end if
19:     end if
20:   end if
21:   delete m from M
# --- Update connection weight based on result ---
22:   if result = success then
23:     strengthen co using Equation (1)
24:   else
25:     weaken co using Equation (2)
26:   end if
# --- Dynamic profile update ---
27:   for all failed connections co’ = (i, j, w’, task’) where w’ < Threshold and task’ = (c, r’) with r’ > r do
28:     if performance ≥ minSuccessfulPerformance(task’) then
29:       w’ ← Threshold # Give another chance on harder tasks
30:     end if
31:   end for
32: end while
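To make the weight update of lines 22–26 concrete, the following minimal Python sketch reproduces the update rules as they can be inferred from the worked examples in Section 4.1 (strengthen: w + α(1 − w); weaken: w − β(1 − w), with α = β = 0.1 and Threshold = 0.5 in our experiments). The exact forms of Equations (1) and (2) are therefore assumptions inferred from those examples, and the helper names are illustrative only.
ALPHA = 0.1       # learning rate applied on successful executions (alpha in the paper)
BETA = 0.1        # decay rate applied on failed executions (beta in the paper)
THRESHOLD = 0.5   # minimum connection weight required to attempt a task

def strengthen(w: float) -> float:
    # Assumed form of Equation (1): reinforce the connection after a success.
    return w + ALPHA * (1.0 - w)

def weaken(w: float) -> float:
    # Assumed form of Equation (2): weaken the connection after a failure.
    return w - BETA * (1.0 - w)

# Example: a fresh connection (w = 0.5) that fails once drops to 0.45, which is
# below THRESHOLD, so the trustee will not attempt that task for this trustor again.
w = weaken(0.5)
print(w)  # 0.45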

4. Enhancing Performance by Avoiding Unwarranted Task Executions

In this section, we begin with a semi-formal analysis of the CA algorithm to identify potential modifications that could enhance its performance. Following this, we introduce an improved CA algorithm designed to detect and prevent unwarranted task executions.

4.1. A Semi-Formal Analysis of the CA Algorithm to Identify and Avoid Unwarranted Task Executions

To identify potential improvements to the performance of the CA algorithm, we conducted the following semi-formal analysis. While this analysis incorporates formal elements such as assumptions, definitions, propositions, and a proof by induction, it does not constitute a strictly formal analysis in the traditional mathematical or computer science sense. Instead, it represents a blend of formal reasoning and empirical evaluation for the following reasons:
  • Although induction is used to support Proposition 1, many conclusions rely more on empirical observation than on rigorous formal logic;
  • The conclusions are drawn from specific experimental conditions (e.g., Threshold = 0.5, α = β = 0.1), whereas a purely formal analysis would typically strive to derive results that hold regardless of particular parameter values;
  • Several conclusions are derived from example scenarios rather than universally valid logical proofs.
In our testbed, each consumer requesting the service at a certain performance level waits for a period equal to WT for the service to be provided by a provider in its operational range (nearby provider). If the requested task = (service_ID, performance_level), where performance_level ∈ {PERFECT, GOOD, OK, BAD, WORST}, is not performed, either because there are no nearby providers or none of the nearby providers are willing to perform the task, then the consumer requests the service at the next lower performance level, assuming that any consumer can manage with a lower-quality service.
Assumption 1.
WT is large enough so that if there is a capable provider in the operational range of the consumer, then the task will be successfully executed.
Assumption 2.
The only reason a nearby provider may not be willing to perform a task is that the weight of the relevant connection is less than the threshold value. We assume that there are no other reasons, i.e., the providers are not selfish or malicious. In other words, we assume that the providers are always honest and comply with the implemented protocols, having no intention to harm the trustors.
Since every rational consumer aims to maximize its profit, the sequence of tasks that it may request until a provider is found to provide the service is as follows: task1, task2, task3, task4, and task5. In Table 3, the requirements of each task are specified.
Definition 1.
Given that C_j is a consumer requesting the successful execution of task_i, i ∈ {1, …, 5}, P_k is a provider in the operational range of C_j receiving a request from C_j for the execution of task_i, and P_k does not yet have a connection for C_j and task_i, we define P_k as having learned its incapability to perform task_i (meaning that it cannot always perform task_i successfully) when it initializes the weight of the new connection for C_j and task_i to a value smaller than the threshold value, which will result in not executing task_i for C_j.
Since only bad and intermittent providers have negative (causing damage to the consumers) performances, we analyze how bad providers learn their own capabilities in each of the five tasks, aiming to identify ways to reduce task executions with negative performances and thus improve the performance of the CA algorithm.
We consider a system of four consumers, C1, C2, C3, and C4, and only one provider: the bad provider BP1. Each consumer has the provider BP1 in its operational range. The provider has no knowledge of its capabilities in performing tasks, meaning that it has not formed any connections yet.
Phase A: Provider BP1 learns its incapability in performing task1.
Assume that C1 requests the execution of task1, broadcasting the message m1 = (request, C1, task1). According to the CA algorithm, when BP1 receives message m1, it will create the connection co1 = (BP1, C1, 0.5, task1), initializing its weight to the value 0.5. Since the condition w ≥ Threshold is satisfied, BP1 will perform task1, but it will fail because its performance range is in [−10, 0] and task1 requires a performance equal to 10. After it fails, BP1 will decrease the weight of co1 to the value w = 0.45 by using the equation w = w − β·(1 − w), where β = 0.1 in our experiments.
Now, let C2 require the execution of task1 by sending the message m2 = (request, C2, task1). When BP1 receives the message m2, and because it already has the connection co1, it will create the connection co2 = (BP1, C2, 0.45, task1), initializing its weight to the average of the weights of the connections it has with other consumers for task1. In this case, average = w_co1/1 = 0.45, where w_co1 denotes the weight of connection co1. Because the weight of co2 is less than the threshold value, BP1 will decide not to perform task1 for consumer C2.
Assumption 3.
For simplicity, we assume that BP1 will not change its ability to provide task1, so that the weights of all connections will remain constant over time.
Now, we can prove by induction the following proposition.
Proposition 1.
Given Assumption 3, every new connection that BP1 creates for task1 will be initialized to the average of the weights of the connections it has already created for task1, which will remain constant and equal to 0.45.
The proof is provided in Appendix A. By Proposition 1 and Definition 1, it follows that BP1, in a single trial, has learned that it cannot successfully perform task1. This result generalizes to every bad provider, leading to the following conclusion.
Conclusion 1.
Given our experimental conditions (i.e., Threshold = 0.5, α = β = 0.1), every bad provider needs only one trial to learn that it cannot successfully perform task1.
Phase B: Provider BP1 learns its incapability in performing task2.
Following the same analysis as in Phase A, we can conclude as follows.
Conclusion 2.
Given our experimental conditions (i.e., Threshold = 0.5, α = β = 0.1), every bad provider needs only one trial to learn that it cannot successfully perform task2.
Phase C: Provider BP1 learns its incapability in performing task3.
Assume that C1 requests the execution of task3, broadcasting the message m1 = (request, C1, task3). When BP1 receives message m1, it will create the connection co1 = (BP1, C1, 0.5, task3), initializing its weight to the value 0.5. Since the condition w ≥ Threshold is satisfied, BP1 will perform task3. Due to its performance range in [−10, 0] and the task’s requirement that performance must be greater than or equal to zero in order to be successful, BP1 has a small probability of achieving a performance equal to zero and thus a successful execution of task3.
So, consider a scenario where the performance of BP1 on task3 on its first trial is 0. After it succeeds, it will increase the weight of co1 to the value w = 0.55 by using the equation w = w + α·(1 − w), where α = 0.1 in our experiments. Now, let C2 request the execution of task3, broadcasting the message m2 = (request, C2, task3). When BP1 receives message m2, and because it already has the connection co1, it will calculate the average weight of existing connections as average = w_co1/1 = 0.55. Then, it will create the connection co2 = (BP1, C2, 0.55, task3), initializing its weight to the average just calculated. Since the condition w ≥ Threshold is satisfied, BP1 will execute task3 for C2. Suppose that BP1 fails this time and decreases the weight of co2 to the value w = 0.505. The scenario continues with two consecutive failed executions of task3 for consumers C3 and C4. In Table 4, we can see the average weights of the connections after each task execution.
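The following short Python sketch reproduces this weight trajectory (and, up to rounding, the averages reported in Table 4). It assumes the update forms inferred above (w + α(1 − w) on success, w − β(1 − w) on failure, with α = β = 0.1) and the average-based initialization of new connections; the variable names are illustrative only.
THRESHOLD, ALPHA, BETA = 0.5, 0.1, 0.1
strengthen = lambda w: w + ALPHA * (1 - w)   # assumed form of Equation (1)
weaken = lambda w: w - BETA * (1 - w)        # assumed form of Equation (2)

weights = []                                 # BP1's connection weights for task3, in creation order
outcomes = [True, False, False, False]       # success for C1, then failures for C2, C3, C4

for n, success in enumerate(outcomes, start=1):
    w0 = sum(weights) / len(weights) if weights else 0.5   # average-based initialization
    assert w0 >= THRESHOLD                                  # otherwise BP1 would refuse the task
    w = strengthen(w0) if success else weaken(w0)
    weights.append(w)
    avg = sum(weights) / len(weights)
    print(f"interaction {n}: weight = {w:.4f}, average = {avg:.4f}")

# The running average falls below the 0.5 threshold only after the third consecutive
# failure, matching the three harmful executions discussed below.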
In the scenario above, BP1 needed three consecutive failed executions after the successful first execution of task3 to learn (according to Definition 1) that it is not capable of always performing this task successfully, because the average weight of its connections for task3 is now less than the threshold value. This leads us to the following more general conclusion.
Conclusion 3.
If a bad provider succeeds in executing task3 on its first trial, a number of consecutive failed executions are required before it learns that it cannot always execute this task successfully.
Phase D: Provider BP1 learns its incapability in performing task4.
Since BP1 has a performance range in [−10, 0] and task4 requires a performance ≥ −5 to be successful, BP1 has a good probability of being successful in its first trial of task4. If we repeat the analysis of Phase C, we will be led to the following conclusion.
Conclusion 4.
If a bad provider succeeds in executing task4 on its first trial, a number of consecutive failed executions are required before it learns that it cannot always execute this task successfully.
Phase E: Provider BP1 learns its capability in performing task5.
Since BP1 has a performance range in [−10, 0] and task5 is successfully executed if performance ≥ −10, BP1 will always execute this task successfully.
Conclusion 5.
A bad provider will always execute task5 successfully.
Ideally, we would prefer that a bad provider not provide the service at all, because its poor performance harms the consumer, but providing the service at least once is required to assess the provider’s capabilities. However, the previous analysis demonstrates that the CA algorithm allows the bad provider to provide the service multiple times. Despite its negative performance on task1, BP1 again performed poorly, with a negative performance, in Phase B. A more intelligent agent could take its negative performance from the first execution into account and decide not to execute task2. Furthermore, in both Phase C and Phase D, a successful first execution of the task is followed by a series of consecutive failed executions before the incapability is learned, which we would like to avoid.
To this end, in the following section, we propose an improved CA algorithm designed to detect bad providers early on and prevent them from damaging the consumers with their negative performances.

4.2. The Proposed CA Algorithm for the Early Detection of Bad Providers

In the proposed algorithm for the early detection of bad providers, each provider maintains a task-specific self-assessment mechanism through the i.bad_tasks map, which is initialized with a default value of false for each task (line 1). This means that when a provider is created, it implicitly assumes it is not bad for any task it has not yet performed—there is no need to explicitly initialize entries for unknown tasks. After performing a task, the provider re-evaluates its performance (lines 25–29). If the performance is less than or equal to zero, it considers itself a bad provider for that specific task by setting i.bad_tasks[task] ← true; otherwise, it maintains or resets the value to false. This enables rapid, task-specific reassessment, allowing the provider to dynamically adapt its trustworthiness profile based on its most recent outcomes.
In lines 7–14 of the proposed algorithm, this self-assessment is used to initialize the trust weight of a new connection. If the provider believes it is bad (i.bad_tasks[task] = true) and the task requires a performance level of PERFECT, GOOD, or OK, the connection is initialized with a trust weight of 0.45 (line 9). Otherwise, the algorithm falls back to the original initialization logic (lines 10–13), which either averages existing weights from similar connections or sets a default trust of 0.5, as described in Section 3.3 for the previous version of the algorithm (Algorithm 1).
In lines 15–19, if the connection already exists, the provider re-evaluates whether it should adjust the weight based on its current self-assessment. If the provider considers itself bad and the task requires a performance level ≥ 0 (i.e., PERFECT, GOOD, or OK), it lowers the weight of the existing connection to 0.45, reinforcing a cautious stance toward engaging in the task.
This modification enhances the context-awareness and reliability of the decision-making process by ensuring that each provider is consistently informed by its own recent experience when evaluating and responding to task requests.
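As an illustration of this decision logic, the following Python sketch mirrors the self-assessment and initialization steps of Algorithm 2 (lines 1, 7–19, and 25–29). The class and method names are hypothetical, and the required performance level is passed as a separate argument instead of being read from the task object; only the weight values (0.45 and 0.5) and the PERFECT/GOOD/OK check follow the algorithm.
from collections import defaultdict

HIGH_LEVELS = {"PERFECT", "GOOD", "OK"}   # levels that require a performance >= 0
CAUTIOUS_WEIGHT = 0.45                    # below the 0.5 threshold, so the task is skipped
DEFAULT_WEIGHT = 0.5

class Provider:
    def __init__(self):
        self.bad_tasks = defaultdict(bool)   # line 1: defaults to false ("not bad")
        self.connections = {}                # (consumer, task) -> connection weight

    def init_or_adjust_connection(self, consumer, task, level):
        key = (consumer, task)
        if key not in self.connections:                           # lines 7-14
            if self.bad_tasks[task] and level in HIGH_LEVELS:
                self.connections[key] = CAUTIOUS_WEIGHT           # line 9: cautious start
            else:
                similar = [w for (c, t), w in self.connections.items() if t == task]
                self.connections[key] = (sum(similar) / len(similar)) if similar else DEFAULT_WEIGHT
        elif self.bad_tasks[task] and level in HIGH_LEVELS:        # lines 15-18
            self.connections[key] = CAUTIOUS_WEIGHT               # lower an existing connection

    def reassess(self, task, performance):                         # lines 25-29
        self.bad_tasks[task] = (performance <= 0)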
Core functionalities from Algorithm 1—such as broadcasting task requests, handling incoming messages, task selection and execution, connection weight updates, and dynamic profile adjustments—are preserved without modification in Algorithm 2.
Algorithm 2: CA v3, for agent i
  # --- Initialize task-specific self-assessment memory ---
1: define i.bad_tasks as a map with default value false # assumes good unless proven otherwise
2: while True do
# --- Broadcast a request message when a new task is perceived ---
3:   when perceived a new task = (c, r)
4:     broadcast message m = (request, i, task)
# --- Receive/store a request message and initialize a new connection ---
5:   when received a new message m = (request, j, task)
6:     add m to message list M
# --- Initialize a new connection ---
7:     if no existing connection co = (i, j, _, task) then
8:       if i.bad_tasks[task] = true and task.performance_level ∈ {PERFECT, GOOD, OK} then
9:         create co = (i, j, 0.45, task) # cautious trust level for task-specific bad assessment
10:       else if there are similar connections co’ = (i, ~j, _, task) from i to other agents for the same task then
11:         create co = (i, j, avg_w, task), where avg_w = average of all weights for (i, ~j, _, task)
12:       else
13:         create co = (i, j, 0.5, task) # initialize to default trust
14:       end if
15:     else #if connection co = (i, j, _, task) exists
# --- modify an existing connection if certain conditions hold---
16:       if i.bad_tasks[task] = true and task.performance_level ∈ {PERFECT, GOOD, OK} then
17:         modify co = (i, j, 0.45, task)
18:       end if
19:     end if
# --- Select and Attempt task ---
20:   select m = (request, j, task) from M such that co = (i, j, w, task) has the highest weight among all (i, k, w’, task)
21:   if task is not visible or not done yet then
22:     if canAccessAndUndertake(task) then
23:       if w ≥ Threshold then
24:         (result, performance) ← performTask(task)
# --- Re-evaluate task-specific self-assessment based on latest performance ---
25:         if performance ≤ 0 then
26:           i.bad_tasks[task] ← true
27:         else
28:           i.bad_tasks[task] ← false
29:         end if
30:       end if
31:     end if
32:   end if
33:   delete m from M
# --- Update connection weight based on result ---
34:   if result = success then
35:     strengthen co using Equation (1)
36:   else
37:     weaken co using Equation (2)
38:   end if
# --- Dynamic profile update ---
39:   for all failed connections co’ = (i, j, w’, task’) where w’ < Threshold and task’ = (c, r’) with r’ > r do
40:     if performance ≥ minSuccessfulPerformance(task’) then
41:       w’ ← Threshold  # Give another chance on harder tasks
42:     end if
43:   end for
44: end while

5. Experimental Setup and Methodology

5.1. The Testbed

To test the performance of the revised algorithm, we performed extensive simulation-based experimentation on a testbed based on the one described in [27].
The environment of the testbed consists of agents that either provide services (referred to as providers or trustees) or use these services (referred to as service requesters, consumers, or trustors). For simplicity, we assume all providers offer the same service, i.e., there exists only one type of task. The agents are randomly distributed within a spherical world with a radius of 1.0. The agent’s radius of operation (r_0) represents its capacity to interact with others (e.g., available bandwidth), and it is uniform across all agents, set to half the radius of the spherical world. Each agent has acquaintances, which are other agents located within its operational radius.
Provider performance varies and determines the utility gain (UG) for consumers during interactions. There are four types of providers, good, ordinary, intermittent, and bad, as defined in [27]. Except for intermittent providers, each type has a mean performance level μ_p, with the actual performance following a normal distribution around this mean. Table 5 shows the values of μ_p and the associated standard deviation σ_p for each provider type. Intermittent providers perform randomly within the range [PL_BAD, PL_GOOD]. The radius of operation of a provider also represents the range within which it can offer services without a loss of quality. If a consumer is outside this range, the service quality deteriorates linearly based on the distance, but the final performance remains within [−10, +10] and corresponds to the utility the consumer gains from the interaction.
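A hedged sketch of this performance model is given below. The (μ_p, σ_p) pairs per profile come from Table 5 and the performance-level boundaries from Table 6, neither of which is reproduced here, so the numeric values used in the sketch are placeholders; the slope of the linear degradation outside the operational range is likewise an assumption.
import random

PL_GOOD, PL_BAD = 5.0, -5.0          # placeholder level boundaries (actual values in Table 6)
PROFILE_PARAMS = {                   # placeholder (mu_p, sigma_p) pairs (actual values in Table 5)
    "good": (7.0, 1.0),
    "ordinary": (2.0, 1.0),
    "bad": (-5.0, 2.0),
}

def delivered_performance(profile, distance, radius_of_operation, degradation_per_unit=2.0):
    # Draw the raw performance for this interaction.
    if profile == "intermittent":
        perf = random.uniform(PL_BAD, PL_GOOD)      # intermittent: uniform in [PL_BAD, PL_GOOD]
    else:
        mu, sigma = PROFILE_PARAMS[profile]
        perf = random.gauss(mu, sigma)              # normal distribution around the profile mean
    # Linear quality loss when the consumer lies outside the provider's operational range.
    if distance > radius_of_operation:
        perf -= degradation_per_unit * (distance - radius_of_operation)
    # The delivered performance (= the consumer's UG) stays within [-10, +10].
    return max(-10.0, min(10.0, perf))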
The simulations in the testbed are conducted in rounds. As in real life, consumers do not require services in every round. When a consumer is created, its probability of requiring a service (activity level α) is selected randomly. There are no limitations on the number of agents that can participate in a round. If a consumer needs a service in a round, the request is always made within that round. The round number marks the time for any event.
Consumers fall into one of three categories: (a) those using FIRE, (b) those using the old version of the CA algorithm, or (c) those using the new version of the CA algorithm. If a consumer requires a service in a round, it locates all nearby providers. FIRE consumers select a provider following the four-step process outlined in [27]. After choosing a provider, they use the service, gain utility, and rate the service based on the UG they received. This rating is recorded for future trust assessments. The provider is also informed of the rating and may keep it for future interactions.
CA consumers do not choose a provider. Instead, they send a request message to all nearby providers specifying the required service quality. Table 6 lists five performance levels that define the possible service qualities. CA consumers first request service at the highest quality (PERFECT). After a predetermined waiting time (WT), any CA consumer still unserved sends a new request for a lower-performance-level service (GOOD). This process continues until the lowest service level is reached or all consumers are served. When a provider receives a request, it stores it locally and applies the CA algorithm (CA_OLD or CA_NEW, depending on the group to which the requesting consumer belongs). WT is a parameter that defines the maximum time allowed for all requested services in a round to be provided.
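The consumer-side request loop can be sketched as follows; broadcast_request, is_served, and wait are hypothetical stand-ins for testbed functionality, and the sketch simply walks down the performance levels of Table 6 with a waiting time of WT between requests.
PERFORMANCE_LEVELS = ["PERFECT", "GOOD", "OK", "BAD", "WORST"]   # Table 6, highest quality first

def request_service(consumer, service_id, wt, broadcast_request, is_served, wait):
    """Broadcast the task at successively lower performance levels until served."""
    for level in PERFORMANCE_LEVELS:
        broadcast_request(consumer, (service_id, level))   # m = (request, consumer, task)
        wait(wt)                                            # give nearby providers WT to respond
        if is_served(consumer):
            return level                                    # quality level actually obtained
    return None                                             # the round ends with the consumer unserved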
We assume that any consumer can manage with a lower-quality service. This assumption does not raise an issue of unfair comparison between FIRE and CA, since it also applies to consumers using FIRE. For instance, they may end up selecting a provider—potentially the only available option—whose service ultimately delivers the lowest performance level (WORST).
Agents can enter and exit the open MAS at any time, which is simulated by replacing a number of randomly selected agents with new ones. The number of agents added or removed after each round varies but must remain within certain percentage limits of the total population. The parameters p_CPC and p_PPC define these population change limits for consumers and providers, respectively. The characteristics of new agents are randomly determined, but the proportions of provider types and consumer groups are maintained.
When an agent changes location, it affects both its own situation and its interactions with others. The location is specified using polar coordinates (r, φ, θ), and the agent’s position is updated by adding random angular changes Δφ and Δθ to φ and θ. Δφ and Δθ are chosen randomly from the range [−Δϕ, +Δϕ]. Consumers and providers change their locations with probabilities p_CLC and p_PLC, respectively.
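A minimal sketch of this location-change rule is shown below; keeping the radial coordinate r fixed is an assumption, since the text only mentions angular changes, and the function name is illustrative.
import math, random

def maybe_move(phi, theta, p_loc_change, delta_phi):
    """With probability p_loc_change, perturb the angular coordinates by amounts in [-delta_phi, +delta_phi]."""
    if random.random() < p_loc_change:
        phi += random.uniform(-delta_phi, delta_phi)
        theta += random.uniform(-delta_phi, delta_phi)
    return phi, theta

# Example with the values used in Experiments 12 and 13: probability 0.10, delta_phi = pi/20.
phi, theta = maybe_move(1.2, 0.7, p_loc_change=0.10, delta_phi=math.pi / 20)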
A provider’s performance μ can also change by a random amount Δμ within the range [−M, +M] with a probability of p_μC in each round. Additionally, with a probability of p_ProfileSwitch, a provider may switch to a different profile after each round.

5.2. Experimental Methodology

In our experiments, we compare the performance of the following three consumer groups:
  • FIRE: consumer agents use the FIRE algorithm.
  • CA_OLD: consumers use the previous version of the CA algorithm.
  • CA_NEW: consumers use the new version of the CA algorithm.
To ensure accuracy and minimize random noise, we conduct multiple independent simulation runs for each consumer group. The Number of Simulation Independent Runs (NSIR) varies per experiment to achieve statistically significant results; the exact NSIR values are displayed in the graphs illustrating the experimental results.
The effectiveness of each algorithm in identifying trustworthy provider agents is measured by the utility gain (UG) achieved by consumer agents during simulations. Throughout each simulation run, the testbed records the UG for each consumer interaction, along with the algorithm used (FIRE, CA_OLD, or CA_NEW).
After completing all simulation runs, we calculate the average UG for each interaction per consumer group. We then apply a two-sample t-test for mean comparison [29] with a 95% confidence level to compare the average UG between the following:
  • CA_OLD and CA_NEW;
  • FIRE and CA_NEW.
Each experiment’s results are displayed using two two-axis graphs: one comparing CA_OLD and CA_NEW and another comparing FIRE and CA_NEW. In each graph, the left y-axis represents the UG means for consumer groups per interaction, while the right y-axis displays the performance rankings produced by the UG mean comparison using the t-test. The ranking is denoted with the prefix “R” (e.g., R.CA), where a higher rank indicates superior performance. If two groups share the same rank, their performance difference is statistically insignificant. For instance, in Figure 1a, at the 17th interaction (x-axis), consumer agents in the CA_NEW group achieve an average UG of 6.15 (left y-axis), and according to the t-test ranking, the CA_NEW group holds a rank of 2 (right y-axis).
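For illustration, the per-interaction comparison can be implemented as in the following sketch, which uses SciPy’s two-sample t-test. Whether equal variances are assumed is not specified in [29], so the default pooled-variance test is used here, and the ranking convention simply mirrors the one used in the graphs (equal ranks when the difference is insignificant, the higher rank going to the larger mean otherwise).
from scipy import stats

def rank_groups(ug_a, ug_b, alpha=0.05):
    """Compare the UG samples of two consumer groups for one interaction index.

    ug_a, ug_b: lists of average UG values, one per independent simulation run.
    Returns a (rank_a, rank_b) pair: equal ranks mean no significant difference.
    """
    _, p_value = stats.ttest_ind(ug_a, ug_b)     # two-sample t-test (95% confidence for alpha=0.05)
    if p_value >= alpha:                          # difference is statistically insignificant
        return 1, 1
    mean_a = sum(ug_a) / len(ug_a)
    mean_b = sum(ug_b) / len(ug_b)
    return (2, 1) if mean_a > mean_b else (1, 2)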
All experiments use a “typical” provider population, as defined in [27], consisting of 50% beneficial providers (yielding positive UG) and 50% harmful providers (yielding negative UG, including intermittent providers).
To maintain consistency, we use the same experimental values as in [27], detailed in Table 7. Additionally, the default parameters for FIRE and CA are presented in Table 8 and Table 9, respectively.

6. Simulation Results

This section presents the results of fourteen experiments evaluating the performance of the updated CA algorithm (CA_NEW) in comparison to both CA_OLD and FIRE. Each subsection examines different environmental conditions. The first two experiments (Section 6.1) concern scenarios where service provider performance fluctuates over time. Experiment 3 (Section 6.2) is conducted in a static environment. Experiments 4–8 (Section 6.3) test the impact of a gradually changing provider population increasing up to 30%. Experiments 9–11 (Section 6.4) examine the effects of a consumer population change up to 10%. Experiments 12 and 13 (Section 6.5) explore changes in the locations of consumers and providers. Experiment 14 (Section 6.6) evaluates performance where all dynamic factors change simultaneously. Section 6.7 summarizes the findings of all experiments.

6.1. The Performance of the New CA Algorithm in Dynamic Trustee Profiles

This section presents the results of two experiments, in which we compare the performance of the updated CA algorithm (CA_NEW) with the performance of the old version (CA_OLD) and the performance of FIRE, when the service providers’ performance varies over time.
Experiment 1. A provider may alter its average performance by a maximum of 1.0 UG unit with a probability of 10% in every round (p_μC = 0.10, M = 1.0). The results are shown in Figure 1. Figure 1a shows that CA_NEW generally outperforms CA_OLD, with the most significant improvement observed in the initial interactions. Figure 1b shows that CA_NEW, except for the first interaction, outperforms FIRE in the initial interactions. However, FIRE adapts as the number of interactions increases and eventually achieves slightly better performance.
Experiment 2. A provider may switch to a different performance profile with a probability of 2% in every round (p_ProfileSwitch = 0.02). The results are depicted in Figure 2. Figure 2a demonstrates that CA_NEW consistently outperforms CA_OLD in all interactions, with an average difference of 2 UG units. Figure 2b shows that CA_NEW performs better than FIRE in all interactions except the first one.

6.2. The Performance of the New CA Algorithm in the Static Setting

This subsection presents the results of Experiment 3, which evaluates the performance of the updated CA algorithm (CA_NEW) in a static environment without any dynamic factors. The findings are depicted in Figure 3. Figure 3a demonstrates that CA_NEW performs better than CA_OLD in the initial interactions. Figure 3b shows that CA_NEW surpasses FIRE in all interactions, except for the first one.

6.3. The Performance of the New CA Algorithm in Provider Population Changes

This section evaluates the performance of the updated CA algorithm (CA_NEW) under conditions where the provider population gradually fluctuates up to 30%, through a series of five experiments:
Experiment 4. The provider population changes by a maximum of 2% in every round (p_PPC = 0.02). The results are shown in Figure 4.
Experiment 5. The provider population changes by a maximum of 5% in every round (p_PPC = 0.05). The results are shown in Figure 5.
Experiment 6. The provider population changes by a maximum of 10% in every round (p_PPC = 0.10). The results are shown in Figure 6.
Experiment 7. The provider population changes by a maximum of 20% in every round (p_PPC = 0.20). The results are shown in Figure 7.
Experiment 8. The provider population changes by a maximum of 30% in every round (p_PPC = 0.30). The results are shown in Figure 8.
In Experiments 4–6, we compare the performance of CA_NEW to both CA_OLD and FIRE. Figure 4a, Figure 5a and Figure 6a show that CA_NEW significantly outperforms CA_OLD, achieving higher UG across all interactions. Figure 4b, Figure 5b and Figure 6b reveal that as the provider population change rate rises from 2% to 10%, CA_NEW maintains better performance than FIRE.
Figure 9a,b demonstrate that CA_NEW is more resilient than CA_OLD to changes in the provider population. Specifically, when the provider population change rate increases from 2% to 10%, CA_OLD’s performance drops by 4 UG units (Figure 9a), whereas CA_NEW’s performance drops by only 2 UG units (Figure 9b).
Figure 9b,c indicate that CA_NEW’s performance drops more sharply than FIRE’s, suggesting that FIRE is more resilient to this environmental change. This trend raises the expectation that FIRE will eventually surpass CA_NEW at a higher provider population change rate. To verify this hypothesis, we conducted Experiments 7 and 8. The results, shown in Figure 7 and Figure 8, confirm that when the provider population change rate reaches 30%, FIRE outperforms CA_NEW.
As discussed in Section 4.2 of our previous work [7], FIRE demonstrates greater resilience to changes in the provider population compared to CA. This is primarily due to FIRE’s continuous adaptation over time, which enables it to maintain or even improve its performance. In these experiments, consumers rely on witness reputation, which gradually increases as witnesses (other consumers) remain in the system, contributing to FIRE’s stability. In contrast, CA heavily depends on the providers’ knowledge of their own capabilities, and when the provider population changes, newcomers must learn from scratch, leading to a more significant decline in performance. While CA_NEW shows an improvement over CA_OLD in scenarios involving provider population change, the fundamental differences in the underlying mechanisms still allow FIRE to outperform CA_NEW under these conditions.

6.4. The Performance of the New CA Algorithm in Consumer Population Changes

This section evaluates the performance of the updated CA algorithm (CA_NEW) in comparison to both CA_OLD and FIRE under conditions where the consumer population gradually fluctuates by up to 10%, through a series of three experiments.
Experiment 9. The consumer population changes by a maximum of 2% in every round (p_CPC = 0.02). The results are shown in Figure 10.
Experiment 10. The consumer population changes by a maximum of 5% in every round (p_CPC = 0.05). The results are shown in Figure 11.
Experiment 11. The consumer population changes by a maximum of 10% in every round (p_CPC = 0.10). The results are shown in Figure 12.
Figure 10a, Figure 11a and Figure 12a illustrate that CA_NEW outperforms CA_OLD, generally achieving a higher UG in the initial interactions. Figure 10b, Figure 11b and Figure 12b demonstrate that CA_NEW consistently performs better than FIRE throughout all interactions in this environmental change.
Figure 13a,b show that in the first interactions, both CA_OLD and CA_NEW improve their performance as the consumer population change rate increases from 2% to 10%. Nevertheless, Figure 12a reveals that when CPC = 10%, CA_OLD slightly surpasses CA_NEW in the first interaction. A possible explanation for this is that when CPC = 10%, the average UG of the first interaction is influenced by a larger number of newcomer agents who have their first interaction in later simulation rounds. During these rounds, service providers have established more connections and can more accurately evaluate their ability to provide the service using CA_OLD, resulting in a higher UG for service consumers. This observation led us to hypothesize that CA_OLD, compared to CA_NEW, is a more suitable choice for service providers that have remained in the system longer and have gained more knowledge about their service-providing capabilities, assuming their capabilities remain unchanged over time.

6.5. The Performance of the New CA Algorithm in Consumer and Provider Location Changes

In this subsection, we evaluate the performance of the updated CA algorithm (CA_NEW) in comparison to both CA_OLD and FIRE under conditions where consumers and providers change locations, by conducting the following two experiments.
Experiment 12. A consumer may move to a new location on the spherical world at a maximum angular distance of π/20 with a probability of 0.10 in every round (p_CLC = 0.10, Δϕ = π/20). The results are shown in Figure 14.
Experiment 13. A provider may move to a new location on the spherical world at a maximum angular distance of π/20 with a probability of 0.10 in every round (p_PLC = 0.10, Δϕ = π/20). The results are shown in Figure 15.
Figure 14a and Figure 15a indicate that CA_NEW demonstrates improved performance in the initial interactions compared to CA_OLD under both environmental changes. Figure 14b and Figure 15b show that CA_NEW outperforms FIRE in both experiments, except for the first interaction, where FIRE performs better.

6.6. The Performance of the New CA Algorithm Under the Effects of All Dynamic Factors

In this subsection, we report the results of the final experiment, which evaluates the performance of CA_NEW under the combined influence of all dynamic factors.
Experiment 14. A provider may alter its average performance by a maximum of 1.0 UG unit with a probability of 10% in every round (p_μC = 0.10, M = 1.0). A provider may switch to a different performance profile with a probability of 2% in every round (p_ProfileSwitch = 0.02). The provider population changes by a maximum of 2% in every round (p_PPC = 0.02). The consumer population changes by a maximum of 5% in every round (p_CPC = 0.05). A consumer may move to a new location on the spherical world at a maximum angular distance of π/20 with a probability of 0.10 in every round (p_CLC = 0.10, Δϕ = π/20). A provider may move to a new location on the spherical world at a maximum angular distance of π/20 with a probability of 0.10 in every round (p_PLC = 0.10, Δϕ = π/20).
The results, shown in Figure 16, demonstrate that CA_NEW consistently outperforms both CA_OLD and FIRE across all interactions.

6.7. An Overview of the Results

The simulation results reveal that the updated CA algorithm (CA_NEW) demonstrates superior performance over its predecessor (CA_OLD) and the FIRE model under various environmental conditions:
  • Dynamic Trustee Profiles: CA_NEW outperforms CA_OLD across all interactions and shows resilience in handling provider performance fluctuations. While FIRE adapts more quickly in some cases, CA_NEW remains very competitive.
  • Static Environment: CA_NEW surpasses CA_OLD in the initial interactions and consistently outperforms FIRE, except in the first interaction.
  • Provider Population Changes: CA_NEW is more resilient than CA_OLD when provider population fluctuations increase up to 10%, maintaining better performance. However, as changes reach 30%, FIRE eventually outperforms CA_NEW, indicating FIRE’s resilience in this environmental change.
  • Consumer Population Changes: CA_NEW generally achieves a higher UG than CA_OLD, though at high levels of consumer population changes, CA_OLD performs slightly better in the first interaction, suggesting that CA_OLD may be better for old service providers with stable capabilities. CA_NEW consistently outperforms FIRE under consumer population changes.
  • Consumer and Provider Location Changes: CA_NEW shows improved performance over CA_OLD in initial interactions and outperforms FIRE in all interactions, except for the first one.
  • Combined Dynamic Factors: When all dynamic factors are in effect, CA_NEW maintains superior performance over both CA_OLD and FIRE across all interactions, demonstrating its robustness in complex environments.
Overall, CA_NEW is a significant improvement over CA_OLD, with better resilience and adaptability, although there are indications that CA_OLD may be a better choice for old service providers that do not change their behavior. While FIRE exhibits some advantages in extreme environmental changes, CA_NEW remains highly competitive across diverse scenarios. Building on previous research [30], where we examined how trustors can identify environmental dynamics and select the optimal trust model (CA or FIRE) to maximize utility, a natural direction for future work is to explore how trustees can recognize environmental changes and assess their own self-awareness to determine whether CA_OLD or CA_NEW would yield the best performance. Adopting this comprehensive RL approach could enhance adaptability and effectiveness across a wide range of scenarios.

7. Towards a Comprehensive Evaluation of the CA Model

In [31], Wang et al. define quality of trust (QoT) as the quality of trust models and propose several evaluation criteria, including subjectivity, dynamicity, context awareness, privacy preservation, scalability, robustness, overhead, explainability, and user acceptance. However, they highlight the absence of a uniform standard. Indeed, Fotia et al. in [32], focusing on edge-based IoT systems, provide a different set of evaluation criteria (accuracy, security, availability, reliability, heterogeneity, lightweightness, flexibility, and scalability), referring to them as trust management requirements, whereas in [33], integrity, computability, and flexibility are identified as the key properties of trust models.
To evaluate the CA model comprehensively, we selected thirteen trust model criteria synthesized from diverse but widely cited publications [31,32,33]. These criteria capture both technical properties (e.g., robustness, scalability, overhead) and human-centric qualities (e.g., explainability, user acceptance). Our selection reflects recurring themes across the literature and the pressing demands of open, dynamic MAS environments. While all thirteen criteria are important, their relative priority may vary based on application needs. For example, scalability and overhead are critical in IoT settings with resource constraints, whereas explainability and user acceptance are more relevant in human–agent collaboration scenarios. This section presents a comprehensive evaluation of the CA model based on these thirteen criteria, as follows.
Decentralization. Centralized cloud-based trust assessment has significant drawbacks, including a single point of failure, scalability issues, and susceptibility to data misuse and manipulation by the companies that own cloud servers. As a response, decentralized–distributed solutions are becoming the norm in trust management [34]. The CA model follows the decentralized approach by allowing each trustee to compute trust independently, eliminating reliance on a central authority.
Subjectivity. Trust is inherently subjective, as different service requesters may have different service requirements in different contexts, affecting their perception of the trustworthiness of the same service provider [33]. Therefore, trust models should take into account the trustor’s subjective opinion [31]. CA satisfies the subjectivity requirement because the service requester defines and broadcasts the task and its requirements and, after the task’s execution, gives feedback to the service provider, rating its performance. The service provider then decides whether the task was successfully executed, modifying the weight of the relevant connection based on the feedback received.
Context Awareness. Since trust is context-dependent, a service provider may be trustworthy for one task but unreliable for another. The context typically refers to the task type, but it can also refer to an objective or execution environment with specific requirements [31]. CA ensures context awareness by allowing trustees to maintain distinct trust connections for different tasks requested by the same trustor.
Dynamicity. Trust is inherently dynamic, continuously evolving in response to events, environmental changes, and resource fluctuations. Recent trust information is much more important than historical information [31]. The CA model satisfies the dynamicity requirement, since trust update is event-driven, taking place after each task execution, making it well suited for dynamic environments, where agents may frequently join, leave, or alter their behavior, as evidenced by the experiments in this study.
Availability. A trust model should ensure that all entities and services remain updated [35] and fully available, even in the face of attacks or resource constraints [32]. In the CA approach, the service requester does not assign a task to a specific service provider but instead broadcasts the task to all nearby service providers. This ensures that even if a particular service provider is unavailable due to resource limitations (e.g., battery depletion), another service provider that receives the request can still complete the task. Consequently, our approach effectively meets the availability requirement.
Integrity and Transparency. All transaction data and interaction outcomes should be fully recorded, efficiently stored, and easily retrieved [33]. Trust values should be accessible to all authorized network nodes whenever they need to assess the trustworthiness of any device in the system [34]. In the CA approach, trust information is stored locally by the trustee in the form of connections (trust relationships). This ensures that trust information remains available even when entities cross application boundaries or transition between systems. Additionally, any trustee can readily access its own trust information to evaluate its reliability for a given task, thereby fulfilling the integrity and transparency requirement.
Scalability. Since scalability is linked to processing load and time, a trust model must efficiently manage large-scale networks while maintaining stable performance, regardless of network size, and function properly when devices are added or removed [31]. Large-scale networks require increased communication and higher storage capacity, meaning that trust models must adapt to the growing number of nodes and interactions [26,35,36]. However, many existing trust management algorithms struggle to scale effectively in massively distributed systems [22]. The CA model is designed for highly dynamic environments, ensuring strong performance despite continuous population changes. Previous research [7] has experimentally demonstrated CA’s resilience to fluctuations in consumer populations. In this study, we present simulation experiments showing that the updated CA algorithm has significantly improved resilience, even when provider populations change. Additionally, since agents in the CA approach do not exchange trust information, the model avoids scalability issues related to agent communication. However, we have yet to evaluate how CA scales with an increasing number of nodes.
Overhead. A trust model should be simple and lightweight, as calculating trust scores may be impractical for IoT devices with limited computing power [26]. It is essential to ensure that a trust model does not excessively consume a device’s resources [21]. Both computational and storage overhead must be considered [31], as devices typically have constrained processing and storage capabilities and must prioritize their primary tasks over trust evaluation. Additionally, excessive computational overhead can hinder real-time trust assessments, negatively impacting time-sensitive applications. A trust model’s efficiency can be analyzed using big O notation for time and space complexity. The CA model is considered simple and lightweight since its algorithm avoids complex mathematical computations. We provide a sketch of a computational complexity analysis of Algorithm 2 in Appendix A.2, which outlines the core reasoning behind the CA model’s efficiency using big O notation. However, a detailed analysis (full formal proof) of its time and space complexity in big O notation remains to be conducted. Since all trust information is stored locally by the trustees, further research is needed to evaluate the CA model’s storage overhead and explore potential optimizations to minimize it.
Accuracy. A trust model must effectively identify and prevent malicious entities by ensuring precise classification [26]. It should achieve a high level of accuracy, meaning that the computed trust value closely reflects the true value (ground truth) [32]. Several studies [1,10,37,38] assess proposed trust models using metrics such as Precision, Recall, F1-score, accuracy, False Positive Rate (FPR), and True Positive Rate (TPR). However, the accuracy of the CA model has not yet been evaluated using these metrics.
Robustness. Trust models must withstand both anomalous behavior caused by sensor malfunctions [36] and trust-related attacks from insider attackers who deliberately act maliciously for personal gain or to disrupt system performance [26,39]. In heterogeneous networks, constant device connectivity and weak interoperability between network domains create numerous opportunities for malicious activities [31]. Traditional cryptographic and authentication methods are insufficient against insider attacks, where attackers possess valid cryptographic keys [36]. Existing trust models can only mitigate certain types of attacks, but none can fully defend against all threats [25]. Therefore, developing algorithms capable of detecting a broad range of malicious activity patterns is crucial [26]. In the following section, we examine common trust-related attacks from the literature and evaluate the CA model’s effectiveness in countering them.
Privacy Preservation. Ensuring a high level of privacy through data encryption is essential during trust assessment to prevent sensitive information leaks [34]. A trust model must safeguard both Identity Privacy (IP) and data privacy (DP) [31]. Specifically, feedback and interaction data must remain protected throughout all stages of data management, including collection, transmission, storage, fusion, and analysis. Additionally, identity details such as names and addresses should be shielded from unauthorized access, as linking trust information to real identities can lead to serious risks, such as Distributed Denial of Service (DDoS) Attacks [35]. In the CA approach, while entities generally do not exchange trust information (e.g., recommendations), interaction feedback is transmitted from the trustor to the trustee, allowing the trustee to assess and locally store its own trustworthiness in the form of connections. Therefore, protecting the trustor’s identity and ensuring data privacy during transmission, storage, and analysis are critical considerations in the CA approach. Exploring the integration of blockchain technology to enhance privacy protection could be a valuable future research direction.
Explainability. Trust models should be capable of providing clear and understandable explanations for their results. The ability to analyze and justify decisions is essential for enhancing user trust, compliance, and acceptance [40]. Additionally, it is important to clarify the processing logic and how trust metrics influence trust evaluations [31]. In this study, we made two key contributions to improve the explainability of our model. First, we conducted a semi-formal analysis to identify potential modifications to the CA algorithm’s processing logic that could enhance its performance. Then, we carried out a series of simulation experiments to demonstrate the effectiveness of these improvements.
User Acceptance. The acceptance of a trust model depends on factors such as Quality of Service (QoS), quality of experience (QoE), and individual user preferences [31]. It can be assessed by gathering user feedback through questionnaires. However, the user acceptance of the CA model has not yet been evaluated.
Overall, the CA model satisfies decentralization, subjectivity, integrity, and transparency, but several aspects require further enhancement and evaluation. Future research should focus on scalability, as the model’s performance in large-scale environments with a high number of nodes remains untested. Additionally, while trust data are stored locally, their impact on system overhead and storage efficiency needs further investigation. Accuracy assessment using standard metrics like Precision, Recall, and the F1-score is also necessary to validate its reliability. In terms of robustness, although the model resists false recommendations, its resilience against insider attacks requires deeper analysis. Privacy preservation remains an open challenge, particularly in safeguarding trustor identity and feedback transmission, which could benefit from encryption or blockchain-based solutions. Finally, user acceptance has yet to be assessed, making it essential to evaluate the model’s adoption based on Quality of Service (QoS) and quality of experience (QoE). Addressing these challenges will strengthen the CA model’s effectiveness and applicability in dynamic environments. To the best of our knowledge, no other models have undergone such a comprehensive evaluation.

8. Trust-Related Attacks

In this section, we examine the CA model’s resilience against the most common trust-related attacks identified in various research studies [2,16,21,22,23,25,31,34,35,36,41,42]. A malevolent node is typically defined as a socially uncooperative entity that consistently seeks to disrupt system operations [36]. Its primary objective is to provide low-quality services to conserve its own resources while still benefiting from the services offered by other nodes in the system. Malicious nodes can employ various trust-related attack strategies, each designed to evade detection through different deceptive tactics [25].
Malicious with Everyone (ME). In this attack, a node consistently provides low-quality services or misleading recommendations, regardless of the requester. This is one of the most fundamental types of attacks. To counter the ME attack, the CA approach could incorporate a contract-theoretic incentive mechanism. This mechanism would reward honest service providers with utility gains while penalizing dishonest service providers with utility losses, similarly to the approach in [19], by awarding or deducting credit coins.
Bad-mouthing Attack (BMA). In this attack, one or more malicious nodes deliberately provide bad recommendations to damage the reputation of a well-behaved node, reducing its chances of being selected as a service provider. However, in the CA approach, agents do not exchange trust information through recommendations. Since nodes cannot act as recommenders, our approach is inherently immune to BMA. Most studies assume that service requesters are honest, but as noted in [17], service requesters can also be “ill-intended” or “dishonest”, deliberately giving low ratings to a service provider despite receiving good service. Additionally, some service requesters may be “amateur” and incapable of accurately assessing service quality. This represents a specific type of bad-mouthing attack, against which our approach is also resilient. In the CA model, service providers have the autonomy to accept or reject service requests, allowing them to maximize their profits while avoiding dishonest or amateur service requesters. If a service requester unfairly assigns a low rating to a high-quality service, the service provider will respond by decreasing the weight of its connection with that service requester, reducing the likelihood of future interactions. This serves as a built-in mechanism to penalize dishonest service requesters.
Ballot Stuffing (BSA) or Good-mouthing Attack. This attack occurs when one or more malicious recommenders (a collusion attack) falsely provide positive feedback for a low-quality service provider to boost its reputation and increase its chances of being selected, ultimately disadvantaging legitimate, high-quality service providers. Since the CA approach does not involve nodes exchanging recommendations, it is inherently resistant to any BSA carried out by recommender nodes, which is common in other trust models. However, a specific variation of this attack can occur when an ill-intended or amateur service requester assigns a service provider a higher rating than it actually deserves [17]. In this scenario, while the service provider benefits from an inflated rating, it also suffers by misjudging its actual service capability. Since the CA algorithm requires the service provider to compute the average weight of its existing connections to assess its performance, an inaccurate self-evaluation could lead to financial losses when dealing with honest service requesters. Thus, service providers have a strong incentive to identify and avoid dishonest or inexperienced service requesters. To mitigate this risk, we could implement a mechanism similar to the Tlocal and Tglobal value comparison from [17] or the Internal Similarity concept from [23], where the service provider would evaluate the consistency of a given rating by comparing it to the average or median weights of connections with other service providers for the same service. Alternatively, a contract-theoretic incentive mechanism, as in [18], could deter dishonest service requesters by requiring them to pay a disproportionately high amount for low-quality services received. Additionally, integrating credit quotas (coins) as proposed in [19] could further discourage good-mouthing attacks. The choice of mitigation strategy would depend on the specific application environment in which the CA approach is implemented.
Self-promoting Attack. In this attack, a malicious node falsely provides positive recommendations about itself to increase its chances of being chosen as a service provider. Once selected, it then delivers poor services or exploits the network. This type of attack is especially effective against nodes that have not previously interacted with the attacker. In the CA approach, service providers do not provide recommendations about themselves, and service requesters do not select service providers. As a result, traditional self-promotion tactics are ineffective. However, a dishonest service provider could still choose to provide a service despite knowing it would harm the service requester. To prevent such behavior, it is crucial to incorporate a penalty mechanism that imposes a financial loss on dishonest service providers. Potential solutions include the fee charge concept from [17], the contract-theoretic mechanism from [18], or an incentive mechanism using credit quotas, such as TrustCoin [19]. The choice of mechanism would depend on the specific application scenario.
Opportunistic Service Attack (OSA). This attack occurs when a node manipulates its behavior based on its reputation. When its reputation declines, it offers high-quality services, but once its reputation improves, it provides poor services. This strategy allows the node to sustain a sufficient level of trust to continue being chosen as a service provider. However, in the CA approach, OSA does not enhance a malicious node’s chances of being selected as a service provider since trustors do not choose trustees. Consequently, our approach remains resilient against opportunistic service attacks.
Sybil Attack (SA). This attack occurs when a malicious node generates multiple fake identities to manipulate the reputation of a target node unfairly by providing various ratings. In the CA approach, a malicious service requester could use this tactic to hinder a target service provider’s ability to accurately assess its service capability, deplete its resources, and reduce its chances of being selected as a service provider. To counter this attack, we can apply the same mechanisms proposed for addressing the good-mouthing attack. Specifically, an approach based on the Internal Similarity concept [23] or a contract-theoretic mechanism based on incentives [18] could either deter attackers from carrying out their attacks or assist the service provider in identifying and isolating malicious service requesters.
Whitewashing Attack (WA), also known as a Re-entry Attack, occurs when a malicious node abandons the system after its reputation drops below a certain threshold and then re-enters with a new identity to erase its negative history and reset its reputation to a default value. In the CA approach, this attack is ineffective because changing a service provider’s identity does not impact its likelihood of being selected as a service provider, as trustors do not choose trustees.
On–Off Attack (OOA), also called a Random Attack, is one of the most challenging attacks to detect. In this attack, a malicious node alternates its behavior between good (ON) and bad (OFF) in an unpredictable manner, restoring trust just before launching another attack. During the ON phase, the node builds a positive reputation, which it later exploits for malicious activities. The CA approach is resilient against OOA since trustors do not select trustees, meaning a high reputation does not increase a node’s chances of being chosen as a service provider.
Discrimination or Selective Misbehavior Attack. This kind of attack occurs when an attacker selectively manipulates its behavior by providing legitimate services for certain network tasks while acting maliciously against others. To mitigate this attack in the CA approach, a penalty mechanism should be implemented to impose financial consequences on dishonest service providers. Depending on the application scenario, this could involve the “fee charge concept” [17], a contract-theoretic mechanism [18], or an incentive mechanism using credit quotas like in TrustCoin [19]. One form of this attack is selfish behavior, where service providers prioritize easier tasks that require less effort. Rational agents would evaluate each task based on expected utility—calculating the probability of success multiplied by the reward—and choose accordingly. To counter this in the CA approach, it is crucial to implement incentives that encourage capable agents to take on more challenging tasks. For example, a contract-theoretic mechanism can ensure that highly skilled agents prefer difficult tasks with higher expected utility.
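The expected-utility reasoning described above can be made concrete with a short sketch. The task names, rewards, success probabilities, and the difficulty bonus below are hypothetical; the bonus simply stands in for a contract-theoretic incentive of the kind proposed in [18], which can make the harder task the rational choice.

```python
def expected_utility(success_prob, reward, difficulty_bonus=0.0):
    """Expected utility = P(success) * (reward + bonus); the bonus models a
    contract-theoretic incentive for taking on harder tasks (illustrative)."""
    return success_prob * (reward + difficulty_bonus)

tasks = [
    {"name": "easy", "success_prob": 0.9, "reward": 5},
    {"name": "hard", "success_prob": 0.6, "reward": 7},
]

# Without incentives, a selfish provider prefers the easy task (0.9*5 = 4.5 > 0.6*7 = 4.2).
best = max(tasks, key=lambda t: expected_utility(t["success_prob"], t["reward"]))
print(best["name"])  # easy

# A bonus on the hard task flips the preference (0.6 * (7 + 3) = 6.0 > 4.5).
best = max(tasks, key=lambda t: expected_utility(
    t["success_prob"], t["reward"],
    difficulty_bonus=3 if t["name"] == "hard" else 0))
print(best["name"])  # hard
```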
Denial of Service (DoS) and Distributed Denial of Service (DDoS) Attacks aim to disrupt a service by overwhelming it with excessive traffic or resource-intensive requests. While a DoS attack originates from a single attacker, a DDoS attack involves multiple compromised devices working together to target a single system, making it more difficult to counter. In the CA approach, these attacks remain a potential threat. Therefore, future research should focus on developing effective mechanisms to mitigate DoS and DDoS attacks.
A Storage Attack occurs when a malicious node manipulates stored feedback data by deleting, modifying, or injecting fake information. In the CA approach, an attacker may attempt to alter or remove the locally stored connections of a trustee. To counter this threat, blockchain technology can be utilized as a safeguard to ensure data integrity and prevent tampering.
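A full blockchain deployment is beyond the scope of this discussion, but the underlying tamper-evidence idea can be illustrated with a minimal hash-chained local log. This is a sketch with hypothetical field names, not the CA model's specified storage mechanism: any later modification or deletion of a stored connection record breaks the chain and is therefore detectable.

```python
import hashlib
import json

def chain_append(log, record):
    """Append a connection record to a hash-chained local log.
    Each entry stores the hash of the previous entry, so any later
    modification or deletion breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def chain_is_intact(log):
    """Re-derive every hash and verify the links are unbroken."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
chain_append(log, {"trustor": "C1", "task": "task3", "weight": 0.55})
chain_append(log, {"trustor": "C2", "task": "task3", "weight": 0.505})
print(chain_is_intact(log))        # True
log[0]["record"]["weight"] = 0.9   # a storage attack tampers with a stored weight
print(chain_is_intact(log))        # False
```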

9. Conclusions and Future Work

Various approaches have been developed to assess trust and reputation in real-world MASs, such as peer-to-peer (P2P) networks, online marketplaces, pervasive computing, Smart Grids, and the Internet of Things (IoT). However, existing trust models face significant challenges, including agent mobility, dynamic behavioral changes, the continuous entry and exit of agents, and the cold start problem.
To address these issues, we introduced the Create Assemblies (CA) model, inspired by synaptic plasticity in the human brain, where trustees (service providers) can evaluate their own capabilities and locally store trust information, allowing for improved agent mobility handling, reduced communication overhead, resilience to disinformation, and enhanced privacy.
Previous work [7] comparing CA with FIRE, a well-known trust model, revealed that CA adapts well to consumer population fluctuations but is less resilient to provider population changes and continuous performance shifts. This work builds on those findings: a semi-formal analysis was used to identify performance pitfalls, which were then addressed by allowing service providers to self-assess whether their performance has fallen below a certain threshold, thereby ensuring faster reaction and better adaptability in dynamic environments.
The simulation results confirm that CA_NEW outperforms the original CA_OLD, with greatly improved resilience and adaptability, although CA_OLD may still be preferable in scenarios of consumer population change, where long-standing service providers maintain stable performance. While FIRE has certain advantages under extreme environmental changes, CA_NEW remains highly competitive across a wide variety of environmental conditions. Building on prior research [30], where we explored how trustors can detect environmental dynamics and select the optimal trust model (CA or FIRE) to maximize utility, a natural direction for future work is to explore how trustees can detect environmental dynamics and gauge their own level of self-awareness in order to choose between CA_OLD and CA_NEW for optimal performance.
This paper also analyzed CA with respect to established evaluation criteria for trust models and discussed its resilience to most well-known trust-related attacks, proposing countermeasures for dishonest behaviors. While the CA model meets key criteria such as decentralization, subjectivity, context awareness, dynamicity, availability, integrity, and transparency, it requires further research and improvements. Table 10 summarizes the CA model’s evaluation across all thirteen criteria, providing a concise overview of its current strengths and areas for future enhancement. The model’s holistic evaluation not only highlights its strengths but also demonstrates a commitment to continuous refinement, positioning it as a highly promising foundation for future trust management solutions.

Author Contributions

Conceptualization, Z.L. and D.K.; methodology, Z.L. and D.K.; software, Z.L.; validation, Z.L. and D.K.; formal analysis, Z.L. and D.K.; investigation, Z.L. and D.K.; resources, Z.L.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L. and D.K.; visualization, Z.L. and D.K.; supervision, D.K.; project administration, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The research data will be made available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1

Proof of Proposition 1.
It suffices to show by mathematical induction that the n-th connection for $task_1$ will be initialized to the value of 0.45.
Base case (n = 2). We know that $BP_1$ has initialized its second connection, $co_2$, for $task_1$ to the value 0.45.
Induction hypothesis: Assume that Proposition 1 holds for the k-th connection that $BP_1$ creates for $task_1$.
Induction step: We show that Proposition 1 holds for the (k+1)-th connection that $BP_1$ creates for $task_1$.
$BP_1$ calculates the average of the weights of its k connections for $task_1$ as follows:
$$average = \frac{w_{co_1} + w_{co_2} + \cdots + w_{co_{k-1}} + w_{co_k}}{k}, \quad (A1)$$
where $w_{co_n}$ denotes the weight of the n-th connection of $BP_1$ for $task_1$.
From the induction hypothesis, we have
$$w_{co_k} = 0.45, \quad (A2)$$
and the average of the weights of the first k−1 connections is equal to 0.45:
$$\frac{w_{co_1} + w_{co_2} + \cdots + w_{co_{k-1}}}{k-1} = 0.45 \;\Leftrightarrow\; w_{co_1} + w_{co_2} + \cdots + w_{co_{k-1}} = (k-1) \cdot 0.45. \quad (A3)$$
From (A1)–(A3), we obtain $average = \frac{(k-1) \cdot 0.45 + 0.45}{k} = 0.45$, so the (k+1)-th connection is initialized to 0.45. □
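The proposition can also be checked numerically. The following minimal sketch assumes the setting of Proposition 1, namely that each new connection for $task_1$ is initialized to the average weight of the existing connections and that those weights are not updated in between; under this assumption every new connection starts at 0.45.

```python
# Numerical check of Proposition 1 under the stated assumptions.
weights = [0.45, 0.45]           # the first two connections for task1, as in the base case
for n in range(3, 11):           # create connections 3..10
    new_weight = sum(weights) / len(weights)   # initialize to the average of existing weights
    weights.append(new_weight)
    assert abs(new_weight - 0.45) < 1e-12      # every new connection starts at 0.45
print(weights[-1])                # 0.45
```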

Appendix A.2

Sketch of Computational Complexity Analysis of Algorithm 2. This sketch highlights the dominant cost contributors. A full formal proof would require additional assumptions and details regarding data structures and implementation models.
Time Complexity Analysis. We analyze the time complexity per iteration of the main loop of Algorithm 2, focusing on dominant, input-dependent operations. Constant time steps and scalar updates are noted but not emphasized in the final complexity.
Notation:
  • t: number of distinct known tasks (c, r) stored or processed by the agent, where
    • c is a task category;
    • r is a set of requirements.
  • n: number of agents in the system.
  • q: number of incoming messages in the agent’s message list.
  • m: number of connections currently held by the agent; in the worst case, m = O(n · t).
Step-by-Step Breakdown
1. Initialization (line 1):
  • Creating an empty structure: O(1).
2. Message Handling and Connection Setup (lines 3–19):
  • Broadcast to all agents (line 4): O(n).
  • Append message (line 6): O(1).
  • Connection handling (lines 7–14 or 15–19):
    • Worst case (lines 7–14): linear search in connections, O(m); finding similar connections and averaging their weights, O(m). Total: O(m).
    • Alternative case (lines 15–19): O(1).
  • Overall block cost: O(n + m).
3. Select and Attempt Task (lines 20–33):
  • For each message (q in total), find the corresponding connection (worst-case linear search): O(q · m).
  • Other steps (task status, threshold check, performance evaluation): each O(1).
  • Message deletion (search + shift in list): O(q).
  • Total for this block: O(q · m + q) = O(q · m).
4. Update Connection Weight (lines 34–38): direct updates and conditionals: O(1).
5. Dynamic Profile Update (lines 39–43):
  • Iterate over and filter all connections: O(m).
  • Search through the task list: O(t).
  • Total: O(m + t).
Total Time Complexity: O(n + m + q · m + m + t) = O(n + q · m + t) = O(q · n · t), since, as stated, m = O(n · t) in the worst case.
Space Complexity. We estimate the memory requirements per agent, focusing on the key data structures and how they scale with the input parameters:
  • i.bad_tasks: a map from tasks to Booleans: O(t).
  • Message list M: up to q messages: O(q).
  • Connection list: at most one connection per known task per agent: O(n · t).
  • Local scalars and flags: O(1).
Total Space Complexity: O(q + n · t).

References

  1. Fabi, A.K.; Thampi, S.M. A psychology-inspired trust model for emergency message transmission on the Internet of Vehicles (IoV). Int. J. Comput. Appl. 2020, 44, 480–490. [Google Scholar] [CrossRef]
  2. Jabeen, F.; Khan, M.K.; Hameed, S.; Almogren, A. Adaptive and survivable trust management for Internet of Things systems. IET Inf. Secur. 2021, 15, 375–394. [Google Scholar] [CrossRef]
  3. Hattab, S.; Lejouad Chaari, W. A generic model for representing openness in multi-agent systems. Knowl. Eng. Rev. 2021, 36, e3. [Google Scholar] [CrossRef]
  4. Player, C.; Griffiths, N. Improving trust and reputation assessment with dynamic behaviour. Knowl. Eng. Rev. 2020, 35, e29. [Google Scholar] [CrossRef]
  5. Jelenc, D. Toward unified trust and reputation messaging in ubiquitous systems. Ann. Telecommun. 2021, 76, 119–130. [Google Scholar] [CrossRef]
  6. Sato, K.; Sugawara, T. Multi-Agent Task Allocation Based on Reciprocal Trust in Distributed Environments. In Agents and Multi-Agent Systems: Technologies and Applications; Jezic, G., Chen-Burger, J., Kusek, M., Sperka, R., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2021; Smart Innovation, Systems and Technologies; Volume 241. [Google Scholar] [CrossRef]
  7. Lygizou, Z.; Kalles, D. A biologically inspired computational trust model for open multi-agent systems which is resilient to trustor population changes. In Proceedings of the 13th Hellenic Conference on Artificial Intelligence (SETN ′24), Athens, Greece, 11–13 September 2024; Article No. 29. pp. 1–9. [Google Scholar] [CrossRef]
  8. Samuel, O.; Javaid, N.; Khalid, A.; Imran, M.; Nasser, N. A Trust Management System for Multi-Agent System in Smart Grids Using Blockchain Technology. In Proceedings of the 2020 IEEE Global Communications Conference (GLOBECOM 2020), Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  9. Khalid, R.; Samuel, O.; Javaid, N.; Aldegheishem, A.; Shafiq, M.; Alrajeh, N. A Secure Trust Method for Multi-Agent System in Smart Grids Using Blockchain. IEEE Access 2021, 9, 59848–59859. [Google Scholar] [CrossRef]
  10. Bahutair, M.; Bouguettaya, A.; Neiat, A.G. Multi-Perspective Trust Management Framework for Crowdsourced IoT Services. IEEE Trans. Serv. Comput. 2022, 15, 2396–2409. [Google Scholar] [CrossRef]
  11. Meyerson, D.; Weick, K.E.; Kramer, R.M. Swift trust and temporary groups. In Trust in Organizations: Frontiers of Theory and Research; Kramer, R.M., Tyler, T.R., Eds.; Sage Publications, Inc.: Thousand Oaks, CA, USA, 1996; pp. 166–195. [Google Scholar] [CrossRef]
  12. Wang, J.; Jing, X.; Yan, Z.; Fu, Y.; Pedrycz, W.; Yang, L.T. A Survey on Trust Evaluation Based on Machine Learning. ACM Comput. Surv. 2020, 53, 1–36. [Google Scholar] [CrossRef]
  13. Ahmad, I.; Yau, K.-L.A.; Keoh, S.L. A Hybrid Reinforcement Learning-Based Trust Model for 5G Networks. In Proceedings of the 2020 IEEE Conference on Application, Information and Network Security (AINS), Kota Kinabalu, Malaysia, 17–19 November 2020; pp. 20–25. [Google Scholar] [CrossRef]
  14. Kolomvatsos, K.; Kalouda, M.; Papadopoulou, P.; Hadjieftymiades, S. Fuzzy trust modeling for pervasive computing applications. J. Data Intell. 2021, 2, 101–115. [Google Scholar] [CrossRef]
  15. Wang, E.K.; Chen, C.M.; Zhao, D.; Zhang, N.; Kumari, S. A dynamic trust model in Internet of Things. Soft Comput. 2020, 24, 5773–5782. [Google Scholar] [CrossRef]
  16. Latif, R. ConTrust: A Novel Context-Dependent Trust Management Model in Social Internet of Things. IEEE Access 2022, 10, 46526–46537. [Google Scholar] [CrossRef]
  17. Alam, S.; Zardari, S.; Shamsi, J.A. Blockchain-Based Trust and Reputation Management in SIoT. Electronics 2022, 11, 3871. [Google Scholar] [CrossRef]
  18. Fragkos, G.; Minwalla, C.; Plusquellic, J.; Tsiropoulou, E.E. Local Trust in Internet of Things Based on Contract Theory. Sensors 2022, 22, 2393. [Google Scholar] [CrossRef]
  19. Pan, Q.; Wu, J.; Li, J.; Yang, W.; Guan, Z. Blockchain and AI Empowered Trust-Information-Centric Network for Beyond 5G. IEEE Netw. 2020, 34, 38–45. [Google Scholar] [CrossRef]
  20. Muhammad, S.; Umar, M.M.; Khan, S.; Alrajeh, N.A.; Mohammed, E.A. Honesty-Based Social Technique to Enhance Cooperation in Social Internet of Things. Appl. Sci. 2023, 13, 2778. [Google Scholar] [CrossRef]
  21. Ali, S.E.; Tariq, N.; Khan, F.A.; Ashraf, M.; Abdul, W.; Saleem, K. BFT-IoMT: A Blockchain-Based Trust Mechanism to Mitigate Sybil Attack Using Fuzzy Logic in the Internet of Medical Things. Sensors 2023, 23, 4265. [Google Scholar] [CrossRef]
  22. Kouicem, D.E.; Imine, Y.; Bouabdallah, A.; Lakhlef, H. Decentralized Blockchain-Based Trust Management Protocol for the Internet of Things. IEEE Trans. Dependable Secur. Comput. 2022, 19, 1292–1306. [Google Scholar] [CrossRef]
  23. Ouechtati, H.; Nadia, B.A.; Lamjed, B.S. A fuzzy logic-based model for filtering dishonest recommendations in the Social Internet of Things. J. Ambient. Intell. Hum. Comput. 2023, 14, 6181–6200. [Google Scholar] [CrossRef]
  24. Mianji, E.M.; Muntean, G.-M.; Tal, I. Trust and Reputation Management for Data Trading in Vehicular Edge Computing: A DRL-Based Approach. In Proceedings of the 2024 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Toronto, ON, Canada, 19–21 June 2024; pp. 1–7. [Google Scholar] [CrossRef]
  25. Marche, C.; Nitti, M. Trust-Related Attacks and Their Detection: A Trust Management Model for the Social IoT. IEEE Trans. Netw. Serv. Manag. 2021, 18, 3297–3308. [Google Scholar] [CrossRef]
  26. Kumari, S.; Kumar, S.M.D.; Venugopal, K.R. Trust Management in Social Internet of Things: Challenges and Future Directions. Int. J. Com. Dig. Syst. 2023, 14, 899–920. [Google Scholar] [CrossRef]
  27. Huynh, T.D.; Jennings, N.R.; Shadbolt, N.R. An integrated trust and reputation model for open multi-agent systems. Auton. Agents Multi-Agent Syst. 2006, 13, 119–154. [Google Scholar] [CrossRef]
  28. Lygizou, Z.; Kalles, D. A Biologically Inspired Computational Trust Model based on the Perspective of the Trustee. In Proceedings of the 12th Hellenic Conference on Artificial Intelligence (SETN ′22), Corfu, Greece, 7–9 September 2022; Article No. 7. pp. 1–10. [Google Scholar] [CrossRef]
  29. Cohen, P. Empirical Methods for Artificial Intelligence; The MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  30. Lygizou, Z.; Kalles, D. Using Deep Q-Learning to Dynamically Toggle Between Push/Pull Actions in Computational Trust Mechanisms. Mach. Learn. Knowl. Extr. 2024, 6, 1413–1438. [Google Scholar] [CrossRef]
  31. Wang, J.; Yan, Z.; Wang, H.; Li, T.; Pedrycz, W. A Survey on Trust Models in Heterogeneous Networks. IEEE Commun. Surv. Tutor. 2022, 24, 2127–2162. [Google Scholar] [CrossRef]
  32. Fotia, L.; Delicato, F.; Fortino, G. Trust in Edge-Based Internet of Things Architectures: State of the Art and Research Challenges. ACM Comput. Surv. 2023, 55, 1–34. [Google Scholar] [CrossRef]
  33. Wei, L.; Wu, J.; Long, C. Enhancing Trust Management via Blockchain in Social Internet of Things. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 159–164. [Google Scholar] [CrossRef]
  34. Amiri-Zarandi, M.; Dara, R.A. Blockchain-based Trust Management in Social Internet of Things. In Proceedings of the 2020 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Calgary, AB, Canada, 17–22 August 2020; pp. 49–54. [Google Scholar] [CrossRef]
  35. Hankare, P.; Babar, S.; Mahalle, P. Trust Management Approach for Detection of Malicious Devices in SIoT. Teh. Glas. 2021, 15, 43–50. [Google Scholar] [CrossRef]
  36. Talbi, S.; Bouabdallah, A. Interest-based trust management scheme for social internet of things. J. Ambient. Intell. Hum. Comput. 2020, 11, 1129–1140. [Google Scholar] [CrossRef]
  37. Sagar, S.; Mahmood, A.; Sheng, Q.Z.; Zhang, W.E. Trust Computational Heuristic for Social Internet of Things: A Machine Learning-based Approach. In Proceedings of the 2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  38. Sagar, S.; Mahmood, A.; Sheng, M.; Zaib, M.; Zhang, W. Towards a Machine Learning-driven Trust Evaluation Model for Social Internet of Things: A Time-aware Approach. In Proceedings of the 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous ′20), Darmstadt, Germany, 7–9 December 2020; pp. 283–290. [Google Scholar] [CrossRef]
  39. Aalibagi, S.; Mahyar, H.; Movaghar, A.; Stanley, H.E. A Matrix Factorization Model for Hellinger-Based Trust Management in Social Internet of Things. IEEE Trans. Dependable Secur. Comput. 2022, 19, 2274–2285. [Google Scholar] [CrossRef]
  40. Ullah, F.; Salam, A.; Amin, F.; Khan, I.A.; Ahmed, J.; Alam Zaib, S.; Choi, G.S. Deep Trust: A Novel Framework for Dynamic Trust and Reputation Management in the Internet of Things (IoT)-Based Networks. IEEE Access 2024, 12, 87407–87419. [Google Scholar] [CrossRef]
  41. Alemneh, E.; Senouci, S.-M.; Brunet, P.; Tegegne, T. A Two-Way Trust Management System for Fog Computing. Future Gener. Comput. Syst. 2020, 106, 206–220. [Google Scholar] [CrossRef]
  42. Tu, Z.; Zhou, H.; Li, K.; Song, H.; Yang, Y. A Blockchain-Based Trust and Reputation Model with Dynamic Evaluation Mechanism for IoT. Comput. Netw. 2022, 218, 109404. [Google Scholar] [CrossRef]
Figure 1. (a) A performance comparison of CA_NEW and CA_OLD, when providers change performance with 10% probability per round: CA_NEW achieves better performance, especially in early interactions. (b) A performance comparison of CA_NEW and FIRE, when providers change performance with 10% probability per round: CA_NEW excels in early interactions (except for the first interaction), while FIRE adapts and slightly outperforms over time.
Figure 2. (a) A performance comparison of CA_NEW and CA_OLD when providers switch performance profiles with 2% probability per round: CA_NEW consistently outperforms CA_OLD by an average of 2 UG units. (b) A performance comparison of CA_NEW and FIRE when providers switch performance profiles with 2% probability per round: CA_NEW outperforms FIRE in all interactions except for the first one.
Figure 3. (a) A performance comparison of CA_NEW and CA_OLD in a static environment: CA_NEW achieves better performance in early interactions. (b) A performance comparison of CA_NEW and FIRE in a static environment: CA_NEW outperforms FIRE in all interactions except for the first.
Figure 4. (a) A performance comparison of CA_NEW and CA_OLD when the provider population changes with 2% probability per round: CA_NEW outperforms in all interactions. (b) A performance comparison of CA_NEW and FIRE when the provider population changes with 2% probability per round.
Figure 5. (a) A performance comparison of CA_NEW and CA_OLD when the provider population changes with 5% probability per round: CA_NEW outperforms in all interactions. (b) A performance comparison of CA_NEW and FIRE when the provider population changes with 5% probability per round.
Figure 6. (a) A performance comparison of CA_NEW and CA_OLD when the provider population changes with 10% probability per round: CA_NEW outperforms in all interactions. (b) A performance comparison of CA_NEW and FIRE when the provider population changes with 10% probability per round.
Figure 7. A performance comparison of CA_NEW and FIRE when the provider population changes with 20% probability per round.
Figure 8. A performance comparison of CA_NEW and FIRE when the provider population changes with 30% probability per round.
Figure 9. (a) Performance comparison of CA_OLD when provider population changes with probability of 2%, 5%, and 10%; performance drops by 4 UG units. (b) Performance comparison of CA_NEW when provider population changes with probability of 2%, 5%, and 10%; performance drops by 2 UG units. (c) Performance comparison of FIRE when provider population changes with probability of 2%, 5%, and 10%; FIRE is more resilient than CA to this environmental change.
Figure 10. (a) Performance comparison of CA_NEW and CA_OLD when consumer population changes with probability of 2%. (b) Performance comparison of CA_NEW and FIRE when consumer population changes with probability of 2%.
Figure 11. (a) Performance comparison of CA_NEW and CA_OLD when consumer population changes with probability of 5%. (b) Performance comparison of CA_NEW and FIRE when consumer population changes with probability of 5%.
Figure 12. (a) Performance comparison of CA_NEW and CA_OLD when consumer population changes with probability of 10%. (b) Performance comparison of CA_NEW and FIRE when consumer population changes with probability of 10%.
Figure 13. (a) Performance comparison of CA_NEW in consumer population changes of 2%, 5%, and 10%. (b) Performance comparison of CA_OLD in consumer population changes of 2%, 5%, and 10%.
Figure 14. (a) A performance comparison of CA_NEW and CA_OLD when consumers change locations with a probability of 10% per round. (b) A performance comparison of CA_NEW and FIRE when consumers change locations with a probability of 10% per round.
Figure 15. (a) A performance comparison of CA_NEW and CA_OLD when providers change locations with a probability of 10% per round. (b) A performance comparison of CA_NEW and FIRE when providers change locations with a probability of 10% per round.
Figure 16. (a) Performance comparison of CA_NEW and CA_OLD when all dynamic factors are in effect: CA_NEW outperforms. (b) Performance comparison of CA_NEW and FIRE when all dynamic factors are in effect: CA_NEW outperforms.
Table 1. Comparison between FIRE and CA trust models.
Feature | FIRE Model | CA Model
Initiation of Trust Evaluation | Trustor initiates trust and selects a trustee | Trustee decides whether to engage with a task based on its trust level
Sources of Trust Information | Interaction trust (IT), witness reputation (WR), role-based trust (RT), certified reputation (CR) | Single source: trust learned via synaptic-style feedback from experience
Trust Update Mechanism | Weighted aggregation of direct and indirect trust evidence | Adjusts connection weights dynamically based on task success/failure feedback
Cold Start Handling | Leverages indirect trust (WR, CR) when no prior interaction exists (IT is unavailable) | Trustee initializes connection weight using prior experience or a default value
Scalability | Dependent on network structure; may suffer in large-scale systems due to reliance on witness availability | High: fully decentralized, with no inter-agent trust exchange
Communication Overhead | Moderate to high: requires trustor to query multiple witnesses for WR or providers for CR | Very low: no inter-agent trust exchange; all trust is stored and updated locally
Resilience to False Reports | Vulnerable to dishonest witnesses or biased CR | Resistant to recommendation attacks
Trust Storage | Distributed: consumers store IT values locally, and providers store CR values locally | Fully local: trust is stored as neural-like weights in each trustee/provider
Biological Inspiration | None | Yes: based on synaptic plasticity and neural assembly formation
Table 2. Notation table.
Symbol | Meaning
i, j | Agent identifiers
~j | Any agent other than j
task = (c, r) | A task defined by category c and requirement r
r' > r | Requirement r' signifies a more difficult task than r
m = (request, i, task) | Request message from agent i
co = (i, j, w, task) | A connection between i and j, with weight w representing the trust value of trustor j for trustee i for executing task
_ | Placeholder for any value
M | List of received request messages
Threshold | Trust threshold required to attempt a task
w | Trust/connection weight
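For readers who prefer code to notation, the entries of Table 2 can be rendered as plain data structures. The following Python sketch is purely illustrative; the class and field names are assumptions made for clarity and do not correspond to a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    category: str        # c: task category (e.g., a service ID)
    requirement: float   # r: required performance level; a larger r means a harder task

@dataclass
class RequestMessage:
    sender: str          # i: the requesting agent; m = (request, i, task)
    task: Task

@dataclass
class Connection:
    trustee: str         # i: the provider that stores the connection
    trustor: str         # j: the consumer the connection refers to
    weight: float        # w: trust value of trustor j for trustee i on this task
    task: Task           # co = (i, j, w, task)

co = Connection("P1", "C1", 0.5, Task("service_A", 0.0))
print(co.weight >= 0.5)   # compare against the Threshold before attempting the task
```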
Table 3. Explanation of requirements.
Tasks | Explanation of the Requirement
task1 = (service_ID, performance_level = PERFECT) | Performance must be equal to 10
task2 = (service_ID, performance_level = GOOD) | Performance must be greater than or equal to 5
task3 = (service_ID, performance_level = OK) | Performance must be greater than or equal to 0
task4 = (service_ID, performance_level = BAD) | Performance must be greater than or equal to −5
task5 = (service_ID, performance_level = WORST) | Performance must be greater than or equal to −10
Table 4. Evolution of average connection weights after each execution of task_1 for consumers C_1 to C_4.
Consumer | Result | Final Weight of the New Connection | Average Weight
C_1 | success | w_co1 = 0.55 | 0.55
C_2 | failure | w_co2 = 0.505 | 0.5275
C_3 | failure | w_co3 = 0.48025 | 0.51175
C_4 | failure | w_co4 = 0.462925 | 0.499544
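The values in Table 4 can be reproduced with a few lines of code. The sketch below assumes one reading of the CA update rules: strengthen a weight by α(1 − w) on success, weaken it by β(1 − w) on failure, and initialize each new connection to the average weight of the existing ones (or to the default 0.5 when none exist). The authoritative definitions are those given with the algorithm in the main text; under this assumed reading, however, the script reproduces Table 4 exactly.

```python
ALPHA = BETA = 0.1   # rates as listed in Table 9

def update(weight, success):
    # Assumed update rule (reproduces Table 4): strengthen on success,
    # weaken on failure, both proportionally to (1 - weight).
    return weight + ALPHA * (1 - weight) if success else weight - BETA * (1 - weight)

weights = []
outcomes = [True, False, False, False]   # C1 succeeds, C2-C4 fail, as in Table 4
for result in outcomes:
    # New connection starts at the average of existing weights, or the default 0.5.
    init = sum(weights) / len(weights) if weights else 0.5
    weights.append(update(init, result))
    print(round(weights[-1], 6), round(sum(weights) / len(weights), 6))
# Output: 0.55 0.55 / 0.505 0.5275 / 0.48025 0.51175 / 0.462925 0.499544
```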
Table 5. Profiles of provider agents (performance constants defined in Table 6).
Profile | Performance Range | σ_p
Good | [PL_GOOD, PL_PERFECT] | 1.0
Ordinary | [PL_OK, PL_GOOD] | 2.0
Bad | [PL_WORST, PL_OK] | 2.0
Table 6. Performance level constants.
Performance Level | Utility Gained
PL_PERFECT | 10
PL_GOOD | 5
PL_OK | 0
PL_BAD | −5
PL_WORST | −10
Table 7. Experimental variables.
Simulation Variable | Symbol | Value
Number of simulation rounds | N | 500
Total number of provider agents | N_P | 100
- Good providers | N_GP | 10
- Ordinary providers | N_PO | 40
- Intermittent providers | N_PI | 5
- Bad providers | N_PB | 45
Total number of consumer agents | N_C | 500
Range of consumer activity level | A | [0.25, 1.00]
Waiting time | WT | 1000 msec
Table 8. FIRE’s default parameters.
Parameters | Symbol | Value
Local rating history size | H | 10
IT recency scaling factor | λ | −(5/ln(0.5))
Branching factor | n_BF | 2
Referral length threshold | n_RL | 5
Component coefficients:
- Interaction trust | W_I | 2.0
- Role-based trust | W_R | 2.0
- Witness reputation | W_W | 1.0
- Certified reputation | W_C | 0.5
Reliability function parameters:
- Interaction trust | γ_I | −ln(0.5)
- Role-based trust | γ_R | −ln(0.5)
- Witness reputation | γ_W | −ln(0.5)
- Certified reputation | γ_C | −ln(0.5)
Table 9. CA’s default parameters.
Parameters | Symbol | Value
Trust threshold | Threshold | 0.5
Positive factor controlling the rate of increase in the strengthening of a connection | α | 0.1
Positive factor controlling the rate of decrease in the weakening of a connection | β | 0.1
Table 10. Summary of CA model evaluation based on thirteen criteria.
Criterion | CA Model Status | Remarks
Decentralization | Satisfied | Fully decentralized; no central authority
Subjectivity | Satisfied | Trust reflects the trustor’s individual feedback
Context Awareness | Satisfied | Trust is task-specific
Dynamicity | Satisfied | Trust is updated after every task
Availability | Satisfied | Broadcast ensures redundancy
Integrity and Transparency | Satisfied | Local storage; accessible by the trustee
Scalability | Partially addressed | No large-scale evaluation; promising in simulations
Overhead | Partially addressed | Algorithm is simple; lacks a complete formal complexity analysis (a proof sketch is provided in Appendix A.2)
Accuracy | Not yet evaluated | Needs empirical evaluation using metrics (Precision, Recall, etc.)
Robustness | Partially addressed | Resists false feedback; insider attack defense needs strengthening
Privacy Preservation | Partially addressed | No trust sharing; needs feedback protection mechanisms
Explainability | Satisfied | Supported via semi-formal analysis and empirical validation
User Acceptance | Not yet evaluated | No data on user preferences or quality of experience
