Article

The Role of Trust in Dependence Networks: A Case Study

Institute of Cognitive Sciences and Technologies, National Research Council of Italy (ISTC-CNR), 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Information 2023, 14(12), 652; https://doi.org/10.3390/info14120652
Submission received: 7 September 2023 / Revised: 28 November 2023 / Accepted: 4 December 2023 / Published: 7 December 2023
(This article belongs to the Section Artificial Intelligence)

Abstract

In a world where the interconnection and interaction between human and artificial agents are continuously increasing, the dynamics of social bonds and dependence networks play a fundamental role. The core of our investigation revolves around the intricate interplay between dependence and trust within a hybrid society populated by human and artificial agents. By means of a structural theory, this study offers valuable insights into the utilization of dependence networks and their impact on collaborative dynamics and resource management. Most notably, agents that leverage dependence, even at the cost of interacting with low-trustworthiness partners, achieve superior performance in resource-constrained environments. On the other hand, in contexts where the use of dependence is limited, the role of trust is emphasized. These findings underscore the significance of dependence networks in real-world contexts, with practical implications for areas such as robotics, resource management, and collaboration among human and artificial agents.

1. Introduction

The dynamics of social bonds are a widely explored subject within the realm of social sciences, encompassing both theoretical and empirical perspectives [1,2,3]. This subject holds clear relevance within these fields. Understanding how these relationships develop, evolve, and influence human behavior is crucial for a deeper understanding of society. Indeed, many contributions in the literature address various aspects of social relationships and demonstrate how they impact health, well-being, and a range of human behaviors [4,5,6,7]. Furthermore, the dynamics of social bonds have a significant impact on individuals’ and communities’ decisions. People are often influenced by the opinions and actions of their social groups [8]. Understanding how these dynamics influence behavior is essential for more accurate predictions and for the planning of social and economic interventions.
The extensive array of research available in the literature provides an in-depth understanding of the various facets and applications of social networks. The conducted studies have unveiled the pivotal significance of the structure of these networks in influencing various phenomena, such as the analysis of how ideas diffuse [9,10,11] or how consumption trends propagate [12,13]. These networks enable us to gain a unique perspective on collective phenomena and social behaviors.
Numerous studies across diverse contexts, ranging from academia to business, have revealed the critical role of social network structures in information dissemination and the success of various initiatives [14,15]. Remarkably, social network studies offer a rich and diversified perspective on the intricacies of human relationships and the ways in which these relationships influence and guide a wide array of phenomena.
The primitive functions of these relational structures play a crucial role in both collaborative dynamics and interactions that are more neutral or conflictual. These elements form the basis of what we could define as “extended sociality” [16], a concept that extends to both artificial and human agents. In order to concretely achieve effective high-level collaboration between human and artificial agents, it is imperative for the latter to possess social capabilities akin to those of humans. Among the various elements to be implemented, the “theory of mind” [17,18,19] plays a pivotal role. Firstly, it entails the awareness that other individuals have a mind with thoughts and internal mental states that influence their decisions and behavior. Therefore, the theory of mind enables the recognition and understanding of the intentions, beliefs, and motivations of others [20,21]. This skill not only pertains to the interpretation of objective data from reality but also involves anticipating the cognitive processes of other actors at play. In other words, the capacity to acquire knowledge about the convictions and desires of other agents is fundamental, as these pieces of information play a crucial role in social interactions. This concept is intriguing as it sheds light on the intricate nuances of both human and artificial interactions, emphasizing how an understanding of cognitive dynamics is pivotal in fostering effective and productive relationships within an increasingly interconnected reality.
Our goal is to investigate the fundamentals of collaboration within a world cohabited by artificial and human agents. Specifically, we inquire about the dependence network [22,23,24] enriched by agents’ beliefs about the reliability of their counterparts. By employing a comprehensive theory encompassing the type of beliefs in play, we can not only address pivotal issues regarding agents’ influence in a network but also gain insights into the dynamic facets of relational capital.
Thus, building upon the theoretical framework presented in [16], within this contribution, we introduce a simulation-based implementation of dependence networks to investigate their utilization and resultant effects. Specifically, our focus lies in conducting a comparative analysis between the concepts of dependence and trust, examining the roles they play in shaping interactions among agents. In our analysis, we refer to the concepts of agent and multi-agent systems, considering in particular the BDI—beliefs, desires, intentions—model of the rational agent [25,26,27]. In more detail, in this work, we will make use of the blocks world. The choice to reference this context is linked to the specific aspects we intend to investigate. In fact, the block world allows us to adequately represent contexts in which agents pursue their personal goals, based on the beliefs they have on the world. Yet they share common and limited resources. Within such a context, we are interested in examining how agents achieve better results the more they are capable of choosing their collaboration partners wisely. We investigate the agents’ ability to accurately identify the dependencies that spontaneously arise and to use them profitably for their own goals. In addition to the concept of dependence, we also take into account the reliability of partners.
This study serves two fundamental purposes:
  • Firstly, it advances our understanding of the dynamics of collaboration in a hybrid society, where cognitive artificial and human agents interact.
  • Secondly, it empirically explores the intricate interplay between dependence and trust in this context.
By doing so, we aim to shed light on the mechanisms that underlie effective cooperation, offering valuable insights for various domains, including artificial intelligence, sociology, and organizational science.
This section has highlighted the research gaps and the contribution we intend to realize. The rest of this article is organized as follows. In Section 2, we consider related works about trust and dependence networks in the domain of agents. In Section 3, we provide a theoretical formulation of social dependence, while in Section 4 we introduce the framework of our work, together with the related concepts and issues. On the basis of these theoretical premises, Section 5 discusses the implementation we realized, whose results are presented in Section 6 and discussed in Section 7. Section 8 summarizes the contribution of the whole work.

2. Related Work

Within this section, we analyze the role of trust and dependence networks in the state of the art, aiming to provide a conceptual context for our contribution by discussing the most impactful research in these fields.

2.1. Trust

Trust is a complex and multifaceted concept [28]. It is a key concept in agent interactions, whether human or artificial [29], cognitive or otherwise [30]. This significance becomes even more pronounced when we consider social interaction among cognitive agents. The role of trust in multi-agent systems is pivotal [31,32], especially in contexts where autonomous agents need to interact and collaborate. Trust among agents can foster mutual cooperation, as agents are more inclined to share resources, knowledge, or efforts if they trust each other [33]. Trust is instrumental in forming coalitions and accomplishing complex tasks requiring the collaboration of multiple agents [34]. Furthermore, trust serves as an essential tool for managing uncertainty about other agents [35]. When an agent can rely on another, it can make more informed and predictable decisions. This is particularly valuable in scenarios where uncertainty could lead to undesirable outcomes or inefficiencies, aiding agents in mitigating risks.
To quantify the appropriate level of trust to place in potential partners, it is necessary to introduce mechanisms for assessing their trustworthiness [36,37]. Trustworthiness is an intrinsic property of the trustee and, as such, cannot be accessed directly but only estimated. Therefore, the trustor needs to estimate and infer how reliable its potential partners would be in executing the task of interest [38]. These learning mechanisms rely on direct experience, reputation/recommendation [39,40,41], and categories/stereotypes/inferential reasoning [42,43] as channels of information to evaluate agents’ trustworthiness.
Trust also plays a central role in dependence networks, where it is not only a question of establishing who is capable of doing what and who can be relied upon to achieve specific goals. It is also essential to know how willing, available, and trustworthy a potential partner, on whom one depends, is when it comes to fulfilling a task. In this regard, it is interesting to note that, more than the objective trustworthiness of an agent, what matters in dependence networks (enhanced by the concept of trust) is the trust that others attribute to their respective potential partners in collaborations [30].

2.2. Dependence Network

Several works have, over time, delved into the utilization of dependence networks.
A notable contribution is found in [44], where the authors introduce a Distributed Artificial Intelligence (DAI) system known as DEPNET, specifically designed to address communication and coordination challenges in a distributed environment. This system possesses the capability to compute dependence relationships within a population of artificial agents sharing a common world. The authors employ this system to illustrate how intricate structures of interdependencies emerge from agents with straightforward architectures situated in a common world. These structures, in turn, influence various properties of the system, both at the individual level (pertaining to agents’ disparities and negotiation abilities) and the collective level (involving the emergence of coalitions and organizational structures, among other aspects).
In [24], the authors propose an abstract structure termed a ‘dependence graph’, which extends the concept of dependence networks. The advantage of this structure lies in its capacity to be applied to multi-agent scenarios, whereas dependence networks primarily analyze dependence relationships between individual agents. This contribution serves as a theoretical expansion of the field of dependence.
Dependence and trust are two closely related concepts. On the one hand, the belief in dependence serves as a prerequisite for the act of trust [28]. Conversely, awareness of dependence networks alone does not suffice for the allocation of tasks to a trustee; it is imperative to consider the actual reliability of potential partners. However, even though numerous studies have explored the pivotal role of trust in agent interactions, limited attention has been given to its role within dependence relationships. To the best of our knowledge, the existing literature predominantly consists of purely theoretical works that lack concrete implementations.
One such instance is presented in [45], where the authors propose an architecture involving cognitive agents and the environment within which they operate and interact. This theoretical framework was devised as a tool for examining the dynamics of information sharing, collaboration, and collective action within various service systems. Notably, the authors introduce a trust mechanism founded on agents’ competence, assuming the absence of malicious agents. An agent’s trustworthiness is assessed based on its responsiveness to incoming requests. Nonetheless, the authors do not detail how trust should be effectively employed in conjunction with dependence networks.
Hence, the current state of the art still lacks concrete implementations that explore the role of trust within dependence networks.

3. Agents and Social Dependence

In this section, we summarize the theory that was developed in [16], concerning the role of social dependence in agents’ societies. Within a shared “common world”, agents move and act in order to realize their goals, wielding limited power and control over the world and its components. Moreover, given that agents’ power to act upon the world is limited, they will most likely need to interact with others in order to perform useful actions for them. An agent’s action can support or undermine the goal of another agent. Therefore, they also require social powers, denoting the capacity to leverage the abilities of other agents for their individual goals. Hence, it is necessary for agents to identify situations where they need to interact, correctly identifying who they depend on to accomplish their tasks and when and with whom it is appropriate to interact. Knowing which other agents each agent depends on (or believes they depend on) is also crucial.
We define an objective dependence relationship between an agent $a_i$ and another agent $a_j$ concerning a task $\tau_k$ when the completion of $\tau_k$ involves actions, plans, and/or resources possessed by $a_j$ and unavailable or less suitable for utilization by $a_i$. This dependence exists irrespective of $a_i$'s or $a_j$'s awareness. If, in addition to this dependence relation, it occurs that $a_j$ has an objective dependence relationship concerning $a_i$, we define this relationship as a mutual dependence relationship.
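As a concrete illustration of these definitions, the following minimal Python sketch represents objective dependence as a set of (depender, dependee, task) triples and derives mutual dependence from it. It is our own encoding, not part of the authors' implementation; the names ObjectiveDependence and is_mutual are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ObjectiveDependence:
    """Agent `depender` objectively depends on `dependee` for task `task`."""
    depender: str   # a_i
    dependee: str   # a_j
    task: str       # tau_k


def is_mutual(dependencies: set, a_i: str, a_j: str) -> bool:
    """True if a_i depends on a_j for some task and a_j depends on a_i for some task."""
    i_on_j = any(d.depender == a_i and d.dependee == a_j for d in dependencies)
    j_on_i = any(d.depender == a_j and d.dependee == a_i for d in dependencies)
    return i_on_j and j_on_i
```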
Objective dependence is a pivotal element in social interactions and serves as the foundation of society, fostering cooperation in diverse ways. However, knowledge of objective dependence relationships is insufficient to predict the arising or absence of relationships among agents. To achieve this, it is fundamental to take into account the dependence relationships that agents are aware of or believe in. In this context, we refer to subjective dependence.
It is worth emphasizing that when we introduce such a concept concerning the subjective view of dependence relationships, we are delving into the agent’s beliefs and perceptions regarding its dependence on others. This introduces a further dimension, the beliefs, shifting from objective facts to the personal mental representation that agents have about what is actually true in the world. Equally crucial is the consideration of what an agent believes about the dependencies of other agents, i.e., how it interprets the dependencies of the others. The set of objective dependencies, subjective dependencies, and others’ believed dependencies determines the fundamental relationships to initiate negotiation processes. Note that, while the object level is factual and always present, this is not the case for the others. Indeed, in those cases, we are dealing with beliefs that may or may not be present in the minds of the agents. Of course, it is not necessarily the case that what is subjectively believed by an agent actually coincides with reality in the world. This also implies that different agents may have different views on the same dependence.
Figure 1 illustrates a potential example of objective dependence. In this scenario, the agent $a_i$ requires the resource $res_j$, possessed by the agent $a_j$, to successfully complete the task $\tau_i$ of interest. This establishes an objective dependence relationship for agent $a_i$ with respect to agent $a_j$.
Starting from the case in Figure 1 and Figure 2, we analyze a possible example of subjective dependence. In the analyzed scenario, both agents are aware that agent $a_i$ requires the resource $res_j$, possessed by $a_j$, to complete the task $\tau_i$ of interest. Therefore, in this case, the dependence relationship is known to both. In this instance, subjective dependence corresponds to objective dependence. Certainly, from the same objective dependence, other subjective views could have arisen. For instance, one of the agents might not have been aware of the dependence. Alternatively, they might have thought that the resource $res_j$ could be missing or possessed by another agent.
Dependence networks are highly dynamic and can change unpredictably as environmental conditions change. For example, they change based on individual objectives. Moreover, they evolve based on the resources in the world and the individual skills of the agents. The entry or exit of a new agent (open world) is particularly relevant for determining dependencies. Just becoming aware of possessing (or not) a certain capability (or what the agent/others believe about this capability) can alter possible scenarios. The decision of a cognitive agent to pursue a goal is based on the belief, with a given level of certainty, in possessing that capability. Consequently, if an agent is unaware of a particular power, it does not genuinely possess it. Conversely, the attainment of power, autonomy, and control over other agents can be attributed to the awareness of that power, not necessarily stemming from the acquisition of external resources or skills and competencies.
Dependencies naturally emerge from the context, and their numerical values cannot be predetermined. What can be specified is the extent to which an agent depends on others. This dependence internally is linked to the agent’s goal, its possessed resources, its ability to act in the world, and the beliefs it has on these dimensions. It also depends on other network members, whether their goal conflicts with the goal of the agent, if they can offer resources for the agent’s goal, or if they can perform actions on its behalf, and on contextual factors such as resource availability in the world. Agents can be entirely independent if they have all the necessary resources and can achieve their goal autonomously, while they become entirely dependent if they rely on others for everything. In our case, we operate in a middle ground, where dependence relationships can emerge in any situation. On the contrary, controlling the frequency of agents’ use of dependence relationships is possible, as we will see in the following experiments.
In essence, a complex and multifaceted framework is being outlined, wherein concepts like theory of mind come into play and prove essential for understanding these tools. Possessing the ability to analyze dependence networks is crucial for comprehending, predicting, and optimizing interactions with other agents. This is a fundamental tool that ensures that those who employ it gain a distinct advantage over other agents when used appropriately.

4. Practical Formulation of the Model

This section aims to explain in detail how we chose to implement the theoretical model introduced. In this contribution, to explore the intricate dynamics of dependence networks, we have developed an implementation of the block world [46,47,48]. We will now present the practical formulation of our model, which will be utilized in the simulations. The world under consideration consists of a table and several blocks, each possessing distinct characteristics such as shape, color, and weight. Within the context of the simulation, agents endeavor to create one or more combinations or sequences of blocks on the table.

4.1. The Blocks

As mentioned, the blocks in the world possess various shapes (cylinders, cones, cubes, spheres), colors (red, blue, green, yellow), and weights (light, heavy). In total, there are 32 blocks in the world. These blocks can be either on the table or off the table. Additionally, each block can have an owner, i.e., a single agent authorized to change the block’s status in terms of position or ownership. Initially, all blocks are off the table. Some are assigned to agents from the beginning, while others are unclaimed and can be taken by the agents.
The blocks are subject to physical constraints within the world, known to all agents:
  • Stacks can consist of up to three elements.
  • A light element can have at most one light element on top.
  • A heavy element can have up to two elements of any kind on top. It is evident that, due to the previous constraint, a combination of heavy–light–heavy blocks cannot be realized.
  • Cones and spheres cannot have other blocks on top of them.
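The constraints above can be captured by a simple stack-validity check. The sketch below is a minimal illustration under our own encoding of the blocks (the Block fields and the function name are assumptions, not the authors' NetLogo implementation):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Block:
    shape: str    # "cylinder", "cone", "cube", or "sphere"
    color: str    # "red", "blue", "green", or "yellow"
    weight: str   # "light" or "heavy"


def stack_is_valid(stack: list) -> bool:
    """Check the physical constraints; stack[0] is the bottom block."""
    if len(stack) > 3:                               # at most three elements per stack
        return False
    for i, block in enumerate(stack):
        above = stack[i + 1:]
        if above and block.shape in ("cone", "sphere"):
            return False                             # nothing can rest on cones or spheres
        if block.weight == "light":
            # a light block supports at most one block, and that block must be light
            if len(above) > 1 or any(b.weight == "heavy" for b in above):
                return False
        # a heavy block supports up to two blocks of any kind, which is already
        # guaranteed by the three-element cap
    return True
```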
The blocks represent the fundamental building units for creating stacks, which are structures formed by stacking multiple blocks in a specific order. The underlying concept is that these block configurations abstractly symbolize the basic elements necessary to achieve goals.
As we will show later, we aim to consider that there are simpler tasks in reality, where successful execution involves directly satisfying a sub-goal. In contrast, there are more complex tasks that require multi-phase construction, relying on the knowledge of specific methods to accomplish the goal.

4.2. The Agents

Agents, whether human or artificial, operate in the world to accomplish their goals. Specifically, each agent is defined by:
  • A goal: A specific combination of blocks that the agent aims to achieve in the world. This configuration consists of a series of more or less articulated sub-goals.
  • A set of plans to achieve its goal/sub-goals (ranging from 0 to n): In the absence of plans, a dependency is established (towards someone) for obtaining a plan.
  • A defined level of competence, indicating the agent’s ability to perform certain tasks.
  • Membership category: We considered two categories—human or artificial agents. The category influences the characteristics of the agent. For instance, we assume that humans can manipulate cylinders and cones, while robots can manipulate cubes and spheres.
  • Resources (blocks): Initially, each agent possesses five blocks.
  • Beliefs about themselves, the world, and others. Agents’ entire perception of the world, processing, and planning are based on beliefs, thus reflecting their individual interpretation of reality. These beliefs can be more or less accurate or even absent.
  • A σ threshold that determines how trustworthy potential partners must be for the agent to consider their dependence usable. This threshold value, specific to each agent, is used to verify if the partner is capable of performing certain actions. Nonetheless, there remains a certain probability of error.
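The per-agent attributes listed above can be summarized in a compact data structure. The following sketch is illustrative only; the field names and the category-based manipulation rule reflect our reading of the text, not the authors' actual code.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    category: str                                # "human" or "artificial"
    goal: list                                   # sub-goals (block configurations)
    plans: list = field(default_factory=list)    # 0..n plans for the sub-goals
    competence: float = 0.5                      # intrinsic ability in [0, 1], not directly observable
    sigma: float = 0.5                           # trust threshold for usable dependencies
    blocks: list = field(default_factory=list)   # owned resources (five at the start)
    beliefs: dict = field(default_factory=dict)  # subjective view of the world, others, dependencies

    def can_manipulate(self, shape: str) -> bool:
        """Category-based capability: humans handle cylinders and cones,
        artificial agents handle cubes and spheres."""
        human_shapes = {"cylinder", "cone"}
        return (shape in human_shapes) == (self.category == "human")
```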
Agents need to collaborate to achieve their goals, taking into account their subjective dependence networks, which represent their personal understanding of dependencies on others and of others on themselves. Agents are acquainted with all other agents and blocks in the world.
We start from the assumption that human and artificial agents are cognitively equivalent, meaning that they determine their actions based on what they believe and the goals they pursue. Certainly, there are differences at the level of physical interfacing, whereby it is assumed that the two categories of agents interact with the physical world differently. At an implementation level, we have translated this by allowing the ability to manipulate different categories of objects. Of course, this is a theoretical expedient. However, even though they possess equivalent social, reasoning, and operational capabilities, their diversity allows us to highlight the necessity of collaboration among agents. Furthermore, introducing categories into this framework allows us to differentiate the characteristics of the agents, including their manipulation and action capabilities in the world. Additionally, it enables us to introduce and model processes of inferential reasoning [42,49,50]. Knowing the category of an agent allows us to deduce its specific characteristics and thus—in our case—whether it is capable of achieving certain states of the world or not. This aspect is relevant in relation to the concept of dependence: If an agent A2 knows the category to which another agent A1 belongs, A2 can deduce whether it depends on A1 (knowing its own objectives/plans) or if A1 depends on A2 (knowing the goals/plans of A1).

4.3. Goals and Plans

Each agent has the goal of placing certain blocks on the table. These blocks can be stacked in a specific order or simply positioned on the table. In this regard, the agent’s goal is subdivided into sub-goals, which can be:
  • Atomic: For example, moving a single block onto the table. This kind of task is useful for modeling the presence of simpler tasks in the world that do not require complex planning skills and do not need to be performed in multiple steps.
  • Complex: Creating a stack, which is an ordered sequence of blocks. Constructing a stack introduces the requirement to perform a series of actions in a specific sequence (effectively a plan) to achieve a single sub-goal. Accomplishing only part of it is insufficient; all actions must be executed.
The stacks of blocks in the block world are intended to represent complex and challenging tasks for AI systems. The agent must be capable of engaging in complex planning involving blocks to achieve specific goals that cannot be realized with a single action. Given the complexity of these tasks, within our framework, we assume that in the absence of a specific plan containing implementation instructions, agents are unable to accomplish these sub-goals.
We establish that each agent needs three to five blocks to fulfill its goal. The agent’s goal is considered as completely satisfied when all sub-goals have been achieved. Conversely, it is considered partially satisfied if only some of the sub-goals have been attained. The goals are not shared, in the sense that the presence of a block or a sequence on the table satisfies the goal/sub-goal solely of its specific owner, rather than that of other agents. A plan is considered feasible for Agent A1 if:
  • There exists a set of unused blocks that, when properly used, satisfies it.
  • There is someone (either Agent A1 or an Agent A2 dependent on A1) who can potentially move these blocks.
  • The plan is physically achievable, meaning that it adheres to the rules governing block composition as defined in Section 4.1.
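These three feasibility conditions can be expressed as a single predicate. The sketch below is hypothetical: plan.required_blocks, plan.target_stacks, stack_is_valid, and can_manipulate are assumed helpers used only to make the logic explicit.

```python
def plan_is_feasible(plan, world, agent, dependents) -> bool:
    """Feasibility test for a plan of agent A1; `dependents` are the agents
    believed to depend on A1 (and hence reachable as potential movers)."""
    # 1. For every step there must exist unused blocks that can satisfy it.
    candidates = plan.required_blocks(world)   # list of candidate-block sets, one per step
    if any(len(options) == 0 for options in candidates):
        return False
    # 2. Someone (A1 itself or an agent depending on A1) must be able to move them.
    movers = [agent] + list(dependents)
    for options in candidates:
        if not any(m.can_manipulate(b.shape) for b in options for m in movers):
            return False
    # 3. The target configuration must respect the physical constraints of Section 4.1.
    return all(stack_is_valid(stack) for stack in plan.target_stacks())
```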

4.4. Agents’ Trust and Trustworthiness

In this study, we refer to the concept of trust as modeled in [28]. Trust is taken into account by the agents when selecting dependencies, serving as a mechanism to decide whether to interact with one partner over another.
Within the simulation context, we assume the absence of malicious agents; therefore, we choose not to consider the influence of motivational aspects on the determination of an agent’s trustworthiness. For completeness, it is worth emphasizing that an agent might have conflicting motivations regarding a task. For instance, it may not want to part with a block of interest, as it will be required to complete its sub-goals. However, this does not imply malicious intent. In such cases, the agent will simply decline the proposed task.
Hence, we characterize agent trustworthiness in terms of competence, i.e., how effectively they can accomplish tasks in the world. Competence is defined as a real value within the range [0, 1], where 0 implies a complete inability to act, while 1 signifies a guaranteed success.
In the simulated world, we consider three types of tasks:
  • Obtaining a plan.
  • Acquisition of a block.
  • Repositioning of a block.
Since we have no interest in differentiating the values of competence for these tasks, for computational simplicity, we assume that an agent’s trustworthiness is the same for each of them. We would like to point out that this is not necessarily true in reality. Indeed, skills on different tasks usually tend to differ. Nevertheless, considering such a difference would have no practical impact within our scenario.
An agent is considered capable of achieving a task if it has a probability greater than a given threshold σ of accomplishing it. This probability is assessed through its trustworthiness evaluation. As mentioned earlier, agents possess a trustworthiness. This is an intrinsic characteristic of the agent that determines its ability to execute tasks. However, as such, it cannot be accessed directly, not even by the agent itself; it can only be estimated. To estimate the trustworthiness of agents, we consider a computational model based on the beta distribution. The beta distribution is commonly employed in the analysis of agent trustworthiness [51,52,53,54], especially when it comes to modeling and estimating success or failure probabilities in complex situations. The beta distribution is defined by two parameters, denoted as $\alpha$ and $\beta$. As described in Equations (1) and (2), they depend on the estimation of the number of observed successes $n\_successes_{a_x}$ and failures $n\_failures_{a_x}$ of the agent $a_x$.
$\alpha_{a_x} = n\_successes_{a_x} + 1$ (1)
$\beta_{a_x} = n\_failures_{a_x} + 1$ (2)
In this context, the expected value of the distribution, representing the estimation of the average trustworthiness $Trustworthiness_{a_x}$ of an agent $a_x$, is given by Equation (3):
$Trustworthiness_{a_x} = \frac{\alpha_{a_x}}{\alpha_{a_x} + \beta_{a_x}}$ (3)
In this regard, if $Trustworthiness_{a_x}$ is equal to $x$, defined within the range $[0, 1]$, then, for a single task with a dichotomous outcome, the probability of success is $x$, while the probability of failure is $1 - x$. Generalizing to $n$ tasks, the number of successes and failures can be estimated as in Equations (4) and (5), respectively:
$n\_successes_{a_x} = nx$ (4)
$n\_failures_{a_x} = n(1 - x)$ (5)
Thus, we can estimate $\alpha_{a_x}$ and $\beta_{a_x}$ as in Equations (6) and (7):
$\alpha_{a_x} = nx + 1$ (6)
$\beta_{a_x} = n(1 - x) + 1$ (7)
Equation (8) provides us with the expected value after $n$ attempts:
$Trustworthiness_{a_x} = \frac{nx + 1}{nx + 1 + n(1 - x) + 1} = \frac{nx + 1}{n + 2}$ (8)
Overall, the difference between the real value of $x$ and that estimated will be:
$\Delta Trustworthiness_{a_x} = x - \frac{nx + 1}{n + 2} = \frac{2x - 1}{n + 2}$ (9)
Equation (9) describes the behavior of a straight line; therefore, the maximum and minimum values will be found at the endpoints of the definition interval, namely, $x = 0$ and $x = 1$. Thus, the maximum difference between the estimation and the actual value of trustworthiness will be $\pm\frac{1}{n+2}$. In other words, the use of the beta distribution to model trust allows us to have a sufficiently accurate method for quantifying agent trustworthiness with a small number of observations, as the error decreases in inverse proportion to the number of observations $n$.
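The estimator and its error bound can be written in a few lines. This is a minimal sketch of Equations (1)-(3) and (9); the function names are ours, not part of the authors' implementation.

```python
def estimate_trustworthiness(n_successes: int, n_failures: int) -> float:
    """Expected value of Beta(alpha, beta) with alpha = successes + 1 and
    beta = failures + 1 (Equations (1)-(3))."""
    alpha = n_successes + 1
    beta = n_failures + 1
    return alpha / (alpha + beta)


def max_estimation_error(n: int) -> float:
    """Worst-case gap between the true competence x and the estimate after n
    observations, attained at x = 0 or x = 1 (Equation (9))."""
    return 1.0 / (n + 2)


# Example: true competence x = 0.8 observed over n = 10 tasks gives, on average,
# 8 successes and 2 failures; the estimate (nx + 1)/(n + 2) = 9/12 = 0.75 lies
# within 1/12 of the true value.
print(estimate_trustworthiness(8, 2), max_estimation_error(10))  # 0.75 0.0833...
```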

4.5. Beliefs

Beliefs represent the perceptions and knowledge that agents possess about the state of the environment, other agents, and the relationships between them. These beliefs influence the decisions and actions of the agent, thus guiding the overall evolution of the simulation. Indeed, their fundamental role becomes even more crucial if beliefs on dependence networks are also considered.
In our framework, agents possess beliefs about:
  • Their own goals;
  • Their own abilities;
  • Their own plans;
  • The blocks that exist in the world;
  • Who the owners of the blocks are;
  • The other agents that exist in the world;
  • The goals of the other agents;
  • The abilities of the other agents;
  • The plans of the other agents (they know which ones they possess, but not how these plans are articulated);
  • Dependencies on actions;
  • Dependencies on resources (blocks);
  • Dependencies on plans.
Beliefs are a fundamental aspect of both the theoretical framework and the simulations we consider. In fact, agents reason entirely based on beliefs, meaning, their personal perception of the world. Firstly, both the assessment of partner trustworthiness and the perception of their own and others’ dependencies are belief-based. Furthermore, other key aspects we consider are belief-based. As a remarkable example, consider the knowledge of the blocks or their owner: An agent might potentially be unaware of the existence of a block in the world, might not know that it can take ownership of it, or might still not know that it can move it. These beliefs significantly influence both its choices and the achievement of its goals.

4.6. The Blackboard

As introduced in Section 3, the difference between objective dependencies and subjective ones is that the former are real and factual, while the latter represent the agents’ subjective perception of what dependencies exist in the world. Of course, only an observer outside the world who has control over the simulation system is capable of knowing what the objective dependencies are, since these by their very nature are not directly knowable. Objective dependencies can only be known by the system. Therefore, when agents consider a dependence relationship, it will always and only be a subjective dependence. This type of modeling is consistent with the principle that, in the world of agents, as in the real world, individuals always possess partial and subjective knowledge. As a result, agents may not know the fact of being dependent on another agent (partial knowledge) and may also mistakenly believe they are or are not dependent on another agent (in this regard, it is important to clarify that not knowing that you are dependent on an agent is semantically different from knowing that you are not dependent on an agent). In this context, the only way agents interact is through the blackboard. Through its utilization, others can discover the needs (which create dependencies) of an agent, and the agent, in turn, gains insight into the needs of others. The blackboard facilitates the recognition of subjectively experienced dependencies within the world, which objectively existed beforehand. The blackboard serves as the tool that transforms an objective dependency into a subjective one.
We assume that agents are capable of autonomously determining their dependencies, solely through the observation of the world, which they then process through their beliefs. This assumption is reasonable, because they can ascertain who is capable of what using inferential processes on categories and because the possession of resources in the world is public. However, this represents limited and partial knowledge which, in the event of an error, could also prove to be incorrect. Moreover, it is not guaranteed that agents can independently determine who depends on them. In order to establish a protocol of agent interaction based on these principles, we introduced in the world the presence of a blackboard. The blackboard represents the system that agents use for communication and for verifying dependencies. At the beginning of each simulation, each agent will declare the goal it intends to pursue. Then, whenever it needs to perform a task to continue its plan, it will check the blackboard.
Firstly, it will verify the existence of a mutual dependence. The agent will determine if, among the agents with a trust rating higher than its internal threshold σ , there are requests that it can fulfill, and if there is someone who can satisfy its task. In the case of a negative outcome, the agent will simply post its request on the blackboard, awaiting another agent to select it for a mutual dependence in the future. Conversely, if mutual dependencies are identified, the agent will proceed to initiate a negotiation phase. If both agents agree, each will proceed to fulfill the other’s request.
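The blackboard protocol described above can be sketched as follows. This is an illustrative reconstruction: the Request and Blackboard structures, together with the trust_of, partner_can_help, and can_fulfil hooks, are our assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Request:
    requester: str    # agent posting the need
    task: str         # e.g., "obtain plan", "acquire block", "move block"


@dataclass
class Blackboard:
    requests: list = field(default_factory=list)

    def post(self, request: Request) -> None:
        """Declare a need (and hence a dependence) publicly."""
        self.requests.append(request)

    def find_mutual(self, agent, trust_of, partner_can_help):
        """Return a posted request that `agent` can fulfil, made by a partner whose
        estimated trustworthiness exceeds agent.sigma and who, according to the
        agent's beliefs, can satisfy one of its pending tasks; otherwise None."""
        for req in self.requests:
            partner = req.requester
            if partner == agent.name or trust_of(partner) < agent.sigma:
                continue              # self-posted, or filtered out by the trust threshold
            if agent.can_fulfil(req) and partner_can_help(partner, agent):
                return req            # mutual dependence found: negotiation can start
        return None
```

If find_mutual returns None, the agent simply posts its own request and waits for a future mutual dependence, exactly as described in the protocol above.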

4.7. Workflow

The agent starts each cycle by updating its beliefs. This is crucial because, in the previous cycle, the state of the world may have been altered by the actions of the agents. For example, ownership of blocks may have changed, or blocks may have been moved. In each cycle, the agents are limited to performing only one action: retrieving a plan from other agents, obtaining a block, or moving a block.
At first, each agent evaluates whether it possesses at least one feasible plan to achieve its goals. If this condition does not hold, in this cycle it focuses on obtaining a plan. If the agent does not require external resources to achieve its goal, then it simply declares its goal on the blackboard, starting to execute the first task necessary to achieve it. Otherwise:
  • The agent establishes how to proceed in obtaining the required elements, following internal priority criteria.
  • It checks previous requests, updating its subjective view of the dependence network, and verifies on the blackboard whether any of the agents having an active dependence on it can provide the needed resource. Where this is the case, a mutual dependence is made explicit, and the agent attempts a negotiation to formalize the exchange. If a partner is found, both requests are executed. If no partner is found, the agent declares its request and the goal it is pursuing on the blackboard. Then, it waits for a future mutual dependence.
Figure 3 provides us with graphical indications regarding the simulation workflow.
It is worth emphasizing that, within the complex system defined, dependence is not just a necessity but also a resource in itself. The fact that someone depends on me provides me with the opportunity to access the resources that the other has to offer. Therefore, while being independent of everyone could be considered an advantage, the simulation world has been designed to make this possibility unlikely. Conversely, the fact that no one depends on us represents a significant competitive disadvantage.
Regarding the ranking criteria, the agent ranks the sub-goals it can address in the specific cycle, according to the following principles:
  • Abstraction level of the sub-goal: It will prioritize less abstract sub-goals since, as the need for the sub-goal becomes more specific, the availability of blocks in the world that can satisfy this request becomes more restricted.
  • Reasoning about others’ goals: Starting from its knowledge of other agents’ goals, an agent estimates which of the blocks it needs are most likely to be used by other agents. It might even find it better to take possession of the final block of a stack, even if the base has not been constructed yet (a typical market problem: supply/demand dynamics).
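The two ranking principles can be combined into a single sort key. The sketch below is speculative in its details (abstraction_level, candidate_blocks, and contention are hypothetical helper callables); it only illustrates the ordering described above.

```python
def rank_subgoals(subgoals, world, beliefs, abstraction_level, candidate_blocks, contention):
    """Order the addressable sub-goals: more specific (less abstract) needs first and,
    among those, the ones whose candidate blocks are most contended by the goals
    other agents have declared."""
    def key(subgoal):
        candidates = candidate_blocks(subgoal, world)
        demand = sum(contention(block, beliefs) for block in candidates)
        scarcity = demand / max(len(candidates), 1)
        return (abstraction_level(subgoal), -scarcity)  # low abstraction first, then high contention
    return sorted(subgoals, key=key)
```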

5. Simulation Experiments

Once the practical model has been introduced, our next step involves a comprehensive examination of its simulation implementations, with a primary focus on investigating the efficacy of dependence networks. Additionally, we aim to delve into the role of trust within this experimental context. The proposed simulation is motivated by several key factors that underscore its necessity and validity. First of all, the simulation allows for the exploration of complex dynamics that emerge when autonomous agents interact and collaborate in pursuing their goals. This provides an opportunity to analyze interactions in a controlled environment, facilitating a better understanding of the underlying mechanisms governing agents’ interactions. The simulation also aims to assess the effectiveness of dependence networks in optimizing collaboration among agents. This is particularly relevant in scenarios where trust and mutual dependence play a critical role, as in interactions among autonomous agents. As this marks the preliminary stage of our experiment, we initiate the exploration by considering the dynamics that unfold within a relatively compact network of agents, all within a controlled setting, providing a solid foundation for understanding such dynamics in more complex scenarios. Overall, the simulation provides a detailed framework to explore, test, and understand the functioning of the proposed model in a variety of scenarios, enabling targeted refinement and optimization of the approach.
Within the simulated environment, we considered a total of six agents. To ensure fairness and eliminate any potential advantage or disadvantage arising from the complexity of assigned goals, we introduced a unique goal for all agents. This goal entails the creation of a stack composed of two blue blocks and the placement of two lightweight blocks on the table, as visually represented in Figure 4. This controlled scenario allows us to closely observe the interactions and dependencies among the agents, providing valuable insights into the functioning of the proposed model. The limited scale of the network and the unique goal set the foundation for an initial exploration of how the agents collaborate, showcasing the potential impact of dependence networks and shedding light on the interplay of trust within the system.
In general, agents are capable of accurately observing the world, meaning that their subjective perception aligns with objective occurrences. However, there is one exception to this. Consider the scenario in which an agent fails to execute a specific action necessary for achieving one of its sub-goals. However, despite being deemed a failure, this action could still satisfy another sub-goal of the same agent. For instance, referring to the task of interest in this simulation (which involves creating a stack composed of two blue blocks and placing two lightweight blocks on the table), an agent might have intended to place a lightweight blue block on the table but mistakenly places a lightweight green block instead. Nonetheless, this lightweight green block might still satisfy a second sub-goal of the agent. In this sense, the action will be perceived as a failure by the executing agent. However, since agents’ beliefs are grounded in their observations of the world, the rest of the agents are unable to accurately identify this situation, as the truth about what happened is internal to the agent who performed the action. Consequently, the other agents will interpret this action as correct.
To elaborate further, agents reveal their potential requests when pursuing their own goals, and as a result, they become active on the blackboard.

Comparison Metric

We need to define a metric for evaluating and comparing the performance of agents. This metric should be designed to reward agents that can accomplish more complex actions. For instance, creating a stack of two blocks is more complex than simply placing two blocks on the table, both in terms of planning and due to the possibility of resources running out in the meantime, given the stricter constraints on block composition. Therefore, we decided that:
  • Successfully placing a block of interest on the table is worth one point;
  • Successfully completing a stack earns an additional point;
  • Successfully completing all goals grants an additional point.
Overall, considering the goals set for the agents, the maximum achievable score for an agent is 6. Naturally, considering that not all agents have correct plans available to achieve their goals, that agents make mistakes, and that resources in the world become committed over time, we expect the average score to fall well below this maximum.
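The scoring rule translates directly into a small function. The sketch below uses our own naming; it also shows why the maximum score for the goal in Figure 4 is 6.

```python
def agent_score(blocks_placed: int, stacks_completed: int, all_goals_met: bool) -> int:
    """One point per block of interest placed on the table, one extra point per
    completed stack, and one extra point if every sub-goal is satisfied."""
    return blocks_placed + stacks_completed + (1 if all_goals_met else 0)


# Goal of Figure 4: a 2-block stack plus two blocks on the table = 4 placements,
# 1 completed stack, 1 completion bonus -> 4 + 1 + 1 = 6.
print(agent_score(blocks_placed=4, stacks_completed=1, all_goals_met=True))  # 6
```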

6. Results

Within this section, we present the results of the conducted experiments. We have considered three simulation scenarios in which to assess the effectiveness of using dependence networks, specifically comparing their effectiveness with trust. The experiments were conducted using agent-based simulation, implementing what was described in the previous sections on the 3D version of the NetLogo platform [55]. The experiments are designed in such a way as to increasingly disadvantage the utility of dependence: in the first simulation, it plays a significant role, while in the others, it is progressively limited. In the simulations, we consider a total of six agents, comprising three humans and three artificial agents. We decided to allocate five blocks to each agent right from the start. Indeed, we had initially considered the possibility of assigning a lower number of blocks. However, from the initial results, it became evident that such a setup significantly disadvantaged trust in favor of dependence.
Regarding trust, we considered agents with three different σ threshold levels: 0.25, 0.5, and 0.75. The first threshold identifies agents willing to interact with almost all available partners. The second one pertains to agents willing to interact only with partners with above-average performance, thus, on average, interacting with only half of the available agents. The last group makes a strict selection of its partners, which, however, significantly reduces the pool of agents available for interaction. The results we report pertain to a window of 30 interactions among the agents, which is sufficient to stabilize the interactions. Moreover, we averaged the results over 1000 simulations, so as to eliminate the variability introduced by random effects on individual runs.

6.1. First Simulation

Within this first scenario, agents identify mutual dependencies by means of the blackboard. Then, they select their potential partners by filtering them based on their personal σ threshold. Finally, they rank the remaining partners according to their trust evaluation and contact partners based on the established order until a partner is found or until all available partners have refused.
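The selection procedure of this first scenario can be sketched as a short loop. As before, this is an illustrative reconstruction: trust_of and ask are hypothetical hooks standing in for the trustworthiness estimate and the negotiation step.

```python
def choose_partner(agent, candidates, trust_of, ask):
    """Keep partners above the agent's sigma threshold, rank them by estimated
    trustworthiness, and contact them in that order until one accepts."""
    eligible = [p for p in candidates if trust_of(p) >= agent.sigma]
    for partner in sorted(eligible, key=trust_of, reverse=True):
        if ask(agent, partner):       # negotiation succeeds
            return partner
    return None                       # no eligible partner, or all of them refused
```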
The experiments were conducted using the following settings:
  • Number of human agents: 3;
  • Number of artificial agents: 3;
  • Three blocks per agent;
  • Agents’ competence randomly assigned in the range [0, 1];
  • σ threshold randomly assigned among 0.25, 0.5, and 0.75.
Within this experiment, we aim to investigate the value of trustworthiness within this type of network, comparing it with the effect of dependence.
From the results in Table 1, it paradoxically emerges that agents operating with the lower threshold of 0.25 achieved superior outcomes. The agents with a threshold of 0.25 obtained a score 9.3% higher than those with a threshold of 0.75, while the agents with a threshold of 0.5 scored 6.3% higher than the agents with a threshold of 0.75.
This phenomenon occurs due to the presence of a context in which errors do not lead to substantial or effective losses. In the event of task failure, agents do not incur any economic loss nor lose the possibility of accomplishing the task in the future. Under these conditions, attempting reliance until the selected trustee succeeds proves to be the winning strategy, ensuring better results. Furthermore, in this experiment, it is noteworthy that all agents always approach their trustees in order of trust (requesting assistance from the most trusted to the least trusted). Therefore, having a high threshold introduces a disadvantage, as those with a low threshold can always attempt with other agents in case of rejection, whereas having a high threshold means forfeiting task execution if a partner with sufficient trustworthiness is not found.
In fact, upon observing other metrics, it is noted that, although the success rate of delegated tasks is significantly lower (0.53 for the 0.25 threshold compared to 0.76 for the 0.75 threshold), the agents with a threshold of 0.25 manage to delegate tasks five times more than the agents with a threshold of 0.75. This, in turn, results in these agents being able to complete an average of 0.67 more tasks.

6.2. Second Simulation

Given the significant weight of dependence in this context, in this second experiment, we attempted to identify certain conditions that can mitigate its effect. Specifically, agents are constrained to use dependence only once. Furthermore, we are examining what occurs when partner selection is randomized among those chosen via the threshold σ .
In this case, the trend identified in the previous scenario is reversed. Remarkably, as we can see in Table 2, the influence of trust becomes more significant compared to dependence. Agents with σ = 0.75 achieved an average performance 3.15% higher and completed an average of 0.3 more tasks than agents with σ = 0.25. It is worth noting that, although these effects are relatively small, they pertain to a single delegated task; thus, the actual difference in performance remains limited.

6.3. Third Simulation

In this scenario, similarly to the previous one, we restrict agents’ use of dependence to only once. Additionally, here too, partner selection occurs randomly among those surpassing the threshold σ . Furthermore, we attempt to further reinforce the utilization of trust by fixing the agents’ performances at 0.4 (three randomly chosen agents) and 0.9 (three randomly chosen agents), making a clear division between trustworthy and untrustworthy agents. Due to this simplification, we consider only two threshold values, 0.25 and 0.75, as the threshold of 0.5 would yield results similar to that of 0.75.
Compared to the previous scenario, there are more agents considered trustworthy. This has an impact on several dimensions of the simulation. Firstly, as emerges from Table 3, more delegation was achieved. This is a direct consequence of the fact that, with the same potential use of dependence, it is easier to exploit it in a network with more reliable agents. Additionally, the average number of completed tasks decreases in favor of higher scores, as fewer tasks are needed to achieve the same results. Once again, we observe a slight increase in terms of both score and average performance. The difference is more pronounced when observing completed tasks: with a threshold of 0.75, an average of 0.35 more tasks are completed. The performance of delegated tasks also differs significantly: 0.63 versus 0.78.

7. Discussion

The starting point of this work is the idea that the study and understanding of dependence networks is a key issue for cognitive agents [16]. In fact, in the majority of real-world scenarios, agents are closely interconnected in the execution of the tasks of their concern [56,57]. This is because the abilities and resources they possess and control in the world are limited [44,58]. Above all, in order to increase the number and variety of tasks it can accomplish, an agent needs to correctly represent its own powers, those of others, and their mutual dependencies.
Understanding mutual dependencies allows agents to collaborate more effectively [59]. When agents comprehend how their actions influence others and vice versa, they can understand better where to direct their efforts, maximizing efficiency and achieving better outcomes [16].
Moreover, recognizing dependencies enables agents to identify potential weak points or risks within the chains of actions required to achieve a goal. This awareness assists in devising risk mitigation strategies and contingency plans in the event of issues or failures. As an example of this, we may consider the case of supply chain management in disaster scenarios. This knowledge is of fundamental importance for implementing resilience capabilities, allowing for better planning of the collaboration, communication, coordination, and cooperation processes [60].
Awareness of our dependencies on other agents, and conversely, the dependencies other agents have on us, empowers cognitive agents to anticipate possible reactions or responses from others. This can aid in making more informed decisions and managing potential conflicts or issues.
Thus, in this study, we focused on investigating the role of dependence within a simulated hybrid society, populated by both human and artificial agents. In this regard, the scenario of the block world has been particularly interesting for our analysis. Indeed, the block world is a common context used in research on artificial intelligence and cognitive science. Understanding how dependence networks operate in this context can have important repercussions on sectors such as robotics [61], resource management [62], and collaboration [63,64] between artificial agents.
The results of the experiments conducted provide valuable theoretical and practical insights into the utilization of dependence networks. Remarkably, the importance of dependence networks finds practical confirmation in the simulation investigated in this study. Most notably, it is stimulating to note that the effect of dependence is very significant.
From the results of the first experiment, it becomes evident that agents making greater use of dependence, even at the expense of the trustworthiness of their partners, manage to achieve better performances than those who prefer a more restrictive partner selection based on their estimated trustworthiness. This experiment allows us to verify that, in the specific situation at hand—a closely interconnected world with limited resources—in the presence of dependence on one or more agents, it is preferable to rely on one of them, rather than give up due to their lack of trustworthiness. This phenomenon happens because of the potential unavailability of an alternative way to carry out the task.
In the subsequent two experiments, we attempted to limit agents’ use of dependence. Although this does indeed bring forth the effect of trust, wherein in both experiments, agents with a higher trust threshold manage to achieve better performance, this difference does not, however, prove to be of significant impact.

8. Conclusions and Future Directions

This work aims to contribute to the state of the art in the study of dependence networks. In fact, the study’s results have provided an initial response to the key questions of our study:
  • Dependence networks have a clear impact on agents’ performance.
  • A complex relationship with the concept of trust is established, where it is not always better to preclude interaction with less reliable partners.
In summary, we expected the results to show a strong effect of dependence networks; however, we had assumed that this effect would complement the efficacy of trust. Instead, it seems that implementing trust as a filter, and thereby limiting interactions, blocks the effectiveness of dependence networks. This leads us to a further conclusion: In a world characterized by such dynamics, it is more convenient for agents to possess skills, abilities, and resources needed by other agents, rather than being or being perceived as trustworthy, especially when, as in our case, the risk of failure does not permanently compromise the achievement of one’s goals. In fact, this specific context is not designed to penalize agents’ erroneous choices. Certainly, opting for less reliable partners results in a higher percentage of failed tasks, and the time wasted can be costly in a resource-constrained environment. Moreover, the limited number of agents undoubtedly impacts the obtained results: Since agents can choose from a restricted pool of partners, filtering these partners based on their trustworthiness can prove counterproductive.
Thus, we consider it relevant to conduct further research to delve into the effectiveness of dependence networks in simulated worlds consisting of more extensive networks of agents. Indeed, another relevant aspect that we have not explored in this work is the presence and the effect of incorrect beliefs. This is, in fact, a particularly interesting point, as the entire process of reasoning and decision making by agents is based on their perception of the world. As an additional point, it is necessary to highlight that we have taken for granted the good willingness of the agents. In reality, this may not be the case. There may be malicious agents or colluding agents who exploit the properties of dependence to the detriment of other agents. These three aspects will serve as a starting point for more detailed investigation in future studies.

Author Contributions

Conceptualization, R.F.; methodology, R.F. and A.S.; software, A.S.; validation, A.S.; formal analysis, R.F. and A.S.; investigation, R.F. and A.S.; writing—original draft preparation, R.F. and A.S.; writing—review and editing, R.F. and A.S.; supervision, R.F.; project administration, R.F.; funding acquisition, R.F. All authors have read and agreed to the published version of the manuscript.

Funding

This project has been funded by the European Union—Next Generation EU, PE 0000013, FAIR-Future Artificial Intelligence Research.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because this study is part of an ongoing research project.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BDI: Beliefs, desires, intentions

Figure 1. An example of objective dependence.
Figure 2. An example of subjective dependence.
Figure 3. Workflow of the simulation.
Figure 4. The label on each block indicates the ID of the agent that owns it. In this specific case, agent 5 completely realized its goal, as it managed to place a stack of two blue blocks and two lightweight blocks on the table.
Table 1. First experiment results.
σ | Average Score | Completed Tasks | Percentage of Delegated Tasks | Success Rate of Delegated Tasks
0.25 | 1.39 | 13.31 | 0.32 | 0.53
0.5 | 1.35 | 13.0 | 0.18 | 0.68
0.75 | 1.27 | 12.64 | 0.06 | 0.76
Table 2. Second experiment results.
σ | Average Score | Completed Tasks | Percentage of Delegated Tasks | Success Rate of Delegated Tasks
0.25 | 1.21 | 12.2 | 0.13 | 0.52
0.5 | 1.22 | 12.38 | 0.09 | 0.65
0.75 | 1.25 | 12.5 | 0.04 | 0.68
Table 3. Third experiment results.
σ | Average Score | Completed Tasks | Percentage of Delegated Tasks | Success Rate of Delegated Tasks
0.25 | 1.55 | 8.45 | 0.22 | 0.63
0.75 | 1.57 | 8.8 | 0.07 | 0.78
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
