Article

Formal Verification of Trust in Multi-Agent Systems Under Generalized Possibility Theory

School of Computer Science and Engineering, North Minzu University, Yinchuan 750000, China
*
Author to whom correspondence should be addressed.
Mathematics 2026, 14(3), 456; https://doi.org/10.3390/math14030456
Submission received: 16 December 2025 / Revised: 15 January 2026 / Accepted: 21 January 2026 / Published: 28 January 2026
(This article belongs to the Section D2: Operations Research and Fuzzy Decision Making)

Abstract

In multi-agent systems, the interactions between autonomous agents within dynamic and uncertain environments are crucial for achieving their objectives. Current research leverages model checking techniques to verify these interactions, with social accessibility relations commonly used to formalize agent interactions. In multi-agent systems that incorporate generalized possibility measures, the quantification, computation, and model checking of trust properties present significant challenges. This paper introduces an indirect model checking algorithm designed to transform social trust under uncertainty into quantifiable properties for verification. A Generalized Possibilistic Trust Interpreted System (GPTIS) is proposed to model and characterize multi-agent systems with trust-related uncertainties. Subsequently, the trust operators are extended based on Generalized Possibilistic Computation Tree Logic (GPoCTL) to develop the Generalized Possibilistic Trust Computation Tree Logic (GPTCTL), which is employed to express the trust properties of the system. Then, a model checking algorithm that maps trust accessibility relations to trust actions is introduced, thereby transforming the model checking of GPTCTL on GPTIS into model checking of GPoCTL on Generalized Possibility Kripke Structures (GPKSs). The proposed algorithm is provided with a correctness proof and complexity analysis, followed by an example demonstrating its practical feasibility.

1. Introduction

Multi-agent Systems (MASs) are composed of multiple autonomous agents characterized by reactivity, proactivity, social capability, and rationality. These agents interact within dynamic and uncertain environments to achieve their respective objectives [1,2,3,4]. With the continuous advancement of artificial intelligence (AI) and computing technologies, MASs have emerged as powerful tools for addressing complex problems and are widely applied across domains such as automation, decision support, and resource allocation [5,6]. In MASs, trust is a fundamental element for enabling effective collaboration among agents. By reasoning about trust, it becomes possible to evaluate whether interactions among agents adhere to predefined expectations, thereby improving the overall reliability and security of the system. Verifying the properties of MASs is critical in ensuring effective communication and cooperation among agents, as well as maintaining the safe and stable operation of the system. This remains a significant challenge in AI research.
Model checking is an automated formal verification technique used to determine whether a system satisfies specific properties. It has been extensively adopted for safety verification in MASs [7,8]. Since the development of model checking, both academia and industry have continuously worked to expand its applicability to more complex systems. Traditional model checking approaches rely on temporal logic, particularly Computation Tree Logic (CTL) and Linear Temporal Logic (LTL), to specify system properties. However, temporal logic alone is insufficient to express trust-related attributes within MASs. To accurately describe communication behaviors among agents and meet the verification demands of MASs, trust operators have been integrated into temporal logic, thereby broadening the applicability of model checking techniques.
In the context of communication and collaboration within MASs, the inherent uncertainty of trust and the dynamic nature of agent interactions generate substantial imprecision and fuzzy information. This poses significant challenges for traditional model checking techniques, which lack the capability to quantitatively analyze such communicative behaviors. Classical and probabilistic model checking methods are limited in their ability to accurately represent fuzzy relationships [9]. For example, consider a scenario where an agent has never observed any interaction with another agent. A probabilistic model, due to normalization constraints, must assign probabilities that sum to one—often leading to the assignment of low probability to malicious behavior and high probability to trustworthy behavior by default (e.g., via uniform priors or Laplace smoothing). This conflates lack of evidence with positive evidence, treating “unknown” as “likely safe.” In contrast, generalized possibility theory allows the possibility of both trustworthy and untrustworthy behaviors to remain non-zero, reflecting genuine epistemic uncertainty. The non-additive nature of possibility measures—where the possibility of a disjunction is the maximum of individual possibilities—enables a more cautious and semantically faithful representation of trust under sparse or qualitative evidence. Generalized possibility measures, due to their intrinsic linguistic vagueness and non-additivity, offer a promising framework for capturing these fuzzy properties. In response, Li Yongming et al. [10] proposed the Generalized Possibilistic Computation Tree Logic (GPoCTL) as a model checking approach. They further introduced the Generalized Possibilistic Markov Decision Process (GPMDP) model, which transforms the model checking into fuzzy matrix operations or fixed-point computations solvable in polynomial time [11]. 
Currently, research in generalized possibility theory primarily focuses on the analysis of individual agents interacting with the environment, with relatively limited investigation into multi-agent scenarios. In MAS communication, trust is characterized by a high degree of uncertainty. Compared with classical and probabilistic methods, fuzzy model checking is better suited to handle the possibilistic values of agent attributes, particularly in the analysis of fuzzy communicative behaviors. Additionally, fuzzy model checking approaches have been preliminarily investigated for MASs based on fuzzy temporal logics incorporating cognition and commitment operators [12,13,14].

1.1. Related Work

In multi-agent systems (MASs), the definition and modeling of trust is a central research topic. The most widely accepted definition of trust was proposed by Castelfranchi and Falcone, often referred to as C&F [15]. In [16], the authors proposed a logical framework for the formal analysis of trust and reputation. They formalized the definition of trust proposed by C&F using a logic that incorporates temporal, action-based, belief, and choice operators. They also introduced a reputation definition that mirrors trust but shifts the belief concept to the collective level. Additionally, they distinguished between two primary types of trust: occurrent trust and dispositional trust. In [17], Herzig simplified the logic in [16] by restricting the truth values of propositional variables to either true or false in the action logic, providing a simple yet expressive framework and proving its completeness.
Research on trust in multi-agent systems can be categorized into two main approaches. The first approach enables agents to measure the trustworthiness of other agents during interactions, with higher-trust agents being prioritized in repeated interactions [18,19,20]. The second approach, which is adopted in this paper, involves using trust models to reason about the honesty or reliability of agents, establishing trust relationships, and proposing formal semantic frameworks for reasoning. Several trust semantic frameworks have already been proposed and are widely applied [21,22,23,24,25]. For example, in [21], Liu and Lorini proposed a DL-BET logic for reasoning about the interplay among belief, evidence, and trust, and they provided an axiomatic system for the logic. In [22], a modal logic was developed for reasoning about the trust an agent places in another based on reputation. This work refined trust values as interactions between agents evolve, allowing the system to assess an agent’s trustworthiness. Furthermore, agents can exchange testimonies regarding the trustworthiness of third-party agents, even without direct interactions. In [23], Nagat Drawel et al. proposed a Trust Computation Tree Logic (TCTL), extending Computation Tree Logic (CTL) to model trust and proposing an algorithm to capture trust-based actions. In [24], a Trust Reputation System and a Trust Reputation Interaction Model were proposed to enhance the system’s resistance against malicious agents. In contrast, [25] presented a formal framework that extends the scope of trust reasoning from individual interactions to group–individual interactions. They introduced a branching-time (BT) logic that incorporates operators to express concepts such as collective trust, distributed trust, and propagated trust. Table 1 presents a comparison of relevant publications against key evaluation criteria, illustrating how our approach relates to and differs from prior work.
In this study, we evaluate and compare existing trust system modeling approaches using eight criteria, denoted as C1 through C8. These criteria are as follows: whether the approach is founded on BDI logic to describe the mental states of agents, reflecting the cognitive dimension; whether it accounts for interactions and social relationships among multiple agents, capturing the social dimension; whether it employs a dedicated modality to explicitly model trust rather than using predicates, thereby demonstrating an explicit conceptualization of trust; whether it includes a rigorous analysis of computational complexity; whether it supports fine-grained expressions of trust levels, indicating its capability for graded modality handling; whether it incorporates formal verification techniques such as model checking to ensure correctness; whether it establishes the soundness and completeness of the logical framework, validating its theoretical foundation; and whether it integrates fuzzy logic to model uncertainty and subjectivity. These criteria collectively provide a comprehensive evaluation of the theoretical foundations, practical applicability, and verification capabilities of different trust modeling methods.

1.2. Main Contributions

Both classical and probabilistic model checking techniques face limitations when attempting to accurately represent certain fuzzy relationships. Generalized possibility measures, owing to their inherent linguistic fuzziness and non-additivity, are well suited to capturing these fuzzy relationships. This approach fills the gap in generalized possibility model checking for trust-based multi-agent systems. As illustrated in Figure 1, the proposed approach defines a GPTIS based on vector-based interpreted systems and introduces GPTCTL. An indirect model checking algorithm that transforms the model checking of GPTCTL into model checking of GPoCTL is employed. First, the GPTIS model is converted into a Generalized Possibilistic Decision Process (GPDP) model. Given that each state in the GPDP model can have multiple actions, a scheduler is introduced to select a unique action at each state, transforming the GPDP model into a GPKS model, which is used to solve for the maximal possibility. Then, the GPTCTL formulas are converted into their GPoCTL equivalents. Finally, the corresponding algorithm along with its complexity analysis is presented, and its application is demonstrated using an example from a shopping system.

1.3. Structure of This Paper

The remainder of this article is organized as follows. Section 2 introduces the concepts of fuzzy theory. Section 3 introduces the related theories of IS (Interpreted System) and VIS (Vector-based Interpreted System) and defines a Generalized Possibilistic Trust Interpreted System. Section 4 defines Generalized Possibilistic Trust Computation Tree Logic (GPTCTL). Section 5 presents a generalized possibility model checking algorithm. Section 6 provides a complexity analysis of the algorithm. Section 7 discusses the limitations and practical scalability of the proposed framework, analyzing the impact of trust graph structure on computational complexity and proposing practical strategies—such as trust pruning, aggregation, and symbolic representation—to address challenges in large-scale systems. An illustrative example using an online shopping system is given in Section 8. Section 9 concludes this article with a summary of the findings and suggests directions for future research.

2. Preliminaries

This section provides the concepts of fuzzy theory.
Definition 1
([10,11]). A fuzzy set A over a universe of discourse X is defined as a mapping A : X → [0, 1], also referred to as the membership function of the fuzzy set A, and is commonly denoted by μ_A. For any element x ∈ X, the value μ_A(x) represents the degree of membership of x in fuzzy set A. Let F(X) denote the class of all fuzzy sets on X; that is, F(X) = {A | A : X → [0, 1]}.
Definition 2
([10,11]). Let A, B ∈ F(X). For any x ∈ X, the membership functions of the fuzzy sets A ∪ B and A ∩ B are defined as follows:
(A ∪ B)(x) = A(x) ∨ B(x) = max{A(x), B(x)}
(A ∩ B)(x) = A(x) ∧ B(x) = min{A(x), B(x)}
In these definitions, ∨ (equivalently, max) denotes the maximum operation, and ∧ (equivalently, min) denotes the minimum operation.
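As a small illustration of Definition 2, fuzzy union and intersection can be computed pointwise; the fuzzy sets below are a toy example of our own, represented as Python dictionaries over a shared universe:

```python
def fuzzy_union(A, B):
    """(A ∪ B)(x) = max(A(x), B(x)), computed pointwise over the universe."""
    return {x: max(A[x], B[x]) for x in A}

def fuzzy_intersection(A, B):
    """(A ∩ B)(x) = min(A(x), B(x)), computed pointwise over the universe."""
    return {x: min(A[x], B[x]) for x in A}

# Toy membership functions over the universe {s0, s1}.
A = {"s0": 0.3, "s1": 0.8}
B = {"s0": 0.6, "s1": 0.5}
print(fuzzy_union(A, B))         # {'s0': 0.6, 's1': 0.8}
print(fuzzy_intersection(A, B))  # {'s0': 0.3, 's1': 0.5}
```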
Definition 3
([10]). Let X = (x_ij)_{m×n} and Y = (y_ij)_{m×n} be two fuzzy matrices of size m × n. The intersection, union, and complement of X and Y are defined as follows: X ∧ Y = (x_ij ∧ y_ij)_{m×n}, X ∨ Y = (x_ij ∨ y_ij)_{m×n}, X^c = (1 − x_ij)_{m×n}.
Here, the operators ∧ and ∨ denote the minimum and maximum operations, respectively.
Definition 4
([10]). Let X = (x_ij)_{m×l} be an m × l fuzzy matrix and Y = (y_ij)_{l×n} be an l × n fuzzy matrix. The inner product of fuzzy matrices X and Y, denoted X ∘ Y, is defined as:
X ∘ Y = (z_ij)_{m×n}
where z_ij = ∨_{k=1}^{l} (x_ik ∧ y_kj) (i = 1, …, m; j = 1, …, n).
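The max–min inner product of Definition 4 can be sketched directly; the matrices below are illustrative, and the helper name maxmin_compose is our own:

```python
def maxmin_compose(X, Y):
    """Max–min inner product: z_ij = max_k min(x_ik, y_kj) (cf. Definition 4)."""
    m, l, n = len(X), len(Y), len(Y[0])
    assert len(X[0]) == l, "inner dimensions must agree"
    return [[max(min(X[i][k], Y[k][j]) for k in range(l)) for j in range(n)]
            for i in range(m)]

X = [[0.2, 0.9],
     [0.7, 0.4]]
Y = [[0.5, 0.1],
     [0.8, 0.6]]
print(maxmin_compose(X, Y))  # [[0.8, 0.6], [0.5, 0.4]]
```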
Definition 5
([10]). Let U denote a nonempty universe of discourse, with all subsets assumed to be measurable. A possibility measure Π is defined as a function from the powerset 2^U to the interval [0, 1], satisfying the following conditions:
(1) 
Π(∅) = 0;
(2) 
Π(U) = 1;
(3) 
Π(∪_{i∈I} E_i) = ∨_{i∈I} Π(E_i) for any family {E_i}_{i∈I} of subsets of the universe U.
Here, we use ∨_{i∈I} a_i to denote the supremum (or least upper bound) of the family of real numbers {a_i}_{i∈I}. Dually, we use ∧_{i∈I} a_i to denote the infimum (or greatest lower bound) of the family {a_i}_{i∈I}.
If Π only satisfies Conditions (1) and (3), then Π is called a generalized possibility measure.
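To make the distinction concrete, a possibility distribution on a finite universe induces a measure Π(E) = max of the distribution over E; the distribution below is a made-up example whose maximum is below 1, so it satisfies Conditions (1) and (3) but not Condition (2), i.e., it is a generalized possibility measure:

```python
pi = {"a": 0.4, "b": 0.7, "c": 0.2}  # illustrative distribution; its maximum is < 1

def Pi(E):
    """Π(E) = max of pi over E, with Π(∅) = 0 by convention."""
    return max((pi[x] for x in E), default=0.0)

U = set(pi)
assert Pi(set()) == 0.0                             # Condition (1)
assert Pi({"a", "b"}) == max(Pi({"a"}), Pi({"b"}))  # Condition (3), finite union
print(Pi(U))  # 0.7 < 1, so Condition (2) fails: a generalized possibility measure
```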

3. Generalized Possibilistic Trust Interpreted System

Because agents exhibit fuzzy behaviors, this study introduces a Generalized Possibilistic Trust Interpreted System, built on vector-based interpreted systems, in order to better characterize the trust accessibility relations within the system. The definition is as follows:
Definition 6
([26]). An Interpreted System (IS) is a tuple consisting of n agents, denoted as IS = ((L_i, act_i, ρ_i, τ_i)_{i∈Agt}, S, I, Act, τ), where:
1. 
For each agent i ∈ Agt, the following components are defined:
(1) 
A non-empty set of local states L_i, where l_i ∈ L_i represents the local state of agent i at a given time;
(2) 
A set of local actions act_i, representing the possible actions agent i can take;
(3) 
A local protocol ρ_i : L_i → 2^{act_i}, mapping each local state to the set of enabled actions for agent i in that state;
(4) 
A local evolution function τ_i : L_i × act_i → L_i, determining how the local state of agent i evolves based on its current state and action.
We also define the following global components:
2. 
A set of global states S ⊆ L_1 × L_2 × ⋯ × L_n, where a global state s = (l_1, l_2, …, l_n) represents a snapshot of the local states of all agents. The local state of agent i in global state s is denoted l_i(s);
3. 
I ⊆ S is a set of initial global states, representing the possible starting configurations of the system;
4. 
Act = act_1 × ⋯ × act_n is the set of joint actions performed by all agents in the system, with σ = (σ_1, σ_2, …, σ_n) ∈ Act;
5. 
A global evolution function τ : S × Act → S, defined as τ(s, σ) = (τ_1(l_1, σ_1), τ_2(l_2, σ_2), …, τ_n(l_n, σ_n)), where s = (l_1, l_2, …, l_n) and σ = (σ_1, σ_2, …, σ_n).
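A minimal two-agent sketch of the global evolution function τ in Definition 6: each agent's local evolution is applied componentwise to the global state. The local states and actions below are invented for illustration:

```python
# Shared local evolution function tau_i: (local_state, local_action) -> next local state.
tau_i = {
    ("idle", "work"): "busy",
    ("busy", "rest"): "idle",
    ("idle", "wait"): "idle",
}

def tau(global_state, joint_action):
    """Global evolution: apply each agent's local evolution componentwise."""
    return tuple(tau_i[(l, a)] for l, a in zip(global_state, joint_action))

print(tau(("idle", "busy"), ("work", "rest")))  # ('busy', 'idle')
```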
Definition 7
([23]). To explicitly represent the trust relationship between the truster and trustee, we extend the interpreted system by defining a Vector-based Interpreted System (VIS) as a 5-tuple: VIS = ((L_i, act_i, v_i, ρ_i, τ_i)_{i∈Agt}, S, I, Act, τ). For each agent i ∈ Agt and each of its local states l_i ∈ L_i, the following trust-related components are added:
(1) 
A trust vector v_i of size |Agt| is associated with l_i, defined as ⟨v_i(1), …, v_i(j), …, v_i(n)⟩, where v_i(j) ∈ [0, 1] represents the degree of trust agent i has in agent j in that state. l_i(s)(v_i(j)) represents the degree of trust that agent i has in agent j at state s;
(2) 
The trust vector v_i defines agent i's trust accessibility relation, determining how much agent i "believes" or "responds to" the behavior of other agents.
All other components (L_i, act_i, ρ_i, τ_i, S, I, Act, τ) follow the same structure and semantics as in the standard interpreted system defined in Definition 6.
Definition 8.
A Generalized Possibilistic Trust Interpreted System (GPTIS) is an 8-tuple M = (S, Act, P, I, L, AP, ∼_{i→j}, v_i) consisting of n agents, where the definitions of S, Act, I, and v_i are the same as in the VIS (see Definition 7). The main distinction between VIS and GPTIS lies in the fact that GPTIS has four new components, P, L, AP, and ∼_{i→j}, defined as follows:
1. 
P : S × Act × S → [0, 1] is the possibilistic transition distribution. For any state s ∈ S and action α ∈ Act, there exists a state t ∈ S such that P(s, α, t) > 0;
2. 
L : Agt × S × AP → [0, 1] is a fuzzy labeling function, where L(i, s, p), for i ∈ Agt, s ∈ S, p ∈ AP, represents the possibilistic truth value of atomic proposition p in local state l_i(s);
3. 
AP is the set of atomic propositions of all agents, where AP_i, i ∈ Agt, is the set of atomic propositions of agent i;
4. 
∼_{i→j} : S × S → [0, 1] is the trust accessibility relation, as shown in Figure 2. The following conditions must be satisfied:
(1) 
l_i(s)(v_i(j)) = l_i(s′)(v_i(j));
(2) 
s can reach s′ through the transition distribution.
Example 1.
Figure 2 illustrates, by a dashed line, the trust accessibility relation from agent i to agent j with possibility value 0.7 in the transition from global state s to global state s′. Here, l_i(s)(v_i(j)) = l_i(s′)(v_i(j)) denotes that the vector entry v_i(j) has the same value in these two states of the GPTIS, thus enabling the trust accessibility relation from i to j.
If |S|, |AP|, |Agt| are finite, then M is called a finite GPTIS. For a GPTIS M = (S, Act, P, I, L, AP, ∼_{i→j}, v_i), a finite path starting from state s_0 in M is denoted by the sequence π̂ = s_0 σ_0 s_1 σ_1 s_2 … s_{n−1} σ_{n−1} s_n. An infinite path starting from state s_0 in M is denoted by the sequence π = s_0 σ_0 s_1 σ_1 … ∈ (S × Act)^ω. The set of all infinite paths starting from global state s in M is denoted Paths(s). The set of all finite paths starting from global state s in M is denoted Paths_fin(s). The set of all infinite paths in M is denoted Paths(M). The set of all finite paths in M is denoted Paths_fin(M).
Figure 2. An example of trust accessibility relation i j ( s , s ) = 0.7 .
Definition 9
([10]). Let M = (S, Act, P, I, L, AP, ∼_{i→j}, v_i) be a finite GPTIS. Then GPo : Paths(M) → [0, 1] is defined as follows:
GPo(π) = I(s_0) ∧ ∧_{i≥0} P(s_i, σ_i, s_{i+1})
For π = s_0 σ_0 s_1 σ_1 … ∈ Paths(M) and E ⊆ Paths(M), we can define GPo(E) = ∨{GPo(π) | π ∈ E}, which leads to the function
GPo : 2^{Paths(M)} → [0, 1]
This function is called the generalized possibility measure on K = 2^{Paths(M)}.
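On a finite path prefix, the possibility measure of Definition 9 is simply the minimum of the initial possibility and the traversed transition possibilities. The following sketch uses an invented two-transition model:

```python
# Illustrative initial distribution and possibilistic transition function.
I = {"s0": 1.0, "s1": 0.0}
P = {("s0", "buy", "s1"): 0.8, ("s1", "ship", "s2"): 0.6}

def gpo(path):
    """GPo of a finite path [s0, a0, s1, a1, ...]: I(s0) ∧ min over transitions."""
    states, actions = path[::2], path[1::2]
    val = I[states[0]]
    for i, a in enumerate(actions):
        val = min(val, P[(states[i], a, states[i + 1])])
    return val

print(gpo(["s0", "buy", "s1", "ship", "s2"]))  # min(1.0, 0.8, 0.6) = 0.6
```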

4. Generalized Possibilistic Computation Tree Logic

This section extends the trust operators based on GPoCTL and introduces GPTCTL. Below, we provide the syntax of GPTCTL and its semantics in the context of GPTIS.
Definition 10
(GPTCTL Syntax). The state formulas of GPTCTL are recursively defined as follows:
Φ ::= true | a | Φ_1 ∧ Φ_2 | ¬Φ | GPo(φ) | T_p(i, j, Φ_1, Φ_2) | T_c(i, j, Φ_1, Φ_2)
The path formulas of GPTCTL are recursively defined as follows:
φ ::= ◯Φ | Φ_1 U Φ_2
where Φ, Φ_1, and Φ_2 represent state formulas, and φ represents a path formula.
T_p(i, j, Φ_1, Φ_2) is a premise trust formula, representing the degree of possibility that agent i trusts agent j to achieve Φ_2, given that Φ_1 holds.
T_c(i, j, Φ_1, Φ_2) is a conditional trust formula, representing the degree of possibility that agent i trusts agent j to achieve Φ_2 when Φ_1 holds.
Example 2.
Figure 3 presents an intuitive illustration of the premise trust formula and the conditional trust formula. In the premise trust formula T_p(i, j, Φ_1, Φ_2), the current state s_0 must satisfy proposition Φ_1 and not satisfy Φ_2, while Φ_2 must hold in the next state that is reachable via the trust accessibility relation. In the conditional trust formula T_c(i, j, Φ_2, Φ_3), the current state s_1 must not satisfy Φ_3, and in the next state s_2, which is accessible through the trust accessibility relation, both Φ_2 and Φ_3 must be satisfied.
Definition 11
(GPTCTL Semantics). Let M = (S, Act, P, I, L, AP, ∼_{i→j}, v_i) be a finite GPTIS and Φ : S → [0, 1] be a function. The semantics of a GPTCTL state formula Φ are recursively defined as follows, where s ∈ S, a ∈ AP, i, j ∈ Agt:
true(s) = 1,
a(s) = L(i, s, a),
(Φ_1 ∧ Φ_2)(s) = Φ_1(s) ∧ Φ_2(s),
(¬Φ)(s) = 1 − Φ(s),
GPo(φ)(s) = GPo(s ⊨ φ),
T_p(i, j, Φ_1, Φ_2)(s) = (Φ_1 ∧ ¬Φ_2)(s) ∧ ∨_{π ∈ Paths(s′), s ∼_{i→j} s′} (∼_{i→j}(s, s′) ∧ Φ_2(s′) ∧ GPo(π)),
T_c(i, j, Φ_1, Φ_2)(s) = (¬Φ_2)(s) ∧ ∨_{π ∈ Paths(s′), s ∼_{i→j} s′} (∼_{i→j}(s, s′) ∧ (Φ_1 ∧ Φ_2)(s′) ∧ GPo(π))
For any path π = s_0 σ_0 s_1 σ_1 s_2 … ∈ Paths(M), the semantics of a path formula φ in M are recursively defined as follows, where φ : Paths(M) → [0, 1] represents the path formula:
(◯Φ)(π) = P(π[0], σ_0, π[1]) ∧ Φ(π[1])
(Φ_1 U Φ_2)(π) = Φ_2(π[0]) ∨ ∨_{j≥1} (Φ_1(π[0]) ∧ ∧_{1≤k<j} (P(π[k−1], σ_{k−1}, π[k]) ∧ Φ_1(π[k])) ∧ P(π[j−1], σ_{j−1}, π[j]) ∧ Φ_2(π[j]))
GPo(s ⊨ φ) denotes the possibility, taken over all paths starting from state s, that the path formula φ is satisfied. It is defined as follows:
GPo(s ⊨ φ) = ∨_{π ∈ Paths(s)} (GPo(π) ∧ φ(π))
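As a toy illustration of these path semantics (our own example, not from the paper), on a model whose successors loop with possibility 1 the value GPo(s ⊨ ◯Φ) reduces to a max over successors of min(P(s, t), Φ(t)):

```python
# Transition possibilities under one fixed action; each successor loops with value 1,
# so the infinite tail of each path never lowers the minimum shown here.
P = {("s0", "s1"): 0.9, ("s0", "s2"): 0.4, ("s1", "s1"): 1.0, ("s2", "s2"): 1.0}
Phi = {"s1": 0.5, "s2": 0.8}  # fuzzy truth values of Φ at each successor

def gpo_next(s, succs):
    """GPo(s ⊨ ◯Φ) for this looping toy model: max_t min(P(s,t), Φ(t))."""
    return max(min(P[(s, t)], Phi[t]) for t in succs)

print(gpo_next("s0", ["s1", "s2"]))  # max(min(0.9, 0.5), min(0.4, 0.8)) = 0.5
```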

5. Model Checking GPTIS Based on GPTCTL

Given a GPTIS M and a GPTCTL state formula Φ , the GPTCTL model checking problem for GPTIS is to compute the possibility of Φ ( s ) . Inspired by the model checking methods proposed in [13,14], this section adopts an indirect model checking approach consisting of two main components: model transformation and formula simplification. Through transformation and simplification, the problem is reduced to a GPoCTL model checking problem based on GPKS.
Definition 12
([10]). A Generalized Possibilistic Kripke Structure (GPKS) is a tuple M_k = (S_k, P_k, I_k, AP_k, L_k), where
(1) 
S_k is a countable, nonempty set of states;
(2) 
P_k : S_k × S_k → [0, 1] is the transition possibility distribution such that for all states s ∈ S_k, there exists t ∈ S_k satisfying P_k(s, t) > 0;
(3) 
I_k : S_k → [0, 1] is the initial distribution, such that I_k(s) > 0 for some state s ∈ S_k;
(4) 
AP_k is a set of atomic propositions;
(5) 
L_k : S_k × AP_k → [0, 1] is a labeling function; L_k(s, a) denotes the possibility that atomic proposition a holds in s.
Definition 13
([11]). A GPDP (Generalized Possibilistic Decision Process) is a tuple M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d), where
(1) 
S_d is a countable, nonempty set of states;
(2) 
Act_d is a set of actions;
(3) 
P_d : S_d × Act_d × S_d → [0, 1] is the possibilistic action transition function, such that for all states s ∈ S_d and all actions α ∈ Act_d, there exists t ∈ S_d satisfying P_d(s, α, t) > 0;
(4) 
I_d : S_d → [0, 1] is the possibilistic initial distribution function, such that I_d(s) > 0 for some state s ∈ S_d;
(5) 
AP_d is a set of atomic propositions;
(6) 
L_d : S_d × AP_d → [0, 1] is a labeling function; L_d(s, a) denotes the possibility that atomic proposition a holds in s.
Definition 14
([11]). Let M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d) be a finite GPDP. A function Adv : S_d → Act_d is defined as a scheduler of M_d, such that for any state s ∈ S_d, Adv(s) ∈ Act_d(s), where Act_d(s) denotes the set of all enabled actions at state s.
For all i > 0, if α_i = Adv(s_0 s_1 … s_{i−1}), then the path π = s_0 α_1 s_1 α_2 s_2 α_3 s_3 … is called an Adv-path of M_d. We use Paths_Adv(s) to denote the set of all paths starting from state s under the control of scheduler Adv.
Definition 15
([10]). Let M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d) be a finite GPDP, and define r_Adv : S_d → [0, 1] as r_Adv(s) = ∨{∧_{i≥0} P_Adv(s_i, σ_i, s_{i+1}) | s_0 = s, s_i ∈ S, σ_i ∈ Act}. A computation method for r_Adv is given by
r_Adv(s) = ∨_{t∈S_d} (P⁺_Adv(s, t) ∧ P⁺_Adv(t, t))
The form of computation using fuzzy matrices is
r_Adv = P⁺_Adv ∘ D_Adv,
where D_Adv = (P⁺_Adv(t, t))_{t∈S} and P⁺_Adv denotes the max–min transitive closure of the transition matrix P_Adv. The proof of this computation method is given in [10].
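A sketch of this fixed-point computation, assuming P⁺_Adv is computed as the max–min transitive closure of the scheduled transition matrix; the 2-state matrix below is invented for illustration:

```python
def maxmin(X, Y):
    """Max–min matrix composition (the fuzzy inner product of Definition 4)."""
    n = len(X)
    return [[max(min(X[i][k], Y[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def closure(P):
    """P⁺ = P ∨ P² ∨ ..., computed by iterating C := C ∨ (C ∘ P)."""
    n = len(P)
    C = [row[:] for row in P]
    for _ in range(n):  # stabilizes after at most |S| rounds
        CP = maxmin(C, P)
        C = [[max(C[i][j], CP[i][j]) for j in range(n)] for i in range(n)]
    return C

P = [[0.0, 0.8],
     [0.3, 0.9]]
C = closure(P)
# r_Adv(s) = max over t of min(P⁺(s, t), P⁺(t, t)).
r = [max(min(C[s][t], C[t][t]) for t in range(len(P))) for s in range(len(P))]
print(r)  # [0.8, 0.9]
```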

5.1. Model Transformation

In the model transformation, Algorithm 1 is first utilized to convert the GPTIS model into a GPDP model. The GPDP is used to describe the fuzzy and uncertain behaviors of the system, and each state has at least one enabled action. The transformation process mainly includes two aspects:
(1)
Transformation between trust accessibility relations and action sets: The trust accessibility relations in the GPTIS are transformed into corresponding action sets in the GPDP.
(2)
Transformation of different actions into different possibility transition distributions: The different actions in the GPTIS are mapped to different possibility transition distributions in the GPDP. During the transformation process, the states, the set of initial global states, the atomic propositions, and the labeling function remain unchanged.
Algorithm 1 GPTIS is converted to GPDP
Require: GPTIS M = (S, Act, P, I, L, AP, ∼_{i→j}, v_i)
Ensure: GPDP M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d)
1: S_d := S;
2: I_d := I;
3: AP_d := AP;
4: L_d := L;
5: The action set Act_d is defined as Act_d = {σ, δ}:
6: (1) Set σ = {σ_1, σ_2, …, σ_n}: represents the set of joint actions performed by a group of agents.
7: (2) Set δ = {δ_11, δ_12, …, δ_ij, …, δ_nn} (n = |Agt|): represents the trust accessibility relations labeled as trust actions. Specifically, δ_ij denotes that agent i performs a trust action towards agent j.
8: The fuzzy transition function P_d combines the temporal transition function and the transition function based on trust accessibility relations: for states s, s′ ∈ S and θ ∈ Act_d,
9: if θ ∈ σ then
10:   P_d(s, θ, s′) := P(s, σ, s′)
11: else
12:   P_d(s, θ, s′) := ∼_{i→j}(s, s′)
13: end if
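A minimal sketch of Algorithm 1 with transitions stored as dictionaries; the state names, possibility values, and the helper gptis_to_gpdp are our own illustration, not the paper's implementation:

```python
def gptis_to_gpdp(P, trust):
    """P: {(s, 'sigma', t): value} temporal transitions under joint actions;
    trust: {(i, j, s, t): value} trust accessibility degrees between agents i, j."""
    Pd = dict(P)                          # joint actions σ are kept unchanged
    for (i, j, s, t), v in trust.items():
        Pd[(s, f"delta_{i}{j}", t)] = v   # trust relation becomes trust action δ_ij
    return Pd

P = {("s0", "sigma", "s1"): 0.9}
trust = {(1, 2, "s0", "s1"): 0.7}
print(gptis_to_gpdp(P, trust))
# {('s0', 'sigma', 's1'): 0.9, ('s0', 'delta_12', 's1'): 0.7}
```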
To address the uncertainty in actions within the GPDP, Algorithm 2 introduces a scheduling function to transform the GPDP into a deterministic GPKS model. This transformation allows for the resolution of the maximum possibility problem. The scheduling function is defined as follows:
Let M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d) be a finite GPDP. Define the function Adv : S_d → Act_d as the scheduler of M_d. For any global state s ∈ S_d, Adv(s) ∈ Act_d(s). We define two different scheduling functions: Adv_σ is used to interpret temporal formulas and Adv_δ is used to interpret trust formulas.
(1)
Adv_σ : S_d → σ. For any state s ∈ S_d, given the scheduling function Adv_σ, the action σ is selected at state s, denoted Adv_σ(s) = σ. The fuzzy transition matrix is defined as P_Adv_σ, such that
P_Adv_σ(s, t) = P(s, Adv_σ(s), t), s, t ∈ S_d
(2)
Adv_δ : S_d → δ. For any state s ∈ S_d, given the scheduling function Adv_δ, the action δ is selected at state s, denoted Adv_δ(s) = δ. The fuzzy transition matrix is defined as P_Adv_δ, such that
P_Adv_δ(s, t) = P(s, Adv_δ(s), t), s, t ∈ S_d
Based on the selection of different actions by the scheduling function, we obtain a deterministic GPKS model.
Algorithm 2 GPDP is converted to GPKS
Require: GPDP M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d)
Ensure: GPKS M_k = (S_k, P_k, I_k, AP_k, L_k)
1: S_k := S_d;
2: I_k := I_d;
3: AP_k := AP_d;
4: L_k := L_d;
5: The possibility transition distribution P_k is defined as P_k ∈ {P_Adv_σ, P_Adv_δ}:
6: if Adv = Adv_σ then
7:   P_Adv_σ(s, t) = P(s, Adv_σ(s), t), s, t ∈ S_d;
8: else
9:   P_Adv_δ(s, t) = P(s, Adv_δ(s), t), s, t ∈ S_d;
10: end if
Any GPKS can be viewed as a GPDP with only one available action (either the maximum likelihood action or the minimum likelihood action) per state. Therefore, GPKS is a special case of GPDP. Consequently, the semantic interpretation of GPoCTL remains consistent across both models. Figure 4 presents an illustrative example of the stepwise model transformation, starting from GPTIS, proceeding through GPDP, and finally reaching GPKS.
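The scheduler step of Algorithm 2 can be sketched in the same dictionary style: fixing one action per state collapses the action-indexed GPDP transitions into a state-to-state GPKS matrix. The data layout and names below are our own illustration, not the paper's implementation:

```python
def schedule(Pd, adv):
    """Keep only the transitions whose action matches the scheduler's choice adv(s)."""
    return {(s, t): v for (s, a, t), v in Pd.items() if a == adv(s)}

Pd = {("s0", "sigma", "s1"): 0.9, ("s0", "delta_12", "s1"): 0.7}
print(schedule(Pd, lambda s: "sigma"))     # Adv_σ: {('s0', 's1'): 0.9}
print(schedule(Pd, lambda s: "delta_12"))  # Adv_δ: {('s0', 's1'): 0.7}
```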

5.2. GPTCTL to GPoCTL Formula Reduction

To simplify the formula using Algorithm 3, it is necessary to ensure that the simplified formula retains the same semantics as the original formula, i.e., Φ_GPTIS(M, s) = Φ_GPKS(M_k, s). The syntax definition of GPoCTL logic is given in [10].
Algorithm 3 GPTCTL Formula Simplification Algorithm
Require: GPTCTL state formula (Φ)_GPTCTL
Ensure: GPoCTL state formula (Φ)_GPoCTL
1: Case (Φ)_GPTCTL of
2: p: return p
3: ¬Φ: return ¬Φ
4: Φ_1 ∧ Φ_2: return Φ_1 ∧ Φ_2
5: GPo(ψ): return (GPo(ψ))_Adv_σ
6: T_p(i, j, Φ_1, Φ_2): return ((Φ_1 ∧ ¬Φ_2) ∧ GPo(◯Φ_2))_Adv_δ
7: T_c(i, j, Φ_1, Φ_2): return ¬Φ_2 ∧ (GPo(◯(Φ_1 ∧ Φ_2)))_Adv_δ
8: End case
Given the scheduling function Adv_σ and state formulas Φ, Φ_1, Φ_2, let the translation function be ξ : Φ → Φ_GPKS. The GPTCTL temporal formulas are converted to GPoCTL formulas as follows:
ξ(a) = a
ξ(¬Φ) = ¬Φ
ξ(Φ_1 ∧ Φ_2) = Φ_1 ∧ Φ_2
ξ(GPo(φ)) = (GPo(φ))_Adv_σ
When the scheduling function Adv_σ is given, it captures only the temporal transitions in each state of the model. Since GPTCTL is an extension of GPoCTL with trust operators, Formulas (11)–(14) do not require further simplification.
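The translation ξ, together with the trust cases of Algorithm 3, can be sketched as a recursive rewrite over formulas encoded as nested tuples; this encoding and the tag names (e.g. "adv_delta") are our own illustration:

```python
def reduce_formula(phi):
    """Rewrite GPTCTL trust operators into next-operator GPoCTL formulas."""
    op = phi[0]
    if op in ("atom", "true"):
        return phi
    if op == "Tp":  # T_p(i,j,Φ1,Φ2) -> ((Φ1 ∧ ¬Φ2) ∧ GPo(◯Φ2)) under Adv_δ
        _op, i, j, p1, p2 = phi
        return ("adv_delta",
                ("and", ("and", reduce_formula(p1), ("not", reduce_formula(p2))),
                 ("GPo", ("next", reduce_formula(p2)))))
    if op == "Tc":  # T_c(i,j,Φ1,Φ2) -> ¬Φ2 ∧ (GPo(◯(Φ1 ∧ Φ2))) under Adv_δ
        _op, i, j, p1, p2 = phi
        return ("and", ("not", reduce_formula(p2)),
                ("adv_delta", ("GPo", ("next", ("and", reduce_formula(p1),
                                                reduce_formula(p2))))))
    # Temporal connectives pass through unchanged, reducing subformulas.
    return (op,) + tuple(reduce_formula(x) for x in phi[1:])

print(reduce_formula(("Tp", 1, 2, ("atom", "paid"), ("atom", "shipped"))))
```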
Theorem 1.
Given a scheduling function Adv_δ and a transition function ζ : Φ → Φ_GPKS, the GPTCTL trust formulas are converted into quantitatively computable temporal formulas as follows:
ζ(T_p(i, j, Φ_1, Φ_2)) = ((Φ_1 ∧ ¬Φ_2) ∧ GPo(○Φ_2))_{Adv_δ}   (15)
ζ(T_c(i, j, Φ_1, Φ_2)) = (¬Φ_2 ∧ GPo(○(Φ_1 ∧ Φ_2)))_{Adv_δ}   (16)
When the scheduling function is Adv_δ, the GPKS model selects only the possibility transition distribution labeled with the action δ_{i j}. According to the intuitive interpretation of trust, for two states connected by the trust accessibility relation, the current state must satisfy the trust content Φ_1, and the next state must satisfy the trust content Φ_2. This explains why the trust formula is translated into the next operator.
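The case analysis of Algorithm 3 and the rules of Theorem 1 can be sketched as a recursive rewriter over tuple-encoded formulas (a Python illustration of ours; the tuple encoding and tag names such as "sched" and "next" are not the paper's notation):

```python
def reduce_formula(phi):
    """Rewrite GPTCTL trust operators into GPoCTL next-step formulas (cf. Eqs. (15)-(16))."""
    op = phi[0]
    if op == "ap":                          # atomic proposition: unchanged
        return phi
    if op == "not":
        return ("not", reduce_formula(phi[1]))
    if op == "and":
        return ("and", reduce_formula(phi[1]), reduce_formula(phi[2]))
    if op == "GPo":                         # temporal formula: evaluate under Adv_sigma
        return ("sched", "sigma", phi)
    if op == "Tp":                          # Tp(i,j,F1,F2) -> ((F1 & !F2) & GPo(X F2)) under Adv_delta
        _, _i, _j, f1, f2 = phi
        return ("sched", "delta",
                ("and", ("and", f1, ("not", f2)), ("GPo", ("next", f2))))
    if op == "Tc":                          # Tc(i,j,F1,F2) -> (!F2 & GPo(X(F1 & F2))) under Adv_delta
        _, _i, _j, f1, f2 = phi
        return ("sched", "delta",
                ("and", ("not", f2), ("GPo", ("next", ("and", f1, f2))))) 
    raise ValueError(f"unknown operator {op!r}")

tp = reduce_formula(("Tp", "j", "i1", ("ap", "F"), ("ap", "B1")))
tc = reduce_formula(("Tc", "i1", "j", ("ap", "B1"), ("ap", "F")))
```

The rewriter touches each subformula once, which is also the basis of the linear bound in Lemma 2.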
Proof. 
Let M_d = (S_d, Act_d, P_d, I_d, AP_d, L_d) be a finite GPDP, and let D_Φ be the |S| × |S| fuzzy diagonal matrix of the state formula Φ. For any s, t ∈ S_d:
D_Φ(s, t) = Φ(s), if s = t; 0, otherwise.
P_Φ denotes a fuzzy matrix of size |S| × 1, and E denotes the |S| × 1 column vector whose entries are all equal to 1.
Define r_Adv : S_d → [0, 1] as r_Adv(s) = ⋁ { ⋀_{i ≥ 0} P_Adv(s_i, σ_i, s_{i+1}) | s_0 = s, s_i ∈ S, σ_i ∈ Act }, i.e., the maximum over all Adv-paths starting from s of the minimum possibility along the path. The proof of the computation method is given in [9].
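Under max–min semantics, r_Adv need not be computed by enumerating infinite paths; assuming the standard greatest-fixpoint characterization r(s) = max_t min(P_Adv(s, t), r(t)), it can be iterated from r ≡ 1 until stabilization. A Python sketch of ours (names and values hypothetical):

```python
def path_possibility(P, states):
    """Greatest fixpoint of r(s) = max over t of min(P[(s,t)], r(t)), from r = 1."""
    r = {s: 1.0 for s in states}
    while True:
        new = {
            s: max((min(P.get((s, t), 0.0), r[t]) for t in states), default=0.0)
            for s in states
        }
        if new == r:          # values are drawn from a finite set, so this terminates
            return r
        r = new

# Toy scheduler-resolved transition possibilities (hypothetical values).
P = {("s0", "s0"): 0.4, ("s0", "s1"): 0.9, ("s1", "s1"): 0.8}
r = path_possibility(P, ["s0", "s1"])
```

Starting from the top element, the monotone operator only decreases, so the iteration reaches the greatest fixpoint after finitely many rounds.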
In GPoCTL syntax and semantics, the formula (Φ_1 ∧ GPo(○Φ_2))(s) denotes that, starting from state s under the scheduler Adv, there exists a path π such that the state formula Φ_1 is satisfied in state s and the next state satisfies the state formula Φ_2:
(Φ_1 ∧ GPo(○Φ_2))(s), ∀s ∈ S
= Φ_1(s) ∧ ⋁_{π ∈ Paths(s)} ( ○Φ_2(π) ∧ GPo(π) )
= Φ_1(s) ∧ ⋁_{s_i ∈ S} ( P(s, σ, s_i) ∧ Φ_2(s_i) ∧ ⋀_{i ≥ 0} P(s_i, σ, s_{i+1}) )
= Φ_1(s) ∧ ⋁_{s_i ∈ S} ( P_{Adv_σ}(s, s_i) ∧ Φ_2(s_i) ∧ ⋀_{i ≥ 0} P_{Adv_σ}(s_i, s_{i+1}) )
= Φ_1(s) ∧ ⋁_{s_i ∈ S} ( P_{Adv_σ}(s, s_i) ∧ Φ_2(s_i) ∧ r_{Adv_σ}(s_i) )
= ( (D_{Φ_1} E) ∧ (P_{Adv_σ} D_{Φ_2} r_{Adv_σ}) )(s)
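Numerically, the last line of this derivation is a pointwise min of Φ_1 with a max–min product. A small Python sketch of ours (all possibility degrees are hypothetical; r is assumed precomputed):

```python
def next_value(phi1, phi2, P, r, states):
    """(Phi1 & GPo(X Phi2))(s) = min(Phi1(s), max over t of min(P[(s,t)], Phi2(t), r(t)))."""
    return {
        s: min(phi1[s],
               max((min(P.get((s, t), 0.0), phi2[t], r[t]) for t in states),
                   default=0.0))
        for s in states
    }

states = ["s0", "s1"]
phi1 = {"s0": 0.7, "s1": 0.2}              # possibility degrees of Phi1 per state
phi2 = {"s0": 0.3, "s1": 0.9}              # possibility degrees of Phi2 per state
P = {("s0", "s1"): 0.9, ("s1", "s1"): 0.8} # scheduler-resolved transitions
r = {"s0": 0.8, "s1": 0.8}                 # path values r_Adv, assumed given
out = next_value(phi1, phi2, P, r, states)
```

Here the ∧/∨ of the derivation become min/max, and the diagonal matrices D_Φ become pointwise lookups of the state values.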
In the syntax and semantics of GPTCTL, the trust formula T_p(i, j, Φ_1, Φ_2)(s), ∀s ∈ S, denotes that, in state s, given that Φ_1 holds, agent i trusts agent j to bring about the satisfaction of Φ_2, and its value is computed as follows:
T_p(i, j, Φ_1, Φ_2)(s), ∀s ∈ S
= (Φ_1 ∧ ¬Φ_2)(s) ∧ ⋁_{π ∈ Paths(s), s ↝_{i→j} s_i} ( ↝_{i→j}(s, s_i) ∧ Φ_2(s_i) ∧ GPo(π) )
= (Φ_1 ∧ ¬Φ_2)(s) ∧ ⋁_{s_i ∈ S, δ ∈ Act_d} ( P(s, δ, s_i) ∧ Φ_2(s_i) ∧ ⋀_{i ≥ 0} P(s_i, δ, s_{i+1}) )
= (Φ_1 ∧ ¬Φ_2)(s) ∧ ⋁_{s_i ∈ S} ( P_{Adv_δ}(s, s_i) ∧ Φ_2(s_i) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) )
= (Φ_1 ∧ ¬Φ_2)(s) ∧ ⋁_{s_i ∈ S} ( P_{Adv_δ}(s, s_i) ∧ Φ_2(s_i) ∧ r_{Adv_δ}(s_i) )
= ( (D_{Φ_1 ∧ ¬Φ_2} E) ∧ (P_{Adv_δ} D_{Φ_2} r_{Adv_δ}) )(s)
= ( (Φ_1 ∧ ¬Φ_2) ∧ GPo(○Φ_2) )(s), ∀s ∈ S
In the syntax and semantics of GPTCTL, the trust formula T_c(i, j, Φ_1, Φ_2)(s), ∀s ∈ S, denotes that, in state s, when Φ_1 holds, agent i trusts agent j to bring about the satisfaction of Φ_2, and its value is computed as follows:
T_c(i, j, Φ_1, Φ_2)(s), ∀s ∈ S
= ¬Φ_2(s) ∧ ⋁_{π ∈ Paths(s), s ↝_{i→j} s_i} ( ↝_{i→j}(s, s_i) ∧ (Φ_1 ∧ Φ_2)(s_i) ∧ GPo(π) )
= ¬Φ_2(s) ∧ ⋁_{s_i ∈ S, δ ∈ Act_d} ( P(s, δ, s_i) ∧ (Φ_1 ∧ Φ_2)(s_i) ∧ ⋀_{i ≥ 0} P(s_i, δ, s_{i+1}) )
= ¬Φ_2(s) ∧ ⋁_{s_i ∈ S} ( P_{Adv_δ}(s, s_i) ∧ (Φ_1 ∧ Φ_2)(s_i) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) )
= ¬Φ_2(s) ∧ ⋁_{s_i ∈ S} ( P_{Adv_δ}(s, s_i) ∧ (Φ_1 ∧ Φ_2)(s_i) ∧ r_{Adv_δ}(s_i) )
= ( (D_{¬Φ_2} E) ∧ (P_{Adv_δ} D_{Φ_1 ∧ Φ_2} r_{Adv_δ}) )(s)
= ( ¬Φ_2 ∧ GPo(○(Φ_1 ∧ Φ_2)) )(s), ∀s ∈ S
Therefore, the trust formulas can be transformed into the following equivalent matrix expressions:
T_p(i, j, Φ_1, Φ_2)(s) = ( (Φ_1 ∧ ¬Φ_2) ∧ GPo(○Φ_2) )(s), ∀s ∈ S
T_c(i, j, Φ_1, Φ_2)(s) = ( ¬Φ_2 ∧ GPo(○(Φ_1 ∧ Φ_2)) )(s), ∀s ∈ S
This proves the validity of Equations (15) and (16).    □
Model transformation is performed with Algorithms 1 and 2, and formula simplification with Algorithm 3, which together reduce GPTCTL model checking to GPoCTL model checking. The procedure GPoCTLCheck(M_k, s, Φ) is given in [10]. The resulting model checking algorithm for GPTCTL on the GPTIS model is shown in Algorithm 4.
Algorithm 4 GPTCTL model checking algorithm: GPTCTLCheck(M, s, Φ), ∀s ∈ S
Require: GPTIS M, state s, GPTCTL state formula Φ
Ensure: the truth value of Φ(s)
 1: Procedure GPTCTLCheck(M, s, Φ)
 2:     Step 1: Call Algorithm 1 to convert the GPTIS model into a GPDP model;
 3:     Step 2: Call Algorithm 2 to convert the GPDP model into a GPKS model;
 4:     Step 3: Call Algorithm 3 to convert the GPTCTL formula into a GPoCTL formula;
 5:     Step 4: Call GPoCTLCheck(M_k, s, Φ_GPoCTL);
 6: Return the truth value of Φ(s)
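Putting the pieces together, the following Python sketch evaluates a reduced trust formula T_p per Equation (15) on a toy scheduler-resolved GPKS. This is our own illustration, not the tool of [10]: all state values and transition possibilities are hypothetical, r_Adv is computed by the standard max–min fixpoint, and the fuzzy complement is taken as ¬Φ = 1 − Φ.

```python
def path_possibility(P, states):
    """Greatest fixpoint of r(s) = max over t of min(P[(s,t)], r(t))."""
    r = {s: 1.0 for s in states}
    while True:
        new = {s: max((min(P.get((s, t), 0.0), r[t]) for t in states), default=0.0)
               for s in states}
        if new == r:
            return r
        r = new

def check_Tp(phi1, phi2, P_delta, states):
    """Tp(i,j,Phi1,Phi2)(s) = ((Phi1 & !Phi2) & GPo(X Phi2))(s), with !x = 1 - x."""
    r = path_possibility(P_delta, states)
    return {
        s: min(phi1[s], 1.0 - phi2[s],
               max((min(P_delta.get((s, t), 0.0), phi2[t], r[t]) for t in states),
                   default=0.0))
        for s in states
    }

states = ["S0", "S1"]
P_delta = {("S0", "S1"): 0.9, ("S1", "S1"): 0.8}  # trust-action possibilities
F  = {"S0": 0.8, "S1": 0.1}                       # customer has paid (fuzzy degree)
B1 = {"S0": 0.2, "S1": 0.7}                       # merchant has shipped (fuzzy degree)
tp = check_Tp(F, B1, P_delta, states)
```

The four steps of Algorithm 4 correspond here to: fixing the δ-labeled distribution (Steps 1–2), using the reduced shape of Equation (15) (Step 3), and the max–min evaluation (Step 4).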

6. Complexity Analysis

This paper adopts an indirect generalized possibility model checking algorithm to verify properties in the GPTIS. The computational steps of the algorithm include:
(1)
Convert the GPTIS model into a GPDP model, then transform the GPDP model into a GPKS model. The time complexity of this transformation process is calculated;
(2)
Convert the GPTCTL formulas into GPoCTL formulas. The time complexity of this conversion is also calculated;
(3)
Calculate the time complexity of the GPoCTL model checking process.
Lemma 1.
The time complexity of the model transformation is linear in the size |M| of the input model, i.e., O(|M|).
Proof. 
To prove this, we use a Deterministic Turing Machine (DTM) to sequentially read all states in the input GPTIS model, along with their temporal transition relations and trust accessibility relations. The DTM then converts the trust accessibility relations into trust actions. Finally, the transformed states and actions are written on the output tape. The time complexity of the transformation process is therefore linear in the size of the input model. □
Lemma 2.
The time complexity of converting GPTCTL formulas to GPoCTL formulas is linear in the size |Φ| of the input formula, i.e., O(|Φ|).
Proof. 
The conversion steps from GPTCTL formulas to GPoCTL formulas are as follows:
(1)
Input GPTCTL formula Φ.
(2)
Expand the formula Φ into n layers (i.e., n subformulas) until a state formula with trust operators is encountered. This step is performed in linear time proportional to the size of the formula, i.e., O ( | Φ | ) .
(3)
Simplify the state formula with trust operators according to the reduction rules specified in Algorithm 3. The time complexity of this step is constant, i.e., O ( 1 ) .
(4)
Replace the state formula with the trust operator with the transformed GPoCTL state formula.
(5)
Repeat the above steps until the formula no longer contains any state formulas with trust operators. In summary, each subformula passes through Steps (2) to (4), so the process requires n iterations, where n is the number of subformulas. Since the number of subformulas is linear in the size of the formula, the overall time complexity of the conversion process is linear. □
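The linear relation between formula size and subformula count used above can be seen directly (a Python sketch with a tuple encoding of our own):

```python
def num_subformulas(phi):
    """Count subformulas of a tuple-encoded formula by one linear traversal."""
    return 1 + sum(num_subformulas(sub) for sub in phi[1:] if isinstance(sub, tuple))

# ("and", p, ("not", q)) has 4 subformulas: itself, p, !q, and q.
n = num_subformulas(("and", ("ap", "p"), ("not", ("ap", "q"))))
```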
Lemma 3.
The time complexity of the GPTCTL fuzzy model checking algorithm based on the GPTIS model is O(size(M) · poly(|S|) · |Φ|).
Proof. 
According to [10], the time complexity of GPoCTL model checking is O(size(M) · poly(|S|) · |Φ|). Combining Lemmas 1 and 2, the overall time complexity of the proposed algorithm is O(size(M) · poly(|S|) · |Φ|) + O(|M|) + O(|Φ|). Since the size of the Generalized Possibilistic Trust Interpreted System (GPTIS) model and of the formulas is linearly related to the size of the transformed models and formulas, the total time complexity of the algorithm is O(size(M) · poly(|S|) · |Φ|). □

7. Limitations and Practical Scalability

While our framework provides a sound formal basis for possibilistic trust verification, its practical scalability depends critically on the structure of the trust graph. Specifically, the construction of the GPDP model introduces a distinct trust action δ i j for each directed trust relation ( i , j ) in the GPTIS. For a system with N agents, the number of potential trust relations is O ( N 2 ) in the worst case (i.e., a complete trust graph). This leads to quadratic growth in the action space, which in turn affects the size of the transition relation in the GPKS model and increases the computational burden of model checking.
This limitation implies that the current approach is most suitable for:
  • Sparse trust networks, where each agent maintains trust relationships with only a small, bounded number of peers (e.g., d-regular graphs with constant d);
  • Bounded-neighborhood verification, where properties are localized (e.g., “agent i trusts someone within 2 hops”), allowing the model to be restricted to a subgraph;
  • Offline or moderate-scale deployment, such as in consortium blockchains, smart grids, or team-based robotics, where N is typically tens to hundreds.
To mitigate scalability challenges in larger systems, several practical strategies can be employed:
  • Trust pruning: Remove trust edges below a possibility threshold or older than a time window;
  • Aggregation: Group agents into roles or clusters and reason about inter-group trust;
  • Symbolic representation: Encode trust relations using BDDs or first-order structures to avoid explicit enumeration.
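Trust pruning, the first of these strategies, is straightforward to sketch (Python; the edge encoding and threshold are our own hypothetical illustration):

```python
def prune_trust(trust_edges, threshold):
    """Drop trust edges whose possibility degree falls below the threshold."""
    return {edge: v for edge, v in trust_edges.items() if v >= threshold}

# Hypothetical trust graph: (truster, trustee) -> possibility.
edges = {("i1", "j"): 0.8, ("i2", "j"): 0.3, ("j", "i1"): 0.9}
kept = prune_trust(edges, 0.5)
```

Pruning directly shrinks the O(N^2) action space discussed above, since each removed edge removes one trust action from the GPDP.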
We emphasize that optimizing for large-scale dense MASs is beyond the scope of this foundational work, which prioritizes semantic expressiveness and formal correctness. Nonetheless, we view scalability as a critical direction for future research.

8. Illustrative Examples

Since there is currently no benchmark logic that supports the joint modeling of fuzziness and trust, and because the generalized possibilistic semantics adopted by GPTCTL fundamentally differ from both the Boolean semantics of classical logic and the probabilistic semantics of probabilistic logic, numerical results across these frameworks are not directly comparable. This case study employs a multi-agent interaction scenario to verify the feasibility of the reasoning process of the proposed logic and to examine whether its theoretical computations fall within the range of outcomes observed in simulation experiments, thereby demonstrating the theoretical viability of GPTCTL.
It should be emphasized that the correctness of the proposed approach has been rigorously established through the formal derivations and complexity analysis presented in Section 5 and Section 6. The simulation experiments in this section are not intended to replace formal verification; rather, they aim to demonstrate the reasonableness of the computed results and the semantic interpretability of the method in representative application scenarios.

8.1. Multi-Agent Shopping Interaction Model

This section adopts a case study similar to the online shopping system presented in [13] to illustrate the application of the GPTCTL model checking method. Below are the specific steps for constructing the model.
In the online shopping system, there are three key roles: merchants i 1 and i 2 , and a customer j.
  • Global state: S = { S 0 , S 1 , S 2 , S 3 } , where S 0 represents the initial state of the system.
  • Atomic proposition sets for agents i 1 , i 2 , j :
    (1)
    For agents i 1 , i 2 , the set of atomic propositions is defined as A P i 1 = { A 1 , B 1 } , A P i 2 = { A 2 , B 2 } . Among them, atomic proposition A represents that the merchant is idle, and atomic proposition B represents that the merchant has shipped the order;
    (2)
    For agent j, the set of atomic propositions is defined as A P j = { C , D , F } , where C represents that the customer gives positive feedback, D represents that the order is delivered quickly, and F represents that the customer has made the payment;
    In the global states, the atomic propositions have fuzzy values. For example, A 1 = 0.7 represents that the possibility of merchant i 1 being idle in state s 0 is 0.7.
  • Joint actions between global states are denoted by σ ;
  • There exists a possibility of transition distribution between each pair of global states, represented by solid arrows in the diagram;
  • Trust accessibility relations: Trust accessibility relations are represented by dashed arrows with arrowheads in the diagram. For example, the trust accessibility relation between the agents in states S_0 and S_1 is denoted S_0 ⇢⟨0.8, 0.6, 0.9, 0.8⟩ S_1. Here, ⟨0.8, 0.6, 0.9, 0.8⟩ gives the possibility values of the trust accessibility relations from Agents i_1 and i_2 to Agent j, and from Agent j to Agents i_1 and i_2, i.e., ⟨↝_{i_1→j}(s_0, s_1) = 0.8, ↝_{i_2→j}(s_0, s_1) = 0.6, ↝_{j→i_1}(s_0, s_1) = 0.9, ↝_{j→i_2}(s_0, s_1) = 0.8⟩. A value of null indicates that there is no trust accessibility relation between the agents;
  • For vector v i , it is assumed by default that the trust accessibility relations correspond to the same level of trust between each pair of agents. This is not explicitly shown in the diagram, i.e., l i ( s 0 ) ( v i ( j ) ) = l i ( s 1 ) ( v i ( j ) ) , and is detailed in Definition 8.
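The ingredients listed above can be collected into a single structure (a Python sketch; only the values quoted in the text are filled in, and the dictionary encoding is ours, not the paper's):

```python
# Shopping-system GPTIS fragment as plain data.
gptis = {
    "states": ["S0", "S1", "S2", "S3"],
    "agents": ["i1", "i2", "j"],
    "ap": {"i1": ["A1", "B1"], "i2": ["A2", "B2"], "j": ["C", "D", "F"]},
    # Fuzzy labelling: the possibility of merchant i1 being idle in S0 is 0.7.
    "label": {("S0", "A1"): 0.7},
    # Trust accessibility between S0 and S1, as quoted in the text:
    # (truster, trustee, source state, target state) -> possibility.
    "trust": {
        ("i1", "j", "S0", "S1"): 0.8,
        ("i2", "j", "S0", "S1"): 0.6,
        ("j", "i1", "S0", "S1"): 0.9,
        ("j", "i2", "S0", "S1"): 0.8,
    },
}
```

Algorithm 1 would then turn each "trust" entry into a δ-labeled action of the GPDP with the same possibility degree.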
The GPTIS model in Figure 5 is converted into the GPDP model in Figure 6. In the conversion, δ_{i_1 j}, δ_{i_2 j}, δ_{j i_1}, δ_{j i_2} represent the trust actions between merchants i_1, i_2 and customer j. S_0 →^{δ_{j i_1}, 0.9} S_1 denotes a trust action from customer j to merchant i_1 between global state S_0 and global state S_1, with a possibility of 0.9.
The trust formulas generated in action δ_{j i_1} are as follows:
  • T p ( j , i 1 , F , B 1 ) ( S 0 ) = 0.7 represents that in state S 0 , the possibility of merchant i 1 shipping the order, given that customer j trusts merchant i 1 and the customer has made the payment, is 0.7.
  • T p ( j , i 1 , F , D ) ( S 0 ) = 0.3 represents that in state S 0 , the possibility of a fast order delivery, given that customer j trusts merchant i 1 and the customer has made the payment, is 0.3.
  • T_c(j, i_1, ¬F, F ∧ B_1)(S_0) = 0.1 represents that in state S_0, the possibility of the customer making the payment at the time of shipment, given that customer j trusts merchant i_1 and the customer has not yet made the payment, is 0.1.
  • T_c(j, i_1, ¬F, F ∧ D)(S_0) = 0.1 represents that in state S_0, the possibility of a fast order delivery, given that customer j trusts merchant i_1 and the customer has not yet made the payment, is 0.1 (similarly for action δ_{j i_2}).
The trust formulas generated in action δ_{i_1 j} are as follows:
  • T p ( i 1 , j , D , C ) ( S 1 ) = 0.4 represents that in state S 1 , the possibility of the customer giving positive feedback, given that merchant i 1 trusts customer j and the order is delivered quickly, is 0.4.
  • T p ( i 1 , j , B 1 , F ) ( S 1 ) = 0.1 represents that in state S 1 , the possibility of the customer making the payment after the merchant has shipped the order, given that merchant i 1 trusts customer j, is 0.1.
  • T_c(i_1, j, ¬B_1, B_1 ∧ F)(S_1) = 0.3 represents that in state S_1, the possibility of the merchant shipping the order when the customer makes the payment, given that merchant i_1 trusts customer j and the merchant has not yet shipped the order, is 0.3.
  • T_c(i_1, j, ¬B_1, B_1 ∧ C)(S_1) = 0.3 represents that in state S_1, the possibility of the customer giving positive feedback, given that merchant i_1 trusts customer j and the merchant has not yet shipped the order, is 0.3 (similarly for action δ_{i_2 j}).
The GPDP model in Figure 6 is then converted into the GPKS model. By considering only the trust actions, two GPKS models, M_k = (S_k, P_k^max, I_k, AP_k, L_k) and M_k′ = (S_k, P_k^min, I_k, AP_k, L_k), can be obtained.
Their fuzzy matrices are as follows:
I = (1, 0, 0, 0)
P_δ^max =
  0.8 0.9 0.7 0.9
  0.9 0.8 0.8 0.9
  0.9 0.9 0.8 0.9
  0.9 0.9 0.8 0.9
P_δ^min =
  0.7 0.6 0.6 0.5
  0.7 0.6 0.5 0.5
  0.5 0.5 0.6 0.6
  0.4 0.4 0.6 0.7
(1) T_p(j, i_1, F, B_1)^max(S_0) = 0.6 and T_p(j, i_2, F, B_1)^min(S_0) = 0.6 mean that, in state S_0, given that customer j trusts the merchant and the customer has made the payment, the maximum possibility of the merchant shipping the order is 0.6, and the minimum possibility is also 0.6.
(2) T_c(i_2, j, ¬B_2, B_2 ∧ F)^max(S_3) = 0.3 and T_c(i_1, j, ¬B_1, B_1 ∧ F)^min(S_3) = 0.2 mean that, in state S_3, given that the merchant trusts customer j and the merchant has not yet shipped the order, the maximum possibility of the merchant shipping the order when the customer makes the payment is 0.3, and the minimum possibility is 0.2.
The specific meanings of the formulas are consistent with the explanations provided above and will not be reiterated here.

8.2. Simulation Modeling Using NetLogo

This study develops and implements a multi-agent trust interaction simulation system based on the NetLogo platform to model the dynamic interactions between a customer and two distinct sellers. In each simulation round, the customer randomly selects one of the sellers and randomly adopts one of eight trust evaluation types. Specifically, TP1 through TP8 denote the trust values computed between the customer and Seller A using the eight trust computation methods listed in the leftmost column of Table 2 in top-to-bottom order, while TP1b through TP8b denote the corresponding results for interactions with Seller B using the same set of methods.
In each round of interaction, the system not only updates the corresponding trust value but also adjusts the seller’s behavioral action probability and the internal state variables of the agents based on the feedback from the trust result. The state adjustment mechanism adopts a directionally consistent linear fine-tuning strategy in which each relevant state variable incrementally moves toward the current trust value. This design is intended to simulate the agents’ learning and behavioral adaptation over time. Figure 7 and Figure 8 illustrate the trust dynamics trends between a customer and sellers A and B, respectively.
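The "directionally consistent linear fine-tuning" update can be sketched as follows (Python; the step rate of 0.1 is a hypothetical parameter of ours, not a value reported by the paper):

```python
def adapt(state_value, trust_value, rate=0.1):
    """Move a state variable one small step toward the current trust value."""
    return state_value + rate * (trust_value - state_value)

# Repeated interactions drive the variable toward the prevailing trust value.
v = 0.2
for _ in range(50):
    v = adapt(v, 0.8)
```

Each application shrinks the gap to the trust value by a constant factor, which is what produces the converging trust-dynamics curves of Figures 7 and 8.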
The aforementioned partial scenario is computed using a fuzzy model checking algorithm.
TP1 = T_p(j, i_1, F, B_1)(s) = ((F ∧ ¬B_1) ∧ GPo(○B_1))(s) = ((D_{F ∧ ¬B_1} E) ∧ (P_{Adv} D_{B_1} r_{Adv}))(s) = (F ∧ ¬B_1)(s) ∧ E ∧ P(s_0, δ_{j i_1}, s_1) ∧ B_1(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.36
TP2 = T_p(j, i_1, F, D)(s) = ((F ∧ ¬D) ∧ GPo(○D))(s) = ((D_{F ∧ ¬D} E) ∧ (P_{Adv} D_D r_{Adv}))(s) = (F ∧ ¬D)(s) ∧ E ∧ P(s_0, δ_{j i_1}, s_1) ∧ D(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.30
TP3 = T_c(j, i_1, ¬F, F ∧ B_1)(s) = (¬F ∧ GPo(○(F ∧ B_1)))(s) = ((D_{¬F} E) ∧ (P_{Adv} D_{F ∧ B_1} r_{Adv}))(s) = ¬F(s) ∧ E ∧ P(s_0, δ_{j i_1}, s_1) ∧ (F ∧ B_1)(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.32
TP4 = T_c(j, i_1, ¬F, F ∧ D)(s) = (¬F ∧ GPo(○(F ∧ D)))(s) = ((D_{¬F} E) ∧ (P_{Adv} D_{F ∧ D} r_{Adv}))(s) = ¬F(s) ∧ E ∧ P(s_0, δ_{j i_1}, s_1) ∧ (F ∧ D)(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.31
TP5 = T_p(i_1, j, D, C)(s) = ((D ∧ ¬C) ∧ GPo(○C))(s) = ((D_{D ∧ ¬C} E) ∧ (P_{Adv} D_C r_{Adv}))(s) = (D ∧ ¬C)(s) ∧ E ∧ P(s_0, δ_{i_1 j}, s_1) ∧ C(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.31
TP6 = T_p(i_1, j, B_1, F)(s) = ((B_1 ∧ ¬F) ∧ GPo(○F))(s) = ((D_{B_1 ∧ ¬F} E) ∧ (P_{Adv} D_F r_{Adv}))(s) = (B_1 ∧ ¬F)(s) ∧ E ∧ P(s_0, δ_{i_1 j}, s_1) ∧ F(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.32
TP7 = T_c(i_1, j, ¬B_1, B_1 ∧ F)(s) = (¬B_1 ∧ GPo(○(B_1 ∧ F)))(s) = ((D_{¬B_1} E) ∧ (P_{Adv} D_{B_1 ∧ F} r_{Adv}))(s) = ¬B_1(s) ∧ E ∧ P(s_0, δ_{i_1 j}, s_1) ∧ (B_1 ∧ F)(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.34
TP8 = T_c(i_1, j, ¬B_1, B_1 ∧ C)(s) = (¬B_1 ∧ GPo(○(B_1 ∧ C)))(s) = ((D_{¬B_1} E) ∧ (P_{Adv} D_{B_1 ∧ C} r_{Adv}))(s) = ¬B_1(s) ∧ E ∧ P(s_0, δ_{i_1 j}, s_1) ∧ (B_1 ∧ C)(s_1) ∧ ⋀_{i ≥ 0} P_{Adv_δ}(s_i, s_{i+1}) = 0.36
The trust values computed by the model checking algorithm consistently lie within the ranges observed in the simulation experiments, thereby validating that the model behaves as intended. This confirms the accuracy and effectiveness of the proposed approach in reasoning about fuzzy trust relationships. Due to space constraints, only representative scenarios are presented; nevertheless, all verified cases exhibit consistent alignment with the simulation results.

9. Conclusions

To study the verification of trust properties in multi-agent systems under generalized possibility measures, this paper uses the GPTIS model to model the system and employs GPTCTL to describe the system properties. An indirect model checking algorithm is used to transform the GPTCTL model checking problem based on GPTIS into a GPoCTL model checking problem based on GPKS. The calculation method for trust formulas is derived, and the corresponding algorithms are provided along with an analysis of their time complexity. An example of an online shopping system is given to illustrate the application of the algorithm in practice. Next, we will develop specific application tools to perform verification using a more direct model checking approach. Additionally, we will incorporate commitment operators or reputation operators into the trust operators to enrich the system properties.

Author Contributions

R.H.: Conceptualization, methodology, formal analysis, investigation, algorithm design, complexity analysis, writing—original draft preparation. Z.M.: Conceptualization, supervision, validation, writing—review and editing, funding acquisition, project administration. N.H.: Investigation, formal verification support, discussion of trust semantics, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by Natural Science Foundation of Ningxia (Grant No. AAC03300), National Natural Science Foundation of China (Grant No. 61962001), Graduate Innovation Project of North Minzu University (Grant No. YCX24353).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Dahiya, A.; Aroyo, A.M.; Dautenhahn, K.; Smith, S.L. A survey of multi-agent human–robot interaction systems. Robot. Auton. Syst. 2023, 161, 104335. [Google Scholar] [CrossRef]
  2. Drew, D.S. Multi-agent systems for search and rescue applications. Curr. Robot. Rep. 2021, 2, 189–200. [Google Scholar] [CrossRef]
  3. Wang, J.; Hong, Y.; Wang, J.; Xu, J.; Tang, Y.; Han, Q.L.; Kurths, J. Cooperative and competitive multi-agent systems: From optimization to games. IEEE/CAA J. Autom. Sin. 2022, 9, 763–783. [Google Scholar] [CrossRef]
  4. Bao, G.; Ma, L.; Yi, X. Recent advances on cooperative control of heterogeneous multi-agent systems subject to constraints: A survey. Syst. Sci. Control Eng. 2022, 10, 539–551. [Google Scholar] [CrossRef]
  5. Seitz, M.; Gehlhoff, F.; Cruz Salazar, L.A.; Fay, A.; Vogel-Heuser, B. Automation platform independent multi-agent system for robust networks of production resources in industry 4.0. J. Intell. Manuf. 2021, 32, 2023–2041. [Google Scholar] [CrossRef]
  6. Haj Qasem, M.; Aljaidi, M.; Samara, G.; Alazaidah, R.; Alsarhan, A.; Alshammari, M. An intelligent decision support system based on multi agent systems for business classification problem. Sustainability 2023, 15, 10977. [Google Scholar] [CrossRef]
  7. Zhang, D.; Feng, G.; Shi, Y.; Srinivasan, D. Physical safety and cyber security analysis of multi-agent systems: A survey of recent advances. IEEE/CAA J. Autom. Sin. 2021, 8, 319–333. [Google Scholar] [CrossRef]
  8. Bentahar, J.; Drawel, N.; Sadiki, A. Quantitative group trust: A two-stage verification approach. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, 9–13 May 2022; pp. 100–108. [Google Scholar]
  9. Jøsang, A. A logic for uncertain probabilities. Int. J. Uncertainty Fuzziness Knowl. Based Syst. 2001, 9, 279–311. [Google Scholar]
  10. Li, Y.; Ma, Z. Quantitative computation tree logic model checking based on generalized possibility measures. IEEE Trans. Fuzzy Syst. 2015, 23, 2034–2047. [Google Scholar] [CrossRef]
  11. Ma, Z.; Li, Y. Model checking for generalized possibilistic computation tree logic based on decision processes. Sci. China Inf. Sci. 2016, 46, 1591–1607. [Google Scholar] [CrossRef]
  12. Li, X.; Ma, Z.; Mian, Z.; Liu, Z.; Huang, R.; He, N. Computation Tree Logic Model Checking of Multi-Agent Systems Based on Fuzzy Epistemic Interpreted Systems. Comput. Mater. Contin. 2024, 78, 4129–4152. [Google Scholar] [CrossRef]
  13. Ma, Z.; Li, X.; Liu, Z.; Huang, R.; He, N. Model checking fuzzy computation tree logic of multi-agent systems based on fuzzy interpreted systems. Fuzzy Sets Syst. 2024, 485, 108966. [Google Scholar] [CrossRef]
  14. Ma, Z.; Li, X.; Gao, Y.; Liu, Z. Model checking based on fuzzy multi-agent systems. J. Huazhong Univ. Sci. Technol. (Natural Sci. Ed.) 2024, 52, 64–71. [Google Scholar]
  15. Castelfranchi, C.; Falcone, R. Principles of trust for MAS: Cognitive anatomy, social importance, and quantification. In Proceedings of the International Conference on Multi Agent Systems (Cat. No. 98EX160); IEEE: New York, NY, USA, 1998; pp. 72–79. [Google Scholar]
  16. Herzig, A.; Lorini, E.; Hübner, J.F.; Vercouter, L. A logic of trust and reputation. Log. J. IGPL 2010, 18, 214–244. [Google Scholar] [CrossRef]
  17. Herzig, A.; Lorini, E.; Moisan, F. A simple logic of trust based on propositional assignments. In The Goals of Cognition. Essays in Honor of Cristiano Castelfranchi; College Publications: Suwanee, GA, USA, 2012; pp. 407–419. [Google Scholar]
  18. Chen, Z.; Jiang, Y.; Zhao, Y. A Collaborative Filtering Recommendation Algorithm Based on User Interest Change and Trust Evaluation. J. Digit. Content Technol. Its Appl. 2010, 4, 106–113. [Google Scholar] [CrossRef]
  19. Oliveira, E.; Cardoso, H.L.; Urbano, M.J.; Rocha, A.P. Normative monitoring of agents to build trust in an environment for b2b. In Proceedings of the Artificial Intelligence Applications and Innovations: 10th IFIP WG 12.5 International Conference, AIAI 2014, Rhodes, Greece, 19–21 September 2014; Proceedings 10; Springer: Berlin/Heidelberg, Germany, 2014; pp. 172–181. [Google Scholar]
  20. Wahab, O.A.; Bentahar, J.; Otrok, H.; Mourad, A. Towards trustworthy multi-cloud services communities: A trust-based hedonic coalitional game. IEEE Trans. Serv. Comput. 2016, 11, 184–201. [Google Scholar] [CrossRef]
  21. Liu, F.; Lorini, E. Reasoning about belief, evidence and trust in a multi-agent setting. In Proceedings of the International Conference on Principles and Practice of Multi-Agent Systems, Nice, France, 30 October–November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 71–89. [Google Scholar] [CrossRef]
  22. Leturc, C.; Bonnet, G. A normal modal logic for trust in the sincerity. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
  23. Drawel, N. Model Checking Trust-Based Multi-Agent Systems. Ph.D. Thesis, Concordia University, Montreal, QC, Canada, 2019. [Google Scholar]
  24. Ghasempouri, S.A.; Ladani, B.T. Model checking of robustness properties in trust and reputation systems. Future Gener. Comput. Syst. 2020, 108, 302–319. [Google Scholar] [CrossRef]
  25. Drawel, N.; Bentahar, J.; Laarej, A.; Rjoub, G. Formal verification of group and propagated trust in multi-agent systems. Auton. Agents Multi-Agent Syst. 2022, 36, 19. [Google Scholar] [CrossRef]
  26. Fagin, R.; Halpern, J.Y.; Moses, Y.; Vardi, M. Reasoning About Knowledge; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
Figure 1. The proposed approach.
Figure 3. Example of trust representation format.
Figure 4. An illustrative example of model transformation.
Figure 5. Example of trust representation format in the GPTIS model.
Figure 6. Example of trust representation format in the GPDP model.
Figure 7. Trust dynamics between customer and seller A under eight trust computation methods.
Figure 8. Trust dynamics between customer and Seller B under eight trust computation methods.
Table 1. Comparison between publications and evaluation criteria.
Approach  C1  C2  C3  C4  C5  C6  C7  C8
Castelfranchi-Falcone [15]----
Logic of Trust and Reputation [16,17]--
Norm-Based Trust [19]-----
Trust-Hedonic Game [20]----
Belief-Evidence-Trust Logic [21]-----
Normal Modal Logic for Trust in Sincerity [22]-----
Trust-Based Multi-Agent Systems [23]---
Our Work-
Table 2. Truth table for trust and action possibility based on Figure 5.
Trust Formula                      S_0   S_1   S_2   S_3
T_p(j, i_1, F, B_1)                0.7   0.3   0.4   0.2
T_p(j, i_1, F, D)                  0.3   0.3   0.3   0.3
T_c(j, i_1, ¬F, F ∧ B_1)           0.1   0.1   0.2   0.1
T_c(j, i_1, ¬F, F ∧ D)             0.1   0.1   0.2   0.1
T_p(i_1, j, D, C)                  0.7   0.4   0.4   0.5
T_p(i_1, j, B_1, F)                0.1   0.1   0.2   0.1
T_c(i_1, j, ¬B_1, B_1 ∧ F)         0.6   0.3   0.3   0.1
T_c(i_1, j, ¬B_1, B_1 ∧ C)         0.7   0.3   0.4   0.2

