In this section, we first provide an overview of the current state of the art in decentralized reputation systems. We then present an introduction to both blockchain and SMC (Secure Multi-Party Computation), including any modifications or adaptations that have been made, and highlight specific features that are suitable for our proposed use case. Finally, we provide definitions for relevant concepts related to the proposed system and discuss important security considerations.
2.1. Related Works
In privacy-preserving decentralized reputation systems (PDRSs), there are two primary approaches to ensuring privacy [13]. The first approach prioritizes user anonymity, while the second emphasizes feedback confidentiality. The distinction between the two can be summarized as follows [13]:
The first approach, known as user anonymity-oriented systems, assigns one or more pseudonyms to users that cannot be linked to their true identities. These systems allow users to conduct transactions and provide feedback openly, without hiding the feedback values, because neither can be traced back to their true identities.
The second approach, known as feedback confidentiality-oriented systems, assigns unique pseudonyms to each user, and feedback is kept private. These systems do not aim to conceal the identities of the users providing feedback, but rather to hide the specific feedback values. In theory, these systems should not reveal any information about feedback other than the aggregated reputation.
The current study adopts the second approach as it is more pragmatic and realistic. This is because, in reality, complete anonymity is not always feasible in everyday situations. For instance, on e-commerce platforms, even if anonymity can be maintained online, the exchange of physical goods sold on them may reveal customers’ identities. From this perspective, feedback-confidentiality systems are a more viable option for enabling users to give honest feedback without fear of retaliation.
Most traditional works in the field of PDRSs (e.g., [6,8,9,10]) focus on a scenario where a querying party, denoted as
${P}_{q}$, wishes to interact with a target party, denoted as
${P}_{t}$, but is uncertain of
${P}_{t}$’s trustworthiness. This may be due to a lack of information about
${P}_{t}$’s past behavior, or limited or outdated experience with
${P}_{t}$. Let
$\{{P}_{1}^{\left(t\right)},{P}_{2}^{\left(t\right)},\dots ,{P}_{N}^{\left(t\right)}\}$ represent the set of parties that possess reputation information about
${P}_{t}$, referred to as witnesses or source parties. In such cases,
${P}_{q}$ can consult a selected subset of source parties, namely
$\{{P}_{{i}_{1}}^{\left(t\right)},{P}_{{i}_{2}}^{\left(t\right)},\dots ,{P}_{{i}_{n}}^{\left(t\right)}\}$ (
$n\le N$), who will execute a protocol to compute
${P}_{t}$’s reputation score securely and then send the result to
${P}_{q}$.
In line with this setting, one of the earliest works in the field is presented in [6]. The authors proposed a system that relies on random witness selection and additive secret sharing. The system offers three different levels of security and demonstrates the feasibility of witness selection schemes that produce at least two honest witnesses with high probability. Despite being fully decentralized and suitable for general use, the system is not able to compute and store global reputation scores. Instead, each party in the system must retain its gathered information locally, and reputation is determined solely by feedback from neighboring parties. Additionally, the system demands the exchange of $O\left({n}^{2}\right)$ messages for each reputation request.
The authors of [7,9] expanded upon the work presented in [6] with the k-shares reputation system, designed for the semi-honest and malicious adversarial models, respectively. Their system enhances efficiency by reducing communication costs to
$O\left(n\right)$ messages. Furthermore, it increases the probability of keeping reputation information private by enabling users to choose witnesses with good reputation scores while avoiding those they do not trust.
The previous systems presented in [6,7,9] are not well-suited for dynamic networks due to a number of limitations. Specifically, in dynamic networks, the number of available source parties, or parties currently participating in the network, may be smaller than in a static network. Furthermore, when a party leaves the network, all of its reputation information becomes inaccessible, as each party stores its information locally. As a result, reputation is likely to be computed with a different set of present parties each time a party requests it, leading to inconsistent and changing reputation information at each request.
To address these challenges, the authors in [10] proposed a system that enables parties leaving the network to delegate their reputation information in order to prevent its loss. However, this approach comes with an increase in computation and communication costs and requires the leaving user to divide the entrusted information among a group of users (through secret sharing) before leaving. Additionally, if a member of the delegation group leaves, the information must be re-delegated, making the recovery and reconstruction of the delegated information more challenging as the number of parties involved increases and the data become fragmented.
Despite the efforts made to prevent the loss of reputation information, it is evident that the previously mentioned solutions remain incapable of computing and storing global reputation scores. These methods rely on users’ direct experiences and recommendations from neighbors and acquaintances, resulting in incomplete and inconsistent reputation data that are primarily shared locally.
To structure the literature review effectively, we assess and classify related works according to the following criteria:
- 1. Full Decentralization: Reputation systems that do not depend on central entities for the collection, computation, or dissemination of reputation scores [14]. Instead, the information is distributed among parties, who share it to evaluate the trustworthiness of potential transactional partners.
- 2. General-Purpose Reputation Systems: Systems designed to be utilized in various network environments, not limited to specific settings such as service providers/consumers in online marketplaces or servers/clients in IoT [6,9,10]. They are flexible enough to adapt to various networks, including P2P, MANETs, or WSNs.
- 3. Global Reputation Systems: Systems that collect ratings from all over the network and aggregate them into global reputation scores that are accessible to all users across the network [15].
- 4. Privacy: The ability of a reputation system to compute and disseminate reputation scores while preserving feedback privacy [7,10].
It is worth noting that many proposed systems in the literature are not general-purpose systems. Rather, they are tailored to specific contexts such as online marketplaces [16,17,18] or the Internet of Things (IoT) [19,20,21,22], where the network is divided into two distinct groups: ratees and raters. In online marketplaces, users are either consumers (raters) or service providers (ratees), while in IoT, they are either server nodes or clients. However, in other contexts, such as P2P networks, MANETs, VANETs, DSNs, and WSNs, users often have to play both the role of a rater and a ratee.
We emphasize that such proposed systems are often too specific or incompatible with fully distributed settings such as P2P networks, MANETs, or WSNs. One example is PrivBox [
23], a verifiable reputation system for online marketplaces. It enables consumers to rate retailers and submit their feedback in an encrypted form using homomorphic encryption to a public bulletin board (PBB). The system makes reputation information publicly accessible and verifiable without disclosing the individual ratings. However, the system leaves the reputation computation task to any customer who wishes to compute the reputation of a particular vendor. The system employs zero-knowledge proofs to demonstrate that the ratings are well-formed.
Another example of such a system is PrivRep [24], which builds upon the work presented in [23]. The system utilizes a public bulletin board (PBB) in combination with a reputation engine (RE) to calculate reputation from homomorphically encrypted feedback. The RE is controlled and operated by the marketplace, rather than regular users, and it has the authority to reject feedback deemed untrustworthy. It is evident that the use of two central entities, namely the RE for computation and the PBB for storage, undermines the decentralized nature of the proposed system.
Similarly, the system proposed in [
15] for the Social Internet of Things also relies on a PBB. However, the authors mention the possibility of implementing it as a blockchain or a mirrored server, which may address the issue of centralization and enhance the decentralized aspect of the system.
In [17], the authors proposed a blockchain-based cross-platform reputation system for e-commerce, referred to as RepChain. The system interconnects various e-commerce platforms and enables them to share their users’ reputations through a consortium blockchain. While the system is not entirely decentralized, as each platform relies on its own centralized entity, the top layer interconnecting the platforms is decentralized owing to the use of blockchain technology.
The authors in [25] propose a solution to the limitations of blockchain usage in Internet of Things (IoT) reputation systems, especially its lack of scalability. They introduce a distributed ledger combining the Tangle and blockchain as a reputation framework, intended to provide the scalability of the former and the maintainability of the latter. Consequently, the proposed ledger can handle a larger number of IoT devices and transactions.
Blockchain has found a wide range of applications owing to its outstanding features, such as security and reliability, especially in distributed settings [26,27,28,29]. Among these applications is fog computing, where blockchain can enable secure decentralized reputation systems and identity management [30].
Similarly, the authors in [
21] proposed a decentralized reputation management system for the Internet of Things (IoT) that takes into account geospatial information. The proposed system recognizes that the trustworthiness of a device can be affected by various factors, including its geographical location. The system utilizes a cloud–fog–edge architecture, in which the fog layer employs blockchain technology to create a decentralized network among fog nodes, allowing for transparent and decentralized management. The location-based system component stores geographical information in smart contracts, enabling reputation values to vary based on the device’s location.
In the field of VANETs, decentralized reputation systems were proposed in works such as [
31], where the authors utilized a Bayesian filter to enable nodes to detect malicious vehicles based on their trust scores. Additionally, the authors in [
32] proposed a two-layered blockchain-based reputation system comprising a local, one-day message blockchain and a global vehicle reputation blockchain. The local blockchain efficiently manages local traffic information, reducing the memory overhead for vehicles, while the global one maintains reputation scores.
Based on the literature review and the classification of related works according to four criteria, namely: Full Decentralization, General-Purpose, Global Reputation, and Privacy (as summarized in
Table 1), it is clear that a portion of the related works are fully decentralized, general-purpose reputation systems but do not maintain global reputation scores and rely on locally stored information. As a result, their produced reputation information is partial and inconsistent across the network, as it is limited to users’ direct experience and recommendations from neighbors and acquaintances. On the other hand, another portion of the related works targets specific settings and achieves global reputation and some form of decentralization, but is not general-purpose. The objective of the current work is to fill this gap by proposing a global reputation system that is both general-purpose and fully decentralized.
2.2. Blockchain
Typically, a blockchain can be seen as a distributed public database from which every user can read but to which not every user can write. Rather, users need to reach a consensus over the network before a write is accepted [33]. All actions that modify this database are recorded and broadcast to all users in blocks. Once a block is received and accepted by network users, it becomes immutable after a few subsequent blocks. Furthermore, blockchain is renowned for recording transactions efficiently in a verifiable and permanent way. Owing to this fact, it is considered an append-only database.
From another point of view, blockchain can be regarded as a data structure composed of an ordered list of blocks as depicted in
Figure 1. The result of all actions in all blocks at a given moment constitutes the state of the blockchain.
Let ${\left({\mathrm{B}}_{i}\right)}_{(0\le i\le l)}$ be a blockchain, where the first block ${\mathrm{B}}_{0}$ is the genesis block, while ${\mathrm{B}}_{l}$ is the last validated block. Each block ${\mathrm{B}}_{i}$ contains a cryptographic hash of its precedent block ${\mathrm{B}}_{i-1}$, which makes the blockchain resistant to modification by design. Once recorded, the data in any given block cannot be modified without altering all the previous blocks, which requires the consensus of the network majority. A blockchain is typically managed by a P2P network adhering to a communication protocol.
The ${i}^{\mathrm{th}}$ block ${\mathrm{B}}_{i}$ is composed of a block header ${\mathrm{H}}_{i}$ and a series of transactions ${\mathrm{T}}_{i}$. The block header ${\mathrm{H}}_{i}$ includes a collection of relevant data fields, whereas the transactions series ${\mathrm{T}}_{i}=(T{x}_{i}^{\left(1\right)},\dots ,T{x}_{i}^{\left(n\right)})$ is the list of transactions comprised in this block. Specifically, transactions are organized in each block as a Merkle tree data structure. In addition, the Merkle tree has its root hash called TransactionsRoot in the block header.
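As an illustration, the hash-linked block structure and the transactions’ Merkle tree described above can be sketched in a few lines of Python. This is a toy model, with SHA-256 as a stand-in hash and illustrative field names, not an actual blockchain implementation:

```python
import hashlib
import json

def h(data: bytes) -> str:
    """Toy stand-in for the block hash function (SHA-256 here)."""
    return hashlib.sha256(data).hexdigest()

def merkle_root(txs: list) -> str:
    """Simple Merkle root over transaction hashes,
    duplicating the last node on odd-sized levels."""
    level = [h(tx.encode()) for tx in txs]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def make_block(prev_hash: str, txs: list) -> dict:
    """Assemble B_i = (H_i, T_i): a header with PreviousBlockHash
    and TransactionsRoot, plus the transaction list."""
    header = {"PreviousBlockHash": prev_hash,
              "TransactionsRoot": merkle_root(txs)}
    return {"header": header, "txs": txs}

def block_hash(block: dict) -> str:
    """Hash of the block header, used to link the next block."""
    return h(json.dumps(block["header"], sort_keys=True).encode())

# A two-block toy chain; tampering with a recorded transaction
# is detected because the Merkle root no longer matches.
b0 = make_block("0" * 64, ["Tx(0,1)"])
b1 = make_block(block_hash(b0), ["Tx(1,1)", "Tx(1,2)"])
b0["txs"][0] = "Tx(0,1')"   # tamper with a confirmed transaction
tampered = merkle_root(b0["txs"]) != b0["header"]["TransactionsRoot"]
```

Any change to a confirmed transaction invalidates the block’s TransactionsRoot, and any change to a header invalidates every subsequent block through PreviousBlockHash, which is what makes the structure tamper-evident.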
For simplicity, a block can be formulated as ${\mathrm{B}}_{i}=\left({\mathrm{H}}_{i},{\mathrm{T}}_{i}\right)$, where
${\mathrm{H}}_{i}$ includes several data fields, among them [33,34]:
PreviousBlockHash: The hash of the previous block’s header.
StateRoot: The hash of the root node of the state tree.
TransactionsRoot: The hash of the root node of the transactions’ Merkle tree.
A transaction
$Tx$ is a single instruction constructed by a party and signed cryptographically. It is traditionally used to transfer a sum of coins (virtual money). In our context, it is also used to submit parties’ feedback to the blockchain and to join source parties’ lists (cf. Section 3). It contains some common data fields, namely:
Nonce: A scalar value equal to the number of transactions issued by the sender;
Recipient: The address of the recipient.
Type: The transaction type: $Transfer$ for coin transfers; $Rate$ for rating peers; and $Join$ for joining the recipient’s source parties list.
Data: A value that depends on the transaction type.
Signature: The transaction signature, also used to recover the sender’s address.
2.2.1. State
The state, as described in [34,35], is a mapping between parties’ addresses and their state accounts. It is implemented and maintained in the form of a modified Merkle tree. It is a simple database linked to blocks, but not stored on the blockchain. The state database can be considered a condensed version of the blockchain, as it only contains the essential information for regular users, specifically accounts and balances. It is the only data structure, in addition to the blocks’ headers, required for non-miner users to participate in the network. In the context of this work, reputation scores are also stored in the state. Ethereum is an example of a blockchain that implements a state database, unlike Bitcoin, which does not have an equivalent structure, as stated in [36].
We denote the state database as $\sigma $ and use a party’s address a to reference its account denoted by ${\sigma}^{\left(a\right)}$. In the context of this research, the account state includes the following data fields:
Nonce: The number of transactions sent from this address, denoted ${\sigma}_{n}^{\left(a\right)}$.
Balance: The number of coins owned by this address, denoted ${\sigma}_{b}^{\left(a\right)}$.
Reputation: The reputation score, denoted ${\sigma}_{r}^{\left(a\right)}$.
Weight: The number of feedback values received so far, denoted ${\sigma}_{w}^{\left(a\right)}$.
Source Parties List: The list of source parties’ addresses, denoted ${\sigma}_{l}^{\left(a\right)}\left[\phantom{\rule{4pt}{0ex}}\right]$.
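A minimal sketch of how a confirmed $Rate$ transaction could update an account state with the fields above, assuming an average-based reputation function and ignoring the privacy machinery discussed later (all names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Toy account state sigma^(a) with the fields listed above."""
    nonce: int = 0                 # sigma_n
    balance: int = 0               # sigma_b
    reputation: float = 0.0        # sigma_r
    weight: int = 0                # sigma_w: number of feedback values so far
    source_parties: list = field(default_factory=list)  # sigma_l

def apply_rate(state: dict, sender: str, target: str, feedback: int) -> None:
    """Apply a confirmed Rate transaction: fold the new feedback value
    into the target's average-based reputation and bump its weight."""
    acc = state[target]
    acc.reputation = (acc.reputation * acc.weight + feedback) / (acc.weight + 1)
    acc.weight += 1
    state[sender].nonce += 1       # the sender issued one more transaction

state = {"0xA": Account(), "0xT": Account()}
apply_rate(state, "0xA", "0xT", 4)
apply_rate(state, "0xA", "0xT", 2)
# state["0xT"] now holds reputation 3.0 with weight 2
```

Storing the pair (reputation, weight) rather than the bare score is what allows such incremental updates without re-reading all past feedback.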
Figure 2 shows all the blockchain components and how they are linked.
2.2.2. Consensus
The creation of blocks is controlled by a mechanism that varies between different blockchain algorithms. The first mechanism used to reach consensus [37] on newly created blocks was Proof-of-Work (PoW). The concept started with the blockchain/currency Bitcoin [12] and was followed by several alternative coins launched using similar ideas.
The mechanism of PoW requires block creators, called miners, to prove that they performed a certain amount of work to write to the blockchain. Usually called mining, this task consists of finding a partial collision using hash functions, which is a power-consuming process often requiring dedicated hardware [
38].
With the advent of PPCoin [39], developed further by BlackCoin [40], NXT [41], and NeuCoin [42], a new family of blockchain-based systems was born, replacing PoW with the concept of Proof-of-Stake (PoS).
With PoS, every participant randomly gains the right to write to the blockchain with a probability proportional to their stake, i.e., the number of coins they have staked. Therefore, the additional computing power used in PoW becomes unnecessary. Accordingly, PoS is less costly to run, and creating new blocks under it is much faster. After some of its inherent issues were addressed [39,40,42], it is perfectly reasonable today to maintain a distributed consensus using PoS. In this regard, we consider PoS more appropriate in our context than PoW.
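The stake-proportional selection underlying PoS can be sketched as a short simulation. This is a toy model of the selection rule only, not any particular chain’s validator-selection algorithm:

```python
import random

def select_validator(stakes: dict, rng: random.Random) -> str:
    """Pick the next block creator with probability proportional
    to each party's stake."""
    parties = list(stakes)
    weights = [stakes[p] for p in parties]
    return rng.choices(parties, weights=weights, k=1)[0]

rng = random.Random(42)                     # fixed seed for reproducibility
stakes = {"A": 60, "B": 30, "C": 10}        # illustrative stake distribution
wins = {p: 0 for p in stakes}
for _ in range(10_000):
    wins[select_validator(stakes, rng)] += 1
# empirically, A is selected in roughly 60% of rounds, B ~30%, C ~10%
```

The selection frequencies track the stake ratios, which is precisely the property that replaces PoW’s computational race.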
During the block creation process, miners, also known as validators, perform various tasks. Among them, they record new transactions and validate new blocks. Usually, a new block is validated at regular intervals. When transactions become part of an accepted block, they are considered confirmed. Consequently, all the concerned users’ state accounts are updated to reflect the changes made by the transactions in that block.
Whether the consensus mechanism uses mining in the case of PoW or validation in the case of PoS, block creation has a cost for miners/validators. This task is attractive to them only because they are rewarded by earning transaction fees. In this regard, it is crucial to highlight that the role of a reputation system is to support the network’s operation by establishing trust between the users and encouraging participation and good behavior. Thus, it is counterproductive to impose fees on users who provide their feedback, as it is already challenging to persuade them to do so without fees. Additionally, assuming there is some activity and there are transactions in the network other than rating and reputation (otherwise, the very existence of the network would be pointless), it is those types of transactions that should pay for the block creation process. Reputation functionality is an essential service, much like security, and cannot be compared with financial transactions.
2.3. Secure Multiparty Computation
Using SMC, a set of parties can collaboratively compute a function over their private inputs without disclosing them. With n parties ${\left({P}_{i}\right)}_{1\le i\le n}$, each holding a secret input ${x}_{i}$, and f an agreed-upon function that accepts n inputs, a protocol ${\Pi}$ is an SMC protocol if it allows ${\left({P}_{i}\right)}_{1\le i\le n}$ to compute $y=f({x}_{1},...,{x}_{n})$ while meeting the following criteria:
Correctness: The protocol ${\Pi}$ correctly computes the value of y;
Privacy: For all $1\le i\le n$, the $n-1$ parties ${\left({P}_{j}\right)}_{1\le j\ne i\le n}$ learn nothing about ${x}_{i}$ from the protocol beyond what can be inferred from y.
In order to compute a function f, which is typically represented as a Boolean or arithmetic circuit, one must evaluate the equivalent circuit gate by gate. There are currently two paradigms for SMC implementation: secret sharing [43,44,45,46] and garbled circuits [47,48,49]. Each paradigm has its own advantages and development trajectory. In our system, we use secret sharing, as it is better adapted to arithmetic circuits.
In the following paragraphs, we introduce the variant of SMC used in our system, which is adapted from [50] for arithmetic circuits. This protocol is considered in the semi-honest adversarial model. However, we do not use the entire protocol. Instead, we only include the elements of the protocol that are pertinent to our setting and particularly appropriate for the reputation-related functions we are focusing on (cf. Section 2.4.1).
Reputation functions typically determine reputation scores from ratings provided by users. They usually take the form of a sum, an average, or a weighted average, which we can represent as linear functions of the kind $f({x}_{1},\dots ,{x}_{n})={a}_{1}{x}_{1}+\dots +{a}_{n}{x}_{n}$ where ${\left\{{x}_{i}\right\}}_{1\le i\le n}$ are private values and ${\left\{{a}_{i}\right\}}_{1\le i\le n}$ are public. Remarkably, these functions employ only addition and multiplication by public values.
Let us assume that the desired function f is given as an arithmetic circuit composed of only addition and multiplication by public values.
In accordance with this protocol, secret-sharing a value $x\in {\mathbb{Z}}_{p}$ entails sampling $n-1$ uniform random shares ${\left\{{x}^{\left(i\right)}\right\}}_{1\le i\le n-1}\subset {\mathbb{Z}}_{p}$ and taking the ${n}^{\mathrm{th}}$ share as ${x}^{\left(n\right)}=x-{\sum}_{i=1}^{n-1}{x}^{\left(i\right)}\phantom{\rule{3.33333pt}{0ex}}\mathrm{mod}\phantom{\rule{0.277778em}{0ex}}p$. The outcome is an additive secret sharing of x, represented by $\left[x\right]=({\left[x\right]}_{1},\cdots ,{\left[x\right]}_{n})$, where ${\left[x\right]}_{i}={x}^{\left(i\right)}$ for $1\le i\le n$. Practically speaking, a party wishing to share its secret x with $n-1$ parties can generate $\left[x\right]$ as indicated and send a share ${\left[x\right]}_{i}$ to each party. The value of x remains private as long as the party keeps the ${n}^{\mathrm{th}}$ share secret, and adding the n shares is sufficient to recover x.
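A minimal Python sketch of this additive secret sharing over ${\mathbb{Z}}_{p}$ (the modulus and party count are illustrative choices):

```python
import random

P = 2**61 - 1  # a public prime modulus; toy choice for illustration

def share(x: int, n: int, rng: random.Random) -> list:
    """Additively secret-share x in Z_p: sample n-1 uniform shares,
    then set the n-th share to x minus their sum, mod p."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list) -> int:
    """Recover x by summing all n shares mod p."""
    return sum(shares) % P

rng = random.Random(0)
s = share(1234, 5, rng)
# any n-1 shares are uniformly distributed and reveal nothing about x;
# all n shares together reconstruct it exactly
```

The privacy argument is visible in the construction: each of the first $n-1$ shares is uniform, and the last one is masked by their sum, so any proper subset is statistically independent of x.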
The SMC protocol for n parties ${\left\{{P}_{i}\right\}}_{1\le i\le n}$ computing $y=f({x}_{1},\dots ,{x}_{n})$ from their respective private inputs, works as follows:
- 1.
Input: Every party ${P}_{i}$ secret-shares its private input ${x}_{i}$ by generating $\left[{x}_{i}\right]$ and sending ${\{{\left[{x}_{i}\right]}_{j}\}}_{1\le j\ne i\le n}$, respectively, to ${\left\{{P}_{j}\right\}}_{1\le j\ne i\le n}$ while keeping ${\left[{x}_{i}\right]}_{i}$ secret.
- 2.
Computation: Each party ${P}_{i}$ calculates f over the shares it received, ${\left[y\right]}_{i}=f({\left[{x}_{1}\right]}_{i},\dots ,{\left[{x}_{n}\right]}_{i})$, by evaluating the circuit’s operations in order of precedence. The operations are realized as follows:
- (a)
Addition: For example, $y={x}_{1}+{x}_{2}$ is realized by each party ${P}_{i}$ computing ${\left[y\right]}_{i}={\left[{x}_{1}\right]}_{i}+{\left[{x}_{2}\right]}_{i}$.
- (b)
Multiplication by a public value: For example, $y=a\times x$ for a public and x private is computed by each party ${P}_{i}$ evaluating ${\left[y\right]}_{i}=a{\left[x\right]}_{i}$.
- 3.
Output: A party ${P}_{i}$ can learn the result of computation y by each party ${P}_{j}$ sending ${\left[y\right]}_{j}$ to ${P}_{i}$ and party ${P}_{i}$ reconstructing $y={\left[y\right]}_{1}+\cdots +{\left[y\right]}_{n}$.
It is simple to check that the protocol computes f precisely. The protocol is secure against any passive adversary controlling up to $n-1$ parties. Indeed, the adversary cannot recover the value of x unless it knows all n shares in the representation $\left[x\right]$.
From the previously mentioned operations, the SMC protocol can handle any linear function $f({x}_{1},\dots ,{x}_{n})={a}_{1}{x}_{1}+\dots +{a}_{n}{x}_{n}$, which is amply sufficient for reputation functions.
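To illustrate, here is a toy end-to-end run of the protocol on a linear function, with a plain sum of three private feedback values as the reputation function and all parties simulated in one process (names and parameters are illustrative):

```python
import random

P = 2**61 - 1  # public prime modulus (toy choice)

def share(x: int, n: int, rng: random.Random) -> list:
    """Additive secret sharing of x in Z_p among n parties."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def linear_on_shares(coeffs: list, share_rows: list) -> list:
    """Each party i locally computes its output share
    [y]_i = a_1*[x_1]_i + ... + a_n*[x_n]_i (mod p)."""
    n = len(share_rows[0])
    return [sum(a * row[i] for a, row in zip(coeffs, share_rows)) % P
            for i in range(n)]

rng = random.Random(1)
inputs = [4, 2, 5]          # the parties' private feedback values
coeffs = [1, 1, 1]          # public coefficients (here: a plain sum)
rows = [share(x, 3, rng) for x in inputs]   # rows[j] = shares of x_j
y_shares = linear_on_shares(coeffs, rows)   # local computation per party
y = sum(y_shares) % P                        # output reconstruction
# y equals the sum of the private inputs, 11
```

Note that no communication is needed during the computation step itself: addition and multiplication by public constants are purely local on the shares, which is why linear reputation functions are so cheap in this model.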
The original protocol is broader than described above. It can handle the multiplication of two private values and, by extension, any arithmetic circuit since addition and multiplication form a complete basis for arithmetic circuits.
2.4. Problem Setting & Definitions
We model our environment as a multi-agent environment, where each agent represents a user or any device executing the necessary computation and communication on the user’s behalf. We often use the word “party” instead of “agent” without changing the meaning. Let
$\mathbb{P}$ be the set of all parties existing in the environment and
$N=\left|\mathbb{P}\right|$. We associate with each party
${P}_{i}\in \mathbb{P}$ an account that is controlled by a pair of private/public keys, denoted (
${p}_{r}^{\left(i\right)}$,
${p}_{u}^{\left(i\right)}$). A party and its associated account are usually identified by a short address
${a}_{i}$ derived from its public key
${p}_{u}^{\left(i\right)}$ by taking the right-most 160 bits of its 256-bit SHA-3 hash [34]: ${a}_{i}=h{\left({p}_{u}^{\left(i\right)}\right)}_{\left[96\dots 255\right]}$, where $h\left(\right)$ denotes the hash function.
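This address derivation can be sketched as follows. Note that Python’s hashlib.sha3_256 is the standardized SHA-3, which differs from Ethereum’s Keccak-256 in its padding rule, so it serves here purely as an illustrative stand-in:

```python
import hashlib

def address_from_pubkey(pub_key: bytes) -> str:
    """Derive a short address: the right-most 160 bits (20 bytes) of
    the 256-bit hash of the public key. hashlib.sha3_256 is standard
    SHA-3, not Ethereum's Keccak-256; used here for illustration only."""
    digest = hashlib.sha3_256(pub_key).digest()   # 32-byte hash
    return "0x" + digest[-20:].hex()              # keep the last 20 bytes

# dummy 512-bit (64-byte) public key, prefixed as an uncompressed point
addr = address_from_pubkey(b"\x04" + b"\x11" * 64)
# addr is "0x" followed by 40 hex characters
```

Truncating to 160 bits keeps addresses short while the hash still binds each address to a unique public key with overwhelming probability.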
2.4.1. Trust and Reputation
Let us introduce the following notations:
$\mathbb{T}\subseteq \mathbb{P}\times \mathbb{P}$ denote the set of all trust relationships between parties in $\mathbb{P}$, where $(a,t)\in \mathbb{T}$ (or $a\mathbb{T}t$) implies that the party a has a trust relationship towards a target party t. Mainly, $\mathbb{T}$ is a binary relation that is not necessarily symmetric, as trust is a directional relation.
$\mathbb{A}$ denotes the set of all actions, e.g., “upload authentic content”, or “report an event”.
$Exec$ refers to the function $Exec:\mathbb{P}\times \mathbb{A}\to \left\{true,false\right\}$ such that
$Exec\left(t,\psi \right)$ outputs
$true$ if party
t executes the action
$\psi $ anticipated by party
a, or outputs
$false$ if
t does not perform the anticipated action. Let the subjective probability
$\mathrm{Pr}{[Exec(t,\psi )=true]}_{a}$ denote party
a’s belief that party
t will accomplish the action.
Without loss of generality and for practicality’s sake, we can assign to the subjective belief mentioned above an equivalent integer value in the interval $[0,M]$, where M is a fixed positive integer. The integer value is obtained by normalizing the probability $\mathrm{Pr}{[Exec(t,\psi )=true]}_{a}$ to the scale $[0,M]$: multiplying by M, adding $1/2$, and taking the floor. The result is an integer in the range $[0,M]$.
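This normalization can be written directly (M = 100 is an illustrative choice of scale):

```python
import math

M = 100  # public upper bound of the trust scale (illustrative)

def trust_value(belief: float) -> int:
    """Map a subjective probability Pr[Exec(t, psi) = true]_a in [0, 1]
    to an integer trust value in [0, M]: floor(M * belief + 1/2)."""
    return math.floor(M * belief + 0.5)

# rounding to the nearest integer on the [0, M] scale
assert trust_value(0.0) == 0
assert trust_value(0.734) == 73
assert trust_value(1.0) == 100
```

Adding $1/2$ before taking the floor rounds to the nearest integer rather than always rounding down, so beliefs map symmetrically onto the scale.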
Definition 1 (Trust).
Let $\mathbb{P}$ be the set of all parties, $\mathbb{A}$ be the set of all actions and $a,t\in \mathbb{P}$. The trust of party a in party t, expressed as an integer on the scale $[0,M]$, is given as: ${\tau}_{t,\psi}^{\left(a\right)}=\lfloor M\cdot \mathrm{Pr}{[Exec(t,\psi )=true]}_{a}+1/2\rfloor $, where $\psi \in \mathbb{A}$ and $M\in {\mathbb{Z}}_{p}$ with p a prime number. A party a is said to be a source party of a target party t in the context of an action $\psi $ if a has trust in t in the context $\psi $. The set of all source parties of a party t in context $\psi $ is denoted ${\mathcal{S}}_{t,\psi}$. When the context (action) is clear, the notation ${\mathcal{S}}_{t}$ is used. We also refer to a’s trust in party t as a’s feedback on t or the rating.
Definition 2 (Reputation).
A reputation function is any chosen function $Rep$ such that $Rep:{[0,\phantom{\rule{4pt}{0ex}}M]}^{n}\to \mathbb{R}$ ($M\in {\mathbb{Z}}_{p}$). Let ${\mathcal{S}}_{t}=\{{a}_{1}...{a}_{n}\}$ be the set of source parties of party t in the context ψ. If $Rep$ is the adopted reputation function, then the reputation of party t in the context ψ is defined as: ${\rho}_{t,\psi}=Rep({\tau}_{t}^{\left({a}_{1}\right)},\dots ,{\tau}_{t}^{\left({a}_{n}\right)})$, where ${\tau}_{t}^{\left({a}_{i}\right)}$ is the trust of party ${a}_{i}$ in party t for $1\le i\le n$. ${\rho}_{t,\psi}$ is also denoted ${\rho}_{t}$ when the context is clear. Reputation, in general, is the outcome of evaluations from various sources. It is often represented as a function of these evaluations, such as the sum or average [2]. There are a variety of methods for aggregating reputation from ratings, including counting [51], probabilistic [52], discrete [53], flow [54], and fuzzy approaches [55]. However, a comprehensive examination of these methods is beyond the scope of this study. In this work, we adopt the counting approach for reputation, implemented as the average of feedback values due to its simplicity and ease of comprehension by human users. Other linear functions, such as the weighted average, could also be utilized without any alteration to the proposed system. The reputation is recorded on the blockchain as a pair
$({\sigma}_{r}^{\left(t\right)},{\sigma}_{w}^{\left(t\right)})$ (refer to
Section 2.2.1), where
${\sigma}_{r}^{\left(t\right)}={\rho}_{t}$ represents the reputation score and
${\sigma}_{w}^{\left(t\right)}=n$ represents the number of ratings, also known as weight. This way, the weight of this measure is preserved.
The weight of a reputation score is an essential factor in determining its overall value, as it reflects the number of ratings or evaluations that have been used to calculate the reputation score. The higher the weight, the more evaluations have been taken into account, making the reputation score more reliable and representative of the overall perception of an entity. To ensure that the weight of the reputation score is preserved, we record it alongside the reputation score on the blockchain. For further discussions on reputation aggregation, one can refer to the works of [1,2].
2.4.2. Security Definition and Adversary Model
In the following, we present the adversary model for our system, some important assumptions, and system requirements.
Adversary Model: In this paper, we consider the model of multiparty computation in the presence of static semi-honest adversaries. Parties in such a model are assumed to follow the protocol, but may try to learn more information than allowed during its execution, using intermediate information and their internal states. We refer to any coalition of dishonest parties as the adversary.
Random Number Generator & Hash Function: To achieve privacy in our system, all parties in the network are granted access to a random number generator, denoted RandGen(), and use the Keccak-256 algorithm [56] as the default hash function, denoted $h\left(\right)$. Keccak is a robust hashing algorithm at the core of SHA-3 and is also part of Solidity [57] and Ethereum.
Authentication: Our system uses public-key cryptography. It authenticates each user through digital signatures, enabling them to exchange messages and perform transactions. Every user obtains a public and a private key forming their digital identity. They use the private key to sign messages and transactions, linking them to their identity, while other users can verify the signer’s identity using the public key, which is visible to all network participants. A valid signature gives a recipient confidence that the message was created by a known sender (authenticity) and was not altered in transit (integrity). Formally, a signature scheme is a tuple of algorithms (Gen(), Sign(), Verify()), where Gen() generates a private key
${p}_{r}$ and a corresponding public key
${p}_{u}$; Sign() returns a tag
t on the inputs of the private key
${p}_{r}$, and a message
m; and Verify() outputs accepted or rejected on the inputs of a public key
${p}_{u}$, a message
m, and a tag
t. In our context, the system uses the elliptic curve digital signature algorithm (ECDSA) [
58] specifically, the recoverable version of it [
34] consisting of three functions that are
PUBKEY,
SIGN and
RECOVER, defined as follows:
$PUBKEY\left({p}_{r}\right)={p}_{u}$ returns ${p}_{u}$, a 512-bit public key on the input of a randomly generated 256-bit private key.
$SIGN(m,{p}_{r})=(v,r,s)$ returns a tag $(v,r,s)$ as a signature on the inputs of a private key ${p}_{r}$ and a message m.
$RECOVER(m,v,r,s)={p}_{u}$ returns the public key ${p}_{u}$ of the signer if $(v,r,s)$ is a valid signature or nothing otherwise.
Recoverable ECDSA is a variant of ECDSA that allows the public key used to produce a signature to be recovered from the signature itself. This is useful in certain situations, especially when many parties sign messages, as it enables a more efficient verification process without the need to store a copy of each party’s public key.
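To make the PUBKEY/SIGN/RECOVER interface concrete, the following is a minimal, illustrative pure-Python sketch of recoverable ECDSA over secp256k1. It is not the system’s implementation: `hashlib.sha3_256` (NIST SHA-3) stands in for Keccak-256 (the padding differs, so digests do not match Ethereum’s), the rare case where $r$ exceeds the group order is ignored, and the arithmetic is not constant-time, so this must not be used in production.

```python
import hashlib
import secrets

# secp256k1 domain parameters (public constants)
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def _add(p1, p2):
    """Affine point addition on y^2 = x^3 + 7; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def _mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = _add(acc, pt)
        pt = _add(pt, pt)
        k >>= 1
    return acc

def _hash(m):
    # Stand-in for Keccak-256: NIST SHA3-256 uses different padding.
    return int.from_bytes(hashlib.sha3_256(m).digest(), "big") % N

def PUBKEY(pr):
    """Public key (a curve point) from a 256-bit private key."""
    return _mul(pr, G)

def SIGN(m, pr):
    """Return a tag (v, r, s); v is the parity of R.y, used for recovery."""
    z = _hash(m)
    while True:
        k = secrets.randbelow(N - 1) + 1
        R = _mul(k, G)
        r = R[0] % N
        s = pow(k, -1, N) * (z + r * pr) % N
        if r and s:
            return (R[1] & 1, r, s)

def RECOVER(m, v, r, s):
    """Recover the signer's public key: pu = r^{-1}(s*R - z*G)."""
    z = _hash(m)
    y = pow((pow(r, 3, P) + 7) % P, (P + 1) // 4, P)  # sqrt, as P % 4 == 3
    if y & 1 != v:
        y = P - y                      # pick the root matching the parity v
    zG = _mul(z, G)
    diff = _add(_mul(s, (r, y)), (zG[0], (P - zG[1]) % P))  # s*R - z*G
    return _mul(pow(r, -1, N), diff)
```

For any private key `pr` and message `m`, `RECOVER(m, *SIGN(m, pr))` returns `PUBKEY(pr)`, which is exactly the property that lets a verifier check a signature without having the signer’s public key on hand.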
Encryption: On the other hand, for privacy purposes, the Elliptic Curve Integrated Encryption Scheme (ECIES) [
59] is used to encrypt messages between parties.
Communication channels: We assume that point-to-point channels exist between every pair of parties and postulate that they are reliable and guarantee the authenticity of the data sent through them. In addition, we assume they are private, so the adversary cannot obtain messages sent between honest parties. Point-to-point private channels are emulated in our context through signature and encryption.
Privacy in the Semi-Honest Model: Recall that in the semi-honest model, it is assumed that the parties involved in SMC will follow the protocol as prescribed, but they may attempt to gain additional information beyond what they are supposed to learn during the computation. Privacy in this model refers to the ability of the parties to keep their inputs confidential from one another during the computation, while still permitting the correct output to be computed.
An SMC protocol is considered to privately compute a function f if the information obtained by any subset of semi-honest parties during the execution of the protocol is the same as what they could learn by just looking at their own inputs and the outputs. In other words, the protocol ensures that the parties learn nothing about the inputs of other parties beyond what can be inferred from their own inputs and the outputs.
In formal terms [60]: Let $\{{a}_{1},\dots ,{a}_{k}\}$ be the parties participating in an SMC with the inputs $\{{x}_{1},\dots ,{x}_{k}\}$, respectively, let $I=\{{i}_{1},\dots ,{i}_{t}\}\subseteq \{1,\dots ,k\}$ be a subset of semi-honest parties representing the adversary, and let ${\mathbf{view}}_{I}^{{\Pi}}$ denote their view of the protocol ${\Pi}$, i.e., the set of all information obtained by the adversary $I$ during the protocol’s execution. The protocol preserves privacy if there exists a polynomial-time algorithm $\mathcal{S}$, known as a simulator, that can produce the same view from just the inputs ${x}_{{i}_{1}},\dots ,{x}_{{i}_{t}}$ of $I$ and the output $f({x}_{1},\dots ,{x}_{k})$. Under computational security, this is expressed as the computational indistinguishability of the two distributions ${\left\{\mathcal{S}(I,{x}_{{i}_{1}},\dots ,{x}_{{i}_{t}},f\left(\overline{x}\right))\right\}}_{\overline{x}\in {\left({\{0,1\}}^{*}\right)}^{k}}$ and ${\left\{{\mathbf{view}}_{I}^{{\Pi}}\left(\overline{x}\right)\right\}}_{\overline{x}\in {\left({\{0,1\}}^{*}\right)}^{k}}$.
O. Goldreich [
60] specifies the security definition of privacy for multiparty computation in the semi-honest model as follows:
Definition 3. Let $k\in \mathbb{N}$ and $\overline{x}\in {\left({\{0,1\}}^{*}\right)}^{k}$ where $\overline{x}=({x}_{1},\dots ,{x}_{k})$. Let $f:{\left({\{0,1\}}^{*}\right)}^{k}\to {\{0,1\}}^{*}$ be a deterministic functionality. We say that a protocol Π privately computes f if there is a probabilistic polynomial time algorithm denoted $\mathcal{S}$ (a simulator) such that for every $I=\{{i}_{1},\dots ,{i}_{t}\}\subseteq \{1,\dots ,k\}$, it holds that:
- 1.
${\mathit{output}}^{{\Pi}}\left(\overline{x}\right)=f\left(\overline{x}\right)\phantom{\rule{2.em}{0ex}}$ (Correctness)
- 2.
${\left\{\mathcal{S}(I,{x}_{{i}_{1}},\dots ,{x}_{{i}_{t}},f\left(\overline{x}\right))\right\}}_{\overline{x}\in {\left({\{0,1\}}^{*}\right)}^{k}}\stackrel{c}{\equiv}{\left\{{\mathbf{view}}_{I}^{{\Pi}}\left(\overline{x}\right)\right\}}_{\overline{x}\in {\left({\{0,1\}}^{*}\right)}^{k}}\phantom{\rule{2.em}{0ex}}$ (Privacy)
where “Correctness” means that the protocol computes and outputs the desired function $f\left(\overline{x}\right)$, namely ${\mathit{output}}^{{\Pi}}\left(\overline{x}\right)=f\left(\overline{x}\right)$.
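The simulation paradigm behind the privacy condition can be made concrete with a toy additive-masking step (an illustration only, not the full protocol of this paper): the share one party receives of another party’s secret is uniform in $\mathbb{Z}_Q$, so a simulator can reproduce that part of a party’s view without ever seeing the real input.

```python
import secrets

Q = 101  # a small public prime modulus, chosen only for illustration

def share(x):
    """Split x into two additive shares; each share alone is uniform in Z_Q."""
    r = secrets.randbelow(Q)
    return r, (x - r) % Q

# Real view of an observing party: one share of the secret input 42.
r, s = share(42)
assert (r + s) % Q == 42              # the two shares reconstruct the secret

# A simulator with no knowledge of 42 samples from the same distribution:
simulated_share = secrets.randbelow(Q)  # uniform in Z_Q, exactly like r
```

Because the real share and the simulated one are identically distributed, an adversary holding a single share learns nothing about the secret beyond its own inputs and the output, which is precisely what Definition 3 demands.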
Accessibility: Accessibility refers to the ability of a reputation system to facilitate the utilization and access of reputation information for all users. In decentralized reputation systems, this entails ensuring that all individuals, regardless of their location and the data they possess, can utilize and benefit from others’ reputation scores.
Definition 4. We say that a reputation system Π achieves Accessibility if $\forall a,t\in \mathbb{P}$ a can query for the reputation score of t at any time and always obtains a copy of ${\rho}_{t}={\sigma}_{r}^{\left(t\right)}$ according to its request time.
Consistency: Consistency refers to the uniformity or unchanging nature of the reputation system across the network. In decentralized reputation systems, consistency is crucial as it ensures that the output reputation scores are comparable and reliable across the network when requested simultaneously.
Definition 5. We say that a reputation system Π is Consistent if $\forall a,b,t\in \mathbb{P}$, if a and b query for the reputation score of t at the same time and obtain ${\rho}_{t}$ and ${\rho}_{t}^{\prime}$, respectively, then ${\rho}_{t}={\rho}_{t}^{\prime}$.
Conservation: Conservation in the context of decentralized reputation systems refers to the safeguarding and preservation of reputation and feedback information. In more straightforward terms, it entails ensuring that valuable information is not lost, even when the provider of feedback leaves the network.
Definition 6. We say that a reputation system Π conserves the reputation information if $\forall t\in \mathbb{P}$ such that a party a has rated t before, then ${\rho}_{t}$ remains a function of its rating even if a leaves the network.
Verifiability: In decentralized reputation systems, verifiability refers to the capability for reputation scores and rating transactions to be independently examined and confirmed by any user, rather than being accepted solely based on trust or authority. This process involves verifying the qualifications of raters, signatures, nonces, and the calculation of reputation scores.
Definition 7. We say that a reputation system Π achieves verifiability if $\forall t\in \mathbb{P}$ such that the system has so far received n feedback values ${\tau}_{t}^{\left({a}_{1}\right)},\dots ,{\tau}_{t}^{\left({a}_{n}\right)}$ for t from parties ${a}_{1},\dots ,{a}_{n}$, respectively: then any party can check that the reputation score of t at this time is ${\sigma}_{r}^{\left(t\right)}=Rep({\tau}_{t}^{\left({a}_{1}\right)},\dots ,{\tau}_{t}^{\left({a}_{n}\right)})$ and that the number of ratings is ${\sigma}_{w}^{\left(t\right)}=n$.
2.4.3. Problem Definition
Let ${S}_{t}=\{{a}_{1},\dots ,{a}_{n}\}$ be the set of source parties of a target party t in the context of a given action $\psi $. We assume that $\psi $ is known and unique for simplicity. We examine the situation where the set of t’s source parties ${S}_{t}$ execute a reputation system ${\Pi}$, which takes their private feedback $\overline{{\tau}_{t}}=({\tau}_{t}^{\left({a}_{1}\right)},\dots ,{\tau}_{t}^{\left({a}_{n}\right)})$, securely computes the functionality ${\rho}_{t}=Rep({\tau}_{t}^{\left({a}_{1}\right)},\dots ,{\tau}_{t}^{\left({a}_{n}\right)})=\frac{{\sum}_{i=1}^{n}{\tau}_{t}^{\left({a}_{i}\right)}}{n}$, and outputs ${\rho}_{t}$, the reputation score of target party t. The reputation system ${\Pi}$ is required to be decentralized and secure under the semi-honest model.
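A minimal sketch of this averaging functionality using additive masking over a public prime modulus follows. This is an illustration of the n-party sum under the semi-honest model, not the deployed protocol: it assumes integer feedback values whose true sum is below the modulus Q, and it omits the communication, signing, and blockchain layers entirely.

```python
import secrets

Q = 2**61 - 1  # public prime modulus; feedback values are integers mod Q

def masked_sum(feedback):
    """Each source party splits its private feedback into n additive shares;
    only per-party share sums are ever combined, never raw feedback values."""
    n = len(feedback)
    shares = [[secrets.randbelow(Q) for _ in range(n - 1)] for _ in feedback]
    for row, x in zip(shares, feedback):
        row.append((x - sum(row)) % Q)   # last share fixes the row's sum to x
    # party j publishes the sum of the shares it received from everyone
    partials = [sum(shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partials) % Q

def rep(feedback):
    """rho_t = (sum of all feedback) / n, recovered from the masked sum."""
    return masked_sum(feedback) / len(feedback)

# e.g. three source parties privately rating target t: rep([4, 5, 3]) -> 4.0
```

Each individual share is uniformly random, so no coalition smaller than all n parties learns anything about a single feedback value, while the published partial sums still reconstruct ${\rho}_{t}$ exactly.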