A Formal Analysis of the Mimblewimble Cryptocurrency Protocol

Mimblewimble (MW) is a privacy-oriented cryptocurrency technology that provides security and scalability properties that distinguish it from other protocols of its kind. We present and discuss those properties and outline the basis of a model-driven verification approach to address the certification of the correctness of protocol implementations. In particular, we propose an idealized model that is key to the described verification process, and we identify and precisely state the conditions for our model to ensure the verification of the relevant security properties of MW. Since MW is built on top of a consensus protocol, we develop a Z specification of one such protocol and present an excerpt of the {log} prototype generated from that specification. This {log} prototype can be used as an executable model, which allows us to analyze the behavior of the protocol without having to implement it in a low-level programming language. Finally, we analyze the Grin and Beam implementations of MW in their current state of development.


Introduction
Cryptocurrency protocols deal with virtual money, so they are a valuable target for highly skilled attackers. Several attacks have already been mounted against cryptocurrency systems, causing irreparable losses of money and credibility (e.g., [1]). Furthermore, it is necessary to understand virtual money in the context of a global financial system which has deteriorated due to the COVID-19 pandemic [2]. In this context, security and confidentiality properties have become even more crucial. For this reason the cryptocurrency community is seeking approaches, methods, techniques and development practices that can reduce the chances of successful attacks. One such approach is the application of formal methods to software implementation. In particular, the cryptocurrency community is showing interest in formal proofs and formally certified implementations [3,4]. However, that community is not an exception: formal and mathematical approaches are also applied in other fields to analyze a problem and propose a solution. For instance, in the field of service industries, An et al. [5] model a network revenue management problem and propose a genetic algorithm to maximize revenue, providing computational experiments. The reported results demonstrate the effectiveness of this approach.
Mimblewimble (MW) is a privacy-oriented cryptocurrency technology encompassing security and scalability properties that distinguish it from other technologies of its kind. MW was first proposed in 2016 [6]. The idea was then further developed by Poelstra [7].
In MW, unlike Bitcoin [8], there is no concept of address and all the transactions are confidential. In this paper we outline an approach based on formal software verification aimed at formally verifying the basic mechanisms of MW and its implementations [9,10].
We put forward a model-driven verification approach where security issues that pertain to the realm of critical mechanisms of the MW protocol are explored on an idealized model of this system. This model abstracts away the specifics of any particular implementation, and yet provides a realistic setting. Verification is then performed on more concrete models, where low-level mechanisms are specified. Finally, the low-level model is proved to be a correct implementation of the idealized model. Security (idealized) models have played an important role in the design and evaluation of high assurance security systems. Their importance was already pointed out in the Anderson report [11]. The paradigmatic Bell-LaPadula model [12], conceived in 1973, constituted the first big effort on providing a formal setting in which to study and reason about confidentiality properties of data in time-sharing mainframe systems. In that work the authors provide a general descriptive model of a computer system involving formally defined security concepts such as simple-security, discretionary-security and the star-property, over state machines. State machines can be employed as the building block of a security model. The basic features of a state machine model are the concepts of state and state change. A state is a representation of the system under study at a given time, which should capture those aspects of the system that are relevant to the analyzed problem. State changes are modeled by a state transition function that defines the next state based on the current state and input. If one wants to analyze a specific safety property of a system using a state machine model, one must first specify what it means for a state to satisfy the property, and then check if all state transitions preserve it. Thus, state machines can be used to model the enforcement of a security policy.
The aim of the present work is to identify and analyze the main components of the MW protocol in order to build an idealized model. Then, relevant security properties are defined and verified over the model. This is key to address the verification of its implementations. Furthermore, since MW is built on top of a consensus protocol, we specify such protocol and present an executable model which allows us to analyze the behavior of the protocol without having to implement it.

Related Work
Developers of cryptocurrency software have already shown interest in using mathematics as a tool to describe software. In fact, both Nakamoto, in his seminal paper on Bitcoin [8], and Wood, in his description of the Ethereum Virtual Machine (EVM) [13], make use of mathematical constructs. In particular, the latter work explains the Ethereum blockchain as a transaction-based state machine, and the programs to be executed on the EVM are called smart contracts. However, those descriptions cannot be understood as Formal Methods (FM) because they are neither based on standardized notations nor on clear mathematical theories.
Sestrem et al. [14] have pointed out the importance of privacy and security over smart grid systems. They propose a sidechain architecture which is built up of three different blockchains, and claim that this solution ensures system privacy, security, and reliability. In particular, one of those blockchains is responsible for ensuring such properties in the system, which are verified by performing several tests using the Loom Platform. We think that, in these kinds of scenarios, a formal definition of those security properties would help to clarify what needs to be proved, independently of any particular implementation.
On the other hand, the FM community has started to pay attention to cryptocurrency software. Idelberger et al. [15] proposed to use defeasible logic frameworks such as Formal Contract Logic for the description of smart contracts. However, in that work the authors do not analyze cryptocurrency protocols nor the necessary conditions to guarantee the security properties that those protocols should satisfy.
Bhargavan et al. [16] compile SOLIDITY programs into a verification-oriented functional language where they can verify source code. Although the paper describes a framework to analyze and verify both the runtime security and the functional correctness of blockchain systems, the work only focuses on smart contracts. Luu et al. [17] use the OYENTE tool to find and detect vulnerabilities in smart contracts. Hirai [18] uses Lem to formally specify the EVM; Grishchenko, Maffei and Schneidewind [19] also formalize the EVM, but in F*; and Hildenbrandt et al. do the same with the reachability logic system known as K. Pîrlea and Sergey [20] present a Coq [21,22] formalization of a blockchain consensus protocol where some properties are formally verified.
More recently, Rosu [3] presented academic and commercial results in developing blockchain languages and virtual machines that come directly equipped with formal analysis and verification tools. Hajdu et al. [23] developed a source-level approach for the formal specification and verification of Solidity contracts with the primary focus on events. Santos Reis et al. [24] introduced Tezla, an intermediate representation of Michelson smart contracts that eases the design of static smart contract analysers. In [25], Boyd et al. presented a blockchain model in Tamarin, which is useful for analyzing certain blockchain based protocols. On the other hand, Garfatta et al. [4] described a general overview of the different axes currently investigated by researchers towards the (formal) verification of Solidity smart contracts. Tolmach et al. [26] investigated formal models and specifications of smart contracts and presented a systematic overview to understand common trends, although they did not specifically consider security in cryptocurrency protocols.
Additionally, Metere and Dong [27] present a mechanised formal verification of the Pedersen commitment protocol using EASYCRYPT [28], and Fuchsbauer et al. [29] introduce an abstraction for the analysis of some security properties of MW. Our work assumes some of these results to formalize and analyze the MW protocol, and then proposes a methodology to verify its implementations.
Finally, in Betarte et al. [30] we outline some formal methods related techniques that we consider particularly useful for cryptocurrency software. We present some guidelines for the adoption of formal methods in cryptocurrency software projects. We argue that set-based formal modeling (or specification), simulation, prototyping and automated proof can be applied before considering more powerful approaches such as formal code verification. In particular, we show excerpts of a set-based formal specification of a consensus protocol and of the EVM. We also show that prototypes can be generated from these formal models and that simulations can be run on them. Finally, we show how test cases can be generated from the same models and how automated proofs can be used to evaluate the correctness of these models. The work we present here closely follows the approach of Betarte et al. [30].

Contribution
This article builds upon and extends a previously published paper in AIBlock 2020 [31]. In that paper, we presented elements that comprise the essential steps towards the development of an exhaustive formalization of the MW cryptocurrency protocol and the analysis of some of its properties. The proposed idealized model constitutes the main contribution, together with the analysis of the essential properties it is shown to verify. We also introduced and discussed the basis of a model-driven verification approach to address the certification of the correctness of a protocol's implementation.
In the present paper, we extend both the definition of the MW protocol and the idealized model. In particular, the formal definition and discussion of the notion of commitment scheme in Section 2.3 is completely new. We have also extended the section where we study the security properties of MW, incorporating the discussion on the security properties of Pedersen commitments in Section 4.3. We study the strength of the scheme regarding the main security properties a cryptocurrency protocol must have. The sections Model-driven verification and MW implementations are also new. In the first one, since MW is built on top of a consensus protocol, we develop a Z specification of one such protocol and present an excerpt of the {log} prototype generated from the Z specification. This {log} prototype can be used as an executable model where simulations can be run. This allows us to analyze the behavior of the protocol without having to implement it in a low level programming language. Finally, we compare two MW implementations, Grin [9] and Beam [10], with our model and we discuss some features that set them apart.
The rest of the paper is organized as follows: Section 2 provides a brief description of MW. Section 3 describes the building blocks of a formal idealized model (abstract state machine) of the computational behavior of MW. Sections 4 and 5 provide an account of the verification activities we are putting in place in order to verify the protocol and its implementation. Then, Section 6 analyzes the Grin and Beam implementations of MW in their current state of development. Final remarks and directions for future work are presented in Section 7.
The list of abbreviations and acronyms can be found at the end of this manuscript. Table A1 in Appendix A provides the meaning of the math symbols used throughout this work.

The Mimblewimble Protocol
Confidential transactions [32,33] are at the core of the MW protocol. A transaction allows a sender to encrypt the amount of bitcoins by using blinding factors. In a confidential transaction only the two parties involved know the amount of bitcoins being exchanged. However, anyone observing a transaction is able to verify its validity by checking that the sum of the inputs equals the sum of the outputs; if they are equal, then the transaction is considered valid. This procedure ensures that no bitcoins have been created from nothing and is key in preserving the integrity of the system. In MW transactions, the recipient randomly selects a range of blinding factors provided by the sender, which are then used as proof of ownership by the receiver.
The MW protocol aims at providing the following properties [6,9]:
• Verification of zero sums without revealing the actual amounts involved in a transaction, which implies confidentiality.
• Authentication of transaction outputs without signing the transaction.
• Good scalability, while preserving security, by generating smaller blocks (or better, reducing the size of old blocks), producing a blockchain whose size does not grow in time as much as, for instance, Bitcoin's.
The first two properties are achieved by relying on Elliptic Curves Cryptography (ECC) operations and properties. The third one is a consequence of the first two.

Verification of Transactions
If v is the value of a transaction (either input or output) and H is a point over an elliptic curve, then v.H encrypts v because it is assumed to be computationally hard to recover v from v.H if we only know H. However, if w and z are other values such that v + w = z, then, given only the result of encrypting each of them with H, we are still able to verify that
v.H + w.H = z.H
due to simple properties of scalar multiplication over groups. Therefore, with these simple operations, we can check sums of transaction amounts without knowing the actual amounts. Nevertheless, if some time ago we encrypted v with H and now we see v.H, then we know that it is the result of encrypting v. In the context of blockchain transactions this is a problem because, once a block holding v.H is saved in the chain, it will reveal all the transactions of v coins. To avoid this problem, MW encrypts v as r.G + v.H, where r is a scalar and G is another point on H's elliptic curve; r is called the blinding factor and r.G + v.H is called a Pedersen commitment. By using Pedersen commitments, MW allows the verification of expressions such as v + w = z while providing more privacy than the standard scheme. In effect, if v + w = z, then we choose r_v, r_w and r_z such that r_v.G + r_w.G = r_z.G, and so the expression is recorded as:
(r_v.G + v.H) + (r_w.G + w.H) = r_z.G + z.H
making it possible for everyone to verify the transaction without knowing the true values.
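The balancing check above can be sketched in a few lines of Python. This is a toy illustration only: instead of a real elliptic curve we use the additive group of integers modulo a prime, so "r.G + v.H" becomes (r*G + v*H) mod n. In this toy group the discrete logarithm is trivial, so it demonstrates only the algebra, not the security; the constants are arbitrary choices of ours.

```python
# Toy model (NOT real ECC): the "curve group" is the integers modulo a prime n
# under addition, so scalar multiplication k.G is just (k * G) % n.
n = 2_147_483_647      # a prime group order (toy choice)
G, H = 7919, 104729    # two "generator points" (toy, arbitrary nonzero values)

def commit(v, r):
    """Pedersen commitment r.G + v.H in the toy group."""
    return (r * G + v * H) % n

# Amounts with v + w = z, and blinding factors chosen so r_v + r_w = r_z.
v, w = 25, 17
z = v + w
r_v, r_w = 11111, 22222
r_z = (r_v + r_w) % n

# A verifier sees only the commitments, never v, w or z ...
cv, cw, cz = commit(v, r_v), commit(w, r_w), commit(z, r_z)
# ... yet can still check that the committed amounts balance:
assert (cv + cw) % n == cz
```

The assertion holds precisely because Com is additive in both arguments, which is the homomorphic property MW relies on.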

Authentication of Transactions
Consider that Alice has received v coins and this was recorded in the blockchain as r.G + v.H, where r was chosen by her and kept private. Now she wants to transfer these v coins to Bob. As a consequence, Alice loses v coins and Bob receives the very same amount, which means that the transaction adds up to zero:
(r.G + v.H) − (r.G + v.H) = 0.G + 0.H
However, Alice now knows Bob's blinding factor, because it must be the same one chosen by her (so that the transaction is balanced). In order to protect Bob from being robbed by Alice, MW allows Bob to add his own blinding factor, r_B, in such a way that the transaction is recorded as:
((r + r_B).G + v.H) − (r.G + v.H)
although now it does not sum to zero. However, this excess value is used as part of an authentication scheme. Indeed, Bob uses r_B as a private key to sign the empty string (ε). This signed document, S, is attached to the transaction, so in the blockchain we have the inputs I, the outputs O and the signature S. This way, the transaction is valid if the result of verifying S with I − O (in the group generated by G) yields ε. If I − O does not yield something of the form r_B.G − 0.H, then ε will not be recovered, and so we know there is an attempt to create money from thin air or to steal Bob's money.
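The authentication idea can be sketched with a Schnorr-style signature in the same toy group as before (an additive group modulo a prime, not a real curve, so this only shows the mechanics). Bob's blinding factor r_B plays the role of the private key, and the excess r_B.G, which every verifier can derive from the inputs and outputs, is the public key. All constants are toy assumptions of ours.

```python
# Toy sketch (NOT real ECC): the group is Z_n under addition, so a "point"
# x.G is (x * G) % n. Bob's excess blinding factor r_B is the private key;
# the transaction excess r_B.G is the public key derived on-chain.
import hashlib

n = 2_147_483_647      # prime group order (toy)
G = 7919               # generator (toy)

def point(x):
    """Scalar multiplication x.G in the toy group."""
    return (x * G) % n

def schnorr_sign(msg, x, k):
    # k is the signing nonce; in practice it must be fresh and random.
    R = point(k)
    e = int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % n
    s = (k + e * x) % n
    return (R, s)

def schnorr_verify(msg, P, sig):
    R, s = sig
    e = int.from_bytes(hashlib.sha256(f"{R}|{msg}".encode()).digest(), "big") % n
    return point(s) == (R + e * P) % n

r_B = 123456789                      # Bob's blinding factor / private key
excess = point(r_B)                  # what verifiers compute from I and O
sig = schnorr_sign("", r_B, k=987654321)  # signature over the empty string
assert schnorr_verify("", excess, sig)        # excess of the form r_B.G passes
assert not schnorr_verify("", point(r_B + 1), sig)  # a wrong excess fails
```

A verifier never learns r_B: it only checks that whoever built the transaction knew the discrete logarithm of the excess, which is exactly the ownership argument of the protocol.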

Commitment Scheme
A commitment scheme [34] is a two-phase cryptographic protocol between two parties: a sender and a receiver. At the end of the commit phase the sender is committed to a specific value that he cannot change later and the receiver should have no information about the committed value.
A non-interactive commitment scheme [35] can be defined as follows:
Definition 1 (Non-interactive Commitment Scheme). A non-interactive commitment scheme ζ = (Setup, Com) consists of two probabilistic polynomial time algorithms, Setup and Com, such that:
• Setup generates the public parameters for the scheme, depending on the security parameter λ.
• Com is the commitment algorithm: Com : M × R → C, where M is the message space, R the randomness space and C the commitment space. For a message m ∈ M, the algorithm draws uniformly at random r ← R and computes the commitment com ← Com(m, r).
We have simplified the notation, but it is important to keep in mind that all the sets depend on the public parameters and, in particular, so does the commitment algorithm.
The scheme is additively homomorphic if, for all m_1, m_2 ∈ M and r_1, r_2 ∈ R:
Com(m_1, r_1) + Com(m_2, r_2) = Com(m_1 + m_2, r_1 + r_2)
In other words, Com is additive in both parameters. Transactions in MW are derived from confidential transactions [32], which are enabled by Pedersen commitments with homomorphic properties over elliptic curves. Based on Definition 1, we define the non-interactive Pedersen commitment scheme we will use in our model as follows: Definition 2 (Pedersen Commitment Scheme with Elliptic Curves). As in Definition 1, let M and R be the finite field F_n and let C be the set of points determined by an elliptic curve C of prime order n.
The probabilistic polynomial time algorithms are defined as:
• Setup generates the order n (dependent on the security parameter λ) and two generator points G and H on the elliptic curve C of prime order n whose discrete logarithms relative to each other are unknown.
• Com(v, r) = r.G + v.H, with v the transactional value and r the blinding factor chosen randomly in F_n.
Security properties of this commitment scheme (for MW) will be analyzed in Section 4.3.

Idealized Model of Mimblewimble-Based Blockchain
The basic elements of our model are transactions, blocks and chains. Each node in the blockchain maintains a local state. The main components are the local copy of the chain and the set of transactions waiting to be validated and added to a new block. Moreover, each node keeps track of unspent transaction outputs (UTXOs). Properties such as zero-sum and the absence of double spending in blocks and chains must be proved for local states. The blockchain global state can be represented as a mapping from nodes to local states. For global states, we can state and prove properties for the entire system like, for instance, correctness of the consensus protocol.

Transactions
Given two fixed generator points G and H on the elliptic curve C of prime order n (whose discrete logarithms relative to each other are unknown), we define a single transaction between two parties as follows:
Definition 3 (Transaction). A transaction t is a tuple t = {i, o, tk, tko} where:
• i is the list of inputs and o is the list of outputs, both lists of Pedersen commitments in C.
• tk = {rp, ke, σ} is the transaction kernel, where rp is the list of range proofs of the outputs, ke ∈ C is the transaction excess and σ is the kernel signature (for simplicity, fees are left aside).
• tko ∈ F_n is the transaction kernel offset.
The transaction kernel offset will be used in the construction of a block to satisfy security properties.
Definition 4 (Ownership). Given a transaction t, we say S owns the output o if S knows the opening (r, v) for the Pedersen commitment o = r.G + v.H.
The strength of this security definition is directly related to the difficulty of solving the discrete logarithm problem: if the elliptic curve discrete logarithm problem in C is hard, then, given a multiple Q of G, it is computationally infeasible to find an integer r such that Q = r.G.
It is important to notice that during the construction of the transaction the sender and the receiver do not learn each other's blinding factors. Instead, they build a Schnorr signature that is used to guarantee the authenticity of the transaction excess value.

Definition 5 (Balanced Transaction). A transaction t = {i, o, tk, tko}, with tk = {rp, ke, σ}, is balanced if the following holds:
∑_{o∈o} o − ∑_{i∈i} i = ke + tko.G
A balanced transaction guarantees that no money is created from thin air and that the transaction was honestly constructed.

Property 1 (Valid Transaction). A transaction t is valid (valid_transaction(t)) if t satisfies:
i. The range proofs of all the outputs are valid.
ii. The transaction is balanced.
iii. The kernel signature σ is valid for the excess.
These three properties have a straightforward interpretation in our model. Due to space limitations, in this paper we only formalize and analyze some of the properties mentioned throughout the document.
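The validity predicate can be sketched as executable Python. Range-proof and signature checking are stubbed out as hypothetical callback parameters (they are not part of this sketch); only the balance equation of Definition 5 is actually computed, again in a toy additive group modulo a prime rather than a real curve.

```python
# Sketch of Property 1 as an executable predicate, over a toy group
# (integers mod a prime; NOT real ECC). Constants are arbitrary toy choices.
n = 2_147_483_647
G, H = 7919, 104729

def commit(v, r):
    return (r * G + v * H) % n

def balanced(tx):
    """Definition 5: sum(outputs) - sum(inputs) = ke + tko.G (toy group)."""
    lhs = (sum(tx["outputs"]) - sum(tx["inputs"])) % n
    rhs = (tx["ke"] + tx["tko"] * G) % n
    return lhs == rhs

def valid_transaction(tx, range_proofs_ok, signature_ok):
    # i.   range proofs of all outputs are valid   (stubbed callback)
    # ii.  the transaction is balanced             (computed here)
    # iii. the kernel signature is valid           (stubbed callback)
    return range_proofs_ok(tx) and balanced(tx) and signature_ok(tx)

# Toy 1-input/1-output transaction: one coin changes hands.
v, r_in, r_out, tko = 42, 1111, 2222, 3333
tx = {
    "inputs":  [commit(v, r_in)],
    "outputs": [commit(v, r_out)],
    # excess chosen as (r_out - r_in - tko).G so the balance equation closes
    "ke": ((r_out - r_in - tko) * G) % n,
    "tko": tko,
}
assert valid_transaction(tx, lambda t: True, lambda t: True)
```

Note how the amount v cancels between input and output, so only blinding-factor terms remain in the excess, mirroring the zero-sum argument of the model.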

Aggregate Transactions
Transactions can be aggregated into bigger transactions. A single transaction can be seen as the sending of money between two parties; the following definition represents multiple parties.
Definition 6 (Aggregate Transaction). An aggregate transaction tx is a tuple tx = {i, o, tks, tko}, where i and o are the lists of inputs and outputs, tks is a list of transaction kernels and tko ∈ F_n is the kernel offset.
Definition 7 (Transaction Join). Transactions can be merged non-interactively to construct an aggregate transaction. Given tx_1 = {i_1, o_1, tks_1, tko_1} and tx_2 = {i_2, o_2, tks_2, tko_2}, their join is:
{i_1 || i_2, o_1 || o_2, tks_1 || tks_2, tko_1 + tko_2}
where || is the list concatenation operator and + is the scalar sum.
This process can be applied recursively to add more transactions into one aggregate transaction. A single transaction (Definition 3) can be seen as a particular case where the transaction kernel list contains a single element and the ownership of the coins is between two parties.
The CoinJoin mechanism [36] makes it possible to combine inputs and outputs from separate transactions to form a single transaction, where the signatures can be composed by the parties. A Transaction Join can be understood as a simple way to perform CoinJoin without composite signatures.
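The non-interactive merge is purely structural: concatenate the input, output and kernel lists, and add the kernel offsets. A minimal sketch, using field names that follow this paper's notation (the dictionary representation is our own assumption):

```python
# Non-interactive aggregation (Transaction Join) as in the definition above:
# i1 || i2, o1 || o2, tks1 || tks2, and the scalar sum of the offsets.
n = 2_147_483_647  # toy prime group order for the scalar sum

def aggregate(t1, t2):
    return {
        "inputs":  t1["inputs"]  + t2["inputs"],   # i1 || i2
        "outputs": t1["outputs"] + t2["outputs"],  # o1 || o2
        "kernels": t1["kernels"] + t2["kernels"],  # tks1 || tks2
        "tko": (t1["tko"] + t2["tko"]) % n,        # scalar sum of offsets
    }

# Commitments and kernels are opaque placeholders in this sketch.
t1 = {"inputs": ["i1"], "outputs": ["o1"], "kernels": ["tk1"], "tko": 10}
t2 = {"inputs": ["i2"], "outputs": ["o2"], "kernels": ["tk2"], "tko": 20}
tx = aggregate(t1, t2)
assert tx["inputs"] == ["i1", "i2"] and tx["tko"] == 30
```

Because aggregation only concatenates and sums, it can be applied recursively, which is what allows a whole block to be treated as one big transaction.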
The validity of an aggregate transaction is guaranteed by the validity of the transacting parties during the construction process. Lemma 1 (Invariant: CoinJoin Validity). Given a valid transaction t_0 and a valid aggregate transaction tx as in Definition 6, let tx′ be the result of aggregating t_0 into tx. Then, tx′ is valid.
In our model, aggregate transactions and blocks (Definition 9) are the same (headers aside). We are interested in distinguishing them because the unconfirmed transaction pool will contain aggregate transactions.
Notice that, in an aggregate transaction, an adversary could try to find out which input cancels which output: they could try all possible permutations and verify whether they sum to the transaction excess. The property of transaction unlinkability will be proved over blocks, as we will see in Property 3.

Unconfirmed Transaction Pool
The unconfirmed transaction pool (mempool) contains the transactions which have not been confirmed in a block yet.

Definition 8 (Mempool).
A mempool mp is a list of aggregate transactions.

Blocks and Chains
The genesis block Gen is a special block since it is the first block ever recorded in the chain. Transactions can be merged into a block. We can see a block as a big transaction with aggregated inputs, outputs and transaction kernels. We can say a block is balanced if each aggregated transaction is balanced.
Definition 10 (Balanced Block). Let b be a block of the form b = {i, o, tks, ko}, with tks = (tk_1, . . . , tk_t) the list of transaction kernels, where the j-th item in tks is of the form tk_j = {rp_j, ke_j, σ_j}. We say the block b is balanced if the following holds:
∑_{o∈o} o − ∑_{i∈i} i = ∑_{tk_j∈tks} ke_j + ko.G
We assume the genesis block Gen is valid. We define the notion of block validity as follows:
Definition 11 (Valid Block). A block b is valid if:
i. The block is balanced.
ii. For every transaction kernel, the range proofs of all the outputs are valid and the kernel signature σ_j is valid for the transaction excess.
A valid transaction t_0 = {i_0, o_0, tk_0, tko_0} can be aggregated into a block b = {i, o, tks, ko}, yielding the block:
{i || i_0, o || o_0, tks || [tk_0], ko + tko_0}
where || is the list concatenation operator and + is the scalar sum.
Block aggregation preserves the validity of blocks; that is, block validity is invariant w.r.t. block aggregation.
Lemma 2 (Invariant: Block Validity). Given a valid transaction t_0 and a valid block b as in Definition 11, let b′ be the result of aggregating t_0 into b. Then, b′ is valid.
Applying Definition 11, we have that the resulting b′ is of the form:
b′ = {i || i_0, o || o_0, tks || [tk_0], ko + tko_0}
According to Definition 10, we need to prove that the following equality holds for the block b′:
∑_{o∈o||o_0} o − ∑_{i∈i||i_0} i = ∑_{tk_j∈tks||[tk_0]} ke_j + (ko + tko_0).G
Each term can be written as follows:
∑_{o∈o||o_0} o = ∑_{o∈o} o + ∑_{o∈o_0} o
∑_{i∈i||i_0} i = ∑_{i∈i} i + ∑_{i∈i_0} i
∑_{tk_j∈tks||[tk_0]} ke_j = ∑_{tk_j∈tks} ke_j + ke_0
(ko + tko_0).G = ko.G + tko_0.G
Rearranging the equality and using algebraic properties on elliptic curves, we have:
(∑_{o∈o} o − ∑_{i∈i} i) + (∑_{o∈o_0} o − ∑_{i∈i_0} i) = (∑_{tk_j∈tks} ke_j + ko.G) + (ke_0 + tko_0.G)
Now, we apply the hypotheses concerning the validity of t_0 and b. In particular, applying Definition 5 for t_0 and Definition 10 for b, we have that the following equalities are true:
∑_{o∈o_0} o − ∑_{i∈i_0} i = ke_0 + tko_0.G
∑_{o∈o} o − ∑_{i∈i} i = ∑_{tk_j∈tks} ke_j + ko.G
That is exactly what we wanted to prove. Notice that the proof above is analogous to the proof of CoinJoin Validity (Lemma 1).

Definition 12 (Chain). A chain is a non-empty list of blocks.
For a chain c and a valid block b, we can define a predicate validate(c, b) representing the fact that it is correct to add b to c. This relation must verify, for example, that all the inputs of b are present as outputs in c; in other words, that they are UTXOs.
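The UTXO side of the validate(c, b) predicate can be sketched directly: replay the chain to compute the set of unspent outputs, then require every input of the candidate block to be in that set. Commitments are modeled as opaque strings; this is an illustrative sketch of the predicate's intent, not the model's full definition.

```python
# Sketch of validate(c, b): every input spent by block b must be an unspent
# output (UTXO) of chain c. Blocks are dicts of opaque input/output ids.
def utxo_set(chain):
    utxos = set()
    for block in chain:
        utxos -= set(block["inputs"])   # spent outputs leave the set
        utxos |= set(block["outputs"])  # new outputs become spendable
    return utxos

def validate(chain, block):
    inputs = block["inputs"]
    # no duplicate input inside the block itself, and all inputs are UTXOs
    return len(inputs) == len(set(inputs)) and set(inputs) <= utxo_set(chain)

gen = {"inputs": [], "outputs": ["c1", "c2"]}
b1  = {"inputs": ["c1"], "outputs": ["c3"]}
assert validate([gen], b1)                                       # c1 is unspent
assert not validate([gen, b1], {"inputs": ["c1"], "outputs": []})  # double spend
```

The rejected second block is precisely a double-spend attempt: its input was already consumed by b1.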

Validating a Chain
The model formalizes a notion of valid state that captures several well-formedness conditions. In particular, every block in the blockchain must be valid. A predicate validChain can be defined for a chain c = (b_0, b_1, . . . , b_n) by checking that b_0 is the genesis block Gen and that, for every k, the block b_k can be correctly added to the chain (b_0, . . . , b_{k−1}). The axiomatic semantics of the system are modeled by defining a set of transactions and providing their semantics as state transformers. The behavior of transactions is specified by a precondition Pre and a postcondition Post:
Pre : State × Transaction → Bool
Post : State × Transaction × State → Bool
This approach is valid when considering local (node) or global (blockchain) states (of type State) and transactions (of type Transaction). Different sets of transactions, preconditions and postconditions are defined to cover local or global state transformations. At the most general level, State is Chain.

Executions
There can be attempts to execute a transaction on a state that does not satisfy the precondition of that transaction. In such a situation the system answers with a corresponding error code (of type ErrorCode). Executing a transaction t over a valid state s (valid_state(s); when dealing with global states, valid_state is validChain) produces a new state s′ and a corresponding answer r (denoted s -t/r-> s′), where the relation between the former state and the new one is given by the postcondition relation Post.
valid_state(s)    Pre(s, t)    Post(s, t, s′)
---------------------------------------------
s -t/ok-> s′
Whenever a transaction occurs for which the precondition holds, the valid state may change in such a way that the transaction postcondition is established. The notation s -t/ok-> s′ may be read as: the execution of the transaction t in a valid state s results in a new state s′. However, if the precondition is not satisfied, then the valid state s remains unchanged and the system answer is the error message determined by a relation ErrorMsg (given a state s, a transaction t and an error code ec, ErrorMsg(s, t, ec) holds iff error ec is an acceptable response when the execution of t is requested on state s):
valid_state(s)    ¬Pre(s, t)    ErrorMsg(s, t, ec)
---------------------------------------------
s -t/error(ec)-> s
Formally, the possible answers of the system are defined by the type:
Answer ::= ok | error(ec), with ec : ErrorCode
where ok is the answer resulting from a successful execution of a transaction.
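The one-step execution relation can be rendered as a small executable sketch. Pre, the state transformer realizing Post, and ErrorMsg are passed in as parameters, mirroring how the model keeps them abstract; the concrete instance below (a chain as a list of blocks, with a stubbed validity flag) is purely our illustration.

```python
# Executable sketch of one-step execution with error management: if Pre
# holds, the transition fires and answers ok; otherwise the state is
# unchanged and an error code from ErrorMsg is returned.
def execute(s, t, pre, apply_post, error_msg):
    if pre(s, t):
        return apply_post(s, t), "ok"   # s -t/ok-> s'
    return s, error_msg(s, t)           # s -t/error(ec)-> s

# Toy instance: the state is a chain (list of blocks), the transaction adds
# a block, and the precondition is a stubbed validate check.
pre        = lambda s, t: t.get("valid", False)
apply_post = lambda s, t: s + [t]
error_msg  = lambda s, t: "invalid_block"

s0 = []
s1, r1 = execute(s0, {"valid": True, "id": "b0"}, pre, apply_post, error_msg)
s2, r2 = execute(s1, {"valid": False}, pre, apply_post, error_msg)
assert r1 == "ok" and len(s1) == 1
assert r2 == "invalid_block" and s2 == s1   # failed step leaves state intact
```

The last assertion is the operational content of the invariance lemma below: a rejected transaction cannot take the system out of a valid state, because it does not change the state at all.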
One-step execution with error management preserves valid states.
The proof follows by case analysis on s -t/r-> s′. When Pre(s, t) does not hold, s′ = s; from this equality and valid_state(s) we conclude valid_state(s′). Otherwise, Post(s, t, s′) must hold and we proceed by case analysis on t, considering that t is a valid transaction and s is a valid state.
System state invariants, such as state validity, are useful for analyzing other relevant properties of the model. In particular, the properties in this work are obtained from valid states of the system.

Verification of Mimblewimble
We now detail some relevant properties that can be verified in our model. In addition to some of the properties mentioned in previous sections, we include in our research other properties such as those formulated in [20], and various security properties considered in [29,37,38].

Protocol Properties
The property of no coin inflation, or zero sum, guarantees that no new funds are produced from thin air in a valid transaction. The property can be stated as follows.
Property 2 (No Coin Inflation). Every valid transaction is balanced; in particular, no new funds are produced from thin air.
Proof. We know the transaction t is valid; in particular, the transaction is balanced. Applying Definition 5, we know that:
∑_{o∈o} o − ∑_{i∈i} i = ke + tko.G
Using Definition 3, we start to unfold the terms in the equality: every output o is of the form r_o.G + v_o.H and every input i is of the form r_i.G + v_i.H. Applying algebraic properties on elliptic curves, we have:
(∑ r_o − ∑ r_i).G + (∑ v_o − ∑ v_i).H = ke + tko.G
Since the right-hand side lies in the group generated by G, the H component must vanish, that is, (∑ v_o − ∑ v_i).H = 0.H. This means that all the inputs and outputs add up to zero; in other words, they sum to the commitment to the kernel offset plus the commitment to the excess blinding factor.
Thus, we have proved that no money is created from thin air and that the only ones who knew the blinding factors were the transacting parties when they created the transaction. This means the new outputs will be spendable only by them.
An important feature of MW is the cut-through process. The purpose of this process is to erase redundant outputs that are used as inputs within the same block. Let C be some coins that appear as an output in block b. If the same coins appear as an input within the block, then C can be removed from the lists of inputs and outputs by applying the cut-through process. In this way, the only remaining data are the block headers, transaction kernels and UTXOs. After applying cut-through to a valid block b, it is important to ensure that the resulting block b′ is still valid; that is, the validity of a block should be invariant with respect to the cut-through process.
Let b = {i, o, tks, ko} be a valid block, let r be the list of outputs of b that also occur as inputs of b, and let b′ = {i′, o′, tks, ko}, with i′ = i \ r and o′ = o \ r, be the block resulting from applying cut-through to b. We want to prove that b′ is valid; in particular, that b′ is balanced. According to Definition 10, we need to prove:
∑_{o∈o′} o − ∑_{i∈i′} i = ∑_{tk_j∈tks} ke_j + ko.G
By hypothesis, we know that b is a valid block; applying Property 2, we know that b is balanced. According to Definition 10, the following equality holds for block b:
∑_{o∈o} o − ∑_{i∈i} i = ∑_{tk_j∈tks} ke_j + ko.G
Applying the definition of r, we can rewrite the above equality as follows:
(∑_{o∈o′} o + ∑_{c∈r} c) − (∑_{i∈i′} i + ∑_{c∈r} c) = ∑_{tk_j∈tks} ke_j + ko.G
Rearranging the equality, we have:
∑_{o∈o′} o − ∑_{i∈i′} i + (∑_{c∈r} c − ∑_{c∈r} c) = ∑_{tk_j∈tks} ke_j + ko.G
Now, we can observe that we are subtracting the sum of all the elements belonging to the same set r, so that term is equal to zero. Removing the term, we have:
∑_{o∈o′} o − ∑_{i∈i′} i = ∑_{tk_j∈tks} ke_j + ko.G
Since i′ = i \ r and o′ = o \ r, that is exactly what we wanted to prove.
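Cut-through itself is a simple list operation: remove from both lists every commitment that appears as both an input and an output of the block, leaving kernels and the offset untouched. A minimal sketch with commitments modeled as opaque values:

```python
# Sketch of cut-through: outputs spent as inputs within the same block are
# removed from both lists; kernels and the kernel offset are untouched, so
# the balance equation is preserved (the cancelled terms appeared on both
# sides of the equality).
def cut_through(block):
    redundant = set(block["inputs"]) & set(block["outputs"])
    return {
        "inputs":  [i for i in block["inputs"]  if i not in redundant],
        "outputs": [o for o in block["outputs"] if o not in redundant],
        "kernels": block["kernels"],
        "ko": block["ko"],
    }

b = {"inputs": ["a", "b"], "outputs": ["b", "c"], "kernels": ["tk1"], "ko": 5}
b2 = cut_through(b)
assert b2["inputs"] == ["a"] and b2["outputs"] == ["c"]   # "b" cancelled
assert b2["kernels"] == ["tk1"] and b2["ko"] == 5         # kernels preserved
```

Since only matched pairs are deleted, the difference between the output and input sums is unchanged, which is the computational counterpart of the invariance proof above.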

Privacy and Security Properties
In blockchain systems the notion of privacy is crucial: sensitive data should not be revealed over the network. In particular, it is desirable to ensure properties such as confidentiality, anonymity and unlinkability of transactions. Confidentiality refers to the property of preventing other participants from knowing certain information about the transaction, such as the amounts and the addresses of the owners. Anonymity refers to the property of hiding the real identity of the one who is transacting, while unlinkability refers to the inability to link different transactions of the same user within the blockchain.
In the case of MW no addresses or public keys are used; there are only encrypted inputs and outputs. Privacy concerns rely on confidential transactions, cut-through and CoinJoin. CoinJoin combines inputs and outputs from different transactions into a single unified transaction. It is important to ensure that the resulting transaction satisfies the validity defined in the model.
The security problem of double spending refers to spending a coin more than once. All the nodes keep track of the UTXO set, so before confirming a block to the chain, a node checks that the block's inputs come from that set. In our model, that validation is performed by the predicate validate mentioned in Section 3.4.

Security Properties of Pedersen Commitments
In MW transactions, input and output amounts are hidden in Pedersen commitments. In Section 2.3 we have introduced the definition of a commitment scheme (Definition 1).
A commitment scheme is expected to satisfy the following two security properties:
• Hiding: the commitment com reveals no information about the committed message m.
• Binding: the sender cannot open a commitment com with a message other than the one originally committed.
In the cryptocurrency world, these two properties should be understood in the following way:
• Hiding: a commitment scheme is used to keep the transactions secure. The sender commits to an amount of coins and this should remain private for the rest of the network over time.
• Binding: senders cannot change their commitment to a different transaction amount. If that were possible, it would mean that an adversary could spend coins which have already been committed to a UTXO, which would allow the creation of coins out of thin air.
There are two possible specifications for these properties. A scheme is computationally hiding (or binding) when every polynomial-time adversary can break the corresponding property only with negligible probability. This asymptotic security is parameterized by a security parameter λ, and adversaries run in time polynomial in λ and their other inputs. On the other hand, a scheme is perfectly hiding (or binding) when even an adversary with infinite computing power cannot break the property.
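To make the objects of these definitions concrete, the following toy Python sketch implements a Pedersen commitment in a small multiplicative group of prime order (our stand-in for the elliptic curve C; all parameters are illustrative only, real schemes use groups of cryptographic size):

```python
# Toy Pedersen commitment in the order-11 subgroup of Z_23^*. The paper's
# additive expression r.G + v.H corresponds to g^r * h^v in multiplicative
# notation. Parameters are illustrative only.
p, q = 23, 11          # modulus and (prime) subgroup order
g = 2                  # generator of the order-11 subgroup (2^11 = 1 mod 23)
h = pow(g, 7, p)       # second generator; its dlog base g must stay unknown

def commit(v: int, r: int) -> int:
    """Com(v, r) = g^r * h^v mod p (i.e., r.G + v.H additively)."""
    return (pow(g, r, p) * pow(h, v, p)) % p

# The commitment is homomorphic: Com(v1,r1)*Com(v2,r2) = Com(v1+v2, r1+r2),
# which is what lets MW check that a transaction balances without amounts.
assert (commit(2, 1) * commit(3, 2)) % p == commit(5, 3)
```

The homomorphic property checked by the final assertion is exactly what confidential transactions exploit when verifying that inputs and outputs sum to zero.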
Notice that a commitment scheme cannot be perfectly hiding and perfectly binding at the same time:
i. If the scheme is perfectly hiding, there must exist several openings that commit to the same commitment. Otherwise, an adversary with infinite computing power trying to find out which opening produced a given commitment could simply try all possible openings until the commitment is reproduced. This shows that such a scheme cannot be perfectly binding.
ii. If the scheme is perfectly binding, there is at most one opening for each commitment. An adversary with infinite computing power attempting to find out which opening produced a target commitment could try all openings and find the unique one that verifies it. Thus, such a scheme cannot be perfectly hiding.
So, for cryptocurrency systems it is better to provide the stronger guarantee on the hiding property. In other words, we prefer a commitment scheme that is computationally binding and perfectly hiding. To see why, first assume an adversary breaks the binding property: they could create money out of thin air from a certain point in time onwards, but this would not affect the blockchain history. On the other hand, if the adversary breaks the hiding property, the whole history could be inspected and all transactions revealed. That breaks one of the main principles of a privacy-oriented cryptocurrency.

Pedersen Commitments Are Computationally Binding
This property relies on the discrete logarithm assumption. In provable security, security is proved to hold against any probabilistic polynomial-time adversary by showing that an efficient way to break the cryptographic protocol implies an efficient way to break the underlying mathematical problem which is supposed to be hard (a security reduction). The adversary is modeled as a procedure.
Definition 13 (Computationally Binding Commitment). Let ζ(Setup, Com) be a Pedersen commitment scheme as in Definition 2. Let A_Binding be a probabilistic polynomial-time adversary against the binding property running in the context of the game G_Binding as in Figure 1. We say that the Pedersen commitment scheme ζ is computationally binding if the success probability of A_Binding winning game G_Binding is negligible.
In game G_Binding, firstly the scheme is set up by choosing two generator points, G and H, over the elliptic curve C of prime order n. All these parameters are public. Secondly, the adversary A_Binding performs the attack, attempting to find two different transactional values v1 and v2 that commit to the same commitment. Once the adversary finishes the attack, two different pairs of opening values are returned. The adversary succeeds if both pairs commit to the same commitment and v1 ≠ v2. As we mentioned before, the computational binding property is based on a security reduction. In terms of the Pedersen commitment, it means that if the adversary A_Binding could perform the attack in the context of game G_Binding and win with non-negligible probability, then an adversary I_Dlog playing a game against the discrete logarithm problem on the group C could use A_Binding to win its game with non-negligible probability.
Recall that MW uses Pedersen commitments with elliptic curves (Definition 2). The discrete logarithm problem in this context means: given a point y over the elliptic curve C with generator G, it is hard to find x such that y = x.G.
The following lemma captures the semantics of that security reduction.
Lemma 6 (Computational Binding). Let ζ(Setup, Com) be a Pedersen commitment scheme as in Definition 2. Let A_Binding be an adversary against the computationally binding commitment (Definition 13) in the commitment scheme ζ. Let us assume that A_Binding succeeds in finding two distinct pairs of opening values that commit to the same commitment with probability ε. Then, there exists an inversor I_Dlog which, using the adversary A_Binding, can find the discrete logarithm to the base G of a randomly chosen element y on the elliptic curve C with probability ε′. Hence, if ε′ is negligible then ε is negligible too.
In these cases, the contraposition is proved: if ε is non-negligible, then ε′ is non-negligible too. The goal of the proof is to show how to transform the efficient adversary A_Binding, which is able to break the computationally binding commitment, into an algorithm that efficiently solves the discrete logarithm problem. The inversor I_Dlog provides a simulation context in which the adversary A_Binding performs its attack. The attack of the inversor I_Dlog is successful if A_Binding is successful and the simulation does not fail.
According to game G_Binding, when the adversary A_Binding succeeds we have two identical commitments Com(v1, r1) = Com(v2, r2) with v1 ≠ v2, which by Definition 2 means:

r1.G + v1.H = r2.G + v2.H

So we can compute:

H = ((r1 − r2)/(v2 − v1)).G

which means that we have computed the discrete logarithm of H with respect to G. Figure 2 shows the game G_Dlog, which captures the semantics of the reduction. The failure event captures the case in which the adversary A_Binding fails and, therefore, the adversary I_Dlog fails too. The probability of success of I_Dlog is equal to the probability of success of A_Binding.
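The extraction step at the heart of the reduction can be replayed concretely in a toy multiplicative group (illustrative parameters; the collision below is constructible only because we deliberately chose h with a known discrete logarithm, which is exactly what a real scheme forbids):

```python
# If an adversary outputs (v1, r1) != (v2, r2) with the same commitment,
# r1.G + v1.H = r2.G + v2.H gives H = ((r1 - r2)/(v2 - v1)).G, i.e. the
# discrete log of H base G. Toy subgroup of Z_23^* with order q = 11.
p, q, g = 23, 11, 2
h = pow(g, 7, p)        # dlog 7 is the "unknown" target of I_Dlog

def commit(v, r):
    return (pow(g, r, p) * pow(h, v, p)) % p

# A binding break, manufactured with the secret dlog: both pairs open c.
v1, r1 = 1, 0
v2, r2 = 2, 4
assert commit(v1, r1) == commit(v2, r2) and v1 != v2

# The reduction I_Dlog extracts the discrete logarithm of h:
x = (r1 - r2) * pow(v2 - v1, -1, q) % q
assert pow(g, x, p) == h     # dlog recovered, as the lemma predicts
```

The two assertions mirror the two steps of the lemma: a successful binding attack yields a collision, and the collision yields the discrete logarithm.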

Pedersen Commitments Are Perfectly Hiding
Basically, this is because, given a commitment Com(v, r) = r.G + v.H, there are many combinations (v′, r′) that satisfy Com(v, r) = r′.G + v′.H. Even if the adversary had infinite computing power and could try all possible values, there would be no way to know which opening values (v′, r′) were the original ones. Furthermore, r is a random value of the finite field F_n, so r.G + v.H is a random element of C.

Definition 14 (Perfectly Hiding Commitment). Let ζ(Setup, Com) be a Pedersen commitment scheme as in Definition 2. Let A_Hiding be a computationally unbounded adversary against the hiding property running in the context of the game G_Hiding as in Figure 3. We say that the Pedersen commitment scheme ζ is perfectly hiding if the success probability of A_Hiding winning game G_Hiding is exactly 1/2, i.e., no better than guessing.

In the game described in Figure 3, first the scheme is set up and then the adversary chooses two distinct transactional values v0 and v1. Then, one of these values is randomly chosen as vb, as well as the blinding factor r. The commitment of (vb, r) is computed and the adversary A_Hiding performs the attack, attempting to find out which one of the two values was committed.
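Perfect hiding can even be checked by exhaustion in a toy group: for every value v, the commitment ranges uniformly over the whole subgroup as the blinding factor varies, so the distribution of Com(v, r) is independent of v. A sketch (illustrative parameters):

```python
# Perfect hiding by exhaustion in the order-11 subgroup of Z_23^*: for every
# value v, as r ranges over Z_q the commitment g^r * h^v takes each subgroup
# element exactly once, so seeing a commitment reveals nothing about v.
p, q, g = 23, 11, 2
h = pow(g, 7, p)

def commit(v, r):
    return (pow(g, r, p) * pow(h, v, p)) % p

subgroup = {pow(g, i, p) for i in range(q)}
for v in range(q):
    # the image of r -> commit(v, r) is the whole subgroup, for every v
    assert {commit(v, r) for r in range(q)} == subgroup
```

Even an unbounded adversary cannot do better than guessing between v0 and v1, since for every candidate value there is an opening producing the observed commitment.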

Switch Commitments
As already mentioned, if an attacker succeeds in breaking the computational binding property of a commitment, then money can be created out of thin air. Switch commitments [39] were introduced to enable the transition from computational to statistical bindingness, especially for the commitments already stored in the blockchain. Statistical security means that even a computationally unbounded adversary cannot violate the property except with negligible probability.
If at some point we believe that the bindingness of the commitment scheme has been broken, we can make a soft fork on the chain and switch the existing commitments to this new validation scheme, which is backwards compatible.
Below, we show the changes needed for our model to also encompass switch commitments. To the Pedersen commitment definition (Definition 2) we add a third generator point J of the elliptic curve C whose discrete logarithm relative to G and H is unknown. We define the new commitment algorithm as follows:

Com(v, r) = r′.G + v.H, with r′ = r + hash(r.G + v.H, r.J)

where r is the blinding factor randomly chosen in the finite field F_n and J is the third generator point.
Note that the resulting blinding factor is still randomly distributed and that the hash of an ElGamal commitment is computed; the ElGamal commitment is the combination of ElGamal encryption [40] and a commitment scheme.
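The idea can be sketched in the same toy multiplicative group (the tweak formula follows the reconstruction above; names, encodings and parameters are ours, and this is an illustration of the idea only, not the production construction of [39]):

```python
import hashlib

# Sketch of a switch commitment in the order-11 subgroup of Z_23^*: the
# blinding factor is tweaked by a hash of the ElGamal commitment
# (r.G + v.H, r.J) before committing. Illustrative toy only.
p, q, g = 23, 11, 2
h = pow(g, 7, p)        # generator H (dlog base g unknown in a real scheme)
j = pow(g, 3, p)        # third generator J (dlog likewise unknown)

def H_hash(*elems):
    data = b"".join(e.to_bytes(4, "big") for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def switch_commit(v, r):
    elgamal = ((pow(g, r, p) * pow(h, v, p)) % p, pow(j, r, p))
    r_tweaked = (r + H_hash(*elgamal)) % q      # r' = r + hash(rG+vH, rJ)
    return (pow(g, r_tweaked, p) * pow(h, v, p)) % p

def verify_switch(c, v, r):
    """Open in switch mode: recompute the tweaked blinding and compare."""
    return c == switch_commit(v, r)

v, r = 4, 6
c = switch_commit(v, r)
assert verify_switch(c, v, r)
```

In the default validation mode the result is still an ordinary Pedersen commitment to (v, r′); only the switch-mode opening additionally checks the hash relation.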

Zero-Knowledge Proof
The goal is to prove that a statement is true without revealing any information beyond the verification of the statement. In MW we need to ensure that in every transaction the amount is positive so that users cannot create coins. Here, the hard part is to prove that without revealing the amount. In our model, the output amounts are hidden in the form of a Pedersen commitment, and the transaction contains a list of range proofs of the outputs to prove that the amount is positive. MW uses Bulletproofs to achieve this goal. In our model, this verification is performed as the first step of the validation of the transaction.

Unlinkability and Untraceability
As we specified in our model, each node keeps a pool of unconfirmed transactions, the mempool. These transactions are waiting for miners to include them in a block. We can distinguish two security properties of transactions: untraceability refers to the transactions in the mempool and unlinkability to the transactions in a block. In our model, these two notions are formalized as follows.
Property 3 (Transaction Unlinkability). Given a valid block b, it is computationally infeasible to know which input cancels which output.
The following lemma captures the semantics of this property in MW. Moreover, the operations cut-through and CoinJoin, which were described above, also contribute to this property.
Lemma 7. Given a valid block b, it is computationally infeasible to find a subset of inputs and outputs of b forming a balanced transaction, that is, one satisfying sum(outputs) − sum(inputs) = ke + tko.G, where ke is the transaction excess and tko the transaction kernel offset.
If we refer to the construction process in Definition 11, the transaction kernel offsets are added up to generate a single aggregate offset ko covering all transactions in the block. This means that, once a transaction is aggregated into block b, its individual kernel offset tko is no longer stored.
The challenge the adversary A is trying to solve can be seen as the subset sum problem (NP-complete) but, in this case, tko is unrecoverable. So, although many transactions have few inputs and outputs, without knowing that value it is computationally infeasible to find the tuple.
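The role of the unrecoverable offset can be illustrated with a small brute-force experiment (illustrative numbers; exponents modulo the group order stand in for group elements):

```python
from itertools import combinations

# Why the aggregate kernel offset defeats linking: a candidate link is a
# subset S of a block's outputs that "balances" against the inputs,
# sum(S) - sum(inputs) = ke + tko (mod q). Because the individual offset
# tko is never stored, EVERY candidate subset is explained by some value
# of tko, so no subset stands out as the real transaction. Toy numbers.
q = 11
inputs = [3, 5, 9]          # input exponents aggregated in the block
outputs = [1, 4, 7, 5]      # output exponents aggregated in the block
ke = 2                      # a transaction kernel excess

consistent = []
for k in range(1, len(outputs) + 1):
    for subset in combinations(outputs, k):
        tko = (sum(subset) - sum(inputs) - ke) % q   # always exists in Z_q
        consistent.append((subset, tko))

# Every one of the 2^4 - 1 non-empty candidates is consistent with SOME tko:
assert len(consistent) == 2 ** len(outputs) - 1
```

The brute force enumerates all candidate subsets, yet each one is equally plausible; this is the information-theoretic face of the subset-sum argument above.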
We can thus define:

Definition 15 (Transaction Unlinkable). We say block b is transaction-unlinkable if the probability of any probabilistic polynomial-time adversary A finding a balanced transaction within b is negligible.
Then, due to Lemma 7, we conclude that in MW all blocks are transaction-unlinkable.
Property 4 (Transaction Untraceability). For every transaction in the mempool, it is not possible to relate the transaction to the IP address of the node which originated it.
Regarding this property, we should refer mainly to the broadcast of transactions. Once transactions are created, they are broadcast to the network and included in the mempool. Each node could track the IP address of the node from which it received each transaction. At that point, nodes could record the transactions, allowing them to build a transaction graph.
We say that the broadcast of a transaction can be performed with or without confusion. Without confusion means that, once a transaction is created, it is immediately broadcast to the entire network. However, if someone controls enough nodes on the network and observes how the transaction propagates, they could find out the IP address of the node the transaction came from.
On the other hand, we define broadcast with confusion as a way to obscure the originating node's IP address.

Definition 16 (Broadcast with confusion). Let us say node A sends a transaction to node B. We say B receives the transaction with confusion if, given the IP address of node A, node B cannot tell whether the transaction was originated by node A or not.
In other words, if some malicious nodes, working together, construct a graph of (transaction, IP address) pairs, the IP addresses will not convey information about which node originated each transaction. Therefore, in our model, we require the condition of Definition 16 to hold before the broadcast takes place. In order to achieve this, we can require that the node broadcasting the transaction be far enough from the one which originated it. Moreover, CoinJoin could be performed before the broadcast.
Dandelion, proposed by Bojja et al. [41], is a transaction broadcasting protocol intended to resist this kind of deanonymization attack. Dandelion is not part of the MW protocol; however, protocols of this kind should be implemented by each node to lower the risk of building the transaction graph. In Dandelion, broadcasting is performed in two phases: the "stem" phase and the "fluff" phase. In the "stem" phase the transaction is relayed to one randomly chosen node, which then randomly relays it to another, and so on. This process finishes when the "fluff" phase is reached, and the transaction is broadcast to the network using a gossip protocol. The following routines capture the semantics of the phases. Each node, besides keeping its local state, should implement these two routines. Once a transaction is created and ready to be included in the mempool, its broadcast starts in the "stem" phase. When it reaches the "fluff" phase, it is broadcast to the network and added to the mempool.
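A minimal, self-contained simulation of the two phases (class and method names are ours, not Grin's or Beam's; the phase-switch probability is arbitrary) could look as follows:

```python
import random

# Toy Dandelion: in the stem phase the transaction hops to one random peer
# at a time; each hop switches to the fluff phase with probability
# FLUFF_PROB, and fluff gossips the transaction to every node's mempool.
FLUFF_PROB = 0.1

class Node:
    def __init__(self, name, network):
        self.name, self.network = name, network
        self.mempool = set()

    def peers(self):
        return [n for n in self.network if n is not self]

    def stem(self, tx):
        if random.random() < FLUFF_PROB:
            self.fluff(tx)                         # phase switch
        else:
            random.choice(self.peers()).stem(tx)   # one random hop

    def fluff(self, tx):
        for node in self.network:                  # gossip to everyone
            node.mempool.add(tx)

network = []
network.extend(Node(f"n{i}", network) for i in range(5))
network[0].stem("tx1")
assert all("tx1" in n.mempool for n in network)
```

The random walk of the stem phase is what detaches the transaction from its originating IP address before the gossip of the fluff phase makes it public.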
Dandelion relies on the following three rules: all nodes obey the protocol, each node generates exactly one transaction, and all nodes on the network run Dandelion. The problem is that an adversary can violate these rules. For that reason, Grin implements a more advanced protocol called Dandelion++ [42], which intends to prevent that [43]. However, it is believed that Dandelion++ is not good enough to guarantee the privacy of a virtual coin [44]. For instance, the flashlight attack [45] is an open problem still under investigation [46]. The scenario here is an 'activist' who wants to accept donations but cannot reveal his identity. At some point, he will deposit those payments at an exchange and his identity will be compromised. The adversary injects 'tainted coins' and can build a 'taint tree' by looking through all deposits to the exchange. This way, he can link those deposits to the 'activist'.
The combined use of the MW protocol with a Zerocash-style commitment-nullifier scheme has been put forward by Lim [47] as a countermeasure to the above attack. In the case of Zcash, every shielded transaction has a large anonymity set, namely, a set of transactions from which it is indistinguishable. In the case of Spectrecoin (now Alias) [48], the main idea is the use, only once, of public addresses (XSPEC) to receive payments, combined with an anonymous staking protocol.

Model-Driven Verification
MW is built on top of a consensus protocol. In that direction, we have developed a Z specification of one such protocol, part of which is included in what follows. Z specifications in turn can be easily translated into the {log} language [49], which can be used both as a (prototyping) programming language and a satisfiability solver for an expressive fragment of set theory and set relation algebra. We present an excerpt of the {log} prototype after its Z specification. This {log} prototype can be used as an executable model where simulations can be run. This allows us to analyze the behavior of the protocol without having to implement it in a low level programming language.
We also plan to use {log} to prove some of the basic properties mentioned above, such as the invariance of valid state. However, for complex properties or for properties not expressible in the set theories supported by {log} we plan to develop a complete and uniform formulation of several security properties of the protocol using the Coq proof assistant [21]. The Coq environment supports advanced logical and computational notations, proof search and automation, and modular development of theories and code. It also provides program extraction towards languages like Ocaml and Haskell for execution of (certified) algorithms [50]. Additionally, Coq has an important set of libraries; for example [51] contains a formalization of elliptic curves theory, which allows the verification of elliptic curve cryptographic algorithms.
Having a {log} prototype first, over which some verification activities can be carried out without much effort, helps simplify the process of writing a detailed Coq specification. This is in accordance with proposals such as QuickChick, whose goal is to decrease the number of failed proof attempts in Coq by generating counterexamples before a proof is attempted [52].
By applying the program extraction mechanism provided by Coq we would be able to derive a certified Haskell prototype of the protocol. This prototype can be used as a testing oracle and also to conduct further verification activities on correct-by-construction implementations of the protocol. In particular, both the {log} and Coq approaches can be used as forms of model-based testing. That is, we can use either specification to automatically generate test cases with which protocol implementations can be tested [52,53].

Excerpt of a Z Model of a Consensus Protocol
The following is part of a Z model of a consensus protocol based on the model developed by Pîrlea and Sergey [20]. For readers unfamiliar with the Z notation we have included some background in Appendix B.
The time stamps used in the protocol are modeled as natural numbers. Then we have the type of addresses (Addr), the type of hashes (Hash), the type of proof objects (Proof) and the type of transactions (Tx). Differently from Pîrlea and Sergey's model (PS), we model addresses as a given type instead of as natural numbers. In PS the only condition required for these types is that they come equipped with equality, which is the case in Z.
The block data structure is a record with three fields: prev, (usually) points to the parent block; txs, stores the sequence of transactions stored in the block; and pf is a proof object required to validate the block.
Block
prev : Hash
txs : seq Tx
pf : Proof

Next, we define the following parameters of the model: hashb, a function computing the hash of a block; hasht, a function computing the hash of a transaction; mkProof, a function computing a proof object of a node; VAF, a relation used to validate proof objects; txValid, a relation used to validate transactions; and txExtend, a relation used to modify the set of pending transactions stored in a node. VAF is constrained to triplets where the block whose proof is being considered is not one of the blocks of the chain being considered.
Another parameter of our model is the genesis block, called GB, which should be provided by the client of the model. Clearly, GB is a block enjoying two particular properties: it has no parent and it contains no transactions.
GB : Block

GB.prev = hashb GB
GB.txs = ⟨⟩

The local state space of a participating network node is given by four state variables: this, representing the address of the node; as, the addresses of the peers this node is aware of; bf, a block forest which records the minted and received blocks; and tp, a set of received transactions, which will eventually be included in minted blocks.
LocState
this : Addr
as : P Addr
bf : Hash ⇸ Block
tp : P Tx

As nodes can send messages, we define their type. NullMsg is used when the node does not actually send any message (think of it as the null statement in programming languages, that is, skip). Note that some messages contain data; for example, AddrMsg communicates a set of peers by transmitting their addresses.
Packets are used to build so-called packet soups which are used later to define the system configuration (see schema Conf ). A packet is a triple where the first component is the message sender, the second is the destination's address and the third is the message content. With the model elements defined so far, we can give the specification of the local transitions. We start with RcvNull which models a rather trivial operation of the protocol when actually the state of the node does not change because the NullMsg has been received.

RcvNull
ΞLocState
p? : Packet
ps! : P Packet

p?.2 = this
p?.3 = NullMsg
ps! = ∅

RcvConnect specifies the transition where the node receives a ConnectMsg message, making it add the sender's address as a new peer. In this transition we see that a node can output a set of packets (in this particular case a singleton set) as a side effect of receiving a message. Here the output packet is addressed to the node that just sent a packet to this. In turn, the payload of the packet is an InvMsg message, which informs the destination of the transactions and blocks stored by this. The next transition is RcvAddr. As can be seen, it sends out a set of packets which can potentially have many elements. The node checks whether or not the packet's destination address coincides with its own address. In that case, the node adds the received addresses to its local state and sends out a set of packets that are either of the form (this, a, ConnectMsg) or (this, a, AddrMsg as′), where as′ is the updated set of peers. The former are packets generated from the received addresses and sent to the new peers the node now knows, while the latter are messages telling its already known peers that it has learned of new peers.

RcvAddr
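An executable reading of the RcvAddr transition (dictionary field names and message encodings are assumptions of this sketch; the Z schema remains the authority) can help check the intended behavior:

```python
# Sketch of RcvAddr as a state-transition function: on a matching AddrMsg
# the node merges the received addresses into `as`, sends ConnectMsg to the
# genuinely new peers, and an updated AddrMsg to the peers it already knew.
def rcv_addr(state, packet):
    sender, dest, msg = packet
    if dest != state["this"] or msg[0] != "AddrMsg":
        return state, set()                  # not addressed to us: no change
    addrs = msg[1]
    new_peers = addrs - state["as"]
    state2 = {**state, "as": state["as"] | addrs}
    out = ({(state["this"], a, ("ConnectMsg",)) for a in new_peers}
           | {(state["this"], a, ("AddrMsg", frozenset(state2["as"])))
              for a in state["as"]})
    return state2, out

s = {"this": "n0", "as": {"n1"}, "bf": {}, "tp": set()}
s2, pkts = rcv_addr(s, ("n1", "n0", ("AddrMsg", {"n2", "n3"})))
assert s2["as"] == {"n1", "n2", "n3"} and len(pkts) == 3
```

Tracing small inputs through such a function is the same kind of animation the {log} prototype provides, only less formally.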
The next transition is RcvBlock which specifies a node receiving a block instead of a transaction. However, in order to specify RcvBlock we first need to introduce several elements in the form of parameters to the model. We start by introducing FCR as an order relation on the set of block chains. Note that the axioms imply that FCR is total, transitive and irreflexive. The fourth axiom states that extensions of a chain are "heavier" than the chain itself.
The function maxFCR returns the maximum chain of a set of chains according to the FCR order.

maxFCR : P(seq Block) → seq Block

Function chain is one of the building blocks necessary to compute the ledger of a block forest. Block forests are represented as partial functions from hashes to blocks (formally Hash ⇸ Block). Then, chain takes as inputs a block forest and a block, returning a block chain. We will use chain to "iterate" over all the blocks of a given block forest.
Now, we define the ledger of a block forest as the longest block chain returned by chain for each block in the forest.
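A toy rendering of chain and the ledger (string identifiers instead of hashes, and FCR instantiated as plain chain length, which satisfies the stated axioms in this setting) may clarify the definitions:

```python
# `chain` walks prev-pointers back to the genesis block, which is the unique
# block whose prev equals its own hash; `ledger` concatenates the
# transactions of the maximal chain (FCR simplified to chain length).
def chain(bf, b):
    out = [b]
    while b["prev"] != b["id"]:          # stop at the genesis block
        b = bf[b["prev"]]
        out.insert(0, b)
    return out

def ledger(bf):
    best = max((chain(bf, blk) for blk in bf.values()), key=len)  # maxFCR
    return [tx for blk in best for tx in blk["txs"]]

bf = {
    "gb": {"id": "gb", "prev": "gb", "txs": []},        # genesis block
    "h1": {"id": "h1", "prev": "gb", "txs": ["t1"]},
    "h2": {"id": "h2", "prev": "h1", "txs": ["t2"]},
    "x1": {"id": "x1", "prev": "gb", "txs": ["u1"]},    # a competing fork
}
assert ledger(bf) == ["t1", "t2"]
```

The fork x1 loses to the longer chain through h1 and h2, exactly as maxFCR would arbitrate.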
As can be seen, RcvBlock adds the received block to the block forest without checking its validity. This is because the node might not have received the preceding blocks which determine the validity of the received block.
RcvInv specifies the behavior of the node when it receives an InvMsg message. Such a message is used to inform nodes of the transactions and blocks stored by a given node. Then, when this receives an InvMsg message, it asks for the transactions and blocks it does not yet know by means of a GetDataMsg message. The last receiving local transition is RcvGetData. This operation is divided into three cases: the node receiving a block (RcvGetDataBlock); the node receiving a transaction (RcvGetDataTx); and the node receiving data it has not requested (RcvGetDataNull).

RcvGetDataBlock
The system configuration is represented by two state variables: Delta, which establishes a mapping between network addresses and the corresponding node (local) states (in PS this variable is referred to as the global state); and P, a set of packets (which represent the messages exchanged by nodes).

Excerpt of the {log} Prototype Generated from the Z Specification
In this section, we show part of the {log} code corresponding to the Z model presented above. {log} code can be seen as both a formula and a program [49]. Thus, in this case we use the code as a prototype or executable specification of the Z model. The intention is twofold: to show that passing from a Z specification to a {log} program is rather easy, and to show how a {log} program can be used as a prototype or executable specification. The first point is achieved mainly because {log} provides the usual Boolean connectives and most of the set and relational operators available in Z. Hence, it is quite natural to encode a Z specification as a {log} program.
Given that {log} is based on Prolog, its programs resemble Prolog programs. The {log} encoding of RcvAddr is the following: As can be seen, rcvAddr is a clause receiving the before state (LocState), the input variable (P), the output variable (Ps) and the after state (LocState_). As in Prolog, {log} programs are based on unification, with the addition of set unification. In this sense, a statement such as LocState = {[as,As],[this,This] / Rest} (set) unifies the parameter received with a set term singling out the state variables needed in this case (As and This) and the rest of the variables (Rest). The same is done with packet P, where _ means any value as the first component and addrMsg(Asm) gets the set of addresses received in the packet without explicitly introducing an existential quantifier.
The set comprehensions used in the Z specification are implemented with {log}'s so-called Restricted Intensional Sets (RIS) [54]. A RIS is interpreted as a set comprehension where the control variable ranges over a finite set (D and As).
We can execute rcvAddr by calling it as part of a {log} query, as follows: That is, {log} binds values for all the free variables in a way that the formula is satisfied (if it is satisfiable at all). In this way we can trace the execution of the protocol w.r.t. states and outputs by starting from a given state (e.g., S) and input values (e.g., [_,This,addrMsg({a1,a2})]), and chaining states throughout the execution of the state transitions included in the simulation (e.g., S1 and S2).

Mimblewimble Implementations
In August 2016, someone using the pseudonym "Tom Elvis Jedusor" (the French name for Voldemort in Harry Potter) posted a link to a text file on an IRC channel describing a cryptocurrency protocol with a different approach from Bitcoin. This article, titled "Mimblewimble" [6], addressed some privacy concerns and the possibility of compressing the transaction history of the chain without loss of validity verification. Since this document left some questions open, in October 2016 Andrew Poelstra published a paper [7] where he describes, in more detail, the design of a blockchain based on Mimblewimble. In 2019, the first two practical implementations were launched: Grin and Beam. In what follows, we first describe the main features of their design and compare them with our model. Then, we discuss features that set Grin apart from Beam.

Grin
Grin [9] is an open source software project with a simple approach to MW. As we will see below, its design is a straightforward interpretation of our model.

Blocks and Transactions
In order to provide privacy and confidentiality guarantees, Grin transactions are based on confidential transactions. In Figure A1 (Appendix C), we can observe that each transaction contains a list of inputs and outputs. Each input and output is in the form of a Pedersen commitment, that is, a linear combination of the value of the transaction and a blinding factor. For instance, in the input structure (Appendix C, Figure A2, line 1729), there is a field that stores the commitment pointing to the output being spent.
In addition, the transaction structure has a list of transaction kernels (of type TxKernel) with the transaction excess and the kernel signature. All these data have a straightforward relation to our definition of transaction (Definition 3).
However, it is important to notice that the transaction kernel structure differs from our model since it does not contain the list of range proofs of the outputs. In Grin, it is part of the output structure (Appendix C, Figure A3, line 2045).
Moreover, a Grin transaction also includes the block height at which the transaction becomes valid. We have not added this data to the transaction structure yet; we should also include it in the signature process. In Grin, not only the transaction fee is signed: the signing process also takes into account the absolute position of the blocks in the chain. In this way, if a kernel points to a height greater than the current one, it is rejected. Grin behaves in the same way if the relative position points to a specific kernel commitment.
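The lock-height rule just described can be sketched as a one-line check (the field name lock_height is our assumption for this sketch, not Grin's API):

```python
# A kernel carrying an (absolute) lock height greater than the current
# chain height makes the transaction invalid at that height.
def kernel_valid_at(kernel: dict, chain_height: int) -> bool:
    return kernel.get("lock_height", 0) <= chain_height

assert kernel_valid_at({"lock_height": 100}, 120)       # already unlocked
assert not kernel_valid_at({"lock_height": 200}, 120)   # rejected for now
```

Because the lock height is covered by the kernel signature, a relay cannot alter it without invalidating the transaction.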
Grin blocks also store a kernel offset, which is the sum of all the transaction kernel offsets added to the block. In our model, the kernel offset is defined within a block (Definition 9) and the notion of adding a transaction into a block is formalized by block aggregation (Definition 11). Besides, the single aggregate offset is what allows proving Lemma 7 as part of the Transaction Unlinkability property (Property 3).

Privacy and Security Properties
The cut-through process, as explained in Section 4.1, provides scalability and further anonymity. Grin performs this process in the transaction pool, which we formalized as the mempool (Definition 8). Outputs that have already been spent as new inputs are removed from the mempool, using the fact that every transaction in a block should sum to zero.
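The effect of cut-through on a pool of aggregated transactions can be sketched as follows (commitments are modeled as opaque labels; this is an illustration, not Grin's code):

```python
# Toy cut-through: a commitment that occurs both as an output of one
# aggregated transaction and as an input of another cancels out and is
# dropped, leaving the overall sum (and hence validity) unchanged.
def cut_through(inputs, outputs):
    spent = set(inputs) & set(outputs)
    return ([i for i in inputs if i not in spent],
            [o for o in outputs if o not in spent])

ins, outs = cut_through(["A", "C"], ["B", "C", "D"])
assert ins == ["A"] and outs == ["B", "D"]   # "C" cancelled out
```

Because the cancelled pairs contribute zero to the balance equation, the pruned aggregate still sums to zero, which is why validity is preserved.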
CoinJoin, as we mentioned in Section 4.2, combines inputs and outputs from multiple transactions into a single transaction in order to obfuscate them. In Grin, every block is a CoinJoin of all other transactions in the block.
In addition, Grin supports a pruning process, which can be applied to past blocks: outputs that have already been spent are removed from the block in which they were created. Block validity (Property 2) should be invariant w.r.t. the pruning process. Each node maintains a local state with a local copy of the chain; the pruning process can be applied recursively to keep the chain as compact as possible. Pruning is useful to free storage space. As a consequence, when a new node wants to join the network, it can receive just a pruned (i.e., partial) chain, which it then needs to validate, making the synchronization process faster. In Section 3.5, validChain should be modified to guarantee the validity of a partial chain.
As we mentioned in Section 4.2, switch commitments combine perfect hiding with statistical binding. Grin implements a switch commitment [55] as part of each transaction output in order to provide stronger security than computational bindingness (Definition 13), which is crucial in the age of quantum adversaries.

Beam
Beam [56] was the other Mimblewimble project launched in January 2019. This open source system has a funding model and a dedicated development team.

Blocks and Transactions
Beam transactions are confidential transactions implemented with the Pedersen commitment scheme. This follows the same approach as our model. Figure A4 in Appendix C shows (line 439) how Beam's input stores the commitment, that is, a point over the elliptic curve (of class ECC::Point).
In Section 3 we described how each node maintains a local state. The state keeps track of the UTXO set. Beam extends the behavior of that set by supporting an incubation period on UTXOs. This means that Beam can set the minimum number of blocks that must be created after a UTXO entered the blockchain before it can be spent in a transaction. This number is included in the transaction signature. Figure A5 in Appendix C shows (line 510) how Beam's output stores the number of blocks corresponding to the incubation period.
In our model (Section 3.5), the predicate validChain should check that every output with an incubation period was 'lawfully' spent over the entire blockchain (global state). In other words, if an output o with incubation period d is created in a confirmed block b of the chain and a later confirmed block b′ contains o as an input, then b′ should be at least d blocks away from b on the blockchain.
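The check just described can be sketched as follows (function and parameter names are ours, not Beam's API):

```python
# An output created at height h_out with incubation period d may only be
# spent at height h_spend when at least d blocks separate the two.
def incubation_ok(h_out: int, d: int, h_spend: int) -> bool:
    return h_spend - h_out >= d

assert incubation_ok(100, 10, 110)       # exactly d blocks later: spendable
assert not incubation_ok(100, 10, 105)   # too early: validChain rejects
```

A revised validChain would apply this predicate to every (output, spending input) pair across the chain.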

Privacy and Security Properties
Beam supports cut-through as we described above. In addition, Beam adds a scalability feature that eliminates all intermediate transaction kernels [57] in order to keep the blockchain as compact as possible. It would be important to prove that the resulting transaction is still valid in the sense of Property 1.

Discussion
Both the Grin and Beam implementations address the main features of the MW protocol, namely the properties of confidentiality, anonymity and unlinkability covered in our work.

Broadcasting Protocol
Both Grin and Beam use the Dandelion scheme as their broadcasting protocol [41]. We have formalized that a broadcasting protocol should satisfy Property 4 (Transaction Untraceability): it should not be possible to link transactions to their originating IP addresses, in other words, to deanonymize users. Broadcast with confusion, as described in Property 16, should be carried out to satisfy Transaction Untraceability. We have also described the stem and fluff phases of the Dandelion scheme.
In the stem phase, Grin's implementation allows for transaction aggregation (CoinJoin) and cut-through, which provides greater anonymity to transactions before they are broadcast to the entire network.
In addition, in order to improve privacy, Beam's implementation adds dummy transaction outputs in the stem phase. Each dummy output has a value of zero and is indistinguishable from regular outputs. Later, after a random number of blocks, these UTXOs are added as inputs to new transactions; that is, they are spent and removed from the blockchain.
In Section 4.5 we specified the stem routine. Below we extend the routine to capture Beam's behavior:

    subroutine stem(tx : Transaction) {
        (* incubation period random choice *)
        i ← {min, max}
        (* create zero value output with incubation period i *)
        zeroOut ← createZeroValueOutput(i)
        addOutputTransactionUTXO(zeroOut)
        addOutputTransaction(zeroOut, tx)
        ...
    }

To capture that semantics, we combine two of Beam's features: the incubation period on UTXOs and the aggregation of zero-value transaction outputs. Firstly, we randomly choose an incubation period i. Then, we create a zero-value transaction output (zeroOut) with incubation period i. The incubation period ensures that the dummy output is not spent before i blocks are confirmed on the chain. After that, zeroOut is added to the UTXO set (which is maintained in the local state of the node) and to the transaction tx being broadcast. Finally, the routine continues as specified in Section 4.5.
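The extended routine above can also be mirrored in executable form. The Python sketch below is illustrative: the helper createZeroValueOutput, the dict-based transaction and UTXO representations, and the incubation bounds are all assumptions, not Beam's actual code.

```python
import random

# Executable sketch of the extended stem routine. Data structures and the
# incubation bounds are hypothetical stand-ins for Beam's implementation.

MIN_INCUBATION, MAX_INCUBATION = 3, 10  # illustrative {min, max} bounds

def create_zero_value_output(incubation):
    # A dummy output: value zero, tagged with its incubation period.
    return {"value": 0, "incubation": incubation}

def stem(tx, utxo_set):
    """Attach a zero-value dummy output to tx before relaying it."""
    # (* incubation period random choice *)
    i = random.randint(MIN_INCUBATION, MAX_INCUBATION)
    # (* create zero value output with incubation period i *)
    zero_out = create_zero_value_output(i)
    utxo_set.append(zero_out)        # addOutputTransactionUTXO(zeroOut)
    tx["outputs"].append(zero_out)   # addOutputTransaction(zeroOut, tx)
    # ... continue with the stem relay as specified in Section 4.5
    return tx
```

The same dummy output is recorded both in the local UTXO set and in the transaction being relayed, matching the two additions performed by the pseudocode.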

Range Proofs
Grin and Beam implement range proofs using Bulletproofs [35], a non-interactive zero-knowledge proof protocol. Bulletproofs are short proofs (logarithmic in the witness size) whose aim is to prove that the committed value lies in a certain (positive) range without revealing it. Proof generation and verification times are linear in the length of the range. Regarding our model, a valid range proof is the first property a transaction should satisfy to be valid (Property 1). Furthermore, for every transaction in a block, the range proofs of all the outputs should be valid too (Property 2).
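Why range proofs are indispensable can be seen with a toy calculation. The snippet below is not a real Pedersen commitment: blinding factors and elliptic-curve arithmetic are omitted, and a small modulus stands in for the group order. It only shows that, with values reduced modulo the group order, a "negative" output lets a transaction balance while effectively minting coins.

```python
# Toy illustration of the inflation attack prevented by range proofs.
# ORDER is a tiny stand-in for the elliptic-curve group order.
ORDER = 2**16 + 1

def commit_value(v):
    # Only the value component of a commitment, reduced mod the group
    # order; blinding factors are deliberately omitted.
    return v % ORDER

# Spend an input of 5 coins into outputs of 10 and -5 coins.
inputs = [commit_value(5)]
outputs = [commit_value(10), commit_value(-5)]

# The "negative" output wraps around to a huge committed value...
assert commit_value(-5) == ORDER - 5
# ...yet the transaction balances, so a 10-coin output was created from a
# 5-coin input. A range proof on each output rules this out.
assert sum(outputs) % ORDER == sum(inputs) % ORDER
```

A range proof forces every committed output value into a small positive interval, so the wrapped-around value ORDER − 5 could never be proved valid.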

Some Design Decisions

Emission Scheme
It is well known that Bitcoin has a limited and finite supply of coins. Nowadays, new coins come from the process called "mining", in which miners are paid for their work of aggregating new blocks to the chain, in addition to collecting transaction fees. However, once the maximum number of coins in circulation is reached, there will be no new coins and miners will be paid only with transaction fees.
Grin takes a different approach. It has a static emission rate: a fixed number of coins is released as the reward for aggregating a new block to the chain, with no upper bound on the total number of coins. Beam, in contrast, has a capped total supply of 262M coins, with a block reward that decreases over the years [58].
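The two emission styles can be contrasted with a small sketch. The constant 60-coin reward matches Grin's documented one-coin-per-second design; Beam's actual decreasing schedule is more involved, so the capped variant below is only a generic halving model with illustrative parameters, not Beam's real curve.

```python
# Illustrative comparison of emission schemes (parameters are assumptions,
# except Grin's constant 60-grin block reward).

def grin_supply(height, reward=60):
    # Static emission: supply grows linearly with height, with no cap.
    return height * reward

def capped_supply(height, initial_reward=80, halve_every=100_000):
    # Decreasing emission: the reward halves periodically, so the total
    # supply approaches a finite cap once the reward reaches zero.
    supply, reward, h = 0, initial_reward, 0
    while h < height:
        step = min(halve_every, height - h)
        supply += step * reward
        reward //= 2
        h += step
    return supply
```

Under the static scheme the supply is unbounded, while under the halving scheme it stops growing once the integer reward decays to zero.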

Parties Negotiation
Mimblewimble establishes that the communication between the parties needed to construct a new transaction happens off-chain. The parties must collaborate in order to choose blinding factors and construct a valid transaction, in particular a balanced transaction as in Definition 5. Grin implements this process synchronously: both parties are connected directly to one another and must be online simultaneously.
Beam, on the other hand, offers a non-interactive negotiation between the parties to construct a new transaction. The Secure Bulletin Board System (SBBS) [59] runs on the nodes and allows the parties to communicate while offline. Moreover, Beam also provides a one-sided payment scheme, which allows a sender to pay a specified value to a particular receiver without any interaction from the receiver's side. The key here is not revealing blinding factors, which is addressed with a process called kernel fusion: each party constructs half a kernel, and both halves must be present in the transaction.

Chain Synchronization
Grin allows partial history synchronization. When a new node wants to enter the network, it does not need to download the full history of the chain; instead, it queries the block header at a horizon, a limit it can increase as necessary. Then, it downloads the full UTXO set of the horizon block.
Beam improves node synchronization using macroblocks. A macroblock is a compressed version of the blockchain history obtained after applying the cut-through process. Each node stores macroblocks locally. When a new node connects to the network, it downloads the latest macroblock and starts working from that point.

Final Remarks
MW constitutes an important step forward in the protection of anonymity and privacy in the domain of cryptocurrencies. Since it also facilitates scalability and the validation process, both Grin and Beam adopted the MW protocol for their implementations.
We have highlighted elements that constitute essential steps towards the development of an exhaustive formalization of the MW cryptocurrency protocol, the analysis of its properties and the verification of its implementations. The proposed idealized model is key in the described verification process. First of all, we have defined the main components of our idealized model: transactions, blocks and chain. Then, we have provided validity conditions to guarantee the correctness of the blockchain. We have stated precise conditions for a valid transaction and a valid block. Furthermore, we have defined and proved that the validity of a block is invariant with respect to the cut-through process and CoinJoin.
The main difficulty we have faced during that process was the lack of "official" documentation, so we have made an exhaustive literature review in order to analyze and conceptualize the main components of MW. Furthermore, even when Grin and Beam have documentation available on-line, the main challenge was to build a model which abstracts away the specifics of their implementations.
We have also identified and precisely stated the conditions for our model to ensure the verification of relevant security properties of MW, which is an important contribution of this work. Firstly, we have proved that no new funds are produced from thin air in a valid transaction. Secondly, since MW transactions are in the form of Pedersen commitments, we have analyzed the strength of the scheme with respect to the main security properties a cryptocurrency protocol must have. In particular, we have defined the computational binding commitment property and we have shown a security reduction using a game-based cryptographic proof approach. Then, we have defined and analyzed two important security properties: unlinkability and untraceability. In particular, we have proved that the probability of finding a balanced transaction within a valid block is negligible. Moreover, we have defined what broadcast with confusion is in order to obscure the IP address from which the transaction originates.
Since MW is built on top of a consensus protocol, we have developed a Z specification of a consensus protocol and presented an excerpt of the {log} prototype after its Z specification. This {log} prototype can be used as an executable specification where functional scenarios can be exercised. This allows us to analyze the behavior of the protocol without having to implement it in a low level programming language.
Finally, we have analyzed and compared the Grin and Beam implementations in their current state of development, considering our model and its properties as a reference base. We have presented detailed connections between our model and their implementations regarding the MW structure and its security properties. In particular, we have extended our stem routine abstraction to capture Beam's behavior, combining dummy transactions and incubation periods in order to improve privacy.
The main challenge we have faced to address the comparison between our model and the implementations was reading the source code of Grin and Beam to identify the components of our model and analyze how they implement them.
With respect to our previous paper [31], we have extended the definition of the MW protocol and the idealized model, incorporating in particular the discussion on the security properties of Pedersen commitments. Furthermore, we have studied the strength of the commitment scheme introduced regarding the main security properties a cryptocurrency protocol must have.
As future work, we plan to extend the formalization of our model to include new definitions and security properties. In particular, we will extend the definition of valid transaction to enable zero-knowledge proofs in order to prove that the transaction amount is in a certain range without revealing the value.
In addition, since cryptographic proofs are becoming increasingly error-prone and difficult to check, we plan to carry out a specification of our MW model using an interactive prover, in order to provide an automated verification of our model. Security goals and hardness assumptions shall be modeled in order to verify the security properties we have stated. Firstly, we plan to evaluate tools for the verification of cryptographic protocols and implementations, such as EasyCrypt [28], ProVerif [60], CryptoVerif [61] and Tamarin [62]. In particular, we are especially interested in using EasyCrypt, an interactive framework for verifying the security of cryptographic constructions in the computational model. Secondly, we will specify our model using the tool, according to all definitions we have stated in this work. Then, all the properties we have presented and proved should be verified using the interactive prover. Furthermore, we shall specify and verify the security properties.
In particular, we shall verify the game-based cryptographic proof of Lemma 6, whose goal is to construct a security reduction as a sequence of games, proving that any attack against the security of the system would lead to an efficient way to solve the discrete logarithm problem.
The results presented in this work constitute a relevant contribution to the analysis of the correctness of the MW protocol and its security properties over an idealized model, beyond any particular implementation. A direction for future research is to verify that Grin and Beam are correct implementations of the idealized model, in order to guarantee their security properties with a formal approach.
Author Contributions: This article builds upon and extends a previously published paper in AIBlock 2020 [31]. All authors have contributed equally to this work. However, the master's student A.S. made the main contributions in Sections 4 and 6. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

Math Symbol Meaning
X ⇸ Y        set of partial functions from X to Y
P X          powerset of set X
X ↔ Y        set of binary relations from X to Y
X → Y        set of total functions from X to Y
ran R        range of R
⟨⟩           empty sequence
⟨a, b, c⟩    sequence of a, b and c
∅            empty set

Appendix B. Background on the Z Notation
The Z notation is a formal method based on first-order logic and Zermelo-Fraenkel set theory. Any Z specification takes the form of a state machine (not necessarily a finite one). This machine is defined by giving its state space and the transitions between those states. The state space is given by declaring a tuple of typed state variables. A transition, called an operation in Z, is defined by specifying its signature, its preconditions and its postconditions. The signature of an operation includes input, state and output variables. Each operation can change the state of the machine; a state change is described by giving the relation between before-state and after-state variables.
In order to introduce the Z notation we will use a simple example: the savings accounts of a bank. Each account is identified by a so-called account number, and the bank needs to keep track only of the balance of each account. Since account numbers are used just as identifiers, we can abstract them away, not caring about their internal structure. Z provides so-called basic or given types for these cases. The Z syntax for introducing a basic type is:

[ACCNUM]
In this way, it is possible to declare variables of type ACCNUM and to build more complex types involving it; for instance, the type of all sets of account numbers is P ACCNUM.
We represent the money that clients can deposit and withdraw, as well as the balance of savings accounts, as natural numbers. The type of the integer numbers, Z, is built into Z; the notation also includes the set of natural numbers, N. Then, we define:

    MONEY == N
    BALANCE == N

In other words, we introduce two synonyms for the set of natural numbers so the specification is more readable.
The state space is defined as follows:

    Bank
        balances : ACCNUM ⇸ BALANCE

This construction is called a schema; each schema has a name that can be used in other schemas. In particular, this is a state schema because it only declares state variables. In effect, it declares one state variable by giving its name and type. Each state of the system corresponds to a particular valuation of this variable. The type constructor ⇸ defines partial functions. (It must be noted that the Z type system is not as strong as the type systems of other formalisms, such as Coq, so we will be as formal as is usual in the Z community regarding its type system.) Then, balances is a partial function from ACCNUM to BALANCE. It makes sense to define such a function because each account has a unique ACCNUM; and it makes sense to make balances partial because not every account number is in use at all times (i.e., when an account is opened the bank assigns it an unused account number, and when an account is closed its account number is no longer used). Now, we can define the initial state of the system as follows:

    InitBank
        Bank

        balances = ∅

InitBank is another schema. The upper part is the declaration part and the lower part is the predicate part; the latter is optional and is absent from Bank. In the declaration part we can declare variables or use schema inclusion. The latter means that we can write the name of another schema instead of declaring its variables, which allows us to reuse schemas. In this case the predicate part says that the state variable is equal to the empty set. It is important to remark that the = symbol denotes logical equality and not an imperative assignment; Z has no notion of control flow. In Z, functions are sets of ordered pairs; being sets, they can be compared with the empty set. The symbol ∅ is polymorphic in Z: it is the same for all types. Now we can define an operation of the system (i.e., a state transition) specifying how an account is opened.
    NewAccountOk
        ∆Bank
        n? : ACCNUM

        n? ∉ dom balances
        balances′ = balances ∪ {n? ↦ 0}

As can be seen, operations are defined by means of schemas. The expression ∆Bank in the declaration part is a shorthand for including the schemas Bank and Bank′. We already know what it means to include Bank. Bank′ is equal to Bank except that all of its variables are decorated with a prime. Therefore, Bank′ declares balances′ of the same type as balances in Bank. When a state variable is decorated with a prime it is assumed to be an after-state variable. The net effect of including ∆Bank is, then, the declaration of one before-state variable and one after-state variable. A ∆ expression is included in every operation schema that produces a state change.
Variables decorated with a question mark, like n?, are assumed to be input variables. Then, n? represents the account number to be assigned to the new savings account. To simplify the specification a little bit we assume that a bank's clerk provides the account number when the operation is called-instead of the system generating it.
Note that the predicate part consists of two atomic predicates. When two or more predicates appear in different rows they are assumed to be conjoined. In other words:

    n? ∉ dom balances
    balances′ = balances ∪ {n? ↦ 0}

is equivalent to:

    n? ∉ dom balances ∧ balances′ = balances ∪ {n? ↦ 0}

Z uses the standard symbols of discrete mathematics and set theory, so we think it will not be difficult for the reader to understand each predicate. Remember that functions and relations are sets of ordered pairs, so they can participate in set expressions. For instance, balances ∪ {n? ↦ 0} adds an ordered pair to balances. Again, the expression balances′ = balances ∪ {n? ↦ 0} is actually a predicate saying that balances′ is equal to balances ∪ {n? ↦ 0}, not that the latter is assigned to the former. In other words, this predicate says that the value of balances in the after-state is equal to its value in the before-state plus the ordered pair n? ↦ 0.
Note that operations are defined by giving their preconditions and postconditions. In NewAccountOk the precondition is:

    n? ∉ dom balances

If this precondition does not hold (i.e., the account number is already in use), the operation is specified by the following schema, written in so-called horizontal form:

    AccountAlreadyExists == [ΞBank; n? : ACCNUM | n? ∈ dom balances]

This is another way of writing schemas. It has the same meaning as:

    AccountAlreadyExists
        ΞBank
        n? : ACCNUM

        n? ∈ dom balances

The expression ΞBank is a shorthand for:

    ΞBank
        ∆Bank

        balances′ = balances

If a Ξ expression is included in an operation schema, the operation does not produce a state change because all the primed state variables are equal to their unprimed counterparts. When a schema whose predicate part is not empty is included in another schema, the net effect is twofold: (a) the declaration part of the former is included in the declaration part of the latter; and (b) the predicate of the former is conjoined to the predicate of the latter. Hence, AccountAlreadyExists could have been written as follows:

    AccountAlreadyExists
        balances, balances′ : ACCNUM ⇸ BALANCE
        n? : ACCNUM

        n? ∈ dom balances
        balances′ = balances

Usually, schemas like NewAccountOk are said to specify the successful cases or situations, while schemas like AccountAlreadyExists specify the erroneous cases. Finally, we assemble the two schemas to define the total operation, that is, an operation whose precondition is equivalent to true, for opening a savings account in the bank:

    NewAccount == NewAccountOk ∨ AccountAlreadyExists

NewAccount is defined by a so-called schema expression. Schema expressions are expressions involving schema names and logical connectives. Let A be the schema defined as [D_A | P_A], where D_A is the declaration part and P_A is its predicate. Similarly, let B be the schema defined by [D_B | P_B]. Then, the schema C defined by A ⋆ B, where ⋆ is any of ∧, ∨ and ⇒, is the schema [D_A; D_B | P_A ⋆ P_B].
In other words, the declaration parts of the schemas involved in a schema expression are joined together and the predicates are connected with the same connectives used in the expression; if there is some clash in the declaration parts, it must be resolved by the user. In symbols:
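As a cross-check of the Bank example, the same state machine can be mirrored in ordinary code. The Python sketch below is illustrative only (it is not part of the Z specification): the partial function balances is modeled as a dict, and the total operation combines the successful and erroneous cases, just as the schema disjunction does.

```python
# Python analogue of the Z Bank specification (illustrative sketch).

def init_bank():
    # InitBank: balances = ∅
    return {}

def new_account(balances, n):
    """NewAccount == NewAccountOk ∨ AccountAlreadyExists."""
    if n not in balances:            # precondition of NewAccountOk
        updated = dict(balances)     # before-state is left untouched
        updated[n] = 0               # balances′ = balances ∪ {n ↦ 0}
        return updated, "ok"
    # ΞBank: the state does not change in the erroneous case.
    return balances, "account already exists"
```

Note how returning a fresh dict keeps the before-state and after-state distinct, echoing the unprimed and primed variables of the Z schemas.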