Diffusion on the Peer-to-Peer Network

In a peer-to-peer complex environment, information is permanently diffused. Such an environment can be modeled as a graph supporting flows of information. The interest of such modeling is that (1) one can describe the exchanges through time from an initial state of the network, (2) the description can be fitted to a real-world case and then used for forecasting, and (3) it can be used to trace information through time. In this paper, we review the methodology for describing diffusion processes on a network, in the context of the exchange of information in a crypto (Bitcoin) peer-to-peer network. The necessary definitions are posed, and the diffusion equation is derived by considering two different types of Laplacian operators. Equilibrium conditions are discussed, and analytical solutions are derived, particularly in the context of a directed graph, which constitutes the main innovation of this paper. Further innovations are the inclusion of boundary conditions and the implementation of delay in the diffusion equation, followed by a discussion of approximations useful for implementation. Numerous numerical simulations illustrate the theory developed throughout the paper. Specifically, we validate the derived analytic solutions through simple examples, and implement them on more sophisticated graphs, e.g., the ring graph, which is particularly important in crypto peer-to-peer networks. In conclusion, we develop a theory useful for fitting purposes, in order to gain more information on a network's diffusivity, through a modeling framework that the scientific community is familiar with.


Introduction
This document aims at describing the diffusion of information in the peer-to-peer (P2P) network related to a crypto, for instance Bitcoin, through a fundamental diffusion approach. The information could be anything that is an object of communication between agents (i.e., miners and users); to add context, it could be the hash calculation. This designates the processes which calculate hashes for mining blocks, given the relevant remaining transactions to hash. More specifically, miners have the purpose of hashing transactions within a list of transactions (the memory pool) which appear in the next block in the blockchain (Lipton and Treccani 2021). Each transaction circulates from node to node in the memory pool used for remembering all new transactions, and miners receiving new transaction lists, supposing no double spending, start to generate hashes to create the new block. To this extent, the hash calculation starts as soon as miners receive the list of transactions; thus, the hash calculation can be seen as the information of transactions diffusing along the network (see Shahsavari et al. (2017) for a more theoretical discussion). The more transactions a miner receives to be mined, the more hashes he/she is likely to calculate, i.e., the stronger the hash calculation. On the contrary, if there is a low amount of information, i.e., not so many transactions to hash, then the hash calculation is lower. Conversely, if the hash calculation is high, this means that the miner has likely received a long list of transactions ready to be hashed. The hash calculation is intimately linked to the hash rate, so we will name this type of diffusion the diffusion of hash rate. In another, more economical context, the diffused information could simply be Bitcoin cash flows. Note that this picture is accurate only if all miners have equal power of computing hashes, which we know is not realistic.
This is not an issue: the diffusion model should depend on each node, and we will need to incorporate a rate of creation of hashes to assign to each node. In essence, not only do we have a diffusion of hash rate, but also the creation of hashes, increasing the hash rate for each miner. Being able to derive the hash rate with respect to time and node is being able to predict such an evolution for all nodes, at a given time and context, which is one main motivation for this approach.
A simple but fundamental physics model of diffusion is given by the famous diffusion equation

∂h/∂t (t, x) = D ∆h(t, x) + Γ(t, x), (1)

where h is the hash rate (or, more generally, a 'rate' of information), t is the time variable, D is the diffusion coefficient throughout the network, and Γ is the local hash creation rate. The operator ∆ is called the Laplacian operator. One may think we could simply take this approach and solve the equation for h. However, what do the spatial coordinates represent? The majority of physics fields where this equation applies contain spatial coordinates, which are real numbers with respect to a pre-defined coordinate system. This is not the case for a proper network. This means we need to handle the diffusion theory on a graph. There have already been established approaches somehow related to the diffusion description on a graph, with many applications (e.g., Bondy and Murty 2007; Pacreau et al. 2021; Thanou et al. 2017; Viriyasitavat and Hoonsopon 2019). We discuss some of the approaches for modeling diffusion on a network.
The paper An et al. (2021) specifies an optimality approach for minimizing the maximum regret due to decisions taken in an uncertain environment, and specifically calculates the maximal regret through a linear program. This program has the advantage of considering the connectivity of the network, and the evolution of the information diffusion should be a relevant complement to describe the structure of the network. The paper defends Hayek's theory of private money, which, once applied to a state, should avoid a significant amount of private money within the population of this state and reduce inequalities by means of further regulations through central bank digital currencies (CBDC). This imposes a certain structure on the network of exchanges: the current Bitcoin P2P network is distributed, and the application of this theory would change it into a more centralized network. The diffusion properties may significantly change from one type to the other (see Section 7 for numerical illustrations), and our approach applies to both types. The authors in the paper Hollanders et al. (2014) model a P2P network as a bipartite graph, where one family is the set of files, and the other is the set of peers. This way of modeling allows one to find an 'S' shape in the dynamics of the information diffusion, and such a shape is a strong expectation of the social networks community. The equations guiding the evolution are the susceptible/infectious (SI) equations. There are underlying assumptions, such as the strong one of the constancy of the population. The paper Manini and Gribaudo (2008) engages a probabilistic approach to the diffusion of files in a P2P network. In Musa et al. (2018), the diffusion methodologies applied to a P2P network are explicitly reviewed. If not based on the SI approach, the models are essentially probabilistic. Finally, the authors in Li et al. (2019) see the blockchain as a kind of Markov chain.
Interestingly, all the mentioned approaches overall focus on probabilistic modeling for the diffusion processes on a graph. This present study proposes to focus on the more fundamental aspects of the diffusion behavior, mainly governed by an equation of the style of Equation (1). It is worth stressing that the operator Γ is very general, and can be the source of randomness through, for instance, non-linearities, bringing chaos into the system. However, one possibly can also include probability in Γ, which would make the present study very general. More conceptually, Γ can gather any network specificity, such as its consensus, since it is related to sources of information.
However, some definitions already introduced in some of the previous studies, and including the Laplacian operator, may need to be further discussed, especially from a more intuitive and fundamental view of sharing information between two vertices of a graph. This is especially true when the graph is directed. Thus, in the context of diffusion on a P2P network, we need, for deriving the appropriate diffusion equation, to introduce the derivative and the integration on a graph, for a function f defined on each node x at a given time t. The Laplacian operator will follow. One understands that deriving the appropriate diffusion equation is a task in itself. Then, once derived, we need to solve the equation, which represents another task. In addition, we will see that there are different possible Laplacian operators for a diffusion theory (at least two), and we will focus on them separately. We will need some linear algebra tools to do it, as a graph is equivalent to its adjacency matrix, and, conversely, any matrix gives a (weighted) graph.
We now discuss the two hypotheses made (and mitigated) in this study. The first one is that graphs are undirected. This is a strong assumption, as it means that once a vertex agent receives information that it did not previously have, it will give part of it back to the source. Although this might be convenient in the context of bitcoin exchange (Alice sends some bitcoins to Bob, but also to herself, i.e., she generates new addresses she owns to send part of the total amount of bitcoins), a piece of information, whatever its type, has no reason to come back to its source. Since a directed graph is essential to describe the interactions within a P2P network, this is what we consider in this paper, and it constitutes the main innovation of the present study.
Another hypothesis is made in this modeling: when a vertex agent receives a piece of information, he/she immediately treats it and diffuses it to its neighbors. In other words, the response to the information reception is immediate. This is also a strong assumption, as, for instance, cash flows are performed in a delayed manner. Thus, Alice sends some bitcoins to Bob, but Bob does not immediately send a part (or more) of it to Charlie. In fact, the immediate response is the straightforward consequence of the structure of the diffusion equation, whose solutions are a differentiable function of time. It turns out that one way to make a non-differentiable solution at some given times is to introduce time dependency in the Laplacian operator, and the Heaviside function (whose value is 0 below a given time, and 1 above it) is a relevant candidate for differentiability breaking at selected points in time. Introducing Heaviside functions in the Laplacian operator is a much easier task in the context of graph modeling, for the reason that there are no discontinuity issues for the variable x, usually continuous, and here being discrete (and representing the node index). We engage this point of view at the end of the study.
The rest of this paper is organized as follows. Section 2 is an introduction to the modeling, with many fundamental definitions which will imply the Laplacian ones, as well as the diffusion theory. Section 3 derives and solves the diffusion equation defined on an undirected graph. It is worth pointing out that these two sections are strongly inspired by Chung et al. (2007). The rest of the paper constitutes the main innovations. Section 4 introduces two different types of equilibrium notions, the standard one (t → ∞) and the blockchain one (t = 10 min for Bitcoin), as well as some conceptual links between both. Section 5 shows the implementation of the solution derived in Section 3. Section 6 is an important generalization of the approach taken thus far: it introduces the diffusion equation and its resolution on a directed graph, a very important property of the P2P network, as already discussed above. Section 7 shows numerical illustrations of the previous section for several famous networks (including the Erdős–Rényi network and the binary tree). Section 8 is the direct implication of the previous directed graph case when there are boundary conditions, typically certain vertices constantly sending information to the rest of the network. Finally, Section 9 includes delay in the response of nodes when they receive information. The last section concludes the study.

Introduction to Graph Theory
Let G = (E, V) be a graph, that is, a set of nodes E and a set of edges V, composed of pairs of nodes in E. Thus, if x ∈ E and y ∈ E are two nodes of G and (x, y) ∈ V, we say that the nodes x and y are connected in G. We suppose that G is connected (that is, there exists a path between any two nodes of G), unoriented (x to y or y to x is the same) and unweighted (all the paths are equally effective); this last hypothesis may be relaxed in the future. An example of such a graph is shown in Figure 1. Here, nodes have between one and four neighbors. In particular, node 6 diffuses information (e.g., a list of transactions) to its neighbors, i.e., nodes 3, 5, and 7.
There is a matrix equivalent for such a graph. Node 1 is connected to nodes 2 and 3 in Figure 1, but not to any other node. We can represent the graph G by a matrix A, which is named the adjacency matrix. If we arbitrarily label nodes with numbers, the adjacency matrix for the graph G in Figure 1 is given by Equation (2). The first row corresponds to the connections of node 1. Node 1 is connected to nodes 2 and 3 only, so columns 2 and 3 of the first row contain a 1, and all the other entries of that row contain a 0, and so on. The diagonal (from top left to bottom right) of this matrix is composed of zeroes. It actually turns out that this matrix is symmetric (i.e., with respect to its diagonal). In this example, E is isomorphic to {1, 2, . . . , 9} (we will identify both sets if there is no ambiguity), and the element a_ij of A is 1 if (i, j) ∈ V, for (i, j) ∈ E².
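The adjacency-matrix construction can be sketched in code. The full edge list of Figure 1 is not reproduced in the text, so the edge list below is hypothetical beyond the connections explicitly named (1-2, 1-3, and 6-3, 6-5, 6-7); the extra edges are assumed only to keep the example graph connected:

```python
import numpy as np

# Adjacency matrix of a small unoriented, unweighted graph. Only the
# edges named in the text (1-2, 1-3, 6-3, 6-5, 6-7) are certain; the
# remaining edges are hypothetical, added to keep the graph connected.
edges = [(1, 2), (1, 3), (2, 4), (3, 6), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]
n = 9

A = np.zeros((n, n), dtype=int)
for i, j in edges:              # nodes are labeled 1..n; indices are 0..n-1
    A[i - 1, j - 1] = 1
    A[j - 1, i - 1] = 1         # unoriented graph: A is symmetric

assert (A == A.T).all()         # symmetry of the adjacency matrix
assert (np.diag(A) == 0).all()  # no self-loops: zero diagonal
degrees = A.sum(axis=1)         # d_x = number of neighbors of node x
print(degrees)                  # node 6 (index 5) has 3 neighbors: 3, 5, 7
```

The symmetry and zero-diagonal checks encode exactly the unoriented, no-self-loop hypotheses stated above.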
Studying the matrix A or the graph G is therefore equivalent, and the P2P network could indeed be represented this way. In addition, in Figure 1, node 6 spreads the information of new transactions to nodes 3, 5, and 7. Node 6 starts to hash first, and then nodes 3, 5, and 7 start once they receive the information: the 'hash rate' propagates. Here, node 6 first computes hashes (purple), and then nodes 3, 5, and 7 compute hashes (they become purple). This approach generalizes to all the nodes that create hashes: each such node diffuses lists of transactions to other nodes while starting to compute hashes, and therefore makes its neighbors compute hashes.
The arising question is: what is the diffusive process of such hash information, i.e., can we describe it and thus allow a prediction? The answer is yes.

Derivative
We need to introduce the derivative and the integration properly before doing anything else. It is worth noting that a significant effort has been made to stay close to the notations of differential calculus. From now on, the remainder of this paper will be mathematical, unless specified otherwise. The diffusive quantity of interest, here the hash rate, is given by the function f, a function of the nodes x ∈ E and of the time t ∈ R+.
In the following, we focus on a graph G = (E, V) with associated adjacency matrix A = (a_ij)_{1≤i,j≤n} (n ∈ N*\{1} nodes). The graph G is connected, unoriented and unweighted, so that the matrix A is symmetric, and its elements are in {0, 1}. We finally call N(x) the set of all neighbors of x ∈ E, write d_x^A = Card N(x) the degree of x, i.e., the number of neighbors of x, and call Vol(G) = Σ_{x∈E} d_x^A the volume of the graph G. We write x ∈ E or x ∈ G equivalently in the sequel.
Definition 1 (Derivative). Let f be a function of E × R+ in R. The directional derivative of f from z ∈ G to y ∈ G is given by

∂_z f(y) = a_yz ( f(y) − f(z) ).

The operator ∂_z is a linear map of f, for any z ∈ G. Figure 2 represents the derivative. The rest of the definitions depend on this one, so it is important to understand it well.

Figure 2. The directional derivative of f from z to y. This represents the (finite) variation of f for the information going from node z to node y. It is positive if y has more information than z, and negative otherwise. This concept makes sense only if y and z are neighbors; hence the presence of a_yz ∈ {0, 1} in the definition.

Definition 2 (Gradient). Let f : E × R+ → R. The gradient of f at y ∈ G is the vector given by

∇f(y) = ( ∂_z f(y) )_{z ∈ N(y)}.

The gradient at a node is the collection of directional derivatives toward this node.
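As a minimal sketch, assuming the directional derivative takes the form ∂_z f(y) = a_yz (f(y) − f(z)), which is the form consistent with the surrounding proofs, Definitions 1 and 2 can be checked numerically on a small assumed graph:

```python
import numpy as np

# Sketch of Definitions 1 and 2, assuming the directional derivative
# takes the form d_z f(y) = a_yz * (f(y) - f(z)).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])          # triangle graph, nodes 0, 1, 2 (assumed)
f = np.array([3.0, 1.0, 0.0])      # information held at each node (assumed)

def deriv(f, z, y):
    """Directional derivative of f from node z to node y."""
    return A[y, z] * (f[y] - f[z])

def gradient(f, y):
    """Gradient at y: directional derivatives from each neighbor of y."""
    return [deriv(f, z, y) for z in range(len(f)) if A[y, z] == 1]

print(deriv(f, 1, 0))   # positive: node 0 holds more information than node 1
print(gradient(f, 0))
```

The derivative from node 1 to node 0 is positive here, since node 0 holds more information than node 1, in line with the sign convention of Figure 2.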
Definition 3 (Laplacians). Let f : E × R+ → R. The normalized Laplacian of f from x ∈ G to y ∈ G is given by

∆_y f(x) = (1/d_x^A) ∂_y ∂_y f(x),

while the non-normalized Laplacian of f from x ∈ G to y ∈ G is given by

L_y f(x) = ∂_y ∂_y f(x).

The global normalized Laplacian of f at x is given by

∆f(x) = Σ_{y ∈ N(x)} ∆_y f(x),

while the global non-normalized Laplacian of f at x is given by

L f(x) = Σ_{y ∈ N(x)} L_y f(x).

L f (or ∆f) is said to be the Laplacian of f.
The Laplacian ∆ of f is the average of the second directional derivatives of f at node x, while the Laplacian L of f is their sum. We then can define the following operator: Definition 4 (Divergence). Let x ∈ G and f_z : E × R+ → R for z ∈ N(x). Define the vectorial function f = ( f_z )_{z ∈ N(x)}. The divergence of f at x ∈ G is given by

div f(x) = Σ_{z ∈ N(x)} f_z(x)   or   div f(x) = (1/d_x^A) Σ_{z ∈ N(x)} f_z(x),

depending on whether we choose L or ∆.
This is the sum (or average) of the values f_z(x) at the given node x, taken over the considered node's neighbors z ∈ N(x).
Proposition 1 (Div of grad is Laplacian). For any x ∈ G, we have

div (∇f)(x) = L f(x)   or   div (∇f)(x) = ∆f(x),

depending on whether we choose L_x or ∆_x for the definition of the divergence.
Proof. Note that, for all y, z ∈ G, we have

∂_z ∂_z f(y) = a_yz ( ∂_z f(y) − ∂_z f(z) ) = a_yz² ( f(y) − f(z) ).

Since a_yz² = a_yz ∈ {0, 1}, we therefore have that ∂_z ∂_z f(y) = ∂_z f(y). It turns out that

div (∇f)(x) = Σ_{z ∈ N(x)} ∂_z ∂_z f(x) = L f(x),

and similarly for ∆. Thus, we note that ∂_z ∂_z = ∂_z, and it turns out that ∆f(x) = ∆_x f(x), for any x ∈ G. A straightforward but essential property of the Laplacian is as follows.
Proposition 2. Let f : E × R+ → R. The Laplacians of f at x ∈ G satisfy the following property:

∆f(x) = f(x) − (1/d_x^A) Σ_{z ∈ N(x)} f(z),   L f(x) = d_x^A f(x) − Σ_{z ∈ N(x)} f(z).

Thus, the normalized Laplacian of f at x is the value of f at x, minus the average value of f over all its neighbors.
Proof. From the definition, and since a_xz = 1 if and only if z ∈ N(x), we have

∆f(x) = (1/d_x^A) Σ_{z ∈ N(x)} ( f(x) − f(z) ) = f(x) − (1/d_x^A) Σ_{z ∈ N(x)} f(z),

and similarly for L, which implies the results.
This actually allows to introduce the Laplacian matrix.
Definition 5 (Laplacian matrix). The non-normalized Laplacian matrix is given by

L = D − A,

where A is the adjacency matrix, and D = diag(d_x^A, d_y^A, . . .) is the diagonal matrix of degrees. The Laplacian matrix is given by

∆ = 1 − D⁻¹ A,

where 1 is the identity matrix.
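Definition 5 and Proposition 2 can be verified numerically. The following sketch uses an assumed 4-node graph; it checks that (∆f)(x) equals f(x) minus the average of f over the neighbors of x, and that the rows of L sum to zero:

```python
import numpy as np

# Check of Definition 5 and Proposition 2 on a small undirected graph:
# L = D - A (non-normalized) and Delta = I - D^{-1} A (normalized), so
# that (Delta f)(x) = f(x) minus the average of f over x's neighbors.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])             # assumed example graph
D = np.diag(A.sum(axis=1))
L = D - A
Delta = np.eye(4) - np.linalg.inv(D) @ A

f = np.array([4.0, 2.0, 1.0, 3.0])       # assumed information per node
x = 2                                    # node with neighbors 0, 1, 3
neighbors = np.where(A[x] == 1)[0]
avg = f[neighbors].mean()

assert np.isclose((Delta @ f)[x], f[x] - avg)   # Proposition 2
assert np.allclose(L.sum(axis=1), 0)            # rows of L sum to 0
assert np.allclose(L, L.T)                      # L symmetric (undirected graph)
```

The zero row sums of L are what make the constant function harmonic, a fact used below for the zero eigenvalue.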
Contrary to other fields of mathematics, there is no criterion of differentiability here: the function just needs to be defined at the two considered points.

Integration
After having defined the derivatives, we have to define the integration.
Definition 6. Let f, h : E → R. We introduce the scalar product of f by h on any sub-graph g ⊆ G of G. The following proposition is straightforward, but fundamental.
Proposition 3 (Scalar product). The map ⟨·, ·⟩_g : (f, h) → ⟨f, h⟩_g is a bilinear form, for any g ⊆ G. This justifies its name of scalar product. Thus, the pair (G, ⟨·, ·⟩) is said to be a Euclidean graph.
In the following, the graph G is always considered a Euclidean graph.
Definition 7 (Integration). Let f : E → R. We introduce the integration of f on any sub-graph g ⊆ G of G, where d_A is a measure on G compatible with ⟨·, ·⟩.
Here again, to be integrable on g, the function just needs to be defined on g. The previous and following properties are still satisfied for a function f of both node and time, at any fixed time. The above definition actually is consistent with integration by parts.

Proposition 4 (Integration by parts). The proof is a direct calculation: Equation (18) is proven, and a similar computation proves Equation (19), as was previously done.
We now have all the tools to develop the diffusion theory on a network, which is the object of the next section.

Derivation of the Equation
In order to derive the diffusion equation, we need the following essential proposition. In the sequel, we call f : E × R+ → R the number of hashes produced per miner, as a function of node and time.
Proposition 5. The density of the current of hashes from z ∈ N(x) to x ∈ G at time t is given by the following law of diffusion:

j_z(t, x) = −D ∂_z f(t, x),

where D > 0 is the diffusion coefficient of information.
We indeed assume that D is constant and is a network characteristic. Although this law is very intuitive and is a typical Fick law (the information flows toward where there is less information, hence the negative sign), further developments are needed to rigorously justify this equation in the context of the network. Let us write j(t, x) = ( j_z(t, x) )_{z ∈ N(x)} the vector j at x and t. Thus, j_z(t, x) represents the flow of information from z to x.
Here is the first important result of the paper.
Theorem 1. Let f : E × R+ → R represent the information function. If D is the diffusion coefficient and Γ : E × R+ → R the creation function of the information, the following diffusion equation holds for any x ∈ E and t ∈ R+:

∂f/∂t (t, x) = −D L f(t, x) + Γ(t, x). (21)

Separately, the following diffusion equation also is valid:

∂f/∂t (t, x) = −D ∆f(t, x) + Γ(t, x). (22)

As long as one remembers that it is local, these equations can simply be written as ∂_t f = −D L f + Γ and ∂_t f = −D ∆f + Γ. The minus sign is actually a necessity for the system to be diffusive, and not explosive, contrary to the sign convention of standard mathematical and physical systems.
Proof. On the one hand, let δN be the quantity of hashes produced during the infinitesimal variation of time dt, at node x ∈ G. The hash rate is the flux φ = δN/dt, so the number of hashes produced at node x between times t and t + dt is δN = ∂f/∂t (t, x) dt. On the other hand, node x (i) receives the diffused information from its neighbors, implying δN_enter hashes, and (ii) can create (or destroy) information, implying δN_create hashes. We therefore must have the following conservation of information:

δN = δN_enter + δN_create. (23)

On point (i), the quantity of information entering node x is provided by all its neighbors, and thus δN_enter = Σ_{z∈N(x)} j_z(t, x) dt. On point (ii), we introduce the creation rate per node Γ(t, x) at node x and time t, which is the rate at which hashes are calculated per node per second; we therefore have δN_create = Γ(t, x) dt. Bearing all the above in mind, Equation (23) becomes

∂f/∂t (t, x) dt = −D L f(t, x) dt + Γ(t, x) dt,

which does conclude the proof.
Separately, an operator F acting on the functions of the nodes can be seen as a matrix M, and the eigenvalue equation is thus written as MV = λV, or component-wise (MV)(x) = λV(x). We adopt the matrix viewpoint in the following. If M is symmetric, then it is diagonalizable and its eigenvectors (V_i)_{i∈{1,2,...,n}} form an orthonormal basis of R^n, that is, ⟨V_i, V_j⟩ = δ_ij, where δ_ij is the Kronecker symbol.

Laplacian: Eigenvalues/Eigenvectors
The above applies to the Laplacian matrix (see Definition 5). For instance, the non-normalized Laplacian matrix for Figure 1 can be written explicitly. The matrix L is symmetric and positive semi-definite. We write (µ_i)_{i∈{1,2,...,n}} the eigenvalues of the matrix L, such that µ_1 ≥ µ_2 ≥ . . . ≥ µ_n ≥ 0, with the associated eigenvectors (v_i)_{i∈{1,2,...,n}} forming an orthonormal basis of R^n. The normalized Laplacian matrix writes ∆̃ = D^{−1/2} L D^{−1/2}. The matrix ∆ is only positive semi-definite.
It is interesting to further develop the eigenvector V_n associated with the eigenvalue λ_n = 0. Set v_n as the eigenvector of L associated with the eigenvalue λ_n = µ_n = 0 (the common eigenvalue of L and ∆̃). In addition, we have L v_n = 0; hence, D v_n = A v_n, which implies that v_n = e/√n with the appropriate normalization, where e is the vector of ones. It turns out that V_n = D^{1/2} v_n, and with the appropriate normalization, we finally have

V_n(x) = √( d_x^A / Vol(G) ).

Two additional points are to bear in mind. First, ∆̃ is similar to ∆ by construction; therefore, both matrices have the same eigenvalues. Finally, since ∆̃ = D^{−1/2} L D^{−1/2}, then, according to Sylvester's inertia theorem, ∆̃ and L, and thus ∆, have the same number of negative, zero, and positive eigenvalues. Further developments can be done here.
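The spectral facts above (positive semi-definiteness of L, the zero eigenvalue, and V_n(x) = √(d_x/Vol(G))) can be checked numerically; a sketch on an assumed small graph:

```python
import numpy as np

# Spectrum check: L = D - A is positive semi-definite with smallest
# eigenvalue 0 and eigenvector e/sqrt(n); for the normalized Laplacian
# D^{-1/2} L D^{-1/2}, the 0-eigenvector is V_n = D^{1/2} e normalized,
# i.e., V_n(x) = sqrt(d_x / Vol(G)).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])             # assumed connected example graph
d = A.sum(axis=1)
D = np.diag(d)
L = D - A
D_inv_sqrt = np.diag(1 / np.sqrt(d))
Delta_sym = D_inv_sqrt @ L @ D_inv_sqrt

mu = np.linalg.eigvalsh(L)               # ascending order
lam, V = np.linalg.eigh(Delta_sym)

assert np.all(mu >= -1e-12)              # L positive semi-definite
assert np.isclose(mu[0], 0)              # the zero eigenvalue exists
Vn = V[:, 0] * np.sign(V[0, 0])          # eigenvector of lambda = 0, sign-fixed
expected = np.sqrt(d / d.sum())          # sqrt(d_x / Vol(G))
assert np.allclose(Vn, expected)
```

Since the graph is connected, the zero eigenvalue is simple, so the comparison with √(d_x/Vol(G)) is well defined up to a sign.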

Resolution of the Equation
We need the following lemma.

Lemma 1. The operator L is self-adjoint, in the sense that ⟨h, L f⟩_G = ⟨L h, f⟩_G.
In addition, the operator ∆̃ also is self-adjoint.

Proof. The first two equations have proofs very similar to the one for Proposition 4. We thus focus on ∆̃_x and ∆̃. First, we note the corresponding identity for any x ∈ G, from which the intermediate expression follows. Bearing this in mind, we calculate the left-hand side, and Equation (31) is proven. For Equation (32), a similar computation proves the result, as previously done.
Here is the second important result of the paper. We have the following analytical expression for the hash rate per node.

Theorem 2. The solution for the diffusion Equation (21) is given by

f(t, x) = Σ_{i=1}^{n} e^{−D µ_i t} ( ⟨ f(0, ·), v_i ⟩ + ∫_0^t e^{D µ_i τ} ⟨ Γ(τ, ·), v_i ⟩ dτ ) v_i(x).

The solution for the diffusion Equation (22) is given by the analogous expression in the basis (V_i)_{i∈{1,2,...,n}}, applied to D^{1/2} f and D^{1/2} Γ. It is worth pointing out that these analytical solutions easily are implementable in a computer.
Proof. We first focus on the solution for Equation (21). We decompose f(t, x) into the basis (v_i)_{i∈{1,2,...,n}} of R^n made of eigenvectors of the matrix L:

f(t, x) = Σ_{i=1}^{n} c_i(t) v_i(x), (35)

where, for all i ∈ {1, 2, . . . , n}, we have c_i(t) = ⟨ f(t, ·), v_i ⟩. Using Lemma 1 and Equation (21), we have

c_i'(t) = −D µ_i c_i(t) + ⟨ Γ(t, ·), v_i ⟩.

We can easily solve this differential equation, and we have

c_i(t) = e^{−D µ_i t} ( c_i(0) + ∫_0^t e^{D µ_i τ} ⟨ Γ(τ, ·), v_i ⟩ dτ ).

We obtain the final result by plugging the previous equation into Equation (35).
We now focus on the solution for Equation (22). We decompose (D^{1/2} f)(t, x) into the basis (V_i)_{i∈{1,2,...,n}} of R^n made of eigenvectors of the matrix ∆̃. The rest of the calculation remains exactly the same as above, with the additional trick ∆̃ D^{1/2} = D^{1/2} ∆.
We note an additional identity, valid for any i ∈ {1, . . . , n} and x ∈ G. It turns out that if Γ = 0, using L or ∆ leads to the exact same result.
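The spectral solution of Theorem 2 can be sanity-checked against a direct time-stepping of the diffusion equation. The sketch below assumes Γ = 0 and a path graph; the spectral sum and a forward-Euler integration of df/dt = −D L f should agree:

```python
import numpy as np

# Numerical check of Theorem 2 with Gamma = 0: the spectral solution
# f(t) = sum_i exp(-D mu_i t) <f(0), v_i> v_i should agree with a direct
# time-stepping of df/dt = -D L f (forward Euler with a small step).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])            # assumed path graph on 4 nodes
L = np.diag(A.sum(axis=1)) - A
D_coef, t_end = 0.3, 2.0                # assumed diffusion coefficient, horizon
f0 = np.array([1.0, 0.0, 0.0, 0.0])    # all information starts at node 0

mu, v = np.linalg.eigh(L)               # columns of v: orthonormal v_i
f_spectral = sum(np.exp(-D_coef * mu[i] * t_end) * (f0 @ v[:, i]) * v[:, i]
                 for i in range(len(mu)))

f_euler, dt = f0.copy(), 1e-4
for _ in range(int(t_end / dt)):
    f_euler = f_euler - dt * D_coef * (L @ f_euler)

assert np.allclose(f_spectral, f_euler, atol=1e-3)
assert np.isclose(f_spectral.sum(), f0.sum())   # diffusion conserves the total
```

The conservation check works because the rows of L sum to zero, so the total quantity of information is invariant when Γ = 0.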

Corollary 1. The solution for the diffusion Equation (21) is written as

f(t, x) = Σ_{y∈G} U_L(t, x, y) f(0, y) + Σ_{y∈G} ∫_0^t U_L(t − τ, x, y) Γ(τ, y) dτ,

where U_L : R+ × E² → R is the diffusion propagator for the matrix L, and is such that

U_L(t, x, y) = Σ_{i=1}^{n} e^{−D µ_i t} v_i(x) v_i(y).

We also have the analogous expression for the matrix ∆̃. The advantage of this last expression is the explicit dependency of f with respect to the initial condition on all the nodes of G, i.e., with f(0, y) for all y ∈ G.
Proof. We derive the expression for the matrix ∆̃. From the general solution expression, we expand the coefficients, implement them into the main solution expression, and some re-ordering leads to the result: this is explicitly what we are looking for. A variable change concludes.

Remark 1. We have U_∆̃(0, x, y) = δ_xy, where δ_xy = 1 if x = y and 0 otherwise; the same holds for the matrix L. We also have

lim_{t→+∞} U_∆̃(t, x, y) = V_n²(y),

in which case lim_{t→+∞} U_∆̃(t, x, y) does not depend on x. We thus find an interesting interpretation for the eigenvector V_n: V_n²(y) represents the propagation of information due to the diffusion, to node y.
This also holds for the operator L. The next two sections deal with further developments of the implications of the facts established above.

Standard Equilibrium and Blockchain Equilibrium
The equilibrium is an important aspect of the current paper. We identified two types of equilibrium, which we further develop here. The zero eigenvalue as well as its eigenvector will play an important role, as we are going to see.

Integrable Creation Destruction Function
The standard equilibrium corresponds to the steady state, i.e., when t → +∞. Permanent exchanges between nodes are expected at the same rate, which is exactly equivalent to each node keeping the same quantity of information, i.e., the same quantity of hash generation.
More specifically, we have the following proposition.
Theorem 3 (Standard Equilibrium Framework). Let f be a solution of the diffusion Equation (1). We assume that t → Γ(t, y) is integrable on R+ for any y ∈ G, and that lim_{t→+∞} Γ(t, y) = 0. Then the standard equilibrium state corresponds to a limit value of f that does not depend on the node x. We have a similar result by considering the matrix L.
Proof. From the expression of the solution, we see that all the terms cancel, since λ_1 ≥ λ_2 ≥ . . . ≥ λ_{n−1} > λ_n = 0, except the last one. Thus, the limit does not depend on x. We then start from the corresponding equation, which is the most general one, and come to the factor between parentheses. We now focus on Equations (40). We will prove that γ_i(y) = 0, for all i ∈ {1, . . . , n − 1} and y ∈ G, which will prove Equation (41). A variable change gives the intermediate expression; bearing this in mind, for i = n, the assumptions give |Γ(τ, y)| ≤ C(y) < +∞, with C a function from G to R, and lim_{τ→+∞} Γ(τ, y) = 0, for any y ∈ G. It turns out that corresponding bounds follow for any i ∈ {1, 2, . . . , n − 1} and t > τ. By a variable change and using the triangular inequality, the remaining integral term vanishes in the limit, which concludes the proof.
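With Γ = 0 and the matrix L, the standard equilibrium reduces to every node holding the plain average of the initial information. A numerical sketch on an assumed 5-node connected graph:

```python
import numpy as np

# Standard-equilibrium sketch with Gamma = 0 and the matrix L: as
# t -> +infinity, every node converges to the same value, here the plain
# average of the initial information over the nodes.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])          # assumed connected example graph
L = np.diag(A.sum(axis=1)) - A
D_coef = 1.0
f0 = np.array([5.0, 0.0, 0.0, 0.0, 0.0])  # all information starts at node 0

mu, v = np.linalg.eigh(L)
def solution(t):
    """Spectral solution of df/dt = -D L f with initial condition f0."""
    return sum(np.exp(-D_coef * mu[i] * t) * (f0 @ v[:, i]) * v[:, i]
               for i in range(len(mu)))

f_inf = solution(100.0)                    # effectively t -> infinity
assert np.allclose(f_inf, f0.mean(), atol=1e-6)  # every node at the average
```

Only the λ_n = 0 mode survives the limit, which is exactly the mechanism used in the proof above.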
For example, set Γ(t, x) = P(t) e^{αt}, where P is a polynomial and α ∈ R with some restrictions that we now determine.

Let n ∈ N. A quick calculation allows to establish that

∫ τ^n e^{aτ} dτ = Σ_{j=0}^{n} (−1)^{n−j} (n!/j!) (1/a^{n−j+1}) τ^j e^{aτ} + C^{te} if a ≠ 0,   and   ∫ τ^n dτ = τ^{n+1}/(n + 1) + C^{te} otherwise.

If we set Γ(τ, x) = τ^n e^{ατ}, then we deduce that, for γ_i(y) to be finite, we need to have α ∈ R*−, while n ∈ N is without restriction, and the degree of P is in N. However, if α = 0, then we necessarily need to have n = 0. The least negligible function Γ is thus Γ(t, x) = P(0), which is the only form giving a nonzero γ_i(y).

Remark 2. Equation (40) is actually close to the one found by applying the mean value theorem, provided Γ(·, y) is continuous for any y ∈ G: there exists a_i ∈ [0, t] satisfying the corresponding equality.
Remark 3. General function theory might be interesting if one decides to break the assumption that Γ(·, y) is continuous.

Continuous-by-Parts Creation Destruction Function
If we require the solution f to be continuous at all times, then, according to the diffusion equation, Γ must be at least continuous by parts. In this case, the function Γ(·, x) is continuous by parts, for any x ∈ E, i.e., there are some points on the time axis R + where the function Γ(·, x) is not continuous (discontinuity of first kind). In this case, for any x ∈ G, the function Γ(·, x) still is integrable on R + .

Blockchain Equilibrium (Approximation)
In fact, we have to consider that time is discretized, i.e., subdivided into intervals of time of length approximately 10 min. Ten minutes is the average time needed for a miner to mine a new block in the Bitcoin blockchain (Lipton and Treccani 2021); this would change to roughly 15 s for the Ethereum blockchain. We will go into deeper detail on this point in the future, but approximately every 10 min, the blockchain equilibrium equation is satisfied. The following are the variables of interest:
• c, the cost per hash, supposed to be constant;
• P_t, the price of one Bitcoin, in USD, at time t;
• f_i, the fee for transaction i ∈ I, where I is the set of all transactions that will be stacked into block b;
• N_b, the number of hashes in the network, for block b.
From these five variables, we can easily compute the revenues R b for block b, as follows.
We can compute the costs, more precisely the expected costs C_b, for block b, as well. We have the blockchain equilibrium equation: R_b = C_b. Thus, N_b is connected to f(t, x), which gives the cost for forming a block. We can, in fact, make the assumption that the circulation of transaction lists within miners is very fast: it takes approximately 1 millisecond for the nodes to receive the transaction list from an original list. It is relevant to assume that the diffusion happens very fast once one miner has a transaction list to be sent, and that the diffusion coefficient D is very high. This is the blockchain equilibrium approximation: Dλ_{n−1} t ≫ 1, for any time t. In other words, Dλ_{n−1} t is so high that e^{−Dλ_i t} ≈ 0, for any i ∈ {1, 2, . . . , n − 1}.
Only the term corresponding to λ n = 0 survives.
Proposition 6 (Blockchain Equilibrium Approximation). Let f be a solution of the diffusion Equation (1). We focus on t ∈ [0, T], where T is set to be 10 min. Then the blockchain equilibrium state corresponds to the constant value obtained by keeping only the term associated with λ_n = 0. We have a similar result by considering the matrix L.
It turns out that f(T, x) does not depend on x, i.e., it is the same for all the nodes.

Proof. Keeping only the surviving term in the solution expression, we deduce the equation, which concludes the proof.
Thus, f(T, x) is to be associated with N_b. Finally, in essence, the standard equilibrium and the blockchain equilibrium give the same results. In the following, 'f(T, ·)' refers to the function at the equilibrium state.
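The blockchain equilibrium equation can be illustrated with a toy calculation. The exact expressions for R_b and C_b are not reproduced above, so the sketch below assumes the usual forms, revenues converting the block subsidy plus fees into USD and costs proportional to N_b; all numbers, including the subsidy variable, are hypothetical:

```python
# Hypothetical sketch of the blockchain equilibrium equation R_b = C_b.
# The exact expressions for R_b and C_b are assumed: revenues convert
# the block subsidy plus fees into USD; costs are proportional to N_b.
c = 2e-10                    # assumed cost per hash, in USD
P_t = 30_000.0               # assumed price of one Bitcoin, in USD
subsidy = 6.25               # assumed block subsidy, in BTC (hypothetical)
fees = [0.01, 0.02, 0.005]   # assumed fees f_i, in BTC, for block b

R_b = P_t * (subsidy + sum(fees))   # revenues for block b, in USD
N_b = R_b / c                       # equilibrium: R_b = C_b = c * N_b
C_b = c * N_b

assert abs(R_b - C_b) < 1e-6        # the blockchain equilibrium holds
print(N_b)                          # equilibrium number of hashes for block b
```

At equilibrium, N_b is the quantity that the constant value f(T, ·) is associated with.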

2 Nodes
As a quick application, suppose E = {x, y} (only two nodes) and V = {(x, y)} (one connection). Suppose Γ = 0, f(0, x) = f_0 ∈ R_+, and f(0, y) = 0. We can calculate the eigenvalues and eigenvectors for this network. The non-normalized Laplacian matrix is given by
L = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},
and the Laplacian matrix by ∆ = ∆̃ = L. The eigenvalues are λ_1 = 2 and λ_2 = 0, with normalized eigenvectors (so that V_1(x)^2 + V_1(y)^2 = 1)
V_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad V_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
The solution is
f(t, x) = \frac{f_0}{2} (1 + e^{-2Dt}), \quad f(t, y) = \frac{f_0}{2} (1 - e^{-2Dt}).
See Figure 3 for a plot of these two solutions. The equilibrium is therefore given by f(∞, x) = f(∞, y) = f_0/2. Finally, due to the diffusion effect, both nodes share the same information at equilibrium. Figure 3. Illustration of the diffusion of information for 2 nodes. The initial conditions are Γ = 0, f(0, x) = f_0 ∈ R_+, and f(0, y) = 0. We set D = 1/2 and f_0 = 2. Blue is for f(t, x), while red is for f(t, y).
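The two-node solution above can be checked numerically; the following sketch (assuming NumPy and SciPy are available) compares the matrix-exponential solution of dF/dt = −DLF with the closed-form expressions, using the same values D = 1/2 and f_0 = 2 as in Figure 3.

```python
import numpy as np
from scipy.linalg import expm

# Two-node undirected diffusion, verifying the closed-form solution
# f(t,x) = f0/2 (1 + e^{-2Dt}) and f(t,y) = f0/2 (1 - e^{-2Dt}).
D, f0 = 0.5, 2.0
L = np.array([[1.0, -1.0], [-1.0, 1.0]])  # non-normalized Laplacian

def f_numeric(t):
    # matrix-exponential solution of dF/dt = -D L F
    return expm(-D * L * t) @ np.array([f0, 0.0])

def f_analytic(t):
    return np.array([f0 / 2 * (1 + np.exp(-2 * D * t)),
                     f0 / 2 * (1 - np.exp(-2 * D * t))])

for t in (0.0, 0.5, 1.0, 5.0):
    assert np.allclose(f_numeric(t), f_analytic(t))

equilibrium = f_numeric(50.0)  # both nodes converge to f0/2
```

At large times both components approach f_0/2, the equilibrium shared by the two nodes.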
The equilibrium state corresponds to the following expressions.
We are now going to propose solutions for specific P2P networks. It is worth stressing that an implementation of the above in a prototype is in progress.

Generalization to Directed Graphs
This section generalizes what has been seen so far to a directed graph. Concretely, information between different layers of the whole Bitcoin network is directed. A very simple example is that the users layer sends information to the P2P network as transaction lists and does not expect any information back, since users are not miners. The diffusion is thus directed. This breaks the symmetry of the adjacency matrix, which is no longer diagonalizable in general. We therefore need a more general framework to perform the diffusion. We could rely on purely numerical solutions for the implementation, but we are going to see that we can derive an analytical solution based on the singular value decomposition (SVD).
Definition 8 (SVD). Let A ∈ M_{n,m}(R) be a matrix of size n × m, with n, m ∈ N*. Then A admits the decomposition A = U Σ tV, where U and V are orthogonal square matrices of size n and m, respectively, and Σ ∈ M_{n,m}(R) is zero everywhere except on its diagonal, whose elements are non-negative numbers.
If m > n, Σ has the block form Σ = (diag(σ_1, . . . , σ_n) | 0); if m = n, Σ is the diagonal matrix Σ = diag(σ_1, . . . , σ_n). The numbers σ_1 ≥ . . . ≥ σ_n ≥ 0 are the singular values of A. The columns of the matrices U and V are the singular vectors of A.
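As a quick illustration of Definition 8, the decomposition can be computed with NumPy; the shapes and orthogonality properties below match the definition (the matrix A here is an arbitrary random example, not taken from the paper).

```python
import numpy as np

# SVD of an n x m matrix with m > n: A = U Sigma tV, singular values sorted.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # n = 3, m = 5 (m > n)
U, s, Vt = np.linalg.svd(A)              # s holds sigma_1 >= ... >= sigma_n >= 0

Sigma = np.zeros((3, 5))
Sigma[:3, :3] = np.diag(s)               # only the first n diagonal entries are nonzero

assert np.allclose(U @ U.T, np.eye(3))   # U orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(5)) # V orthogonal
assert np.allclose(U @ Sigma @ Vt, A)    # reconstruction A = U Sigma tV
assert np.all(np.diff(s) <= 0) and np.all(s >= 0)
```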

Matrix Exponential and Adjoint of the Laplacian Operator
As a reminder, we need the following three definitions.
Definition 9 (Exponential of a Matrix). Let A ∈ M_n(R) for n ∈ N*. The exponential of the matrix A is given by exp(A) = Σ_{k=0}^{+∞} A^k / k!. We write e^A for exp(A).
Definition 10 (Hyperbolic Cosine and Sine of a Matrix). Let A ∈ M_n(R) for n ∈ N*. The hyperbolic cosine of the matrix A is given by cosh(A) = (e^A + e^{−A})/2, and the hyperbolic sine by sinh(A) = (e^A − e^{−A})/2. It turns out that e^A = cosh(A) + sinh(A). Definition 11. Let T be a (linear) operator and g ⊆ G be a subgraph. The adjoint operator tT of T is such that ⟨h, T f⟩_g = ⟨tT h, f⟩_g, for any f, h : G → R. The matrix of the adjoint operator is the transpose of the matrix associated with T.
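These definitions can be verified numerically; the sketch below (using SciPy's matrix functions) checks the identity e^A = cosh(A) + sinh(A) and the adjoint property on a small example.

```python
import numpy as np
from scipy.linalg import expm, coshm, sinhm

# Check Definitions 9-10: e^A = cosh(A) + sinh(A) for a symmetric example.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(expm(A), coshm(A) + sinhm(A))

# Definition 11: the matrix of the adjoint operator is the transpose.
L = np.array([[1.0, -1.0], [0.0, 0.0]])
h = np.array([1.0, 2.0])
f = np.array([3.0, -1.0])
assert np.isclose(h @ (L @ f), (L.T @ h) @ f)   # <h, L f> = <tL h, f>
```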
Solution for the Directed Diffusion Involving the Operator L

Intuition

In the case of a directed network, the diffusion also is oriented. To gain intuition, suppose there are two nodes x and y, such that x is connected to y but not the converse. In this case, the adjacency matrix writes
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
so that the non-normalized Laplacian matrix is
L = \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}.
In addition, the node x loses information, while y gains the same amount of information that x loses. Therefore, at any time t ∈ R_+, the conservation of total information is satisfied: f(t, x) + f(t, y) = f(0, x) + f(0, y). Since x loses information through a diffusive effect (the more information it has, the more it gives), we must have df(t, x)/dt = −D f(t, x). Solving this equation leads to f(t, x) = f(0, x) e^{−Dt}, and conservation gives f(t, y) = f(0, y) + f(0, x)(1 − e^{−Dt}). Introducing the vector F(t) = t(f(t, x), f(t, y)), we thus obtain the following system of differential equations: dF/dt(t) = −D tL F(t).
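A minimal sketch of this two-node intuition, assuming SciPy: the solution e^{−D tL t} F(0) reproduces the exponential loss at x and the conservation of total information.

```python
import numpy as np
from scipy.linalg import expm

# Directed two-node case: x -> y, with L = [[1, -1], [0, 0]] and dF/dt = -D tL F.
D = 1.0
L = np.array([[1.0, -1.0], [0.0, 0.0]])
F0 = np.array([1.0, 0.0])                 # all information starts at x

def F(t):
    return expm(-D * L.T * t) @ F0

for t in (0.0, 0.3, 2.0):
    fx, fy = F(t)
    assert np.isclose(fx, np.exp(-D * t))     # x loses information exponentially
    assert np.isclose(fx + fy, 1.0)           # total information is conserved
```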

The Directed Diffusion Equation Involving L
Definition 12. Let f : E × R_+ → R represent the information function for a graph with n nodes. If D is the diffusion coefficient and Γ : E × R_+ → R the creation function of the information, the following directed diffusion equation holds, for any t ∈ R_+: ∂f/∂t (t, ·) = −D tL f(t, ·) + Γ(t, ·). We recover Equation (21) if and only if L = tL, that is, if and only if A is symmetrical.

Solution Involving L
From Equation (12), we can use the Cauchy theorem to obtain the following solution.
Theorem 4 (Application of the Cauchy Theorem). The solution of the Cauchy problem associated with Equation (12) is given by
f(t, ·) = e^{−D tL t} f(0, ·) + ∫_0^t e^{−D tL (t−τ)} Γ(τ, ·) dτ.
This expression is straightforwardly implementable on a computer. In practice, we would rather use the following equation:
f(t + ∆t, ·) = e^{−D tL ∆t} f(t, ·) + ∫_t^{t+∆t} e^{−D tL (t+∆t−τ)} Γ(τ, ·) dτ.
The advantage of this equation is to force a step-by-step dependency on the configuration at the previous time, for any time t, with increment ∆t > 0. See Section 8.3 for a practical implementation based on this equation.
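A sketch of the step-by-step scheme, under the assumption Γ = 0 on a two-node directed graph; since the propagator over one increment is exact here, stepping reproduces the matrix-exponential solution.

```python
import numpy as np
from scipy.linalg import expm

# Step-by-step scheme: f(t + dt) = e^{-D tL dt} f(t) + Gamma(t) dt.
D, dt, n_steps = 1.0, 0.05, 200
L = np.array([[1.0, -1.0], [0.0, 0.0]])
P = expm(-D * L.T * dt)                    # propagator over one increment

def gamma(t):
    return np.zeros(2)                     # no creation/destruction here

f = np.array([1.0, 0.0])
t = 0.0
for _ in range(n_steps):
    f = P @ f + gamma(t) * dt
    t += dt

# compare against the exact matrix-exponential solution at t = 10
exact = expm(-D * L.T * t) @ np.array([1.0, 0.0])
assert np.allclose(f, exact, atol=1e-6)
```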

Analytical Solution Involving L
The reader may find it useful to derive an analytical solution based on SVD.
Theorem 5. Let L = u S tv be the SVD of L (here S is indeed square), and let (u_i)_{i∈{1,2,...,n}} and (v_i)_{i∈{1,2,...,n}} be the columns of the matrices u and v, respectively. The solution of the directed diffusion Equation (52) is given by the expression below. Proof. From the SVD L = u S tv, we deduce that L tL = u S tS tu and tL L = v tS S tv. This means that u and v are the matrices of eigenvectors of the symmetric matrices L tL and tL L, respectively, which thus are diagonalizable. Therefore, the columns of u and those of v form two orthonormal bases of R^n, on which we can decompose the function f. Fix i ∈ {1, 2, . . . , n}. On the one hand, projecting on u_i, the left-hand side and the right-hand side give, after some calculations, the stated expressions. On the other hand, projecting on v_i, there appears the supplementary term ⟨(L − tL) f(t, ·), u_i⟩_G, which we calculate below. Finally, we obtain the system to solve: a system of first-order, non-homogeneous differential equations with constant coefficients, which we can solve using a matrix approach.
Next, we set α(t) and γ(t) as above; the system of differential equations then becomes dα/dt(t) = −D M α(t) + γ(t).
The Cauchy theorem leads to the stated solution. We can go further by calculating the powers of the matrix M: in fact, we can prove by mathematical induction the expression above, which can be rewritten as stated. This completes the proof.
As a simple verification, what does this theorem become if L is symmetric? We would then have s_i = µ_i for any i ∈ {1, . . . , n}, and u = v, so that tv u = tu v = Id and α_u = α_v. In addition, Σ̃ would be diagonal with s̃_ii = µ_i, i.e., S̃ = S. The solution then reduces to that of the symmetric case, which confirms the consistency of the theorem.

Solution for the Directed Diffusion Involving the Operator ∆

The Directed Diffusion Equation Involving ∆
Definition 13. Let f : E × R_+ → R represent the information function for a graph with n nodes. If D is the diffusion coefficient and Γ : E × R_+ → R the creation function of the information, the following directed diffusion equation holds for any t ∈ R_+: ∂f/∂t (t, ·) = −D t∆ f(t, ·) + Γ(t, ·). We recover Equation (22) if and only if ∆ = t∆, that is, if and only if A is symmetrical.

Solution Involving ∆
Theorem 6 (Application of the Cauchy Theorem). The solution of the Cauchy problem associated with Equation (13) is given by
f(t, ·) = e^{−D t∆ t} f(0, ·) + ∫_0^t e^{−D t∆ (t−τ)} Γ(τ, ·) dτ.
This expression is straightforwardly implementable on a computer.

Analytical Solution Involving ∆
Similarly, we can prove the following theorem.
Theorem 7. Let ∆̃ = U Σ tV be the SVD of ∆̃ (here Σ is indeed square), and let (U_i)_{i∈{1,2,...,n}} and (V_i)_{i∈{1,2,...,n}} be the columns of the matrices U and V, respectively. The solution of the directed diffusion Equation (57) is given by the expression below.

Directed Graph: Case Studies
This section shows simulations for important graphs.

Erdős–Rényi
The Erdős–Rényi graph is uniformly random: fix the number of nodes and edges; the connections are then drawn at random. Figure 5 shows a simulation. There can be counter-intuitive facts, e.g., the node with the highest number of neighbors is not necessarily the most influential in the network. The influence depends on the connectivity, but also on the given network configuration, such as the distribution of information among all the nodes.
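A minimal Erdős–Rényi sketch (the graph of Figure 5 is not reproduced here; the topology below is a random example): each directed edge is drawn with probability p, and the conservation of total information can be checked along the diffusion.

```python
import numpy as np
from scipy.linalg import expm

# Directed Erdos-Renyi graph: each edge present with probability p.
rng = np.random.default_rng(42)
n, p, D = 20, 0.15, 1.0
A = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(A, 0.0)                     # no self-loops
L = np.diag(A.sum(axis=1)) - A               # out-degree Laplacian (rows sum to 0)

f0 = np.zeros(n)
f0[0] = 1.0                                  # all information starts at node 0
f = expm(-D * L.T * 5.0) @ f0

assert np.isclose(f.sum(), 1.0)              # conservation of information
assert np.all(f >= -1e-12)                   # information stays non-negative
```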

Binary Tree
The binary tree is the simplest hierarchical structure on graphs: every node has two children. Figure 6 shows a simulation. In the standard configuration, as in Figure 6, the information circulates from parents to children. We, however, also observed (as in Mandala networks) the other possible direction.

Ring Graph
The ring graph is the simplest cycle on a network. Cycles are a very important aspect of graphs: they allow one to extract seasonality patterns for a given context. Figure 8 shows a simulation. The most important characteristic we observe is the envelope: the cycle behaves as two distinct sub-systems which interact continuously up to the dissipation level. One sub-system gives information to the other, which receives it, and vice versa.
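A sketch of the directed ring, assuming SciPy: information injected at one node circulates around the cycle, and the system relaxes to the uniform equilibrium 1/n per node.

```python
import numpy as np
from scipy.linalg import expm

# Directed ring: node i sends to node (i + 1) mod n; information circulates
# around the cycle while the total stays conserved.
n, D = 10, 1.0
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A

f0 = np.zeros(n)
f0[0] = 1.0
traj = [expm(-D * L.T * t) @ f0 for t in np.linspace(0.0, 60.0, 61)]

assert all(np.isclose(f.sum(), 1.0) for f in traj)            # conservation
assert np.allclose(traj[-1], np.full(n, 1.0 / n), atol=1e-3)  # uniform equilibrium
```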

Motivations
This section deals with an important aspect of the Bitcoin network: there are different families (layers) of nodes in this network. In fact, the P2P network is a subnetwork of the whole system. There is, in addition, the layer of users, which may send information (e.g., a transaction) to the miners so that they mine it. There is also the layer of exchanges, which sends into the blockchain information such as public key diffusion, or even collects prices from different exchanges. In particular, an exchange needs to be at the center of the whole network, communicating with nodes of all the layers by sending and receiving all types of information. Such a complex system can be simplified as illustrated in Figure 9. If the related graph is directed, then the layers are said to be directed, and the layer which receives information from the others, and whose nodes exchange information with each other, is named the main layer. A graph configuration where the main layer is the layer of miners could represent the diffusion of transaction lists from users to miners, and also between miners (see Figure 9), whereas if the main layer is the layer of exchanges, the resulting diffusion model would describe all the interactions (i.e., emission/reception) of data that the exchanges could have with users, exchanges, and miners. In particular, this graphical representation of an exchange should genuinely reveal, in a clear way, the whole flow of information involving the exchange. The diffusion model, for a given time condition, should predict the evolution of information propagation all around the network. Modeling this is the object of the current section. Surprisingly, bearing in mind the previous section, the result is simple. Figure 9. Two directed layers, i.e., families of nodes. The red layer sends information toward the green layer, while the nodes of the latter layer send information to each other.
Thus, the green layer could represent the layer of miners, while the red layer could represent the layer of exchanges. In such a model, the miners are the main layer. We could easily change the configuration so that exchanges are the main layer, and the following theory also applies.

Directed Diffusion with Boundary Conditions-Analytical Expression
Let S ⊂ G be the main layer (the green layer in Figure 9), and S̄ ⊂ G (the red layer in Figure 9) be such that S ∪ S̄ = G and S ∩ S̄ = ∅. We keep the notations of the previous sections. The following theorem is Theorem 5 with an additional boundary condition.

Theorem 8. Suppose L is written as
where, in particular, L_S is the Laplacian of the sub-graph S with singular vectors u^S and v^S. Suppose f(t, x) = B(t, x), fixed and known, for any (t, x) ∈ R_+ × S̄ (boundary condition). Then Theorem 5 applies to the sub-graph S, by replacing ⟨Γ(τ, ·), u_·⟩_G with ⟨Γ̃(τ, ·), u^S_·⟩_S, and ⟨Γ(τ, ·), v_·⟩_G with ⟨Γ̃(τ, ·), v^S_·⟩_S, where Γ̃ is defined below. Proof. Let µ̃_1 ≥ . . . ≥ µ̃_{s−1} > µ̃_s = 0 be the singular values of L_S, and let u^S and v^S be the singular vector matrices of L_S. In fact, we set u^S_i(x) = 0 if x ∉ S (idem for v^S), for any i ∈ {1, 2, . . . , n}, so that u^S_i and v^S_i are n-dimensional vectors. Bearing in mind the proof of Theorem 5, we need to set the corresponding quantities. On the one hand, we have the first relation; identically, we have the second one. In particular, the last term is written as stated. The same calculations as in the proof of Theorem 5 then allow us to conclude.

Remark 4.
When the exchanges constitute the main layer S, two other layers could be of interest, the layer of users (customers) S̄_1 and the layer of miners S̄_2, so that the whole network G is such that G = S ∪ S̄_1 ∪ S̄_2 and S ∩ S̄_1 ∩ S̄_2 = ∅. Setting S̄ = S̄_1 ∪ S̄_2, we deduce that Theorem 8 applies to the layer S of exchanges, and, considering the two distinct sources {customers, miners}, we specifically have the corresponding Γ̃.

Directed Diffusion with Boundary Conditions-Practical Case
The previous section demonstrates that any boundary condition can be treated as a constraint applied to the nodes whose neighbors are boundary nodes, multiplied by the diffusion coefficient D. Figure 9 thus becomes the object of Figure 10. Figure 10. Provided that the red-node layer plays the role of the boundary-condition layer in Figure 9, the two-directed-layer network is mathematically equivalent to a single network, whose nodes neighboring the boundary-layer nodes have a reaction term Γ applied for the diffusion of information.
Basically, each constrained node has a stronger diffusive effect toward the rest of the network. More specifically, expanding the matrix exponential for ∆t sufficiently small, from Equation (54) we can set the step-by-step update, where E(t, x) = o(∆t) is the error committed by the approximation. In practice, the approximation is sufficiently good if ∆t ≤ 0.05 s, so we set ∆t = 0.05. Provided that Γ is of class C^∞, we indeed have an error of the stated order. If the function Γ is sufficiently smooth, then, in practice, we shall have sup_{n∈N*} |Γ^{(n)}(t, x)| ≤ 1, so that the error E never exceeds 2%, and the equality E(t, x) = o(∆t) is justified, for any x. Figure 11 plots a diffusion without creation/destruction (Γ = 0) on a graph where the information is provided by node 1 and diffuses up to node 10 (without ever reaching node 11). Note that nodes 2 and 3 (respectively, 5, 6, and 7) play symmetrical roles and carry the same amount of information through time. Figure 11. The information diffuses from node 1 (f(0, 1) = 1, while f(0, x) = 0 for any x ≠ 1) to node 10. Here, Γ = 0 and D = 1.
We now add the constraint Γ such that node 2 is constantly fed by external layers at a rate r (creation) as long as it does not have sufficient resource, while node 10 constantly feeds external layers at a rate r̄ (destruction) whenever it has too much resource. We choose the rates r = r̄ = ∆t = 0.05.
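A hypothetical smaller analogue of this constrained configuration (the 11-node graph of Figure 11 is not reproduced; a directed 4-node chain is used instead, with node indices 0 to 3): the fed node receives at rate r while its information is below 1, and the draining node loses at rate r̄ while above 1.

```python
import numpy as np
from scipy.linalg import expm

# Thresholded constraint Gamma on a directed chain 0 -> 1 -> 2 -> 3:
# node 1 plays the role of the fed node, node 3 the role of the draining node.
n, D, dt, r, rbar = 4, 1.0, 0.05, 0.05, 0.05
A = np.diag(np.ones(n - 1), k=1)           # chain adjacency
L = np.diag(A.sum(axis=1)) - A
P = expm(-D * L.T * dt)                    # propagator over one increment

def gamma(f):
    g = np.zeros(n)
    g[1] = r if f[1] < 1.0 else 0.0        # creation at the fed node
    g[3] = -rbar if f[3] > 1.0 else 0.0    # destruction at the draining node
    return g

f = np.zeros(n)
f[0] = 1.0
for _ in range(4000):                      # integrate up to t = 200
    f = P @ f + gamma(f) * dt

assert np.all(f >= 0.0)                    # information stays non-negative
assert f[3] > f[2]                         # the draining node accumulates most
```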
With a cubic spline smoothing, we may observe that sup_{n∈N*} |Γ^{(n)}(t, x)| ≤ 1, so that E ≤ 0.2%. Figure 12 shows the simulation. It is interesting to note the literal change of behavior induced by such constraints. Node 2 is subject to permanent oscillations around 1 with a frequency-decreasing transition regime, and so is node 10 once its information exceeds 1. The oscillations induced at node 2 transfer to node 4, but the diffusion is not sufficiently strong for them to remain visible further away.
For all intermediate nodes, oscillations of information are observed at a frequency identical to the one imposed by the reception at node 2. Concretely, it means that one node providing constant information to the others without exceeding its resources governs the dynamics of the network. The shift of the minimum and maximum values in the permanent regime is due to the delay in obtaining the information, and the shift grows as the node gets more distant from node 2.
In fact, the graph in Figures 12 and 13 behaves as a cycle: nodes 2 and 10 could be connected to each other through an intermediate set of layers. This means that cycles maintain oscillations through a permanent regime. These oscillations are mainly driven by one parameter: the diffusion coefficient D, which therefore appears as a bifurcation parameter (its value determines chaotic behavior in the network).

Delay in the Differential Equation
So far, the network has responded immediately to a stimulation. However, time lags are essential to consider, since the response to information is never instantaneous: there is a response time to understand the information and find an appropriate answer, from the human as well as the machine viewpoint. It is essential to find a way to describe delay in this study.
Generally speaking, a delayed differential equation (an active topic of academic research nowadays) for the above diffusion could be written as a delayed-response diffusion equation, where τ ≥ 0 is the time lag of the system's answer due to diffusion effects. Usually, the function f(·, x), for any x ∈ G, is defined on [−τ, +∞[ and is given on [−τ, 0]. Although there are methods for solving such an equation, we develop in the following some techniques to allow an easy implementation. One trick is to make L time dependent, as we are going to see.

Time Dependent Laplacian L(t)
The adjacency matrix may be time dependent, as nodes can change their configuration through time. In order to express the time-dependent Laplacian matrix L(t) = D(t) − A(t) with respect to A(t) exclusively, we note that the degree matrix D(t) is itself determined by A(t), and the Laplacian matrix is rewritten as L(t) = D(A(t)) − A(t), which behaves as a functional of the matrix A(t).
Bearing this in mind, a way to introduce delay in a graph system is to make connections appear at a given time τ > 0. Without any loss of generality, we suppose that the rth node, for a fixed r ∈ ⟦1, n⟧, is isolated at times t ∈ [0, τ[, and then gets connected to some node k ∈ ⟦1, n⟧ \ {r} (from k to r) from time t = τ.
This makes the adjacency matrix A(t) time dependent, since a_kr(t) = H(t − τ), where H is the Heaviside distribution.
Thus, we can prove that the rth diagonal element of L is l_rr(t) = Σ_{i=1}^n a_ri(t) = l_rr(0) + H(t − τ), while l_kr(t) = −a_kr(t) = −H(t − τ). In this configuration, we therefore have L(t) = L(0) + H(t), where H(t) is zero everywhere, except at element (k, r), where it is equal to −H(t − τ), and at its diagonal element (r, r), which is equal to H(t − τ). We can actually generalize this approach to the following proposition.
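The switched Laplacian can be sketched as follows, following the text's convention that the increments sit at entries (r, r) and (k, r); the helper function heaviside_laplacian is hypothetical, not from the paper.

```python
import numpy as np

# L(t) = L(0) + H(t) for a single connection switched on at time tau,
# with the nonzero entries of H(t) at (r, r) and (k, r) as in the text.
def heaviside_laplacian(n, k, r, tau, t, L0=None):
    """Return L(t) for a connection involving nodes k and r, active from tau."""
    L = np.zeros((n, n)) if L0 is None else L0.copy()
    if t >= tau:                  # H(t - tau) = 1
        L[r, r] += 1.0            # diagonal increment at (r, r)
        L[k, r] -= 1.0            # off-diagonal entry -a_kr
    return L

L_before = heaviside_laplacian(3, k=0, r=1, tau=0.5, t=0.2)
L_after = heaviside_laplacian(3, k=0, r=1, tau=0.5, t=0.7)

assert np.allclose(L_before, 0.0)               # isolated before tau
assert np.allclose(L_after.sum(axis=0), 0.0)    # each column of H sums to 0
```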
Thus, the matrix H behaves as a Laplacian matrix whose elements are either 0 or Heaviside distributions.
The idea now is to find a solution with the Heaviside distribution as a kernel.

Definition Set for the Solution and Its Derivative
From the previous Section 9.1, the diffusion equation is written with the time-dependent operator. It is worth stressing that the matrix H(t) is not defined at the times when a connection event occurs, which implies that f is not differentiable at those points. It turns out that f is of class C^1 almost everywhere (and not everywhere), provided that Γ is continuous almost everywhere as well; f still is required to be continuous at all times. If the study starts at t = 0 and there is no connection between distant nodes, then L(0) = 0_{M_n(R)} and the equation simplifies to the usual form.
Simple Cases

Heaviside Integration

Suppose we would like to calculate ∫ H(t − τ) g(t) dt, where g is a continuous and integrable function. Let G be an antiderivative of g, defined up to a constant. By integration by parts, we have, up to a constant C ∈ R,
∫ H(t − τ) g(t) dt = H(t − τ) (G(t) − G(τ)) + C.
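The Heaviside integration formula can be checked numerically, e.g., with g(t) = cos(t) and G(t) = sin(t): for T > τ, ∫_0^T H(t − τ) g(t) dt = G(T) − G(τ).

```python
import numpy as np

# Numerical check of the Heaviside integration formula with g(t) = cos(t),
# whose antiderivative is G(t) = sin(t).
tau, T = 1.0, 3.0
t = np.linspace(0.0, T, 200001)
integrand = np.where(t >= tau, np.cos(t), 0.0)
numeric = np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(t))  # trapezoid rule
exact = np.sin(T) - np.sin(tau)

assert abs(numeric - exact) < 1e-4
```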

Simple Case of Two Nodes
Suppose E = {1, 2}: the network is composed of two nodes. Without creation/destruction, suppose that node 1 sends information to node 2 from time τ > 0. The Laplacian matrix is written as
L(t) = H(t - \tau) \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}.

Simple Case of Three Nodes
The case of three nodes E = {1, 2, 3} might be of particular interest as well. We could break the symmetry by supposing that node 1 sends information to node 2 at rate α from time τ > 0, and to node 3 at rate β from time τ′ ≥ τ. The Laplacian matrix follows from the diffusion equations given by the system. Again, according to Equation (67), the first equation gives an expression which we can rewrite, since f_1(τ) = f_1(0), as stated. Deriving f_2 is the most complex task here. First of all, we note that H(t − τ)H(t − τ′) = H(t − τ′), and we then deduce, from Equation (67), the corresponding integral. The continuity of the function f_2 at times τ and τ′ yields one equation for C_1 and one for C_2, from which we derive the function f_2. We now derive the expression of f_3; the resolution leads to the stated formula. It is interesting to note that the symmetry is broken in the long run and, quite importantly, we can demonstrate that f_1(t) + f_2(t) + f_3(t) = f_1(0) + f_2(0) + f_3(0): the conservation of information holds at all times.
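The conservation property claimed above can be verified numerically; the sketch below integrates the three-node delayed system step by step (the rates α = 0.4, β = 0.6 and the delays are illustrative values, not from a fitted model).

```python
import numpy as np
from scipy.linalg import expm

# Three-node delayed case: node 1 sends to node 2 at rate alpha from time tau,
# and to node 3 at rate beta from time tau2 >= tau.
alpha, beta, tau, tau2, D, dt = 0.4, 0.6, 0.3, 0.9, 1.0, 0.001

def L_of_t(t):
    A = np.zeros((3, 3))
    if t >= tau:
        A[0, 1] = alpha
    if t >= tau2:
        A[0, 2] = beta
    return np.diag(A.sum(axis=1)) - A

f = np.array([1.0, 3.0, 3.0])
t = 0.0
for _ in range(3000):                         # integrate up to t = 3
    f = expm(-D * L_of_t(t).T * dt) @ f
    t += dt

assert np.isclose(f.sum(), 7.0)               # conservation of information
assert f[0] < 1.0                             # node 1 has lost information
```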

Delay in the Differential Equation-Practical Case
We will need the following results, which generalize Definition 12 (specifically Equation (52)) and Theorem 4. Definition 14. Let f : E × R_+ → R represent the information function for a graph with n nodes. If D is the diffusion coefficient and Γ : E × R_+ → R the creation function of the information, with a time-dependent Laplacian operator L(t), the following directed diffusion equation holds, for any t ∈ R_+: ∂f/∂t (t, ·) = −D tL(t) f(t, ·) + Γ(t, ·). Here the main difference is that the Laplacian operator L is a function of time t.

Solution Involving L(t)
From Equation (12), we can use the Cauchy theorem to obtain the following solution.
Theorem 9 (Application of the Cauchy Theorem). The solution of the Cauchy problem associated with Equation (12) is given by
f(t, x) = e^{−D ∫_0^t tL(s) ds} f(0, x) + ∫_0^t e^{−D ∫_τ^t tL(s) ds} Γ(τ, x) dτ.
As was done previously, we would rather use the step-by-step form
f(t + ∆t, x) = e^{−D ∫_t^{t+∆t} tL(s) ds} f(t, x) + ∫_t^{t+∆t} e^{−D ∫_τ^{t+∆t} tL(s) ds} Γ(τ, x) dτ,
which we can reduce again by noticing that ∫_t^{t+∆t} tL(s) ds = tL(t)∆t + o(∆t) and ∫_τ^{t+∆t} tL(s) ds = tL(τ)∆t + o(∆t), since t + ∆t − τ ≤ ∆t, so that Equation (64) is also verified: f(t + ∆t, x) = e^{−D tL(t) ∆t} f(t, x) + Γ(t, x) ∆t + E(t, x).

Practical Case-Committed Error
Here, the error E(t, x) committed by the approximation splits into two terms: one, E_L(t, x), is the error due to the approximation of the diffusion term, and the other, E_Γ(t, x), is the error due to the approximation of the creation/destruction term. We derive an upper bound for these two errors. Regarding the error on the diffusion term, we note that
E_L(t, x) = (1 − D ∆t tL(t) − (D/2)(∆t)^2 tL′(t) + (D^2/2)(∆t)^2 (tL(t))^2) f(t, x) − (1 − D ∆t tL(t) + (D^2/2)(∆t)^2 (tL(t))^2) f(t, x) + o((∆t)^2) = −(D/2)(∆t)^2 tL′(t) f(t, x) + o((∆t)^2).
This proves that E Γ (t, x) = o(∆t). Finally, we have proven that the committed error is such that E (t, x) = o(∆t). Thus, Equation (72) can be implemented by neglecting E (t, x) as this represents a negligible error (see Section 8.3 for numbers).

Practical Case-Implementation 3-Node Case
We implement the model through Equation (72). We take a 3-node graph as a first step and plot the solutions for the case depicted in Section 9.3.3. We set f_1(0) = 1 and f_2(0) = f_3(0) = 3, with D = 3, and we set the adjacency matrix A to
A = \begin{pmatrix} 0 & 0.4 & 0.6 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
Figure 14 shows the information evolution per node (yellow is node 1; orange is node 2; and red is node 3). Figure 14. A 3-node graph. Node 1 gives information to node 2 (resp. 3) with rate 0.4 (resp. 0.6). There is no delay, and D = 3.
We compare these results with the ones obtained by adding delays: node 2 starts receiving the information from time τ = 0.3 and node 3 from time τ′ = 0.9. The time-dependent adjacency matrix becomes
A(t) = \begin{pmatrix} 0 & 0.4\, H(t - 0.3) & 0.6\, H(t - 0.9) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
Figure 15 plots the results. Figure 15. Same structure as in Figure 14, but with a delay τ = 0.3 for node 2 and τ′ = 0.9 for node 3.
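A sketch reproducing this comparison, assuming SciPy: integrating the no-delay and delayed configurations up to a fixed horizon shows that the node with the larger delay (node 3) has received strictly less information, as observed below.

```python
import numpy as np
from scipy.linalg import expm

# Comparison of the no-delay run (Figure 14) with the delayed run (Figure 15)
# of the 3-node example, with D = 3 and rates 0.4 and 0.6.
D, dt, T = 3.0, 0.001, 2.0

def simulate(tau2, tau3):
    """Integrate the delayed 3-node system up to time T."""
    f = np.array([1.0, 3.0, 3.0])
    t = 0.0
    for _ in range(int(round(T / dt))):
        A = np.zeros((3, 3))
        if t >= tau2:
            A[0, 1] = 0.4
        if t >= tau3:
            A[0, 2] = 0.6
        L = np.diag(A.sum(axis=1)) - A
        f = expm(-D * L.T * dt) @ f
        t += dt
    return f

no_delay = simulate(0.0, 0.0)
delayed = simulate(0.3, 0.9)

assert np.isclose(no_delay.sum(), delayed.sum())  # conservation in both runs
assert delayed[2] < no_delay[2]   # the most delayed node has received less
```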
The results show the expected behaviors, with another interesting aspect: if a node receives information with too much delay, even if it has a strong affinity, it will receive much less than expected. This could modify the whole network configuration and change its dynamical properties.
A last remark concerns the obvious discontinuity of the derivative at the times when nodes start receiving the information.

Random Walk
The random walk is an important model of the propagation of information in a network. It represents the abrupt movement of a bunch of bitcoins (resp. an epidemic) among different addresses (resp. people), and it is a powerful tool for the traceability of the sub-signals of a signal.
From what we have seen above, we can confidently state that the random walk actually is a particular case of the delayed differential equation theory above. Indeed, a random walk is obtained by taking the limit D → +∞, with delay times that are multiples of a fixed delay τ > 0 marking the time step of the movement of information from node to node (creating a discretization of time in a continuous-time environment). It is worth pointing out that, in this case, diffusion becomes propagation. Thus, more generally, on a network, it turns out that the propagation of a signal is a particular diffusion of it, which is a non-trivial statement (whereas the converse is incorrect, as sub-signals are not modified in essence through their propagation throughout the network).

Conclusions
In this paper, we studied the process of diffusion on a network, undirected and directed, with boundary conditions and response delays. More specifically, we introduced some fundamental definitions in the context of a crypto's P2P network and then derived the diffusion equation, with two different Laplacian operators. The main innovation is the derivation of analytic solutions (by means of the singular value decomposition), with or without boundary conditions. We also characterized delays in the response of vertices.
It is worth pointing out that a tool based on the evolution of information within a real blockchain network could be envisaged through this approach. For instance, a visualization of any kind of exchange in the Bitcoin network could be implemented. The tool could trace any piece of the exchange, from the past to the future, using the forecasting ability, and this at any instant. The forecast can thus be updated by feeding the system with all historical information up to the present. We can also imagine a parametrized machine learning process which captures the historical configurations of the Bitcoin network and suggests likely ones in the future. Thus, the traceability of information is a starting point for further transparency among all the agents, and implies a reduction of money-laundering risk. Modeling the P2P network is useful for any exchange, not only for trading and forecasting purposes, but also for compliance duties.
Finally, it is worth stressing that the results developed in this paper generalize to any kind of P2P network, except Section 4.2, which is Bitcoin specific, even if the approach could be adapted. No matter what the underlying protocol is, we will systematically find exchanges of information between agents, nodes, and users. The structure of the network, as well as its underlying consensus mechanism (e.g., proof-of-work and proof-of-stake), are additional data to which the general diffusion equation on a directed graph, given by Equation (52), can be fitted, with parameter D and, more importantly, with the operator Γ, a function of the consensus, as Γ is related to the source of information. One could make the model even more general by implementing a tensor D as a function of the vertices and of the edges of the network. This is left for further studies.