An Efficient Causal Group Communication Protocol for Free Scale Peer-to-peer Networks

In peer-to-peer (P2P) overlay networks, a group of n (≥2) peer processes has to cooperate. Each peer sends messages to, and receives messages from, every other peer in the group. In group communications, each message sent by a peer is required to be causally delivered to every peer. Most of the protocols designed to ensure causal message order are designed for networks with a plain architecture. These protocols can be adapted for use in free scale and hierarchical topologies; however, the amount of control information is O(n), where n is the number of peers in the system. Some protocols are designed for free scale or hierarchical networks, but in general they force the whole system to adopt the same order viewed by a super peer. In this paper, we present a protocol that is specifically designed to work with a free scale peer-to-peer network. By using information about the network's architecture and by representing message dependencies at the bit level, the proposed protocol ensures causal message ordering without enforcing the super peers' order. The designed protocol is simulated and compared with the Immediate Dependency Relation and the Dependency Sequences protocols to show its lower overhead.


Introduction
Multiparty overlay systems, such as peer-to-peer, are technologies that can solve problems in information distribution and processing, in areas including file distribution [1,2], multiplayer interactive games [3] and telecommunication [4]. These systems are characterized by group communication between n (≥2) peers that are distributed geographically. To achieve a consistent exchange of information in this environment, messages from the peers have to be causally ordered [5]. The use of causal ordering in overlay peer-to-peer systems provides message synchronization and reduces the indeterminism produced by the asynchronous execution of the peers and by random and unpredictable delays of the messages in the communication channels. Causal ordering guarantees that actions (requests or questions) are delivered before the corresponding reactions (results or answers).
Some works have proposed protocols to guarantee causal ordering delivery of messages. These protocols can be classified into three categories. The first category includes the protocols designed for plain peer-to-peer networks; we can classify [12,14,24,25] in this category. One disadvantage of these protocols is that the size of the control information in the worst-case scenario grows with the number of peers in the network. In addition, the protocols from this category add processing load to the peers that also depends on the number of peers in the system. The second category includes protocols that are designed for free scale peer-to-peer networks. In this topology, the nodes can be separated into two groups: peers and super peers. Super peers are characterized by higher processing power and/or bandwidth. This network topology was designed to solve some problems of plain P2P networks, such as bottlenecks caused by the very limited capabilities of some peers. Free scale peer-to-peer networks take advantage of system heterogeneity, assigning greater responsibility to those who are more capable of handling it [28]. A similar topology can be seen in Virtual Private Network (VPN) technology [29], where Intranet computers can communicate directly with each other but remote clients communicate through the VPN server. However, to date there are no protocols specifically designed for such networks that guarantee causal ordering delivery of messages. To overcome the lack of dedicated solutions for these networks, some protocols for the plain topology can be adapted by establishing the causal order under each super peer. Nevertheless, besides the problems related to uncontrolled overhead growth, these approaches may induce unnecessary inhibition in the delivery of messages.
The third category incorporates all of the protocols that are designed for hierarchical networks [7,8,11,13,15-18,26,27], where the network topology disables direct communication among peers, allowing only connections among super peers and between a super peer and each of its dependent peers. Similar to the case of free scale networks, some protocols proposed for other topologies can be adapted to hierarchical networks. This is the case for the protocol proposed by Friedman and Manor [10], which has good characteristics in hierarchical sparse networks where the peers have a low message sending rate. However, since this protocol has a quadratic traffic cost, serious overloads in the super peers' channels may be caused in dense networks. Besides this approach, there are some protocols specifically designed for hierarchical networks [13,17,18] which are based on the use of a global time reference to ensure causal ordering of messages. Unfortunately, the characteristics of most hierarchical networks make it difficult to establish a global time reference; this is mainly due to the absence of shared memory and the lack of perfectly synchronized clocks [30]. In addition to this disadvantage, these protocols require knowing the maximum delays of the messages in the communication channels, which is not always feasible [13].
In this paper, we present the design of a protocol that ensures causal ordering of messages in overlay networks with a free scale topology without using a global time reference, while maintaining low overhead in the communication channels and in the peers. To achieve this, our causal protocol works on two communication groups formed according to connection type. An internal group consists of peers that are connected to a super peer; this group uses a bit vector to represent message dependencies, producing overhead at the bit level. The other group is an external group consisting of interconnected peers; this group uses the idea of hierarchical clocks [15] to represent message dependencies. The results obtained via simulation demonstrate that the overhead in an internal group is 3.5 times lower than the overhead of the Immediate Dependency Relation protocol [14], while the overhead in the external group is comparable to the overhead of the Immediate Dependency Relation protocol but half the overhead of the Dependency Sequences protocol.

Materials and Methods
This section describes the system model as well as the concepts and definitions used in the protocol presented in this work.

System Model
A free scale peer-to-peer network consists of peers that are connected only to super peers and peers that have a direct connection with other peers. Based on this, a free scale peer-to-peer network can be divided into two groups (see Figure 1):
1. Internal group (internal peers that are connected only to a super peer); and
2. External group (interconnected external peers).
Peers that form an internal group are called internal peers. Peers in an external group are called external peers (see Figure 1). Peers in a free scale network can only belong to one group (internal or external). A super peer is a member of both groups at the same time. An internal group can have only one super peer, while an external group can consist of several.
A super peer is a special node with greater processing and bandwidth capacities in comparison to peers. Peers in an internal group are considered to have lower processing power or bandwidth compared to external group peers. Peers in an internal group can be represented by mobile phones and tablets, among others, connected by a cellular network to the Internet. A super peer can be seen as a meta-process [15]: for the external group it represents all of the events in its internal group, and for peers in the internal group it represents all of the events in the external group. With this representation, peers in the internal group do not require extensive knowledge about peers in the external group and vice versa.
The communication channels are considered to be reliable with random but finite delays. Thus, every message will eventually arrive at its destination process. In addition, the channels are considered to be non-FIFO (First In, First Out) channels. This implies that messages can be reordered by a communication channel.

Background and Definitions
Causal ordering was developed to remove inconsistencies in message delivery, which are produced by unpredictable delays in the communication channels. Causal order is based on the "happened before" relation defined by Lamport [30]. This relation is denoted by → as follows. It can be extended to messages in the following form: message m → message m′ if and only if send(m) → send(m′), where send is the message sending event.

The Immediate Dependency Relation (IDR)
The Immediate Dependency Relation [14] is the propagation threshold of the control information regarding the messages sent in the causal past, which must be transmitted to ensure causal delivery. The IDR is denoted as "↓" and its formal definition is as follows.

Definition 2. Two messages m and m′ form an IDR, m↓m′, if and only if m → m′ and no message m″ exists such that m → m″ and m″ → m′.

Thus, a message m directly precedes a message m′ if and only if no other message m″ exists in the system such that m″ belongs at the same time to the causal future of m and to the causal past of m′.
This relation is important since if the delivery of messages respects the order of their diffusion for all pairs of messages in IDR, then the delivery will respect the causal delivery for all messages.
Causal information that includes the messages immediately preceding a given message is sufficient to ensure a causal delivery of such message [14].

Process and Meta-Process
Definition 3. A single process is defined to be a totally ordered set of events [30]. In other words, a process can be defined as a set of events such that, for each two events from this set, it is possible to determine which of them happened before.
Definition 4. A meta-process is defined to be a partially ordered set of events [15]. It can be used to represent a group of processes. A meta-process allows some events to be concurrent; thus, Condition 1 from Definition 1 cannot be applied to a meta-process. Hence, if a and b are events in the same meta-process and a comes before b, this does not mean that a → b.

Data Structure
In order to define data structures to ensure message causal ordering, we need to define additional data types and structures that will be used throughout this document.

Bit Vector
A bit vector is an array of variable size. Each element can take only two values: set (represented by 1) and cleared (represented by 0). Each bit vector can be extended with zeros to a required size, and the trailing zeros can be trimmed. An empty bit vector is denoted as Ø.
Bits are numbered starting from 1. V[x] represents the bit at position x in vector V. The bit at position 0 is assumed to be always set. Bit vectors support AND (&), OR (|) and NOT operations that are bitwise, i.e., the operation is applied to the bits at Position 1, then the bits at Position 2, etc.
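As an illustration of these operations, here is a minimal bit-vector sketch in Python; the class and method names are ours, not part of the protocol, and the always-set bit at position 0 is handled virtually, as described above:

```python
class BitVector:
    """Minimal growable bit vector; bits are numbered from 1,
    and the (virtual) bit at position 0 is always set."""

    def __init__(self):
        self.bits = []  # bits[i] holds the bit at position i + 1

    def _grow(self, size):
        # Extend with cleared bits (zeros) up to `size` positions.
        while len(self.bits) < size:
            self.bits.append(0)

    def set(self, pos):
        self._grow(pos)
        self.bits[pos - 1] = 1

    def get(self, pos):
        if pos == 0:
            return 1            # position 0 is always set
        if pos > len(self.bits):
            return 0            # trailing zeros are implicit
        return self.bits[pos - 1]

    def and_(self, other):
        # Bitwise AND, applied position by position (1, 2, ...).
        n = max(len(self.bits), len(other.bits))
        out = BitVector()
        for p in range(1, n + 1):
            if self.get(p) & other.get(p):
                out.set(p)
        return out
```

OR and NOT follow the same position-by-position pattern as `and_`.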

Extended Linear Time
Extended Linear Time (LTx) is a data type that can contain either an integer or a bit vector, but not both at the same time. It is possible to determine at any given time whether a given LTx contains an integer or a bit vector. If this data type contains an integer, it represents a process; if it contains a bit vector, it represents a meta-process.

Extended Vector Time
Extended Vector Time (VTx) is a vector of LTx elements. Each element does not depend on the others; thus, a vector can have one element that is an integer and another element that is a bit vector at the same time.
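A possible rendering of LTx and VTx (our own Python encoding; the protocol itself does not prescribe a representation):

```python
from typing import List, Union

# LTx: either an integer (representing a process) or a bit vector,
# modelled here as a list of 0/1 ints (representing a meta-process).
# An LTx value is never both at once, and its kind can be inspected.
LTx = Union[int, List[int]]

def is_meta(lt: LTx) -> bool:
    """True when this LTx entry stands for a meta-process."""
    return isinstance(lt, list)

# VTx: a vector of independent LTx entries; one entry can be an
# integer while another is a bit vector at the same time.
vtx: List[LTx] = [0, [0, 0, 0], 2]
```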

Data Structures in an Internal Peer
Each internal peer maintains the following data:
• id_int-identifier of a peer in the internal group. This identifier must be unique in a group.
• SN-an integer representing a sequence number of a message.
• RV-a bit vector representing received messages.
• DV-a bit vector representing the message IDR.

Data Structures in an External Peer
In an external group, each peer maintains the following variables:
• id_ext(p)-identifier of a peer in the external group. This identifier must be unique in a group.
• VTx(p)-extended vector time. The size of the vector is G, where G is the number of peers and super peers in an external group.
• CI-vector of pairs representing message control information. Each pair consists of a process identifier and an LTx. CI[x] is the pair whose process identifier is x.

Data Structures in a Super Peer
A super peer maintains the following variables:
• id_ext(sp)-identifier of a super peer in the external group.
• VTx(sp)-extended vector time. The size of the vector is G, where G is the number of peers and super peers in an external group.
• I-bit vector representing identifiers of translated messages.
• LR-vector of pairs of size L, where L is the number of peers in the internal group of this super peer. Each pair contains two sequence numbers, named in and out.
• TT-vector of vectors of pairs. The size of the vector is G. Each pair consists of two message identifiers called in and out.

Internal Group Message Structure
Messages in an internal group are denoted by m_int and have the following structure:
• Id-the identifier of the sending process in the internal group.
• SN-an integer representing a message sequence number.
• Last-an integer identifier of the last message from this peer.
• DV-a bit vector representing message dependency.
• Data-the user data to be transmitted.
This message structure is used by both peers and super peers in an internal group (see Figure 2).

• Data-the user data to be transmitted.
This message structure is used by both peers and super peers in an external group (see Figure 2).

Specification of the Causal Protocol
Initially, each process performs an initialization procedure, clearing its data structures. A super peer initializes its variables as follows:

VTx(sp)[Z] := 0, if Z is an identifier of a peer or Z = id_ext
VTx(sp)[Z] := Ø, if Z is an identifier of a super peer
LR := (<0, 0>, <0, 0>, ..., <0, 0>)
TT := (Ø, Ø, ..., Ø)
I := Ø

When an internal peer wants to send a message to a super peer, it constructs the message using the following procedure (see Listing 4):

1 SN := SN + 1
2 m_int = (id_int, SN, 0, DV, Data)
3 DV := Ø

When an internal peer receives a message from a super peer, it verifies the message's delivery condition. First, it should verify a FIFO condition: that the received message has not arrived before a previous message from the same sender. After this verification, the peer needs to check that it has received all of the messages that form an immediate dependency relation with the currently received message (see Listing 5). If the delivery condition is satisfied, the peer updates its data structures (see Listing 6) and can deliver the message to the corresponding application. If the delivery condition is not satisfied, the message should be buffered.
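The send procedure (Listing 4) and the internal delivery condition (Listing 5) can be sketched as follows. This is our own simplified Python rendering: bit vectors are encoded as integers (bit k standing for the message with SN = k), tuples stand for the internal message structure, and the treatment of Last = 0 is our assumption:

```python
class InternalPeer:
    def __init__(self, ident):
        self.ident = ident
        self.sn = 0      # sequence number of this peer's messages
        self.rv = 0      # received messages, one bit per message
        self.dv = 0      # current immediate-dependency bits

    def send(self, data, last=0):
        # Listing 4: number the message, attach DV, then clear DV.
        self.sn += 1
        msg = (self.ident, self.sn, last, self.dv, data)
        self.dv = 0
        return msg

    def can_deliver(self, msg):
        ident, sn, last, dv, data = msg
        # FIFO condition: the sender's previous message was received
        # (Last = 0 meaning "no previous message" is our assumption).
        fifo_ok = last == 0 or (self.rv >> (last - 1)) & 1
        # Causal condition: every message in the IDR of msg
        # has already been received (DV & RV = DV).
        causal_ok = dv & self.rv == dv
        return bool(fifo_ok and causal_ok)
```

For example, a peer whose RV covers messages 1-3 can deliver a message depending only on message 1, while a peer missing message 1 cannot.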
An internal peer will also receive its own messages returned by the super peer. In this case, it should only update its receive vector RV. After a message is delivered, a process should check its buffer. If a message in the buffer satisfies its delivery condition, it should be delivered using the same algorithm (see Listing 6). Since a super peer only transmits messages from other peers, it does not have a message emission phase. When a super peer receives a message from an internal peer, it checks the message delivery condition. Since an internal peer can only receive messages from a super peer, only a FIFO dependency needs to be checked. To check the message delivery condition, a super peer uses the following algorithm (see Listing 7).
Listing 7. Super peer delivery condition for messages from an internal group.
If a delivery condition is satisfied, a super peer can forward the message to other peers (see Listing 8). If a delivery condition is not satisfied, the message should be buffered.
Send m_int to all peers in the internal group
6 Transform and send message m_int to all peers in the external group
After a message is delivered, a process should check its buffer. If a message in the buffer satisfies a delivery condition, it should be delivered using the same algorithm (see Listing 8). A message transformation will be discussed later in this section.
When an external peer wants to send a message, it constructs it using the algorithm (see Listing 9).
Listing 9. Message sending by an external peer.
When a message is received by an external peer or a super peer, the receiver must update its clock with the internal message identifiers that the message carries in its field m_ext.I (see Listing 10).
Listing 10.Message receiving by peers and super peers in an external group.
After this update, the message delivery condition can be checked. The delivery condition consists of two parts. If the message arrived from an external peer, the message FIFO order should be checked. If the FIFO ordering is not violated, or the message arrived from a super peer, then the causal delivery condition is checked. This condition consists of checking the message identifiers inside the message's control information (see Listing 11). These conditions (FIFO and causal) are checked both in external peers and in a super peer when it receives a message from the external group.
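The causal part of this check can be sketched as follows; this is our own simplified Python encoding, with CI as a list of <id, dependency> pairs and VTx as a dictionary of integer entries (the real protocol also handles bit-vector entries):

```python
def can_deliver_external(msg_ci, vtx, from_super_peer, fifo_ok):
    """Simplified external delivery condition: the FIFO check is
    skipped for messages arriving from a super peer, and every pair
    <id, dep> in the message's control information must already be
    covered by the receiver's clock VTx."""
    if not from_super_peer and not fifo_ok:
        return False
    for ident, dep in msg_ci:
        if vtx.get(ident, 0) < dep:
            return False
    return True
```

For instance, in the trace later in this section, a message whose CI contains <2, 1> cannot be delivered while VTx[2] = 0, and must be buffered.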
If the delivery condition is satisfied in an external peer, the peer can deliver the message to the application. To do this, the peer is required to update its data structures to ensure that the following messages will be correctly ordered (see Listing 12). This update consists of two parts. First, the peer needs to update its clock so that the messages that depend on this one can be delivered. Second, it updates the control information so that a message sent from this peer will be correctly ordered by other peers. If for any pair in CI the bit vector is empty, this pair can be removed from CI.
If the delivery condition is satisfied in a super peer, it can forward the message to the internal group. To do this, the super peer is required to update its data structures to ensure the delivery of messages that depend on this one. This requires an update of the clock so that messages that depend on this one can be delivered.
Listing 13. External message delivery to a super peer.
Transform and send message m_ext to all peers in the internal group
After a message is delivered to a peer or a super peer, the process should check its buffer. If a message in the buffer satisfies its delivery condition, it should be delivered using the same algorithm (see Listing 12 for an external peer, and Listing 13 for a super peer).
As the bit vectors RV, DV and VTx grow with each message during the execution of the protocol, mechanisms are needed to reduce their sizes. Communication channels are considered reliable; thus, every message sent by a super peer will be delivered to every internal group peer. This means that RV and VTx will eventually have the bit of each message set. Since a super peer numbers messages with consecutive integers, after some execution time RV and VTx will start with a run of consecutive set bits. Considering that a bit, once set, is never cleared, it is only required to store the bits between the first cleared bit and the last set bit.
A vector DV is based on the immediate dependency relation. Each bit is set only once and is cleared when a message is sent or a dependent message is received. Thus, it is only required to store the bits between the first and the last set bits.
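The two trimming rules can be sketched as follows; the offset bookkeeping and function names are ours, with bit vectors as plain 0/1 lists:

```python
def compact_rv(bits):
    """RV/VTx trimming: drop the implicit trailing zeros, then drop
    the leading run of set bits (messages 1..offset all received).
    Returns (offset, window), where window holds the bits between
    the first cleared bit and the last set bit."""
    while bits and bits[-1] == 0:
        bits.pop()
    offset = 0
    while offset < len(bits) and bits[offset] == 1:
        offset += 1
    return offset, bits[offset:]

def compact_dv(bits):
    """DV trimming: only the bits between the first and the last set
    bit need to be stored. Returns (start, window), start being the
    0-based position of the first set bit."""
    if 1 not in bits:
        return 0, []
    start = bits.index(1)
    end = len(bits) - 1 - bits[::-1].index(1)
    return start, bits[start:end + 1]
```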
To be completely functional, a protocol requires a mechanism to transform messages from an internal group to an external group and vice versa.This transformation is performed by a super peer because it participates in both groups at the same time.
A message that originated from an internal group generally carries dependencies on other messages from the internal group as well as dependencies on messages from an external group. The dependencies on messages from the internal group are represented in the form of a bit vector but, to be interpreted correctly in the external group, they should be transformed into a vector of pairs (process identifier, message dependency). This transformation can be achieved by using the algorithm presented below (see Listing 14):

For each <id, PT> in TT // PT is a vector of pairs
6 For each <in, out> in PT in reverse order
7 If ...

If a message carries only dependencies on external messages, CI[id_ext] will be an empty vector. In this case, this dependency can be removed from CI.
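A much-simplified sketch of this direction of the transformation; the encoding of TT as a dictionary is ours, and the actual Listing 14 walks TT in reverse order and performs further bookkeeping not shown here:

```python
def translate_dv(dv_bits, tt, local_ext_id):
    """Translate internal dependencies (set bit positions, i.e.
    internal sequence numbers) into external-form pairs via the
    translation table. tt maps an external process id to a list of
    (ext_in, internal_out) pairs; bits with no entry in tt refer to
    messages that originated in this super peer's own internal
    group and are attributed to local_ext_id here."""
    ci = {}
    for k in sorted(dv_bits):
        owner, dep = local_ext_id, k
        for ext_id, pairs in tt.items():
            for ext_in, internal_out in pairs:
                if internal_out == k:
                    owner, dep = ext_id, ext_in
        ci.setdefault(owner, []).append(dep)
    return ci
```

Here an internal dependency on message 3, recorded in TT as translated external message <5> of process 2, becomes the external pair <2, 5>.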
A message received by a super peer that originated from an external group generally carries dependencies on other messages from the external group, as well as dependencies on messages from the internal group. The dependencies on messages from the external group are represented in the form of pairs (process identifier, message dependency) but, to be interpreted correctly in the internal group, they should be transformed into a bit vector form (see Listing 15). In addition, a super peer can receive messages that contain dependencies on messages that have not yet been received. To deal with this, a super peer checks the message delivery condition as described previously to ensure that the message can be delivered by the super peer, and only then transforms it, ensuring that all of the message dependencies are resolved. This does not affect the message order in any way: if a super peer has not received a message m, none of the peers in its internal group have received m either; thus, a message m′ that requires m to be delivered before it cannot be delivered to any peer in the internal group.
To show that our algorithm ensures causal delivery (correctness), we provide a correctness proof. In order to keep the proof as simple as possible, we focus on the part of the internal group where the message dependencies are represented on a bit level. In this proof we will use the following notations:
• m_k: message with SN = k.
• p_j: peer with identifier j.
We show that with this information we ensure the causal order.
Theorem 1. For each set bit m_l.DV[k], either k identifies a message from the same group, or TT contains a pair (in, out) at position id such that k = out, where (id, in) identifies a message m_k which has an IDR with m_l.
Main steps of the proof. The proof is composed of three lemmas and a proposition. The lemmas are intermediate results necessary for our proof.
Step 1: The proof is by contrapositive. We prove that if m_l.DV[k] is cleared, then a message m_r exists such that m_k → m_r → m_l; thus, the message m_k does not have an immediate dependency relation with message m_l. We assume that m_l.DV[k] is cleared. Only two events can clear DV[k] before sending m_l; these are:
• By Line 3 of Listing 6, the delivery of a message m_r with m_r.DV[k] set clears DV[k], and this delivery takes place such that delivery(m_k) → delivery(m_r) → send(m_l). Therefore, m_k does not directly precede message m_l.
• By Line 3 of Listing 4, the sending of m_r empties DV. In addition, the event send(m_r) takes place such that delivery(m_k) → send(m_r) → send(m_l). Therefore, m_k does not directly precede message m_l.
If neither of these two events occurs, DV[k] is set when send(m_l) is carried out and, by Line 1 of Listing 4, m_l.DV[k] is set.
Step 2: The proof is by contradiction. By Lemma 1, we know that if m_l.DV[k] is set then m_k → m_l with p_i ≠ p_j. We suppose that there is a message m_r such that send(m_k) by p_i → send(m_r) by p_r → send(m_l) by p_j and, in addition, that m_k ↓ m_r. The proof considers two cases: p_r ≠ p_j and p_r = p_j.

• We consider the case where p_r ≠ p_j and the delivery of m_k causally precedes the delivery of m_r (delivery(m_k) → delivery(m_r)) at p_j. By Step 1, we know that m_r.DV[k] is set. Hence, on the delivery of m_r (delivery(m_r)) at p_j, DV[k] is cleared by Line 3 of Listing 6. When the sending of m_l (send(m_l)) is performed, because delivery(m_r) → send(m_l), DV[k] at peer p_j is cleared; therefore, m_l.DV[k] is cleared, which is a contradiction.

• In the case where p_r = p_j, we have delivery(m_k) → send(m_r) → send(m_l). When the sending of m_r (send(m_r)) takes place, DV[k] is cleared by Line 3 of Listing 4. Therefore, m_l.DV[k] is cleared, which is a contradiction.
Finally, the following proposition shows that through the bits attached to the sent messages and the information stored at the super peer, we ensure the causal order in the overlay peer-to-peer network.

Proposition 1. If m_l.DV[k] is set and m_k is a message from the external group, then TT contains a pair (in, k).
Proof. By Line 19 of Listing 15, we have that (in, k′) is inserted into TT[m_k.id] only after the delivery of message m_k at the super peer, where (m_k.id, in) identifies a message in the external group. The corresponding internal message has an SN equal to k′. On the delivery of m_k′ at p_j, with SN = k′, we have (by Line 4 of Listing 6) that DV[k′] is set. We know by Lemma 3 that if m_l.DV[k′] is set then m_k′ ↓ m_l. On the reception at the super peer of message m_l sent by p_j with m_l.DV[k′] set, by Lines 5-21 of Listing 14, we have that TT contains an element (in′, out′) with out′ = k′ at position m_k.id, which identifies a message (m_k.id, in′) in the IDR protocol of the external group.

Causal Protocol Description
To describe how our proposed protocol detects causal order violations, we use a scenario (see Figure 3) composed of a network that consists of the following:

• Two internal peers with identifiers 1 and 2 in the internal group, denoted as P(i)1 and P(i)2.
• A super peer with identifier 1 in the external group, denoted as Sp1.
• Two external peers with identifiers 2 and 3 in the external group, denoted as P(e)2 and P(e)3.
Appl. Sci. 2016, 6, 234
This scenario contains three types of messages:
• Messages in an external group, represented by a solid line.

• Messages in an internal group from a peer to the super peer, represented by a dashed line.
• Messages in an internal group from the super peer to peers, represented by a dotted line.
We mark with an X a message delivery that violates the causal order.
Diffusion of m3 at P(i)2
• First, the internal peer increments its sequence number to 1 (see Listing 4, Line 1).
• The DV variable is cleared (see Listing 4, Line 3).
Reception of m3 at Sp1
• When the super peer Sp1 receives m3, it checks its delivery condition. The FIFO delivery condition is satisfied (see Listing 7, Line 1): 1 = 0 + 1, and message m3 can be delivered.
• The m3.Last field is changed to 0 (LR[2].out = 0) (see Listing 8, Line 1).

• Then the delivery condition is checked. The FIFO delivery condition is not checked because the message arrived from a super peer. m(e)3.CI contains <2, 1>, but VTx[2] = 0, so this dependency is not satisfied.
The message delivery condition is not satisfied; therefore, the message m(e)3 should be buffered.
Reception of m2 at P(e)3
• When the peer P(e)3 receives m2, it checks its delivery condition. The FIFO delivery condition is satisfied (see Listing 11, Line 1): 1 = 0 + 1, and m2.CI contains <3, 1>; this dependency is satisfied because id_ext = 3. Both conditions are satisfied, and the message m2 can be delivered.
Reception of m(i)4 at P(i)2
• When the peer P(i)2 receives m(i)4, it checks its delivery condition. The FIFO delivery condition is satisfied (see Listing 5, Line 1): RV[2] = 1, and the causal delivery condition (see Listing 5, Line 2) is also satisfied: 001 & 111 = 001. Both conditions are satisfied, and the message m(i)4 can be delivered.
Reception of m(i)4 at P(i)1
• When the peer P(i)1 receives m(i)4, it checks its delivery condition. The delivery condition is not satisfied, so the message m(i)4 is buffered.
Reception of m(i)3 at P(i)1
• When the peer P(i)1 receives m(i)3, it checks its delivery condition. The FIFO delivery condition is satisfied (see Listing 5, Line 1): RV[0] = 1. The causal delivery condition (see Listing 5, Line 2) is also satisfied: 01 & 11 = 01. Both conditions are satisfied, and the message m(i)3 can be delivered.
• The message buffer contains m(i)4, and its delivery condition should be revalidated. The FIFO delivery condition is satisfied (see Listing 5, Line 1): RV[2] = 1, and the causal delivery condition (see Listing 5, Line 2) is also satisfied: 001 & 111 = 001. Both conditions are satisfied, and the message m(i)4 can be delivered.
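The bitwise condition used repeatedly in this trace (for example, 001 & 111 = 001) can be reproduced directly; reading the vectors as binary numbers is our encoding, not the paper's:

```python
# DV of m(i)4 and RV of the receiving peer, from the trace above.
dv = 0b001
rv = 0b111
causal_ok = (dv & rv == dv)   # every dependency bit is covered by RV

# If the dependency's bit were still cleared in RV, delivery blocks.
rv_missing = 0b110
blocked = (dv & rv_missing != dv)
```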

Overhead Analysis
As the proposed protocol depends on the Immediate Dependency Relation protocol [14], the size of the control information of message m depends on the number of concurrent messages that form an IDR with m.
In the internal group, all of the messages are sequentially numbered. As a message m cannot form an IDR with more than n-1 messages (n is the number of processes in the system), its dependency vector cannot contain more than n-1 set bits. However, the set bits can be separated by cleared bits. Let m1 be the message with the lowest sequence number that forms an IDR with m. Then the bit vector can have no more bits than the number of messages concurrent to m1 that exist in the system. As the message delay is finite, each process can generate only a finite number of messages concurrent to m1. Thus, the total number of messages concurrent to m1 is also finite and proportional to the number of processes in the system, producing an overhead of O(n) bits.
In the external group, message dependencies are represented as a combination of dependencies on external messages and dependencies on internal messages. The number of elements that represent dependencies on external messages is limited by the number of processes in the external group (peers and super peers), thus limiting the number of pairs that represent message dependencies to O(l) (where l is the number of peers and super peers in the external group). As for vector I in an external message, its size is limited by the number of messages that can be generated in the external and internal groups, producing an overhead for vector I of O(n) (the same as the overhead in the internal group).
We notice that in our protocol, as for the minimal causal algorithm in [14], the likelihood that the worst case will occur approaches zero as the number of participants in the group grows. This is because the likelihood that k concurrent messages occur decreases in inverse proportion to the size of the communication group. This behavior has been shown in [14].

Simulations
To analyze our protocol, we carried out different simulations. The scenario used in these simulations consists of one internal group and one external group connected by one super peer. All of the peers in the system are divided equally between these two groups. In the simulation, each peer generates a message every 70-90 milliseconds. The system is simulated with different numbers of peers and with different delays in the communication channels. All of the simulation scenarios are listed in Table 1. The simulations are performed in the OMNeT++ discrete event simulator [31]. All of the delays are normally distributed, with the mean being the middle of the interval and the variance equal to one fourth of the interval width (for example, the message generation time is distributed as N(80, 5) milliseconds). If a random value is generated outside of the interval, the value of the nearest interval end is taken instead.
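The delay-generation procedure described above can be sketched as follows. This is our reconstruction in Python: we treat the second parameter of N(80, 5) as the spread passed to `random.gauss` (if it is a variance rather than a standard deviation, the sigma would instead be its square root).

```python
import random

def sample_delay(lo, hi):
    """Draw a value from a normal distribution centred on the middle of
    [lo, hi], with spread equal to one fourth of the interval width
    (interpreted here as the sigma argument of random.gauss), clamping
    out-of-range draws to the nearest interval end, as in the
    simulation setup described above."""
    mean = (lo + hi) / 2
    sigma = (hi - lo) / 4
    x = random.gauss(mean, sigma)
    return min(max(x, lo), hi)

# Message generation time: 70-90 ms, i.e. N(80, 5) clamped to [70, 90].
delays = [sample_delay(70, 90) for _ in range(1000)]
print(all(70 <= d <= 90 for d in delays))  # True: all draws are clamped
```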
The simulation program uses the Immediate Dependency Relation protocol [14] to compare against and validate the protocol presented in this paper. When a message can be delivered to an application by our protocol, it is validated against the IDR to detect causal order violations not detected by our protocol. In addition, the overheads of the IDR and of the Dependency Sequences (DS) protocol [15] were calculated and compared with the overhead generated by our protocol.
As the IDR protocol is designed for plain networks, it is simulated in the following way. The super peer in the system is considered transparent for the IDR implementation, and each peer (internal and external) performs the full calculations of the IDR protocol as described in [14]. The super peer only retransmits the messages between groups.
As for the DS protocol simulations, only the external group is considered. Each external peer is simulated as a super peer with exactly one peer in its group.
The results of the simulations are analyzed in the following way. Two overheads are mainly analyzed: the communication overhead (the amount of control information required to be sent with a message) and the storage overhead (the amount of information required to be stored at each peer).
As the Dependency Sequences protocol [15] stores information only at the super peer level, the storage overhead for this protocol is not analyzed.
By considering channel delays from 0 to 50 milliseconds (see Figure 4), the simulation results show that the overhead in an internal group is lower than the overhead in an external group, and the overhead in an external group is lower than the overhead produced by the Immediate Dependency Relation and the Dependency Sequences protocols. For 900 peers, each internal peer requires storing on average 128 bytes of information, and the average overhead in an internal group is around 90 bytes. In the external group, each peer stores on average 2700 bytes and produces an overhead on average of 950 bytes. The IDR protocol requires storing on average 4900 bytes on each peer and sending 1650 bytes, while the DS protocol sends on average 3950 bytes.
For these delays, the results show that the overhead in an internal group is 18.3 times lower than the overhead of IDR and requires storing 38 times less information. The overhead in an external group is about 1.7 times lower than the overhead of IDR and 4.1 times lower than that of the DS protocol, and requires storing 1.7 times less information.
As all of the overheads exhibit linear growth, we calculate the slopes using the least-squares method. For communication overheads, the slopes are 0.098 for the internal group, 1.018 for the external group, 1.760 for IDR and 4.356 for the DS protocol. For storage overheads, the slopes are 0.143 for the internal group, 2.952 for the external group and 5.388 for the IDR protocol.
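The slope computation above is the standard least-squares estimate. A minimal Python sketch follows; the data points are illustrative values we made up, not the simulation data.

```python
def ls_slope(xs, ys):
    """Slope of the least-squares line through (xs, ys):
    sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical points: overhead in bytes versus number of peers.
peers = [100, 300, 500, 700, 900]
overhead = [20, 40, 60, 80, 100]
print(ls_slope(peers, overhead))  # 0.1
```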
In our work, in order to reduce the size of the variable TT in a super peer, we use the fact that a peer in the internal group can only receive a message that has already been received and delivered by the corresponding super peer. If m1 and m2 are messages with an immediate dependency relationship m1↓m2, and all the peers in the internal group have received and delivered m2, then the information about m1 can be removed from the super peer. This can be seen from two different aspects: message reception and message sending.
Message reception: a peer can deliver m2 such that m1↓m2 if and only if it has already delivered m1. If a peer receives a message m3 such that m1↓m3 and has already delivered m2 (which implies delivery of m1), then it can deliver m3 without any delay. Thus, the information about the dependency on m1 does not affect the delivery of m3 in any way.
Message sending: if a peer has delivered message m2, then no message originating from this peer can carry any dependency on m1, and the information about m1 in the super peer is no longer required.
Therefore, if all peers have delivered m2 such that m1↓m2, then the delivery of messages depending on m1 will not be affected in the internal group in any way, and the super peer will not receive any message depending on m1 from the internal group. Thus, the information about m1 can be deleted from the variable TT in the super peer.
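A minimal sketch of this pruning rule follows, assuming hypothetical data shapes that we introduce for illustration: TT as a set of message ids, the IDR as a successor map, and a set of messages known to be delivered by every internal peer.

```python
def prune_tt(tt, idr, delivered_by_all):
    """Drop from TT every message m1 that has an immediate successor m2
    (m1 IDR m2) already delivered by all internal peers, following the
    rule argued above.

    tt: set of message ids currently tracked by the super peer.
    idr: dict mapping m1 to the set of messages m2 with m1 IDR m2.
    delivered_by_all: set of ids delivered by every internal peer.
    """
    removable = {m1 for m1 in tt
                 if idr.get(m1, set()) & delivered_by_all}
    return tt - removable

tt = {"m1", "m4"}
idr = {"m1": {"m2"}, "m4": {"m5"}}
# m2 has been delivered everywhere, so m1 can be dropped; m5 has not,
# so m4 must stay.
print(prune_tt(tt, idr, {"m2", "m3"}))  # {'m4'}
```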
By considering channel delays from 50 to 250 milliseconds (see Figure 5), the results also show that the overhead in an internal group is lower than the overhead in an external group, and the overhead in an external group is lower than the overhead produced by the Immediate Dependency Relation and the Dependency Sequences protocols. For 500 peers, each internal peer requires storing on average 300 bytes of information, and the average overhead in an internal group is around 230 bytes. In the external group, each peer stores on average 2100 bytes and produces an overhead on average of 1700 bytes. The IDR protocol requires storing on average 3400 bytes on each peer and sending 2400 bytes, while the DS has a communication overhead of 2800 bytes.
The results for delays from 50 to 250 milliseconds show that the overhead in an internal group is ten times lower than the overhead of IDR and requires storing eleven times less information. The overhead in an external group is about 1.4 times lower than the overhead of IDR and 1.64 times lower than that of the DS, and requires storing 1.6 times less information.
For communication overheads, the slopes are 0.461 for the internal group, 3.480 for the external group, 4.965 for IDR and 5.561 for the DS protocol. For storage overheads, the slopes are 0.603 for the internal group, 4.225 for the external group and 6.838 for the IDR protocol.
In the scenario with delays from 50 to 550 milliseconds (see Figure 6), for 400 peers each internal peer requires storing on average 550 bytes of information, and the average overhead in an internal group is around 400 bytes. In the external group, each peer stores on average 1860 bytes and produces an overhead on average of 1400 bytes. The IDR protocol requires storing on average 2500 bytes on each peer and sending 1500 bytes, while the DS requires sending 2850 bytes. These results show that the overhead in an internal group is 3.5 times lower than the overhead of the IDR and requires storing 4.5 times less information. The overhead in an external group is similar to the overhead of the IDR, but requires storing 1.3 times less information. As for the DS protocol, the proposed solution has a two times lower communication overhead.
For communication overheads, the slopes are 0.990 for the internal group, 3.617 for the external group, 3.830 for IDR and 7.162 for the DS protocol. For storage overheads, the slopes are 1.325 for the internal group, 4.680 for the external group and 6.378 for the IDR protocol.

Scalability Analysis
To analyze the scalability of the system, we evaluate the control information management of the protocol against the message density, which is defined as the number of messages sent per process per time unit. For this last analysis, we conducted the following experiment. The system model from the previous experiment was modified in the following way: instead of generating a message every 80 milliseconds, each peer generates messages so that the total number of messages per second remains fixed for each number of peers in the system. For example, with a generation rate of 1000 messages per second, in a system with 50 peers each peer generates 20 messages per second; in a system with 100 peers, each peer generates 10 messages per second; and in a system with 500 peers, each peer generates two messages per second.
The system is analyzed for the following message generation rates: 2500, 2000, 1500 and 1000 messages per second. The obtained results are compared with the results from the experiment described above, in which the number of messages generated in the system is 12.5 per second per peer.
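The fixed-rate setup can be expressed as a small helper; `per_peer_interval_ms` is a name we introduce for illustration of the arithmetic, not part of the simulation code.

```python
def per_peer_interval_ms(total_rate_msgs_per_s, n_peers):
    """Interval between consecutive messages of one peer when the
    system-wide generation rate is fixed and shared equally among
    n_peers (rate per peer = total rate / n_peers)."""
    per_peer_rate = total_rate_msgs_per_s / n_peers
    return 1000.0 / per_peer_rate

# 1000 msgs/s system-wide: 50 peers -> 20 msgs/s each (50 ms apart),
# 500 peers -> 2 msgs/s each (500 ms apart).
print(per_peer_interval_ms(1000, 50), per_peer_interval_ms(1000, 500))
```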
By considering channel delays from 0 to 50 milliseconds (see Figure 7), the simulation results show that the more messages are generated in the system per second, the more control information is required to ensure the causal ordering of messages. In addition, the overhead growth has two different stages: a slow growth for a low number of peers (fewer than 100), and a linear growth when the number of peers is high (more than 100-200 in this experiment).
Analyzing the growth of the overhead (linear growth part) in the internal group, the slopes are 0.0612 for 2500 messages/second, 0.0611 for 2000 messages/second, 0.0603 for 1500 messages/second and 0.0583 for 1000 messages/second. In the external group, the slopes are 0.0685 for 2500 messages/second, 0.0728 for 2000 messages/second, 0.0735 for 1500 messages/second and 0.0719 for 1000 messages/second. In the original experiment, the slopes are 0.0979 for the internal group and 1.0177 for the external group.
In this scenario, the difference between the slopes obtained with the message limit is less than five percent. However, the difference with the original experiment is 1.60 times for the internal group and 13.9 times for the external group.
By considering channel delays from 50 to 250 milliseconds (see Figure 8), the results also show that the more messages are generated in the system per second, the more overhead is required to ensure causal ordering. The overhead shows the same growth stages as for the system with communication channel delays from 0 to 50 milliseconds.
Analyzing the growth of the overhead (linear growth part) in the internal group, the slopes are 0.0083 for 2500 messages/second, 0.0041 for 2000 messages/second, 0.0045 for 1500 messages/second and 0.0241 for 1000 messages/second. In the external group, the slopes are 0.4632 for 2500 messages/second, 0.1215 for 2000 messages/second, 0.0147 for 1500 messages/second and −0.0374 for 1000 messages/second. In the original experiment, the slopes are 0.4606 for the internal group and 3.4804 for the external group.
In this scenario, in both groups, the slopes have a tendency toward zero, which means that the overhead directly depends on the density of the messages exchanged and does not depend on the number of peers in the system. The slopes from this experiment are 45 times lower for the internal group and 25 times lower for the external group.
In the scenario with delays from 50 to 550 milliseconds (see Figure 9), the results show that the more messages are generated in the system per second, the more overhead is required to ensure causal ordering. The overhead shows the same growth stages as for the systems with communication channel delays from 0 to 50 and from 50 to 250 milliseconds.
Analyzing the growth of the overhead in the internal group, the slopes are 0.002177379 for 2500 messages per second, 0.00272099 for 2000 messages per second, 0.005812997 for 1500 messages per second and −0.006493243 for 1000 messages per second, while in the original experiment the slope is 0.990. As for the external group, the slopes could not be calculated because the linear growth part was not reached, but the same tendency as for the other network delays can be observed. In this scenario, the slopes have a tendency toward zero, which means that the overhead does not directly depend on the number of peers in the system, but depends on the message density in the system.
The results obtained from this experiment show that when the number of messages generated in the system per second remains constant, the overhead growth slope has a tendency toward zero. This means that there is no direct dependency between the number of peers in the system and the amount of control information required for causal message ordering. Thus, the number of peers in the system can be relatively high while the message generation rate of the system remains constant.
In addition, it should be noted that the above experiments consider that all of the peers in the system participate equally in the communication. Peers that are passive receivers do not increase the system overhead. For example, in a system with 300 peers where 200 peers are passive receivers and do not generate any messages, the overhead will be the same as in a system containing only 100 participating peers. We note that in real-world P2P applications, such as video calls/meetings, current solutions only support up to 25 peers participating equally in the communication. This means that with our protocol, applications could allow up to 100 peers to exchange streams with equitable participation while keeping a controlled overhead growth.
This leads to the following. If several communication groups exist over the same peer-to-peer network, the communication and storage overhead will only depend on the number of peers participating in a group communication, not on the total number of peers in the system. For example, the overhead will depend on the number of people participating in a video conference, and not on the total number of people using the software.

Discussion
Some protocols have been proposed to implement causal message ordering for peer-to-peer overlay networks. These protocols can be divided into three groups based on the network architecture they are designed for. In addition, each group can be further divided into two sub-groups, based on whether a protocol uses a global time reference or is based on logical references between messages. These protocols are described as follows.

Protocols Designed for Plain Networks
A plain peer-to-peer overlay network consists of a number of interconnected peers. Each peer communicates directly with other peers. In a broadcast group, each peer sends messages to every other peer in the group.

Protocols that Use A Global Time Reference
All protocols in this category [9,19-21] are based on the same idea of using a combination of logical and physical clocks to ensure causal message order. To achieve this, the protocols are required to be able to synchronize time with a time server and to have knowledge about the minimum and maximum delays of messages in the communication channels and the maximum clock drift. To determine the order of two messages m1 and m2, these protocols use physical time. If the difference in the physical times at which the messages were sent is greater than the maximum delay, then the messages can be ordered based on physical time alone. In addition, they can determine that a message m1 cannot have been received when a message m2 was sent (the difference in physical time is smaller than the minimum delay). If these protocols cannot determine the message order based on these rules, a logical clock is used to order the messages.
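The decision rule described above can be sketched as follows; the function name and return labels are ours, not taken from the cited protocols.

```python
def physical_order(t1, t2, min_delay, max_delay):
    """Classify two send timestamps following the rule above: a gap
    larger than the maximum network delay fixes the order by physical
    time alone; a gap smaller than the minimum delay means the earlier
    message cannot have been received before the later one was sent,
    so the messages are concurrent; otherwise a logical clock is
    needed to decide."""
    gap = abs(t2 - t1)
    if gap > max_delay:
        return "ordered-by-time"
    if gap < min_delay:
        return "concurrent"
    return "use-logical-clock"

# Illustrative timestamps (ms), with delays bounded by [5, 100] ms.
print(physical_order(0, 120, 5, 100))  # ordered-by-time
print(physical_order(0, 3, 5, 100))    # concurrent
print(physical_order(0, 50, 5, 100))   # use-logical-clock
```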

Protocols that Use Logical References
This section covers the protocols that are designed for plain peer-to-peer networks and do not use a global time reference. To ensure causal ordering, these protocols use the "happened before" relation defined by Lamport [30].
The Causal Ordering in Deterministic Overlay Network protocol [10] uses the idea of forwarding each received message. Thus, this protocol does not require any control information to be sent with a message, but in the worst-case scenario it produces n−1 copies (where n is the number of peers in the system) of the same message, saturating the communication channels.
The Critical Causal Order of Events in Distributed Virtual Environment protocol [23] is based on the idea of reducing causal order violations rather than completely removing them. To achieve this reduction, the protocol resends the last received message with each new message, thus doubling the average message size. However, the protocol does not require any control information to be sent alongside the message to ensure causal message ordering.
An Efficient Algorithm for Causal Message Ordering [12] extends the idea of the vector clock [32] to the case where each message is addressed to a different subset of processes. This modification requires more information to be stored than in the vector clock protocol. In the worst-case scenario, this protocol requires storing an entry for each process in a group, thus producing an overhead of O(n) (where n is the number of peers in the system), and when sending a message, all of these entries must be included.
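For reference, the classical vector-clock delivery condition that [12] extends can be sketched as follows. This is the textbook broadcast formulation, not the exact algorithm of [12].

```python
def vc_can_deliver(local_vc, msg_vc, sender):
    """Classical vector-clock delivery condition for broadcast: the
    message is the next one expected from its sender, and the sender
    had seen no message that the receiver has not yet delivered."""
    if msg_vc[sender] != local_vc[sender] + 1:
        return False
    return all(msg_vc[k] <= local_vc[k]
               for k in range(len(local_vc)) if k != sender)

local = [2, 1, 0]
print(vc_can_deliver(local, [3, 1, 0], 0))  # True: next message from peer 0
print(vc_can_deliver(local, [3, 2, 0], 0))  # False: depends on an undelivered message from peer 1
```

Each message carries a full vector of n entries, which is the O(n) communication overhead the surveyed protocols try to reduce.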
Probabilistic Analysis of Causal Message Ordering [22] analyzes the probability of violating the causal order of messages. This work shows that if the sending of a message is delayed after the last send or receive event, the probability of a violation decreases and can decrease down to 0. This method does not require any control information at all, but reduces the concurrency of the system.
The Immediate Dependency Relation protocol [14] is based on the idea of sending only the identifiers of the immediate predecessors of a message. This produces a reduced average overhead compared to the vector clock protocol. However, in the worst-case scenario, the overhead in the communication channels can be as high as the number of processes in the system (O(n), where n is the number of peers in the system). In regard to storage requirements, this protocol stores the vector clock and the message control information, thus producing storage requirements twice as high as the number of peers in the system.
An optimal algorithm for enforcing causal message ordering is proposed in [24]. The algorithm achieves optimality by using four propagation constraints to reduce the propagation of redundant information and by using an encoding scheme to attach to messages only the necessary causal dependency information. A performance analysis of the communication overhead of the optimal algorithm under a wide range of system conditions, using simulations, is presented in [25]. The results indicate that the optimal algorithm performs significantly better than the algorithm proposed in [33]. However, the communication overhead in the worst case is O(n), where n is the number of peers in the system (broadcast case).

Protocols Designed for Free Scale Networks
A free scale network consists of a number of peers and super peers (a super peer is a peer with greater processing power and bandwidth). The peers are divided into two groups based on their connection types: some peers are connected only to a super peer, while others are interconnected among themselves and with super peers [28]. While surveying the state of the art, we found no protocols specifically designed for this type of network. Protocols designed for plain networks can be adapted to a free scale network architecture; however, these protocols do not use information about the network topology and treat peers in different groups in the same manner. Thus, the overhead grows with the number of peers in the system.

Protocols Designed for Hierarchical Networks
A hierarchical network consists of peers and super peers. A peer is connected only to a super peer, while super peers communicate among themselves [28]. In this network architecture, when a peer sends a message to a peer in another group, it sends the message to its super peer. The super peer forwards the message to the super peer connected to the destination peer, which then forwards it to its destination.
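The forwarding path just described (peer, its super peer, the destination's super peer, destination peer) can be sketched as a simple routing function; the topology representation and names are illustrative.

```python
# Sketch of the message path in a hierarchical network:
# src -> its super peer -> destination's super peer -> dst.

def route(src, dst, super_of):
    """Return the hop sequence for a message from src to dst, where
    super_of maps each peer to its super peer."""
    if super_of[src] == super_of[dst]:
        # same group: the shared super peer relays the message
        return [src, super_of[src], dst]
    # different groups: the message crosses the super-peer layer
    return [src, super_of[src], super_of[dst], dst]
```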

Protocols that Use a Global Time Reference
Protocols in this subgroup [13,17,18] are based on the same idea used by protocols with a global clock designed for a plain network. They use a combination of logical and physical clocks to ensure causal message order.

Protocols that Use Logical References
This section covers protocols designed for hierarchical peer-to-peer networks that use the "happened before" relation defined by Lamport [30] or other mechanisms that do not rely on a global time reference.
The Distributed Floor Control Protocols [7] and the A Reliable Multicast [6] protocol use synchronization mechanisms based on coordinators to ensure the causal ordering of events. As a result, these protocols do not require any control information to be sent, but they require an infrastructure to support communication between peers. In addition, these protocols are a type of server-client protocol in which the server's order is enforced on all clients.
The Two-Layered protocol for a Large-Scale Group of Processes [16] and the Domain-Based Causal Ordering Group Communication protocol [11] use two vector clocks: one for the subgroup and another for the global group. While this reduces the size of the control information and the storage requirements, it implies that messages that are concurrent in one group become causally ordered in another group. Both protocols require sending g integers within a subgroup, where g is the number of peers in the subgroup, and l integers in the global group, where l is the number of groups.
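A minimal sketch of this two-level clock layout (our own illustration; names are not taken from [11,16]) shows why a message carries g or l integers instead of n:

```python
# Sketch of two-level clocks: one vector for the subgroup (g entries)
# and one for the set of groups (l entries), instead of n entries.

class TwoLevelPeer:
    def __init__(self, pid, gid, g, l):
        self.pid, self.gid = pid, gid
        self.local_vc = [0] * g    # one entry per peer in the subgroup
        self.group_vc = [0] * l    # one entry per group

    def send_local(self):
        """Stamp a subgroup message: g integers of control information."""
        self.local_vc[self.pid] += 1
        return ("local", self.pid, list(self.local_vc))

    def send_global(self):
        """Stamp an inter-group message: l integers of control information."""
        self.group_vc[self.gid] += 1
        return ("global", self.gid, list(self.group_vc))
```

The cost of this compression is the drawback noted above: the group-level clock cannot distinguish concurrent messages from the same group, so they become causally ordered for other groups.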
An Optimal Causal Broadcast Protocol [8] sends only the identifier of the last received or sent message. As a result, the overhead is constant, but this can produce cases where messages are not correctly ordered.
Dependency Sequences [15] represent the causal relations of a message as a sequence of message identifiers in the form of intervals. However, the protocol does not contain any mechanism to remove unnecessary dependencies. Consequently, each new message carries an overhead no smaller than that of the previous message, and without an external mechanism to remove unnecessary dependencies, the overhead grows indefinitely.
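The interval encoding can be sketched as follows; this is an illustration of the encoding only, not the full protocol of [15], and the function name is ours. Consecutive sequence numbers collapse into a single (lo, hi) pair, but without pruning the list only grows.

```python
# Sketch of dependency intervals: dependencies are stored as sorted,
# disjoint (lo, hi) pairs, so a run of consecutive sequence numbers
# costs two integers regardless of its length.

def add_dependency(intervals, sn):
    """Insert sequence number sn into the interval list and re-merge
    adjacent runs. Naive expansion keeps the sketch short."""
    sns = sorted({sn}.union(
        s for lo, hi in intervals for s in range(lo, hi + 1)))
    merged = []
    for s in sns:
        if merged and s == merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], s)   # extend the current run
        else:
            merged.append((s, s))             # start a new interval
    return merged
```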
Hierarchical Clocks [15] use two different clocks to represent the causal relationships of a message: one clock represents the message's dependency on external events (events from other groups), and the other its dependency on local events (events from the same group). A disadvantage of this approach is that it contains no mechanism to reduce the sequence sizes, so the size of the control information grows indefinitely; external mechanisms such as checkpoints are required to accomplish this task.
In the Adaptive Causal Ordering Algorithm [26], a message carries control information only about its direct predecessors; however, in the worst case, the communication overhead is O(n) integers per message, where n is the number of peers in the system (broadcast case).
For Causal Multicast in Mobile Networks [27], the communication overhead is dynamic, with a worst case of O(n), where n is the number of peers in the system (broadcast case). We note that in this protocol, updating the control structures requires each peer to acknowledge every received message to its local super peer, which increases the communication delay.

Conclusions
The protocol presented in this paper ensures causal message ordering in a free scale peer-to-peer overlay network. The protocol uses a combination of the Immediate Dependency Relation protocol and the idea of hierarchical clocks to reduce the message overhead. Our protocol is efficient in the overhead attached per message in the communication channels: the overhead sent per message is dynamically adapted according to the behavior of the concurrent messages. In addition, the simulations show that the proposed protocol produces less communication overhead than the IDR and DS protocols, and also requires less storage. As a result, this protocol can be used to ensure causal message ordering while producing a lower overhead than the IDR and DS protocols. The lower overhead allows devices with lower computational capacities to be included in the system. The protocol can be further extended by reducing the overhead's dependency on the number of internal groups, allowing more super peers and corresponding internal groups to be included in a system. This would allow the protocol to be extended for use in hierarchical peer-to-peer networks.

Figure 1 .
Figure 1.Internal and external groups of a free scale P2P (peer-to-peer) network.

Definition 1 .
The relation → on the set of events of a system is the smallest relation satisfying the three following conditions: (1) if a and b are events in the same process, and a comes before b, then a → b; (2) if a is the sending of a message by one process and b is the receipt of the same message by another process, then a → b; (3) if a → b and b → c, then a → c. Two distinct events a and b are said to be concurrent (a ∥ b) if a ↛ b and b ↛ a. This relation can be extended to messages in the following form: message m → message m' if and only if send(m) → send(m'), where send is the message sending event.
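One standard way to realize Definition 1 operationally (not specific to this paper's protocol) is with vector timestamps: a → b holds iff V(a) ≤ V(b) componentwise with V(a) ≠ V(b), and two events are concurrent when neither dominates. A minimal sketch:

```python
# Checking the happened-before relation of Definition 1 on vector
# timestamps: a -> b iff V(a) <= V(b) componentwise and V(a) != V(b).

def happened_before(va, vb):
    """True if the event stamped va causally precedes the one stamped vb."""
    return all(x <= y for x, y in zip(va, vb)) and va != vb

def concurrent(va, vb):
    """True if neither event causally precedes the other (a ∥ b)."""
    return not happened_before(va, vb) and not happened_before(vb, va)
```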

Figure 2 .
Figure 2. Messages in a free scale peer-to-peer network.

Listing 2 .
External peer initialization. VTx(p)[Z] := 0, if Z is an identifier of a peer; VTx(p)[Z] := Ø, if Z is an identifier of a super peer; CI := Ø. Furthermore, a super peer performs initialization as follows (Listing 3).

Listing 4 .
Message sending by an internal peer.

Listing 8 .
Internal message delivery to super peer.

Listing 12 .
Message delivery to an external peer.

Listing 15 .
Message transformation from an external to an internal group.

Figure 4 .
Figure 4. Communication (a) and storage (b) overheads in bytes for a system with delays from 0 to 50 milliseconds.

Figure 5 .
Figure 5. Communication (a) and storage (b) overheads in bytes for a system with delays from 50 to 250 milliseconds.

Figure 6 .
Figure 6.Communication (a) and storage (b) overheads in bytes for a system with delays from 50 to 550 milliseconds.

Figure 7 .
Figure 7. Internal (a) and external (b) group overheads in bytes for a system with delays from 0 to 50 milliseconds.

Figure 8 .
Figure 8. Internal (a) and external (b) group overheads in bytes for a system with delays from 50 to 250 milliseconds.

Figure 9 .
Figure 9. Internal (a) and external (b) group overheads in bytes for a system with delays from 50 to 550 milliseconds.

Fragment of Listing 12 (message delivery to an external peer), recovered from the page layout:

     CI := CI ∪ <m_ext.id, m_ext.SN>
11 Else
12   If not exists <i, dep> in CI such that i = m_ext.id then
13     CI := CI ∪ <m_ext.id, Ø>
14   End if
15   CI[m_ext.id][m_ext.SN] := 1
16 End if
17 For each <i, dep> in m_ext.CI
18   If VTx[i] is an integer then
19     CI := CI ∪ <i, dep>
20   Else
21     CI[i] := CI[i] & dep
22   End if
23 End for

• Lemma 1 shows that if bit m_l.DV[k] is set, then the message with SN = k causally precedes message m_l.
• Lemma 2 shows that if m_l.Last = k, then the message with SN = k causally precedes message m_l.
• Lemma 3 indicates that a message m_k has an immediate dependency relation with another message m_l if and only if bit m_l.DV[k] is set.
• Proposition 1 shows that, through the bit structure h(m) attached to the messages sent and the causal information at the super peer, we ensure the causal order (Theorem 1).

Lemma 1. If m_l.DV[k] is set, then message m_k → m_l.
Proof. By Line 2 in Listing 4, m_l.DV[k] is set if and only if DV[k] at peer p_j is set when the sending of m_l is carried out by p_j. By Line 4 in Listing 6, DV[k] is set only after the delivery of m_k. In addition, the super peer does not make any modifications to the DV of a message. This implies that the delivery of m_k precedes the sending of m_l (delivery(m_k) → send(m_l)). Therefore, m_k → m_l.

Lemma 2. If m_l.Last = k, then message m_k → m_l.
Proof. By Line 1 of Listing 8, m_l.Last is set to the value of LR[m_l.id].out at the super peer. By Line 3 of Listing 8, LR[m_l.id].out is set to k upon the send of m_k, which was received by the super peer from the peer with id m_l.id. This means that m_k and m_l were sent by the same peer, and therefore m_k → m_l.

Lemma 3. m_k ↓ m_l if and only if m_l.DV[k] is set.
The proof of this lemma is divided into two steps: first, we show that if m_k ↓ m_l, then m_l.DV[k] is set; second, we show that if m_l.DV[k] is set, then m_k ↓ m_l.

Reception of m_3 at Sp_1 (fragment):
• A clock for this super peer is updated with a new value (see Listing 8, Line 2): VTx(p) = (3, 1, 1), and LR is updated with the message identifiers (see Listing 8, Line 3): LR[2] = <1, 3>.
• m_3.SN is updated to 3, and m_3 is sent to all peers in the internal group.

Transformation of m_3 at Sp_1 to the external group:
• I_m = 11 and I is set to Ø (see Listing 14, Lines 1-2), and CI is created with value <1, 01>; TT[2] contains <1, 2> (see Listing 14, Line 7).
• CI element <1, 01> has its bit 2 cleared (see Listing 14, Line 9): <1, Ø>, and this dependency is removed. CI is updated with the message dependency (see Listing 14, Line 11): CI = <2, 1>, and m_3 is transformed to the external group: m(e)_3 = (1, 3, <2, 1>, 11, m_3.Data).

Reception of m_4 at P(e)_3:
• When peer P(e)_3 receives m_4, it checks its delivery condition. The FIFO delivery condition is satisfied (see Listing 11, Line 1): 2 = 1 + 1. m_4.CI contains <1, 001> and VTx[1] is a bit vector, so this dependency is satisfied (see Listing 11, Line 2): 001 & 111 = 001. Both conditions are satisfied, and the message m_4 can be delivered.
• A clock component for peer 2 is updated with a new value (see Listing 12, Line 2).

Reception of m_4 at Sp_1:
• When super peer Sp_1 receives m_4, it checks its delivery condition. The FIFO delivery condition is satisfied (see Listing 11, Line 1): 2 = 1 + 1. m_4.CI contains <1, 001>, and this dependency is satisfied because id_ext = 1. Both conditions are satisfied, and the message m_4 can be delivered.
• A clock component for peer 2 is updated with a new value (see Listing 12, Line 2): VTx(p) = (11, 1, 1), and CI is empty, so a new message dependency is inserted (see Listing 12, Line 10): CI = <2, 1>.
• The message buffer contains m(e)_3, so its delivery condition is revalidated. The FIFO delivery condition is not checked because the message arrived from a super peer. m(e)_3.CI contains <2, 1> and VTx[2] is an integer, so the causal condition is satisfied (see Listing 11, Line 2): 1 ≤ 1. Both conditions are satisfied, and the message m(e)_3 can be delivered.
• A clock component for the super peer (id_ext = 1) is updated with a new value (see Listing 12, Line 2), and a clock component for peer 2 (see Listing 13, Line 29) and for this super peer (see Listing 13, Line 6) are updated with new values: VTx(p) = (4, 2, 1).

Transformation of m_4 at Sp_1 to the internal group:
• As m_4.CI contains an element for process 1 (see Listing 15, Line 1), DV is initialized to 001. In this case, m_4.CI does not contain other dependencies, so m_4 is transformed to the internal group: m(i)_4 = (0, 4, 0, 001, Data), and TT[2] contains <1, 2> (see Listing 15, Line 16).
• m(i)_4.Last is updated to 2: m(i)_4 = (0, 4, 2, 001, Data), and TT is updated with the message identifiers before and after transformation (see Listing 15, Line 19): TT[2] = <1, 2>, <2, 4>.
• A bit for the internal message is set in variable I (see Listing 15, Line 20): I = 0001.
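The bit-level delivery checks in the walkthrough (e.g., 001 & 111 = 001) all reduce to testing that every dependency bit of a message is already set in the receiver's state. A minimal sketch with illustrative names:

```python
# Bit-level causal check: a message's dependency vector dep is
# satisfied when every bit set in dep is also set in the receiver's
# clock, i.e. dep & clock == dep (both ints used as bit vectors).

def bits_satisfied(dep, clock):
    """True if every message flagged in dep was already delivered
    according to clock."""
    return dep & clock == dep
```

This is why representing dependencies at the bit level keeps both the check and the attached control information compact.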

Table 1 .
Number of peers and delays.