Article

An Asynchronous Message-Passing Distributed Algorithm for the Generalized Local Critical Section Problem

1 Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527, Japan
2 Graduate School of Information Science and Technology, Osaka University, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(2), 38; https://doi.org/10.3390/a10020038
Submission received: 27 January 2017 / Revised: 14 March 2017 / Accepted: 22 March 2017 / Published: 24 March 2017
(This article belongs to the Special Issue Networks, Communication, and Computing)

Abstract

This paper discusses the generalized local version of critical section problems, including mutual exclusion, mutual inclusion, k-mutual exclusion and l-mutual inclusion. When a pair of numbers (l_i, k_i) is given for each process P_i, it is the problem of controlling the system in such a way that the number of processes that can execute their critical sections at a time is at least l_i and at most k_i among its neighboring processes and P_i itself. We propose the first solution for the generalized local (l_i, |N_i| + 1)-critical section problem (i.e., the generalized local l_i-mutual inclusion problem). Additionally, we show the relationship between the generalized local (l_i, k_i)-critical section problem and the generalized local (|N_i| + 1 − k_i, |N_i| + 1 − l_i)-critical section problem. Finally, we propose the first solution for the generalized local (l_i, k_i)-critical section problem for arbitrary (l_i, k_i), where 0 ≤ l_i < k_i ≤ |N_i| + 1 for each process P_i.

1. Introduction

The mutual exclusion problem is a fundamental process synchronization problem in concurrent systems [1,2,3]. It is the problem of controlling the system in such a way that no two processes execute their critical sections (CSs) at a time. Generalizations of mutual exclusion have been studied extensively, e.g., k-mutual exclusion [4,5,6,7,8,9], mutual inclusion [10] and l-mutual inclusion [11]. The k-mutual exclusion problem is the problem of controlling the system in such a way that at most k processes can execute their CSs at a time. The mutual inclusion problem is the complement of the mutual exclusion problem; unlike mutual exclusion, where at most one process is in the CS, mutual inclusion places at least one process in the CS. In a similar way, the l-mutual inclusion problem is the complement of the k-mutual exclusion problem; unlike k-mutual exclusion, where at most k processes are in the CSs, l-mutual inclusion places at least l processes in the CSs. These generalizations are unified into a single framework, "the critical section problem", in [12]. Informally, the global (l, k)-CS problem is defined as follows. For each 0 ≤ l < k ≤ n, the global (l, k)-CS problem requires that at least l and at most k processes are in the CSs in the entire network.
This paper discusses the generalized local CS problem, which is a new version of CS problems. When the numbers l_i and k_i are given for each process P_i, it is the problem of controlling the system in such a way that the number of processes that can execute their CSs at a time is at least l_i and at most k_i among its neighbors and itself. In this case, we call this problem "the generalized local (l_i, k_i)-critical section problem". Note that the local (l, k)-CS problem assumes that the values of l and k are shared among all processes in the network, whereas the generalized local CS problem assumes that the values of l_i and k_i are set for each process P_i. These are the generalizations of local mutual exclusion [13,14,15,16,17], local k-mutual exclusion [18] and local mutual inclusion [11]. If every process has (0, 1), then the problem is the local mutual exclusion problem. If every process has (0, k), then the problem is the local k-mutual exclusion problem. If every process has (1, |N_i| + 1), then the problem is the local mutual inclusion problem, where N_i is the set of P_i's neighboring processes. The global CS problem is a special case of the local CS problem when the network topology is complete. However, to the best of our knowledge, our algorithm in this paper is the first solution for the generalized local (l_i, k_i)-CS problem.
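The per-neighborhood safety condition described above can be stated as a simple predicate over a configuration. The following Python sketch is illustrative only (the graph representation and all names are our own, not from the paper):

```python
# Check the generalized local (l_i, k_i)-CS safety condition:
# for every process i, l_i <= |CS_i(C)| <= k_i, where CS_i(C) counts
# the processes in the CS among i's closed neighborhood N_i ∪ {i}.

def is_safe(neighbors, in_cs, bounds):
    """neighbors: {i: set of neighbor ids}, in_cs: set of ids in the CS,
    bounds: {i: (l_i, k_i)}.  All names are illustrative."""
    for i, nbrs in neighbors.items():
        closed = nbrs | {i}
        cs_count = len(closed & in_cs)
        l_i, k_i = bounds[i]
        if not (l_i <= cs_count <= k_i):
            return False
    return True

# A 4-cycle P1-P2-P3-P4 with (l_i, k_i) = (1, 2) everywhere:
g = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
b = {i: (1, 2) for i in g}
print(is_safe(g, {1, 3}, b))   # True: each closed neighborhood has 1 or 2 in the CS
print(is_safe(g, set(), b))    # False: violates the lower bound l_i = 1
```

Note that the condition is checked over the closed neighborhood N_i ∪ {P_i}, not over N_i alone.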
The generalized local ( l i , k i ) -CS problem is interesting not only theoretically, but also practically, because it is useful for fault-tolerance and load balancing of distributed systems. For example, we can consider the following future applications.
  • One application is the dynamic invocation of servers for load balancing. The minimum number of servers that are always kept running for quick responses to requests from P_i is l_i. The number of servers changes dynamically with the system load. However, the total number of servers is limited by the resources (e.g., bandwidth) available to P_i, and this limit is k_i.
  • Another is fault-tolerant services, if each process in the CS provides a service for the network. Because every process has direct access to at least l_i servers, fault-tolerant service is guaranteed. However, because providing a service involves a significant cost, the number of servers should be limited to at most k_i for each process.
  • The other is that each process in the CS provides service A, and the other processes provide service B for the network. Then, every process in the network has direct access to at least l_i servers of A and to at least |N_i| + 1 − k_i servers of B.
In each case, the numbers l i and k i can be set for each process.
In this paper, we propose a distributed algorithm for the generalized local (l_i, k_i)-CS problem for arbitrary (l_i, k_i), where 0 ≤ l_i < k_i ≤ |N_i| + 1 for each process P_i. To this end, we first propose a distributed algorithm for the generalized local (l_i, |N_i| + 1)-CS problem (which we call the generalized local l_i-mutual inclusion problem). It is the first algorithm for this problem. Next, we show that the generalized local (l_i, k_i)-CS algorithms and the generalized local (|N_i| + 1 − k_i, |N_i| + 1 − l_i)-CS algorithms are interchangeable by swapping the process states, in the CS and out of the CS. By using this relationship between these two problems, we propose a distributed algorithm for the generalized local (l_i, k_i)-CS problem for arbitrary (l_i, k_i), where 0 ≤ l_i < k_i ≤ |N_i| + 1 for each process P_i. We assume that there is a process P_LDR such that |N_LDR| ≥ 4, k_LDR − l_LDR ≥ 3 and k_q − l_q ≥ 3 for each P_q ∈ N_LDR.
This paper is organized as follows. Section 2 provides several definitions and problem statements. Section 3 provides a solution to the generalized local (l_i, |N_i| + 1)-CS (i.e., generalized local l_i-mutual inclusion) problem. Section 4 presents an observation on the relationship between the generalized local (l_i, k_i)-CS problem and the generalized local (|N_i| + 1 − k_i, |N_i| + 1 − l_i)-CS problem. Section 5 provides a solution to the generalized local (l_i, k_i)-CS problem. In Section 6, we give a conclusion and discuss future work.

2. Preliminaries

2.1. System Model

Let V = {P_1, P_2, ..., P_n} be a set of n processes and E ⊆ V × V be a set of bidirectional communication links in a distributed system. Each communication link is FIFO. Then, the topology of the distributed system is represented as an undirected graph G = (V, E). By N_i, we denote the set of neighboring processes of P_i, that is, N_i = {P_j | (P_i, P_j) ∈ E}. By dist(P_i, P_j), we denote the distance between processes P_i and P_j. We assume that the distributed system is asynchronous, i.e., there is no global clock. A message is delivered eventually, but there is no upper bound on the delay time, and the running speed of a process may vary.
A set of local variables defines the local state of a process. By Q_i, we denote the local state of each process P_i ∈ V. A tuple of the local states of the processes (Q_1, Q_2, ..., Q_n) forms a configuration of the distributed system.

2.2. Problem

We assume that each process P_i ∈ V maintains a variable state_i ∈ {InCS, OutCS}. For each configuration C, let CS(C) (resp., CS̄(C)) be the set of processes P_i with state_i = InCS (resp., state_i = OutCS) in C. For each configuration C and each process P_i, let CS_i(C) (resp., CS̄_i(C)) be the set CS(C) ∩ (N_i ∪ {P_i}) (resp., CS̄(C) ∩ (N_i ∪ {P_i})). The behavior of each process P_i is as follows, where we assume that P_i eventually invokes the entry-sequence when it is in the OutCS state, and eventually invokes the exit-sequence when it is in the InCS state.
state_i := (initial state of P_i in the initial configuration C_0);
while (true) {
 if (state_i = OutCS) {
  Entry-Sequence;
  state_i := InCS;
  /* Critical Section */
 }
 if (state_i = InCS) {
  Exit-Sequence;
  state_i := OutCS;
  /* Remainder Section */
 }
}
Definition 1.
(The generalized local critical section problem). Assume that a pair of numbers l_i and k_i (0 ≤ l_i < k_i ≤ |N_i| + 1) is given for each process P_i ∈ V on a network G = (V, E). Then, a protocol solves the generalized local critical section problem on G if and only if the following two conditions hold in each configuration C.
  • Safety: For each process P_i ∈ V, l_i ≤ |CS_i(C)| ≤ k_i at any time.
  • Liveness: Each process P_i ∈ V changes between the OutCS and InCS states alternately infinitely often.
We call the generalized local CS problem when l_i and k_i are given for each process P_i "the generalized local (l_i, k_i)-CS problem".
We assume that the initial configuration C_0 is safe, that is, each process P_i satisfies l_i ≤ |CS_i(C_0)| ≤ k_i. In the case of (l_i, k_i) = (0, 1) (resp., (1, |N_i| + 1)), the initial state of each process can be OutCS (resp., InCS) because it satisfies the condition for the initial configuration. In the case of (l_i, k_i) = (1, |N_i|), the initial state of each process is obtained from a maximal independent set I as follows: a process is in the OutCS state if and only if it is in I. Note that existing works for CS problems assume that their initial configurations are safe. For example, for the mutual exclusion problem, most algorithms assume that each process is in the OutCS state initially, and some algorithms (e.g., token-based algorithms) assume that exactly one process is in the InCS state and the other processes are in the OutCS state initially. Hence, our assumption for the initial configuration is common to existing algorithms.
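For the (l_i, k_i) = (1, |N_i|) case above, a safe initial configuration is derived from a maximal independent set. A greedy construction suffices; the following Python sketch is illustrative (the helper names are ours, not from the paper):

```python
def maximal_independent_set(neighbors):
    """Greedy maximal independent set of an undirected graph
    given as {node: set of neighbors}."""
    mis = set()
    for v in sorted(neighbors):          # any deterministic order works
        if not (neighbors[v] & mis):     # v has no neighbor already chosen
            mis.add(v)
    return mis

def initial_states(neighbors):
    # A process is OutCS iff it belongs to the maximal independent set I.
    mis = maximal_independent_set(neighbors)
    return {v: ("OutCS" if v in mis else "InCS") for v in neighbors}

# A 4-cycle: I = {1, 3}, so P2 and P4 start in the CS.
g = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(initial_states(g))
```

Maximality of I guarantees every closed neighborhood contains an OutCS process (so |CS_i(C_0)| ≤ |N_i|), and independence guarantees it contains an InCS process (so |CS_i(C_0)| ≥ 1).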

2.3. Performance Measure

We apply the following performance measure, the message complexity, to the generalized local CS algorithms: the number of message exchanges triggered by a pair of invocations of the exit-sequence and the entry-sequence.

3. Proposed Algorithm for the Generalized Local l_i-Mutual Inclusion

In this section, we propose an algorithm l_i-LMUTIN for the case that k_i = |N_i| + 1.
First, we explain how the safety is guaranteed. Initially, the configuration is safe, that is, each process P_i satisfies l_i ≤ |CS_i(C_0)| ≤ |N_i| + 1. When P_i wishes to be in the OutCS state, P_i requests permission by sending a ⟨Request, ts_i, P_i⟩ message to each process in N_i ∪ {P_i}. When P_i obtains permission by receiving a ⟨Grant⟩ message from each process in N_i ∪ {P_i}, P_i changes to the OutCS state. Each process P_j grants permission to at most |N_j| − l_j + 1 processes at a time. Hence, at least l_j processes in N_j ∪ {P_j} cannot be in the OutCS state at the same time. When P_i wishes to be in the InCS state, P_i changes to the InCS state and sends a ⟨Release, P_i⟩ message to each process in N_i ∪ {P_i} so that the next request for exiting the CS can be managed.
Next, we explain how the liveness is guaranteed. We incorporate the timestamp mechanism proposed by [19] in our algorithm. Based on the priority of the timestamp of each request to change the state, a process preempts a permission when necessary, as proposed in [11,20,21]. The proposed algorithm uses ⟨Preempt, P_i⟩ and ⟨Relinquish, P_i⟩ messages for this purpose.
In the proposed algorithm, each process P_i maintains the following local variables.
  • state_i: The current state of P_i: InCS or OutCS.
  • ts_i: The current value of the logical clock [19].
  • nGrants_i: The number of grants that P_i has obtained for exiting the CS.
  • grantedTo_i: A set of timestamps (ts_j, P_j) for requests for P_j's exiting the CS that P_i has granted, but that P_j has not yet released.
  • pendingReq_i: A set of timestamps (ts_j, P_j) for requests for P_j's exiting the CS that are pending.
  • preemptingNow_i: A process id P_j such that P_i is preempting a permission for P_j's exiting the CS, if a preemption is in progress (nil otherwise).
For each request, a pair (ts_i, P_i) is used as its timestamp. We implicitly assume that the value of the logical clock [19] is attached to each message exchanged. Thus, in the proposed algorithm, we omit a detailed description of the maintenance protocol for ts_i. The timestamps are compared as follows: (ts_i, P_i) < (ts_j, P_j) iff ts_i < ts_j or (ts_i = ts_j) ∧ (P_i < P_j).
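The timestamp order above is exactly lexicographic comparison on (clock value, process id) pairs, which makes it a total order. A minimal Python illustration (names are ours):

```python
# The timestamp order used to prioritize requests: (ts_i, P_i) < (ts_j, P_j)
# iff ts_i < ts_j, or ts_i = ts_j and P_i < P_j.  Python's lexicographic
# tuple comparison implements exactly this total order.

def ts_less(a, b):
    """a, b are (logical clock value, process id) pairs."""
    return a < b   # lexicographic: clock first, process id as tie-break

print(ts_less((3, 1), (4, 2)))   # True: earlier clock wins
print(ts_less((3, 2), (3, 1)))   # False: equal clocks, smaller id wins
```

Because process ids are distinct, no two requests compare equal, so the order used for deleteMin and max is unambiguous.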
A formal description of the proposed algorithm for each process P_i is presented in Algorithms 1 and 2. When a process P_i receives a message, it invokes the corresponding message handler. Each message handler is executed atomically. That is, if a message handler is being executed, the arrival of a new message does not interrupt the message handler. In this algorithm description, we use the statement wait until (conditional expression). By this statement, a process is blocked until the conditional expression becomes true. While a process is blocked by the wait until statement, if it receives a message, it invokes the corresponding message handler.
Algorithm 1 Local variables for process P_i in algorithm l_i-LMUTIN
state_i ∈ {InCS, OutCS}, initially state_i = InCS if P_i ∈ CS(C_0), OutCS if P_i ∉ CS(C_0);
ts_i: integer, initially 0;
nGrants_i: integer, initially 0;
grantedTo_i: set of (integer, processID), initially {(0, P_j) | P_j ∈ CS̄_i(C_0)};
pendingReq_i: set of (integer, processID), initially ∅;
preemptingNow_i: processID, initially nil;
Algorithm 2 Algorithm l_i-LMUTIN: exit-sequence, entry-sequence and message handlers.
Exit-Sequence:
 ts_i := ts_i + 1;
 nGrants_i := 0;
 for each P_j ∈ (N_i ∪ {P_i}) send ⟨Request, ts_i, P_i⟩ to P_j;
 wait until (nGrants_i = |N_i| + 1);
 state_i := OutCS;
 
Entry-Sequence:
 state_i := InCS;
 for each P_j ∈ (N_i ∪ {P_i}) send ⟨Release, P_i⟩ to P_j;
 
On receipt of a ⟨Request, ts_j, P_j⟩ message:
 pendingReq_i := pendingReq_i ∪ {(ts_j, P_j)};
 if (|grantedTo_i| < |N_i| − l_i + 1) {
  (ts_h, P_h) := deleteMin(pendingReq_i);
  grantedTo_i := grantedTo_i ∪ {(ts_h, P_h)};
  send ⟨Grant⟩ to P_h;
 } else if (preemptingNow_i = nil) {
  (ts_h, P_h) := max(grantedTo_i);
  if ((ts_j, P_j) < (ts_h, P_h)) {
   preemptingNow_i := P_h;
   send ⟨Preempt, P_i⟩ to P_h;
  }
 }
 
On receipt of a ⟨Grant⟩ message:
 nGrants_i := nGrants_i + 1;
 
On receipt of a ⟨Release, P_j⟩ message:
 if (P_j = preemptingNow_i) preemptingNow_i := nil;
 delete (*, P_j) from grantedTo_i;
 if (pendingReq_i ≠ ∅) {
  (ts_h, P_h) := deleteMin(pendingReq_i);
  grantedTo_i := grantedTo_i ∪ {(ts_h, P_h)};
  send ⟨Grant⟩ to P_h;
 }
 
On receipt of a ⟨Preempt, P_j⟩ message:
 if (state_i = InCS) {
  nGrants_i := nGrants_i − 1;
  send ⟨Relinquish, P_i⟩ to P_j;
 }
 
On receipt of a ⟨Relinquish, P_j⟩ message:
 preemptingNow_i := nil;
 delete (*, P_j) from grantedTo_i, and let (ts_j, P_j) be the deleted item;
 pendingReq_i := pendingReq_i ∪ {(ts_j, P_j)};
 (ts_h, P_h) := deleteMin(pendingReq_i);
 grantedTo_i := grantedTo_i ∪ {(ts_h, P_h)};
 send ⟨Grant⟩ to P_h;
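The handlers above treat pendingReq_i and grantedTo_i as priority structures via deleteMin and max over the timestamp order. A minimal sketch of such a structure using Python's heapq (our own illustration, not the paper's data structure):

```python
import heapq

class TimestampSet:
    """Set of (ts, pid) pairs supporting deleteMin and max,
    as used for pendingReq_i and grantedTo_i (illustrative sketch)."""
    def __init__(self):
        self._heap = []           # min-heap ordered on (ts, pid)
    def add(self, item):
        heapq.heappush(self._heap, item)
    def delete_min(self):
        return heapq.heappop(self._heap)   # lowest timestamp = highest priority
    def max(self):
        return max(self._heap)             # lowest-priority entry, the preemption candidate
    def __bool__(self):
        return bool(self._heap)

s = TimestampSet()
for req in [(5, 2), (3, 7), (3, 4)]:
    s.add(req)
print(s.delete_min())   # (3, 4): ties on the clock are broken by process id
print(s.max())          # (5, 2): the request that would be preempted first
```

The real handlers also delete an arbitrary (*, P_j) entry on ⟨Release⟩ and ⟨Relinquish⟩, which a plain heap does not support efficiently; a balanced tree or lazy deletion would be needed for that operation.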

3.1. Proof of Correctness

In this subsection, we show the correctness of l_i-LMUTIN.
Lemma 1.
(Safety) For each process P_i ∈ V, l_i ≤ |CS_i(C)| ≤ |N_i| + 1 holds in any configuration C.
Proof. 
We assume that the initial configuration C_0 is safe, i.e., l_i ≤ |CS_i(C_0)|. For contradiction, consider the first process P_i that becomes unsafe. Suppose that |CS_i(C)| < l_i, that is, |CS̄_i(C)| > |N_i| + 1 − l_i. Because |CS_i(C_0)| ≥ l_i, consider the process P_j ∈ N_i ∪ {P_i} that entered the OutCS state with the (|N_i| + 2 − l_i)-th lowest timestamp among the processes in CS̄_i(C). To enter the OutCS state, P_j must obtain permission from each process in N_j ∪ {P_j}. This implies that P_i received the request ⟨Request, ts_j, P_j⟩ from P_j and sent a permission ⟨Grant⟩ to P_j. However, because P_i grants at most |N_i| − l_i + 1 permissions to exit the CS at a time, P_j cannot obtain a permission from P_i; this is a contradiction. ☐
Lemma 2.
(Liveness) Each process P i changes into the InCS and OutCS states alternately infinitely often.
Proof. 
For contradiction, suppose that some processes do not change into the InCS and OutCS states alternately infinitely often. Let P_i be such a process whose request to be in the OutCS state has the lowest timestamp (ts_i, P_i) among such processes. Without loss of generality, we assume that P_i is blocked in the InCS state, that is, P_i is blocked by the wait until statement in the exit-sequence (recall that each process eventually changes into the InCS state when it is in the OutCS state). Let P_j be any process in N_i.
  • Suppose that P_j changes into the InCS and OutCS states alternately infinitely often. After P_j receives the ⟨Request, ts_i, P_i⟩ message from P_i, the value of (ts_j, P_j) exceeds the timestamp (ts_i, P_i) of P_i's request. Because the request with the lowest timestamp is granted preferentially in this algorithm, P_j cannot keep changing into the InCS and OutCS states alternately while P_i's request is pending. Then, P_j eventually sends a ⟨Grant⟩ message to P_i, and P_i eventually sends a ⟨Grant⟩ message to itself.
  • Suppose that P_j does not change into the InCS and OutCS states alternately infinitely often. Because the timestamp of P_i's request is smaller than that of P_j's by assumption, the permission that P_j granted to another process is preempted, and a ⟨Grant⟩ message is sent from P_j to P_i. In addition, P_i sends a ⟨Grant⟩ message to itself.
Therefore, P_i eventually receives a ⟨Grant⟩ message from each process in N_i ∪ {P_i}, and the wait until statement in the exit-sequence does not block P_i forever. ☐

3.2. Performance Analysis

Lemma 3.
The message complexity of l_i-LMUTIN for P_i ∈ V is 3(|N_i| + 1) in the best case and 6(|N_i| + 1) in the worst case.
Proof. 
First, let us consider the best case. In the exit-sequence, for P_i's exiting the CS, P_i sends a ⟨Request, ts_i, P_i⟩ message to each process in N_i ∪ {P_i}; each process in N_i ∪ {P_i} sends a ⟨Grant⟩ message to P_i. In the entry-sequence, after P_i's entering the CS, P_i sends a ⟨Release, P_i⟩ message to each process in N_i ∪ {P_i}. Thus, 3(|N_i| + 1) messages are exchanged.
Next, let us consider the worst case. For P_i's exiting the CS, P_i sends a ⟨Request, ts_i, P_i⟩ message to each process P_j in N_i ∪ {P_i}. Then, P_j sends a ⟨Preempt, P_j⟩ message to the process P_m to which P_j sent a ⟨Grant⟩ message, P_m sends a ⟨Relinquish, P_m⟩ message back to P_j, and P_j sends a ⟨Grant⟩ message to P_i. After P_i's entering the CS, P_i sends a ⟨Release, P_i⟩ message to each process P_j in N_i ∪ {P_i}. Then, P_j sends a ⟨Grant⟩ message to return the grant to P_m or to grant the process with the highest priority in pendingReq_j. Thus, 6(|N_i| + 1) messages are exchanged. ☐
Theorem 1.
l_i-LMUTIN solves the generalized local (l_i, |N_i| + 1)-critical section problem with a message complexity of O(Δ), where Δ is the maximum degree of the network.

4. The Generalized Local Complementary Theorem

In this section, we discuss the relationship between the generalized local CS problems.
Let A_G(l, k) be an algorithm for the global (l, k)-CS problem, and A_L(l_i, k_i) be an algorithm for the generalized local (l_i, k_i)-CS problem. By Co-A_G(l, k) (resp., Co-A_L(l_i, k_i)), we denote the complement algorithm of A_G(l, k) (resp., A_L(l_i, k_i)), which is obtained by swapping the process states, InCS and OutCS.
In [12], it is shown that the complement of A_G(l, k) is a solution to the global (n − k, n − l)-CS problem. We call this relation the complementary theorem. Now, we show the generalization of the complementary theorem for the settings of local CS problems.
Theorem 2.
For each process P_i, a pair of numbers l_i and k_i (0 ≤ l_i < k_i ≤ |N_i| + 1) is given. Then, Co-A_L(l_i, k_i) is an algorithm for the generalized local (|N_i| + 1 − k_i, |N_i| + 1 − l_i)-CS problem.
Proof. 
By A_L(l_i, k_i), at least l_i and at most k_i processes among each process and its neighbors are in the CS. Hence, by Co-A_L(l_i, k_i), at least l_i and at most k_i processes among each process and its neighbors are out of the CS. That is, at least |N_i| + 1 − k_i and at most |N_i| + 1 − l_i processes among each process and its neighbors are in the CS. ☐
By Theorem 2, Co-(l_i-LMUTIN) is an algorithm for the generalized local (0, k_i)-CS problem, where k_i = |N_i| + 1 − l_i. We call it k_i-LMUTEX.

5. Proposed Algorithm for the Generalized Local CS Problem

In this section, we propose an algorithm LKCS for the generalized local (l_i, k_i)-CS problem for arbitrary (l_i, k_i), where 0 ≤ l_i < k_i ≤ |N_i| + 1 for each process P_i. We assume that the initial configuration C_0 is safe. Before we explain the technical details of LKCS, we explain the basic idea behind it.

5.1. Idea

The main strategy in LKCS is the composition of two algorithms, l i -LMUTIN and k i -LMUTEX. In the following description, we simply call these algorithms LMUTIN and LMUTEX, respectively. The idea of the composition in LKCS is as follows.
  • Exit-Sequence:
    • Exit-Sequence for LMUTIN;
    • Exit-Sequence for LMUTEX;
  • Entry-Sequence:
    • Entry-Sequence for LMUTEX;
    • Entry-Sequence for LMUTIN;
This idea does not violate the safety by the following observation.
  • Exit-sequence keeps the safety because invocation of exit-sequence for LMUTIN keeps the safety, and invocation of exit-sequence for LMUTEX trivially keeps the safety.
  • Similarly, entry-sequence keeps the safety because invocation of entry-sequence for LMUTEX keeps the safety, and invocation of entry-sequence for LMUTIN trivially keeps the safety.
Because the invocation of the exit-sequence for LMUTIN in the exit-sequence and of the entry-sequence for LMUTEX in the entry-sequence may block a process forever, i.e., deadlocks and starvations may occur, we need a mechanism to handle such situations, which makes the proposed algorithm non-trivial.
A problem in the above idea is the possibility of deadlocks in the following situation. There is a process P_u with state_u = InCS such that |CS_u(C)| = l_u, or P_u has a neighbor P_v ∈ N_u with |CS_v(C)| = l_v. Then, P_u cannot change its state by the exit-sequence until at least one of its neighbors P_w ∈ N_u with state_w = OutCS changes P_w's state by the entry-sequence. If |CS_w(C)| = k_w, or P_w has a neighbor P_x ∈ N_w with |CS_x(C)| = k_x, then P_w cannot change its state by the entry-sequence until at least one of its neighbors P_y ∈ N_w with state_y = InCS changes P_y's state by the exit-sequence. If every process in the network is in such a situation, a deadlock occurs.
To avoid such a deadlock, we introduce a mechanism called the "sidetrack": some processes reserve grants that are used only when the system is suspected to be in a deadlock. Hence, in a normal situation, i.e., when no deadlock is suspected, the number of processes in the CS is limited. In this sense, LKCS is unfortunately a partial solution to the (l_i, k_i)-CS problem. Currently, a full solution to the problem is not known and is left as a future task.
The idea of the "sidetrack" in LKCS is explained as follows. We select a process, say P_LDR, with |N_LDR| ≥ 4 as a "leader", and each process P_q within two hops of the leader may allow at least l_q + 1 and at most k_q − 1 processes to be in the CSs locally in a normal situation. We assume that k_q − l_q ≥ 3, because (k_q − 1) − (l_q + 1) ≥ 1 must hold. Other processes P_i may allow at least l_i and at most k_i processes to be in the CSs locally in any situation, where k_i − l_i ≥ 1. The leader observes the number of neighbor processes that may be blocked, and when the leader itself and all of its neighbors can be blocked, the leader suspects that the system is in a deadlock situation. Then, the leader designates a process within one hop (including the leader itself) to use the "sidetrack" to break the chain of cyclic blocking. Because the designated process P_q uses one extra CS exit/entry, the number of processes in the CSs is at least (l_q + 1) − 1 = l_q and at most (k_q − 1) + 1 = k_q, and hence, LKCS does not deviate from the restriction of the (l_i, k_i)-CS problem. The suspicion by the leader process P_LDR is not always correct, i.e., P_LDR may suspect that the system is in a deadlock when this is not true. However, an incorrect suspicion does not violate the safety of the problem specification.

5.2. Details of LKCS

We explain the technical details of LKCS below. A formal description of LKCS for each process P_i is presented in Algorithms 3–7. The execution model of this algorithm is the same as in the previous section, except that the while statement is used in LKCS. By while (conditional expression) { statement }, a process repeatedly executes the statement between braces while the conditional expression is true; that is, it is blocked until the expression becomes false. While a process is blocked by this statement, it executes only the statement between braces and the message handlers; if it receives a message, it invokes the corresponding message handler. If the statement between braces is empty, the while statement behaves like a wait until statement (with the negated condition).
Algorithm 3 Local variables and macros for process P_i in algorithm LKCS
Local Variables:
enum at ∈ {MUTEX, MUTIN};
state_i ∈ {InCS, OutCS}, initially state_i = InCS if P_i ∈ CS(C_0), OutCS if P_i ∉ CS(C_0);
ts_i: integer, initially 1;
nGrants_i[at]: set of processID, initially ∅;
grantedTo_i[at]: set of (integer, processID), initially {(1, P_j) | P_j ∈ CS̄_i(C_0)} for at = MUTIN and {(1, P_j) | P_j ∈ CS_i(C_0)} for at = MUTEX;
pendingReq_i[at]: set of (integer, processID), initially ∅;
preemptingNow_i[at]: (integer, processID), initially nil;
Local Variable only for the leader P_LDR:
candidate_LDR: set of (integer, processID), initially ∅;
Macros:
L_i ≡ l_i if dist(P_LDR, P_i) > 2; l_i + 1 if dist(P_LDR, P_i) ≤ 2
K_i ≡ k_i if dist(P_LDR, P_i) > 2; k_i − 1 if dist(P_LDR, P_i) ≤ 2
Grant_LDR ≡ {P_j | (*, P_j) ∈ grantedTo_LDR[MUTIN] ∧ (*, P_j) ∈ grantedTo_LDR[MUTEX]}
Waiting_LDR ≡ |pendingReq_LDR[MUTIN]| + |pendingReq_LDR[MUTEX]| + |Grant_LDR|
Cond_i ≡ (at = MUTEX ∧ |grantedTo_i[MUTEX]| < K_i) ∨ (at = MUTIN ∧ |grantedTo_i[MUTIN]| < |N_i| − L_i + 1)
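The L_i and K_i macros tighten the (l_i, k_i) window by one on each side for processes within two hops of the leader, which is what reserves the sidetrack slack. A direct Python transcription (dist is assumed given; an illustration, not the paper's code):

```python
# L_i and K_i from Algorithm 3: processes within two hops of the leader
# reserve one unit of slack on each side of their (l_i, k_i) window.

def L(l_i, dist_to_leader):
    return l_i if dist_to_leader > 2 else l_i + 1

def K(k_i, dist_to_leader):
    return k_i if dist_to_leader > 2 else k_i - 1

# With k_i - l_i >= 3 near the leader, the tightened window stays non-empty:
l_i, k_i = 1, 4
print(L(l_i, 1), K(k_i, 1))   # near the leader: (2, 3), one grant of slack each way
print(L(l_i, 3), K(k_i, 3))   # far from the leader: (1, 4), unchanged
```

This is why the assumption k_q − l_q ≥ 3 is needed near the leader: it guarantees (k_q − 1) − (l_q + 1) ≥ 1, so the tightened window still admits at least one state change.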
Algorithm 4 Algorithm LKCS: exit-sequence and entry-sequence.
Exit-Sequence:
 ts_i := ts_i + 1;
 nGrants_i[MUTIN] := ∅;
 for each P_j ∈ (N_i ∪ {P_i}) send ⟨Request, MUTIN, ts_i, P_i⟩ to P_j;
 while (|nGrants_i[MUTIN]| < |N_i| + 1) {
  if (P_i = P_LDR ∧ Waiting_LDR = |N_LDR| + 1) {
   /* The configuration may be in a deadlock. */
   TriggerNomination();
   wait until (Waiting_LDR < |N_LDR| + 1);
  }
 }
 state_i := OutCS;
 for each P_j ∈ (N_i ∪ {P_i}) send ⟨Release, MUTEX, P_i⟩ to P_j;
 
Entry-Sequence:
 nGrants_i[MUTEX] := ∅;
 for each P_j ∈ (N_i ∪ {P_i}) send ⟨Request, MUTEX, ts_i, P_i⟩ to P_j;
 while (|nGrants_i[MUTEX]| < |N_i| + 1) {
  if (P_i = P_LDR ∧ Waiting_LDR = |N_LDR| + 1) {
   /* The configuration may be in a deadlock. */
   TriggerNomination();
   wait until (Waiting_LDR < |N_LDR| + 1);
  }
 }
 state_i := InCS;
 for each P_j ∈ (N_i ∪ {P_i}) send ⟨Release, MUTIN, P_i⟩ to P_j;
When the leader P_LDR suspects that the system is in a deadlock, it invokes the TriggerNomination() function, selects a process P_q within one hop (P_LDR itself or a neighbor of P_LDR) as a "trigger", and sends a ⟨Trigger⟩ message to P_q so that P_q issues a special request. Then, P_q sends a special request message (⟨RequestByTrigger⟩ message type) to each neighbor P_r ∈ N_q. This message also cancels the current request of P_q. After each P_r receives such a special request from P_q, P_r cancels P_q's request by deleting (*, P_q) from its pending list (pendingReq_r) and its granted list (grantedTo_r), inserts the special request (0, P_q) into its granted list, and immediately grants it by using the "sidetrack". The deleted element (*, P_q) is a request that P_r or another neighbor of P_q keeps waiting if the system is in a deadlock, and the inserted element (0, P_q) cannot be preempted because it has the maximum priority.
We explain the technical details of how the leader P_LDR suspects that the system is in a deadlock. When Waiting_LDR = |N_LDR| + 1, then |pendingReq_LDR[MUTIN]| + |pendingReq_LDR[MUTEX]| + |Grant_LDR| = |N_LDR| + 1. Because a request is not sent while a previous one is kept waiting, the two pending lists pendingReq_LDR[MUTIN] and pendingReq_LDR[MUTEX] are disjoint. Thus, if there is a neighbor P_q ∈ N_LDR that is not in these pending lists, then P_q's request is granted by P_LDR, but is kept waiting by a neighbor other than P_LDR in the deadlock configuration; that is, (*, P_q) is in both grantedTo_LDR lists. To make this suspicion possible, we assume that, at the leader process, (k_LDR − 1) − (l_LDR + 1) ≥ 1 holds, i.e., k_LDR − l_LDR ≥ 3. The underlying LMUTIN (resp., LMUTEX) algorithm sends at most |N_LDR| + 1 − (l_LDR + 1) (resp., k_LDR − 1) grants; the total number of grants of the two underlying algorithms is at most |N_LDR| − l_LDR + k_LDR − 1. Because k_LDR − l_LDR ≥ 3, we have |N_LDR| − l_LDR + k_LDR − 1 ≥ |N_LDR| + 2 > |N_LDR| + 1 = |N_LDR ∪ {P_LDR}|. This implies that there exists at least one process P_q ∈ (N_LDR ∪ {P_LDR}) that receives both the LMUTIN and the LMUTEX grant from P_LDR.
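The counting argument above can be checked numerically. The following sketch (our own illustration) verifies that, whenever k − l ≥ 3 at the leader, the two underlying algorithms together issue more grants than the leader's closed neighborhood has members, so some neighbor must hold both kinds of grant:

```python
# The leader issues at most |N| + 1 - (l + 1) LMUTIN grants and at most
# k - 1 LMUTEX grants.  When k - l >= 3, their sum is at least |N| + 2,
# which exceeds |N| + 1, so by the pigeonhole principle some process in
# the leader's closed neighborhood holds both kinds of grant.

def total_grants(n_nbrs, l, k):
    mutin = n_nbrs + 1 - (l + 1)   # = |N| - l
    mutex = k - 1
    return mutin + mutex           # = |N| - l + k - 1

for n_nbrs in range(4, 8):                 # |N_LDR| >= 4
    for l in range(0, n_nbrs - 2):
        k = l + 3                          # minimal gap allowed at the leader
        assert total_grants(n_nbrs, l, k) >= n_nbrs + 2 > n_nbrs + 1
print("pigeonhole bound holds for all sampled (|N|, l, k)")
```

With k = l + 3, the total simplifies to |N| + 2, matching the inequality used in the suspicion argument.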
Algorithm 5 Algorithm LKCS: message handlers (1).
On receipt of a ⟨Request, at, ts_j, P_j⟩ message:
 pendingReq_i[at] := pendingReq_i[at] ∪ {(ts_j, P_j)};
 if (Cond_i) {
  (ts_h, P_h) := deleteMin(pendingReq_i[at]);
  grantedTo_i[at] := grantedTo_i[at] ∪ {(ts_h, P_h)};
  send ⟨Grant, at, P_i⟩ to P_h;
 } else if (preemptingNow_i[at] = nil) {
  (ts_h, P_h) := max(grantedTo_i[at]);
  if ((ts_j, P_j) < (ts_h, P_h)) {
   preemptingNow_i[at] := (ts_h, P_h);
   send ⟨Preempt, at, P_i⟩ to P_h;
  }
 }
 
On receipt of a ⟨Grant, at, P_j⟩ message:
 if (P_j ∉ nGrants_i[at]) {
  nGrants_i[at] := nGrants_i[at] ∪ {P_j};
 }
 
On receipt of a ⟨Release, at, P_j⟩ message:
 if ((*, P_j) = preemptingNow_i[at]) preemptingNow_i[at] := nil;
 delete (*, P_j) from grantedTo_i[at];
 if (pendingReq_i[at] ≠ ∅) {
  (ts_h, P_h) := deleteMin(pendingReq_i[at]);
  grantedTo_i[at] := grantedTo_i[at] ∪ {(ts_h, P_h)};
  send ⟨Grant, at, P_i⟩ to P_h;
 }
 
On receipt of a ⟨Preempt, at, P_j⟩ message:
 if ((at = MUTEX ∧ state_i = OutCS) ∨ (at = MUTIN ∧ state_i = InCS)) {
  delete P_j from nGrants_i[at];
  send ⟨Relinquish, at, P_i⟩ to P_j;
 }
 
On receipt of a ⟨Relinquish, at, P_j⟩ message:
 preemptingNow_i[at] := nil;
 delete (*, P_j) from grantedTo_i[at], and let (ts_j, P_j) be the deleted item;
 pendingReq_i[at] := pendingReq_i[at] ∪ {(ts_j, P_j)};
 (ts_h, P_h) := deleteMin(pendingReq_i[at]);
 grantedTo_i[at] := grantedTo_i[at] ∪ {(ts_h, P_h)};
 send ⟨Grant, at, P_i⟩ to P_h;
  • If the system is in a deadlock, P q is definitely involved in the deadlock. Giving special grants by the sidetrack resolves the deadlock.
  • If the system is not in a deadlock, P q is not involved in any deadlock. Nevertheless, LKCS still gives special grants by the sidetrack in this case, because exact deadlock-detection mechanisms require global information collection and incur a large message complexity.
With this local observation at the leader P L D R , deadlocks are avoided with a small message overhead.
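The bookkeeping performed by the Request/Grant/Release handlers of Algorithm 5 can be condensed into a small sketch. The following Python fragment is our illustration (class and method names are ours; message transport is abstracted as a returned send-instruction, and the Preempt/Relinquish path is omitted): each process keeps a timestamp-ordered pending queue per algorithm type and issues a grant whenever a simple stand-in for the condition Cond i allows one more.

```python
import heapq

class CSState:
    """Per-process bookkeeping for one algorithm type (MUTIN or MUTEX).

    Condensed sketch of the Algorithm 5 handlers: requests wait in a
    timestamp-ordered queue, and the oldest one is granted whenever the
    local grant budget (a stand-in for Cond_i) permits.
    """

    def __init__(self, max_grants: int):
        self.max_grants = max_grants       # stand-in for Cond_i
        self.pending = []                  # min-heap of (ts, pid)
        self.granted = set()               # {(ts, pid)} granted, unreleased

    def on_request(self, ts: int, pid: str):
        heapq.heappush(self.pending, (ts, pid))
        return self._try_grant()

    def on_release(self, pid: str):
        # Drop (* , pid) from the granted set, then serve the queue.
        self.granted = {(t, p) for (t, p) in self.granted if p != pid}
        return self._try_grant()

    def _try_grant(self):
        # Grant the lowest-timestamp pending request if Cond_i still holds.
        if self.pending and len(self.granted) < self.max_grants:
            ts, pid = heapq.heappop(self.pending)
            self.granted.add((ts, pid))
            return ("Grant", pid)          # instruction: send Grant to pid
        return None
```

For example, with a budget of one grant, a second requester is served only after the first releases: `CSState(1)` grants P1 on its request, queues P2, and grants P2 upon P1's release.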
Algorithm 6 Algorithm LKCS: function TriggerNomination ( ) for the leader P L D R .
TriggerNomination ( ) {
   if ( | grantedTo L D R [ MUTIN ] | = | N L D R | − l L D R ) {
    if ( pendingReq L D R [ MUTEX ] ≠ ∅ ) {
      ( ts h , P h ) := deleteMin ( pendingReq L D R [ MUTEX ] ) ;
    } else {
     for-each P j ∈ Grant L D R {
       if ( ( ts j , P j ) ∈ grantedTo L D R [ MUTIN ] ∧ ( ts j , P j ) ∈ grantedTo L D R [ MUTEX ] ) {
        /* P j may be waiting for grant messages to enter. */
         candidate L D R := candidate L D R ∪ { ( ts j , P j ) } ;
       }
     }
      ( ts h , P h ) := min ( candidate L D R ) ;
      candidate L D R := ∅ ;
    }
    send Trigger , MUTEX , ts h to P h ;
   } else if ( | grantedTo L D R [ MUTEX ] | = k L D R − 1 ) {
    if ( pendingReq L D R [ MUTIN ] ≠ ∅ ) {
      ( ts h , P h ) := deleteMin ( pendingReq L D R [ MUTIN ] ) ;
    } else {
     for-each P j ∈ Grant L D R {
       if ( ( ts j + 1 , P j ) ∈ grantedTo L D R [ MUTIN ] ∧ ( ts j , P j ) ∈ grantedTo L D R [ MUTEX ] ) {
        /* P j may be waiting for grant messages to exit. */
         candidate L D R := candidate L D R ∪ { ( ts j , P j ) } ;
       }
     }
      ( ts h , P h ) := min ( candidate L D R ) ;
      candidate L D R := ∅ ;
    }
    send Trigger , MUTIN , ts h to P h ;
   }
}
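The candidate selection inside TriggerNomination can be sketched as follows. This is an illustrative fragment (the function name and the input representation as sets of ( ts , pid ) pairs are our assumptions): the leader prefers the oldest pending request, and otherwise nominates, among the processes holding both a MUTIN and a MUTEX grant, the one with the lowest timestamp.

```python
def nominate_trigger(pending, granted_mutin, granted_mutex):
    """Sketch of the leader's trigger choice in Algorithm 6.

    Prefer the oldest pending request; failing that, pick the oldest
    timestamp among processes holding both a MUTIN grant and a MUTEX
    grant (those that may be stuck waiting in a deadlock chain).
    Inputs are sets of (ts, pid) pairs; returns the nominee (ts, pid).
    """
    if pending:
        return min(pending)
    candidates = {
        (ts, pid) for (ts, pid) in granted_mutin
        if (ts, pid) in granted_mutex
    }
    return min(candidates)
```

Tuples compare lexicographically in Python, so `min` implements the "lowest timestamp, ties broken by process id" ordering that timestamp pairs ( ts j , P j ) use throughout the paper.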
Algorithm 7 Algorithm LKCS: message handlers (2).
On receipt of a Trigger , at , ts message:
 if ( state i = InCS ∧ at = MUTIN ∧ ts = ts i ∧ | nGrants i [ MUTIN ] | < | N i | + 1 ) {
    /* P i is waiting for grant messages to exit. */
     nGrants i [ MUTIN ] := ∅ ;
    for-each P j ∈ ( N i ∪ { P i } )  send RequestByTrigger , MUTIN , P i to P j ;
    /* Request message as a trigger. */
 } else if ( state i = OutCS ∧ at = MUTEX ∧ ts = ts i ∧ | nGrants i [ MUTEX ] | < | N i | + 1 ) {
    /* P i is waiting for grant messages to enter. */
     nGrants i [ MUTEX ] := ∅ ;
    for-each P j ∈ ( N i ∪ { P i } )  send RequestByTrigger , MUTEX , P i to P j ;
    /* Request message as a trigger. */
  }
  
On receipt of a RequestByTrigger , at , P j message:
  Delete ( * , P j ) from pendingReq i [ at ] ;
  Delete ( * , P j ) from grantedTo i [ at ] ;
 grantedTo i [ at ] := grantedTo i [ at ] ∪ { ( 0 , P j ) } ;
 send Grant , at , P i to P j ;
In the proposed algorithm, each process P i maintains the following local variables, where at is the algorithm type, MUTEX or MUTIN . These variables work in the same way as those of l i -LMUTIN, so we omit the detailed description here.
  • state i : The current state of P i : InCS or OutCS .
  • ts i : The current value of the logical clock [19].
  • nGrants i [ at ] : A set of process ids from which P i has obtained grants for exiting/entering the CS.
  • grantedTo i [ at ] : A set of timestamps ( ts j , P j ) for requests for P j ’s exiting/entering the CS that P i has granted but that P j has not yet released.
  • pendingReq i [ at ] : A set of timestamps ( ts j , P j ) for requests for P j ’s exiting/entering the CS that are pending.
  • preemptingNow i [ at ] : A timestamp ( ts j , P j ) of a request such that P i preempts a permission for P j ’s exiting/entering the CS, if a preemption is in progress.
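For illustration, the local variables listed above map naturally onto one record per process. The following sketch is ours, not part of LKCS's specification; the field names mirror the variables of the paper.

```python
from dataclasses import dataclass, field

def _per_type_sets():
    return {"MUTIN": set(), "MUTEX": set()}

@dataclass
class ProcessState:
    """Illustrative container for the local variables of process P_i.

    Each of the last four variables is kept separately for the two
    algorithm types, indexed by "MUTIN" or "MUTEX".
    """
    state: str = "OutCS"   # state_i: "InCS" or "OutCS"
    ts: int = 0            # ts_i: current logical-clock value
    n_grants: dict = field(default_factory=_per_type_sets)      # nGrants_i
    granted_to: dict = field(default_factory=_per_type_sets)    # grantedTo_i
    pending_req: dict = field(default_factory=_per_type_sets)   # pendingReq_i
    preempting_now: dict = field(
        default_factory=lambda: {"MUTIN": None, "MUTEX": None}  # preemptingNow_i
    )
```

Using `field(default_factory=...)` gives every process its own fresh sets, so no mutable state is accidentally shared between process instances.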

5.3. Proof of Correctness

In this subsection, we show the correctness of LKCS. We assume that the initial configuration is safe. First, we show, by contradiction, that no process P i with dist ( P L D R , P i ) > 2 can become unsafe. Next, we show that the other processes P j cannot become unsafe because they normally execute the algorithm as their instance ( l j + 1 , k j − 1 ) . Thus, we can derive the following lemma.
Lemma 4.
(Safety) For each process P i ∈ V , l i ≤ | CS i ( C ) | ≤ k i holds in any configuration C.
Proof. 
We assume that the initial configuration C 0 is safe. First, suppose for contradiction that a process P i for which dist ( P L D R , P i ) > 2 holds is the first to become unsafe, in some configuration C.
  • Suppose that | CS i ( C ) | < l i , that is, | CS i ( C ) ¯ | > | N i | + 1 − l i . Because | CS i ( C 0 ) | ≥ l i , consider the process P j ∈ N i ∪ { P i } that entered the OutCS state with the ( | N i | + 2 − l i ) -th lowest timestamp among the processes in CS i ( C ) ¯ . Then, P j must obtain permission to be in the OutCS state from each process in N j ∪ { P j } . This implies that P i receives a permission request Request , MUTIN , ts j , P j from P j and that P i sends a permission Grant , MUTIN , P i to P j . However, because P i grants at most | N i | + 1 − l i permissions to exit the CS at a time by the condition Cond i , P j cannot obtain a permission from P i ; this is a contradiction.
  • Suppose that | CS i ( C ) | > k i . Because | CS i ( C 0 ) | ≤ k i , consider the process P j ∈ N i ∪ { P i } that entered the InCS state with the ( k i + 1 ) -th lowest timestamp among the processes in CS i ( C ) . Then, P j must obtain permission to be in the InCS state from each process in N j ∪ { P j } . This implies that P i receives a permission request Request , MUTEX , ts j , P j from P j and that P i sends a permission Grant , MUTEX , P i to P j . However, because P i grants at most k i permissions to enter the CS at a time by the condition Cond i , P j cannot obtain a permission from P i ; this is a contradiction.
Next, we consider P i with dist ( P L D R , P i ) ≤ 2 . Note that the leader P L D R sends a trigger request Trigger , at , ts to exactly one of its neighbors or itself at a time. Let P q be the receiver. If P q does not request to invert its state as a trigger, we can argue in the same way as above, and l i + 1 ≤ | CS i ( C ) | ≤ k i − 1 holds because of the condition Cond i (of course, if P i = P L D R , then l L D R + 1 ≤ | CS L D R ( C ) | ≤ k L D R − 1 ). When P q requests to invert its state as a trigger by sending a message RequestByTrigger , at , P q , all of its neighbors P j grant it regardless of | CS j ( C ) | , and P q inverts its state regardless of | CS q ( C ) | . Thus, | CS i ( C ) | becomes l i + 1 − 1 = l i or k i − 1 + 1 = k i (if P i = P L D R , | CS L D R ( C ) | becomes l L D R + 1 − 1 = l L D R or k L D R − 1 + 1 = k L D R ). Therefore, P i does not become unsafe. ☐
Next, we consider the case in which a deadlock occurs in some configuration. The processes waiting for grant messages then constitute waiting chains of the deadlock unless at least one process on a chain changes its state. In that case, however, the leader process designates one of its neighbors or itself as a trigger, and the trigger changes its state by its preferential right. Therefore, we can derive the following lemma.
Lemma 5.
(Liveness) Each process P i changes into the InCS and OutCS states alternately infinitely often.
Proof. 
For contradiction, we assume that a deadlock occurs in a configuration C. Let D be the set of processes that cannot change their state, that is, the processes in the deadlock. First, assume that every process P u in D has state u = OutCS . Then, each of their neighbors P v has k v neighbors P w with state w = InCS . However, such neighbors P w are not in D and eventually change their state to OutCS . Thus, P u can eventually change its state; this is a contradiction. Therefore, D contains a process P u with state u = InCS that cannot change its state by the exit-sequence. Then, | CS u ( C ) | = l u holds, or P u has a neighbor P v ∈ N u with | CS v ( C ) | = l v . P u is waiting for grant messages and cannot change its state until at least one of its neighbors P w ∈ N u with state w = OutCS changes its state by the entry-sequence. If | CS w ( C ) | = k w holds or P w has a neighbor P x ∈ N w with | CS x ( C ) | = k x , then P w cannot change its state by the entry-sequence until at least one of its neighbors P y ∈ N w with state y = InCS changes its state by the exit-sequence. From this chain relationship, it is clear that a waiting chain is broken if at least one process on the chain changes its state. Thus, under the assumption, all processes in V are on such a chain in C, that is, V = D .
However, in such a configuration C, P L D R and all of its neighbors are waiting for grant messages from their neighbors. That is, their requests are in grantedTo L D R or pendingReq L D R , and Waiting L D R is equal to | N L D R | + 1 . Additionally, we assume that k L D R − l L D R ≥ 3 , and the number of grants P L D R can send with MUTIN (resp., MUTEX ) is | N L D R | + 1 − ( l L D R + 1 ) (resp., k L D R − 1 ≥ l L D R + 2 ). Because of safety, | grantedTo L D R [ MUTIN ] | = | N L D R | − l L D R or | grantedTo L D R [ MUTEX ] | = k L D R − 1 holds. Then, P L D R sends a Trigger , at , ts message to a neighbor P q , and P q becomes a trigger if P q is also waiting for grant messages. Each process P r that receives RequestByTrigger , at , P q grants the request regardless of | CS r ( C ) | . Then, P q can change its state, and after that, P L D R can change its state. Therefore, the waiting chain in C can be broken. This is a contradiction. ☐

5.4. Performance Analysis

Lemma 6.
The message complexity of LKCS for P i ∈ V is 6 ( | N i | + 1 ) in the best case and 12 ( | N i | + 1 ) in the worst case.
Proof. 
First, let us consider the best case.
  • For P i ’s exiting the CS, P i sends a Request , MUTIN , ts i , P i message to each process in N i ∪ { P i } ; each process P j in N i ∪ { P i } sends a Grant , MUTIN , P j message to P i ; then P i sends a Release , MUTEX , P i message to each process in N i ∪ { P i } . Thus, 3 ( | N i | + 1 ) messages are exchanged for P i ’s exiting the CS.
  • For P i ’s entering the CS, P i sends a Request , MUTEX , ts i , P i message to each process in N i ∪ { P i } ; each process P j in N i ∪ { P i } sends a Grant , MUTEX , P j message to P i ; then P i sends a Release , MUTIN , P i message to each process in N i ∪ { P i } . Thus, 3 ( | N i | + 1 ) messages are exchanged for P i ’s entering the CS.
Thus, the message complexity is 6 ( | N i | + 1 ) in the best case.
Next, let us consider the worst case.
  • For P i ’s exiting the CS, P i sends a Request , MUTIN , ts i , P i message to each process P j in N i ∪ { P i } . Then, P j sends a Preempt , MUTIN , P j message to the process P m to which P j has sent a Grant , MUTIN , P j message; P m sends a Relinquish , MUTIN , P m message back to P j ; and P j sends a Grant , MUTIN , P j message to P i . After P i exits the CS, P i sends a Release , MUTEX , P i message to each process P j in N i ∪ { P i } . Then, P j sends a Grant , MUTEX , P j message to the process with the highest priority in pendingReq j [ MUTEX ] . Thus, 6 ( | N i | + 1 ) messages are exchanged.
  • For P i ’s entering the CS, P i sends a Request , MUTEX , ts i , P i message to each process P j in N i ∪ { P i } . Then, P j sends a Preempt , MUTEX , P j message to the process P m to which P j has sent a Grant , MUTEX , P j message; P m sends a Relinquish , MUTEX , P m message back to P j ; and P j sends a Grant , MUTEX , P j message to P i . After P i enters the CS, P i sends a Release , MUTIN , P i message to each process P j in N i ∪ { P i } . Then, P j sends a Grant , MUTIN , P j message to the process with the highest priority in pendingReq j [ MUTIN ] . Thus, 6 ( | N i | + 1 ) messages are exchanged.
Thus, the message complexity is 12 ( | N i | + 1 ) in the worst case. ☐
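The counts in the proof of Lemma 6 can be cross-checked with a few lines of Python (illustrative; the function name is ours):

```python
def messages_per_transition(degree: int, worst_case: bool) -> int:
    """Messages exchanged for one CS entry or one CS exit of P_i (Lemma 6).

    Best case: Request + Grant + Release, each involving the |N_i| + 1
    processes of the closed neighborhood, i.e., 3 message rounds.
    Worst case additionally pays Preempt + Relinquish plus a Grant
    forwarded to a pending requester, i.e., 6 message rounds.
    """
    rounds = 6 if worst_case else 3
    return rounds * (degree + 1)

# One exit plus one entry reproduces the totals of Lemma 6.
assert messages_per_transition(4, worst_case=False) * 2 == 6 * (4 + 1)
assert messages_per_transition(4, worst_case=True) * 2 == 12 * (4 + 1)
```

Since the count depends only on the degree | N i | , this is the O ( Δ ) bound stated in Theorem 3.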
Theorem 3.
LKCS solves the generalized local ( l i , k i ) -critical section problem with a message complexity of O ( Δ ) , where Δ is the maximum degree of a network.

6. Conclusions

In this paper, we considered the generalized local ( l i , k i ) -critical section problem, a new variant of the critical section problem. Because this problem is useful for fault tolerance and load balancing in distributed systems, various future applications can be considered. We first proposed an algorithm for the generalized local l i -mutual inclusion problem. Next, we showed the generalized local complementary theorem. Using this theorem, we proposed an algorithm for the generalized local ( l i , k i ) -critical section problem.
In the future, we plan to perform extensive simulations to confirm the performance of our algorithms under various application scenarios. Additionally, we plan to improve the message and time complexity of the proposed algorithm and to design an algorithm that guarantees exactly l i ≤ | CS i ( C ) | ≤ k i at every process.

Acknowledgments

This work was supported by JSPS KAKENHI Grant Numbers 16K00018 and 26330015.

Author Contributions

Sayaka Kamei and Hirotsugu Kakugawa designed the algorithms. Sayaka Kamei analyzed the algorithms and wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dijkstra, E.W. Solution of a problem in concurrent programming control. Commun. ACM 1965, 8, 569.
  2. Saxena, P.C.; Rai, J. A survey of permission-based distributed mutual exclusion algorithms. Comput. Stand. Interfaces 2003, 25, 159–181.
  3. Yadav, N.; Yadav, S.; Mandiratta, S. A review of various mutual exclusion algorithms in distributed environment. Int. J. Comput. Appl. 2015, 129, 11–16.
  4. Kakugawa, H.; Fujita, S.; Yamashita, M.; Ae, T. Availability of k-coterie. IEEE Trans. Comput. 1993, 42, 553–558.
  5. Bulgannawar, S.; Vaidya, N.H. A distributed k-mutual exclusion algorithm. In Proceedings of the 15th International Conference on Distributed Computing Systems, Vancouver, BC, Canada, 30 May–2 June 1995; pp. 153–160.
  6. Chang, Y.I.; Chen, B.H. A generalized grid quorum strategy for k-mutual exclusion in distributed systems. Inf. Process. Lett. 2001, 80, 205–212.
  7. Abraham, U.; Dolev, S.; Herman, T.; Koll, I. Self-stabilizing l-exclusion. Theor. Comput. Sci. 2001, 266, 653–692.
  8. Chaudhuri, P.; Edward, T. An algorithm for k-mutual exclusion in decentralized systems. Comput. Commun. 2008, 31, 3223–3235.
  9. Reddy, V.A.; Mittal, P.; Gupta, I. Fair k mutual exclusion algorithm for peer to peer systems. In Proceedings of the 28th International Conference on Distributed Computing Systems, Beijing, China, 17–20 June 2008.
  10. Hoogerwoord, R.R. An implementation of mutual inclusion. Inf. Process. Lett. 1986, 23, 77–80.
  11. Kakugawa, H. Mutual inclusion in asynchronous message-passing distributed systems. J. Parallel Distrib. Comput. 2015, 77, 95–104.
  12. Kakugawa, H. On the family of critical section problems. Inf. Process. Lett. 2015, 115, 28–32.
  13. Lynch, N.A. Fast allocation of nearby resources in a distributed system. In Proceedings of the 12th Annual ACM Symposium on Theory of Computing, Los Angeles, CA, USA, 28–30 April 1980; pp. 70–81.
  14. Attiya, H.; Kogan, A.; Welch, J.L. Efficient and robust local mutual exclusion in mobile ad hoc networks. IEEE Trans. Mobile Comput. 2010, 9, 361–375.
  15. Awerbuch, B.; Saks, M. A dining philosophers algorithm with polynomial response time. In Proceedings of the 31st Annual Symposium on Foundations of Computer Science, St. Louis, MO, USA, 22–24 October 1990; Volume 1, pp. 65–74.
  16. Chandy, K.M.; Misra, J. The drinking philosophers problem. ACM Trans. Program. Lang. Syst. 1984, 6, 632–646.
  17. Beauquier, J.; Datta, A.K.; Gradinariu, M.; Magniette, F. Self-stabilizing local mutual exclusion and daemon refinement. In Proceedings of the International Symposium on Distributed Computing, Toledo, Spain, 4–6 October 2000; pp. 223–237.
  18. Khanna, A.; Singh, A.K.; Swaroop, A. A leader-based k-local mutual exclusion algorithm using token for MANETs. J. Inf. Sci. Eng. 2014, 30, 1303–1319.
  19. Lamport, L. Time, clocks, and the ordering of events in a distributed system. Commun. ACM 1978, 21, 558–565.
  20. Maekawa, M. A √N algorithm for mutual exclusion in decentralized systems. ACM Trans. Comput. Syst. 1985, 3, 145–159.
  21. Sanders, B.A. The information structure of distributed mutual exclusion algorithms. ACM Trans. Comput. Syst. 1987, 5, 284–299.
