Article

The Delay Time Profile of Multistage Networks with Synchronization

Industrial Engineering and Management, Ariel University, Ariel 40700, Israel
Mathematics 2023, 11(14), 3232; https://doi.org/10.3390/math11143232
Submission received: 23 June 2023 / Accepted: 11 July 2023 / Published: 23 July 2023
(This article belongs to the Special Issue Operations Research and Its Applications)

Abstract

The interaction between projects and servers has grown significantly in complexity; thus, the use of parallel processing has increased dramatically. However, it should not be ignored that parallel processing gives rise to synchronization constraints and delays, generating penalty costs that may overshadow the savings obtained from parallel processing. Motivated by this trade-off, this study investigates two special and symmetric systems of split–join structures: (i) a parallel structure and (ii) a serial structure. In the parallel structure, the project arrives, splits into m parallel groups (subprojects), each comprising n subsequent stages, and ends after all groups are completed. In the serial structure, the project requires synchronization after each stage. Employing a numerical study, this work investigates the time profile of the project by focusing on two types of delays: delay due to synchronization overhead (occurring due to the parallel structure), and delay due to overloaded servers (occurring due to the serial structure). In particular, the author studies the effect of the number of stages, the number of groups, and the utilization of the servers on the time profile and performance of the system. Further, this study shows the efficiency of lower and upper bounds for the mean sojourn time. The results show that the added time grows logarithmically with m (parallelism) and linearly with n (seriality) in both structures. However, comparing the two types of split–join structures shows that the synchronization overhead grows logarithmically under both parallelism and seriality; this yields an unexpected duality property of the added time in the serial system.

1. Introduction

Nowadays, the concept of a “project” is becoming much more general and multidimensional. The interaction between projects and servers has grown in complexity, originating from today’s practice that projects are predominantly parallel subprojects that involve parallel calculations. Examples of such departures from the traditional one-server-per-project model include data centers at Google, Microsoft, and Facebook, where parallelism can occur both at the hardware and software levels [1,2,3], process mining and business activities evaluations [4,5,6].
Split–join networks are a key modeling tool for parallelism in operations research, queueing models, and supply chain management. The basic split–join network (also known as the fork–join network) is a one-stage network (see Figure 1). A stream of projects arrives at the split node (the first pink triangle), where each project is split into m tasks that are allocated to m parallel servers (the gray circles). A task may have to wait in a queue (the black rectangle) until its server finishes all previous tasks. After the task is completed, it waits in a join-type queue (the second pink triangle) until all the other m − 1 tasks of the same project are completed. When all m tasks of the same project are completed, they rejoin (are synchronized) at the join node. The sojourn time is the time from the arrival of the project until its completion (i.e., until its departure from the join node). Here, the terminology “server” is used for anything that processes tasks, such as a machine, a single CPU core, or a single thread. Additionally, the label “queue” is used to represent the place where a task waits. Accordingly, there are two types of queues: a queue before each server, where a task is delayed because the server is busy, and a queue where a group waits until all the parallel groups are completed. In the remainder of the paper, the author refers to these two types of queues as operational queues and synchronization queues, respectively.
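As a concrete illustration of the basic split–join dynamics just described, the following sketch (the function and parameter names are mine, not from the paper) estimates the mean sojourn time by Monte Carlo simulation, running one Lindley waiting-time recursion per server:

```python
import random

def simulate_fork_join(lam, mu, m, num_projects, seed=0):
    """Estimate the mean sojourn time of a one-stage split-join (fork-join)
    network: Poisson(lam) project arrivals, m parallel M/M/1 servers with
    rate mu, FCFS queues, and a final synchronization (join) node."""
    rng = random.Random(seed)
    wait = [0.0] * m    # waiting time the arriving project sees at server i
    total = 0.0
    for _ in range(num_projects):
        serv = [rng.expovariate(mu) for _ in range(m)]
        # Project sojourn = response time of its slowest task.
        total += max(w + s for w, s in zip(wait, serv))
        # Lindley recursion per server for the next arrival.
        nu = rng.expovariate(lam)
        wait = [max(0.0, w + s - nu) for w, s in zip(wait, serv)]
    return total / num_projects

est = simulate_fork_join(lam=1.0, mu=2.0, m=2, num_projects=40000)
```

For λ = 1, μ = 2, m = 2 (ρ = 0.5), the estimate lands near 1.44, consistent with the exact two-node M/M/1 value (12 − ρ)/8 · 1/(μ − λ) of Nelson and Tantawi [30] cited below; for m = 1 the recursion reduces to a plain M/M/1 queue with mean sojourn 1/(μ − λ).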
A split–join network with a more general topological structure is illustrated in Figure 2. Here, the project starts with task 1, and then splits into two parallel groups, starting with tasks 2 and 8, respectively. After finishing task 2, the project continues to three parallel groups, the first of which consists of two consecutive tasks (tasks 3 and 6). After finishing all three groups, the project continues with task 7. After finishing tasks 7 and 8, the project is completed. The operational queues are marked by striped black rectangles, and the join (synchronization) queues are marked by blue half-ellipses. Note that the splitting action takes no time; similarly, the joining action takes no time (after all the relevant groups have arrived).
The scope of applications that use the split–join structure in reality is huge. It includes data center services, web searching, social networking and big data analysis, systems with a wide range of queue policies and service time distributions, systems with multiple servers per split node for failure finding and load balancing, and systems with a varying number of parallel tasks or sharing services due to limited resources [3,7,8]. In manufacturing, the split–join structure is used in assembly systems of high-tech equipment manufacturers (OEMs) that require several parts to be processed simultaneously at separate work stations or plant locations [9,10,11]. In project management and supply chain management, the split–join structure is typically used for representing the arrival of an order composed of several different items or products from a vendor, or for synchronization between arriving and departing vehicles at the transshipment location [12,13]. The split–join structure is also prevalent in healthcare and, in particular, in emergency departments, where multiple customer classes share multiple processing resources [14,15], in supply chain networks of high-tech manufacturers with multiple suppliers that each produce a unique component of the product [16,17,18,19], and in ocean transport for managing container terminals [20].
This study investigates two special split–join networks with a symmetric topological structure. The first is the parallel split–join network (see Figure 3). The project arrives and splits into m parallel groups, each including n subsequent stages allocated to n different servers (in total, m × n tasks). The project leaves after all its groups are completed. It is assumed that projects arrive according to a renewal process and are served in a first-come-first-served (FCFS) order. Importantly, note that the FCFS policy is used in many settings, such as the Google Borg task scheduler and the management of multiserver jobs in the cloud-computing industry [8]. However, note that other scheduling rules, such as the cμ rule, are also examined in the literature (see, e.g., [14,15,21]).
In reality, we are indeed witnessing that projects must be processed in parallel. However, in practice, we cannot ignore that parallel processing requires synchronization. Characterizing this constraint, this study introduces the second type of split–join network, namely, the serial split–join network. The serial network has features similar to the parallel split–join network, except that it has, in addition, multiple synchronization queues. Specifically, a project arrives and splits into m parallel groups, each of which is composed of n stages. When all groups of the same project complete stage j, they are obliged to reunite, after which they can continue to the next, (j + 1)-th, stage; this procedure is the same for all stages. The project can exit the system when all groups complete the last (n-th) stage. Here, there are n × m operational queues, but n synchronization queues.
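To make the serial discipline concrete, the following sketch (names are mine, not from the paper) simulates the serial split–join network by propagating a per-stage synchronization barrier: no group may begin stage j + 1 before all m groups have cleared stage j.

```python
import random

def simulate_serial_split_join(lam, mu, m, n, num_projects, seed=1):
    """Estimate the mean sojourn time of the serial split-join network:
    a project splits into m groups of n stages, and all m groups must
    re-synchronize after every stage before starting the next one."""
    rng = random.Random(seed)
    arrival = 0.0
    free = [[0.0] * n for _ in range(m)]    # when server (i, j) frees up
    total = 0.0
    for _ in range(num_projects):
        arrival += rng.expovariate(lam)     # Poisson project arrivals
        barrier = arrival                   # synchronization point before stage 1
        for j in range(n):
            finishes = []
            for i in range(m):
                start = max(barrier, free[i][j])  # wait for sync and for the server
                finish = start + rng.expovariate(mu)
                free[i][j] = finish               # FCFS: next project queues behind
                finishes.append(finish)
            barrier = max(finishes)         # stage-j synchronization of all m groups
        total += barrier - arrival
    return total / num_projects
```

Because every stage ends with a barrier, each of the n stages contributes a max-of-m delay, so the sojourn time is pathwise at least that of the parallel network, where only one final synchronization occurs.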
Although the basic split–join network has been studied intensively in the literature, e.g., [22,23,24,25,26,27,28], no analytical expressions are known for the joint steady-state number of projects in the queues, nor for the mean sojourn time. There is agreement that split–join networks are another of those infamous queueing systems for which the stationary distribution is intractable; the complexity of the analysis is explained by the dependence of the times of the projects, due to their common moments of arrival [3,29]. Most exact results on split–join networks are limited to systems with two parallel servers. For example, Nelson and Tantawi [30] and Nelson et al. [31] obtain an exact expression for a two-node M/M/1 homogeneous split–join network where the jobs arrive according to a Poisson process and the service times are i.i.d. exponentially distributed random variables (r.v.s). For split–join networks with more than two parallel servers, only approximations for the performance measures are obtained. For example, Nelson and Tantawi [30] used a scaling approximation technique to approximate the mean sojourn time in an M/M/1 split–join system. Ko and Serfozo [27] provided results on G/M/1 queues, and Fiorini [32] and Wang et al. [8] studied M/G/1 queues. Networks with different types of products at different machines were discussed in Ding [9]. For general GI/G/1 split–join networks, upper and lower bounds on the mean sojourn time were derived by Baccelli and Makowski [22,23], Baccelli et al. [24,25], Lebrecht and Knottenbelt [33], Kemper and Mandjes [34], and, more recently, Enganti et al. [7] and Gorbunova and Vishnevsky [2]. Ko and Serfozo [26] developed bounds and approximate expressions for evaluating the mean sojourn time and queue length distribution at join nodes. Takahashi et al. [35] used matrix analytic methods and assumed finite queue sizes, called buffers. Using the latter method, Qiu et al. [36] developed an efficient algorithm to approximate the distribution of response time in homogeneous multi-MAP/PH/1 split–join networks. Nelson et al. [31] compared four different structures of split–join networks: with/without central queueing and splitting/not splitting projects. Most of the above papers focus on systems where the number of servers is finite. Work on the heavy-traffic processing limit has been performed by, e.g., Varma and Makowski [37], Tan and Knessl [38], Knessl [39], Kushner [40], Atar et al. [14], Wang et al. [8], Schol et al. [19], Nguyen et al. [3], Zeng et al. [41], and Meijer et al. [42].
The above literature review shows that a great deal of effort has been devoted to the understanding and analysis of split–join queueing networks. This effort mainly focuses on deriving the sojourn time of a project, usually when there is only one type of project and two parallel servers. Indeed, the sojourn time is known to be intractable in more complex split–join networks; thus, only (upper and lower) bounds and approximations have been provided for them.
As far as the author knows, no explicit results are available for networks with more than two parallel groups or with more synchronization points. Despite their simple and symmetric structure, parallel and serial split–join networks have hardly been investigated and, at present, very little is known about their performance and time profile. Note that, in practice, parallel processing is subject to synchronization constraints and delays generating penalty costs that can offset the gains obtained from such processing. These include the cost of intermediate storage of uncompleted products, or the cost of memory in computer networks where subprojects are processed by different servers and then wait in the synchronization buffer. Thus, a thorough study of the performance and time profile of these systems is lacking but necessary. This study aims to take a first step toward filling this research gap.
To this end, the author focuses on the mean sojourn time of the parallel and the serial systems. The analysis starts with the derivation of lower and upper bounds for the mean sojourn time; each bound emphasizes a different aspect of the time profile. Using extensive simulations, the efficiency of these bounds is evaluated, providing some insights into the interplay between operational delays (due to overloaded servers or limited resources) and synchronization delays. Then, a sensitivity analysis is carried out to study how the time profile is affected by different parameters of the networks, i.e., by the number of groups m, the number of stages n, and the utilization of the servers. Clearly, operational and synchronization delays are interdependent; thus, exactly attributing portions of the time profile to the different types of delays is not feasible. However, it is reasonable to attribute the number of groups to the synchronization delays, and the number of stages to the operational delays. In the serial split–join network, the overhead incurred due to additional synchronizations is significant; thus, comparing the serial network with the parallel network can improve our understanding of the time profile, the types of delays, and the impact of the various parameters on the mean sojourn time.
The contributions of this paper are summarized as follows. Two multistage split–join networks are introduced: The parallel network and the serial network. To the best of the author’s knowledge, such networks have hardly been discussed in the literature (with the exception of the two-stage serial split–join network discussed in Ko [29]). These networks can serve as applied models (or as effective approximations) in many fields, such as industry and manufacturing, computer modelling, reliability systems, and supply chain management. Moreover, while most existing studies on split–join networks deal with the sojourn times, this study differs by focusing on the time profile and distinguishing between two types of delays: synchronization delay (due to parallelism) and operational delay (due to seriality). Diagnosing the time profile and studying the effect of the parameters on the different types of delays can serve as a practical tool for decision makers in setting the optimal schedule for the project stages and allocating optimal resources at each stage in order to maximize its economic profitability. Furthermore, motivated by real-world examples, this study compares, numerically, the two networks and studies the effect of imposing synchronization constraints on the system’s performance and time profile. Doing so provides a better understanding of the interplay between parallelism and seriality, and addresses how to deal with situations of adding constraints or a change in resource allocation.
The main results can be summarized as follows: (i) synchronization delays have a logarithmic impact O(ln m) on the mean sojourn time, while operational delays have a linear impact O(n); (ii) when increasing the number of stages n and the number of groups m, the impact of these delays diminishes; (iii) the impact of parallelism and seriality is (almost) independent of the utilization; (iv) contrary to expectations, the serial network is less sensitive to changes (and sometimes even robust to them) compared to the parallel one; (v) comparing the two networks shows that the ratio of the mean sojourn times increases logarithmically in both n and m; as a result, this further reveals a kind of duality property, implying that the extra time in the serial network due to parallelism and that due to seriality are relatively similar; (vi) finally, the results show that, in most cases, increasing the servers’ utilization slightly obscures the differences between the systems.
In summary, this analysis may shed some light on the time profile of multistage networks and, in particular, on the interplay between different types of delays: delays due to overloaded servers (seriality) and delays due to synchronization constraints (parallelism). As such, the analysis presented here can be used for estimation purposes when designing optimal multistage networks and allocating resources when more constraints are needed or, alternatively, in situations when reducing unnecessary synchronizations is possible.
The rest of the paper is organized as follows. Section 2 describes the parallel split–join network. Section 3 is devoted to preliminary results. Analyses of parallel and serial split–join networks are presented in Section 4 and Section 5, respectively. Finally, Section 6 concludes and discusses potential avenues of future research.

2. Description of the Parallel Split–Join Network

A project arrives and splits into m parallel groups, each including n subsequent stages allocated to n different servers (in total, m × n tasks). When group i, i { 1 , , m } , completes its n stages, it enters the synchronization queue, waiting for the other ( m 1 ) groups of the same project to be completed. The project leaves after all its groups are completed. A typical parallel split–join network is presented in Figure 3.
Note that, after splitting, each group goes through all n stages/tasks; thereafter, it waits at the synchronization node (join point). Thus, the system has one synchronization queue and n × m operational queues. The definitions and notations to be used throughout this paper are now introduced:
  • Let A_k, k = 1, 2, …, with A_0 = 0, be the arrival time of the k-th project. Assume that the arrival process is a renewal process with rate λ_k (specifically, this paper focuses on the Poisson process with rate λ). Let ν_k = A_k − A_{k−1}, k = 1, 2, …, be the inter-arrival time between the k-th project and the (k − 1)-th project. Thus, ν_k, k = 1, 2, …, are independent and identically distributed (i.i.d.) random variables (r.v.s) with mean 1/λ_k;
  • Upon arrival, each project k is split into m parallel groups. Alternatively, one can think of this as m parallel treatments that the project may simultaneously go through. Each group i, i = 1 , , m includes n sequential stages/tasks. Completion of stage j, j = 1 , , n 1 enables the group to continue to stage j + 1 . In what follows, the index i to group i = 1 , , m , the index j to stage j = 1 , , n , and the double-index ( i , j ) to task j in group i are used;
  • Each task is allocated independently to its own station. The station is characterized by an infinite queue and a single server. The queue is managed according to the FCFS discipline. As a result, task (i, j) of project k cannot enter the server before the tasks (i, j) of projects 1, 2, …, k − 1;
  • Let S_k(i, j) be the service time of task (i, j) of project k. Assume that the times S_k(i, j) are independent r.v.s in i, j, and k, having an exponential distribution with rate μ_k(i, j);
  • A project k is completed when all its groups finish their n stages. Assume that the time of that final synchronization is negligible. Thus, when all groups are finished, they are reunited instantaneously, and the project immediately leaves the system. Clearly, the FCFS discipline implies that a project cannot leave before its predecessors;
  • A sufficient and necessary condition for the stability of station (i, j) is E(S_k(i, j))/E(ν_k) = λ_k/μ_k(i, j) < 1, k = 1, 2, …. The system is stable if ρ_k(i, j) = λ_k/μ_k(i, j) < 1 for all i, j, k [21,23];
  • Let W_k(i, j) be the waiting time of project k at station (i, j) (i.e., the operational delay), and denote T_k(i, j) = S_k(i, j) + W_k(i, j) as the sojourn time at the station (i.e., the waiting time plus the service time).
A snapshot of the system at time t ≥ 0 can be modeled by the m-column vector L(t) = (L_1(t), L_2(t), …, L_m(t))^T, where L_i(t) is an n-row vector L_i(t) = (L_{i,1}(t), L_{i,2}(t), …, L_{i,n}(t)). The component L_{i,j}(t) indicates the number of tasks at station (i, j) at time t (i.e., L_{i,j}(t) − 1 tasks are waiting in the queue, along with one task at the server). Recall that the synchronization action at the end join point does not take any time. In addition, let N_{i,P}(t), i = 1, …, m, be the number of tasks by which group i lags behind the most loaded group (i.e., its backlog at the join point) at time t. It is easy to verify that N_{i,P}(t) satisfies
$$N_{i,P}(t) \;=\; \max_{1 \le i' \le m} \sum_{j=1}^{n} L_{i',j}(t) \;-\; \sum_{j=1}^{n} L_{i,j}(t).$$
Example 1.
Let m = 4, n = 5, and assume that project 7 arrives at time t. Figure 4 demonstrates the system as observed by project 7. Here, L(t−) = ((0, 0, 1, 1, 2), (0, 3, 2, 0, 1), (1, 0, 1, 1, 3), (0, 0, 0, 0, 0))^T. (Use t− (t+) for the time just before (after) time t.) Hence, project 7 enters service immediately at stations (1, 1), (2, 1), and (4, 1), waits in the queue at station (3, 1), and L(t+) = ((1, 0, 1, 1, 2), (1, 3, 2, 0, 1), (2, 0, 1, 1, 3), (1, 0, 0, 0, 0))^T. We also see that max_{1≤i≤4} Σ_{j=1}^{5} L_{i,j}(t+) = max{5, 7, 7, 1} = 7, leading to N_{1,P}(t+) = 2, N_{2,P}(t+) = 0, N_{3,P}(t+) = 0, and N_{4,P}(t+) = 6. Examples of possible transitions from L_1(t+) = (1, 0, 1, 1, 2) are, e.g., at rate μ_7(1, 1), we reach (0, 1, 1, 1, 2), and at rate μ_6(1, 3), we reach (1, 0, 0, 2, 2). Similarly, from L_2(t+) = (1, 3, 2, 0, 1), we reach (0, 4, 2, 0, 1) and (1, 2, 3, 0, 1) at rates μ_7(2, 1) and μ_4(2, 2), respectively.
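The bookkeeping in Example 1 can be checked mechanically. The following sketch (the function name is mine, not from the paper) evaluates the displayed formula for N_{i,P}(t) on the snapshot taken just after the arrival:

```python
def sync_queue_sizes(L):
    """Evaluate N_{i,P}(t) = max_{i'} sum_j L_{i',j}(t) - sum_j L_{i,j}(t)
    for every group i, given the snapshot matrix L (one row per group)."""
    sums = [sum(group) for group in L]   # total tasks present in each group
    peak = max(sums)                     # the most loaded group
    return [peak - s for s in sums]      # backlog of each group at the join point

# Snapshot L(t+) from Example 1: four groups, five stations each.
L_plus = [(1, 0, 1, 1, 2), (1, 3, 2, 0, 1), (2, 0, 1, 1, 3), (1, 0, 0, 0, 0)]
print(sync_queue_sizes(L_plus))  # -> [2, 0, 0, 6]
```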
As mentioned in Section 1, accurate analytical analysis of the parallel network is complicated and intractable. Thus, upper and lower bounds for the sojourn time of the parallel network are derived; then, a numerical analysis is used to evaluate the efficiency of these bounds and to study the time profile of the system. The author first presents some preliminary results to be used.

3. Preliminary Results

3.1. Tandem System

Assume an M/M/1 system with a Poisson arrival process with rate λ , exponentially distributed service time with rate μ , and utilization ρ = λ / μ < 1 . It is well known that the sojourn time T in such a system is an exponentially distributed r.v. with rate ( μ λ ) and mean 1 / ( μ λ ) . Applying Burke’s theorem [43], the departure process is also a Poisson process with rate λ .
Next, consider a tandem system that consists of n stations, each with one server and an infinite queue. The service time of server j is an independent r.v. with distribution exp(μ_j). Customers join the first queue according to a Poisson process with rate λ and, on completing service, immediately enter the next queue. We have the following:
  • The departure process from the first station (server), which is now also the arrival process of the second station (queue), is a Poisson process with rate λ , provided that the queue is in equilibrium. This can be achieved if λ < μ 1 [43];
  • Recursively, it is easy to prove that the departure rate from station j ( j = 1 , , n 1 ) , which is now also the arrival process of the subsequent station j + 1 , is a Poisson process with rate λ , provided that λ < μ j ;
  • The sojourn times at station j, j = 1 , , n are mutually independent [44].
As a result, we obtain Corollary 1.
Corollary 1.
Under the condition that λ < min_{j=1,…,n} {μ_j}, the sojourn time T_j at station j is an exponentially distributed r.v. with rate (μ_j − λ); the total sojourn time at the tandem system Σ_{j=1}^{n} T_j has a hypoexponential (generalized Erlang) distribution with average

$$E\left[ \sum_{j=1}^{n} T_j \right] = \sum_{j=1}^{n} \frac{1}{\mu_j - \lambda}.$$
Remark 1.
Note that the hypoexponential distribution is a special case of the phase-type (PH) distribution family with representation (α, T) of order n, where α is the initial probability (1 × n) vector and T is the (n × n) transition rate matrix among the transient states. More about the PH distribution can be found in Latouche and Ramaswami [45]. For the above tandem system, the (1 × n) vector α and the (n × n) matrix T are given by

$$\alpha = (1, 0, \ldots, 0), \qquad T = \begin{pmatrix} -(\mu_1 - \lambda) & \mu_1 - \lambda & & \\ & -(\mu_2 - \lambda) & \ddots & \\ & & \ddots & \mu_{n-1} - \lambda \\ & & & -(\mu_n - \lambda) \end{pmatrix}.$$
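Corollary 1 and Remark 1 can be cross-checked numerically. The sketch below (the helper name is mine; pure Python, no external libraries) computes the mean total sojourn time both from the closed-form sum and from the PH mean α(−T)^{-1}𝟙, exploiting the fact that −T is upper bidiagonal, so the linear system solves by back-substitution:

```python
def tandem_mean_sojourn(mus, lam):
    """Mean total sojourn time of the tandem system, computed two ways:
    (1) the closed-form sum of 1/(mu_j - lam) from Corollary 1, and
    (2) the phase-type mean alpha (-T)^{-1} 1, solved by back-substitution
    since -T is upper bidiagonal with diagonal entries mu_j - lam."""
    rates = [mu - lam for mu in mus]         # phase exit rates mu_j - lam
    assert all(r > 0 for r in rates), "stability requires lam < min mu_j"
    closed_form = sum(1.0 / r for r in rates)
    # Solve (-T) y = 1; the PH mean is then alpha . y = y[0].
    n = len(rates)
    y = [0.0] * n
    y[-1] = 1.0 / rates[-1]
    for j in range(n - 2, -1, -1):
        # Row j of (-T): rates[j] * y[j] - rates[j] * y[j + 1] = 1.
        y[j] = (1.0 + rates[j] * y[j + 1]) / rates[j]
    return closed_form, y[0]

cf, ph = tandem_mean_sojourn([2.0, 3.0, 5.0], 1.0)
print(cf, ph)  # -> 1.75 1.75  (= 1/1 + 1/2 + 1/4)
```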

3.2. Associated Random Variables

Definition 1.
The ℝ-valued r.v.s {X_1, …, X_K} are said to be associated if and only if

$$E\left[ f(X)\, g(X) \right] \;\ge\; E\left[ f(X) \right] E\left[ g(X) \right], \qquad X = (X_1, \ldots, X_K),$$

for all monotonic non-decreasing mappings f, g : ℝ^K → ℝ for which the expectations exist [24].
Definition 2.
The R-valued r.v.s { X ¯ 1 , , X ¯ K } form independent versions of the r.v.s { X 1 , , X K } if
(i)
the r.v.s { X ¯ 1 , , X ¯ K } are mutually independent;
(ii)
for every 1 k K , the r.v.s X k and X ¯ k have the same probability function.
The following properties are an easy consequence of Definitions 1 and 2.
Properties (Baccelli and Makowski [23]):
P1. Independent r.v.s are associated;
P2. The union of independent collections of associated r.v.s forms a set of associated r.v.s;
P3. Any subset of a family of associated r.v.s forms a set of associated r.v.s;
P4. Any monotonic non-decreasing function of associated r.v.s generates a set of associated r.v.s;
P5. If the r.v.s {X_1, …, X_K} are associated, then the inequality

$$P\left( \max_{1 \le k \le K} X_k \le x \right) \;\ge\; P\left( \max_{1 \le k \le K} \bar{X}_k \le x \right) = \prod_{k=1}^{K} P\left( \bar{X}_k \le x \right)$$

holds true for all x in ℝ.

4. The Parallel Split–Join System: Analysis and Bounds

Let us start with some important results for the parallel system. Recall that projects arrive according to a renewal process and the service times are exponential.
Claim 1.
Generalized Lindley equation. The classic Lindley equation [46] for the GI/G/1 system can be expanded to the parallel system as follows. For i ,   j , and k, we have
$$W_{k+1}(i,j) \;=\; \left[\, \sum_{l=1}^{j} T_k(i,l) \;-\; \sum_{l=1}^{j-1} T_{k+1}(i,l) \;-\; \nu_{k+1} \right]^{+}, \qquad k = 1, 2, \ldots,\; j = 1, \ldots, n,\; i = 1, \ldots, m,$$

where [x]^+ = max(x, 0).
Proof. 
The proof is given in Appendix A. □
Assume project k arrives. Let T k ( i ) be the total sojourn time of group i , i.e., the time elapsed from stage 1 to n (not including a possible final synchronization delay), and let T k be the total sojourn time of project k in the system. Immediately after splitting, each group continues independently until the final synchronization; thus, we obtain
$$T_k(i) = \sum_{j=1}^{n} T_k(i,j), \quad i = 1, \ldots, m; \qquad T_k = \max_{i=1,\ldots,m} \{ T_k(i) \}.$$
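The generalized Lindley recursion and the maximum defining the project sojourn time translate directly into a simulation of the parallel system. A sketch under the Poisson/exponential assumptions used later in the paper (names are mine, not from the paper): it tracks completion times via the equivalent recursion C_k(i, j) = max(C_k(i, j − 1), C_{k−1}(i, j)) + S_k(i, j).

```python
import random

def simulate_parallel_split_join(lam, mu, m, n, num_projects, seed=2):
    """Estimate the mean sojourn time of the parallel split-join network:
    m tandems of n FCFS stations fed by common Poisson arrivals, with a
    single final synchronization of the m groups."""
    rng = random.Random(seed)
    arrival = 0.0
    free = [[0.0] * n for _ in range(m)]     # when server (i, j) frees up
    total = 0.0
    for _ in range(num_projects):
        arrival += rng.expovariate(lam)
        finish_times = []
        for i in range(m):
            t = arrival                      # group i starts at the split
            for j in range(n):
                # C_k(i, j) = max(C_k(i, j-1), C_{k-1}(i, j)) + S_k(i, j)
                t = max(t, free[i][j]) + rng.expovariate(mu)
                free[i][j] = t               # FCFS: next project queues behind
            finish_times.append(t)
        total += max(finish_times) - arrival # single final synchronization
    return total / num_projects
```

Subtracting the arrival epoch from the latest group finish realizes T_k = max_i T_k(i); the inner recursion is the completion-time form of the generalized Lindley equation above.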
Claim 2.
(1). The set of random variables { W l ( i , j ) , S l ( i , j ) , ν l , i = 1 , , m , j = 1 , , n , l = 1 , , k } are associated.
(2). The set of random variables { T k ( i ) , i = 1 , , m } are associated.
Proof. 
The proof of Claim 2(1) is obtained by applying Claim 1 and a double induction on i and j, and is detailed in Appendix B. Applying P3, we obtain that any subset of a family of associated r.v.s forms a set of associated r.v.s. Now, consider the set { W k ( i , j ) , S k ( i , j ) , i = 1 , , m , j = 1 , , n } , and note that
$$T_k(i) = \sum_{j=1}^{n} \left[ W_k(i,j) + S_k(i,j) \right], \qquad i = 1, \ldots, m.$$
The times T k ( i ) , i = 1 , , m are nondecreasing functions of associated r.v.s, and, thus, by applying P4, we obtain that they are associated, which completes the proof of Claim 2(2). □
Conclusion 1.
Applying P5 yields that
$$P\left( \max_{1 \le i \le m} T_k(i) > t \right) \;\le\; 1 - \prod_{i=1}^{m} P\left( \bar{T}_k(i) \le t \right).$$
Conclusion 1 enables us to compare two systems. The left side of Equation (4) refers to the parallel system. Here, applying (2) yields that P(max_{1≤i≤m} T_k(i) > t) = P(T_k > t). As for the right side of Equation (4), recall that the set {T̄_k(i), i = 1, …, m} forms an independent version of {T_k(i), i = 1, …, m}. Hence, an alternative and useful way to describe the right side of Equation (4) is to consider a similar parallel system except for the common arrival; i.e., there are m independent renewal arrival streams of groups, each group comprising n stages, synchronized at the end (we refer to this system as a parallel–independent system). Now, let T̄_k be the sojourn time of project k in such a system. We have
$$1 - \prod_{i=1}^{m} P\left( \bar{T}_k(i) \le t \right) = 1 - P\left( \bar{T}_k(1) \le t \right) \cdots P\left( \bar{T}_k(m) \le t \right) = 1 - P\left( \bar{T}_k(1) \le t, \ldots, \bar{T}_k(m) \le t \right) = 1 - P\left( \max_{1 \le i \le m} \bar{T}_k(i) \le t \right) = P\left( \max_{1 \le i \le m} \bar{T}_k(i) > t \right) = P\left( \bar{T}_k > t \right).$$
Integrating all yields
$$P(T_k > t) \;\le\; P(\bar{T}_k > t), \qquad t \ge 0.$$
Conclusion 2.
The parallel system has a lower probability of a long sojourn time compared to that of the parallel-independent system; i.e., an initial synchronization stochastically reduces the sojourn time.
Let T_{m,n} = lim_{k→∞} T_k and T̄_{m,n} = lim_{k→∞} T̄_k be the steady-state sojourn times of the parallel system and the parallel–independent system, respectively. Equation (6) immediately implies that P(T_{m,n} > t) ≤ P(T̄_{m,n} > t) for t ≥ 0. Consequently,

$$E(T_{m,n}) = \int_{0}^{\infty} P(T_{m,n} > t)\, dt \;\le\; \int_{0}^{\infty} P(\bar{T}_{m,n} > t)\, dt = E(\bar{T}_{m,n}).$$
Conclusion 3.
The interdependency created as a result of the joint arrival (in other words, the initial synchronization) reduces the mean sojourn time of the parallel system compared to the mean sojourn time of the parallel–independent system.
Remark 2.
Consider constant inter-arrival times, i.e., ν_k = a for some fixed a, k = 1, 2, …. Using Claim 1, it is easy to verify that the sets of waiting times {W_k(i, j), j = 1, …, n}, i = 1, …, m, are mutually independent across groups. As a result, the group sojourn times T_k(i) = Σ_{j=1}^{n} [W_k(i, j) + S_k(i, j)], i = 1, …, m, are also mutually independent. The immediate conclusion is that constant inter-arrival times lead to the parallel–independent system.

5. A Special Case: The Poisson Arrival Process

5.1. Lower and Upper Bounds

This section studies how the system performance is affected by the parameters; it focuses on the impact of the number of stages ( n ) , the number of groups ( m ) , and the servers’ utilization ( ρ ) on the mean sojourn time. It assumes a Poisson arrival process with rate λ k = λ , i.i.d. exponential service times with rate μ k ( i , j ) = μ and utilization ρ = λ / μ < 1 . Due to the complexity of performing an exact analysis, three bounds for the mean sojourn time are presented and then a numerical analysis is applied. Start by presenting two lower bounds (marked by subscript 1 and 2, respectively) and one upper bound (marked by subscript 3), as follows:
  • The first lower bound is obtained by neglecting operational queues and assuming an empty system for all arrivals (System 1). In this case, the sojourn time is the maximum of m i.i.d. services, each composed of n exponential stages; i.e., we have m i.i.d. r.v.s T_1(i) ∼ Erlang(n, μ). Let T_1 be the sojourn time in this system. Then,

    $$T_1 = \max_{1 \le i \le m} \{ T_1(i) \}, \qquad E[T_1] = \int_{0}^{\infty} \left( 1 - \left[ 1 - \sum_{k=0}^{n-1} \frac{e^{-\mu t} (\mu t)^k}{k!} \right]^m \right) dt;$$
  • The second lower bound is obtained by assuming no splitting into m groups, i.e., m = 1 (System 2). Let T_2 be the sojourn time. It is well known that T_2 ∼ Erlang(n, μ − λ), and, thus, E(T_2) = n/(μ − λ).
    Comparing Systems 1 and 2 (the lower bounds) to the parallel system highlights the different types of delays. The difference between the performance of System 1 (without operational queues) and the parallel system gives an estimate of the waiting time for a server. Moreover, the difference between the performance of System 2 (without splitting) and the parallel system gives an estimate of the impact of the final synchronization. Obviously, the quality of the bounds depends on the servers’ utilization. When ρ is low, neglecting operational delays is acceptable and, thus, the first bound performs better. However, as ρ increases, the operational delays become significant and the additional delay due to synchronization is negligible; thus, the second bound is preferred;
  • An upper bound is obtained by assuming m independent arrival processes of groups (System 3). Applying Burke’s theorem [43] and Conclusions 2 and 3, we obtain that the sojourn times of group i, i = 1, …, m, are i.i.d. r.v.s with T_3(i) ∼ Erlang(n, μ − λ) distribution. Let T_3 = max_{1≤i≤m} {T_3(i)} (the total sojourn time). We obtain

    $$E[T_3] = \int_{0}^{\infty} \left( 1 - \left[ 1 - \sum_{k=0}^{n-1} \frac{e^{-(\mu - \lambda) t} \left[ (\mu - \lambda) t \right]^k}{k!} \right]^m \right) dt.$$
    Clearly, System 3 highlights the impact of the initial interdependency of the m groups. Note that, when ρ → 0 (i.e., λ → 0), E(T_3) and E(T_1) converge to the same limit (intuitively, when the frequency of arrivals is low, there are almost no operational delays, so the effect of the joint arrival is negligible, and Systems 1 and 3 behave similarly). Since E(T_1) ≤ E(T_{m,n}) ≤ E(T_3), by the Sandwich theorem (Squeeze theorem), these two bounds become tight.
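Both bound expressions are expectations of the maximum of m i.i.d. Erlang variables and can be evaluated numerically from $E[\max] = \int_0^\infty [1 - F(t)^m]\,dt$. The following self-contained Python sketch is illustrative (it is not the paper's Maple code, and the function names and parameter values are this example's assumptions):

```python
import math

def erlang_tail(t: float, n: int, rate: float) -> float:
    """1 - F(t) for an Erlang(n, rate) r.v.: sum_{k=0}^{n-1} e^{-rate*t} (rate*t)^k / k!."""
    x = rate * t
    term, s = math.exp(-x), 0.0
    for k in range(n):
        s += term
        term *= x / (k + 1)
    return s

def mean_max_erlang(m: int, n: int, rate: float, steps: int = 100_000) -> float:
    """E[max of m i.i.d. Erlang(n, rate)] = int_0^inf [1 - F(t)^m] dt, trapezoidal rule."""
    T = (n + 40.0) / rate                 # integration cutoff; the integrand is negligible beyond T
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        g = 1.0 - (1.0 - erlang_tail(i * h, n, rate)) ** m
        total += g / 2.0 if i in (0, steps) else g
    return total * h

lam, mu, m, n = 1.0, 2.0, 4, 3            # illustrative parameters, rho = 0.5
ET1 = mean_max_erlang(m, n, mu)           # first lower bound (empty system)
ET2 = n / (mu - lam)                      # second lower bound (no splitting, m = 1)
ET3 = mean_max_erlang(m, n, mu - lam)     # upper bound (independent arrivals)
print(ET1, ET2, ET3)
```

For m = 1 the routine reduces to the Erlang mean n/rate, which gives a quick sanity check of the quadrature.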

5.2. Asymptotic Analysis for Large n and m

For fixed n and large m, Kang and Serfozo [47] prove that
$$\lim_{m\to\infty} \frac{T_3 - \bar{b}(m,n)}{\bar{a}(m,n)} = \bar{X}, \qquad \lim_{m\to\infty} \frac{T_1 - \underline{b}(m,n)}{\underline{a}(m,n)} = \underline{X},$$
where the r.v.s $\bar{X}$ and $\underline{X}$ are asymptotically Gumbel with normalizing constants
$$\bar{b}(m,n) = (\mu-\lambda)^{-1}\left[\ln m + (n-1)\ln\ln m - \ln (n-1)!\right], \qquad \bar{a}(m,n) = \frac{1}{\mu-\lambda},$$
$$\underline{b}(m,n) = \mu^{-1}\left[\ln m + (n-1)\ln\ln m - \ln (n-1)!\right], \qquad \underline{a}(m,n) = \frac{1}{\mu}.$$
The Gumbel distribution function represents the asymptotic limit distribution of the maximum among m exponentially distributed variables. Furthermore, it is well known that, for an r.v. $X_k$ satisfying
$$\lim_{k\to\infty} P\left( \frac{X_k - b_k}{a_k} \le x \right) = G(x),$$
it follows that, under some conditions (such as $X_k$ taking positive values only), its expectation also converges:
$$\lim_{k\to\infty} a_k^{-r}\, E\left[ (X_k - b_k)^r \right] = \int_{0}^{\infty} x^r \, dG(x).$$
Equations (10) and (11) show that both bounds grow linearly in n and logarithmically in m. Applying the Sandwich theorem (Squeeze theorem) implies that the mean sojourn time of the parallel system E(T_{m,n}) has the same growth rate:
$$\lim_{m\to\infty} \frac{E(T_{m,n})}{\ln m} \asymp 1 \quad \forall n, \qquad \lim_{n\to\infty} \frac{E(T_{m,n})}{n} \asymp 1 \quad \forall m.$$
Equation (14) says that E ( T m , n ) grows at logarithmic rate O ( ln   m ) and linear rate O ( n ) for large m and n , respectively.
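For the special case n = 1, the $O(\ln m)$ growth of the upper bound can be made explicit without any integration: each group's sojourn time is Exp(μ − λ), and the maximum of m i.i.d. exponentials has mean $H_m/(\mu-\lambda)$, where $H_m = \sum_{i=1}^{m} 1/i$ is the m-th harmonic number and $H_m = \ln m + \gamma + o(1)$. A quick illustrative check (the rates below are this example's assumptions, not values from the paper):

```python
import math

def harmonic(m: int) -> float:
    """H_m = sum_{i=1}^m 1/i; E[max of m i.i.d. Exp(rate)] = H_m / rate."""
    return sum(1.0 / i for i in range(1, m + 1))

mu, lam = 2.0, 1.0                       # illustrative rates, rho = 0.5
ratios = [harmonic(m) / (mu - lam) / math.log(m) for m in (10, 100, 1000, 10000)]
print(ratios)   # E(T_3)/ln m approaches 1/(mu - lam) = 1 as m grows
```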

5.3. The Efficiency of the Bounds

While the bounds E(T_3) and E(T_1) are tight for a low utilization (ρ → 0), it is necessary to estimate their efficiency in other cases. To do so, this study uses the Maple 2022 software tool to numerically obtain E(T_{m,n}). Let λ = 1 and vary μ in {5, 2, 1.25, 1.052} so that ρ ∈ {0.2, 0.5, 0.8, 0.95}. For each pair (λ, μ), let n ∈ [2, 100] and m ∈ [2, 7]. In addition, the bounds E(T_1) and E(T_3) are derived for each pair (λ, μ) by applying (8) and (9), respectively. To study the efficiency of the bounds, the following measures are suggested:
$$E_U = \frac{E(T_{m,n})}{E(T_3)}, \qquad E_L = \frac{E(T_1)}{E(T_{m,n})}.$$
Clearly, we have 0 < E_U, E_L ≤ 1. The ratio E_U captures the amount of time saved due to the initial synchronization. The ratio E_L captures the additional time accumulated due to the stochastic nature of the arrival process and the resulting queues. Figure 5 and Figure 6 plot E_U and E_L as functions of m and n for ρ = {0.2, 0.5, 0.8, 0.95} (the black circles, red squares, blue crosses, and green stars, respectively). Observations and insights are summarized in Conclusion 4; both ratios are clearly influenced by n, m, and ρ.
Conclusion 4.
(i) The effect of the number of groups (splits) m. Increasing m increases the dependency between the groups due to the joint arrival, so the difference between E(T_{m,n}) and E(T_3) increases and, thus, E_U decreases. On the other hand, as m increases, the synchronization delay at the final join node becomes significant compared to the operational delays at the servers. As a result, the relative weight of those operational delays decreases and E_L increases. Despite this increase, Figure 5 shows that, when comparing E_U and E_L, the ratio E_U is the more efficient estimate of the mean sojourn time and is therefore recommended;
(ii)
The effect of the number of stages n. Clearly, as n increases, the effect of the joint arrival fades. This is particularly evident in E_U, which is increasing in n. Regarding E_L, we would expect a decrease, since increasing n yields more servers and queues. However, Figure 6 and additional results (not reported here) imply that the changes in E_L are inconsistent, probably because increasing n adds variability that blurs the differences between the groups;
(iii)
In fact, Figure 5 and Figure 6 show that the changes in E_U and E_L are relatively small in m and n. This can be explained by the fact that both T_3 and T_1 grow in m and n at the same rates (of orders O(ln m) and O(n), respectively) and, thus, E(T_{m,n}) also changes at the same rate (which is consistent with the results in Section 5.2);
(iv)
The effect of the utilization ρ. The lower bound assumes only one project with no operational queues. Clearly, when ρ is low, the lower bound becomes tighter (see the black circles in Figure 6). However, when projects arrive more frequently, and ρ increases, E L drops sharply. By contrast, the changes in E U are inconsistent, and, although E U slightly decreases in ρ, its efficiency is quite high for most values of ρ.
Overall, we may conclude that E_U is a good approximation, especially for large n. Table 1 summarizes the effect of increasing m, n, and ρ on the ratios E_U and E_L; ↑ and ↓ denote increasing and decreasing functions, respectively; ↓↓ and ∼ further denote a significant decrease and an inconsistent change, respectively.

5.4. Simulation Study

In this section, the aim is to investigate in depth the impact of the parameters ( n , m , and ρ ) on the parallel system’s performance for relatively small n and m. To do so, a simulation study is used and, in each run, one of the parameters m , n , and ρ is varied while keeping the others fixed.

5.4.1. The Influence of the Synchronization Delay

Start by investigating the effect of the final synchronization on the mean sojourn time. First, fix n and ρ , and define the m-ratio I s y n c h ( m ) as follows:
$$I_{synch}(m) = \frac{E(T_{m,n})}{E(T_{1,n})}.$$
The ratio I_synch(m) compares the parallel system with m ≥ 1 groups to a parallel system with a single group and no synchronization. In this way, I_synch(m) captures the effect of parallelism intensified by the final synchronization and its overhead. Clearly, I_synch(m) ≥ 1, with I_synch(m = 1) = 1. Table 2 presents I_synch(m), where m and n vary in {1, …, 10} and ρ varies in {0.2, 0.4, 0.5, 0.8, 0.9}. Figure 7a–c plots I_synch(m) as a function of m for n ∈ {1, …, 10} and ρ = 0.2, 0.5, and 0.9, respectively.
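The parallel system is straightforward to estimate by Monte Carlo using the Lindley-type recursion of Claim 1: each group is a tandem of n FCFS stations fed by the common arrival stream, and the project completes at the single final join. The sketch below is illustrative (its function names and parameters are this example's assumptions, not the paper's simulation code):

```python
import random

def sim_parallel(m, n, lam, mu, projects=20000, seed=1):
    """Monte Carlo estimate of the mean sojourn time of the parallel split-join system."""
    rng = random.Random(seed)
    t = 0.0
    dep = [[0.0] * n for _ in range(m)]       # last departure time from station (i, j)
    total = 0.0
    for _ in range(projects):
        t += rng.expovariate(lam)             # joint Poisson arrival of all m groups
        finish = 0.0
        for i in range(m):
            prev = t                          # group i starts at the project's arrival
            for j in range(n):
                start = max(prev, dep[i][j])  # Lindley-type recursion (Claim 1)
                dep[i][j] = start + rng.expovariate(mu)
                prev = dep[i][j]
            finish = max(finish, prev)        # final join: wait for the slowest group
        total += finish - t
    return total / projects

lam, mu, n = 1.0, 2.0, 3                      # illustrative parameters, rho = 0.5
E1 = sim_parallel(1, n, lam, mu)              # E(T_{1,n}), no synchronization
for m in (2, 4, 8):
    print(m, sim_parallel(m, n, lam, mu) / E1)   # I_synch(m); grows roughly like ln m
```

The per-station `dep` array is all the state the recursion needs; increasing `projects` tightens the Monte Carlo estimate.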
Conclusion 5.
(i) Figure 7a–c shows that I s y n c h ( m ) increases logarithmically in m; this increase is consistent with the results of Section 5.2 for large m;
(ii)
Increasing n decreases I_synch(m) for fixed m and decreases the growth rate of I_synch(m) in m. This can be explained by the fact that increasing n increases the number of servers and the operational delays. In this case, the final synchronization takes a relatively small fraction of the total sojourn time. As a result, the difference between E(T_{m,n}) and E(T_{1,n}) is reduced, and I_synch(m) decreases. We can further add that increasing n makes the system more deterministic, since each group behaves as an $\mathrm{Erlang}(n, \mu-\lambda)$ distributed r.v. with coefficient of variation $c.v. = 1/\sqrt{n}$, which is decreasing in n. Thus, the final synchronization has less effect. Turning to the edge case n → ∞, the total sojourn time of each group is deterministic and equal to that of the other groups; statistically, there is no difference between the groups, and, thus, I_synch(m) → 1;
(iii)
Comparing Figure 7a–c shows that I_synch(m) is slightly decreasing in ρ. As discussed above, as the system becomes overloaded, more time is spent waiting for servers rather than on synchronization. However, contrary to expectation, the results show that I_synch(m) is hardly affected by ρ, and the changes are quite negligible. Here, too, consider the edge case ρ → 0, where there are almost no operational delays but only the final synchronization. In this case, the differences between the systems are significant, and I_synch(m) increases to its maximum value.

5.4.2. The Influence of Multiple Stages

Next, the influence of the operational delays is studied, characterized by the number of stages n. Fix m and ρ and define the n-ratio I seq ( n ) to be
$$I_{seq}(n) = \frac{E(T_{m,n})}{E(T_{m,1})}.$$
The ratio I_seq(n) compares the parallel system with n stages to a parallel system with only one stage (the basic split–join network; see Figure 1). In this way, I_seq(n) captures the effect of operational delays intensified by the serial servers and queues; clearly, I_seq(n) ≥ 1 and I_seq(n = 1) = 1. Table 3 tabulates I_seq(n), where n and m vary in {2, …, 10} and ρ varies in {0.2, 0.4, 0.5, 0.8, 0.9}. Figure 8a,b plots I_seq(n) as a function of n for m ∈ {1, …, 10} and ρ = 0.2 and 0.9, respectively.
Conclusion 6.
(i) Figure 8a,b shows a statistically significant linear growth of I_seq(n) as a function of n. Consistent with the results of Section 5.2 for large n, the mean sojourn time increases proportionally with the number of added stages;
(ii)
Increasing m both decreases I seq ( n ) for fixed n and decreases the growth rate of I seq ( n ) in n. Obviously, the reason lies in the fact that increasing m decreases the relative weight of the operational delays compared to the synchronization delay, and so I seq ( n ) decreases;
(iii)
We further see that I seq ( n ) is hardly affected by ρ. This can be explained by the fact that changes in the utilization have a similar effect on all servers. Therefore, there is a negligible dependence between ρ and I seq ( n ) .
Summarizing Conclusions 5 and 6, we see that parallelism (i.e., synchronization delays) has a logarithmic effect O(ln m) on the mean sojourn time, while seriality (i.e., operational delays) has a linear effect O(n). However, these two effects slightly decrease as n and m increase, respectively. This highlights the interplay between the relative weights of the different delays. Accordingly, changes in the utilization affect mostly the synchronization time and hardly the waiting times for servers; generally speaking, the effects of parallelism and seriality are (almost) independent of the utilization.

6. The Serial Split–Join System

This section studies a version of the parallel split–join system called the serial system. The parallel processing of tasks gives rise to synchronization constraints that may cause project delays. The trade-off between increasing the number of servers engaged in parallel processing at the expense of synchronization delays plays an important role in choosing the structure of the network, especially when the network must process different types of projects or servers. To study this trade-off, the serial system is introduced. As in the parallel system, a project arrives and splits into m parallel groups, each of which is composed of n stages. However, after each stage j, all groups must be synchronized before the next stage, j + 1, begins. The project exits the system when all groups in the n-th stage are completed. A typical serial system is presented in Figure 9. The serial system highlights the effect of synchronization. Here, there are n × m operational queues and n synchronization queues where the groups must wait to be joined at each stage.
In practice, there is a growing interest in serial systems. One example of such a system is a medical process in an emergency room, where some of the initial tests (stage 1) can be administered simultaneously (for example, while a blood sample is being analyzed, a CT scan can be performed). However, the patient cannot be discharged until all these stage-1 tests are completed. After the results are analyzed, the patient continues to the stage-2 analysis, etc. In manufacturing systems, a maintenance procedure for a product requires parallel integrity checks that can only be assessed after receiving the results of previous tests. In supply chains, an order for a product requires several items simultaneously from vendors, where multiple parts are produced in parallel and then assembled into the product [29]. Other examples derive from the increase in multiprocessing technology and parallel programming in computer and telecommunications networks. For example, there are grid systems that divide applications into parallel and synchronized tasks [48], buffer size optimization systems for data transmission, and memory constraints in computer networks [9]. In real systems, it may happen that, although parallelism shortens time, the ensuing synchronization delays may play a significant role and affect managers’ decisions.
For background, the serial split–join network has hardly been investigated in the literature; it is only mentioned in the work of Ko [29], who considers a two-stage serial split–join network with m = 2 splits and a Poisson arrival process. However, only an approximation of the mean sojourn time of the network is derived.
The serial system at time t ≥ 0 is characterized by an n-row vector L(t) = (L_1(t), L_2(t), …, L_n(t)), where L_j(t) is an m-column vector L_j(t) = (L_{1,j}(t), L_{2,j}(t), …, L_{m,j}(t))^T. The value L_{i,j}(t) indicates the number of waiting tasks at station (i, j) (i.e., in the queue and at the server). Due to the FCFS policy, all tasks of project k are held in the same stage (either in the queue, waiting to be served, or at the join node after being served). For simplicity, denote each subsequent join node at stage j = 1, …, n by the index j, leading to the following corollary.
Corollary 2.
Let N_{(i,j),S}(t) be the number of tasks of group i waiting at join node j, j = 1, …, n, at time t. It is easy to verify that N_{(i,j),S}(t) satisfies
$$N_{(i,j),S}(t) = \max_{1 \le i' \le m}\left\{ L_{i',j}(t) \right\} - L_{i,j}(t).$$
Example 2.
Let m = 3 , n = 3 , and assume that project 6 arrives to a system with L ( t ) = ( ( 1 , 0 , 2 ) T ,   ( 2 , 0 , 1 ) T ,   ( 1 , 0 , 1 ) T ) . In this case, groups 1 and 3 of project 6 join the queues, and group 2 enters the server immediately. As a result, we obtain L ( t + ) = ( ( 2 , 1 , 3 ) T ,   ( 2 , 0 , 1 ) T ,   ( 1 , 0 , 1 ) T ) (see Figure 10). We also see that
$$\max_{i=1,2,3}\{L_{i,1}(t^+)\} = 3, \qquad \max_{i=1,2,3}\{L_{i,2}(t^+)\} = 2, \qquad \max_{i=1,2,3}\{L_{i,3}(t^+)\} = 1.$$
The numbers of waiting tasks at the join nodes are N_{(1,1),S}(t^+) = 1, N_{(2,1),S}(t^+) = 2, N_{(3,1),S}(t^+) = 0 (at join node 1); N_{(1,2),S}(t^+) = 0, N_{(2,2),S}(t^+) = 2, N_{(3,2),S}(t^+) = 1 (at join node 2); and N_{(1,3),S}(t^+) = 0, N_{(2,3),S}(t^+) = 1, N_{(3,3),S}(t^+) = 0 (at join node 3). Examples of possible transitions: the state L_1(t) = (2, 1, 3)^T changes with rate μ_{1,1}(5) to L_1(t^+) = (1, 1, 3)^T, and the states L_1(t) = (2, 1, 3)^T and L_2(t) = (2, 0, 1)^T change with rate μ_{3,1}(4) to L_1(t^+) = (2, 1, 2)^T and L_2(t^+) = (3, 1, 2)^T, respectively (here, task (3, 1) of project 4 is completed, so project 4 finishes stage 1 and continues to stage 2; as a result, both L_1(t) and L_2(t) change).
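Corollary 2 and Example 2 can be checked mechanically. The snippet below is illustrative (0-indexed groups and stages); it applies the corollary's formula, the per-stage maximum minus each group's queue length, to the state L(t^+) of Example 2:

```python
# State L(t+) from Example 2: stage j holds the per-group queue lengths.
L = [[2, 1, 3],    # stage 1: groups 1..3
     [2, 0, 1],    # stage 2
     [1, 0, 1]]    # stage 3

def waiting_tasks(L):
    """N_{(i,j),S} = max_{i'} L_{i',j} - L_{i,j}, keyed by (group i, join node j), 0-indexed."""
    return {(i, j): max(stage) - stage[i]
            for j, stage in enumerate(L)
            for i in range(len(stage))}

N = waiting_tasks(L)
print([N[(i, 0)] for i in range(3)])   # join node 1: [1, 2, 0]
print([N[(i, 1)] for i in range(3)])   # join node 2: [0, 2, 1]
print([N[(i, 2)] for i in range(3)])   # join node 3: [0, 1, 0]
```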
As discussed above, the mathematical analysis of the serial system is complicated, even for m = 3, and, thus, a numerical analysis is performed. The aim is to investigate the additional time due to synchronization delays as a function of the system parameters. To do so, the results of the serial system are numerically compared to those of the parallel system with the same parameters.

6.1. The Influence of Synchronization Overhead and Stages

Denote by E ( T ˜ m , n ) the mean sojourn time of the serial system with m groups and n stages. Similar to the ratios defined in (15) and (16), define respectively the m-ratio I ˜ s y n c h ( m ) and the n-ratio I ˜ seq ( n ) as follows:
$$\tilde{I}_{synch}(m) = \frac{E(\tilde{T}_{m,n})}{E(\tilde{T}_{1,n})}, \qquad \tilde{I}_{seq}(n) = \frac{E(\tilde{T}_{m,n})}{E(\tilde{T}_{m,1})}.$$
Clearly, Ĩ_synch(m) ≥ 1 and Ĩ_seq(n) ≥ 1 (equality holds when m = 1 and n = 1, respectively). The ratio Ĩ_synch(m) captures the effect of synchronization overhead attributable to parallelism. The ratio Ĩ_seq(n) captures the effect of seriality, which contributes simultaneously to resource (server) delays and synchronization overhead. Table 4 and Table 5 tabulate Ĩ_synch(m) and Ĩ_seq(n), respectively, where m and n vary in {1, …, 10} and ρ ∈ {0.2, 0.4, 0.5, 0.8, 0.9}. For ρ = 0.5, Figure 11 and Figure 12 illustrate Ĩ_synch(m) and Ĩ_seq(n) for n ∈ {1, …, 10} and m ∈ {1, …, 10}, respectively.
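The serial system differs from the parallel one only in that a join is enforced after every stage. A minimal Monte Carlo sketch of this per-stage barrier (illustrative names and parameters, not the paper's code):

```python
import random

def sim_serial(m, n, lam, mu, projects=20000, seed=2):
    """Monte Carlo estimate of the mean sojourn time of the serial split-join system."""
    rng = random.Random(seed)
    t = 0.0
    dep = [[0.0] * n for _ in range(m)]    # last departure time from station (i, j)
    total = 0.0
    for _ in range(projects):
        t += rng.expovariate(lam)
        ready = t                          # earliest time the project may enter a stage
        for j in range(n):
            done = 0.0
            for i in range(m):
                start = max(ready, dep[i][j])
                dep[i][j] = start + rng.expovariate(mu)
                done = max(done, dep[i][j])
            ready = done                   # per-stage join: wait for all m groups
        total += ready - t
    return total / projects

lam, mu = 1.0, 2.0                         # illustrative parameters, rho = 0.5
print(sim_serial(3, 3, lam, mu) / sim_serial(1, 3, lam, mu))   # ~ the m-ratio of Section 6.1
```

For m = 1 the barrier is vacuous and the model collapses to the same tandem of n M/M/1 queues as the parallel system, which provides a convenient consistency check.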
Conclusion 7.
(i) Figure 11 shows a statistically significant logarithmic growth rate of I ˜ s y n c h ( m ) in m (the test shows that R 2 > 95 % ). Similarly, Figure 12 shows a statistically significant linear growth rate of I ˜ seq ( n ) in n (the test shows that R 2 > 99 % );
(ii)
We see that Ĩ_synch(m) is hardly affected by n when m is fixed and, similarly, Ĩ_seq(n) is hardly affected by m when n is fixed. This can be explained by the synchronization constraint imposed at the end of each stage (which is the beginning of the next stage). Specifically, the variability between the different groups does not accumulate, and each stage is almost identical to the first. This absence of dispersion aggregation causes Ĩ_synch(m) to be almost independent of n, and Ĩ_seq(n) to grow almost proportionally with n;
(iii)
Table 4 and Table 5 show that, as in the parallel system, Ĩ_synch(m) is slightly decreasing in ρ, and Ĩ_seq(n) is hardly affected by ρ; see Conclusions 5(iii) and 6(iii).
To summarize, we see that the effects of m , n , and ρ are similar in both the serial system and the parallel system. However, and contrary to expectation, the rate of change in the serial system is slower and may even be negligible. Thus, it seems that the serial system is significantly less sensitive to marginal changes and sometimes even indifferent.

6.2. Comparison of the Systems

In the previous section, we studied the performance of the serial system as a function of the different parameters. Here, to complete our investigation, we explore in greater depth the influence of multiple synchronization constraints on the project duration. To do so, we compare the mean sojourn times of the two systems with the same parameters. Obviously, due to the additional synchronizations, the time in the serial system E(T̃_{m,n}) is greater than that of the corresponding parallel system E(T_{m,n}) (equality holds when m = 1 or n = 1). Accordingly, it is interesting to study how significant the differences are in relation to the parameters. The insights gained from this study can be used in practice when more synchronization constraints are required for a project or, vice versa, when synchronization constraints are no longer necessary and can be eliminated. Let I_S/P be the ratio of the mean sojourn times:
$$I_{S/P} = \frac{E(\tilde{T}_{m,n})}{E(T_{m,n})}.$$
The values I S / P are summarized in Table 6 for n , m { 2 , , 10 } and ρ { 0.2 , 0.4 , 0.5 , 0.8 , 0.9 } ; the cases m = 1 or n = 1 , where I S / P = 1 , are omitted.
Table 6 implies that the ratio I_S/P is increasing in n and m, and decreasing in ρ. This is stated more formally in Conclusion 8. In addition, Figure 13 and Figure 14 plot I_S/P as a function of m and n for ρ = 0.5, respectively. We see that I_S/P grows logarithmically in m and, surprisingly, also logarithmically in n. Checking this growth rate for other utilizations leads to the same conclusion. For example, the blue, black, and purple surfaces of Figure 15 show I_S/P as a function of n and m for ρ = 0.2, 0.5, and 0.9.
Conclusion 8.
(i) The impact of n and m. Figure 13, Figure 14 and Figure 15 show that I S / P increases in both n and m at a logarithmic rate. Its logarithmic growth rate in m is consistent with the previous conclusions. However, its logarithmic growth rate in n is quite surprising, and contrary to the expectation of a linear growth rate. This can be explained as follows. The addition of stages adds operational delays in a relatively similar way to both systems, but it adds synchronization delays only to the serial system (since the parallel system has only one final join node, independently of n). Thus, these delays intensify the logarithmic component of the growth rate and offset the other components.
Furthermore, this observation leads to an unexpected result. Since m and n have the same logarithmic effect on the growth rate, the ratio I_S/P shows a kind of duality for a fixed ρ:
$$I_{S/P}(m,n) \approx I_{S/P}(n,m).$$
Equation (21) implies that the added time in the serial system beyond that of the parallel system is relatively similar whether it stems from parallelism (i.e., increasing m) or from seriality (i.e., increasing n). For example, Figure 16 shows I_S/P(m,n) and its dual value I_S/P(n,m) (solid and dashed curves, respectively) for the pairs (m,n) = {(7,3), (10,3), (8,5)} as a function of ρ. We see that I_S/P(7,3) ≈ I_S/P(3,7) (blue curves), I_S/P(10,3) ≈ I_S/P(3,10) (black curves), and I_S/P(8,5) ≈ I_S/P(5,8) (gray curves);
(ii)
The impact of ρ. In most cases, increasing ρ decreases I_S/P for fixed n and m. The explanation is simple: for a low utilization, the operational queues are almost empty and, thus, the synchronization times (which exist mainly in the serial system) constitute the key share of the total time. Here, the serial system is significantly slower than the parallel system and, thus, I_S/P is large. However, for an overloaded system (with a high utilization), the operational delays increase in both systems, thereby offsetting the synchronization delays and slightly decreasing I_S/P. In summary, the difference between the systems is significant mainly for high n and m and for a low utilization. In this case, the large number of synchronizations becomes a major component of the time profile.
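The approximate duality in (21) can be probed by simulating both systems with swapped (m, n). The following rough Monte Carlo sketch is illustrative (assumed parameters λ = 1, μ = 2; names are this example's, not the paper's code):

```python
import random

def sim(m, n, lam, mu, serial, projects=20000, seed=3):
    """Mean sojourn time of the parallel (serial=False) or serial (serial=True) system."""
    rng = random.Random(seed)
    t, total = 0.0, 0.0
    dep = [[0.0] * n for _ in range(m)]           # last departure time from station (i, j)
    for _ in range(projects):
        t += rng.expovariate(lam)
        if serial:
            ready = t
            for j in range(n):
                done = 0.0
                for i in range(m):
                    start = max(ready, dep[i][j])
                    dep[i][j] = start + rng.expovariate(mu)
                    done = max(done, dep[i][j])
                ready = done                      # join after every stage
            total += ready - t
        else:
            finish = 0.0
            for i in range(m):
                prev = t
                for j in range(n):
                    start = max(prev, dep[i][j])
                    dep[i][j] = start + rng.expovariate(mu)
                    prev = dep[i][j]
                finish = max(finish, prev)        # single final join
            total += finish - t
    return total / projects

def I_SP(m, n, lam=1.0, mu=2.0):
    return sim(m, n, lam, mu, True) / sim(m, n, lam, mu, False)

r73 = I_SP(7, 3)
r37 = I_SP(3, 7)
print(r73, r37)   # approximately equal, illustrating the duality of Equation (21)
```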

7. Concluding Remarks and Future Research

The present paper studies the influence of multiple stages and multiple synchronizations in expanded split–join/fork–join networks. Two different architectures are introduced: the parallel structure and the serial one. Since the mathematical analysis of such systems is difficult and mostly intractable, a sensitivity analysis is performed to evaluate the efficiency of various bounds for the mean sojourn time. For each system, an extensive numerical study investigates the impact of (i) the number of stages n, (ii) the number of groups m, and (iii) the utilization ρ on the mean sojourn time. The numerical results show that the mean sojourn time of both systems is hardly affected by ρ, and increases linearly in n and logarithmically in m, although the serial system is significantly less sensitive to changes in m and n. If we attribute the number of stages to operational delays due to slow servers or limited resources, and the number of groups to synchronization delays, then the limited resources have a linear influence, while the synchronization gap has a logarithmic influence. Accordingly, it is advisable to invest extensively in the limited resources in order to improve the performance of the system.
Furthermore, comparing the two systems with the same parameters surprisingly shows a logarithmic rate effect of both parallelism (the synchronization delays) and seriality (operational delays). Thus, we obtain a kind of duality property for the ratio between the two systems. This duality can be used for estimation purposes in designing and optimizing the systems when more synchronization constraints are required or, alternatively, when fewer synchronization constraints are required.
For future research, it would be interesting to investigate mixed split–join systems that include both parallel and serial structures. In real life, the need to synchronize may be a probabilistic decision. Thus, assigning a probability to each join node (which is actually a combination of the serial and parallel systems) would be a promising practical stream of research. Moreover, this paper assumed an exponential service time and a Poisson arrival process. It would be interesting to generalize this work by including other distributions for service times or studying more general arrival processes. In this case, however, the author believes that it would be highly difficult to analyze these systems using only mathematical tools and, thus, the use of numerical analysis would probably be essential.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. The Proof of Claim 1

Proof. 
Claim 1 is obtained by a double induction on the number of projects k and the number of stages j in group i .
Step-1. Substituting j = 1 yields
$$W_{k+1}(i,1) = \left[ T_k(i,1) - \nu_{k+1} \right]^+ = \left[ W_k(i,1) + S_k(i,1) - \nu_{k+1} \right]^+ ,$$
which is equal to the original Lindley equation with one task (i.e., m = 1 ), and, thus, holds for all k.
Induction-Step. Assume that Claim 1 holds for j; it is now proven for j + 1 .
Step-2. Use induction on k.
Step-1. Let k = 1. Then
$$W_1(i,j+1) = \left[ \sum_{l=1}^{j+1} T_0(i,l) - \sum_{l=1}^{j} T_1(i,l) - \nu_1 \right]^+ = 0 .$$
Clearly, for the first project, the system is empty.
Induction-Step. Assume that Claim 1 holds for k ; prove it for k + 1 (recall that we assume j + 1 ) .
Step-2. By the induction hypothesis, W_k(i,1), W_k(i,2), …, W_k(i,j+1) are the waiting times of project k in group i. Since S_k(i,1), S_k(i,2), …, S_k(i,j+1) are the service times, the sum
$$\sum_{l=1}^{j+1} T_k(i,l) = W_k(i,1) + S_k(i,1) + \dots + W_k(i,j+1) + S_k(i,j+1)$$
is the time (measured from its arrival) at which project k finishes task j + 1. Similarly, $\sum_{l=1}^{j} T_{k+1}(i,l)$ is the time at which project k + 1 finishes task j. Project k + 1 arrives ν_{k+1} units of time after project k. Thus, if $\sum_{l=1}^{j+1} T_k(i,l) < \sum_{l=1}^{j} T_{k+1}(i,l) + \nu_{k+1}$, station j + 1 is empty when project k + 1 arrives; otherwise, project k + 1 waits, i.e.,
$$W_{k+1}(i,j+1) = \left[ \sum_{l=1}^{j+1} T_k(i,l) - \sum_{l=1}^{j} T_{k+1}(i,l) - \nu_{k+1} \right]^+ .$$
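The identity proven above can also be checked numerically for a single group: compute the waiting times directly from the departure-time recursion D_k(j) = max(D_k(j−1), D_{k−1}(j)) + S_k(j) and compare with Claim 1's expression, using T_k(l) = W_k(l) + S_k(l). An illustrative sketch (variable names are this example's):

```python
import random

rng = random.Random(7)
K, J = 50, 4                                    # projects, stages (one group)
nu = [0.0] + [rng.expovariate(1.0) for _ in range(K - 1)]   # interarrival gaps
S = [[rng.expovariate(2.0) for _ in range(J)] for _ in range(K)]

# Arrival times, then the departure-time recursion D[k][j] = max(D[k][j-1], D[k-1][j]) + S.
A = [0.0] * K
for k in range(1, K):
    A[k] = A[k - 1] + nu[k]
D = [[0.0] * J for _ in range(K)]
W = [[0.0] * J for _ in range(K)]               # waiting times read off the schedule
for k in range(K):
    prev = A[k]
    for j in range(J):
        start = max(prev, D[k - 1][j] if k else 0.0)
        W[k][j] = start - prev
        D[k][j] = start + S[k][j]
        prev = D[k][j]

# Verify Claim 1: W_{k+1}(j+1) = [ sum_{l<=j+1} T_k(l) - sum_{l<=j} T_{k+1}(l) - nu_{k+1} ]^+.
ok = True
for k in range(K - 1):
    for j in range(J - 1):
        lhs = W[k + 1][j + 1]
        rhs = sum(W[k][l] + S[k][l] for l in range(j + 2)) \
            - sum(W[k + 1][l] + S[k + 1][l] for l in range(j + 1)) - nu[k + 1]
        ok &= abs(lhs - max(rhs, 0.0)) < 1e-9
print(ok)   # True
```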

Appendix B. The Proof of Claim 2(1)

Proof. 
The proof is obtained by using a double induction on the number of projects k and the number of stages j in group i .
Step-1. Assume j = 1 . Use induction on k.
Step-1. k = 1. Clearly, W_1(i,1) = 0, ∀i. The service times {S_1(i,1)}, i = 1, …, m, are independent r.v.s; thus, by property P1, they are associated r.v.s. Since ν_1 is an independent r.v., by P2, the r.v.s in the set
$$\{ W_1(i,1),\ S_1(i,1),\ t_1 : i = 1, \dots, m \}$$
are associated.
Induction-Step. Assume that
$$\{ W_l(i,1),\ S_l(i,1),\ t_l : i = 1, \dots, m,\ l = 1, \dots, k \}$$
forms a set of associated r.v.s. Now, prove for k + 1 .
Step-2. The proof is obtained immediately, by applying Appendix A of Nelson and Tantawi [30], and Chapter 4.2.III of Baccelli and Makowski [23].
Induction-Step. Assume $\{ W_l(i,r), S_l(i,r), t_l : r = 1, \dots, j,\ l = 1, \dots, k \}$ forms a set of associated r.v.s. Prove for j + 1.
Step-2. Assume j + 1. Use induction on k.
Step-1. k = 1 . For the first project, W 1 ( i , j + 1 ) = 0 ; thus, applying P1 and P2, the r.v.s in set
$$\{ W_1(i,r),\ S_1(i,r),\ t_1,\ W_1(i,j+1),\ S_1(i,j+1) : r = 1, \dots, j \}$$
are associated.
Induction-Step. Assume that the r.v.s in set
$$\{ W_l(i,r),\ S_l(i,r),\ t_l,\ W_l(i,j+1),\ S_l(i,j+1) : r = 1, \dots, j,\ l = 1, \dots, k \}$$
are associated. Prove for k + 1 .
Step-2. By Claim 1 we have
$$W_{k+1}(i,j+1) = \left[ \sum_{l=1}^{j+1} T_k(i,l) - \sum_{l=1}^{j} T_{k+1}(i,l) - \nu_{k+1} \right]^+ = \left[ \sum_{l=1}^{j+1} \left( W_k(i,l) + S_k(i,l) \right) - \sum_{l=1}^{j} \left( W_{k+1}(i,l) + S_{k+1}(i,l) \right) - \nu_{k+1} \right]^+ .$$
The function $[x]^+$ is a non-decreasing monotonic function. Thus, by the induction assumption and applying P4, the elements of the set {W_{k+1}(i,l), l = 1, …, j+1} are also associated. Furthermore, the set {S_{k+1}(i,l), l = 1, …, j+1} contains independent r.v.s and, thus, by P1, its elements are associated. Finally, applying P2, the elements of the set
$$\{ W_l(i,r),\ S_l(i,r),\ t_l : r = 1, \dots, j+1,\ l = 1, \dots, k+1 \}$$
are associated. □

References

  1. Alesawi, S.; Ghanem, S. Overcome heterogeneity impact in modeled fork-join queuing networks for tail prediction. In Proceedings of the 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 18–21 February 2019; pp. 270–275.
  2. Gorbunova, A.; Vishnevsky, V. The analysis of big data centers performance. Adv. Syst. Sci. Appl. 2022, 22, 70–83.
  3. Nguyen, M.; Alesawi, S.; Li, N.; Che, H.; Jiang, H. A black-box fork-join latency prediction model for data-intensive applications. IEEE Trans. Parallel Distrib. Syst. 2020, 31, 1983–2000.
  4. Ardagna, D.; Bernardi, S.; Gianniti, E.; Karimian Aliabadi, S.; Perez-Palacin, D.; Requeno, J.I. Modeling performance of hadoop applications: A journey from queueing networks to stochastic well formed nets. In Proceedings of the 16th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2016), Granada, Spain, 14–16 December 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 599–613.
  5. Delias, P.; Lagopoulos, A.; Tsoumakas, G.; Grigori, D. Using multi-target feature evaluation to discover factors that affect business process behavior. Comput. Ind. 2018, 99, 253–261.
  6. Sethuraman, S. Analysis of Fork-Join Systems: Network of Queues with Precedence Constraints; CRC Press: Boca Raton, FL, USA, 2022.
  7. Enganti, P.; Rosenkrantz, T.; Sun, L.; Wang, Z.; Che, H.; Jiang, H. ForkMV: Mean-and-variance estimation of fork-join queuing networks for datacenter applications. In Proceedings of the 2022 IEEE International Conference on Networking, Architecture and Storage (NAS), Philadelphia, PA, USA, 3–4 October 2022; pp. 1–8.
  8. Wang, W.; Harchol-Balter, M.; Jiang, H.; Scheller-Wolf, A.; Srikant, R. Delay asymptotics and bounds for multi-task parallel jobs. ACM Sigmetrics Perform. Eval. Rev. 2019, 46, 2–7.
  9. Ding, S. Multi-Class Fork-Join Queues & The Stochastic Knapsack Problem. Ph.D. Thesis, Universiteit Leiden, Leiden, The Netherlands, 2011.
  10. Krishnamurthy, A.; Suri, R. Performance analysis of single stage kanban controlled production systems using parametric decomposition. Queueing Syst. 2006, 54, 141–162.
  11. Shaaban, S.; Romero-Silva, R. Performance of merging lines with uneven buffer capacity allocation: The effects of unreliability under different inventory-related costs. Cent. Eur. J. Oper. Res. 2021, 29, 1253–1288.
  12. Matta, A.; Dallery, Y.; Di Mascolo, M. Analysis of assembly systems controlled with kanbans. Eur. J. Oper. Res. 2005, 166, 310–336.
  13. Raghavan, N.S.; Viswanadham, N. Generalized queueing network analysis of integrated supply chains. Int. J. Prod. Res. 2001, 39, 205–224.
  14. Atar, R.; Mandelbaum, A.; Zviran, A. Control of fork-join networks in heavy traffic. In Proceedings of the 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 1–5 October 2012; pp. 823–830.
  15. Özkan, E. Control of fork-join processing networks with multiple job types and parallel shared resources. Math. Oper. Res. 2022, 47, 1310–1334.
  16. Prabhakar, B.; Bambos, N.; Mountford, T.S. The synchronization of Poisson processes and queueing networks with service and synchronization nodes. Adv. Appl. Probab. 2000, 32, 824–843.
  17. Ramakrishnan, R.; Krishnamurthy, A. Analysis of kitting operations in manufacturing systems. Asia Pac. J. Oper. Res. 2008, 25, 187–216.
  18. Ramakrishnan, R.; Krishnamurthy, A. Performance evaluation of a synchronization station with multiple inputs and population constraints. Comput. Oper. Res. 2012, 39, 560–570.
  19. Schol, D.; Vlasiou, M.; Zwart, B. Large fork-join networks with nearly deterministic service times. arXiv 2019, arXiv:1912.11661.
  20. Roy, D.; van Ommeren, J.K.; de Koster, R.; Gharehgozli, A. Modeling landside container terminal queues: Exact analysis and approximations. Transp. Res. Part B Methodol. 2022, 162, 73–102.
  21. Towsley, D.; Rommel, C.G.; Stankovic, J.A. Analysis of fork-join program response times on multiprocessors. IEEE Trans. Parallel Distrib. Syst. 1990, 1, 286–303.
  22. Baccelli, F.; Makowski, A.M. Simple Computable Bounds for the Fork-Join Queue; Research Report; INRIA: Le Chesnay-Rocquencourt, France, 1985.
  23. Baccelli, F.; Makowski, A.M. Queueing models for systems with synchronization constraints. Proc. IEEE 1989, 77, 138–161.
  24. Baccelli, F.; Makowski, A.M.; Shwartz, A. The fork-join queue and related systems with synchronization constraints: Stochastic ordering and computable bounds. Adv. Appl. Probab. 1989, 21, 629–660.
  25. Baccelli, F.; Massey, W.A.; Towsley, D. Acyclic fork-join queuing networks. J. ACM 1989, 36, 615–642.
  26. Ko, S.S.; Serfozo, R.F. Response times in M/M/s fork-join networks. Adv. Appl. Probab. 2004, 36, 854–871. [Google Scholar] [CrossRef]
  27. Ko, S.S.; Serfozo, R.F. Sojourn Times in G/M/1 fork-join networks. Nav. Res. Logist. 2008, 55, 432–443. [Google Scholar] [CrossRef]
  28. Varki, E. Mean value technique for closed fork-join networks. Perform. Eval. Rev. 1999, 27, 103–112. [Google Scholar] [CrossRef]
  29. Ko, S.S. Cycle times in a serial fork-join network. In Proceedings of the Computational Science and Its Applications–ICCSA 2007: International Conference, Kuala Lumpur, Malaysia, 26–29 August 2007; Springer: Berlin/Heidelberg, Germany, 2007. Proceedings, Part I 7. pp. 758–766. [Google Scholar]
  30. Nelson, R.; Tantawi, A.N. Approximation analysis of Fork/Join synchronization in parallel queues. IEEE Trans. Comput. 1988, 37, 739–743. [Google Scholar] [CrossRef]
  31. Nelson, R.; Towsley, D.; Tantawi, A.N. Performance analysis of parallel processing systems. IEEE Trans. Softw. Eng. 1988, 14, 532–540. [Google Scholar] [CrossRef]
  32. Fiorini, P.M. Analytic approximations of fork-join queues. In Proceedings of the 2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), IEEE, Warsaw, Poland, 24–26 September 2015; Volume 2, pp. 966–971. [Google Scholar]
  33. Lebrecht, A.S.; Knottenbelt, W.J. Response time approximations in fork-join queues. In Proceedings of the 23rd Annual UK Performance Engineering Workshop (UKPEW 2007), ORMS, Kirk, UK, 31 July 2007. [Google Scholar]
  34. Kemper, B.; Mandjes, M. Mean sojourn times in two-queue fork-join systems: Bounds and approximations. OR Spectr. 2012, 34, 723–742. [Google Scholar] [CrossRef] [Green Version]
  35. Takahashi, M.; Osawa, H.; Fujisawa, T. On a synchronization queue with two finite buffers. Queueing Syst. 2000, 36, 107–123. [Google Scholar] [CrossRef]
  36. Qiu, Z.; Pérez, J.F.; Harrison, P.G. Beyond the mean in fork-join queues: Efficient approximation for response-time tails. Perform. Eval. 2015, 91, 99–116. [Google Scholar] [CrossRef] [Green Version]
  37. Varma, S.; Makowski, A.M. Interpolation approximations for symmetric fork-join queues. J. Perform. Eval. 1994, 20, 245–265. [Google Scholar] [CrossRef] [Green Version]
  38. Tan, X.; Knessl, C. A fork-join queueuing model:diffusion approximation, integral representations and asymptotics. Queueing Syst. 1996, 22, 287–332. [Google Scholar] [CrossRef]
  39. Knessl, C. A diffusion model for two parallel queues with processor sharing: Transient behavior and asymptotics. J. Appl. Math. Stoch. Anal. 1999, 12, 311–338. [Google Scholar] [CrossRef] [Green Version]
  40. Kushner, H.J. Heavy Traffic Analysis of Controlled Queueing and Communication Networks; Springer: New York, NY, USA, 2001. [Google Scholar]
  41. Zeng, Y.; Tan, J.; Xia, C.H. Fork and join queueing networks with heavy tails: Scaling dimension and throughput limit. J. ACM (JACM) 2021, 68, 1–30. [Google Scholar] [CrossRef]
  42. Meijer, M.S.; Schol, D.; van Jaarsveld, W.; Vlasiou, M.; Zwart, B. Extreme-value theory for large fork-join queues, with an application to high-tech supply chains. arXiv 2021, arXiv:2105.09189. [Google Scholar]
  43. Burke, P.J. The output process of a stationary M/M/s queueing system. Ann. Math. Stat. 1968, 39, 1144–1152. [Google Scholar] [CrossRef]
  44. Walrand, J. An Introduction to Queueing Networks; Caopter 4; Prentice Hall: Hoboken, NJ, USA, 1988. [Google Scholar]
  45. Latouche, G.; Ramaswami, V. Introduction to Matrix Analytic Methods in Stochastic Modeling; SIAM: Philadelphia, PA, USA, 1999. [Google Scholar]
  46. Lindley, D.V. The theory of queues with a single server. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1952; Volume 48, pp. 277–289. [Google Scholar]
  47. Kang, S.; Serfozo, R.F. Extreme values of phase-type and mixed random variables with parallel-processing examples. J. Appl. Probab. 1999, 36, 194–210. [Google Scholar] [CrossRef]
  48. Cremonesi, P.; Turrin, R.; Alexandrov, V.N. Modeling the effects of node heterogeneity on the performance of grid applications. J. Netw. 2009, 4, 837–854. [Google Scholar] [CrossRef]
Figure 1. The basic split–join network.
Figure 2. A general split–join network.
Figure 3. A typical parallel split–join network.
Figure 4. A snapshot of the parallel system upon arrival of project 7.
Figure 5. The ratio E_U = E[T_{m,n}]/E[T^{(3)}] as a function of m, n for ρ ∈ {0.2, 0.5, 0.8, 0.9}.
Figure 6. The ratio E_L = E[T^{(1)}]/E[T_{m,n}] as a function of m, n for ρ ∈ {0.2, 0.5, 0.8, 0.9}.
Figure 7. The ratio I_synch(m) as a function of m for n = 1, …, 10 and different values of ρ.
Figure 8. The ratio I_seq(n) as a function of n for m = 1, …, 10 and ρ = 0.2 and 0.9.
Figure 9. A typical serial system.
Figure 10. A snapshot of the serial multi-join system upon arrival of project 6.
Figure 11. Ĩ_synch(m) for n = 1, …, 10 and ρ = 0.5.
Figure 12. Ĩ_seq(n) for m = 1, …, 10 and ρ = 0.5.
Figure 13. I_S/P(m) for n = 1, …, 10 and ρ = 0.5.
Figure 14. I_S/P(n) for m = 1, …, 10 and ρ = 0.5.
Figure 15. The ratio I_S/P(m, n) for ρ ∈ {0.2, 0.5, 0.9}.
Figure 16. The duality property I_S/P(m, n) ≈ I_S/P(n, m) for (m, n) ∈ {(7, 3), (10, 3), (8, 5)}.
Table 1. The efficiency of the bounds as a function of m, n, and ρ.
Measures: E_U = E[T_{m,n}]/E[T^{(3)}] and E_L = E[T^{(1)}]/E[T_{m,n}].
Table 2. I_synch(m) for m, n ∈ {1, …, 10} and ρ ∈ {0.2, 0.4, 0.5, 0.8, 0.9}.

ρ = 0.2
n    m=2     m=3     m=4     m=5     m=6     m=7     m=8     m=9     m=10
1    1.4800  1.8000  1.9950  2.1900  2.3350  2.4800  2.7025  2.7413  2.7800
2    1.3400  1.5500  1.6850  1.8200  1.9100  2.0000  2.1400  2.1650  2.1900
3    1.2800  1.4500  1.5500  1.6500  1.7200  1.7900  1.8900  1.9050  1.9200
4    1.2500  1.3950  1.4825  1.5700  1.6275  1.6850  1.7713  1.7856  1.8000
5    1.2200  1.3400  1.4150  1.4900  1.5350  1.5800  1.6525  1.6663  1.6800
6    1.2000  1.3100  1.3775  1.4450  1.4875  1.5300  1.5938  1.6044  1.6150
7    1.1800  1.2800  1.3400  1.4000  1.4400  1.4800  1.5350  1.5425  1.5500
8    1.1701  1.2635  1.3186  1.3736  1.4103  1.4470  1.5004  1.5087  1.5170
9    1.1599  1.2465  1.2965  1.3464  1.3797  1.4130  1.4647  1.4738  1.4830
10   1.1500  1.2300  1.2750  1.3200  1.3500  1.3800  1.4300  1.4400  1.4500

ρ = 0.4
n    m=2     m=3     m=4     m=5     m=6     m=7     m=8     m=9     m=10
1    1.4500  1.7400  1.9300  2.1200  2.2450  2.3700  2.5775  2.6188  2.6600
2    1.3200  1.5100  1.6300  1.7500  1.8300  1.9100  2.0300  2.0500  2.0700
3    1.2500  1.4100  1.5000  1.5900  1.6450  1.7000  1.7925  1.8113  1.8300
4    1.2150  1.3550  1.4300  1.5050  1.5550  1.6050  1.6850  1.7000  1.7150
5    1.1800  1.3000  1.3600  1.4200  1.4650  1.5100  1.5775  1.5888  1.6000
6    1.1700  1.2700  1.3275  1.3850  1.4225  1.4600  1.5188  1.5294  1.5400
7    1.1600  1.2400  1.2950  1.3500  1.3800  1.4100  1.4600  1.4700  1.4800
8    1.1468  1.2235  1.2752  1.3269  1.3553  1.3836  1.4295  1.4382  1.4470
9    1.1332  1.2065  1.2548  1.3031  1.3298  1.3564  1.3980  1.4055  1.4130
10   1.1200  1.1900  1.2350  1.2800  1.3050  1.3300  1.3675  1.3738  1.3800

ρ = 0.5
n    m=2     m=3     m=4     m=5     m=6     m=7     m=8     m=9     m=10
1    1.4400  1.7300  1.9000  2.0700  2.2000  2.3300  2.4750  2.5325  2.5900
2    1.3000  1.4800  1.6000  1.7200  1.7900  1.8600  1.9800  2.0050  2.0300
3    1.2500  1.3900  1.4750  1.5600  1.6150  1.6700  1.7575  1.7738  1.7900
4    1.2200  1.3350  1.4100  1.4850  1.5300  1.5750  1.6500  1.6650  1.6800
5    1.1900  1.2800  1.3450  1.4100  1.4450  1.4800  1.5425  1.5563  1.5700
6    1.1700  1.2550  1.3125  1.3700  1.4025  1.4350  1.4888  1.4994  1.5100
7    1.1500  1.2300  1.2800  1.3300  1.3600  1.3900  1.4350  1.4425  1.4500
8    1.1401  1.2135  1.2602  1.3069  1.3353  1.3636  1.4061  1.4132  1.4203
9    1.1299  1.1965  1.2398  1.2831  1.3098  1.3364  1.3764  1.3830  1.3897
10   1.1200  1.1800  1.2200  1.2600  1.2850  1.3100  1.3475  1.3538  1.3600

ρ = 0.8
n    m=2     m=3     m=4     m=5     m=6     m=7     m=8     m=9     m=10
1    1.4100  1.6800  1.8200  1.9600  2.0900  2.2200  2.3950  2.4175  2.4400
2    1.2700  1.4400  1.4400  1.4400  1.6000  1.7600  1.9100  1.9050  1.9000
3    1.2100  1.3500  1.3500  1.3500  1.4750  1.6000  1.7225  1.7213  1.7200
4    1.1800  1.3050  1.3050  1.3050  1.4075  1.5100  1.6088  1.6069  1.6050
5    1.1500  1.2600  1.2600  1.2600  1.3400  1.4200  1.4950  1.4925  1.4900
6    1.1350  1.2300  1.2300  1.2300  1.3025  1.3750  1.4438  1.4419  1.4400
7    1.1200  1.2000  1.2000  1.2000  1.2650  1.3300  1.3925  1.3913  1.3900
8    1.1134  1.1868  1.1868  1.1868  1.2485  1.3102  1.3661  1.3632  1.3603
9    1.1066  1.1732  1.1732  1.1732  1.2315  1.2898  1.3389  1.3343  1.3297
10   1.1000  1.1600  1.1600  1.1600  1.2150  1.2700  1.3125  1.3063  1.3000

ρ = 0.9
n    m=2     m=3     m=4     m=5     m=6     m=7     m=8     m=9     m=10
1    1.4000  1.4200  1.6750  1.9300  2.0450  2.1600  2.3175  2.3388  2.3600
2    1.2700  1.4200  1.5200  1.6200  1.6800  1.7400  1.8250  1.8375  1.8500
3    1.2200  1.3300  1.4050  1.4800  1.5300  1.5800  1.6500  1.6600  1.6700
4    1.1850  1.2800  1.3450  1.4100  1.4500  1.4900  1.5475  1.5563  1.5650
5    1.1500  1.2300  1.2850  1.3400  1.3700  1.4000  1.4450  1.4525  1.4600
6    1.1400  1.2050  1.2575  1.3100  1.3350  1.3600  1.4000  1.4075  1.4150
7    1.1300  1.1800  1.2300  1.2800  1.3000  1.3200  1.3550  1.3625  1.3700
8    1.1201  1.1668  1.2135  1.2602  1.2802  1.3002  1.3336  1.3402  1.3469
9    1.1099  1.1532  1.1965  1.2398  1.2598  1.2798  1.3115  1.3173  1.3231
10   1.1000  1.1400  1.1800  1.2200  1.2400  1.2600  1.2900  1.2950  1.3000
Table 3. I_seq(n) for m, n ∈ {1, …, 10} and ρ ∈ {0.2, 0.4, 0.5, 0.8, 0.9}.

ρ = 0.2
m    n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
1    2.0000  3.0000  4.0000  5.0000  6.0000  7.0000  8.0000  9.0000  10.0000
2    1.8200  2.6000  3.3650  4.1300  4.8600  5.5900  6.3167  7.0433  7.7700
3    1.7300  2.4100  3.0700  3.7300  4.3550  4.9800  5.5967  6.2133  6.8300
4    1.6950  2.3350  2.9475  3.5600  4.1425  4.7250  5.2950  5.8650  6.4350
5    1.6600  2.2600  2.8250  3.3900  3.9300  4.4700  4.9933  5.5167  6.0400
6    1.6350  2.2100  2.7525  3.2950  3.8100  4.3250  4.8217  5.3183  5.8150
7    1.6100  2.1600  2.6800  3.2000  3.6900  4.1800  4.6500  5.1200  5.5900
8    1.5968  2.1336  2.6388  3.1439  3.6174  4.0909  4.5477  5.0045  5.4613
9    1.5832  2.1064  2.5963  3.0861  3.5426  3.9991  4.4423  4.8855  5.3287
10   1.5700  2.0800  2.5550  3.0300  3.4700  3.9100  4.3400  4.7700  5.2000

ρ = 0.4
m    n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
1    2.0000  3.0100  4.0150  5.0200  6.0200  7.0200  8.0200  9.0200  10.0200
2    1.8200  2.5900  3.3300  4.0700  4.8300  5.5900  6.3100  7.0300  7.7500
3    1.7300  2.4300  3.0850  3.7400  4.3750  5.0100  5.6267  6.2433  6.8600
4    1.6900  2.3400  2.9450  3.5500  4.1400  4.7300  5.3017  5.8733  6.4450
5    1.6500  2.2500  2.8050  3.3600  3.9050  4.4500  4.9767  5.5033  6.0300
6    1.6300  2.2000  2.7375  3.2750  3.7950  4.3150  4.8167  5.3183  5.8200
7    1.6100  2.1500  2.6700  3.1900  3.6850  4.1800  4.6567  5.1333  5.6100
8    1.5935  2.1236  2.6288  3.1339  3.6124  4.0909  4.5544  5.0178  5.4813
9    1.5765  2.0964  2.5863  3.0761  3.5376  3.9991  4.4490  4.8988  5.3487
10   1.5600  2.0700  2.5450  3.0200  3.4650  3.9100  4.3467  4.7833  5.2200

ρ = 0.5
m    n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
1    2.0000  3.0000  4.0050  5.0100  6.0100  7.0100  8.0133  9.0167  10.0200
2    1.8100  2.6000  3.3600  4.1200  4.8550  5.5900  6.3233  7.0567  7.7900
3    1.7100  2.4000  3.0550  3.7100  4.3400  4.9700  5.5867  6.2033  6.8200
4    1.6900  2.3300  2.9450  3.5600  4.1450  4.7300  5.3050  5.8800  6.4550
5    1.6700  2.2600  2.8350  3.4100  3.9500  4.4900  5.0233  5.5567  6.0900
6    1.6350  2.2100  2.7550  3.3000  3.8175  4.3350  4.8450  5.3550  5.8650
7    1.6000  2.1600  2.6750  3.1900  3.6850  4.1800  4.6667  5.1533  5.6400
8    1.5901  2.1336  2.6354  3.1372  3.6190  4.1008  4.5710  5.0411  5.5113
9    1.5799  2.1064  2.5946  3.0828  3.5510  4.0192  4.4724  4.9255  5.3787
10   1.5700  2.0800  2.5550  3.0300  3.4850  3.9400  4.3767  4.8133  5.2500

ρ = 0.8
m    n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
1    2.0100  3.0100  4.0200  5.0300  6.0300  7.0300  8.0367  9.0433  10.0500
2    1.8100  2.6000  3.3550  4.1100  4.8650  5.6200  6.3700  7.1200  7.8700
3    1.7300  2.4100  3.0950  3.7800  4.4000  5.0200  5.6767  6.3333  6.9900
4    1.7050  2.3500  2.9825  3.6150  4.2150  4.8150  5.4250  6.0350  6.6450
5    1.6800  2.2900  2.8700  3.4500  4.0300  4.6100  5.1733  5.7367  6.3000
6    1.6350  2.2300  2.7825  3.3350  3.8775  4.4200  4.9583  5.4967  6.0350
7    1.5900  2.1700  2.6950  3.2200  3.7250  4.2300  4.7433  5.2567  5.7700
8    1.5834  2.1502  2.6620  3.1738  3.6623  4.1508  4.6454  5.1401  5.6347
9    1.5766  2.1298  2.6280  3.1262  3.5977  4.0692  4.5446  5.0199  5.4953
10   1.5700  2.1100  2.5950  3.0800  3.5350  3.9900  4.4467  4.9033  5.3600

ρ = 0.9
m    n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
1    2.0100  3.0200  4.0350  5.0500  6.0600  7.0700  8.0700  9.0700  10.0700
2    1.8300  2.6300  3.3950  4.1600  4.9300  5.7000  6.4367  7.1733  7.9100
3    1.7000  2.4000  3.0500  3.7000  4.3450  4.9900  5.6133  6.2367  6.8600
4    1.6900  2.3550  2.9775  3.6000  4.2150  4.8300  5.4183  6.0067  6.5950
5    1.6800  2.3100  2.9050  3.5000  4.0850  4.6700  5.2233  5.7767  6.3300
6    1.6500  2.2600  2.8275  3.3950  3.9450  4.4950  5.0300  5.5650  6.1000
7    1.6200  2.2100  2.7500  3.2900  3.8050  4.3200  4.8367  5.3533  5.8700
8    1.6068  2.1869  2.7121  3.2372  3.7440  4.2507  4.7542  5.2576  5.7611
9    1.5932  2.1631  2.6730  3.1828  3.6811  4.1793  4.6692  5.1590  5.6489
10   1.5800  2.1400  2.6350  3.1300  3.6200  4.1100  4.5867  5.0633  5.5400
Table 4. The ratio Ĩ_synch(m) for n, m ∈ {1, …, 10} and ρ ∈ {0.2, 0.4, 0.5, 0.8, 0.9}.

ρ = 0.2
n    m=2   m=3   m=5   m=7   m=10
1    1.48  1.80  2.19  2.48  2.78
2    1.47  1.78  2.19  2.46  2.76
3    1.47  1.77  2.18  2.45  2.75
5    1.47  1.77  2.17  2.44  2.73
7    1.46  1.76  2.16  2.43  2.72
10   1.46  1.76  2.16  2.42  2.71

ρ = 0.4
n    m=2   m=3   m=5   m=7   m=10
1    1.45  1.74  2.12  2.37  2.66
2    1.44  1.72  2.09  2.34  2.61
3    1.44  1.71  2.08  2.32  2.58
5    1.43  1.70  2.06  2.30  2.55
7    1.43  1.70  2.05  2.29  2.54
10   1.42  1.69  2.04  2.28  2.53

ρ = 0.5
n    m=2   m=3   m=5   m=7   m=10
1    1.44  1.73  2.07  2.33  2.59
2    1.43  1.70  2.05  2.28  2.54
3    1.42  1.68  2.03  2.26  2.51
5    1.41  1.67  2.01  2.24  2.48
7    1.41  1.67  2.00  2.23  2.46
10   1.40  1.66  1.99  2.21  2.45

ρ = 0.8
n    m=2   m=3   m=5   m=7   m=10
1    1.41  1.68  1.96  2.22  2.44
2    1.38  1.62  1.92  2.13  2.35
3    1.38  1.62  1.92  2.11  2.33
5    1.37  1.59  1.88  2.08  2.29
7    1.37  1.59  1.88  2.07  2.27
10   1.36  1.58  1.87  2.06  2.26

ρ = 0.9
n    m=2   m=3   m=5   m=7   m=10
1    1.40  1.68  1.93  2.16  2.36
2    1.36  1.59  1.90  2.10  2.32
3    1.37  1.58  1.88  2.07  2.39
5    1.34  1.57  1.86  2.04  2.24
7    1.34  1.56  1.85  2.02  2.22
10   1.34  1.56  1.83  2.01  2.22
Table 5. The ratio Ĩ_seq(n) for n, m ∈ {1, …, 10} and ρ ∈ {0.2, 0.4, 0.5, 0.8, 0.9}.

ρ = 0.2
m    n=2   n=3   n=5   n=7   n=10
1    2.00  3.00  5.00  7.00  10.00
2    1.99  2.98  4.97  6.94  9.90
3    1.98  2.96  4.92  6.87  9.79
5    1.99  2.98  4.94  6.90  9.83
7    1.99  2.97  4.92  6.87  9.79
10   1.99  2.96  4.91  6.85  9.76

ρ = 0.4
m    n=2   n=3   n=5   n=7   n=10
1    2.00  3.01  5.02  7.02  10.02
2    1.99  2.97  4.93  6.89  9.82
3    1.98  2.96  4.90  6.84  9.74
5    1.97  2.94  4.86  6.77  9.63
7    1.97  2.94  4.85  6.76  9.76
10   1.97  2.92  4.83  6.72  9.55

ρ = 0.5
m    n=2   n=3   n=5   n=7   n=10
1    2.00  3.00  5.01  7.01  10.02
2    1.99  2.96  4.91  6.86  9.78
3    1.96  2.92  4.83  6.75  9.60
5    1.98  2.94  4.87  6.77  9.62
7    1.96  2.92  4.81  6.70  9.51
10   1.96  2.91  4.79  6.67  9.47

ρ = 0.8
m    n=2   n=3   n=5   n=7   n=10
1    2.01  3.01  5.03  7.03  10.05
2    1.97  2.94  4.90  6.82  9.72
3    1.95  2.90  4.78  6.67  9.48
5    1.98  2.94  4.85  6.74  9.59
7    1.93  2.86  4.72  6.57  9.32
10   1.94  2.87  4.73  6.55  9.31

ρ = 0.9
m    n=2   n=3   n=5   n=7   n=10
1    2.01  3.02  5.05  7.07  10.07
2    1.96  2.95  4.84  6.79  9.67
3    1.91  2.85  4.75  6.59  9.40
5    1.97  2.94  4.85  6.76  9.55
7    1.96  2.90  4.78  6.63  9.39
10   1.98  3.06  4.81  6.66  9.50
Table 6. The ratio I_S/P for m, n ∈ {2, …, 10} and ρ ∈ {0.2, 0.4, 0.5, 0.8, 0.9}.

m = 2
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.0960  1.1466  1.1751  1.2036  1.2227  1.2418  1.2529  1.2642  1.2752
0.4   1.0910  1.1461  1.1786  1.2110  1.2218  1.2325  1.2439  1.2554  1.2667
0.5   1.0961  1.1390  1.1649  1.1908  1.2091  1.2274  1.2368  1.2464  1.2557
0.8   1.0856  1.1340  1.1624  1.1907  1.2023  1.2139  1.2208  1.2278  1.2347
0.9   1.0714  1.1214  1.1416  1.1617  1.1761  1.1905  1.2011  1.2119  1.2224

m = 3
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.1447  1.2244  1.2720  1.3195  1.3491  1.3787  1.3969  1.4152  1.4332
0.4   1.1431  1.2175  1.2637  1.3099  1.3366  1.3633  1.3819  1.4006  1.419
0.5   1.1468  1.2148  1.2581  1.3013  1.3297  1.3581  1.3747  1.3914  1.4078
0.8   1.1266  1.2020  1.2340  1.2659  1.2977  1.3294  1.3384  1.3476  1.3565
0.9   1.1237  1.1898  1.2363  1.2827  1.3018  1.3208  1.3372  1.3537  1.3699

m = 4
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.1721  1.2695  1.3290  1.3885  1.4251  1.4618  1.4847  1.5079  1.53065
0.4   1.1685  1.2609  1.3192  1.3775  1.4102  1.4430  1.4645  1.4863  1.5076
0.5   1.1680  1.2581  1.3108  1.3636  1.3984  1.4332  1.4531  1.4733  1.493
0.8   1.1516  1.2448  1.2905  1.3362  1.3665  1.3969  1.4110  1.4253  1.43935
0.9   1.1490  1.2299  1.2819  1.3340  1.3588  1.3836  1.4021  1.4208  1.4391

m = 5
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.1995  1.3145  1.3860  1.4575  1.5012  1.5448  1.5726  1.6006  1.6281
0.4   1.1938  1.3042  1.3746  1.4450  1.4839  1.5227  1.5472  1.5719  1.5962
0.5   1.1892  1.3014  1.3636  1.4258  1.4671  1.5083  1.5316  1.5551  1.5782
0.8   1.1766  1.2875  1.3470  1.4064  1.4354  1.4643  1.4836  1.5031  1.5222
0.9   1.1743  1.2700  1.3276  1.3852  1.4158  1.4463  1.4670  1.4878  1.5083

m = 6
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.2157  1.3430  1.4208  1.4986  1.5468  1.5950  1.6266  1.6584  1.68965
0.4   1.2092  1.3335  1.4078  1.4820  1.5261  1.5702  1.5982  1.6266  1.6544
0.5   1.2099  1.3268  1.3971  1.4674  1.5113  1.5552  1.5810  1.6071  1.6326
0.8   1.1947  1.3023  1.3697  1.4372  1.4731  1.5091  1.5291  1.5492  1.56895
0.9   1.1918  1.2901  1.3551  1.4200  1.4554  1.4908  1.5115  1.5325  1.5531

m = 7
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.2319  1.3715  1.4556  1.5397  1.5925  1.6452  1.6805  1.7162  1.7512
0.4   1.2245  1.3628  1.4409  1.5190  1.5683  1.6176  1.6493  1.6813  1.7126
0.5   1.2305  1.3521  1.4305  1.5089  1.5555  1.6021  1.6304  1.6590  1.687
0.8   1.2128  1.3170  1.3925  1.4679  1.5109  1.5539  1.5745  1.5953  1.6157
0.9   1.2092  1.3102  1.3825  1.4548  1.4950  1.5352  1.5561  1.5772  1.5979

m = 8
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.2423  1.3898  1.4787  1.5676  1.6240  1.6805  1.7180  1.7558  1.7929
0.4   1.2360  1.3783  1.4620  1.5456  1.5981  1.6505  1.6843  1.7183  1.7517
0.5   1.2370  1.3680  1.4506  1.5332  1.5831  1.6329  1.6639  1.6952  1.7259
0.8   1.2202  1.3313  1.4110  1.4907  1.5364  1.5822  1.6069  1.6318  1.6563
0.9   1.2244  1.3498  1.4156  1.4814  1.5222  1.5630  1.5876  1.6124  1.6368

m = 9
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.2529  1.4084  1.5021  1.5958  1.6559  1.7161  1.7558  1.7958  1.835
0.4   1.2477  1.3939  1.4832  1.5725  1.6282  1.6838  1.7196  1.7558  1.7911
0.5   1.2436  1.3840  1.4709  1.5577  1.6109  1.6640  1.6978  1.7318  1.7652
0.8   1.2277  1.3457  1.4297  1.5137  1.5622  1.6107  1.6396  1.6687  1.6973
0.9   1.2398  1.3897  1.4490  1.5083  1.5497  1.5911  1.6194  1.6480  1.676

m = 10
ρ     n=2     n=3     n=4     n=5     n=6     n=7     n=8     n=9     n=10
0.2   1.2632  1.4265  1.5250  1.6234  1.6872  1.7510  1.7928  1.8350  1.8764
0.4   1.2591  1.4092  1.5041  1.5989  1.6577  1.7164  1.7542  1.7924  1.8299
0.5   1.2500  1.3997  1.4908  1.5818  1.6382  1.6945  1.7309  1.7677  1.8038
0.8   1.2351  1.3598  1.4481  1.5363  1.5875  1.6387  1.6716  1.7049  1.7375
0.9   1.2548  1.4289  1.4818  1.5346  1.5766  1.6186  1.6506  1.6829  1.7146
Barron, Y. The Delay Time Profile of Multistage Networks with Synchronization. Mathematics 2023, 11, 3232. https://doi.org/10.3390/math11143232