Article

On a Queuing Model with Customers Arriving According to a Diffusion Process

Mario Lefebvre

Department of Mathematics and Industrial Engineering, Polytechnique Montréal, C.P. 6079, Succursale Centre-Ville, Montréal, QC H3C 3A7, Canada
Axioms 2025, 14(12), 868; https://doi.org/10.3390/axioms14120868
Submission received: 31 October 2025 / Revised: 17 November 2025 / Accepted: 18 November 2025 / Published: 27 November 2025

Abstract

In the case of heavy traffic, the arrival process in a queuing model can be approximated by a diffusion process. In this paper, we consider such a model. The number X ( t ) of customers in the system at time t evolves according to a degenerate two-dimensional diffusion process. In a special case, the distribution of X ( t ) is calculated explicitly. Moreover, a stochastic control problem known as a homing problem is formulated, and the equation satisfied by the value function is given. Particular homing problems are solved explicitly.

1. Introduction

Queuing theory originated from the pioneering work of Danish engineer Agner Krarup Erlang. In the early 20th century, he developed the first mathematical models to answer questions related to the number of telephone lines and operators needed to satisfy a random demand; see refs. [1,2]. Since then, this theory has become an important subject in many fields, notably in operations research, telecommunications, and computer system design.
Seminal works on queuing theory include the papers by Khinchine [3], Kendall [4], and Jackson [5]. Khinchine obtained a formula that characterizes the waiting-time and queue-length distributions in the M/G/1 queue, which is one of the most fundamental results in queuing theory. In his paper, Kendall proposed the notation (such as M/M/1, M/G/1) that is widely used to classify queuing systems. Jackson showed that under the assumptions of Poisson arrivals and exponential service times, the stationary distribution of open queuing networks factorizes into a product of independent node distributions.
Finally, classic books on this subject include the monographs by Cox and Smith [6] and Kleinrock [7].
In classical queuing models, the arrivals of customers constitute a discrete-space stochastic process. For instance, in the case of the M/M/s model, where s is the number of servers, customers arrive according to a Poisson process. However, in some applications, arrivals occur almost according to a diffusion process. This is true, in particular, in the case of heavy traffic (for instance, on the Internet). See, for example, Lee et al. [8] and Lee and Weerasinghe [9].
In this paper, which is a vastly expanded version of the conference paper [10], we assume that the number X ( t ) of customers in the system at time t is such that
dX(t) = \rho[X(t), Y(t)]\,dt - k\,X(t)\,dt,  (1)
dY(t) = m[Y(t)]\,dt + \{v[Y(t)]\}^{1/2}\,dB(t),  (2)
where {B(t), t ≥ 0} is a one-dimensional standard Brownian motion. The functions m(·) ∈ ℝ and v(·) > 0 are such that {Y(t), t ≥ 0} is a diffusion process. Moreover, the non-negative constant k is the rate at which the customers are served. The function ρ(·, ·) should be such that if k = 0, then X(t) will increase with time t.
This type of degenerate two-dimensional diffusion process, which was proposed by Rishel [11], has been used in reliability theory to model the wear of devices. Indeed, wear should be strictly increasing with time. If we want to model the remaining lifetime instead, then it should be decreasing with time. In ref. [11], the model introduced is in fact multidimensional. Furthermore, the infinitesimal parameters of the process {Y(t), t ≥ 0} may depend on X(t). Thus, Equation (2) can be replaced by
dY_i(t) = m[X(t), Y_i(t)]\,dt + \{v[X(t), Y_i(t)]\}^{1/2}\,dB_i(t),  (3)
for i = 1, …, n.
Remark 1.
(i) With this type of model, in which the variations in the number of customers in the system are approximated by a diffusion process, there is no individual service time distribution as such. We see that if ρ(X(t), Y(t)) ≡ 0 in Equation (1), then X(t) = x e^{−kt}, where x := X(0). Hence the statement that the constant k is the service rate of customers. (ii) Several authors have used a diffusion process as an approximation of the variations in X(t). A commonly used model is reflected Brownian motion (see, for instance, refs. [12,13]), so that the number of customers cannot become negative. With the model that we propose, this result can be guaranteed without having to introduce a reflecting barrier. (iii) To implement the proposed model, we must first identify the process {Y(t), t ≥ 0} on which the variations in X(t) depend. In the case of the application to reliability theory, Y(t) can be an environmental variable, such as temperature or the speed at which the machine is used. Here, Y(t) could be the market share of a company at time t. This share varies due, in particular, to competition between companies. The variable Y(t) could also be the pool of all potential customers for the company. This pool may increase or decrease, depending on the activities of the company we are interested in, for example by opening branches in new countries or closing others. Next, we would need to estimate the infinitesimal parameters (that is, m(·) and v(·)) of {Y(t), t ≥ 0}. Finally, we would also need to estimate the function ρ(·, ·) and the constant k using real data.
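To make the model concrete, the system (1)–(2) can be simulated directly by the Euler–Maruyama method. The following minimal Python sketch does so for the special case ρ[x, y] = c y, m(·) ≡ μ and v(·) ≡ σ² studied in Section 2; the parameter values are purely illustrative and not taken from the paper.

import numpy as np

# Euler-Maruyama sketch for the system (1)-(2) with rho(x, y) = c*y,
# m(y) = mu and v(y) = sigma^2 (the particular case of Section 2).
# All parameter values are illustrative.
rng = np.random.default_rng(0)
c, k = 1.0, 0.5            # proportionality constant and service rate
mu, sigma = 2.0, 0.2       # drift and dispersion parameter of Y(t)
x, y = 10.0, 5.0           # X(0) and Y(0)
T, n = 10.0, 10_000        # time horizon and number of steps
dt = T / n

for _ in range(n):
    x += (c * y - k * x) * dt                                   # Equation (1)
    y += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()  # Equation (2)

print(f"simulated number of customers at time T: X(T) = {x:.2f}")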
In the next section, a particular case for the various functions in Equations (1) and (2) will be considered. Then, in Section 3, an optimal control problem for the two-dimensional process { ( X ( t ) , Y ( t ) ) , t 0 } will be studied. In Section 4, explicit and exact solutions to particular cases of this control problem will be presented. Finally, some concluding remarks will end this paper in Section 5.

2. A Particular Case

Suppose that m(·) ≡ μ > 0 and v(·) ≡ σ² in Equation (2). Then, {Y(t), t ≥ 0} is a Wiener process with positive drift μ and dispersion parameter σ > 0. A Wiener process, being a Gaussian process, can take both positive and negative values. Therefore, the function ρ(·, ·) should be such that ρ[X(t), Y(t)] is non-negative. For example, we could take ρ[X(t), Y(t)] = Y²(t). However, if we assume that y := Y(0) and μ are both large enough, and that σ is small, then the probability that Y(t) becomes negative is negligible.
We choose ρ [ X ( t ) , Y ( t ) ] = c Y ( t ) , with c being a positive constant. With this choice, we can appeal to the following proposition to compute the joint probability density function of the random vector ( X ( t ) , Y ( t ) ) .
Proposition 1
([14]). Let {X(t), t ≥ 0} be an n-dimensional stochastic process defined by
dX(t) = \left(A\,X(t) + a\right) dt + N^{1/2}\,dB(t),
where {B(t), t ≥ 0} is an n-dimensional standard Brownian motion, A is a square matrix of order n, a is an n-dimensional vector, and N^{1/2} is a positive definite square matrix of order n. Then, given that X(t_0) = x, we may write that
X(t) \sim \mathrm{N}\big(m(t), K(t)\big) \quad \text{for } t \ge t_0,
where
m(t) := \Phi(t)\left[x + \int_{t_0}^{t} \Phi^{-1}(u)\,a\,du\right]
and
K(t) := \Phi(t)\left[\int_{t_0}^{t} \Phi^{-1}(u)\,N\,[\Phi^{-1}(u)]'\,du\right]\Phi'(t),
where the prime denotes the transpose of the matrix, and the function Φ(t) is given by
\Phi(t) := e^{A(t - t_0)} = \sum_{n=0}^{\infty} \frac{A^{n}\,(t - t_0)^{n}}{n!}.
Remark 2.
In words, the above proposition tells us that any affine transformation of an n-dimensional standard Brownian motion (which is a Gaussian process) remains a Gaussian process. Moreover, it provides the formulas needed to compute the mean and the covariance function of the new process.
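In practice, the mean vector and the covariance matrix in Proposition 1 can also be evaluated numerically. The following Python sketch (the function name, the choice of SciPy routines and the one-dimensional example are illustrative, not part of the paper) transcribes the three formulas above directly:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

def gaussian_moments(A, a, N, x0, t, t0=0.0):
    """Mean m(t) and covariance K(t) of Proposition 1, evaluated with the
    matrix exponential and numerical quadrature (illustrative sketch)."""
    Phi = lambda s: expm(A * (s - t0))
    Phi_t = Phi(t)
    m_int, _ = quad_vec(lambda u: np.linalg.solve(Phi(u), a), t0, t)
    def K_integrand(u):
        Pinv = np.linalg.inv(Phi(u))
        return Pinv @ N @ Pinv.T
    K_int, _ = quad_vec(K_integrand, t0, t)
    return Phi_t @ (x0 + m_int), Phi_t @ K_int @ Phi_t.T

# One-dimensional Ornstein-Uhlenbeck example (purely illustrative values).
m, K = gaussian_moments(A=np.array([[-1.0]]), a=np.array([0.5]),
                        N=np.array([[0.04]]), x0=np.array([2.0]), t=1.0)
print(m, K)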
In our case, we have
A = \begin{pmatrix} -k & c \\ 0 & 0 \end{pmatrix}, \qquad a = \begin{pmatrix} 0 \\ \mu \end{pmatrix} \qquad \text{and} \qquad N^{1/2} = \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix}.
The coefficient matrix A contains the service rate k and the constant c. We chose ρ [ X ( t ) , Y ( t ) ] = c Y ( t ) ; that is, the value of X ( t ) is assumed to increase at a rate which is proportional to Y ( t ) (if k = 0 ), and c is the constant of proportionality. We also assumed that { Y ( t ) , t 0 } is a Wiener process with infinitesimal parameters μ and σ . The vector a contains the drift μ , and σ appears in the degenerate constant matrix N , which is the noise matrix.
Now, for n ≥ 1,
A^{n} = \begin{pmatrix} (-k)^{n} & c\,(-k)^{n-1} \\ 0 & 0 \end{pmatrix}.
Hence, if t_0 = 0, the function Φ(t) is given by
\Phi(t) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} -1 + e^{-kt} & \frac{c}{k}\left(1 - e^{-kt}\right) \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} e^{-kt} & \frac{c}{k}\left(1 - e^{-kt}\right) \\ 0 & 1 \end{pmatrix}.
Next, we find that
\int_{0}^{t} \Phi^{-1}(u)\,a\,du = \begin{pmatrix} -\dfrac{c\mu\left(e^{kt} - kt - 1\right)}{k^{2}} \\[2mm] \mu t \end{pmatrix}.
It follows that
m(t) = \begin{pmatrix} e^{-kt}\left[x - \dfrac{c\mu\left(e^{kt} - kt - 1\right)}{k^{2}}\right] - \dfrac{c\left(e^{-kt} - 1\right)\left(\mu t + y\right)}{k} \\[2mm] \mu t + y \end{pmatrix}.
Finally, with
N = \begin{pmatrix} 0 & 0 \\ 0 & \sigma^{2} \end{pmatrix},
we calculate
\int_{0}^{t} \Phi^{-1}(u)\,N\,[\Phi^{-1}(u)]'\,du = \begin{pmatrix} \dfrac{c^{2}\sigma^{2}\left(2kt + 3 - 4e^{kt} + e^{2kt}\right)}{2k^{3}} & -\dfrac{c\sigma^{2}\left(e^{kt} - kt - 1\right)}{k^{2}} \\[2mm] -\dfrac{c\sigma^{2}\left(e^{kt} - kt - 1\right)}{k^{2}} & \sigma^{2}\,t \end{pmatrix}
and
K(t) = \begin{pmatrix} \dfrac{c^{2}\sigma^{2}\left(2kt\,e^{2kt} - 3e^{2kt} + 4e^{kt} - 1\right)e^{-2kt}}{2k^{3}} & \dfrac{c\sigma^{2}\left(kt - 1 + e^{-kt}\right)}{k^{2}} \\[2mm] \dfrac{c\sigma^{2}\left(kt - 1 + e^{-kt}\right)}{k^{2}} & \sigma^{2}\,t \end{pmatrix}.
Notice that for t large, X ( t ) has a Gaussian distribution with mean and variance that are both (approximately) affine functions of t.
Making use of the above results, we can compute the probability that X ( t ) will become equal to zero, so that the queue is empty, as a function of time. Similarly, if the system capacity is finite, we can easily compute the probability that the system will become saturated.
The actual (approximate) number of customers in the system is given by
X_{r}(t) := \begin{cases} 0 & \text{if } X(t) \le 0, \\ X(t) & \text{if } 0 < X(t) < r, \\ r & \text{if } X(t) \ge r, \end{cases}
where r is the system capacity.
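For instance, with the mean and variance obtained above, the probability that the queue is empty at time t and the probability that the system is saturated can be evaluated at once from the Gaussian distribution of X(t). A minimal sketch (with illustrative parameter values, not taken from the paper) is the following:

import numpy as np
from scipy.stats import norm

def mean_var_X(t, x, y, c, k, mu, sigma):
    """Mean and variance of X(t); algebraically equivalent forms of the
    expressions derived above for m(t) and K(t)."""
    ekt = np.exp(-k * t)
    m = ekt * x + c * mu * (k * t - 1.0 + ekt) / k**2 + c * y * (1.0 - ekt) / k
    v = c**2 * sigma**2 * (2.0*k*t - 3.0 + 4.0*ekt - ekt**2) / (2.0 * k**3)
    return m, v

c, k, mu, sigma, x, y, r = 1.0, 0.5, 2.0, 0.2, 10.0, 5.0, 60.0   # illustrative values
for t in (1.0, 5.0, 20.0):
    m, v = mean_var_X(t, x, y, c, k, mu, sigma)
    p_empty = norm.cdf(0.0, loc=m, scale=np.sqrt(v))   # P[X(t) <= 0]
    p_full = norm.sf(r, loc=m, scale=np.sqrt(v))       # P[X(t) >= r]
    print(f"t = {t:4.1f}: E[X(t)] = {m:6.2f}, P(empty) = {p_empty:.2e}, P(saturated) = {p_full:.2e}")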
Corollary 1.
Suppose that the system (1), (2) is of the form
dX(t) = c \ln[Y(t)]\,dt - k\,X(t)\,dt,
dY(t) = \left(\mu + \tfrac{1}{2}\sigma^{2}\right) Y(t)\,dt + \{\sigma^{2}\,Y^{2}(t)\}^{1/2}\,dB(t),
where c > 0, μ ∈ ℝ and σ > 0. Then X(t) has a Gaussian distribution with mean and variance given by
\mathrm{E}[X(t)] = e^{-kt}\left[x - \frac{c\mu\left(e^{kt} - kt - 1\right)}{k^{2}}\right] - \frac{c\left(e^{-kt} - 1\right)\left(\mu t + y\right)}{k}
and
\mathrm{VAR}[X(t)] = \frac{c^{2}\sigma^{2}\left(2kt\,e^{2kt} - 3e^{2kt} + 4e^{kt} - 1\right)e^{-2kt}}{2k^{3}}.
Proof. 
The diffusion process {Y(t), t ≥ 0} is a geometric Brownian motion with infinitesimal mean (μ + ½σ²) y and infinitesimal variance σ² y². A geometric Brownian motion is non-negative (if it starts at Y(0) = y > 0), since it can be expressed as the exponential of a Wiener process. If we define Z(t) = ln[Y(t)], then {Z(t), t ≥ 0} is a Wiener process with infinitesimal mean μ and infinitesimal variance σ². The results are then deduced at once from Proposition 1. □
Remark 3.
We assume that both Y ( 0 ) = y and the parameter μ are large enough (and σ small enough) for the probability P [ Y ( t ) < 1 ] to be negligible.
Next, assume that the system (1), (2) is
dX(t) = c\,Y(t)\,dt - k\,X(t)\,dt,  (22)
dY(t) = m_0\,Y(t)\,dt + \{v_0\,Y^{2}(t)\}^{1/2}\,dB(t),  (23)
where m_0 ∈ ℝ and v_0 > 0. Then, X(t) will not have a Gaussian distribution. We can at least compute its expected value.
Proposition 2.
The expected value of the random variable X ( t ) in the system (22), (23) is
\mathrm{E}[X(t)] = \frac{2cy\left(e^{(v_0/2)t + m_0 t + kt} - 1\right)e^{-kt}}{v_0 + 2k + 2m_0} + x\,e^{-kt},
where x = X ( 0 ) and y = Y ( 0 ) .
Proof. 
The solution of the ordinary differential equation (ODE)
X'(t) = c\,Y(t) - k\,X(t)
that satisfies the initial condition X ( 0 ) = x is
X(t) = \left[\int_{0}^{t} c\,Y(w)\,e^{kw}\,dw + x\right] e^{-kt}.
Moreover, we have
\mathrm{E}[Y(t)] = y \exp\left[\left(m_0 + \frac{v_0}{2}\right) t\right].
The expected value of X ( t ) is then obtained by computing
\mathrm{E}[X(t)] = \left[\int_{0}^{t} c\,y \exp\left[\left(m_0 + \frac{v_0}{2}\right) w\right] e^{kw}\,dw + x\right] e^{-kt}. \quad \square
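The integration in the last display is elementary; as a quick check, the following symbolic computation (a SymPy sketch with symbol names chosen here, not part of the paper) evaluates it, and the output can be compared with the closed-form expression stated in Proposition 2:

import sympy as sp

t, w, c, y, x, k, m0, v0 = sp.symbols('t w c y x k m_0 v_0', positive=True)
# E[X(t)] = ( int_0^t c*y*exp((m_0 + v_0/2)*w) * exp(k*w) dw + x ) * exp(-k*t)
EX = (sp.integrate(c*y*sp.exp((m0 + v0/2)*w)*sp.exp(k*w), (w, 0, t)) + x) * sp.exp(-k*t)
print(sp.simplify(EX))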
Remark 4.
In the special case when m_0 = -v_0/2, so that E[Y(t)] ≡ y, we have
\lim_{t \to \infty} \mathrm{E}[X(t)] = \frac{c\,y}{k}.
In the next section, an optimal control problem for the two-dimensional stochastic process ( X ( t ) , Y ( t ) ) will be examined.

3. A Homing Problem

In this section, we suppose that the term k X ( t ) in Equation (1) is replaced by the function b [ X ( t ) , Y ( t ) ] u [ X ( t ) , Y ( t ) ] , where b ( · , · ) is a non-negative function and u ( · , · ) is a control variable that is assumed to be a continuous function.
Let
\tau(x, y) := \inf\{t > 0 : (X(t), Y(t)) \in D \subset \mathbb{R}^{2} \mid X(0) = x,\; Y(0) = y,\; (x, y) \notin D\}.
The random variable τ ( x , y ) is called a first-passage time in probability theory.
Our aim is to find the control that minimizes the expected value of the cost functional
J(x, y) := \int_{0}^{\tau(x,y)} \left\{\tfrac{1}{2}\,q[X(t), Y(t)]\,u^{2}[X(t), Y(t)] + \theta\right\} dt + K[X(\tau(x,y)), Y(\tau(x,y))],  (31)
where q ( · , · ) > 0 , θ R , and K ( · , · ) is the final cost function. This type of stochastic control problem, in which a stochastic process is controlled until a given event occurs, is known as a homing problem; see Whittle [15]. The author of the current paper has written several articles on homing problems; see, for example, ref. [16]. Other papers on this subject are refs. [11,17].
Many papers have been published on the optimal control of queuing systems. Sometimes the authors assume that it is possible to control the service rate and/or the arrival rate of customers into the system.
In ref. [18], Laxmi and Jyothsna considered a discrete-time queue. Their objective was to minimize costs by optimizing the service rates. To do so, they used swarm optimization. In Tian et al. [19], the authors also treated a problem for a queuing system with varying service rates. Other papers in which the authors assumed that the service rate could be controlled are refs. [20,21,22,23,24]. In Wu et al. [25], the aim was rather to control customer arrivals. The main difference between these papers and the work presented here is that in our case the final time is a random variable. See Lefebvre and Yaghoubi [26] for other references on the optimal control of queuing systems.
To find the optimal control u * [ X ( t ) , Y ( t ) ] , we can try using dynamic programming, which enables us to express u * in terms of the value function
F(x, y) := \inf_{u[X(t), Y(t)],\; t \in [0, \tau(x,y))} \mathrm{E}[J(x, y)].
The function F(x, y) gives the smallest expected cost (or largest expected reward, if the cost is negative), starting from X(0) = x and Y(0) = y.
Using the results in ref. [15], we can state the following proposition.
Proposition 3.
The value function F ( x , y ) satisfies the dynamic programming equation
0 = inf u ( x , y ) { 1 2 q ( x , y ) u 2 ( x , y ) + θ + [ ρ ( x , y ) b ( x , y ) u ( x , y ) ] F x ( x , y ) + m ( y ) F y ( x , y ) + 1 2 v ( y ) F y y ( x , y ) } ,
where F x ( x , y ) = x F ( x , y ) , etc. The equation is valid for ( x , y ) D . We have the boundary condition F ( x , y ) = K ( x , y ) if ( x , y ) D . Moreover, the optimal control is given by
u * ( x , y ) = b ( x , y ) q ( x , y ) F x ( x , y ) .
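Indeed, only two terms in Equation (33) depend on the control, and since q(x, y) > 0,
\min_{u(x,y)} \left\{ \tfrac{1}{2}\,q(x,y)\,u^{2}(x,y) - b(x,y)\,u(x,y)\,F_{x}(x,y) \right\} = -\,\frac{b^{2}(x,y)}{2\,q(x,y)} \left[F_{x}(x,y)\right]^{2},
the minimum being attained at the value of u(x, y) given in Equation (34).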
If we substitute the expression for u * ( x , y ) into the dynamic programming equation, we find that to obtain the value function, we must solve the second-order non-linear partial differential equation (PDE)
\theta - \frac{b^{2}(x,y)}{2\,q(x,y)} \left[F_{x}(x,y)\right]^{2} + \rho(x,y)\,F_{x}(x,y) + m(y)\,F_{y}(x,y) + \tfrac{1}{2}\,v(y)\,F_{yy}(x,y) = 0.  (35)
Assume now that instead of the service rate, the optimizer can control the arrival rate of the process, so that the two-dimensional process ( X ( t ) , Y ( t ) ) is defined by the system of stochastic differential equations
dX(t) = \rho[X(t), Y(t)]\,dt - k\,X(t)\,dt,  (36)
dY(t) = -\,b[X(t), Y(t)]\,u[X(t), Y(t)]\,dt + m[Y(t)]\,dt + \{v[Y(t)]\}^{1/2}\,dB(t).  (37)
Proceeding as above, we obtain the following corollary.
Corollary 2.
In the case of the controlled process defined in Equations (36) and (37), the optimal control is given by
u^{*}(x, y) = \frac{b(x,y)}{q(x,y)}\,F_{y}(x, y).  (38)
Furthermore, the value function satisfies the PDE
0 = \theta - \frac{b^{2}(x,y)}{2\,q(x,y)} \left[F_{x}(x,y)\right]^{2} + \left[\rho(x,y) - kx\right] F_{x}(x,y) + m(y)\,F_{y}(x,y) + \tfrac{1}{2}\,v(y)\,F_{yy}(x,y).  (39)
Finally, in some cases, Equation (39) (as well as Equation (35)) can be linearized.
Proposition 4.
Suppose that there exists a positive constant α such that
\alpha = \frac{b^{2}(x,y)}{q(x,y)\,v(y)}.  (40)
Then the function
G(x, y) := e^{-\alpha F(x,y)}  (41)
satisfies the linear second-order PDE
\alpha\,\theta\,G(x, y) = \left[\rho(x,y) - kx\right] G_{x}(x,y) + m(y)\,G_{y}(x,y) + \tfrac{1}{2}\,v(y)\,G_{yy}(x,y).  (42)
The boundary condition is G(x, y) = e^{-\alpha K(x,y)} if (x, y) ∈ D.
Remark 5.
The linear PDE in Equation (42) is in fact the Kolmogorov backward equation satisfied by the moment-generating function of the random variable τ(x, y):
M(x, y) := \mathrm{E}\left[e^{-\omega\,\tau(x,y)}\right],
with ω := αθ > 0, for the uncontrolled process (X_0(t), Y_0(t)) obtained by setting u[X(t), Y(t)] ≡ 0 in Equation (37). Moreover, the boundary condition is the appropriate one, and we assume that P[τ(x, y) < ∞] = 1 for the uncontrolled process.
In the next section, explicit solutions to particular homing problems will be presented.

4. Explicit Solutions

Problem I.
The first particular homing problem that we consider is the one for which the function ρ[X(t), Y(t)] in Equation (1) is equal to c Y(t), where c is a positive constant, and the term k X(t) is replaced by b_0 u[X(t), Y(t)], with b_0 > 0. Moreover, we take
q[X(t), Y(t)] = q_0\,X^{-2}(t),
where q 0 > 0 , and we define the first-passage time
\tau(x, y) = \inf\left\{t > 0 : \frac{Y(t)}{X(t)} = k_1 \text{ or } k_2 \;\Big|\; X(0) = x,\; Y(0) = y,\; 0 < k_1 < \frac{y}{x} < k_2\right\},
in which {Y(t), t ≥ 0} is a geometric Brownian motion that satisfies the stochastic differential Equation (23).
We assume that θ = 0 in Equation (31) and that the final cost is
K[X(\tau(x,y)), Y(\tau(x,y))] = K_i \quad \text{if } \frac{Y(\tau(x,y))}{X(\tau(x,y))} = k_i, \quad \text{for } i = 1, 2.
To obtain the value function, we must solve the PDE
-\frac{b^{2}(x,y)}{2\,q(x,y)} \left[F_{x}(x,y)\right]^{2} + c\,y\,F_{x}(x,y) + m_0\,y\,F_{y}(x,y) + \tfrac{1}{2}\,v_0\,y^{2}\,F_{yy}(x,y) = 0,  (47)
subject to the boundary conditions F(x, y) = K_i if y/x = k_i, for i = 1, 2.
Now, based on the boundary conditions, we look for a solution of the form
F(x, y) = H(z) \quad \text{with } z := y/x.
This is an application of the method of similarity solutions. We find that Equation (47) is transformed into the non-linear second-order ODE
-\frac{1}{2}\,\frac{b_0^{2}}{q_0}\,z^{2}\left[H'(z)\right]^{2} - c\,z^{2}\,H'(z) + m_0\,z\,H'(z) + \tfrac{1}{2}\,v_0\,z^{2}\,H''(z) = 0.
Next,
\beta := \frac{q_0\,v_0}{b_0^{2}}
is a positive constant. We set
\Psi(z) := e^{-H(z)/\beta}.
We find that, if m 0 = 0 , then the function Ψ ( z ) satisfies the simple linear ODE
c\,\Psi'(z) - \tfrac{1}{2}\,v_0\,\Psi''(z) = 0,
whose general solution is
\Psi(z) = c_1 + c_2\,e^{2cz/v_0}.
The constants c_1 and c_2 are such that the boundary conditions Ψ(z) = e^{−K_i/β} if z = k_i, for i = 1, 2, are satisfied. We find that
\Psi(z) = \frac{e^{\frac{2cz\beta - K_1 v_0}{v_0\beta}} - e^{\frac{2cz\beta - K_2 v_0}{v_0\beta}} + e^{\frac{2ck_1\beta - K_2 v_0}{v_0\beta}} - e^{\frac{2ck_2\beta - K_1 v_0}{v_0\beta}}}{e^{2ck_1/v_0} - e^{2ck_2/v_0}}
for k_1 ≤ z ≤ k_2.
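The substitution used above is easily verified symbolically. The following SymPy sketch (not part of the original derivation) checks that, when m_0 = 0, the non-linear ODE for H(z) is equivalent to the linear ODE for Ψ(z) under Ψ(z) = e^{−H(z)/β} with β = q_0 v_0/b_0²:

import sympy as sp

z = sp.symbols('z', positive=True)
b0, q0, v0, c = sp.symbols('b_0 q_0 v_0 c', positive=True)
Psi = sp.Function('Psi')(z)

beta = q0 * v0 / b0**2
H = -beta * sp.log(Psi)                       # H(z) = -beta * ln(Psi(z))

ode_H = (-sp.Rational(1, 2) * b0**2 / q0 * z**2 * sp.diff(H, z)**2
         - c * z**2 * sp.diff(H, z)
         + sp.Rational(1, 2) * v0 * z**2 * sp.diff(H, z, 2))
ode_Psi = c * sp.diff(Psi, z) - sp.Rational(1, 2) * v0 * sp.diff(Psi, z, 2)

# The difference below simplifies to zero, so H solves the reduced ODE
# exactly when Psi solves the linear ODE above.
print(sp.simplify(ode_H - z**2 * beta / Psi * ode_Psi))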
From the function Ψ(z), we obtain the value function F(x, y) = H(z) = -β ln[Ψ(z)], and hence the optimal control (see Equation (34))
u^{*}(x, y) = \frac{b_0\,x^{2}}{q_0}\,F_{x}(x, y).
To illustrate the results, let us set c = b 0 = q 0 = v 0 = 1 , k 1 = 1 , k 2 = 2 , K 1 = 0 and K 2 = 1 . The constant β is equal to 1, and we calculate
\Psi(z) = \frac{e^{2z-1} - e^{2z} + e^{4} - e}{e^{4} - e^{2}}
for 1 ≤ z ≤ 2. The function H(z) is presented in Figure 1. Moreover, the optimal control u*(x, 1) for x ∈ [1, 2] and u*(1, y) for y ∈ [1, 2] are shown in Figure 2 and Figure 3, respectively.
Remark 6.
Let F 0 ( x , y ) denote the expected cost if one uses no control. We have
F_0(x, y) = \mathrm{E}\big[K[X(\tau(x,y)), Y(\tau(x,y))]\big].
This function satisfies the partial differential equation
\tfrac{1}{2}\,v_0\,y^{2}\,\frac{\partial^{2} F_0(x,y)}{\partial y^{2}} + m_0\,y\,\frac{\partial F_0(x,y)}{\partial y} + c\,y\,\frac{\partial F_0(x,y)}{\partial x} = -1.
The boundary conditions are F 0 ( x , y ) = K i if y / x = k i , for i = 1 , 2 .
Let us look for a solution of the form F 0 ( x , y ) = H 0 ( z : = y / x ) . We find that we must solve the ODE
\tfrac{1}{2}\,v_0\,z^{2}\,H_0''(z) + \left(m_0\,z - c\,z^{2}\right) H_0'(z) = -1.  (59)
With m_0 = 0 (as above), the general solution of Equation (59) is
H_0(z) = -\int \frac{4\,e^{2z}\,\mathrm{Ei}_1(2z)\,z - e^{2z}\,c_1\,z - 2}{z}\,dz + c_2,  (60)
where Ei_1(·) is an exponential integral function. With the same choices for the various constants as above, the particular solution that satisfies the boundary conditions H_0(1) = 0 and H_0(2) = 1 can be expressed as follows:
H_0(z) = -\int_{1}^{z} \frac{4\,e^{2w}\,\mathrm{Ei}_1(2w)\,w - e^{2w}\,c_1\,w - 2}{w}\,dw,  (61)
with c_1 ≈ 0.0292. This function is displayed in Figure 1. We clearly see the improvement in the expected cost when one uses the optimal control rather than no control at all.
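The comparison shown in Figure 1 is easy to reproduce numerically. The sketch below (the routine choices are illustrative, not part of the paper) evaluates the value function H(z) = −ln Ψ(z) and the uncontrolled expected cost H_0(z), recovering in particular the constant c_1 ≈ 0.0292:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import exp1             # Ei_1(x)

# Problem I with c = b_0 = q_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0, K_2 = 1.
def Psi(z):
    return (np.exp(2*z - 1) - np.exp(2*z) + np.exp(4) - np.e) / (np.exp(4) - np.exp(2))

def H(z):                                  # value function, H = -beta*ln(Psi), beta = 1
    return -np.log(Psi(z))

def H0_prime(w, c1):                       # integrand in the expression for H_0
    return -(4*np.exp(2*w)*exp1(2*w)*w - np.exp(2*w)*c1*w - 2) / w

# c_1 is fixed by the boundary condition H_0(2) = 1.
c1 = brentq(lambda c: quad(H0_prime, 1, 2, args=(c,))[0] - 1.0, -1.0, 1.0)
print(f"c_1 = {c1:.4f}")                   # approximately 0.0292

for z in np.linspace(1.0, 2.0, 6):
    H0_z = quad(H0_prime, 1, z, args=(c1,))[0]
    print(f"z = {z:.1f}: H(z) = {H(z):.4f}, H_0(z) = {H0_z:.4f}")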
Problem II.
We now consider the first-passage time
\tau(x, y) = \inf\{t > 0 : X(t)\,Y(t) = k_1 \text{ or } k_2 \mid X(0) = x,\; Y(0) = y,\; 0 \le k_1 < x\,y < k_2\},
and we suppose that the system (36), (37) is
dX(t) = \rho[X(t), Y(t)]\,dt - k\,X(t)\,dt,
dY(t) = -\,b_0\,X(t)\,u[X(t), Y(t)]\,dt + m_0\,Y(t)\,dt + \{v_0\,Y^{2}(t)\}^{1/2}\,dB(t),
where b_0 > 0, m_0 ∈ ℝ and v_0 > 0. Thus, {Y(t), t ≥ 0} is a controlled geometric Brownian motion. Moreover, the cost functional is
J(x, y) = \int_{0}^{\tau(x,y)} \tfrac{1}{2}\,q_0\,u^{2}[X(t), Y(t)]\,dt + K[X(\tau(x,y)), Y(\tau(x,y))],
where q 0 > 0 and the final cost is given by
K[X(\tau(x,y)), Y(\tau(x,y))] = K_i \quad \text{if } X(\tau(x,y))\,Y(\tau(x,y)) = k_i, \quad \text{for } i = 1, 2.
The value function satisfies the PDE
-\frac{b_0^{2}}{2\,q_0}\,x^{2}\left[F_{x}(x,y)\right]^{2} + \left[\rho(x,y) - kx\right] F_{x}(x,y) + m_0\,y\,F_{y}(x,y) + \tfrac{1}{2}\,v_0\,y^{2}\,F_{yy}(x,y) = 0,
subject to F(x, y) = K_i if x y = k_i, for i = 1, 2. We find that the function H(z = x y) is a solution of the ODE
-\frac{1}{2}\,\frac{b_0^{2}}{q_0}\,z^{2}\left[H'(z)\right]^{2} + \left[\rho(x,y)\,y - k\,z\right] H'(z) + m_0\,z\,H'(z) + \tfrac{1}{2}\,v_0\,z^{2}\,H''(z) = 0.  (68)
The boundary conditions are H(z) = K_i if z = k_i, for i = 1, 2.
For the method of similarity solutions to work, the term ρ ( x , y ) y must be expressed in terms of the similarity variable z. We will consider two cases.
Case 1.
Let ρ(x, y) = ρ_0 x² y, where ρ_0 > 0. Then, since z > 0, Equation (68) becomes
-\frac{1}{2}\,\frac{b_0^{2}}{q_0}\,z\left[H'(z)\right]^{2} + \left(\rho_0\,z - k\right) H'(z) + m_0\,H'(z) + \tfrac{1}{2}\,v_0\,z\,H''(z) = 0.
Next, let
\Psi(z) := e^{-\gamma H(z)},
where
\gamma := \frac{b_0^{2}}{q_0\,v_0} > 0.
The function Ψ(z) satisfies the linear ODE
\left(\rho_0\,z - k + m_0\right) \Psi'(z) + \tfrac{1}{2}\,v_0\,z\,\Psi''(z) = 0.  (72)
The general solution of the above equation involves the special function known as the Whittaker function. Let us consider a special case: we set ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1. Equation (72) then reduces to
\Psi'(z) + \tfrac{1}{2}\,\Psi''(z) = 0,
whose general solution is
\Psi(z) = c_1 + c_2\,e^{-2z}.
The solution that satisfies the boundary conditions Ψ(1) = 1 and Ψ(2) = e^{−1} (since γ = 1) is
\Psi(z) = \frac{e^{-2z+3} + 1}{e + 1} \quad \text{for } 1 \le z \le 2.
We present the function H(z) in Figure 4. Furthermore, the optimal control is
u^{*}(x, y) = x\,F_{y}(x, y) = x^{2}\,H'(x y) = \frac{2\,x^{2}}{1 + e^{2xy - 3}}.
See Figure 5 and Figure 6.
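The expressions above are easily checked. The short SymPy sketch below (not part of the paper) verifies that the function Ψ(z) just obtained satisfies the reduced ODE and its boundary conditions, and recovers the derivative H′(z) appearing in the optimal control:

import sympy as sp

# Problem II, Case 1, with rho_0 = k = b_0 = q_0 = m_0 = v_0 = 1,
# k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1 (so that gamma = 1).
z = sp.symbols('z', positive=True)
Psi = (sp.exp(-2*z + 3) + 1) / (sp.E + 1)

print(sp.simplify(sp.diff(Psi, z) + sp.Rational(1, 2)*sp.diff(Psi, z, 2)))  # 0
print(sp.simplify(Psi.subs(z, 1)), sp.simplify(Psi.subs(z, 2)*sp.E))        # 1 and 1

H = -sp.log(Psi)                      # H(z) = -(1/gamma) * ln(Psi(z))
print(sp.simplify(sp.diff(H, z)))     # H'(z); equal to 2/(exp(2*z - 3) + 1)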
Case 2.
Suppose that ρ(x, y) = ρ_0/y, where ρ_0 > 0. Under the same assumptions as in Case 1, the ODE satisfied by the function Ψ(z) is now
\left(\rho_0 - k\,z + m_0\,z\right) \Psi'(z) + \tfrac{1}{2}\,v_0\,z^{2}\,\Psi''(z) = 0.
The particular solution that satisfies the boundary conditions Ψ(k_i) = e^{−γ K_i} =: κ_i, for i = 1, 2, is
\Psi(z) = \kappa_1 + \left(\kappa_2 - \kappa_1\right) \frac{\displaystyle\int_{k_1}^{z} e^{\frac{2\rho_0}{v_0 w}}\, w^{\frac{2(k - m_0)}{v_0}}\,dw}{\displaystyle\int_{k_1}^{k_2} e^{\frac{2\rho_0}{v_0 w}}\, w^{\frac{2(k - m_0)}{v_0}}\,dw}.
The function H ( z ) is displayed in Figure 7 when the constants are the same as in Case 1.
The optimal control is (see Equation (38))
u^{*}(x, y) = x^{2}\,H'(x y) \approx \frac{0.1517\,x^{2}\,e^{2/(xy)}}{1 - 0.1517 \displaystyle\int_{1}^{xy} e^{2/w}\,dw}.
As in Case 1, the functions u*(x, 1) and u*(1, y) are shown in Figure 8 and Figure 9, respectively.
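A brief numerical sketch (with the same constants as in Case 1; the quadrature routine is an illustrative choice) reproduces the coefficient 0.1517 and evaluates the optimal control:

import numpy as np
from scipy.integrate import quad

kappa1, kappa2 = 1.0, np.exp(-1.0)                  # kappa_i = exp(-gamma*K_i), gamma = 1
I, _ = quad(lambda w: np.exp(2.0 / w), 1.0, 2.0)    # normalizing integral over [k_1, k_2]
coef = (kappa1 - kappa2) / I
print(f"coefficient = {coef:.4f}")                  # approximately 0.1517

def u_star(x, y):
    """Optimal control u*(x, y) of Case 2 for k_1 < x*y < k_2."""
    z = x * y
    numerator = coef * x**2 * np.exp(2.0 / z)
    denominator = 1.0 - coef * quad(lambda w: np.exp(2.0 / w), 1.0, z)[0]
    return numerator / denominator

print(u_star(1.2, 1.0), u_star(1.0, 1.8))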

5. Conclusions

In this paper, a queuing model in which customers arrive (approximately) according to a degenerate two-dimensional diffusion process was studied. The model is such that, in the absence of service, the number of customers in the system is strictly increasing. Other authors have used diffusion processes as an approximation for the arrival of customers, but only in one dimension, for example a reflected Brownian motion.
Like any model, the one that we propose is based on simplifying assumptions. We believe it could be used successfully in certain applications, while traditional models might yield better results in others. The aim was to propose an alternative model that could prove useful in a variety of situations.
We wrote that the arrival process is approximated by a diffusion process. It should be emphasized that assuming that arrivals constitute a Poisson process, as in the case of the classic M / M / s model, is also only an approximation of reality. Using an exponential distribution for the time between successive customer arrivals in the system is a simplifying assumption. In some applications, approximating arrivals using a Poisson process may be more realistic than using a diffusion process, while in other applications, the opposite is true.
In a special case, we were able to derive the distribution of the number X(t) of customers in the system at time t. It would be interesting to obtain this distribution in other cases, or at least calculate the expected value of X(t), as we did when {Y(t), t ≥ 0} is a geometric Brownian motion.
In Section 3, a stochastic control problem of the homing type was formulated for the queuing model. This is the first time that a homing problem has been studied for a queuing model such as the one proposed in the paper. We gave the non-linear partial differential equation satisfied by the value function, and we saw that it is sometimes possible to linearize this equation. Finally, several problems were solved explicitly and exactly by making use of the method of similarity solutions. When this method does not apply, we could use numerical techniques to find the value function and hence the optimal control for any particular problem.

Funding

This research was supported by the Natural Sciences and Engineering Research Council of Canada.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The author wishes to thank the anonymous reviewers of this paper for their constructive comments.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Erlang, A.K. The theory of probabilities and telephone conversations. Nyt Tidss. Math. B 1909, 20, 33–39.
2. Erlang, A.K. Solution of some problems in the theory of probabilities of significance in automatic telephone exchanges. Elektroteknikeren 1917, 13, 5–13.
3. Khinchin, A.Y. Mathematical theory of a stationary queue. Mat. Sb. 1932, 39, 73–84.
4. Kendall, D.G. Some problems in the theory of queues. J. R. Stat. Soc. Ser. B Methodol. 1951, 13, 151–173.
5. Jackson, J.R. Networks of waiting lines. Oper. Res. 1957, 5, 518–521.
6. Cox, D.R.; Smith, W.L. Queues; Methuen: London, UK, 1961.
7. Kleinrock, L. Queueing Systems, Volume I: Theory; Wiley: New York, NY, USA, 1975.
8. Lee, H.W.; Yoon, S.H.; Lee, S.S. A continuous approximation for batch arrival queues with threshold. Computers Ops. Res. 1996, 23, 299–308.
9. Lee, C.; Weerasinghe, A. Convergence of a queueing system in heavy traffic with general patience-time distributions. Stoch. Process. Their Appl. 2011, 121, 2507–2552.
10. Lefebvre, M. A queuing model with arrivals according to a two-dimensional diffusion process. In Proceedings of the 5th Workshop on Intelligent Information Systems (WIIS2025), Chişinǎu, Republic of Moldova, 16–18 October 2025; pp. 206–213.
11. Rishel, R. Controlled wear process: Modeling optimal control. IEEE Trans. Autom. Control 1991, 36, 1100–1102.
12. Harrison, J.M.; Reiman, M.I. Reflected Brownian motion in an orthant. Ann. Probab. 1981, 9, 303–308.
13. Reiman, M.I. Open queueing networks in heavy traffic. Math. Oper. Res. 1984, 9, 441–458.
14. Lefebvre, M. Applied Stochastic Processes; Springer: New York, NY, USA, 2007.
15. Whittle, P. Optimization over Time; Wiley: Chichester, UK, 1982; Volume I.
16. Lefebvre, M. Minimizing or maximizing the first-passage time to a time-dependent boundary. Optimization 2022, 71, 387–401.
17. Makasu, C. Bounds for a risk-sensitive homing problem. Automatica 2024, 163, 111575.
18. Laxmi, P.V.; Jyothsna, K. Optimization of service rate in a discrete-time impatient customer queue using particle swarm optimization. In Distributed Computing and Internet Technology, Proceedings of the International Conference on Distributed Computing and Internet Technology, Bhubaneswar, India, 15–18 January 2016; Springer International Publishing: Cham, Switzerland, 2016; pp. 38–42.
19. Tian, R.; Su, S.; Zhang, Z.G. Equilibrium and social optimality in queues with service rate and customers’ joining decisions. Qual. Technol. Quant. Manag. 2024, 21, 1–34.
20. Chen, G.; Liu, Z.; Xia, L. Event-based optimization of service rate control in retrial queues. J. Oper. Res. Soc. 2023, 74, 979–991.
21. Dudin, A.; Dudin, S.; Dudina, O. Analysis of a queueing system with mixed service discipline. Methodol. Comput. Appl. Prob. 2023, 25, 19.
22. Lakkumikanthan, I.; Balasubramanian, S. Optimal control of service rates of discrete-time (s,S) queueing—Inventory systems with finite buffer. In Proceedings of the 5th International Conference on Problems of Cybernetics and Informatics, Baku, Azerbaijan, 28–30 August 2023; pp. 1–4.
23. Su, Y.; Li, J. Optimality of admission control in an M/M/1/N queue with varying services. Stoch. Model. 2021, 37, 317–334.
24. Büke, B.; Qin, W. Many-server queues with random service rates in the Halfin-Whitt regime: A measure-valued process approach. arXiv 2019.
25. Wu, C.-H.; Yang, D.-Y.; Yong, C.-R. Performance evaluation and bi-objective optimization for F-policy queue with alternating service rates. J. Ind. Manag. Optim. 2023, 19, 3819–3839.
26. Lefebvre, M.; Yaghoubi, R. Optimal service time distribution for an M/G/1 waiting queue. Axioms 2024, 13, 594.
Figure 1. Value function F(x, y) = H(z = y/x) (solid line) for 1 ≤ z ≤ 2 in Problem I, when c = b_0 = q_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1. The dotted line represents the expected cost H_0(z) defined in Equation (61) if one uses no control.
Figure 2. Optimal control u*(x, 1) for 1 ≤ x ≤ 2 in Problem I, when c = b_0 = q_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 3. Optimal control u*(1, y) for 1 ≤ y ≤ 2 in Problem I, when c = b_0 = q_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 4. Value function F(x, y) = H(z = xy) for 1 ≤ z ≤ 2 in Problem II, Case 1, when ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 5. Optimal control u*(x, 1) for 1 ≤ x ≤ 2 in Problem II, Case 1, when ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 6. Optimal control u*(1, y) for 1 ≤ y ≤ 2 in Problem II, Case 1, when ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 7. Value function F(x, y) = H(z = xy) for 1 ≤ z ≤ 2 in Problem II, Case 2, when ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 8. Optimal control u*(x, 1) for 1 ≤ x ≤ 2 in Problem II, Case 2, when ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Figure 9. Optimal control u*(1, y) for 1 ≤ y ≤ 2 in Problem II, Case 2, when ρ_0 = k = b_0 = q_0 = m_0 = v_0 = 1, k_1 = 1, k_2 = 2, K_1 = 0 and K_2 = 1.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
