Article

Modeling and Estimation of Traffic Intensity in M/M/1 Queueing System with Balking: Classical and Bayesian Approaches

1 Department of Statistics, Dibrugarh University, Dibrugarh 786004, India
2 Department of Statistics, Cotton University, Guwahati 781001, India
3 Faculty of Science, Statistics, Assam down town University, Guwahati 781026, India
4 Department of Statistics, Gauhati University, Guwahati 781014, India
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(1), 19; https://doi.org/10.3390/appliedmath5010019
Submission received: 11 January 2025 / Revised: 4 February 2025 / Accepted: 8 February 2025 / Published: 21 February 2025

Abstract

This article focuses on both classical and Bayesian inference of the traffic intensity in a single-server Markovian queueing model with balking. To reflect real-world situations, the article introduces the concept of balking, where customers opt not to join the queue due to the perceived waiting time. The essence of this article is a comprehensive analysis of the effect of different loss functions, namely the squared error loss function (SELF) and the precautionary loss function (PLF), on the accuracy of the Bayesian estimation. To evaluate the performance of the Bayesian method with various priors such as the inverted beta, gamma, and Jeffreys distributions, an assessment is performed using the Markov Chain Monte Carlo (MCMC) simulation technique. The efficacy of the Bayesian estimators is assessed by comparing the mean squared errors (MSEs).

1. Introduction

A fundamental and widely used model in queueing theory is the M/M/1 queue, a single-server queue in which arrivals occur according to a Poisson process with rate λ and service times follow an exponential distribution. The estimation of the traffic intensity (ρ) in an M/M/1 queue is an essential component of queueing theory, with substantial implications for a variety of real-world applications. This estimation aids system performance analysis, capacity planning, and resource allocation optimization, and it ensures that customers obtain services that meet their expectations.
Understanding customer behavior is critical to enhancing operational efficiency in service systems and queue management. Balking is the phenomenon in which customers choose not to enter a queue upon arriving. It is influenced by various factors, such as perceived waiting time, resource constraints, or alternative options, and it introduces a crucial degree of complexity to queueing systems. Neglecting balking can lead to the underestimation of performance measures, which can ultimately misguide decision-making related to service provision, resource allocation, and system congestion. In this article, the balking probability is treated as a function of the number of customers in the system rather than as a constant. The article primarily focuses on the classical and Bayesian estimation of the traffic intensity (ρ).
The study of a single-server Markovian queue was initiated by [1], who derived maximum likelihood estimators (MLEs) of the service rate ( μ ) and arrival rate ( λ ). Ref. [2] constructed the maximum likelihood estimator of ρ by connecting the M/M/1 queueing system with the Bernoulli process; they also derived a novel approach to the determination of sample size using a randomized trial technique. Ref. [3] studied the change-point problem for the traffic intensity ρ in the M/M/1 queue when the arrival rate λ changes. Ref. [4] derived expressions for operational performance characteristics and the mean queue length using generating functions in the steady state. Ref. [5] presented confidence intervals of the traffic intensity ρ for the M/M/1, M/M/2, and M/E_k/1 queues. Ref. [6] studied a finite-capacity single-server Markovian queue with discouraged arrivals. Ref. [7] considered an M/M/1 queueing system with customer impatience in the form of random balking. Ref. [8] studied a multi-server queueing system with impatient customers and presented a simple and insightful solution for the loss probability. Ref. [9] introduced a queueing model with balking in which each arrival joins the queue only if its length is less than a random quantity. Ref. [10] studied the equilibrium characteristics of the system based on the join-balk sequence. Ref. [11] investigated a queueing system consisting of Poisson input of customers, some of whom are lost to balking.
Ref. [12] obtained the Bayes estimators of the M / M / 1 queue under the squared error loss function (SELF) and entropy loss function (ELF). Ref. [13] obtained ML and Bayes estimates of ρ for the number of customers present in the system at successive departure epochs. Ref. [14] obtained ML estimators, Bayes estimators, and consistent estimators of arrival and service parameters for the single server with Markovian arrival and Erlang service times queueing system, by observing inter-arrival and service times in the system. Ref. [15] discussed the Bayesian problem and robust Bayesian estimation of the expected queue length in a queueing system M / M / 1 under a scale-invariant squared error loss function. Ref. [16] obtained the MLE and Bayesian estimation of M / M / 1 queue with balking. They considered balking as a function of the number of customers in the system. Ref. [17] studied the Bayesian estimation of arrival and service parameters along with various performance measures of the M / M / 1 queueing system under different asymmetric loss functions using McKay’s bivariate gamma prior distribution [18] (for details see page 432 Section 2.1). Ref. [19] obtained Bayes estimators of the ρ and various queue characteristics under different priors and squared error loss functions.
This study contributes to the literature by providing a robust framework for the estimation of traffic intensity in the presence of balking using both classical and Bayesian approaches. By evaluating the performance of Bayesian estimators under different priors and loss functions, this work not only enhances theoretical understanding but also provides actionable insights for improving real-world service systems.

2. Preliminaries

The following are the assumptions of an M/M/1 queueing system with balking: customers arrive according to a Poisson process with rate λ; an arriving customer who finds m customers in the system joins with probability $\frac{1}{m+1}$ and departs without service with the residual probability $\frac{m}{m+1}$. Thus, a customer who sees an empty system will surely join, but customers become less and less likely to join as the queue gets longer. Furthermore, service is independent of arrivals, and service times are exponentially distributed with rate μ. It is important to note that the system has infinite space for customers and that the first-come-first-served principle is used to serve customers. Furthermore, it is assumed that after a long period of observation, the queueing system finally attains a steady state. The number of customers in the system then evolves as a birth–death process with the birth rates given by
$$\lambda_m = \lambda b_m, \quad m = 0, 1, 2, \ldots$$
Let $m$ be the number of customers in the system at any point, with p.m.f. $P_m$, which can be written as (see [20,21])
$$P_m = P_0 \left(\frac{\lambda}{\mu}\right)^m \prod_{i=1}^{m} b_{i-1}, \quad m = 1, 2, 3, \ldots \qquad (1)$$
where $b_m = \frac{1}{m+1}$ is a monotonically decreasing function of $m$. Using this value of $b_m$ in Equation (1), we obtain
$$P_m = \frac{\rho^m}{m!} P_0, \quad m = 1, 2, 3, \ldots \qquad (2)$$
where $\rho = \frac{\lambda}{\mu}$.
Using the condition $\sum_{m=0}^{\infty} P_m = 1$ and Equation (2), we obtain
$$P_0 = e^{-\rho} \qquad (3)$$
Substituting the value of $P_0$, we obtain (for the complete derivation, see Appendix A)
$$P_m = \begin{cases} \dfrac{e^{-\rho}\,\rho^m}{m!}, & m = 0, 1, 2, \ldots \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$
Thus, the number of customers present in an M/M/1 queueing system with balking follows a Poisson distribution with parameter ρ. The stability condition in this case is $\rho < \infty$, as explained in [22].
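The steady-state result above can be checked numerically. A minimal sketch, assuming the balking probabilities $b_k = 1/(k+1)$ of the model and an illustrative value of ρ, builds $P_m$ from the product form of Equation (1) and confirms that it coincides with the Poisson pmf and sums to one:

```python
from math import exp, factorial, prod

def p_m(m, rho):
    """P_m from first principles: P_m = P_0 (lam/mu)^m prod_{i=1}^m b_{i-1},
    with balking probability b_k = 1/(k+1) and P_0 = e^{-rho}."""
    b = lambda k: 1.0 / (k + 1)
    return exp(-rho) * rho**m * prod(b(i - 1) for i in range(1, m + 1))

rho = 0.8  # illustrative traffic intensity
total = sum(p_m(m, rho) for m in range(100))                # should be ~1
poisson_pmf = [exp(-rho) * rho**m / factorial(m) for m in range(6)]
```

The product $\prod_{i=1}^m b_{i-1} = 1/m!$ is what turns the birth–death solution into the Poisson form of Equation (4).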

Maximum Likelihood Estimation

In this section, we investigate the maximum likelihood estimator of the parameter ρ of a single-server Markovian queueing system with balking. To construct the likelihood, we observe n independent and identically distributed M/M/1 queueing systems in which customers may balk. Since the steady-state condition is assumed, the observations are identically distributed with the probability mass function (pmf) given in Equation (4). Let $X_i$ denote the number of customers in the $i$th system at a given time point. As a result, we obtain a sample $X = (X_1, X_2, \ldots, X_n)$ of n independent and identically distributed random variables, with each $X_i$, $i = 1, 2, \ldots, n$, following the distribution in Equation (4). Consequently, the likelihood function can be expressed as
$$L(\rho \mid x) = \frac{e^{-n\rho}\, \rho^{\sum_{i=1}^{n} x_i}}{\prod_{i=1}^{n} x_i!}$$
The maximum likelihood estimator (MLE) of ρ is
$$\hat{\rho} = \frac{y}{n}, \quad \text{where } y = \sum_{i=1}^{n} x_i$$
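A small simulation sketch of the MLE, with a hypothetical ρ and sample size (the CDF-inversion sampler is an illustrative choice, not part of the paper):

```python
import random
from math import exp

def rpois(rho, rng):
    """Draw one Poisson(rho) variate by CDF inversion (adequate for small rho)."""
    u, m = rng.random(), 0
    p = cdf = exp(-rho)
    while u > cdf:
        m += 1
        p *= rho / m
        cdf += p
    return m

rng = random.Random(42)
rho_true, n = 0.5, 5000        # hypothetical true traffic intensity and n
x = [rpois(rho_true, rng) for _ in range(n)]
y = sum(x)
rho_mle = y / n                # the MLE: sample mean of the observed counts
```

With n = 5000 the sample mean lands close to the true ρ, as the consistency of the MLE suggests.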

3. Bayesian Estimation

In this section, we use the Bayesian approach to estimate the traffic intensity ρ. In Bayesian statistical inference, the goal is to obtain decision rules specified under loss functions and through the prior distribution on the parameter space. Prior information about ρ is characterized as a probability distribution known as the prior density τ(ρ); that is, ρ is regarded as a random variable. Prior knowledge is updated in light of the observed data, yielding the posterior distribution g(ρ|x) with respect to the prior τ(ρ), given by
$$g(\rho \mid x) = \frac{L(\rho \mid x)\, \tau(\rho)}{\int_0^{\infty} L(\rho \mid x)\, \tau(\rho)\, d\rho}$$
As a result, given the sample data, all statistical conclusions are drawn from the posterior distribution. The selection of the prior distribution is the most significant aspect of the data analysis process in Bayesian inference, since it influences the outcome. Under the squared error and precautionary loss functions, we consider three different priors for ρ, viz., the inverted beta prior, the gamma prior, and the Jeffreys prior.
Squared error loss function (SELF): The squared error loss function is a symmetric loss function that penalizes underestimation and overestimation equally. It can be expressed as
$$g(\hat{\rho}_{SELF}, \rho) = (\hat{\rho}_{SELF} - \rho)^2$$
where ρ ^ S E L F is a Bayes estimator under SELF.
Minimizing $E[g(\hat{\rho}_{SELF}, \rho)]$, i.e., solving

$$\frac{d}{d\hat{\rho}_{SELF}}\, E[g(\hat{\rho}_{SELF}, \rho)] = 0,$$

we obtain

$$\hat{\rho}_{SELF} = E[\rho \mid x],$$

where $E[\rho \mid x]$ denotes the expectation with respect to the posterior distribution of the parameter given the observed data x; that is, the Bayes estimator under SELF is the posterior mean.
Precautionary loss function (PLF): Ref. [23] introduced a general class of asymmetric precautionary loss functions. One potential issue with SELF is that it assigns equal losses to underestimation and overestimation, whereas in some situations underestimation is more serious than overestimation; applying a symmetric loss function may then be inappropriate. As a result, an asymmetric loss function called the precautionary loss function (PLF) is used. This loss function approaches infinity (extremely high values) near the origin, which severely penalizes underestimation and therefore yields a conservative estimator. The precautionary loss function (PLF) is given by
$$g_{PLF}(\hat{\rho}_{PLF}, \rho) = \frac{(\hat{\rho}_{PLF} - \rho)^2}{\hat{\rho}_{PLF}}$$
where ρ ^ P L F is a Bayes estimator of ρ under PLF.
Minimizing $E[g_{PLF}(\hat{\rho}_{PLF}, \rho)]$, i.e., solving

$$\frac{d}{d\hat{\rho}_{PLF}}\, E[g_{PLF}(\hat{\rho}_{PLF}, \rho)] = 0,$$

we obtain

$$\hat{\rho}_{PLF} = \left( E[\rho^2 \mid x] \right)^{\frac{1}{2}}$$
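The minimizer above can be verified empirically: the posterior expected PLF equals $d - 2E[\rho] + E[\rho^2]/d$ as a function of the estimate $d$, which is minimized exactly at $d = \sqrt{E[\rho^2]}$. A sketch using an arbitrary illustrative posterior sample cloud (the gamma-shaped samples are an assumption for demonstration only):

```python
import math, random

rng = random.Random(1)
# An arbitrary positive "posterior" sample cloud, purely for illustration.
samples = [rng.gammavariate(4.0, 0.2) for _ in range(20000)]

m1 = sum(samples) / len(samples)                   # E[rho | x]
m2 = sum(s * s for s in samples) / len(samples)    # E[rho^2 | x]
plf_opt = math.sqrt(m2)                            # claimed PLF minimizer

def expected_plf(d):
    """Posterior expected precautionary loss E[(d - rho)^2 / d]."""
    return sum((d - s) ** 2 / d for s in samples) / len(samples)

# A grid search around plf_opt confirms the closed-form minimizer.
grid = [plf_opt + k * 0.01 for k in range(-20, 21)]
best = min(grid, key=expected_plf)
```

Since $\sqrt{E[\rho^2]} \ge E[\rho]$, the PLF estimator never falls below the SELF estimator, matching its conservative character.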

3.1. Inverted Beta Prior

The distribution given by Equation (4) belongs to the exponential family, and a flexible prior distribution for ρ is
$$\tau_B(\rho) = \frac{1}{B(a, b)}\, \rho^{a-1} (1 + \rho)^{-(a+b)}, \quad \rho > 0,\; a > 0,\; b > 0 \qquad (7)$$
which is the inverted beta distribution, with parameters a and b. The beta family is known for its flexibility in modeling a variety of distributions simply by modifying the parameters a and b. Now, the posterior distribution that corresponds to the prior in Equation (7) is given by
$$\tau_B(\rho \mid x) \propto \frac{1}{B(a, b)}\, \rho^{a-1} (1 + \rho)^{-(a+b)}\, \frac{e^{-n\rho}\, \rho^{\sum_{i=1}^{n} x_i}}{\prod_{i=1}^{n} x_i!} = \frac{(1 + \rho)^{-(a+b)}\, \rho^{a+y-1}\, e^{-n\rho}}{\Gamma(a + y)\, U(a + y,\; y - b + 1,\; n)}, \quad \rho > 0,\; a, b > 0$$
where $U(a, b, z)$ denotes the confluent hypergeometric function of the second kind (Tricomi's function).
The Bayes estimator of ρ under the squared error loss function for inverted beta prior is
$$\hat{\rho}_{SELF}^{B} = E[\rho \mid x] = \int_0^{\infty} \rho\, \tau_B(\rho \mid x)\, d\rho = \frac{(a + y)\, U(a + b,\; b - y,\; n)}{n\, U(a + b,\; b - y + 1,\; n)} \qquad (8)$$
The Bayes estimator of ρ under the precautionary loss function for inverted beta prior is
$$\hat{\rho}_{PLF}^{B} = \sqrt{\frac{(a + y)(a + y + 1)\, U(a + b,\; b - y - 1,\; n)}{n^2\, U(a + b,\; b - y + 1,\; n)}} \qquad (9)$$
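These estimators can be evaluated with `scipy.special.hyperu` and cross-checked by direct numerical integration of the posterior. The sketch below uses the equivalent form $U(a+y,\, y-b+1,\, n)$ (a Kummer transformation maps it to the $U(a+b,\, b-y,\, n)$ expressions printed above); the hyperparameters and data summary $y$ are hypothetical:

```python
import math
import numpy as np
from scipy.special import hyperu
from scipy.integrate import quad

def inv_beta_bayes(a, b, y, n):
    """Bayes estimators of rho under the inverted beta prior, written via
    U(a+y, y-b+1, n); equivalent to the printed forms by a Kummer transform."""
    denom = hyperu(a + y, y - b + 1, n)
    self_est = (a + y) * hyperu(a + y + 1, y - b + 2, n) / denom   # E[rho|x]
    second = (a + y) * (a + y + 1) * hyperu(a + y + 2, y - b + 3, n) / denom
    return self_est, math.sqrt(second)          # PLF = sqrt(E[rho^2|x])

# Hypothetical hyperparameters and data summary (y = sum of counts).
a, b, y, n = 2.0, 2.0, 12, 20
self_est, plf_est = inv_beta_bayes(a, b, y, n)

# Cross-check against direct numerical integration of the posterior kernel.
post = lambda r: r**(a + y - 1) * (1 + r)**(-(a + b)) * np.exp(-n * r)
norm = quad(post, 0, np.inf)[0]
post_mean = quad(lambda r: r * post(r), 0, np.inf)[0] / norm
post_rms = math.sqrt(quad(lambda r: r * r * post(r), 0, np.inf)[0] / norm)
```

The quadrature check guards against sign or argument slips in the hypergeometric expressions.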

3.2. Gamma Prior

The gamma distribution is a convenient choice due to its flexibility in accommodating various shapes depending on its parameters. The density function of the gamma prior for ρ would be
$$\tau(\rho) = \frac{b^a}{\Gamma(a)}\, e^{-b\rho}\, \rho^{a-1}, \quad \rho > 0,\; a > 0,\; b > 0$$
Therefore, the posterior density of ρ , considering the gamma prior and likelihood function, will be
$$\tau_G(\rho \mid x) = \frac{(b + n)^{a+y}\, \rho^{a+y-1}\, e^{-(b+n)\rho}}{\Gamma(a + y)}, \quad \rho > 0,\; a > 0,\; b > 0$$
The Bayes estimator of ρ under squared error loss function (SELF) for the gamma prior is
$$\hat{\rho}_{SELF}^{G} = E[\rho \mid x] = \frac{\Gamma(a + y + 1)}{(b + n)\, \Gamma(a + y)} = \frac{a + y}{b + n} \qquad (11)$$
The Bayes estimator of ρ under PLF for gamma prior is given by
$$\hat{\rho}_{PLF}^{G} = \sqrt{\frac{\Gamma(a + y + 2)}{(b + n)^2\, \Gamma(a + y)}} = \frac{\sqrt{(a + y)(a + y + 1)}}{b + n} \qquad (12)$$
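Because the gamma prior is conjugate here (the posterior is Gamma(a + y, b + n)), the Gamma-function ratios above collapse to simple closed forms. A sketch with hypothetical hyperparameters and a hypothetical data summary:

```python
import math

def gamma_prior_bayes(a, b, y, n):
    """Bayes estimators under a gamma prior: posterior is Gamma(a+y, b+n)."""
    self_est = (a + y) / (b + n)                  # Gamma(a+y+1)/((b+n) Gamma(a+y))
    plf_est = math.sqrt((a + y) * (a + y + 1)) / (b + n)
    return self_est, plf_est

# Hypothetical data summary: y = sum of the n observed counts.
self_est, plf_est = gamma_prior_bayes(a=0.4, b=1.0, y=52, n=100)
```

As $y$ and $n$ grow, both estimators approach the MLE $y/n$, so the prior's influence fades with sample size.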

3.3. Jeffreys Prior

In this case, the Jeffreys prior is defined in terms of the Fisher information, given by
$$I(\rho) = -E\!\left[ \frac{\partial^2 \log f(x \mid \rho)}{\partial \rho^2} \right] = \frac{1}{\rho}$$
The prior distribution of ρ is given by
$$\tau_J(\rho) \propto I(\rho)^{\frac{1}{2}} = \frac{1}{\rho^{1/2}}, \quad \rho > 0$$
The posterior probability distribution of ρ can be obtained as
$$\tau_J(\rho \mid x) = \frac{e^{-n\rho}\, \rho^{y - \frac{1}{2}}}{\int_0^{\infty} e^{-n\rho}\, \rho^{y - \frac{1}{2}}\, d\rho} = \frac{n^{y + \frac{1}{2}}\, \rho^{y - \frac{1}{2}}\, e^{-n\rho}}{\Gamma\!\left(y + \frac{1}{2}\right)}, \quad \rho > 0$$
The Bayes estimator of ρ under squared error loss function for Jeffreys prior can be defined as
$$\hat{\rho}_{SELF}^{J} = E[\rho \mid x] = \int_0^{\infty} \rho\, \frac{n^{y + \frac{1}{2}}\, \rho^{y - \frac{1}{2}}\, e^{-n\rho}}{\Gamma\!\left(y + \frac{1}{2}\right)}\, d\rho = \frac{\Gamma\!\left(y + \frac{3}{2}\right)}{n\, \Gamma\!\left(y + \frac{1}{2}\right)} = \frac{y + \frac{1}{2}}{n} \qquad (16)$$
The Bayes estimator of ρ under the precautionary loss function for the Jeffreys prior is
$$\hat{\rho}_{PLF}^{J} = \sqrt{\frac{\Gamma\!\left(y + \frac{5}{2}\right)}{n^2\, \Gamma\!\left(y + \frac{1}{2}\right)}} = \frac{\sqrt{\left(y + \frac{1}{2}\right)\left(y + \frac{3}{2}\right)}}{n} \qquad (17)$$
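Since the Jeffreys posterior is Gamma(y + 1/2, n), both estimators again reduce to simple closed forms. A sketch with hypothetical totals:

```python
import math

def jeffreys_bayes(y, n):
    """Bayes estimators under the Jeffreys prior rho^{-1/2}:
    the posterior is Gamma(y + 1/2, n)."""
    self_est = (y + 0.5) / n                          # Gamma(y+3/2)/(n Gamma(y+1/2))
    plf_est = math.sqrt((y + 0.5) * (y + 1.5)) / n
    return self_est, plf_est

self_est, plf_est = jeffreys_bayes(y=40, n=50)        # hypothetical y and n
```

Note the Jeffreys SELF estimate differs from the MLE $y/n$ only by the additive half count in the numerator.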

4. Predictive Distribution

Prediction is a central aim of Bayesian approaches. Given sufficient knowledge about the prior and present behavior of a process, the predictive distribution is meant to anticipate its forthcoming behavior. Assume $x = (x_1, x_2, \ldots, x_n)$ is a random sample from a distribution with p.m.f. or p.d.f. $f(x \mid \theta)$, $\theta \in \Psi$. With a prior distribution τ(θ) and the associated posterior distribution g(θ|x), the posterior predictive distribution of a future observation m given the sample data x [24] is defined as
$$p(m \mid x) = \int_{\Psi} f(m \mid \theta)\, g(\theta \mid x)\, d\theta$$
The predictive distribution of the number of customers in the system for the inverted beta prior (for m = 0, 1, 2, …) is given by
$$P_1(M = m \mid x) = \frac{\Gamma(a + m + y)\, U(a + m + y,\; m + y - b + 1,\; n + 1)}{m!\, \Gamma(a + y)\, U(a + y,\; y - b + 1,\; n)} \qquad (18)$$
The predictive distribution of the number of customers in the system for the gamma prior (for m = 0, 1, 2, …) is obtained as
$$P_2(M = m \mid x) = \frac{\Gamma(a + m + y)\, (b + n)^{a + y}\, (b + n + 1)^{-(a + m + y)}}{m!\, \Gamma(a + y)} \qquad (19)$$
The predictive distribution of the number of customers in the system for the Jeffreys prior is obtained as
$$P_3(M = m \mid x) = \frac{\Gamma\!\left(m + y + \frac{1}{2}\right)\, n^{y + \frac{1}{2}}\, (n + 1)^{-\left(m + y + \frac{1}{2}\right)}}{m!\, \Gamma\!\left(y + \frac{1}{2}\right)}, \quad m = 0, 1, 2, \ldots \qquad (20)$$
The predictive distribution plays a vital role in the selection and comparison of two Bayesian models, say $M_i$ and $M_j$. To compare the models $M_i$ and $M_j$ based on the random sample $x = (x_1, x_2, \ldots, x_n)$, the Bayes factor can be used, given by
$$B_{ij} = \frac{P_i(M = m \mid x)}{P_j(M = m \mid x)}, \quad \text{e.g.,} \quad B_{12} = \frac{P_1(M = m \mid x)}{P_2(M = m \mid x)} \qquad (21)$$
The rules for evaluating the model’s performance are presented in Table 1 (see [25]).
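The predictive probabilities and Bayes factors can be sketched as follows. $P_2$ and $P_3$ are evaluated in closed form (both are negative binomial pmfs); $P_1$ is obtained here by numerically mixing the Poisson pmf over the (numerically normalised) posterior, as an alternative to the hypergeometric expression in Equation (18). The values of a, b, y, and n are hypothetical:

```python
import math
import numpy as np
from scipy.special import gammaln
from scipy.integrate import quad

a, b, y, n = 2.0, 2.0, 12, 20   # hypothetical prior settings and data summary

def p2(m):
    """Gamma-prior predictive (Equation (19)); a negative binomial pmf."""
    return float(np.exp(gammaln(a + y + m) - gammaln(a + y) - gammaln(m + 1)
                        + (a + y) * np.log(b + n)
                        - (a + y + m) * np.log(b + n + 1)))

def p3(m):
    """Jeffreys-prior predictive (Equation (20))."""
    c = y + 0.5
    return float(np.exp(gammaln(c + m) - gammaln(c) - gammaln(m + 1)
                        + c * np.log(n) - (c + m) * np.log(n + 1)))

# Inverted-beta-prior predictive: mix Poisson(rho) over the posterior kernel.
post = lambda r: r**(a + y - 1) * (1 + r)**(-(a + b)) * np.exp(-n * r)
norm = quad(post, 0, np.inf)[0]

def p1(m):
    f = lambda r: r**m * np.exp(-r) * post(r)
    return quad(f, 0, np.inf)[0] / (math.factorial(m) * norm)

# Bayes factors B12, B13, B23 for m = 0..4 (Equation (21)).
bayes_factors = {m: (p1(m) / p2(m), p1(m) / p3(m), p2(m) / p3(m))
                 for m in range(5)}
```

Each predictive pmf sums to one over m, which is a quick internal check before comparing Bayes factors against the rules in Table 1.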

5. Computational Results

In this section, estimates of ρ are obtained for the different priors under SELF and PLF to assess their performance, using Monte Carlo simulation in R software (version R 4.4.2). The algorithms for estimating $\hat{\rho}$ and computing the predictive probabilities for the different priors are presented in Algorithms 1 and 2, respectively.
In Table 2, the ML estimates and their MSEs for ρ are presented for various sample sizes n. The Bayes estimates and the mean squared errors of the traffic intensity (ρ) are computed using Equations (8), (11), and (16) under the squared error loss function (SELF) for the different priors. The results for the inverted beta prior and the gamma prior under SELF can be found in Table 3 and Table 4 with hyperparameter values [(2,2), (2,1), (1,2), (1,1)] and [(0.4,0.4), (0.4,1), (1,0.4), (1,1)], respectively. Table 5 presents the estimates for the Jeffreys prior under the squared error loss function.
Algorithm 1 Monte Carlo Simulation for Estimation of ρ ^
  • Perform Monte Carlo simulation
  • for each sample size n { 10 , 20 , 50 , 100 , 200 }  do
  •     for each traffic intensity ρ { 0.2 , 0.5 , 0.8 , 5 }  do
  •         for  i = 1 to 10 , 000  do
  •            Generate a random sample X 1 , X 2 , , X n from the Poisson distribution using Equation (4)
    X i Poisson ( ρ ) , i = 1 , 2 , , n
  •             y i = 1 n x i
  •            Estimate parameters by chosen methods
  •             ρ ^ MLE = j = 1 n x i , j n
  •            Compute the Bayesian estimate of ρ using Equations (8), (9), (11), (12), (16), and (17), respectively, under SELF and PLF
  •             ρ ^ Bayes = Bayesian estimate based on specified prior and loss function
  •         end for
  •         Calculate performance metrics
  •          Mean ( ρ ^ ) = 1 10,000 i = 1 10,000 ρ ^ i
  •          MSE ( ρ ^ ) = 1 10,000 i = 1 10,000 ( ρ ρ ^ i ) 2
  •     end for
  • end for
  • Write results
  • Write Mean ( ρ ^ ) , MSE ( ρ ^ )
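Algorithm 1 can be sketched in Python; for brevity the sketch is specialised to the MLE and the gamma-prior SELF/PLF estimators, and the replication count is reduced from 10,000 (loop ranges and hyperparameters are otherwise illustrative):

```python
import random
from math import exp, sqrt

def rpois(rho, rng):
    """Draw one Poisson(rho) variate by CDF inversion."""
    u, m = rng.random(), 0
    p = cdf = exp(-rho)
    while u > cdf:
        m += 1
        p *= rho / m
        cdf += p
    return m

def simulate(rho, n, reps=2000, a=0.4, b=1.0, seed=7):
    """One cell of Algorithm 1: mean and MSE of each estimator over reps runs."""
    rng = random.Random(seed)
    est = {"mle": [], "self": [], "plf": []}
    for _ in range(reps):
        y = sum(rpois(rho, rng) for _ in range(n))
        est["mle"].append(y / n)
        est["self"].append((a + y) / (b + n))
        est["plf"].append(sqrt((a + y) * (a + y + 1)) / (b + n))
    mean = {k: sum(v) / reps for k, v in est.items()}
    mse = {k: sum((rho - e) ** 2 for e in v) / reps for k, v in est.items()}
    return mean, mse

mean, mse = simulate(rho=0.5, n=100)
```

Looping this over the grid of ρ and n values reproduces the structure of the tables discussed below.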
Algorithm 2 Monte Carlo Simulation for Predictive Distributions and Bayes Factors
  • Perform Monte Carlo simulation
  • for each traffic intensity ρ { 0.2 , 0.5 , 0.8 , 5 }  do
  •     for each sample size n { 100 }  do
  •         Generate a random sample X 1 , X 2 , , X n from the Poisson distribution using Equation (4)
    X i Poisson ( ρ ) , i = 1 , 2 , , n
  •          y i = 1 n x i
  •         for each m { 0 , 1 , 2 , 3 , 4 }  do
  •               Compute Predictive Distributions using Equations (18)–(20)
  •                P 1 Predictive Beta second kind prior
  •                P 2 Predictive Gamma prior
  •                P 3 Predictive Jeffreys prior
  •               Compute Bayes Factors using Equation (21)
  •                B 12 P 1 / P 2
  •                B 13 P 1 / P 3
  •                B 23 P 2 / P 3
  •               Store results
  •               Insert ( ρ , n , m , P 1 , P 2 , P 3 , B 12 , B 13 , B 23 ) to results
  •         end for
  •     end for
  • end for
  • Write results
Table 6 and Table 7 depict the Bayes estimates of the traffic intensity under the precautionary loss function for the inverted beta prior and the gamma prior with different combinations of hyperparameter values [(2,2), (2,1), (1,2), (1,1)] and [(0.4,0.4), (0.4,1), (1,0.4), (1,1)], respectively. Table 8 shows the estimates for the Jeffreys prior under the precautionary loss function along with the MSEs. The estimates closely approximate the true parameter values. Additionally, as the sample size increases, the estimates converge toward the true parameter values, and the MSEs diminish toward zero, which affirms expectations. Moreover, the behavior of the Bayesian estimators is consistent across the different prior distributions and loss functions. However, the estimates reveal that the inverted beta, gamma, and Jeffreys priors performed better under SELF than under PLF. It is evident that the Bayesian method with the inverted beta prior performs better, with a lower MSE, for some combinations of hyperparameter values. However, a close examination also shows that, in the case of the other priors, the MLE performs better with a lower MSE.
The results obtained from the simulation are presented using visual graphics. In Figure 1, Figure 2, Figure 3 and Figure 4, the MSEs of the classical and Bayesian estimates are shown for various sample sizes. It is important to mention that the numerical findings in the tables above are graphically illustrated for the chosen values of ρ (0.2, 0.5, 0.8, 5).
A comparison has been conducted across the hyperparameter values for the inverted beta prior, as seen in Figure 1. It is clear that the estimates of ρ under the Beta (2,1) prior exhibit efficient convergence with lower mean squared error (MSE) values. From Figure 2, we can infer that the Bayesian estimates derived from the Gamma (0.4, 1) prior show improved convergence toward the true parameter value, with the error tending toward zero as the sample size increases. It can also be concluded that, as the sample size increases, the estimates align more closely with the true parameter value. In reference to Figure 1, Figure 2, Figure 3 and Figure 4, we can see that when the sample size (n) is below 100, the estimates derived from Bayesian inference deviate from the actual value, whereas the Bayesian estimates under SELF and PLF demonstrate strong convergence toward the actual value once the sample size reaches or exceeds 100. It has also been observed that the Bayesian estimates from the Gamma (0.4, 1) and Jeffreys priors exhibit a high degree of convergence toward the true value, with a low mean squared error compared to the other Bayesian estimates.
Figure 4 presents the classical estimates of ρ. From this figure, we observe that the estimates obtained using the maximum likelihood (ML) estimator show similar convergence toward the actual value compared to those derived from the Bayesian estimators. When comparing the ML and Bayesian estimators, the Bayesian estimator under SELF generates efficient estimates with the lowest mean squared error for ρ values 0.5 and 5, but for the lower value ρ = 0.2, ML produces slightly better estimates. Based on the tables and figures presented above, ML estimates tend to be more reliable in comparison to Bayesian estimates.
Equations (18)–(20) were used to compute the predictive distribution of the number of customers in the system. To assess the models' efficiency, the Bayes factor is determined using Equation (21). Table 9 reports each of the three predictive distributions for 0 to 4 customers. The predictive distributions are sensible: the probability P(M = 0 | x) decreases and P(M = 4 | x) increases as the traffic intensity rises from ρ = 0.20 to ρ = 0.80, while for ρ = 5 the predictive probability increases with the number of customers. It can also be seen from Table 9 that the Bayes factors are almost equal to 1 for all simulated ρ values, suggesting that all three predictive distributions may be accepted when all of the factors are considered.

6. Real Life Application

In this section, an application is discussed to illustrate the proposed methodology. It is based on data collected from a supermarket in Dibrugarh district that belongs to a nationwide chain. The data were collected from 4:00 p.m. to 8:00 p.m. on Saturday and Sunday, when the number of customers is high. Based on previous work, it is reasonable to assume that customers arrive within a given period according to a Poisson process and that service times are exponentially distributed. We focus on the first counter, while four additional counters operated simultaneously during the data collection period. If a customer arrives at the first counter and sees a long queue, they might choose not to join, and such customers are classified as balking. When customers decide to balk, the number of customers present in the system is recorded. In this scenario, 200 random observations were collected and are presented below:
4, 7, 4, 8, 9, 2, 5, 8, 5, 5, 9, 5, 6, 5, 2, 4, 4, 7, 5, 7, 6, 6, 3, 8, 3, 2, 4, 9, 8, 6, 6, 11, 6, 6, 5, 5, 4, 3, 9, 8, 6, 7, 1, 5, 6, 3, 4, 3, 3, 4, 4, 4, 3, 3, 3, 5, 4, 7, 2, 5, 7, 2, 7, 7, 4, 6, 5, 3, 3, 6, 8, 4, 6, 2, 5, 3, 4, 5, 4, 2, 3, 6, 4, 5, 5, 5, 4, 5, 9, 5, 8, 8, 5, 4, 3, 9, 4, 2, 9, 6, 3, 5, 9, 5, 4, 6, 4, 4, 3, 4, 10, 3, 2, 3, 6, 6, 8, 6, 6, 5, 6, 7, 7, 10, 4, 4, 4, 1, 3, 7, 3, 3, 2, 3, 6, 7, 5, 4, 3, 2, 4, 5, 3, 5, 3, 5, 4, 6, 4, 4, 5, 6, 3, 4, 4, 6, 3, 7, 6, 6, 6, 4, 5, 8, 5, 7, 4, 6, 3, 7, 2, 4, 10, 8, 8, 3, 3, 6, 4, 6, 4, 3, 7, 2, 5, 5, 4, 5, 8, 8, 4, 4, 10, 6, 9, 5, 4, 6, 3, 5.
To find the best fit for the observed data, we assess the goodness of fit of the M/M/1 queue with balking along with the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The log-likelihood, the p-value of the chi-square goodness-of-fit test, the AIC, and the BIC are summarized below.
To account for prior uncertainty and to provide robust estimates, Bayesian methods were employed using the inverted beta, gamma, and Jeffreys prior distributions. The results are presented below under the two loss functions, i.e., the squared error loss function and the precautionary loss function.
Thus, the M/M/1 queue with balking shows a significantly good fit, as the p-value of the goodness-of-fit test is greater than 0.05, as shown in Table 10. Table 11 suggests that the estimates obtained under PLF and SELF are similar for all three priors. However, a careful examination shows that the estimates under PLF are marginally higher than those under SELF for each prior. Given that PLF penalizes underestimation more severely, this is expected behavior. Thus, the estimates obtained under SELF perform better than those under PLF, which aligns with the simulation study presented in the sections above.
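The goodness-of-fit step can be sketched as follows. The counts here are simulated stand-ins for the supermarket data, and the binning and tail-pooling choices are illustrative assumptions rather than the paper's:

```python
import numpy as np
from scipy.stats import poisson, chisquare

rng = np.random.default_rng(0)
x = rng.poisson(5.0, size=200)   # simulated stand-in for the 200 recorded counts
rho_hat = x.mean()               # MLE of the traffic intensity

# Bin the counts, pooling the upper tail into a ">= 9" class.
edges = list(range(10)) + [np.inf]
obs = np.array([np.sum((x >= lo) & (x < hi))
                for lo, hi in zip(edges[:-1], edges[1:])])
exp_p = np.array([poisson.cdf(hi - 1, rho_hat) - poisson.cdf(lo - 1, rho_hat)
                  for lo, hi in zip(edges[:-1], edges[1:])])

# One degree of freedom is lost to estimating rho from the same data.
stat, pval = chisquare(obs, 200 * exp_p, ddof=1)
```

A p-value above 0.05 would, as in Table 10, indicate that the Poisson pmf of Equation (4) is an adequate description of the recorded counts.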

7. Concluding Remarks and Future Scope

This article employs the Bayesian methodology to estimate the traffic intensity ρ in the M/M/1 queueing model with balking, using both the squared error loss function (SELF) and the precautionary loss function (PLF). The proposed model demonstrated robustness in making predictions and offers the advantage of incorporating prior knowledge about the system's operation. The MCMC simulation technique was used to assess the performance of the Bayesian approach using three distinct priors, namely, the inverted beta prior, the gamma prior, and the Jeffreys prior. The findings of the study indicated that, under SELF, the inverted beta, gamma, and Jeffreys priors produced better estimates with lower MSE values than under PLF, which indicates the critical role of the loss function in Bayesian estimation. Additionally, the Bayesian approach yields better results than the MLE for certain priors.
Moreover, future research endeavors could extend these methodologies to include a variety of queueing models, such as multi-server queueing systems and non-Markovian queueing systems. Investigating these expansions could enhance our knowledge of Bayesian methods within extensive queueing contexts, possibly providing important perspectives and uses in various practical situations.

Author Contributions

Conceptualization, D.D. and B.K.; methodology, D.D. and B.K.; software, B.K.; formal analysis, D.D. and B.K.; investigation, D.D., B.K., A.T., D.B., M.D. and A.C.; data curation, D.D. and B.K. and A.T.; writing–original draft preparation, D.D., B.K., A.T., D.B., M.D. and A.C.; writing–review and editing, D.D., B.K., A.T., D.B., M.D. and A.C.; visualization, B.K.; supervision, D.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

In this article, all the simulations are performed in R programming, which are available in the GitHub repository https://github.com/BhaskarKushvaha1995/M-M-1-Queueing-system-with-Balking.git (accessed on 3 February 2025).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Appendix A. Derivation of Pm of M/M/1 Queue with Balking

Let m be the number of customers in the system at any point with p.m.f P m , which can be written as
$$P_m = P_0 \left(\frac{\lambda}{\mu}\right)^m \prod_{i=1}^{m} b_{i-1}, \quad m = 1, 2, 3, \ldots$$
In this model, the balking probability is given as
$$b_m = \frac{1}{m + 1}$$
Here, $b_m$ is a monotonically decreasing function of $m$. The probability expression contains the product term
$$\prod_{i=1}^{m} b_{i-1}$$
Substituting $b_m = \frac{1}{m+1}$, we obtain
$$\prod_{i=1}^{m} \frac{1}{i} = \frac{1}{1 \cdot 2 \cdot 3 \cdots m} = \frac{1}{m!}$$
Thus, the probability mass function simplifies to
$$P_m = P_0 \left(\frac{\lambda}{\mu}\right)^m \frac{1}{m!}, \quad m = 1, 2, 3, \ldots$$
We know
$$\rho = \frac{\lambda}{\mu}.$$
Therefore,
$$P_m = \frac{\rho^m}{m!} P_0$$
Using the normalization condition
$$\sum_{m=0}^{\infty} P_m = 1,$$
$$P_0 + \sum_{m=1}^{\infty} \frac{\rho^m}{m!} P_0 = 1$$
$$P_0 \left(1 + \sum_{m=1}^{\infty} \frac{\rho^m}{m!}\right) = 1$$
The summation term is recognized from the Taylor series expansion of the exponential function,
$$\sum_{m=0}^{\infty} \frac{\rho^m}{m!} = e^{\rho}$$
Using this result,
$$P_0\, e^{\rho} = 1$$
Solving for $P_0$,
$$P_0 = e^{-\rho}$$
Now, substituting $P_0 = e^{-\rho}$ into
$$P_m = \frac{\rho^m}{m!} P_0,$$
we obtain
$$P_m = \frac{\rho^m}{m!}\, e^{-\rho}, \quad m = 0, 1, 2, \ldots$$
This is the probability mass function of the number of customers in the system for the M/M/1 queue with balking.
To verify that the derived probability mass function satisfies the total probability condition
$$\sum_{m=0}^{\infty} P_m = 1,$$
we substitute the expression for $P_m$:
$$\sum_{m=0}^{\infty} \frac{\rho^m}{m!}\, e^{-\rho} = e^{-\rho} \sum_{m=0}^{\infty} \frac{\rho^m}{m!}$$
By the Taylor series expansion of the exponential function,
$$\sum_{m=0}^{\infty} \frac{\rho^m}{m!} = e^{\rho},$$
so
$$e^{-\rho} \cdot e^{\rho} = 1$$
Since this holds true, the derived probability mass function correctly satisfies the total probability condition $\sum_{m=0}^{\infty} P_m = 1$. Hence, the derivation is verified and found to be correct.

References

  1. Clarke, A.B. Maximum likelihood estimates in a simple queue. Ann. Math. Stat. 1957, 28, 1036–1040.
  2. Choudhury, A.; Basak, A. Statistical inference on traffic intensity in an M/M/1 queueing system. Int. J. Manag. Sci. Eng. Manag. 2018, 13, 274–279.
  3. Jain, S. Estimating changes in traffic intensity for M/M/1 queueing systems. Microelectron. Reliab. 1995, 35, 1395–1400.
  4. Jain, M.; Dhyani, I. Control policy for M/Ek/1 queueing system. J. Stat. Manag. Syst. 2001, 4, 73–82.
  5. Lilliefors, H.W. Some confidence intervals for queues. Oper. Res. 1966, 14, 723–727.
  6. Ammar, S.; El-Sherbiny, A.; El-Shehawy, S.; Al-Seedy, R.O. A matrix approach for the transient solution of an M/M/1/N queue with discouraged arrivals and reneging. Int. J. Comput. Math. 2012, 89, 482–491.
  7. Manoharan, M.; Jose, J.K. Markovian queueing system with random balking. Opsearch 2011, 48, 236–246.
  8. Boots, N.K.; Tijms, H. A multiserver queueing system with impatient customers. Manag. Sci. 1999, 45, 444–448.
  9. Haight, F.A. Queueing with balking. Biometrika 1957, 44, 360–369.
  10. Haight, F.A. Queueing with balking. II. Biometrika 1960, 47, 285–296.
  11. Rubin, G.; Robson, D.S. A single server queue with random arrivals and balking: Confidence interval estimation. Queueing Syst. 1990, 7, 283–306.
  12. Mukherjee, S.; Chowdhury, S. Bayes estimation of measures of effectiveness in an M/M/1 queueing model. Calcutta Stat. Assoc. Bull. 2010, 62, 97–108.
  13. Chowdhury, S.; Mukherjee, S. Estimation of traffic intensity based on queue length in a single M/M/1 queue. Commun. Stat. Theory Methods 2013, 42, 2376–2390.
  14. Vaidyanathan, V.; Chandrasekhar, P. Parametric estimation of an M/Er/1 queue. Opsearch 2018, 55, 628–641.
  15. Kiapour, A. Bayesian estimation of the expected queue length of a system M/M/1 with certain and uncertain priors. Commun. Stat. Theory Methods 2022, 51, 5310–5317.
  16. Bura, G.S.; Sharma, H. Maximum likelihood and Bayesian estimation on M/M/1 queueing model with balking. Commun. Stat. Theory Methods 2023, 53, 1–29.
  17. Deepthi, V.; Jose, J.K. Bayesian inference on M/M/1 queue under asymmetric loss function using Markov Chain Monte Carlo method. J. Stat. Manag. Syst. 2021, 24, 1003–1023.
  18. Kotz, S.; Balakrishnan, N.; Johnson, N.L. Continuous Multivariate Distributions, Volume 1: Models and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2019; Volume 334.
  19. Dey, S. A note on Bayesian estimation of the traffic intensity in M/M/1 queue and queue characteristics under quadratic loss function. Data Sci. J. 2008, 7, 148–154.
  20. Arizono, I.; Takemoto, Y. An analysis of steady-state distribution in M/M/1 queueing system with balking based on concept of statistical mechanics. RAIRO-Oper. Res. 2021, 55, S327–S341.
  21. Gross, D. Fundamentals of Queueing Theory; John Wiley & Sons: Hoboken, NJ, USA, 2008.
  22. Sztrik, J. Basic Queueing Theory; GlobeEdit, OmniScriptum GmbH & Co. KG: Saarbrücken, Germany, 2016.
  23. Norstrom, J.G. The use of precautionary loss functions in risk analysis. IEEE Trans. Reliab. 1996, 45, 400–403.
  24. Chen, M.H.; Shao, Q.M.; Ibrahim, J.G. Monte Carlo Methods in Bayesian Computation; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
  25. Jeffreys, H. The Theory of Probability; Oxford University Press: Oxford, UK, 1998.
Figure 1. Mean squared errors (MSEs) against sample sizes for inverted beta prior, considering different loss functions and different combinations of hyperparameter values.
Figure 2. Mean squared errors (MSEs) against sample sizes for Gamma prior, considering different loss functions and different combinations of hyperparameter values.
Figure 3. Mean squared errors (MSEs) against sample sizes for Jeffreys prior, considering different loss functions and different combinations of hyperparameter values.
Figure 4. Mean squared errors (MSEs) against sample sizes for maximum likelihood estimator (MLE) for different values of ρ.
Table 1. Rules for accepting the Bayes factor.

| Result | Conclusion |
|---|---|
| B12 ≥ 1 | Model 1 is supported |
| 0.316 ≤ B12 < 1 | Minimal evidence against model 1 |
| 0.1 ≤ B12 < 0.316 | Substantial evidence against model 1 |
| 0.01 ≤ B12 < 0.1 | Strong evidence against model 1 |
| B12 < 0.01 | Decisive evidence against model 1 |
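The thresholds in Table 1 translate directly into a decision rule; the small helper below is my own illustration, not code from the paper:

```python
def bayes_factor_evidence(b12: float) -> str:
    """Map a Bayes factor B12 to the evidence category of Table 1."""
    if b12 >= 1:
        return "Model 1 is supported"
    if b12 >= 0.316:
        return "Minimal evidence against model 1"
    if b12 >= 0.1:
        return "Substantial evidence against model 1"
    if b12 >= 0.01:
        return "Strong evidence against model 1"
    return "Decisive evidence against model 1"
```

The cut points 0.316 ≈ 10^(−1/2), 0.1, and 0.01 are the usual half-unit steps on the log10 scale of the Bayes factor.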
Table 2. ML estimates and MSEs of traffic intensity (ρ).

| Estimator | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MLE | 0.2 | 0.2015 | 0.0198 | 0.2003 | 0.0101 | 0.2018 | 0.0040 | 0.2005 | 0.0020 | 0.2003 | 0.0010 |
| | 0.5 | 0.5018 | 0.0506 | 0.5001 | 0.0259 | 0.4994 | 0.0102 | 0.4989 | 0.0050 | 0.4999 | 0.0025 |
| | 0.8 | 0.7989 | 0.0800 | 0.7981 | 0.0414 | 0.8012 | 0.0159 | 0.7992 | 0.0079 | 0.7993 | 0.0040 |
| | 5 | 4.9976 | 0.5001 | 5.0002 | 0.2538 | 5.0005 | 0.0998 | 5.0032 | 0.0499 | 5.0023 | 0.0253 |
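Because the queue length follows a Poisson(ρ) distribution, the MLE of ρ is simply the sample mean, whose MSE is approximately ρ/n; the entries in Table 2 are consistent with this (e.g., 0.0198 ≈ 0.2/10 and 0.5001 ≈ 5/10). The Monte Carlo sketch below is my own check of this relationship, not the authors' simulation code:

```python
import math
import random

def simulate_mle_mse(rho: float, n: int, reps: int = 4000, seed: int = 1) -> float:
    """Monte Carlo estimate of the MSE of the MLE of rho (the sample
    mean) when n observed queue lengths are i.i.d. Poisson(rho)."""
    rng = random.Random(seed)
    limit = math.exp(-rho)

    def draw_poisson() -> int:
        # Knuth's multiplication method (adequate for moderate rho)
        k, p = 0, rng.random()
        while p > limit:
            p *= rng.random()
            k += 1
        return k

    total_sq_err = 0.0
    for _ in range(reps):
        mle = sum(draw_poisson() for _ in range(n)) / n
        total_sq_err += (mle - rho) ** 2
    return total_sq_err / reps

# Theory predicts MSE ≈ rho / n, e.g. 0.2 / 10 = 0.02 (cf. Table 2).
```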
Table 3. Bayes estimates and MSEs of ρ under SELF for inverted beta prior.

| Prior | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Beta(2,2) | 0.20 | 0.3919 | 0.0510 | 0.3056 | 0.0193 | 0.2446 | 0.0056 | 0.2231 | 0.0024 | 0.2116 | 0.0011 |
| | 0.50 | 0.6469 | 0.0606 | 0.5807 | 0.0281 | 0.5317 | 0.0104 | 0.5160 | 0.0051 | 0.5077 | 0.0026 |
| | 0.80 | 0.9200 | 0.0799 | 0.8591 | 0.0396 | 0.8238 | 0.0157 | 0.8119 | 0.0081 | 0.8057 | 0.0040 |
| | 5.00 | 4.9660 | 0.4934 | 4.9865 | 0.2505 | 4.9899 | 0.1019 | 4.9977 | 0.0496 | 4.9964 | 0.0257 |
| Beta(2,1) | 0.20 | 0.4186 | 0.0631 | 0.3146 | 0.0219 | 0.2482 | 0.0061 | 0.2246 | 0.0026 | 0.2124 | 0.0012 |
| | 0.50 | 0.6845 | 0.0745 | 0.5957 | 0.0315 | 0.5384 | 0.0111 | 0.5204 | 0.0052 | 0.5097 | 0.0026 |
| | 0.80 | 0.9618 | 0.0949 | 0.8821 | 0.0434 | 0.8324 | 0.0165 | 0.8166 | 0.0081 | 0.8085 | 0.0039 |
| | 5.00 | 5.0392 | 0.4944 | 5.0252 | 0.2510 | 5.0106 | 0.1011 | 5.0061 | 0.0513 | 5.0007 | 0.0248 |
| Beta(1,2) | 0.20 | 0.3332 | 0.0327 | 0.2706 | 0.0135 | 0.2288 | 0.0045 | 0.2146 | 0.0021 | 0.2076 | 0.0010 |
| | 0.50 | 0.5925 | 0.0476 | 0.5480 | 0.0237 | 0.5197 | 0.0097 | 0.5093 | 0.0051 | 0.5048 | 0.0025 |
| | 0.80 | 0.8664 | 0.0705 | 0.8339 | 0.0377 | 0.8140 | 0.0154 | 0.8066 | 0.0080 | 0.8026 | 0.0041 |
| | 5.00 | 4.9536 | 0.5011 | 4.9767 | 0.2452 | 4.9887 | 0.0997 | 4.9950 | 0.0503 | 4.9994 | 0.0252 |
| Beta(1,1) | 0.20 | 0.3497 | 0.0388 | 0.2771 | 0.0148 | 0.2337 | 0.0049 | 0.2163 | 0.0022 | 0.2087 | 0.0011 |
| | 0.50 | 0.6285 | 0.0600 | 0.5662 | 0.0277 | 0.5267 | 0.0104 | 0.5130 | 0.0050 | 0.5066 | 0.0025 |
| | 0.80 | 0.9062 | 0.0824 | 0.8530 | 0.0395 | 0.8218 | 0.0157 | 0.8109 | 0.0080 | 0.8056 | 0.0039 |
| | 5.00 | 5.0379 | 0.4902 | 5.0143 | 0.2480 | 5.0088 | 0.1008 | 4.9968 | 0.0500 | 5.0033 | 0.0246 |
Table 4. Bayes estimates and MSEs of ρ under SELF for Gamma prior.

| Prior | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Gamma(0.4,0.4) | 0.20 | 0.3083 | 0.0397 | 0.2632 | 0.0141 | 0.2257 | 0.0045 | 0.2144 | 0.0022 | 0.2064 | 0.0010 |
| | 0.50 | 0.6145 | 0.0597 | 0.5606 | 0.0276 | 0.5253 | 0.0106 | 0.5120 | 0.0051 | 0.5059 | 0.0025 |
| | 0.80 | 0.9065 | 0.0849 | 0.8568 | 0.0429 | 0.8243 | 0.0164 | 0.8103 | 0.0078 | 0.8048 | 0.0039 |
| | 5.00 | 4.9359 | 0.4682 | 4.9627 | 0.2362 | 4.9883 | 0.1011 | 4.9926 | 0.0502 | 4.9950 | 0.0243 |
| Gamma(0.4,1) | 0.20 | 0.2910 | 0.0334 | 0.2561 | 0.0128 | 0.2227 | 0.0043 | 0.2124 | 0.0021 | 0.2056 | 0.0010 |
| | 0.50 | 0.5833 | 0.0496 | 0.5438 | 0.0245 | 0.5191 | 0.0099 | 0.5095 | 0.0049 | 0.5034 | 0.0025 |
| | 0.80 | 0.8530 | 0.0689 | 0.8285 | 0.0368 | 0.8121 | 0.0153 | 0.8051 | 0.0078 | 0.8030 | 0.0040 |
| | 5.00 | 4.6774 | 0.5107 | 4.8340 | 0.2520 | 4.9345 | 0.1012 | 4.9644 | 0.0508 | 4.9833 | 0.0252 |
| Gamma(1,0.4) | 0.20 | 0.3592 | 0.0580 | 0.2920 | 0.0189 | 0.2378 | 0.0053 | 0.2196 | 0.0023 | 0.2100 | 0.0011 |
| | 0.50 | 0.6720 | 0.0774 | 0.5888 | 0.0316 | 0.5342 | 0.0108 | 0.5182 | 0.0052 | 0.5094 | 0.0025 |
| | 0.80 | 0.9619 | 0.1006 | 0.8854 | 0.0453 | 0.8340 | 0.0167 | 0.8160 | 0.0082 | 0.8097 | 0.0042 |
| | 5.00 | 4.9948 | 0.4553 | 4.9959 | 0.2385 | 4.9952 | 0.0994 | 5.0012 | 0.0504 | 4.9985 | 0.0247 |
| Gamma(1,1) | 0.20 | 0.3406 | 0.0493 | 0.2847 | 0.0174 | 0.2359 | 0.0051 | 0.2175 | 0.0023 | 0.2091 | 0.0011 |
| | 0.50 | 0.6350 | 0.0602 | 0.5721 | 0.0281 | 0.5289 | 0.0103 | 0.5128 | 0.0050 | 0.5075 | 0.0025 |
| | 0.80 | 0.9079 | 0.0798 | 0.8566 | 0.0394 | 0.8232 | 0.0159 | 0.8115 | 0.0081 | 0.8068 | 0.0041 |
| | 5.00 | 4.7283 | 0.4957 | 4.8536 | 0.2492 | 4.9446 | 0.0974 | 4.9717 | 0.0494 | 4.9853 | 0.0250 |
Table 5. Bayes estimates and MSEs of ρ under SELF for Jeffreys prior.

| Prior | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Jeffreys | 0.20 | 0.2495 | 0.0225 | 0.2254 | 0.0106 | 0.2111 | 0.0042 | 0.2047 | 0.0020 | 0.2026 | 0.0010 |
| | 0.50 | 0.5487 | 0.0519 | 0.5259 | 0.0255 | 0.5093 | 0.0102 | 0.5060 | 0.0050 | 0.5027 | 0.0026 |
| | 0.80 | 0.8540 | 0.1614 | 0.8235 | 0.0782 | 0.8100 | 0.0317 | 0.8049 | 0.0162 | 0.8034 | 0.0108 |
| | 5.00 | 5.0395 | 0.5057 | 5.0212 | 0.2516 | 5.0074 | 0.0989 | 5.0032 | 0.0497 | 5.0031 | 0.0249 |
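For the Jeffreys prior the posterior is available in closed form, which makes the structure of the SELF and PLF estimators transparent: with queue-length data x_1, …, x_n from the Poisson(ρ) pmf derived above and π(ρ) ∝ ρ^(−1/2), the posterior of ρ is Gamma with shape Σx_i + 1/2 and rate n. The SELF estimate is the posterior mean and the PLF estimate is √E[ρ²] (Norstrom, 1996). The sketch below is my own illustration of these closed forms; the tabled values come from MCMC simulation:

```python
import math

def jeffreys_bayes_estimates(xs):
    """Closed-form Bayes estimates of rho from queue-length data
    xs ~ i.i.d. Poisson(rho), under the Jeffreys prior
    pi(rho) ∝ rho^(-1/2).  The posterior is Gamma(shape a, rate n)
    with a = sum(xs) + 1/2."""
    n = len(xs)
    a = sum(xs) + 0.5
    rho_self = a / n                      # posterior mean minimizes SELF
    rho_plf = math.sqrt(a * (a + 1)) / n  # sqrt(E[rho^2]) minimizes PLF
    return rho_self, rho_plf

# Example: ten observed queue lengths with sum 4
# -> SELF estimate 0.45; PLF estimate sqrt(4.5 * 5.5) / 10 ≈ 0.4975
```

Since √E[ρ²] > E[ρ] for any nondegenerate posterior, the PLF estimate always exceeds the SELF estimate, which is visible when comparing Tables 5 and 8.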
Table 6. Bayes estimates and MSEs of ρ under PLF for inverted beta prior.

| Prior | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Beta(2,2) | 0.20 | 0.4359 | 0.0702 | 0.3264 | 0.0241 | 0.2550 | 0.0065 | 0.2273 | 0.0026 | 0.2141 | 0.0011 |
| | 0.50 | 0.6926 | 0.0763 | 0.6013 | 0.0317 | 0.5423 | 0.0110 | 0.5225 | 0.0053 | 0.5107 | 0.0025 |
| | 0.80 | 0.9580 | 0.0901 | 0.8834 | 0.0426 | 0.8308 | 0.0160 | 0.8177 | 0.0083 | 0.8090 | 0.0041 |
| | 5.00 | 5.0181 | 0.4842 | 5.0051 | 0.2490 | 5.0072 | 0.0990 | 5.0009 | 0.0503 | 5.0017 | 0.0250 |
| Beta(2,1) | 0.20 | 0.4586 | 0.0826 | 0.3381 | 0.0277 | 0.2582 | 0.0072 | 0.2288 | 0.0027 | 0.2151 | 0.0012 |
| | 0.50 | 0.7286 | 0.0939 | 0.6177 | 0.0360 | 0.5491 | 0.0118 | 0.5245 | 0.0055 | 0.5123 | 0.0026 |
| | 0.80 | 1.0009 | 0.1086 | 0.9037 | 0.0462 | 0.8421 | 0.0172 | 0.8212 | 0.0081 | 0.8098 | 0.0040 |
| | 5.00 | 5.1054 | 0.5140 | 5.0495 | 0.2489 | 5.0248 | 0.0986 | 5.0070 | 0.0500 | 5.0033 | 0.0245 |
| Beta(1,2) | 0.20 | 0.3719 | 0.0450 | 0.2890 | 0.0162 | 0.2380 | 0.0051 | 0.2194 | 0.0022 | 0.2098 | 0.0010 |
| | 0.50 | 0.6375 | 0.0594 | 0.5748 | 0.0280 | 0.5296 | 0.0104 | 0.5138 | 0.0050 | 0.5072 | 0.0025 |
| | 0.80 | 0.9126 | 0.0805 | 0.8572 | 0.0402 | 0.8232 | 0.0161 | 0.8107 | 0.0081 | 0.8045 | 0.0040 |
| | 5.00 | 4.9923 | 0.4781 | 5.0032 | 0.2476 | 5.0013 | 0.0989 | 4.9978 | 0.0491 | 5.0036 | 0.0254 |
| Beta(1,1) | 0.20 | 0.3968 | 0.0558 | 0.3020 | 0.0194 | 0.2421 | 0.0056 | 0.2214 | 0.0023 | 0.2107 | 0.0010 |
| | 0.50 | 0.6677 | 0.0709 | 0.5890 | 0.0307 | 0.5376 | 0.0109 | 0.5170 | 0.0051 | 0.5092 | 0.0025 |
| | 0.80 | 0.9543 | 0.0959 | 0.8836 | 0.0449 | 0.8308 | 0.0165 | 0.8160 | 0.0082 | 0.8072 | 0.0040 |
| | 5.00 | 5.0834 | 0.4884 | 5.0403 | 0.2545 | 5.0150 | 0.1024 | 5.0057 | 0.0500 | 5.0036 | 0.0256 |
Table 7. Bayes estimates and MSEs of ρ under PLF for Gamma prior.

| Prior | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Gamma(0.4,0.4) | 0.20 | 0.2952 | 0.0345 | 0.2594 | 0.0131 | 0.2252 | 0.0045 | 0.2125 | 0.0022 | 0.2072 | 0.0010 |
| | 0.50 | 0.5653 | 0.0509 | 0.5320 | 0.0252 | 0.5126 | 0.0099 | 0.5060 | 0.0049 | 0.5035 | 0.0025 |
| | 0.80 | 0.8531 | 0.1509 | 0.8337 | 0.0796 | 0.8130 | 0.0318 | 0.8062 | 0.0159 | 0.8041 | 0.0107 |
| | 5.00 | 4.6910 | 0.5284 | 4.8277 | 0.2581 | 4.9291 | 0.0983 | 4.9639 | 0.0506 | 4.9815 | 0.0248 |
| Gamma(0.4,1) | 0.20 | 0.3000 | 0.0359 | 0.2581 | 0.0132 | 0.2262 | 0.0045 | 0.2126 | 0.0021 | 0.2071 | 0.0010 |
| | 0.50 | 0.5355 | 0.0432 | 0.5176 | 0.0233 | 0.5077 | 0.0097 | 0.5043 | 0.0048 | 0.5020 | 0.0024 |
| | 0.80 | 0.8072 | 0.1324 | 0.8043 | 0.0738 | 0.8041 | 0.0310 | 0.8003 | 0.0163 | 0.8005 | 0.0104 |
| | 5.00 | 4.6921 | 0.5083 | 4.8368 | 0.2524 | 4.9272 | 0.0997 | 4.9668 | 0.0510 | 4.9811 | 0.0251 |
| Gamma(1,0.4) | 0.20 | 0.2998 | 0.0353 | 0.2613 | 0.0132 | 0.2261 | 0.0046 | 0.2126 | 0.0021 | 0.2065 | 0.0011 |
| | 0.50 | 0.6215 | 0.0604 | 0.5608 | 0.0282 | 0.5269 | 0.0106 | 0.5128 | 0.0051 | 0.5063 | 0.0025 |
| | 0.80 | 0.9111 | 0.1635 | 0.8565 | 0.0792 | 0.8230 | 0.0316 | 0.8118 | 0.0162 | 0.8080 | 0.0108 |
| | 5.00 | 4.6660 | 0.5240 | 4.8345 | 0.2538 | 4.9277 | 0.1003 | 4.9617 | 0.0504 | 4.9822 | 0.0241 |
| Gamma(1,1) | 0.20 | 0.2994 | 0.0356 | 0.2604 | 0.0134 | 0.2243 | 0.0045 | 0.2131 | 0.0021 | 0.2060 | 0.0010 |
| | 0.50 | 0.5671 | 0.0515 | 0.5317 | 0.0246 | 0.5154 | 0.0100 | 0.5065 | 0.0049 | 0.5039 | 0.0024 |
| | 0.80 | 0.8621 | 0.1334 | 0.8325 | 0.0709 | 0.8154 | 0.0305 | 0.8071 | 0.0153 | 0.8046 | 0.0106 |
| | 5.00 | 4.6812 | 0.5127 | 4.8359 | 0.2546 | 4.9329 | 0.0998 | 4.9668 | 0.0495 | 4.9826 | 0.0258 |
Table 8. Bayes estimates and MSEs of ρ under PLF for Jeffreys prior.

| Prior | ρ | Est. (n = 10) | MSE (n = 10) | Est. (n = 20) | MSE (n = 20) | Est. (n = 50) | MSE (n = 50) | Est. (n = 100) | MSE (n = 100) | Est. (n = 200) | MSE (n = 200) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Jeffreys | 0.20 | 0.2939 | 0.0293 | 0.2494 | 0.0129 | 0.2214 | 0.0045 | 0.2086 | 0.0021 | 0.2045 | 0.0010 |
| | 0.50 | 0.5952 | 0.0586 | 0.5513 | 0.0275 | 0.5185 | 0.0103 | 0.5093 | 0.0051 | 0.5054 | 0.0024 |
| | 0.80 | 0.8953 | 0.1690 | 0.8512 | 0.0849 | 0.8214 | 0.0325 | 0.8101 | 0.0161 | 0.8075 | 0.0106 |
| | 5.00 | 5.0981 | 0.5139 | 5.0370 | 0.2574 | 5.0184 | 0.0997 | 5.0101 | 0.0496 | 5.0037 | 0.0249 |
Table 9. Predictive distribution of the number of customers in the system.

| No. of Customers (m) | ρ | Inverted Beta | Gamma | Jeffreys | B12 | B13 | B23 |
|---|---|---|---|---|---|---|---|
| 0 | 0.2 | 0.7790 | 0.7703 | 0.7880 | 1.0114 | 0.9886 | 0.9775 |
| 1 | | 0.1864 | 0.1926 | 0.1791 | 0.9682 | 1.0411 | 1.0752 |
| 2 | | 0.0300 | 0.0321 | 0.0285 | 0.9334 | 1.0515 | 1.1265 |
| 3 | | 0.0040 | 0.0045 | 0.0039 | 0.9057 | 1.0391 | 1.1473 |
| 4 | | 0.0005 | 0.0006 | 0.0005 | 0.8839 | 1.0141 | 1.1473 |
| 0 | 0.5 | 0.6556 | 0.6472 | 0.6512 | 1.0129 | 1.0067 | 0.9939 |
| 1 | | 0.2650 | 0.2697 | 0.2664 | 0.9828 | 0.9948 | 1.0123 |
| 2 | | 0.0647 | 0.0674 | 0.0666 | 0.9592 | 0.9709 | 1.0123 |
| 3 | | 0.0123 | 0.0131 | 0.0131 | 0.9411 | 0.9404 | 0.9993 |
| 4 | | 0.0020 | 0.0022 | 0.0022 | 0.9279 | 0.9066 | 0.9771 |
| 0 | 0.8 | 0.3839 | 0.3840 | 0.3676 | 0.9998 | 1.0444 | 1.0446 |
| 1 | | 0.3513 | 0.3520 | 0.3509 | 0.9981 | 1.0013 | 1.0031 |
| 2 | | 0.1760 | 0.1760 | 0.1834 | 1.0000 | 0.9595 | 0.9595 |
| 3 | | 0.0639 | 0.0636 | 0.0695 | 1.0051 | 0.9194 | 0.9148 |
| 4 | | 0.0188 | 0.0185 | 0.0213 | 1.0134 | 0.8812 | 0.8696 |
| 0 | 5 | 0.0212 | 0.0259 | 0.0192 | 0.8205 | 1.1086 | 1.3511 |
| 1 | | 0.0780 | 0.0906 | 0.0723 | 0.8617 | 1.0801 | 1.2534 |
| 2 | | 0.1470 | 0.1623 | 0.1396 | 0.9057 | 1.0528 | 1.1625 |
| 3 | | 0.1889 | 0.1983 | 0.1840 | 0.9525 | 1.0266 | 1.0778 |
| 4 | | 0.1864 | 0.1859 | 0.1861 | 1.0024 | 1.0015 | 0.9991 |
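When the posterior of ρ is a gamma distribution (as with the gamma and Jeffreys priors), the predictive distribution of the number of customers is a Poisson mixture that integrates to a negative binomial, so predictive probabilities like those in Table 9 can be evaluated in closed form. A sketch, with illustrative posterior hyperparameters a and b that are not taken from the paper:

```python
import math

def predictive_pmf(m: int, a: float, b: float) -> float:
    """Posterior predictive P(M = m) when rho | data ~ Gamma(shape=a,
    rate=b) and M | rho ~ Poisson(rho); integrating rho out gives a
    negative binomial distribution."""
    log_p = (math.lgamma(a + m) - math.lgamma(a) - math.lgamma(m + 1)
             + a * math.log(b / (b + 1.0)) - m * math.log(b + 1.0))
    return math.exp(log_p)

# Sanity check: the predictive probabilities sum to 1
total = sum(predictive_pmf(m, a=2.5, b=10.0) for m in range(500))
```

Working on the log scale via `math.lgamma` keeps the computation stable for large m or large shape parameters.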
Table 10. Estimating AIC and BIC of M/M/1 queue with balking and M/M/1 queue.

| Distribution | Log-likelihood | AIC | BIC | Goodness of Fit |
|---|---|---|---|---|
| M/M/1 queue with balking | −422.2357 | 846.4715 | 849.7698 | 0.586978 |
Table 11. Point estimates for the real-life example.

| Prior | ρ̂ (SELF) | ρ̂ (PLF) |
|---|---|---|
| Inverted Beta (a = 2, b = 2) | 5.0733 | 5.0758 |
| Gamma (a = 2, b = 2) | 5.0346 | 5.0371 |
| Jeffreys | 5.0775 | 5.0799 |

Share and Cite

MDPI and ACS Style

Kushvaha, B.; Das, D.; Tamuli, A.; Bora, D.; Deka, M.; Choudhury, A. Modeling and Estimation of Traffic Intensity in M/M/1 Queueing System with Balking: Classical and Bayesian Approaches. AppliedMath 2025, 5, 19. https://doi.org/10.3390/appliedmath5010019

