An Approximate Solution for M/G/1 Queues with Pure Mixture Service Time Distributions

1 Department of Industrial Engineering, Cukurova University, Adana 01330, Turkey
2 Department of Industrial Engineering, Adana Alparslan Turkes Science and Technology University, Adana 01250, Turkey
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1753; https://doi.org/10.3390/sym17101753
Submission received: 8 September 2025 / Revised: 29 September 2025 / Accepted: 7 October 2025 / Published: 17 October 2025
(This article belongs to the Section Mathematics)

Abstract

This study introduces an approximate solution for the M/G/1 queueing model in scenarios where the service time distribution follows a pure mixture distribution. The derivation of the proposed approximation leverages the analytical tractability of the variance for certain mixture distributions. By incorporating this variance into the Pollaczek–Khinchine equation, an approximate closed-form expression for the M/G/1 queue is obtained. The formulation is extended to service-time distributions composed of two or more components, specifically Gamma, Gaussian, and Beta mixtures. To assess the accuracy of the proposed approach, a discrete-event simulation of an M/G/1 system was conducted using random variates generated from these mixture distributions. The comparative analysis reveals that the approximation yields results in close agreement with simulation outputs, with particularly high accuracy observed for Gaussian mixture cases.

1. Introduction

The M/G/1 queue model is widely applied across various fields due to its straightforward solution methodology. It represents a system in which arrivals follow a Markovian (Poisson) process with rate $\lambda$, while service times follow a general, non-exponential probability distribution. The exact solution of the M/G/1 model was first developed by Pollaczek and Khinchine (P-K). To derive performance metrics, the service-time distribution must conform to a theoretical distribution and its variance must be known [1]. However, when distribution-fitting tests are performed on heterogeneous datasets, it is often observed that the data do not conform to any single theoretical distribution. In such cases, rather than directly using an empirical distribution, it can be tested whether the dataset fits a mixture distribution.
Mixture distributions are often applied to represent heterogeneous populations composed of multiple hidden but internally homogeneous subgroups [2]. Such models, also referred to as latent class models or as forms of unsupervised learning, have seen increasing use in recent years across numerous fields. A batch process monitoring approach based on Gaussian mixture models was introduced, outperforming conventional methods in fault detection [3]. The connection between theoretical advances in mixture modeling and practical applications in medical and health sciences has been demonstrated through extensive real-world examples [4]. A new claim number distribution was introduced by mixing a negative binomial distribution with a Gamma distribution, and its performance was evaluated [5]. Wiper et al. [6] also applied mixture models through a Bayesian density estimation approach to the M/G/1 queueing model. Bosch-Domènech et al. [7] introduced a mixture model based on the Beta distribution to analyze Beauty-Contest data, providing better fit and more precise insights into common reasoning patterns than Gaussian-based mixture models. Finite mixture models (FMMs) were applied to Danish fire insurance losses, and Gamma mixture density networks have been applied to a motor insurance claim dataset, improving both risk assessment and the prediction of claim severities [8,9]. FMMs combining Gamma, Weibull, and Lognormal distributions have been applied to model the length of hospital stay within diagnosis-related groups, addressing skewness and improving estimation [10]. Gaussian mixture modeling, combined with manifold learning and data augmentation, has been applied to predict the flow of water in coal mines, significantly improving predictive precision, interpretability, and applicability under complex geological conditions [11]. Time-course gene expression analysis was conducted by Hefemeister et al. [12] to identify co-expressed genes, predict treatment responses, and classify novel toxic compounds.
Mixture models have been widely studied for their ability to handle heterogeneous data, particularly in large datasets essential for scientific and practical applications [13]. Mixture distributions consist of two or more component distributions. If all distributions belong to the same distribution family but have different parameter sets, they are called pure mixture distributions. If at least one of them is different from the others, they are referred to as convoluted mixture distributions. In convoluted mixture distributions, the calculations are somewhat more complex. Therefore, pure mixture distributions are more commonly used in practice. Among pure mixture distributions, the Gaussian mixture distribution is the most widely used in the literature, as it is formed by combining several Gaussian distributions with distinct parameters. A key property of the Gaussian distribution is its symmetry. Symmetry plays a fundamental role in statistical theory, since in symmetric distributions the sample mean and variance are uncorrelated. The Gaussian mixture distribution comprises multiple component distributions, each of which is symmetric.
Estimation of mixture distribution parameters represents a critical methodological challenge with wide-ranging applications in statistics, machine learning, and data analysis [14]. Several methods exist for estimating the parameters of a mixture distribution, each with its advantages and limitations. Rao [15] was the first to introduce the Maximum Likelihood Estimation (MLE) approach to the problem of normal mixtures, proposing an iterative solution for the case of two components with equal standard deviations using Fisher’s scoring method. This approach was extended to more complex cases with more than two components and unequal variances [16,17]. The use of MLE was generalized to mixtures of multivariate normal distributions and the first computer program enabling their routine application was developed [18,19]. Wolfe’s method for solving MLEs is an early example of the EM (Expectation Maximization) algorithm, which was later generalized by Dempster [20]. Bermudez et al. [21] introduced finite mixtures of multiple Poisson and multiple negative binomial models, estimating them through the EM algorithm on an empirical dataset. The results indicated that the finite mixture of multiple Poisson models fit the data better in terms of EM parameters than the negative binomial model for the dataset concerning the number of days of work disability resulting from motor vehicle accidents. Several researchers have also applied Bayesian analysis for mixture models. Diebolt et al. [22] described the Bayes estimators with Gibbs sampling and compared them with MLE results. Roberts and Rezek [23] presented a Bayesian approach for parameter estimation of a Gaussian mixture model, while Tsionas [24] and Bouguila et al. [25] applied it to the Gamma mixture model and the Beta mixture model, respectively.
Although Gaussian-based finite mixtures have traditionally dominated discussions and applications, various other mixture distributions have found practical use in specific domains. In cases of failure or survival, the observed times may fit mixture distributions due to the presence of multiple causes [26]. The components of these mixtures may consist of distributions such as the negative exponential, Weibull, and others. Elmahdy and Aboutahoun [27] employed finite mixtures of Weibull distributions to simulate lifetime data for systems with failure modes. This involves estimating the unknown parameters, which constitutes crucial statistical work, particularly for reliability analysis and life testing. Instead of representing the length of hospital stay (LOS) with a single distribution, Ickowicz and Sparks [28] modeled it using convolutive mixture distributions and employed MLE and EM for parameter estimation.
In addition to parameter estimation, evaluating how well the system reacts to changes in its parameters provides a means of assessing the reliability of the approximation methods in M/G/1 mixture models. Among the most notable contributions to the adaptation of mixture distributions to M/G/1 systems is the study by Mohammadi and Salahi [29]. Their work considers the case where a certain proportion of customers require re-service, assuming that both the initial service time and the re-service time follow a truncated normal mixture distribution. Based on this assumption, they derived closed-form expressions for the system parameters within a Bayesian framework. In a subsequent study, the same authors extended their analysis to the case where the initial service time and the re-service times of customers follow a Gamma mixture distribution, once again deriving the corresponding system parameters using a Bayesian approach [30].
The main distinction of this study from the aforementioned works lies in the estimation of system parameters for an M/G/1 queueing model where customers receive service only once, and the service-time distribution follows k-component Gaussian, Gamma, or Beta mixture distributions. A major contribution of this study is the derivation of lower- and upper-bound formulations for the mean waiting time in the queue, separately for each of the three systems under consideration. When the service-time distribution consists of pure (non-mixture) distributions, the determination of the upper bound of the mean waiting time in an M/G/1 system was first introduced by Marchal [31]. In Marchal’s study, it was demonstrated that the upper bound of the mean waiting time in the queue is equal to the exact solution obtained from the P-K formula for the same parameter. In this study, the analysis of the lower and upper bound formulations for the mean waiting time—derived and presented in detail in the subsequent sections—shows that, for all three mixture distributions considered, both the simulation results and the derived formulations for the mean waiting time closely approximate the upper bound. This consistency provides strong evidence supporting the validity of the proposed formulations and constitutes one of the key contributions of this work.
In the M/G/c system, when the service-time distribution consists of pure distributions, no exact solution for the system parameters has yet been established. However, several studies have proposed approximate solutions [32,33,34]. Morozov et al. [35] conducted a sensitivity analysis of the system parameters in an M/G/c model with a two-component Pareto mixture service-time distribution, employing the regenerative perfect simulation method. Since the M/G/c system does not have an exact solution, they evaluated approximate solutions with simulation. In this study, an approximate solution is proposed for the M/G/1 model when the service-time distribution consists of a pure mixture distribution. The rationale for deriving the approximate formula is based on the ability to compute the variance of certain pure mixture distributions. By substituting this variance into the P-K equation, we obtain an approximate formula for the M/G/1 queue. The approximate formula has been adapted for service-time distributions consisting of two components, including the Gamma, Gaussian, and Beta mixture distributions. To measure the effectiveness of the proposed approximate formula, an M/G/1 system was simulated using random numbers generated from mixture distributions. The results indicate that the proposed approximate formula yields outcomes very close to those of the simulation, particularly for Gaussian mixture distributions.
The remainder of this paper is organized as follows. Section 2 introduces mixture distributions and describes both exact and approximate solution approaches. In Section 3, the approximate formulas for M/G/1 queueing models with mixture distributions are presented. Section 4 outlines the experimental design framework used to analyze different model parameters. Section 5 reports the numerical results along with performance graphs. Finally, Section 6 concludes the paper.

2. Mixture Distributions

In recent years, the growing prevalence of heterogeneous datasets in real-life applications has necessitated the use of efficient data modeling techniques to uncover the valuable information they contain. Within this framework, finite mixture distributions have emerged as prominent statistical tools, grounded in solid theoretical foundations, for modeling complex data structures, identifying latent subpopulations, and revealing hidden patterns within the data. A mixture distribution is a probability distribution typically expressed as a convex linear combination of probability density functions (PDFs).
The PDF of a finite mixture distribution can be defined as a weighted combination of k component distributions as in Equation (1).
$$f(x;\Psi)=\sum_{j=1}^{k}\pi_j\,g_j(x;\theta_j),\qquad x\in\mathbb{R}\tag{1}$$
where the mixing proportions satisfy
$$\sum_{j=1}^{k}\pi_j=1,\qquad 0<\pi_j<1\tag{2}$$
Here, $g_j(x;\theta_j)$ denotes the PDF of the j-th component distribution with parameter vector $\theta_j$ (e.g., mean and variance in the case of a Gaussian distribution). The full parameter set of the mixture model is denoted by $\Psi=\{\pi_j,\theta_j\}$ for $j=1,\dots,k$, where $\pi_j$ is the mixing proportion of the j-th component.

2.1. Parameter Estimation

In the analysis of FMMs, various methodologies are employed to estimate the parameters of the mixture distribution and to determine the optimal number of components. The most common approaches include the method of moments, minimum distance methods, MLE, and Bayesian techniques. Among these alternatives, MLE has gained particular prominence due to its broad applicability.

2.1.1. Maximum Likelihood Estimation

Given a random sample of observations $(x_1,\dots,x_n)$ generated from the mixture distribution in Equation (1), the likelihood function of an FMM with k components is given in Equation (3).
$$L(\Psi)=\prod_{i=1}^{n}\sum_{j=1}^{k}\pi_j\,g_j(x_i;\theta_j),\qquad L(\Psi;x)\equiv L(\Psi)\tag{3}$$
$L(\Psi)$ denotes the likelihood of the observed data given the set of parameters $\Psi=\{\pi_1,\dots,\pi_k;\theta_1,\dots,\theta_k\}$. Here, $\pi_j$ denotes the mixing proportion of the j-th component, and $g_j(x_i;\theta_j)$ is the PDF of the i-th observation under the j-th component with parameter $\theta_j$. The likelihood is computed as the product over all observations of the sum of the weighted component densities.
In Equation (4), $\ell(\Psi)$ refers to the log-likelihood function associated with the sample data, obtained by applying the logarithm to the likelihood of all observations. Such a transformation is beneficial, as it converts the product form of the likelihood into an additive form, thereby simplifying the mathematical formulation, particularly in the maximization step for estimating the parameters $\theta_j$.
$$\log L(\Psi)=\ell(\Psi)=\sum_{i=1}^{n}\log\Bigl(\sum_{j=1}^{k}\pi_j\,g_j(x_i;\theta_j)\Bigr)\tag{4}$$
The likelihood and log-likelihood functions play a fundamental role in the statistical analysis of mixture distributions and constitute the foundation for parameter estimation techniques, including MLE, which selects the set of parameters that maximizes the log-likelihood function $\ell(\Psi)$. The MLE can be characterized by setting the gradient of the log-likelihood to zero, as shown in Equations (5) and (6). Solving these equations yields the MLEs of the component parameters $\theta_j$ and the mixing proportions $\pi_j$.
$$\frac{\partial\ell(\Psi)}{\partial\theta_j}=\sum_{i=1}^{n}\frac{\pi_j\,\partial g_j(x_i;\theta_j)/\partial\theta_j}{\sum_{l=1}^{k}\pi_l\,g_l(x_i;\theta_l)}=0,\qquad j=1,\dots,k\tag{5}$$
$$\frac{\partial\ell(\Psi)}{\partial\pi_j}=\sum_{i=1}^{n}\frac{g_j(x_i;\theta_j)-g_k(x_i;\theta_k)}{\sum_{l=1}^{k}\pi_l\,g_l(x_i;\theta_l)}=0,\qquad j=1,\dots,k-1\tag{6}$$
Since closed-form solutions for the MLE of mixture distributions are typically intractable, the EM algorithm, originally proposed by Dempster et al. [20], is commonly employed to estimate the parameters of such models.

2.1.2. Expectation Maximization Algorithm

The EM algorithm is an iterative computational procedure used to obtain either maximum likelihood estimates or maximum a posteriori estimates of the parameters within statistical models. It works by introducing missing or latent data, representing unobserved information that simplifies the model and its likelihood function. The application of the EM algorithm to mixture distributions has been extensively described in the literature [2].
Consider an independent and identically distributed (i.i.d.) sample $(x_1,\dots,x_n)$ generated from the mixture distribution introduced in Equation (1). In practice, it is more convenient to employ the log-likelihood function, denoted by $\ell(\Psi)$ and presented in Equation (7).
$$\log L(\Psi)=\ell(\Psi)=\log\prod_{i=1}^{n}\sum_{j=1}^{k}\pi_j\,g_j(x_i;\theta_j)=\sum_{i=1}^{n}\log\sum_{j=1}^{k}\pi_j\,g_j(x_i;\theta_j)\tag{7}$$
We can define the missing data as a binary indicator variable $z_{ij}$:
$$z_{ij}=\begin{cases}1,&\text{if the }i\text{-th observation is from the }j\text{-th component of the mixture}\\0,&\text{otherwise}\end{cases}\tag{8}$$
for $i=1,\dots,n$ and $j=1,\dots,k$, and $z_i=(z_{i1},\dots,z_{ik})^{T}$.
Let $y=(y_1,\dots,y_n)$ denote the complete data, where $y_i=(x_i,z_i^{T})$. In this case, the complete-data log-likelihood of the augmented data can be written as
$$\log L_c(\Psi;y)=\sum_{i=1}^{n}\sum_{j=1}^{k}z_{ij}\log\bigl(\pi_j\,g_j(x_i;\theta_j)\bigr)\tag{9}$$
Since the latent variables z i j are not observable, the MLE of Ψ cannot be obtained in closed form. To address this limitation, the EM algorithm is employed, which iteratively alternates between two stages: the Expectation (E) step and the Maximization (M) step.
In the E-step, the log-likelihood is replaced by its conditional expectation given the observed data, thereby incorporating the contribution of the unobserved components. That is, the expected value of the complete-data log-likelihood is evaluated, given the observed data and the current parameter estimates. This expectation can be expressed as follows:
$$Q\bigl(\Psi\mid\Psi^{(m)}\bigr)=E\bigl[\log L_c(\Psi;y)\mid x;\Psi^{(m)}\bigr]\tag{10}$$
where $\Psi^{(m)}$ represents the parameter estimates obtained in the m-th iteration. Since the complete-data log-likelihood $\log L_c(\Psi;y)$ is linear in the latent variables $z_{ij}$, these unobserved indicators are replaced with their conditional expectations, also called responsibilities. Accordingly, the conditional probability that the i-th observation belongs to the j-th component is given by Equation (11).
$$p_{ij}^{(m+1)}=E\bigl[z_{ij}\mid x_i,\Psi^{(m)}\bigr]=P\bigl(z_{ij}=1\mid x_i,\Psi^{(m)}\bigr)=\frac{\pi_j^{(m)}g_j\bigl(x_i;\theta_j^{(m)}\bigr)}{\sum_{l=1}^{k}\pi_l^{(m)}g_l\bigl(x_i;\theta_l^{(m)}\bigr)}\tag{11}$$
$p_{ij}^{(m+1)}$ represents the posterior probability (responsibility) that the i-th observation belongs to the j-th component, based on the current parameter estimates $\Psi^{(m)}$. In this context, $p_{ij}^{(m)}$ can be viewed as the estimator of the latent indicator $\hat z_{ij}^{(m)}$. Subsequently, these probabilities are utilized in the M-step to estimate the model parameters, thereby guiding the iterative process of the EM algorithm toward convergence. In the M-step, Equations (12) and (13) describe a constrained optimization problem.
$$\bigl(\hat\theta_j,\hat\pi_j\bigr)=\arg\max_{\theta_j,\pi_j}Q(\theta_1,\dots,\theta_k,\pi_1,\dots,\pi_k)\tag{12}$$
subject to
$$\sum_{j=1}^{k}\pi_j=1\tag{13}$$
This problem can be reformulated by introducing the Lagrange multiplier $\delta$, as in Equation (14).
$$L(\theta_1,\dots,\theta_k,\pi_1,\dots,\pi_k,\delta)=Q(\theta_1,\dots,\theta_k,\pi_1,\dots,\pi_k)-\delta\Bigl(\sum_{j=1}^{k}\pi_j-1\Bigr)\tag{14}$$
By substituting the likelihood and log-likelihood expressions (given in Equations (7)–(9)) into the probability structure of the mixture model, the Lagrangian for the constrained maximization problem can be explicitly written as
$$L(\theta_1,\dots,\theta_k,\pi_1,\dots,\pi_k,\delta)=\sum_{i=1}^{n}\sum_{j=1}^{k}\bigl[\hat p_{ij}\log\pi_j+\hat p_{ij}\log g_j(x_i;\theta_j)\bigr]-\delta\Bigl(\sum_{j=1}^{k}\pi_j-1\Bigr)\tag{15}$$
By taking the partial derivatives of the Lagrangian in Equation (15) with respect to θ j , π j , and  δ , and equating them to zero, the parameter estimators can be derived as follows:
$$\frac{\partial L}{\partial\theta_j}=\sum_{i=1}^{n}\frac{\hat p_{ij}}{g_j(x_i;\theta_j)}\,\frac{\partial g_j(x_i;\theta_j)}{\partial\theta_j}=0\tag{16}$$
$$\frac{\partial L}{\partial\delta}=\sum_{j=1}^{k}\pi_j-1=0\;\Longrightarrow\;\sum_{j=1}^{k}\frac{1}{\delta}\sum_{i=1}^{n}p_{ij}=1\;\Longrightarrow\;\delta=\sum_{i=1}^{n}\sum_{j=1}^{k}p_{ij}\tag{17}$$
$$\frac{\partial L}{\partial\pi_j}=\sum_{i=1}^{n}\frac{\hat p_{ij}}{\pi_j}-\delta=0,\quad\text{which yields}\quad\pi_j=\frac{1}{\delta}\sum_{i=1}^{n}p_{ij}\tag{18}$$
$$\hat\pi_j=\frac{\sum_{i=1}^{n}p_{ij}}{\sum_{i=1}^{n}\sum_{l=1}^{k}p_{il}},\qquad j=1,\dots,k\tag{19}$$
The solution of Equations (16)–(19) provides estimates of the parameters $\hat\theta_j$ and $\hat\pi_j$ for $j=1,\dots,k$ in each iteration [36]. The relation between $\hat p_{ij}$ and $\hat z_{ij}$ can be expressed as follows.
$$\hat z_{ij}=\begin{cases}1,&\text{if }\hat p_{ij}\ge\hat p_{ic}\text{ for all }c\neq j\\0,&\text{otherwise}\end{cases}\tag{20}$$
The steps of the EM algorithm are explained in Algorithm 1.
Algorithm 1 EM Algorithm
  • Initialize the parameters with starting values $\hat\theta_j^{(m)},\hat\pi_j^{(m)}$ $(m=0)$, $j=1,\dots,k$, and iterate the following steps until the algorithm converges.
  • E-step: Estimate the responsibility that observation $x_i$ belongs to the j-th component according to Equation (21).
  • M-step: Update the parameter estimates by maximizing the expected complete-data log-likelihood, using the responsibilities from the E-step, as in Equations (22) and (23).
$$p_{ij}^{(m+1)}=\frac{\pi_j^{(m)}g_j\bigl(x_i;\theta_j^{(m)}\bigr)}{\sum_{l=1}^{k}\pi_l^{(m)}g_l\bigl(x_i;\theta_l^{(m)}\bigr)},\qquad i=1,\dots,n,\;j=1,\dots,k\tag{21}$$
$$\theta_j^{(m+1)}=\arg\max_{\theta_j}\sum_{i=1}^{n}p_{ij}^{(m+1)}\log g_j(x_i;\theta_j)\tag{22}$$
$$\pi_j^{(m+1)}=\frac{1}{n}\sum_{i=1}^{n}p_{ij}^{(m+1)}\tag{23}$$

2.1.3. Identifying the Number of Components

To identify the best-fitting model, we use the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), which are statistical tools commonly applied in model selection. These criteria are defined as follows:
$$AIC=2k-2\ln(L)\tag{24}$$
$$BIC=k\ln(n)-2\ln(L)\tag{25}$$
where k denotes the number of parameters, L is the likelihood function, and n represents the sample size. Both AIC and BIC include penalty terms that account for the complexity of the model, reflecting the number of parameters. However, BIC imposes a stricter penalty than AIC, rendering it more conservative when it comes to selecting overly complex models. AIC generally favors models that fit the data more closely, but this can sometimes result in overfitting, especially with large datasets. In contrast, BIC reduces the risk of overfitting by applying a stronger penalty, making it a more appropriate choice when striking a balance between model complexity and goodness of fit is important [37]. The model selection is based on a comparison of AIC and BIC values, with the model producing the lowest criterion value considered the most suitable for representing the dataset.
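As an illustration, the selection rule above can be sketched as follows; the log-likelihood values are placeholders standing in for the maximized log-likelihood of each fitted mixture, and the parameter count assumes a univariate Gaussian mixture with $3k-1$ free parameters (k means, k variances, k − 1 weights):

```python
import math

def aic(num_params: int, log_lik: float) -> float:
    # AIC = 2k - 2 ln(L), with ln(L) supplied directly as log_lik
    return 2 * num_params - 2 * log_lik

def bic(num_params: int, log_lik: float, n: int) -> float:
    # BIC = k ln(n) - 2 ln(L); the ln(n) penalty grows with sample size
    return num_params * math.log(n) - 2 * log_lik

# Hypothetical maximized log-likelihoods for k = 1, 2, 3 components
# fitted to n = 500 observations.
n = 500
candidates = {1: -1450.0, 2: -1380.0, 3: -1376.0}

scores = {k: (aic(3 * k - 1, ll), bic(3 * k - 1, ll, n))
          for k, ll in candidates.items()}
best_aic = min(scores, key=lambda k: scores[k][0])
best_bic = min(scores, key=lambda k: scores[k][1])
```

With these placeholder values, AIC selects k = 3 while the stronger $\ln(n)$ penalty of BIC selects k = 2, illustrating the conservatism noted above.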

2.2. The Gaussian Mixture Distribution

Let $f(x;\Psi)=\sum_{j=1}^{k}\pi_j\,n_j(x;\theta_j)$, where $\theta_j=(\mu_j,\sigma_j^2)$ denotes the parameters of the j-th component, with $\mu_j$ representing the mean and $\sigma_j^2$ the variance. Then, the density function of the j-th Gaussian component can be expressed as
$$n_j(x_i;\mu_j,\sigma_j^2)=\frac{1}{\sqrt{2\pi\sigma_j^2}}\exp\Bigl(-\frac{(x_i-\mu_j)^2}{2\sigma_j^2}\Bigr)=\frac{1}{\sigma_j}\,\phi\Bigl(\frac{x_i-\mu_j}{\sigma_j}\Bigr),\qquad j=1,\dots,k\tag{26}$$
By substituting the Gaussian component density given in Equation (26) in place of the PDF in Equation (21), we obtain the following expression:
$$\hat p_{ij}^{(m+1)}=\frac{\pi_j^{(m)}\,\phi\bigl((x_i-\mu_j^{(m)})/\sigma_j^{(m)}\bigr)\big/\sigma_j^{(m)}}{\sum_{l=1}^{k}\pi_l^{(m)}\,\phi\bigl((x_i-\mu_l^{(m)})/\sigma_l^{(m)}\bigr)\big/\sigma_l^{(m)}},\qquad i=1,\dots,n,\;j=1,\dots,k\tag{27}$$
Given the following Q-function,
$$Q(\mu_1,\dots,\mu_k,\sigma_1^2,\dots,\sigma_k^2,\pi_1,\dots,\pi_k)=\sum_{i=1}^{n}\sum_{j=1}^{k}\Bigl[\hat p_{ij}\log\pi_j+\hat p_{ij}\Bigl(-\tfrac{1}{2}\log(2\pi)-\log\sigma_j-\frac{(x_i-\mu_j)^2}{2\sigma_j^2}\Bigr)\Bigr]\tag{28}$$
The Lagrange function is
$$L(\mu_1,\dots,\mu_k,\sigma_1^2,\dots,\sigma_k^2,\pi_1,\dots,\pi_k,\delta)=\sum_{i=1}^{n}\sum_{j=1}^{k}\Bigl[\hat p_{ij}\log\pi_j+\hat p_{ij}\Bigl(-\tfrac{1}{2}\log(2\pi)-\log\sigma_j-\frac{(x_i-\mu_j)^2}{2\sigma_j^2}\Bigr)\Bigr]-\delta\Bigl(\sum_{j=1}^{k}\pi_j-1\Bigr)\tag{29}$$
Therefore, the parameter estimates of the Gaussian mixture distribution are obtained using the following equations.
$$\frac{\partial L}{\partial\mu_j}=\sum_{i=1}^{n}\hat p_{ij}\Bigl(\frac{x_i-\mu_j}{\sigma_j^2}\Bigr)=0,\quad\text{then}\quad\hat\mu_j=\frac{\sum_{i=1}^{n}\hat p_{ij}x_i}{\sum_{i=1}^{n}\hat p_{ij}},\qquad j=1,\dots,k\tag{30}$$
$$\frac{\partial L}{\partial\sigma_j}=\sum_{i=1}^{n}\hat p_{ij}\Bigl(-\frac{1}{\sigma_j}+\frac{(x_i-\mu_j)^2}{\sigma_j^3}\Bigr)=0,\quad\text{then}\quad\hat\sigma_j^2=\frac{\sum_{i=1}^{n}\hat p_{ij}(x_i-\hat\mu_j)^2}{\sum_{i=1}^{n}\hat p_{ij}},\qquad j=1,\dots,k\tag{31}$$
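The closed-form updates in Equations (27), (30), and (31) specialize Algorithm 1 to Gaussian components; a minimal EM sketch, with a fixed iteration count, a crude range-based initialization, and synthetic two-cluster data standing in for real observations, might look as follows:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_gaussian_mixture(data, k, iters=50):
    n = len(data)
    # Crude initialization: spread means over the data range, equal weights
    lo, hi = min(data), max(data)
    mus = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]
    sigmas = [(hi - lo) / k or 1.0] * k
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities p_ij, as in Equation (27)
        resp = []
        for x in data:
            w = [pis[j] * normal_pdf(x, mus[j], sigmas[j]) for j in range(k)]
            s = sum(w)
            resp.append([wj / s for wj in w])
        # M-step: weighted means, variances, and mixing proportions
        # (Equations (30), (31), and (23))
        for j in range(k):
            nj = sum(r[j] for r in resp)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-12))
            pis[j] = nj / n
    return pis, mus, sigmas

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(10.0, 1.0) for _ in range(200)]
pis, mus, sigmas = em_gaussian_mixture(data, k=2)
```

On such well-separated synthetic data, the recovered component means approach 0 and 10 after a few dozen iterations; a production implementation would instead monitor the log-likelihood for convergence and guard against degenerate components.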

2.3. The Gamma Mixture Distribution

The PDF of the Gamma mixture distribution is defined as follows:
$$f(x;\Psi)=\sum_{j=1}^{k}\pi_j\,\gamma_j(x;\theta_j)\tag{32}$$
where $\theta_j=(\alpha_j,\beta_j)$ for $j=1,\dots,k$ denotes the set of parameters of the components of the Gamma mixture. $\alpha_j$ and $\beta_j$ represent the shape and rate parameters of the j-th Gamma distribution, respectively. $\pi_j$ denotes the mixing proportion of the j-th component, satisfying $\pi_j\ge 0$ for all j and $\sum_{j=1}^{k}\pi_j=1$.
The PDF of the Gamma distribution, denoted by $\gamma_j$, and the Gamma function $\Gamma(\alpha_j)$ for the j-th component are defined in Equations (33) and (34), respectively.
$$\gamma_j(x;\alpha_j,\beta_j)=\frac{\beta_j^{\alpha_j}\,x^{\alpha_j-1}e^{-\beta_j x}}{\Gamma(\alpha_j)},\qquad x\ge 0,\quad j=1,\dots,k\tag{33}$$
$$\Gamma(\alpha_j)=\int_{0}^{\infty}t^{\alpha_j-1}e^{-t}\,dt\tag{34}$$
The mean and variance of the j-th Gamma distribution are given in Equations (35) and (36), respectively.
$$\mu_{\gamma_j}=\frac{\alpha_j}{\beta_j},\qquad j=1,\dots,k\tag{35}$$
$$\sigma_{\gamma_j}^2=\frac{\alpha_j}{\beta_j^2},\qquad j=1,\dots,k\tag{36}$$
The EM algorithm, widely used for estimating the parameters of the Gamma mixture distribution [38], completes its E-step by applying Equation (37).
$$p_{ij}^{(r+1)}=E\bigl[z_{ij}\mid x_i,\Psi^{(r)}\bigr]=P\bigl(z_{ij}=1\mid x_i,\Psi^{(r)}\bigr)=\frac{\pi_j^{(r)}\gamma\bigl(x_i;\alpha_j^{(r)},\beta_j^{(r)}\bigr)}{\sum_{v=1}^{k}\pi_v^{(r)}\gamma\bigl(x_i;\alpha_v^{(r)},\beta_v^{(r)}\bigr)}\tag{37}$$
Subsequently, these probabilities are employed in the M-step of the algorithm to update the parameter estimates.
In the M-step, the parameters of the Gamma mixture distribution are substituted into the log-likelihood function, and the constrained optimization problem is solved using the Lagrange multiplier $\delta$. To estimate the parameters of the Gamma mixture distribution, the partial derivatives of the log-likelihood with respect to $\alpha_j$ and $\beta_j$ are calculated. The resulting estimate of $\beta_j$ is given in Equation (38).
$$\hat\beta_j=\frac{\alpha_j\sum_{i=1}^{n}\hat p_{ij}}{\sum_{i=1}^{n}\hat p_{ij}x_i}\tag{38}$$
It is observed that the estimate of the rate parameter $\beta_j$ depends on the shape parameter $\alpha_j$. Since $\alpha_j$ cannot be obtained in closed form, its estimation is carried out by exploiting the iterative, gradient-based nature of the EM algorithm. During each iteration, the parameters are updated by taking a positive step along the gradient of the log-likelihood function with respect to the model parameters. Consequently, the shape parameter $\alpha_j$ is updated in the gradient direction with a step size $\varepsilon_j\in(0,1)$, as in Equations (39) and (40).
$$\alpha_j^{(r+1)}=\alpha_j^{(r)}+\varepsilon_j\,G_{\alpha_j}\bigl(X,\theta^{(r)}\bigr)\tag{39}$$
$$G_{\alpha_j}\bigl(X,\theta^{(r)}\bigr)=\frac{\partial Q\bigl(X,\theta^{(r)}\bigr)}{n\,\partial\alpha_j}=\frac{1}{n}\sum_{i=1}^{n}\Bigl[\log(x_i)+\log\beta_j^{(r)}-\psi\bigl(\alpha_j^{(r)}\bigr)\Bigr]p_{ij}^{(r+1)}\tag{40}$$
The digamma function, represented by the symbol $\psi$, is defined in Equation (41). Since the digamma function does not have a closed-form expression, an approximate representation, accurate in particular for $x\ge 2$, is provided in Equation (42).
$$\psi(x)=\frac{\Gamma'(x)}{\Gamma(x)}\tag{41}$$
$$\psi(x)\approx G(x)=\log\Bigl(x-\frac{1}{2}\Bigr)+\frac{1}{24\bigl(x-\frac{1}{2}\bigr)^2}\tag{42}$$
Consequently, the parameter α j can be computed iteratively by using Equation (43).
$$\psi\bigl(\alpha_j^{(r)}\bigr)\approx\log\Bigl(\alpha_j^{(r)}-\frac{1}{2}\Bigr)+\frac{1}{24\bigl(\alpha_j^{(r)}-\frac{1}{2}\bigr)^2}\tag{43}$$
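The quality of the approximation in Equation (42) can be checked numerically; in the sketch below the reference value of $\psi$ is obtained by a central finite difference of `math.lgamma`, which evaluates $\Gamma'(x)/\Gamma(x)$ independently of the approximation:

```python
import math

def digamma_approx(x: float) -> float:
    # G(x) = log(x - 1/2) + 1 / (24 (x - 1/2)^2), as in Equation (42)
    t = x - 0.5
    return math.log(t) + 1.0 / (24.0 * t * t)

def digamma_reference(x: float, h: float = 1e-5) -> float:
    # psi(x) = d/dx log Gamma(x), via a central difference of lgamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

for x in (2.0, 5.0, 10.0):
    err = abs(digamma_approx(x) - digamma_reference(x))
    print(f"x={x}: |G(x) - psi(x)| = {err:.2e}")
```

The absolute error is on the order of $10^{-3}$ at $x=2$ and shrinks rapidly as x grows, which is adequate for the gradient step in Equation (39).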
For a gamma mixture distribution, in which each component follows a Gamma( α j , β j ) distribution with mixing proportion π j ( j = 1 , , k ), the mean μ γ m and variance σ γ m 2 can be expressed as follows:
$$\mu_{\gamma m}=\sum_{j=1}^{k}\pi_j\,\frac{\alpha_j}{\beta_j}\tag{44}$$
$$\sigma_{\gamma m}^2=\sum_{j=1}^{k}\pi_j\Bigl(\frac{\alpha_j}{\beta_j^2}+\frac{\alpha_j^2}{\beta_j^2}\Bigr)-\mu_{\gamma m}^2\tag{45}$$
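Under the shape–rate convention used above, the mixture moments in Equations (44) and (45) can be cross-checked by sampling; note that Python's `random.gammavariate(alpha, beta)` takes a scale parameter, so the rate $\beta_j$ must be inverted. The component values below are illustrative:

```python
import random

def gamma_mixture_moments(pis, alphas, betas):
    # Mean and variance of a Gamma mixture with shape alpha_j and rate beta_j
    mean = sum(p * a / b for p, a, b in zip(pis, alphas, betas))
    second = sum(p * (a / b**2 + (a / b) ** 2) for p, a, b in zip(pis, alphas, betas))
    return mean, second - mean**2

random.seed(7)
pis, alphas, betas = [0.4, 0.6], [2.0, 9.0], [1.0, 3.0]
mean, var = gamma_mixture_moments(pis, alphas, betas)

# Monte Carlo check: pick a component by its weight, then draw from it
samples = []
for _ in range(50_000):
    j = 0 if random.random() < pis[0] else 1
    samples.append(random.gammavariate(alphas[j], 1.0 / betas[j]))  # scale = 1/rate
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)
```

For these parameters the analytic moments are mean 2.6 and variance 1.64, and the Monte Carlo estimates agree to within sampling error.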

2.4. The Beta Mixture Distribution

The PDF of the beta mixture distribution with k components is given by
$$f(x;\Psi)=\sum_{j=1}^{k}\pi_j\,b_j(x;\alpha_j,\beta_j)\tag{46}$$
where $\Psi=(\theta_1,\theta_2,\dots,\theta_k)$ represents the parameter space of the mixture distribution, with each component defined as $\theta_j=(\pi_j,\alpha_j,\beta_j)$ for $j=1,\dots,k$. Here, $\alpha_j$ and $\beta_j$ are the two shape parameters of the j-th Beta component, and both take positive real values. $\pi_j$ denotes the mixing proportion of the j-th component, where $\pi_j\ge 0$ for all j and the weights satisfy the constraint $\sum_{j=1}^{k}\pi_j=1$. In this framework, $b_j(x;\alpha_j,\beta_j)$ defines the density of the j-th Beta component, which is scaled by its corresponding proportion $\pi_j$, as expressed below:
$$b_j(x;\alpha_j,\beta_j)=\frac{\Gamma(\alpha_j+\beta_j)}{\Gamma(\alpha_j)\Gamma(\beta_j)}\,x^{\alpha_j-1}(1-x)^{\beta_j-1},\qquad j=1,\dots,k,\quad 0<x<1\tag{47}$$
where Γ ( . ) is the Gamma function, as given in Equation (48).
$$\Gamma(z)=\int_{0}^{\infty}t^{z-1}e^{-t}\,dt\tag{48}$$
For the Beta mixture distribution, the log-likelihood function is given in Equation (49).
$$\log L(x;\Psi)=\ell(\Psi)=\log\prod_{i=1}^{n}\sum_{j=1}^{k}\pi_j\,b_j(x_i;\alpha_j,\beta_j)=\sum_{i=1}^{n}\log\sum_{j=1}^{k}\pi_j\,b_j(x_i;\alpha_j,\beta_j)\tag{49}$$
Substituting the PDF of the Beta mixture into the E-step expression in Equation (21) yields the responsibilities in Equation (50).
$$p_{ij}^{(m)}=\frac{\pi_j^{(m)}b_j\bigl(x_i;\alpha_j^{(m)},\beta_j^{(m)}\bigr)}{\sum_{s=1}^{k}\pi_s^{(m)}b_s\bigl(x_i;\alpha_s^{(m)},\beta_s^{(m)}\bigr)}\tag{50}$$
The Lagrange function for the beta mixture distribution is
$$L(\theta_1,\dots,\theta_k,\pi_1,\dots,\pi_k,\delta)=\sum_{i=1}^{n}\sum_{j=1}^{k}\hat p_{ij}\bigl[\log\pi_j+\log b_j(x_i;\alpha_j,\beta_j)\bigr]-\delta\Bigl(\sum_{j=1}^{k}\pi_j-1\Bigr)\tag{51}$$
By differentiating the Lagrangian function with respect to α j and β j and equating the results to zero, the following equations are obtained.
$$\frac{\partial L}{\partial\alpha_j}=\sum_{i=1}^{n}\hat p_{ij}\bigl[\log x_i+\psi(\alpha_j+\beta_j)-\psi(\alpha_j)\bigr]=0\tag{52}$$
$$\frac{\partial L}{\partial\beta_j}=\sum_{i=1}^{n}\hat p_{ij}\bigl[\log(1-x_i)+\psi(\alpha_j+\beta_j)-\psi(\beta_j)\bigr]=0\tag{53}$$
where $\psi(\cdot)$ is the digamma function, expressed as $\Gamma'(x)/\Gamma(x)$. Rearranging yields the following equations.
$$\psi(\alpha_j)-\psi(\alpha_j+\beta_j)=\frac{\sum_{i=1}^{n}\hat p_{ij}\log x_i}{\sum_{i=1}^{n}\hat p_{ij}}\tag{54}$$
$$\psi(\beta_j)-\psi(\alpha_j+\beta_j)=\frac{\sum_{i=1}^{n}\hat p_{ij}\log(1-x_i)}{\sum_{i=1}^{n}\hat p_{ij}}\tag{55}$$
Since there is no exact solution for these equations, approximate solutions can be obtained using the Newton–Raphson method [39,40].
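A Newton–Raphson sketch for Equations (54) and (55) is shown below; the digamma and trigamma evaluations use standard recurrence-plus-asymptotic expansions, and the right-hand sides `c1`, `c2` are constructed from a known Beta(2, 5) component so that the solution is known in advance:

```python
import math

def digamma(x: float) -> float:
    # psi(x): recurrence up to x >= 6, then asymptotic expansion
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def trigamma(x: float) -> float:
    # psi'(x): recurrence up to x >= 6, then asymptotic expansion
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return r + inv + 0.5 * inv2 + inv2 * inv * (1/6 - inv2 * (1/30 - inv2 / 42))

def solve_beta_params(c1: float, c2: float, a=1.0, b=1.0, iters=100):
    # Newton-Raphson on F(a,b) = (psi(a)-psi(a+b)-c1, psi(b)-psi(a+b)-c2)
    for _ in range(iters):
        f1 = digamma(a) - digamma(a + b) - c1
        f2 = digamma(b) - digamma(a + b) - c2
        t = trigamma(a + b)
        j11, j12 = trigamma(a) - t, -t
        j21, j22 = -t, trigamma(b) - t
        det = j11 * j22 - j12 * j21
        da = (f1 * j22 - f2 * j12) / det
        db = (f2 * j11 - f1 * j21) / det
        a = max(a - da, a / 10.0)  # keep parameters positive
        b = max(b - db, b / 10.0)
    return a, b

# Right-hand sides generated from a known Beta(2, 5), so the solver
# should recover alpha = 2 and beta = 5.
c1 = digamma(2.0) - digamma(7.0)
c2 = digamma(5.0) - digamma(7.0)
alpha, beta = solve_beta_params(c1, c2)
```

In a full EM implementation, the right-hand sides would instead be the responsibility-weighted averages of $\log x_i$ and $\log(1-x_i)$ for each component, recomputed at every M-step.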

3. Incorporating Mixture Distributions into M/G/1 Queueing System

The exact results of the M/G/1 queueing system were first derived by Pollaczek and Khintchine. When the interarrival times of customers follow an exponential distribution and the mean and variance of the service time distribution are known, the corresponding equations for the parameters defined below are presented in Equations (56)–(60).
$W_q$: expected waiting time of a customer in the queue; $W$: expected time a customer spends in the system; $L_q$: expected number of customers in the queue; $L$: expected number of customers in the system; $\rho$: utilization factor.
Under the stability condition $\rho=\lambda\mu_S<1$,
$$L_q=\frac{\lambda^2\sigma_S^2+\rho^2}{2(1-\rho)}\tag{56}$$
$$L=\rho+L_q\tag{57}$$
$$W_q=\frac{\lambda^2\sigma_S^2+\lambda^2\mu_S^2}{2\lambda(1-\lambda\mu_S)}=\frac{\lambda\bigl(\mu_S^2+\sigma_S^2\bigr)}{2(1-\lambda\mu_S)}\tag{58}$$
$$W_q=\frac{L_q}{\lambda}\tag{59}$$
$$W=W_q+\mu_S\tag{60}$$
where
$1/\lambda$: the mean of the interarrival time distribution; $\mu_S$: the mean of the service time distribution; $\sigma_S^2$: the variance of the service time distribution.
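As a quick sanity check on Equations (56)–(60), the metrics can be computed from $(\lambda,\mu_S,\sigma_S^2)$ and verified against one another, e.g. $L_q=\lambda W_q$; the M/D/1 case ($\sigma_S^2=0$) is used below because its $W_q=\rho\mu_S/(2(1-\rho))$ is easy to verify by hand:

```python
def mg1_metrics(lam: float, mu_s: float, var_s: float):
    # Pollaczek-Khinchine formulas for M/G/1; mu_s is the MEAN service
    # time (not a rate) and var_s its variance, as in the text.
    rho = lam * mu_s
    assert rho < 1, "stability condition rho < 1 violated"
    wq = lam * (mu_s**2 + var_s) / (2 * (1 - rho))  # Equation (58)
    w = wq + mu_s                                   # Equation (60)
    lq = lam * wq                                   # Equation (59) rearranged
    l = rho + lq                                    # Equation (57)
    return {"rho": rho, "Wq": wq, "W": w, "Lq": lq, "L": l}

# M/D/1 example: lam = 0.5, deterministic service of 1 time unit
m = mg1_metrics(0.5, 1.0, 0.0)
```

Here $\rho=0.5$ and $W_q=0.5\cdot 1/(2\cdot 0.5)=0.5$, matching the hand calculation.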
The mean and variance of the Gaussian mixture distribution, with each component parameterized by θ j = ( μ j , σ j 2 ) for j = 1 , , k , are expressed in Equations (61) and (62).
$$\mu_{mn}=\sum_{j=1}^{k}\pi_j\mu_j\tag{61}$$
$$\sigma_{mn}^2=\sum_{j=1}^{k}\pi_j\bigl(\mu_j^2+\sigma_j^2\bigr)-\mu_{mn}^2\tag{62}$$
where
$\mu_{mn}$: the mean of the Gaussian mixture distribution; $\sigma_{mn}^2$: the variance of the Gaussian mixture distribution; $\mu_j$: the mean of the j-th Gaussian distribution, $j=1,\dots,k$; $\sigma_j^2$: the variance of the j-th Gaussian distribution; $\pi_j$: the mixing proportion of the j-th Gaussian distribution.
In the M/G/1 queueing system, if the service time distribution is modeled as a k-component Gaussian mixture distribution with the parameters defined above, let the average waiting time in the queue be denoted by W q n . Then, by substituting μ m n and σ m n 2 , as defined in Equations (61) and (62), in place of μ S and σ S 2 in Equation (58), the expression for W q n given in Equation (63) can be obtained.
$$W_{qn}=\frac{\lambda\sum_{j=1}^{k}\pi_j\bigl(\mu_j^2+\sigma_j^2\bigr)}{2\Bigl(1-\lambda\sum_{j=1}^{k}\pi_j\mu_j\Bigr)}\tag{63}$$
In this system, let the expected time a customer spends in the system be denoted by W_n. When Equation (59) is reformulated using the parameters of the Gaussian mixture distribution, the equality given in Equation (64) is obtained.
W_n = \frac{\lambda \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)} + \sum_{j=1}^{k} \pi_j \mu_j
Let the average number of customers in the queue and in the system be denoted by L_qn and L_n, respectively. When the equalities given in Equations (56) and (60) are adapted using the parameters of the Gaussian mixture distribution, the expressions in Equations (65)–(67) are obtained.
L_{qn} = \frac{\lambda^2\left(\sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2) - \mu_{mn}^2\right) + \lambda^2 \mu_{mn}^2}{2(1 - \lambda\mu_{mn})}
L_{qn} = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)}
L_n = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)} + \lambda \sum_{j=1}^{k} \pi_j \mu_j
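The reduction used above, in which μ_mn² + σ_mn² collapses to Σ_j π_j(μ_j² + σ_j²), can be verified numerically with a short sketch (illustrative function names, not the authors' code):

```python
def mixture_moments(weights, means, variances):
    """Mean and variance of a mixture from its components' first two moments."""
    mu = sum(p * m for p, m in zip(weights, means))
    second = sum(p * (m ** 2 + v) for p, m, v in zip(weights, means, variances))
    return mu, second - mu ** 2  # Eqs. (61) and (62)

def wq_gaussian_mixture(lam, weights, means, variances):
    """W_q of an M/G/1 queue with mixture service times, in the Eq. (63) form."""
    num = lam * sum(p * (m ** 2 + v) for p, m, v in zip(weights, means, variances))
    den = 2 * (1 - lam * sum(p * m for p, m in zip(weights, means)))
    return num / den
```

Plugging the mixture moments into the P-K expression λ(μ² + σ²)/(2(1 − λμ)) yields exactly the Equation (63) value, which is the identity the derivation relies on.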
If the service-time distribution is modeled as a k-component Gamma mixture with the parameter set θ_j = (α_j, β_j), j = 1, …, k, the mean (μ_mγ) and variance (σ_mγ²) of the distribution can be expressed as in Equations (68) and (69).
\mu_{m\gamma} = \sum_{j=1}^{k} \pi_j \alpha_j \beta_j
\sigma_{m\gamma}^2 = \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j - \mu_{m\gamma}^2
In such a system, let the average waiting times of a customer in the queue and in the system be denoted by W_qγ and W_γ, respectively; the utilization factor ρ_mγ is then given by Equation (70).
\rho_{m\gamma} = \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j < 1
By substituting the Gamma mixture distribution parameters defined in Equations (68) and (69) into Equations (58) and (59), the expressions for W_qγ and W_γ given in Equations (71) and (72) are obtained:
W_{q\gamma} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)}
W_{\gamma} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)} + \sum_{j=1}^{k} \pi_j \alpha_j \beta_j
By adapting the distribution parameters of the Gamma mixture to Equations (56) and (60), L_γ and L_qγ are obtained as given in Equations (73) and (74):
L_{\gamma} = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)} + \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j
L_{q\gamma} = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)}
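The Gamma-mixture case can likewise be evaluated numerically from the component moments. The sketch below assumes the shape-scale convention of Table A3, under which component j has mean α_jβ_j and variance α_jβ_j² (an assumption of ours; the function name is illustrative):

```python
def gamma_mixture_wq(lam, weights, alphas, betas):
    """W_q for an M/G/1 queue with Gamma-mixture service times.

    Assumes shape alpha_j and scale beta_j, so that component j has
    mean a * b and variance a * b**2 (the Table A3 convention).
    """
    means = [a * b for a, b in zip(alphas, betas)]
    variances = [a * b * b for a, b in zip(alphas, betas)]
    mu = sum(p * m for p, m in zip(weights, means))
    second = sum(p * (m * m + v) for p, m, v in zip(weights, means, variances))
    rho = lam * mu
    if rho >= 1:
        raise ValueError("unstable: rho >= 1")
    return lam * second / (2 * (1 - rho))  # P-K with E[S^2] = second
```

With the Sample 1 parameters of Table A3 (α = 25, β = 0.20 and α = 30.25, β = 0.18), both components have unit variance and means of 5 and about 5.45.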
Finally, when the service-time distribution follows a k-component Beta mixture distribution, with each component parameterized by θ_j = (α_j, β_j), j = 1, …, k, let the mean and variance of the Beta mixture distribution be denoted by μ_mb and σ_mb², respectively, with their formulas given in Equations (75) and (76).
\mu_{mb} = \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}
\sigma_{mb}^2 = \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)} - \mu_{mb}^2
By performing the same steps as those applied for the Gaussian and Gamma mixture distributions, and employing the Beta mixture distribution parameters defined in Equations (75) and (76), the corresponding formulations for W_b, W_qb, L_b, and L_qb are derived, as presented in Equations (77)–(80).
W_b = \frac{\lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)} + \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}
W_{qb} = \frac{\lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)}
L_b = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)} + \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}
L_{qb} = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)}
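The Beta-mixture formulas can be checked the same way, using the standard Beta moments, mean α/(α + β) and variance αβ/((α + β)²(α + β + 1)). The sketch below uses small illustrative parameters rather than those of Table A3:

```python
def beta_mixture_wq(lam, weights, alphas, betas):
    """W_q for an M/G/1 queue whose service time follows a Beta mixture."""
    means = [a / (a + b) for a, b in zip(alphas, betas)]
    variances = [a * b / ((a + b) ** 2 * (a + b + 1)) for a, b in zip(alphas, betas)]
    # Per-component E[S^2] = mean^2 + variance = a*(a+1)/((a+b)*(a+b+1))
    second = sum(p * (m * m + v) for p, m, v in zip(weights, means, variances))
    rho = lam * sum(p * m for p, m in zip(weights, means))
    if rho >= 1:
        raise ValueError("unstable: rho >= 1")
    return lam * second / (2 * (1 - rho))
```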
The corresponding performance measures for the Gaussian, Gamma, and Beta mixture distributions are summarized in Table 1.

Determination of Lower and Upper Bounds for the Mean Waiting Time in the Queue

In the analysis of G/G/1 queueing systems, where both the interarrival and service times follow general distributions, Marchal [31] and Kingman [41] established upper and lower bounds on W_q for a customer, as given in Equations (81) and (82), in order to assess the consistency of the results [42].
W_q \leq \frac{\lambda(\sigma_A^2 + \sigma_S^2)}{2(1 - \rho)}
W_q \geq \frac{\lambda^2 \sigma_S^2 + \rho(\rho - 2)}{2\lambda(1 - \rho)}
\rho = \lambda \mu_S < 1
where
σ_A²: the variance of the interarrival time distribution; σ_S²: the variance of the service time distribution; μ_S: the mean of the service time distribution; λ: the arrival rate; ρ: the utilization rate.
For the M/G/1 model, the upper bound was refined by [31], particularly as ρ → 1, by employing the quotient presented in Equation (83), where μ = 1/μ_S denotes the service rate:
\frac{1 + \mu^2 \sigma_S^2}{1/\rho^2 + \mu^2 \sigma_S^2} = \frac{\rho^2 + \lambda^2 \sigma_S^2}{1 + \lambda^2 \sigma_S^2}
This quotient was chosen to make the approximation exact for the M/G/1 queue. The resulting approximation for W_q, obtained by multiplying the upper bound in Equation (81) by the quotient given in Equation (83), is expressed in Equation (84) as
W_q = \frac{\lambda(\sigma_A^2 + \sigma_S^2)}{2(1 - \rho)} \cdot \frac{\rho^2 + \lambda^2 \sigma_S^2}{1 + \lambda^2 \sigma_S^2}
Since the interarrival times of the M/G/1 system follow an exponential distribution, their variance can be expressed as σ_A² = 1/λ². Accordingly, the relation in Equation (85) is obtained for W_q:
W_q = \frac{\lambda(1/\lambda^2 + \sigma_S^2)}{2(1 - \rho)} \cdot \frac{\rho^2 + \lambda^2 \sigma_S^2}{1 + \lambda^2 \sigma_S^2} = \frac{\rho^2 + \lambda^2 \sigma_S^2}{2\lambda(1 - \rho)}
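These relations can be exercised numerically. The sketch below (illustrative names, not from the paper) computes the two G/G/1 bounds and Marchal's refined value, which for exponential interarrivals (σ_A² = 1/λ²) reproduces the P-K result of Equation (85):

```python
def gg1_wq_bounds(lam, mu_s, var_s, var_a):
    """Upper and lower bounds on W_q for a G/G/1 queue (Eqs. (81)-(82) style)."""
    rho = lam * mu_s
    upper = lam * (var_a + var_s) / (2 * (1 - rho))
    lower = (lam ** 2 * var_s + rho * (rho - 2)) / (2 * lam * (1 - rho))
    return max(lower, 0.0), upper  # the raw lower bound can be negative

def marchal_wq_mg1(lam, mu_s, var_s):
    """Upper bound times Marchal's quotient; exact for the M/G/1 queue."""
    rho = lam * mu_s
    var_a = 1 / lam ** 2  # variance of exponential interarrival times
    quotient = (rho ** 2 + lam ** 2 * var_s) / (1 + lam ** 2 * var_s)
    return lam * (var_a + var_s) / (2 * (1 - rho)) * quotient
```

For λ = 0.1, μ_S = 5, σ_S² = 1 the refined value equals the P-K waiting time λ(μ_S² + σ_S²)/(2(1 − ρ)) = 2.6, and it falls between the two bounds.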
As can be observed from Equation (85), this upper bound coincides with the exact solution of W_q in the P-K formula. In this part of the study, the lower and upper bounds of W_q are determined for the M/G/1 queueing system whose service-time distribution follows a Gaussian mixture distribution. For a Gaussian mixture distribution with k components, defined by the mean μ_mn and the variance σ_mn², and with the utilization of the system given by ρ_mn = λμ_mn, the upper bound of W_q is obtained as in Equation (86).
W_q = \frac{\lambda^2 \mu_{mn}^2 + \lambda^2 \sigma_{mn}^2}{2\lambda(1 - \lambda\mu_{mn})} = \frac{\lambda^2(\mu_{mn}^2 + \sigma_{mn}^2)}{2\lambda(1 - \lambda\mu_{mn})}
By performing the necessary simplifications in Equation (86), Equation (87) is obtained.
W_{qn}^{u} = \frac{\lambda(\mu_{mn}^2 + \sigma_{mn}^2)}{2(1 - \lambda\mu_{mn})}
Based on the parameters of the Gaussian mixture distribution, the upper bound given in Equation (87) can be expressed as shown in Equation (88):
W_{qn}^{u} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)}
By denoting the lower bound of this system as W_qn^l and substituting the relevant parameters into the inequality in Equation (82), the lower bound presented in Equation (89) is derived:
W_{qn}^{l} = \frac{\lambda^2 \sigma_{mn}^2 + \lambda\mu_{mn}(\lambda\mu_{mn} - 2)}{2\lambda(1 - \lambda\mu_{mn})}
Furthermore, by rearranging and simplifying the numerator of Equation (89), Equations (90) and (91) are obtained:
W_{qn}^{l} = \frac{\lambda^2 \sigma_{mn}^2 + \lambda^2 \mu_{mn}^2 - 2\lambda\mu_{mn}}{2\lambda(1 - \lambda\mu_{mn})}
W_{qn}^{l} = \frac{\lambda(\sigma_{mn}^2 + \mu_{mn}^2) - 2\mu_{mn}}{2(1 - \lambda\mu_{mn})}
Moreover, in terms of the parameters of the Gaussian mixture distribution, the lower bound can be expressed as in Equation (92).
W_{qn}^{l} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2) - 2\sum_{j=1}^{k} \pi_j \mu_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)}
When the service-time distribution is modeled by a Gamma mixture distribution, the upper bound of the mean waiting time in the queue can be determined by substituting its mean and variance, as defined in Equations (68) and (69), into Equation (81). Accordingly, the upper bound is initially formulated as in Equation (85), and upon performing the required simplifications, the expressions in Equations (93)–(95) are obtained, provided that \rho_{m\gamma} = \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j < 1.
W_{q\gamma}^{u} = \frac{\lambda(\mu_{m\gamma}^2 + \sigma_{m\gamma}^2)}{2(1 - \lambda\mu_{m\gamma})}
W_{q\gamma}^{u} = \frac{\lambda\left(\mu_{m\gamma}^2 + \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j - \mu_{m\gamma}^2\right)}{2(1 - \lambda\mu_{m\gamma})}
W_{q\gamma}^{u} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2(1 - \lambda\mu_{m\gamma})}
By substituting the mean of the Gamma mixture distribution given in Equation (68), the upper bound can be expressed as in Equation (96).
W_{q\gamma}^{u} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)}
In a similar manner, by denoting the lower bound as W_qγ^l and substituting the parameters of the Gamma mixture distribution into Equation (82), the expressions for the lower bound given in Equations (97) and (98) are derived.
W_{q\gamma}^{l} = \frac{\lambda(\sigma_{m\gamma}^2 + \mu_{m\gamma}^2) - 2\mu_{m\gamma}}{2(1 - \lambda\mu_{m\gamma})}
W_{q\gamma}^{l} = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j - 2\sum_{j=1}^{k} \pi_j \alpha_j \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)}
Within the M/G/1 model, assuming that the service time is characterized by a mixture of k Beta distributions, the mean (μ_mb) and variance (σ_mb²) of the Beta mixture distribution are presented above in Equations (75) and (76), respectively.
By applying the same procedure used for the Gaussian and Gamma mixture distributions to the Beta mixture distribution, the upper bound can be derived as shown in Equation (99), whereas the lower bound is obtained as presented in Equation (100).
W_{qb}^{u} = \frac{\lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)}
W_{qb}^{l} = \frac{\lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)} - 2\sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)}
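Because all three mixture families enter the bounds only through their first two moments, a single sketch covers them (a simplification of ours; the lower bound follows the Equation (91) form and can be negative, in which case it is trivially zero):

```python
def mixture_wq_bounds(lam, weights, means, variances):
    """Upper and lower bounds on W_q for an M/G/1 queue with mixture service."""
    mu = sum(p * m for p, m in zip(weights, means))
    second = sum(p * (m * m + v) for p, m, v in zip(weights, means, variances))
    rho = lam * mu
    upper = lam * second / (2 * (1 - rho))             # coincides with P-K W_q
    lower = (lam * second - 2 * mu) / (2 * (1 - rho))  # may be negative; clamp
    return max(lower, 0.0), upper
```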

4. Experimental Design

In M/G/1 queueing systems with mixture service time distributions, a simulation model (illustrated in Figure 1) has been developed to test the effectiveness of the proposed approximate solutions for system parameters.
In the first stage of the simulation model, 10,000 entities were generated with exponentially distributed interarrival times. In the second stage, these entities were routed according to the mixture proportions, and each entity that proceeded to the server was assigned a service time generated from the mixture component selected for it in the routing step.
When the processing times in the process stage are considered collectively, their distribution follows a k-component mixture distribution. The simulation model illustrated in Figure 1 was developed using Arena 13.0, which is particularly suitable for modeling discrete-event systems [43]. In each simulation run, the number of replications was set to 10 using the sequential method to improve the consistency of the results [44]. The simulation model, together with the datasets constructed according to the model parameters provided in Table A2, was employed for the analysis.
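Outside Arena, the same experiment can be sketched with a Lindley recursion, W_{n+1} = max(0, W_n + S_n − A_{n+1}); this is a minimal stand-in for the discrete-event model, with Gaussian draws truncated at zero (negligible here, since the component means lie five standard deviations above zero):

```python
import random

def simulate_mg1_wq(lam, weights, means, sds, n_customers=100_000, seed=7):
    """Estimate W_q for an M/G/1 queue with Gaussian-mixture service times."""
    rng = random.Random(seed)
    total = 0.0
    w = 0.0  # waiting time of the current customer
    for _ in range(n_customers):
        total += w
        j = rng.choices(range(len(weights)), weights=weights)[0]
        s = max(0.0, rng.gauss(means[j], sds[j]))  # mixture service draw
        a = rng.expovariate(lam)                   # next interarrival time
        w = max(0.0, w + s - a)                    # Lindley recursion
    return total / n_customers
```

With λ = 0.1 and a two-component mixture with means 5 and 5.5 and unit variances, the analytical W_q from Equation (63) is approximately 3.01, and the simulated estimate lands close to it.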
In the experimental design for the M/G/1 queueing model, the number of components (k) was set to 2 for each of the Gaussian, Gamma, and Beta mixture distributions. The parameters of the first and second components were determined hypothetically by setting the average service time of the first component to μ_S1 = 5 and increasing it by 10% for the second component, i.e., μ_S2 = 5.5. Based on these parameters, ten distinct samples were constructed, and for each sample, nine test groups were generated under M/G/1 queueing models with interarrival times corresponding to traffic intensities (ρ) ranging from 0.1 to 0.9. The data are given in Table A1 and Table A2. In addition, the Gaussian distribution parameters, along with the shape parameters α and β for the Gamma and Beta distributions, are provided in Table A3.
The approximate solutions for these test groups were obtained by integrating the formulas for the mixture distribution parameters into the P-K equation. To evaluate the accuracy of the proposed approximation across all distributions, discrete-event simulation models were developed and executed for each sample, from which the corresponding approximate values were then obtained.

5. Results and Discussion

In the case of the Gaussian distribution in Figure 2, the absolute deviation between the simulation and the approximation method remains relatively small at lower traffic intensities, typically below 2%. Nevertheless, at higher intensities, irregular increases occur, with certain samples (e.g., Sample 1, Sample 2, and Sample 4) exhibiting pronounced fluctuations. This pattern suggests that while the approximation remains stable in the initial range, its accuracy deteriorates at higher intensities in a sample-dependent manner.
For the Gamma distribution in Figure 3, the deviations follow a more systematic trajectory. Starting close to zero, they increase gradually with traffic intensity, exceeding 10 % for several samples at higher traffic intensities. Unlike the irregular pattern observed in the Gaussian case, the Gamma distribution results reveal a smoother and more consistent upward trend across samples, suggesting that the approximation exhibits a cumulative bias rather than stochastic variability.
For the Beta distribution in Figure 4, deviations are markedly lower in magnitude compared to both the Gaussian and Gamma cases. Almost all values remain well below 1 % , with only a limited number of samples exhibiting minor increases at higher intensities. This indicates that the Beta-based approximation yields the most robust performance among the three distributions, while maintaining stability and minimizing errors across the entire range of tests.
Overall, these results highlight that the accuracy of the approximation method is distribution-dependent. While the Gaussian case suffers from irregular fluctuations and the Gamma case from systematic error growth, the Beta distribution consistently provides superior approximation quality with negligible deviations.
The expected waiting times in the queue for the Gaussian, Gamma, and Beta mixture distributions were determined in Equations (63), (71), and (78), respectively. In addition, the approximate formulas for the lower and upper bounds of the queue waiting times were presented in Equations (88) and (92) for the Gaussian mixture distribution, in Equations (96) and (98) for the Gamma mixture distribution, and in Equations (99) and (100) for the Beta mixture distribution. To evaluate the effectiveness of these approximation formulas, Figure 5, Figure 6 and Figure 7 illustrate comparative plots of the lower and upper bound values obtained from the two-component mixture distributions, whose parameters are given in Table A1, Table A2 and Table A3, together with the W_q values calculated using Equations (63), (71), and (78) for Sample 1 and Sample 10. The means of the first components are set to 5 for all three distributions, with their variances set to 1, while the means of the second components are set to 5.5 (Sample 1) and 10 (Sample 10), with their variances likewise set to 1. For all three mixture distributions, the W_q values are observed to be close to the upper bound. This finding is consistent with Marchal's result [31], in which the upper bound of W_q in the M/G/1 model coincides with the W_q value obtained from the P-K formula. Consequently, Figure 5, Figure 6 and Figure 7 provide graphical evidence that the lower and upper bounds derived for W_q yield reliable results.
Across the six plots comparing the Gaussian (Figure 5), Gamma (Figure 6), and Beta (Figure 7) distributions, the approximate results generally follow a consistent trend, although subtle differences emerge among the distributions. For the Gaussian distribution, under both Sample 1 and Sample 10, the approximation curve remains closely aligned with the upper and lower bounds, indicating a high level of accuracy. In contrast, for the Gamma distribution, while the approximation maintains a reasonable fit for Sample 1, deviations become more evident as the second-component mean increases, particularly at higher traffic intensities, where the curve tends to align more closely with the lower bound. Finally, for the Beta distribution, the approximation values also show good agreement with the bounds, though a slightly wider gap appears for Sample 10, suggesting that the sensitivity of the approximation to distributional shape becomes more pronounced in this case. Overall, these comparisons highlight that while the approximation method is robust across different distributions, its precision is highest for Gaussian, moderately reliable for Beta, and relatively less accurate for Gamma as the traffic intensity increases.
Using the parameters described above, when the system was simulated with the model presented in Figure 1, the comparative graphs of the lower and upper bound values together with the W q values obtained from the simulation model are shown in Figure 8, Figure 9 and Figure 10. For all three mixture distributions, it was observed that the W q values obtained from the simulation model were also close to the upper bound. This finding can be interpreted as an indication that the approximation formulas for the lower and upper bounds yield reliable results.

6. Conclusions

This study presents an approximate solution for the M/G/1 queue under service time distributions characterized by pure mixtures. By exploiting the variance properties of such mixtures and integrating them into the P-K framework, we derive a tractable closed-form approximation applicable to Gaussian, Gamma, and Beta mixture cases. The accuracy of the proposed method was validated through discrete-event simulations, which demonstrated strong consistency between analytical estimates and simulated outcomes. In particular, the approximation shows high precision for Gaussian mixture distributions. These results suggest that the proposed approach provides a reliable and computationally efficient alternative for analyzing M/G/1 systems with complex service time structures and offers potential for broader applications in performance evaluation and system design.

Author Contributions

Conceptualization, M.K. and N.U.; methodology, M.K. and N.U.; software, M.K. and N.U.; validation, M.K. and N.U.; formal analysis, M.K. and N.U.; investigation, M.K. and N.U.; writing—original draft preparation, M.K. and N.U.; writing—review and editing, M.K. and N.U.; visualization, N.U.; supervision, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data are publicly available at https://doi.org/10.6084/m9.figshare.30233956.v2.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

MLE: Maximum likelihood estimation
PDF: Probability density function
FMM: Finite mixture model
EM: Expectation maximization
AIC: Akaike information criterion
BIC: Bayesian information criterion
P-K: Pollaczek–Khinchine

Appendix A

Table A1. Design of tests for each sample.
Test: 1 2 3 4 5 6 7 8 9
π_1: 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90
π_2: 0.90 0.80 0.70 0.60 0.50 0.40 0.30 0.20 0.10
ρ: 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90
Table A2. M/G/1 system parameters for each distribution (μ_S1 = 5). Each cell gives (μ_mix, 1/λ) for the corresponding sample.
Sample: 1 2 3 4 5 6 7 8 9 10
μ_S2: 5.5 6 6.5 7 7.5 8 8.5 9 9.5 10
Test 1: (5.45, 54.50) (5.90, 59.00) (6.35, 63.50) (6.80, 68.00) (7.25, 72.50) (7.70, 75.20) (8.15, 81.50) (8.60, 86.00) (9.05, 90.50) (9.50, 95.00)
Test 2: (5.40, 27.00) (5.80, 29.00) (6.20, 31.00) (6.60, 33.00) (7.00, 35.00) (7.40, 36.20) (7.80, 39.00) (8.20, 41.00) (8.60, 43.00) (9.00, 45.00)
Test 3: (5.35, 17.83) (5.70, 19.00) (6.05, 20.17) (6.40, 21.33) (6.75, 22.50) (7.10, 23.20) (7.45, 24.83) (7.80, 26.00) (8.15, 27.17) (8.50, 28.33)
Test 4: (5.30, 13.25) (5.60, 14.00) (5.90, 14.75) (6.20, 15.50) (6.50, 16.25) (6.80, 16.70) (7.10, 17.75) (7.40, 18.50) (7.70, 19.25) (8.00, 20.00)
Test 5: (5.25, 10.50) (5.50, 11.00) (5.75, 11.50) (6.00, 12.00) (6.25, 12.50) (6.50, 12.80) (6.75, 13.50) (7.00, 14.00) (7.25, 14.50) (7.50, 15.00)
Test 6: (5.20, 8.67) (5.40, 9.00) (5.60, 9.33) (5.80, 9.67) (6.00, 10.00) (6.20, 10.20) (6.40, 10.67) (6.60, 11.00) (6.80, 11.33) (7.00, 11.67)
Test 7: (5.15, 7.36) (5.30, 7.57) (5.45, 7.79) (5.60, 8.00) (5.75, 8.21) (5.90, 8.34) (6.05, 8.64) (6.20, 8.86) (6.35, 9.07) (6.50, 9.29)
Test 8: (5.10, 6.38) (5.20, 6.50) (5.30, 6.63) (5.40, 6.75) (5.50, 6.88) (5.60, 6.95) (5.70, 7.13) (5.80, 7.25) (5.90, 7.38) (6.00, 7.50)
Test 9: (5.05, 5.61) (5.10, 5.67) (5.15, 5.72) (5.20, 5.78) (5.25, 5.83) (5.30, 5.87) (5.35, 5.94) (5.40, 6.00) (5.45, 6.06) (5.50, 6.11)
Table A3. Sample design parameters for Gaussian (μ_S1 = 5 and σ² = 1), Gamma (μ_S1 = 5, α = 25 and β = 0.20), and Beta distributions (μ_S1 = 5, α = 251.17 and β = 22.83).
Sample: 1 2 3 4 5 6 7 8 9 10
μ_S2: 5.5 6 6.5 7 7.5 8 8.5 9 9.5 10
Gaussian (μ_S2, σ²): (5.5, 1) (6, 1) (6.5, 1) (7, 1) (7.5, 1) (8, 1) (8.5, 1) (9, 1) (9.5, 1) (10, 1)
Gamma α: 30.25 36 42.25 49 56.25 64 72.25 81 90.25 100
Gamma β: 0.18 0.17 0.15 0.14 0.13 0.13 0.12 0.11 0.11 0.10
Beta* α: 271.36 290.70 309.19 326.83 343.66 359.67 374.88 389.30 402.95 415.83
Beta* β: 27.39 32.30 37.56 43.17 49.09 55.33 61.87 68.70 75.80 83.17
* Since the mean of a Beta distribution is less than 1, Beta service times are expressed in hours.

References

  1. Hillier, F.S.; Lieberman, G.J. Introduction to Operations Research; McGraw-Hill Higher Education: Columbus, OH, USA, 2010. [Google Scholar]
  2. Yao, W.; Xiang, S. Mixture Models: Parametric, Semiparametric, and New Directions; Chapman and Hall/CRC: Boca Raton, FL, USA, 2024. [Google Scholar]
  3. Yu, J.; Qin, S.J. Multiway Gaussian mixture model based multiphase batch process monitoring. Ind. Eng. Chem. Res. 2009, 48, 8585–8594. [Google Scholar] [CrossRef]
  4. Ng, S.K.; Xiang, L.; Yau, K.W.W. Mixture Modelling for Medical and Health Sciences; Chapman & Hall/CRC: Boca Raton, FL, USA, 2019. [Google Scholar]
  5. Gençtürk, Y.; Yiğiter, A. Modelling claim number using a new mixture model: Negative binomial gamma distribution. J. Stat. Comput. Simul. 2016, 86, 1829–1839. [Google Scholar] [CrossRef]
  6. Wiper, M.; Insua, D.R.; Ruggeri, F. Mixtures of gamma distributions with applications. J. Comput. Graph. Stat. 2001, 10, 440–454. [Google Scholar] [CrossRef]
  7. Bosch-Domènech, A.; Montalvo, J.G.; Nagel, R.; Satorra, A. A finite mixture analysis of beauty-contest data using generalized beta distributions. Exp. Econ. 2010, 13, 461–475. [Google Scholar] [CrossRef]
  8. Miljkovic, T.; Grün, B. Modeling loss data using mixtures of distributions. Insur. Math. Econ. 2016, 70, 387–396. [Google Scholar] [CrossRef]
  9. Delong, Ł.; Lindholm, M.; Wüthrich, M.V. Gamma mixture density networks and their application to modelling insurance claim amounts. Insur. Math. Econ. 2021, 101, 240–261. [Google Scholar] [CrossRef]
  10. Atienza, N.; García-Heras, J.; Muñoz-Pichardo, J.M.; Villa, R. An application of mixture distributions in modelization of length of hospital stay. Stat. Med. 2008, 27, 1403–1420. [Google Scholar] [CrossRef]
  11. Zheng, Q.; Wang, C. Cross-Scenario Interpretable Prediction of Coal Mine Water Inrush Probability: An Integrated Approach Driven by Gaussian Mixture Modeling with Manifold Learning and Metaheuristic Optimization. Symmetry 2025, 17, 1111. [Google Scholar] [CrossRef]
  12. Hafemeister, C.; Costa, I.G.; Schönhuth, A.; Schliep, A. Classifying short gene expression time-courses with Bayesian estimation of piecewise constant functions. Bioinformatics 2011, 27, 946–952. [Google Scholar] [CrossRef]
  13. Tentoni, S.; Astolfi, P.; De Pasquale, A.; Zonta, L.A. Birthweight by gestational age in preterm babies according to a Gaussian mixture model. BJOG Int. J. Obstet. Gynaecol. 2004, 111, 31–37. [Google Scholar] [CrossRef]
  14. Neema, I.; Böhning, D. Improved methods for surveying and monitoring crimes through likelihood based cluster analysis. Int. J. Criminol. Sociol. Theory 2010, 3, 477–495. [Google Scholar]
  15. Rao, C.R. Maximum likelihood estimation for the multinomial distribution. Sankhyā Indian J. Stat. (1933–1960) 1957, 18, 139–148. [Google Scholar]
  16. Hasselblad, V. Estimation of parameters for a mixture of normal distributions. Technometrics 1966, 8, 431–444. [Google Scholar] [CrossRef]
  17. Hasselblad, V. Estimation of finite mixtures of distributions from the exponential family. J. Amer. Statist. Assoc. 1969, 64, 1459–1471. [Google Scholar] [CrossRef]
  18. Wolfe, J.H. A Computer Program for the Maximum-Likelihood Analysis of Types; Technical Bulletin, U.S. Naval Personnel Research Activity; Bureau of Naval Personnel: Millington, TN, USA, 1965. [Google Scholar]
  19. Wolfe, J.H. NORMIX: Computational Methods for Estimating the Parameters of Multivariate Normal Mixtures of Distributions; Research Memorandum SRM 68-2, U.S. Naval Personnel Research Activity; Bureau of Naval Personnel: Millington, TN, USA, 1967. [Google Scholar]
  20. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. Ser. B 1977, 39, 1–38. [Google Scholar] [CrossRef]
  21. Bermúdez, L.; Karlis, D.; Santolino, M. A finite mixture of multiple discrete distributions for modelling heaped count data. Comput. Stat. Data Anal. 2017, 112, 14–23. [Google Scholar] [CrossRef]
  22. Diebolt, J.; Robert, C.P. Estimation of finite mixture distributions through Bayesian sampling. J. R. Statist. Soc. Ser. B (Methodol.) 1994, 56, 363–375. [Google Scholar] [CrossRef]
  23. Roberts, S.J.; Rezek, I. A Bayesian approach to Gaussian mixture modeling. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1133–1142. [Google Scholar] [CrossRef]
  24. Tsionas, E.G. Bayesian inference for multivariate Gamma distributions. Stat. Comput. 2004, 14, 223–233. [Google Scholar] [CrossRef]
  25. Bouguila, N.; Ziou, D.; Monga, E. Practical Bayesian estimation of a finite beta mixture through Gibbs sampling and its applications. Stat. Comput. 2006, 16, 215–225. [Google Scholar] [CrossRef]
  26. Everitt, B.S. An introduction to finite mixture distributions. Stat. Methods Med. Res. 1996, 5, 107–127. [Google Scholar] [CrossRef]
  27. Elmahdy, E.E.; Aboutahoun, A.W. A new approach for parameter estimation of finite Weibull mixture distributions for reliability modeling. Appl. Math. Modell. 2013, 37, 1800–1810. [Google Scholar] [CrossRef]
  28. Ickowicz, A.; Sparks, R. Modelling hospital length of stay using convolutive mixtures distributions. Stat. Med. 2017, 36, 122–135. [Google Scholar] [CrossRef]
  29. Mohammadi, A.; Salehi-Rad, M.R. Bayesian inference and prediction in an M/G/1 with optional second service. Commun. Stat.-Simul. Comput. 2012, 41, 419–435. [Google Scholar] [CrossRef]
  30. Mohammadi, A.; Salehi-Rad, M.R.; Wit, E.C. Using mixture of Gamma distributions for Bayesian analysis in an M/G/1 queue with optional second service. Comput. Stat. 2013, 28, 683–700. [Google Scholar] [CrossRef]
  31. Marchal, W.G. Some simpler bounds on the mean queuing time. Oper. Res. 1978, 26, 1083–1088. [Google Scholar] [CrossRef]
  32. Whitt, W. Approximations for the GI/G/M queue. Prod. Oper. Manag. 1993, 2, 114–161. [Google Scholar] [CrossRef]
  33. Van Hoorn, M.; Tijms, H. Approximations for the waiting time distribution of the M/G/c queue. Perform. Eval. 1982, 2, 22–28. [Google Scholar] [CrossRef]
  34. Ma, B.N.W.; Mark, J.W. Approximation of the mean queue length of an M/G/c queueing system. Oper. Res. 1995, 43, 158–165. [Google Scholar] [CrossRef]
  35. Morozov, E.; Pagano, M.; Peshkova, I.; Rumyantsev, A. Sensitivity analysis and simulation of a multiserver queueing system with mixed service time distribution. Mathematics 2020, 8, 1277. [Google Scholar] [CrossRef]
  36. Ghojogh, B.; Ghojogh, A.; Crowley, M.; Karray, F. Fitting a mixture distribution to data: Tutorial. arXiv 2019, arXiv:1901.06708. [Google Scholar]
  37. Sun, T.; Wen, Y.; Zhang, X.; Jia, B.; Zhou, M. Gaussian mixture model for marine reverberations. Appl. Sci. 2023, 13, 12063. [Google Scholar] [CrossRef]
  38. Almhana, J.; Liu, Z.; Choulakian, V.; McGorman, R. A recursive algorithm for gamma mixture models. In Proceedings of the 2006 IEEE International Conference on Communications, Istanbul, Turkey, 11–15 June 2006; IEEE: New York, NY, USA, 2006; Volume 1, pp. 197–202. [Google Scholar]
  39. Ma, Z.; Leijon, A. Beta mixture models and the application to image classification. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; IEEE: New York, NY, USA, 2009; pp. 2045–2048. [Google Scholar]
  40. Trianasari, N.; Sumertajaya, I.; Mangku, I.W. Application of beta mixture distribution in data on GPA proportion and course scores at the MBTI Telkom University. Commun. Math. Biol. Neurosci. 2021, 2021, 44. [Google Scholar] [CrossRef]
  41. Kingman, J.F.C. Some inequalities for the queue GI/G/1. Biometrika 1962, 49, 315–324. [Google Scholar] [CrossRef]
  42. Shortle, J.F.; Thompson, J.M.; Gross, D.; Harris, C.M. Fundamentals of Queueing Theory; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
  43. Kelton, W.D.; Sadowski, R.P.; Swets, N.B. Simulation with Arena, 6th ed.; McGraw-Hill Education: Columbus, OH, USA, 2015. [Google Scholar]
  44. Law, A.M.; Kelton, W.D. Simulation Modeling and Analysis, 3rd ed.; McGraw-Hill: New York, NY, USA, 2007. [Google Scholar]
Figure 1. Flowchart of the simulation model.
Figure 2. Deviation between simulation and approximation for W under Gaussian mixture.
Figure 3. Deviation between simulation and approximation for W under Gamma mixture.
Figure 4. Deviation between simulation and approximation for W under Beta mixture.
Figure 5. Approximation of W_q under Gaussian mixture with lower and upper bounds.
Figure 6. Approximation of W_q under Gamma mixture with lower and upper bounds.
Figure 7. Approximation of W_q under Beta mixture with lower and upper bounds.
Figure 8. Simulation of W_q under Gaussian mixture with lower and upper bounds.
Figure 9. Simulation of W_q under Gamma mixture with lower and upper bounds.
Figure 10. Simulation of W_q under Beta mixture with lower and upper bounds.
Table 1. M/G/1 approximations under Gaussian, Gamma, and Beta mixture distributions.
Gaussian:
L = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)} + \lambda \sum_{j=1}^{k} \pi_j \mu_j
L_q = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)}
W = \frac{\lambda \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)} + \sum_{j=1}^{k} \pi_j \mu_j
W_q = \frac{\lambda \sum_{j=1}^{k} \pi_j (\mu_j^2 + \sigma_j^2)}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \mu_j\right)}
Gamma:
L = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)} + \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j
L_q = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)}
W = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)} + \sum_{j=1}^{k} \pi_j \alpha_j \beta_j
W_q = \frac{\lambda \sum_{j=1}^{k} \pi_j (\beta_j + 1)\alpha_j^2 \beta_j}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \alpha_j \beta_j\right)}
Beta:
L = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)} + \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}
L_q = \frac{\lambda^2 \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)}
W = \frac{\lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)} + \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}
W_q = \frac{\lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j(\alpha_j + 1)}{(\alpha_j + \beta_j)(\alpha_j + \beta_j + 1)}}{2\left(1 - \lambda \sum_{j=1}^{k} \pi_j \frac{\alpha_j}{\alpha_j + \beta_j}\right)}
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
