Article

LINEX Loss-Based Estimation of Expected Arrival Time of Next Event from HPP and NHPP Processes Past Truncated Time

by
M. S. Aminzadeh
Department of Mathematics, Towson University, Towson, MD 21252, USA
Analytics 2025, 4(3), 20; https://doi.org/10.3390/analytics4030020
Submission received: 17 July 2025 / Revised: 18 August 2025 / Accepted: 20 August 2025 / Published: 26 August 2025

Abstract

This article introduces a computational tool for Bayesian estimation of the expected time until the next event occurs in both homogeneous Poisson processes (HPPs) and non-homogeneous Poisson processes (NHPPs), following a truncated time. The estimation utilizes the linear exponential (LINEX) asymmetric loss function and incorporates both gamma and non-informative priors. Furthermore, it presents a minimax-type criterion to ascertain the optimal sample size required to achieve a specified percentage reduction in posterior risk. Simulation studies indicate that estimators employing gamma priors for both HPP and NHPP demonstrate greater accuracy compared to those based on non-informative priors and maximum likelihood estimates (MLE), provided that the proposed data-driven method for selecting hyperparameters is applied.

1. Introduction

Let {N(t), t ≥ 0} be a counting process, where N(t) denotes the number of events that occur by time t. Let X_n denote the time between the (n−1)th and nth events of the process, n ≥ 1. Let S_{N(τ)} denote the time of the last arrival before a truncated time τ and S_{N(τ)+1} be the time of the first arrival after the time τ. If {X_1, X_2, …} is a sequence of independent and identically distributed (iid) interarrival times of the events, then the process {N(t), t ≥ 0} is called a renewal process. One of the measures associated with a renewal process is the expected time of the next arrival, E[S_{N(τ)+1}], after a specified time τ. Note that S_{N(t)} ≤ t. Ref. [1] provides the well-known result E[S_{N(τ)+1}] = μ(m(τ) + 1), where μ is the mean of the interarrival distribution and m(τ) = E[N(τ)] is the renewal function. The method proposed in this article can be applied to any renewal process as long as μ and m(τ) can be expressed as closed-form functions of the parameter(s) of the renewal process. The estimation of m(t) has been considered in the literature. For instance, [2] utilizes numerical methods based on an MCMC algorithm for estimating m(t) when interarrival times follow a Pareto distribution.
The HPP is widely used in various fields of probability and applied statistics. Ref. [3] applied an HPP to model interarrivals of earthquakes. Ref. [4] provided a Bayesian approach to estimating the Poisson intensity function based on the Haar wavelet transform. The NHPP is utilized in practice when the intensity function is not constant, unlike in the HPP case. The NHPP model has numerous real-world applications. In the insurance industry, the occurrence of insurance claims can be effectively modeled using an NHPP. For example, the rate of car accident claims may increase during peak travel seasons or bad weather conditions. In financial market modeling, the arrival rate of transactions often exhibits time-varying characteristics, and an NHPP can be used to model these transactions. In software development, the NHPP is a possible model for the occurrence of software defects during testing; as testing progresses, the rate at which new defects are uncovered may decrease. Many authors have considered the power law (PL) intensity function for the NHPP in the literature. Ref. [5] assumed that the failures of a system follow an NHPP with the PL intensity function and noted that the NHPP is a model commonly used to describe a system with minimal repairs. In the minimal repair case, it is assumed that the system is restored to its state immediately before the failure; that is, the repair does not improve the system's inherent reliability. Ref. [6] considered NHPPs with linear and PL intensity functions to analyze COVID-19 cases. For an NHPP, the rate of events (failures of a system) is not constant over time; it can increase or decrease, reflecting reliability growth or deterioration. Therefore, the occurrence rate is time-dependent. Under the PL intensity function, the failure intensity, or rate of failures per unit time, is modeled as a power function of time. The PL intensity is useful for systems undergoing continuous improvement or deterioration, where the reliability is not static. The PL mean-value function Λ(t) and its intensity function λ(t) = dΛ(t)/dt are defined, respectively, as
$$\Lambda(t) = \left(\frac{t}{\theta}\right)^{\beta}, \qquad \lambda(t) = \frac{\beta}{t}\left(\frac{t}{\theta}\right)^{\beta},$$
where θ and β are scale and shape parameters, respectively.
Figure 1 reveals that for β < 1 , the intensity function decreases, indicating that the occurrence rate of events diminishes. In the context of a repairable system, this suggests that minimal repairs are sufficient. Conversely, for β > 1 , the intensity function increases, indicating that failure rates rise.
The LINEX loss function has many advantages over other common loss functions, such as mean squared error, mean absolute error, and binary cross-entropy, to name a few. The LINEX loss function assigns different penalties depending on whether the error is an overestimation or an underestimation, and the degree and direction of asymmetry can be controlled by choosing its parameter value (a positive value implies overestimation is more serious, and a negative value implies underestimation has a greater cost). The continuity and differentiability of the LINEX loss enable mathematical optimization to find the optimal Bayes estimate, and it converges to the quadratic loss function as its parameter approaches zero. Ref. [7] used the loss function to predict lower bounds for the number of points in regression models; in that setting, the loss penalizes underestimation at an exponential rate while penalizing overestimation only linearly. Ref. [8] considered the loss function to determine optimum process parameters for product quality. Ref. [9] utilized the loss function with a variable parameter and considered several distributions for the parameter to find Bayes risks. Ref. [10] applied the loss function to several resampling methods.
The main goal of this article is to provide novel Bayes estimators for the expected arrival time of the next event past a truncated time, E[S_{N(τ)+1}], utilizing the LINEX loss function for two cases: 1. {N(t), t ≥ 0} is assumed to be an HPP with rate λ; 2. {N(t), t ≥ 0} is assumed to be an NHPP with an intensity function λ(t). For the HPP case, Laplace's method for a twice-differentiable function is applied to obtain the Bayes estimator using gamma and non-informative priors. For the NHPP case, Bayes estimators based on gamma priors for the parameters of the PL intensity, as well as the Jeffreys prior, are used to obtain Bayesian estimates. Additionally, the article provides closed formulas for the posterior risk based on informative and non-informative priors for both the HPP and the NHPP. It proposes a criterion based on a minimax-type method to determine the optimal sample size that satisfies a specified percentage decrease in posterior risk.
The remainder of this article is organized as follows: Section 2 consists of three subsections. It provides two Bayes estimators, via gamma priors and the Jeffreys prior, for the expected time of arrival of the next event past a truncated time, assuming an HPP for the arrival times of events. Section 3 has three subsections. In this section, the NHPP is considered utilizing the PL intensity function; the computation of the Bayes estimates via gamma priors, as well as a non-informative prior, is discussed, and simulation results are presented. Section 4 has two subsections. In this section, a minimax-type algorithm is presented to compute the optimal sample size that meets a specified percentage-decrease threshold. In Section 5, a numerical example is presented to illustrate the computations involved in the goodness-of-fit of data to an NHPP and the optimal sample size.

2. Homogeneous Poisson Process

For the HPP with rate λ, it is a well-known fact that X_i ~ Exponential(λ), i = 1, 2, …, with the probability density function (pdf) f(x; λ) = λe^{−λx}. It can be shown that m(τ) = λτ and μ = 1/λ; as a result, E[S_{N(τ)+1}] = τ + 1/λ. Of course, this result can also be argued via the memoryless property of the Exponential distribution. Since the truncated time τ is given in advance, we concentrate on deriving the Bayes estimator of the mean interarrival time θ = 1/λ based on a random sample x_1, …, x_n. Let v = θ̂ − θ denote the error of estimating a parameter θ with an estimator θ̂. The LINEX loss function is defined as
$$L(v) = q\left[e^{\omega v} - \omega v - 1\right], \qquad \omega \neq 0, \; q > 0,$$
where ω and q are the shape and scale parameters, respectively, of the loss function. Without loss of generality, we let q = 1 . The shape parameter ω can take negative and positive values.
Figure 2 is the graph of the LINEX loss function, as defined above. Figure 2 reveals that for both ω > 0 and ω < 0, the loss function is asymmetric. For ω > 0, overestimation (v > 0) is penalized more severely than underestimation (v < 0): the loss increases exponentially for v > 0 and only linearly for v < 0. For example, LINEX(v = 2, ω = 0.8) = 2.35, while LINEX(v = −2, ω = 0.8) = 0.801. On the other hand, when ω < 0, the loss function is almost linear for v > 0 and almost exponential for v < 0. Additionally, when |ω| is very small, the loss function is nearly symmetric and resembles the squared-error loss function. It is worth mentioning that the squared-error loss treats overestimation and underestimation equally, penalizing both with the square of the error, whereas LINEX loss introduces asymmetry, allowing different penalties for overestimation and underestimation, controlled by the parameter ω.
The following Mathematica code provides a graph of the LINEX loss function for selected values of the shape parameter ω in Figure 2.
  • w1 = 0.8; w2 = -0.8; w3 = 0.2; w4 = -0.09; Plot[{Exp[v w1] - v w1 - 1, Exp[v w2] - v w2 - 1, Exp[v w3] - v w3 - 1, Exp[v w4] - v w4 - 1}, {v, -4, 4}, PlotStyle -> {{Red, Dashed}, Blue, Green, Brown}, PlotLabels -> {"w = 0.8", "w = -0.8", "w = 0.2", "w = -0.09"}]
Ref. [11] provides the general expression for the optimal Bayes estimator of θ with respect to the LINEX loss function as
$$\hat{\theta}_{Bayes} = \frac{-\ln[M_\theta(\omega)]}{\omega}, \qquad (1)$$
where M_θ(ω) = E_θ[e^{−ωθ}] is the moment generating function of the posterior distribution of θ evaluated at −ω, and E_θ[·] denotes the expected value with respect to the posterior of θ.
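As a quick illustration of (1), suppose the posterior of θ is a gamma distribution with shape a0 and rate b0 (both values below are hypothetical); then M_θ(ω) = E_θ[e^{−ωθ}] = (b0/(b0 + ω))^{a0}, and the estimate follows directly. A minimal Mathematica sketch:

a0 = 3; b0 = 2;
M[w_] := (b0/(b0 + w))^a0;       (* E[Exp[-w θ]] under the gamma posterior *)
linexEst[w_] := -Log[M[w]]/w;    (* LINEX Bayes estimate, formula (1) *)
{linexEst[0.5], linexEst[0.01], a0/b0}
(* -> {1.3389, 1.4963, 3/2}: as ω -> 0 the estimate approaches the posterior mean *)

For ω > 0 the estimate is pulled below the posterior mean, reflecting the heavier penalty on overestimation.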

2.1. Bayes Estimator Based on a Gamma Prior

The gamma distribution, as a prior distribution in Bayesian analysis, has many advantages. It is a natural and appropriate choice for modeling non-negative parameters such as variances, rates, and scale parameters. The distribution serves as a conjugate prior for several important likelihood functions, and by adjusting its parameters, the gamma distribution can adopt a variety of shapes, allowing for a flexible representation of prior beliefs. In this article, the gamma distribution is utilized for the parameters of HPP and NHPP. According to Laplace’s method, for a twice-differentiable function g(x), which has a unique global maximum at x 0 and for a large number n,
$$\int_a^b e^{n g(x)}\, dx \approx \sqrt{\frac{2\pi}{n\,|g''(x_0)|}}\; e^{n g(x_0)}. \qquad (2)$$
Assume θ is a parameter with the prior distribution ν(θ) and let L(θ; x̲) denote the likelihood function. The moment generating function (mgf) for the posterior distribution of θ is
$$M_\theta(\omega) = \frac{\int e^{-\omega\theta}\, \nu(\theta)\, L(\theta; \underline{x})\, d\theta}{\int \nu(\theta)\, L(\theta; \underline{x})\, d\theta}, \qquad (3)$$
where x̲ = (x_1, x_2, …, x_n) is a random sample of interarrivals from the Exponential distribution. From (1), the LINEX Bayes estimator of θ is
$$\frac{-\ln[M_\theta(\omega)]}{\omega}.$$
Based on a sample x̲ = (x_1, x_2, …, x_n) of interarrival times from an HPP, the likelihood function is L(λ; x̲) = f(x̲; λ) = λ^n e^{−λs}, where s = Σ_{i=1}^n x_i. First, we consider a gamma prior for λ:
$$\nu_1(\lambda) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, \lambda^{\alpha-1}\, e^{-\beta\lambda}, \qquad \alpha > 0, \; \beta > 0.$$
It can be shown that the posterior pdf for λ can be written as
$$\nu_1(\lambda; \underline{x}) = \frac{L(\lambda)\,\nu_1(\lambda)}{\int_0^\infty L(\lambda)\,\nu_1(\lambda)\, d\lambda} = \frac{\lambda^{n+\alpha-1}\, e^{-\lambda(s+\beta)}\, (\beta+s)^{n+\alpha}}{\Gamma(n+\alpha)}, \qquad (4)$$
which is the pdf of gamma ( n + α , s + β ) . Therefore, the posterior distribution for θ = 1 / λ (which is the main parameter of interest), is inverse-gamma with the pdf
$$\nu_1(\theta; \underline{x}) = \frac{\theta^{-(n+\alpha+1)}\, e^{-(s+\beta)/\theta}\, (\beta+s)^{n+\alpha}}{\Gamma(n+\alpha)}. \qquad (5)$$
Bayesian analysis assumes that some information is available on the hyperparameters α and β based on the behavior of θ . However, most of the time, such information is unavailable, and one can use a data-driven approach. This approach was implemented in ref. [2] and many other articles by the author. In the context of the present article, the MLE θ ^ m l e is used to choose the hyperparameter values. Simulation studies in Section 2.3 indicate that for a smaller sample size, the Bayes estimator of θ ^ B a y e s outperforms θ ^ m l e in terms of accuracy as measured by the square root of average squared error (SASE). However, for large samples, the accuracy of MLE is comparable to that of the Bayes estimator. The numerical computations summarized in Table 1, Table 2 and Table 3 confirm the findings. Since
θ Inverse-Gamma ( α , β ) ,
we have,
$$E[\theta] = \frac{\beta}{\alpha-1}, \qquad Var(\theta) = \frac{\beta^2}{(\alpha-2)(\alpha-1)^2}, \qquad \alpha > 2.$$
In practice, where only one sample is available, the idea is to choose a large value for α to minimize the variance of the prior distribution, and then solve θ̂_mle = β/(α − 1) for β. However, for the simulation studies, using selected "true" values α_t and θ_t, we let β = θ_t(α_t − 1). The idea is to utilize the MLE and ensure the relation between the hyperparameters is consistent with the information contained in the MLE. Simulations confirm that, using this approach, the gamma-based Bayes estimator outperforms both the ML-based and Jeffreys-based estimators in terms of accuracy.
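A minimal Mathematica sketch of this data-driven rule on a simulated sample (all settings illustrative):

SeedRandom[1];
x = RandomVariate[ExponentialDistribution[5], 30];  (* λ = 5, so θ = 0.2 *)
θmle = Mean[x];          (* the MLE of θ = 1/λ is the sample mean s/n *)
α = 50;                  (* a large α keeps the prior variance small *)
β = θmle (α - 1);        (* so that the prior mean β/(α - 1) matches the MLE *)
{θmle, β}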
The closed formula for the mgf of the posterior (5) can be written as
$$M_\theta(\omega) = \frac{2\,[(\beta+s)\,\omega]^{0.5(n+\alpha)}\; K_{n+\alpha}\!\left(\sqrt{4\omega(\beta+s)}\right)}{\Gamma(n+\alpha)},$$
where
$$K_c(x) = 0.5\, e^{0.5 c \pi i} \int_{-\infty}^{\infty} e^{-i x \sinh(v) - c v}\, dv$$
is the modified Bessel function of the second kind (BesselK). The mgf can also be approximated by a closed form that does not involve the BesselK function. From (5), we get
$$M_\theta(\omega) = \frac{(\beta+s)^{n+\alpha}}{\Gamma(n+\alpha)} \int_0^\infty e^{n g_1(\theta)}\, d\theta,$$
where g_1(θ) = −(1/n)[ωθ + (s+β)/θ + (n+α+1)ln(θ)]. It can be shown that g_1(θ) has its global maximum at
$$k_1 = \frac{-(n+\alpha+1) + \sqrt{(n+\alpha+1)^2 + 4\omega(s+\beta)}}{2\omega}.$$
From (2), an approximate expression for the posterior mgf under the gamma prior for θ, after many algebraic manipulations, can be written as
$$M_\theta(\omega) \approx \frac{\sqrt{2\pi/n}\;(s+\beta)^{n+\alpha}\; k_1^{-(n+\alpha+1)}\; e^{-\omega k_1 - (s+\beta)/k_1}}{\sqrt{\dfrac{-(n+\alpha+1)\,k_1 + 2(s+\beta)}{n\, k_1^{3}}}\;\;\Gamma(n+\alpha)}. \qquad (6)$$
As a result, the Bayes estimator is θ̂_Bayes(g) = −ln[M_θ(ω)]/ω.
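The following Mathematica sketch compares the exact BesselK form of M_θ(ω) with the Laplace approximation (6) on a simulated sample; all settings are illustrative:

SeedRandom[2];
x = RandomVariate[ExponentialDistribution[5], 30];
s = Total[x]; n = Length[x]; α = 50; β = Mean[x] (α - 1); w = 0.4;
Mexact = 2 ((β + s) w)^(0.5 (n + α)) BesselK[n + α, Sqrt[4 w (β + s)]]/Gamma[n + α];
k1 = (-(n + α + 1) + Sqrt[(n + α + 1)^2 + 4 w (s + β)])/(2 w);
Mapprox = Sqrt[2 Pi/n] (s + β)^(n + α) k1^(-(n + α + 1)) Exp[-w k1 - (s + β)/k1]/
   (Sqrt[(-(n + α + 1) k1 + 2 (s + β))/(n k1^3)] Gamma[n + α]);
{-Log[Mexact]/w, -Log[Mapprox]/w}   (* exact vs. approximate Bayes estimates *)

For moderate n the two estimates are expected to agree closely, which is why the closed-form approximation (6) is used in the simulations.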

2.2. Bayes Estimator Using Jeffreys Prior

Often, the true values of the hyperparameters of a prior distribution are unknown to practitioners, or they are difficult to guess based on the behavior of the parameter of interest. Sometimes, even the prior distribution itself cannot be guessed. In this section, the Jeffreys prior is considered to obtain a closed expression for the Bayes estimator. Using a sample x̲ = (x_1, x_2, …, x_n) from an Exponential(λ) distribution, the likelihood function is L(λ; x̲) = λ^n e^{−λs}, and the Jeffreys prior is
$$\nu_2(\lambda) \propto \sqrt{I(\lambda)} = \lambda,$$
where I ( λ ) is the Fisher information. The posterior pdf for λ is expressed as
$$\nu_2(\lambda; \underline{x}) = \frac{s^{n+2}\, \lambda^{n+1}\, e^{-\lambda s}}{\Gamma(n+2)}, \qquad (7)$$
which is the pdf of gamma(n+2, s). From (7), the posterior pdf for θ = 1/λ (the main parameter of interest) reduces to
$$\nu_2(\theta; \underline{x}) = \frac{s^{n+2}\, \theta^{-(n+3)}\, e^{-s/\theta}}{\Gamma(n+2)}.$$
As a result, the mgf for θ in this case is
$$M_\theta(\omega) = \frac{2\,(s\omega)^{0.5(n+2)}\; K_{n+2}\!\left(\sqrt{4\omega s}\right)}{\Gamma(n+2)}$$
and can be approximated with
$$\frac{s^{n+2}}{\Gamma(n+2)} \int_0^\infty e^{n g_2(\theta)}\, d\theta, \qquad (8)$$
where
$$g_2(\theta) = -(1/n)\left[\omega\theta + (n+3)\ln(\theta) + s/\theta\right],$$
with the global maximum at k_2 = [−(n+3) + √((n+3)² + 4ωs)]/(2ω). From (2) and (8), the mgf simplifies to
$$M_\theta(\omega) \approx \frac{\sqrt{2\pi k_2^{3}}\;\, s^{n+2}\; k_2^{-(n+3)}\; e^{-\omega k_2 - s/k_2}}{\sqrt{-(n+3)\,k_2 + 2s}\;\;\Gamma(n+2)}, \qquad (9)$$
and then we compute θ̂_Bayes(J) = −ln[M_θ(ω)]/ω.

2.3. Simulation Results for HPP

This section summarizes the simulation results used to assess the accuracy of the estimators obtained in Section 2.1 and Section 2.2. It is worth mentioning that the mgf E_θ[e^{ωθ}], ω > 0, of the inverse-gamma distribution is not defined; however, M_θ(ω) = E_θ[e^{−ωθ}] is defined for ω > 0. This is the main reason that in Table 1, Table 2 and Table 3 (Section 2.3), only a few selected positive values of ω are considered. Therefore, with this limitation, we are considering the cases where the loss function is asymmetric, with overestimation being more severe than underestimation. For example, in the design of a safety-critical system, overestimating might result in unnecessary maintenance. For selected values of the input variables ω, n, α, and β = θ(α − 1) (see the discussion on choosing the hyperparameter values in Section 2.1), the simulation code generates N = 2000 samples from the Exponential distribution with parameter λ. For every sample of size n, the approximate values of −ln[M_θ(ω)]/ω using (6) and (9) are found. The code also computes the ML estimate of θ. For each set of N simulated samples, the average of the estimates and the SASE, ξ = √(Σ_{i=1}^{N}(θ̂_i − θ)²/N), are computed. The reason for using the SASE as opposed to the ASE, Σ_{i=1}^{N}(θ̂_i − θ)²/N, is that the SASE resembles the standard deviation of a random variable, which is commonly used to compare variability; having one, the other can always be obtained. Table 1 lists the averages and, in parentheses, the values of ξ. θ̂_Bayes(g) and θ̂_Bayes(J) denote Bayes estimates under the gamma and Jeffreys priors, respectively. Table 1 reveals that as n increases, the SASE decreases for all three estimators, regardless of the selected values of α, λ, n, and ω. θ̂_Bayes(g) outperforms θ̂_Bayes(J) in accuracy, as the SASE for θ̂_Bayes(g) (in boldface) is smaller. Also, θ̂_Bayes(J) outperforms the MLE θ̂_mle = s/n, where s = Σ_{i=1}^{n} x_i.
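A compact Mathematica sketch of one simulation cell (only the MLE is shown; the Bayes estimates based on (6) and (9) drop into the same Table loop), with illustrative settings:

λ = 5; θ = 1/λ; n = 10; Nrep = 2000;
ests = Table[Mean[RandomVariate[ExponentialDistribution[λ], n]], {Nrep}];
{Mean[ests], Sqrt[Mean[(ests - θ)^2]]}   (* average estimate and SASE ξ *)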
It is worth mentioning that θ = 1/λ is the mean time between events under the HPP, and that the estimator of θ, not that of λ, is of primary interest. Note that the MLEs of θ and λ are functionally related. Although selected values of λ are shown in the first row of Table 1, Table 2 and Table 3, the mean and SASE for θ̂_mle, θ̂_Bayes(g), and θ̂_Bayes(J) are provided in the main body of the tables. The SASE numbers in boldface in Table 1, Table 2 and Table 3 represent the smallest SASE values for the selected values of the LINEX parameter ω, n, λ, and α.
Table 3 lists four different selections of the parameters (α, λ). For each selection, two values of β are listed. For instance, in the first row of the first selection, (α = 5, β = 3, λ = 5), β is not derived using the relation β = θ(α − 1). In contrast, the second row, (α = 5, β = 0.8, λ = 5), follows the relation described in Section 2.1 for the hyperparameter β. We observe that with the second option, the estimator θ̂_Bayes(g) produces the smallest SASE in all four cases. However, when the hyperparameters are selected arbitrarily, as in the first row of each case, the estimator θ̂_Bayes(J) has a slightly smaller or the same SASE as the MLE, and θ̂_Bayes(g) is no longer the best estimator. This confirms that the proposed selection of the hyperparameter values is important.

3. Non-Homogeneous Poisson Process (NHPP)

This section considers the NHPP with the PL intensity function. As mentioned earlier, the PL intensity function has many practical applications, such as in repairable systems.

3.1. Expected Time of Next Arrival After a Truncated Time

A few results by the author from [12] are used in this section.
Let {N(t), t ≥ 0} be an NHPP with an intensity function λ(t) and mean-value function E[N(τ)] = Λ(τ) = ∫₀^τ λ(t) dt, where τ is a predetermined truncated time; it can be shown that
$$E[S_{N(\tau)+1}] = e^{\Lambda(\tau)} \int_{\tau}^{\infty} t\, \lambda(t)\, e^{-\Lambda(t)}\, dt.$$
For a sequence of successive arrival (event) times t_1 < t_2 < ⋯ < t_n < τ, where t_i is the ith event time, using the time-truncation sampling scheme, the likelihood function reduces to
$$l(t_1, \ldots, t_n, \tau) = e^{-\Lambda(t_n)} \left( \prod_{i=1}^{n} \lambda(t_i) \right) e^{-[\Lambda(\tau) - \Lambda(t_n)]} = e^{-\Lambda(\tau)} \prod_{i=1}^{n} \lambda(t_i),$$
and based on the PL mean-value function
$$\Lambda(\tau) = E[N(\tau)] = \left(\frac{\tau}{\theta}\right)^{\beta},$$
the MLEs of the parameters are obtained as β̂ = n/Σ_{i=1}^{n} ln(τ/t_i) and θ̂ = τ n^{−1/β̂}.
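In Mathematica, these MLEs are computed directly from the observed event times (the event times and τ below are illustrative):

t = {0.8, 1.9, 3.1, 4.6, 7.2, 9.5}; τ = 10.;
n = Length[t];
βhat = n/Total[Log[τ/t]];    (* Log threads over the list τ/t *)
θhat = τ n^(-1/βhat);
{βhat, θhat}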
Letting L = Λ ( τ ) and using the following gamma priors for L and β :
$$\pi(L) = \frac{L^{a-1}\, e^{-L/b}}{b^{a}\, \Gamma(a)}, \qquad L > 0, \; a > 0, \; b > 0,$$
$$\pi(\beta) = \frac{\beta^{c-1}\, e^{-\beta/d}}{d^{c}\, \Gamma(c)}, \qquad \beta > 0, \; c > 0, \; d > 0,$$
the joint posterior pdf of ( L , β ) conditional on t ̲ and τ reduces to
$$\pi(L, \beta \,|\, \underline{t}, \tau) \propto \Gamma\!\left(\beta \,\Big|\, n+c,\, D\right)\, \Gamma\!\left(L \,\Big|\, n+a,\, \frac{b}{1+b}\right), \qquad (10)$$
where D = d/(1 + d Σ_{i=1}^{n} ln(τ/t_i)) and Γ(X | A, B) is the pdf of a gamma random variable X with shape A and scale B. The form of π(L, β | t̲, τ) implies that, conditional on t̲ and τ, β and L are independent. It is noted in simulation studies that the hyperparameter d should be selected carefully so that D in the posterior pdf Γ(β | n+c, D) is not a negative number. In general, small values of d, less than 1, do not cause any issues.
Consider the non-informative joint prior π(θ, β) ∝ (θβ)^{−1}, which was also used in [12]. Based on the likelihood function, the non-informative prior, and β̂ = n/Σ_{i=1}^{n} ln(τ/t_i), it can be shown that the posterior pdf reduces to
$$\pi(L, \beta^* \,|\, \underline{t}, \tau) \propto \frac{(\beta^*)^{0.5p - 1}\, e^{-0.5\beta^*}}{2^{0.5p}\, \Gamma(0.5p)} \times \frac{e^{-L}\, L^{n-1}}{\Gamma(n)}. \qquad (11)$$
The first term in the product on the right-hand side of (11) is the pdf of the scaled chi-squared random variable β* = (2n/β̂)β with p = 2(n−1) degrees of freedom, and the second term is the pdf of gamma(n, 1). Therefore, the joint posterior is proportional to the product of the marginal posteriors of L and β*. As a result, conditional on t̲ and τ, it is concluded that β* and L are independent.

3.2. LINEX-Based Estimator of M via Gamma and Non-Informative Priors

We first consider an informative estimator for M using gamma priors. Recall (10) and the fact that, conditional on (t_1, t_2, …, t_n, τ), the variables L and β are independent. The LINEX estimator is defined as
$$\hat{M}_{Bayes(g)} = \frac{-\ln\left[E_M\!\left(e^{-\omega M}\right)\right]}{\omega}.$$
It can be shown that under the PL intensity function, we get
$$M = E[S_{N(\tau)+1}] = \tau\, L^{-1/\beta}\, e^{L}\, \Gamma(1 + 1/\beta,\, L),$$
where Γ(1 + 1/β, L) is the upper incomplete gamma function.
However, E_M(e^{−ωM}), on which the above LINEX estimator depends and which is defined as
$$E_M\!\left(e^{-\omega M}\right) = \int_0^\infty \!\! \int_0^\infty e^{-\omega M}\; \Gamma\!\left(\beta \,\Big|\, n+c,\, D\right)\, \Gamma\!\left(L \,\Big|\, n+a,\, \frac{b}{1+b}\right) d\beta\; dL, \qquad (12)$$
is intractable to derive as a closed formula. However, given the hyperparameters a, b, c, d and the quantities n, D, a numerical integration method in Mathematica evaluates its value for the selected values of τ and ω.
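A sketch of this evaluation in Mathematica is given below; the values of n, the posterior scale D (written DD, since D is reserved in Mathematica), the hyperparameters, τ, and ω are placeholders. Gamma[1 + 1/β, L] is Mathematica's upper incomplete gamma function Γ(1 + 1/β, L).

n = 20; a = 2.; b = 1.2; c = 1.; DD = 0.04; τ = 10.; ω = 0.5;
m[L_, β_] := τ L^(-1/β) Exp[L] Gamma[1 + 1/β, L];   (* M as a function of (L, β) *)
EM = NIntegrate[
   Exp[-ω m[L, β]] PDF[GammaDistribution[n + c, DD], β]*
    PDF[GammaDistribution[n + a, b/(1 + b)], L],
   {β, 0, Infinity}, {L, 0, Infinity}];
MBayes = -Log[EM]/ω    (* the gamma-prior LINEX estimate of M *)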
From (11), the Bayes estimator for M based on the non-informative prior is given by
$$\hat{M}_{Bayes(\text{non-info})} = \frac{-\ln\left[E_M\!\left(e^{-\omega M}\right)\right]}{\omega}, \qquad \text{where}$$
$$E_M\!\left(e^{-\omega M}\right) = \int_0^\infty \!\! \int_0^\infty e^{-\omega M}\; \Gamma\!\left(\beta^* \,\Big|\, 0.5p,\, 2\right)\, \Gamma(L \,|\, n,\, 1)\; d\beta^*\; dL. \qquad (13)$$
Similar to (12), the double integral (13) can be evaluated by Mathematica using a numerical method.

3.3. Simulation Results for NHPP

The summary of simulation studies for the NHPP case is given in Table 4 and Table 5. Recall that to obtain an accurate Bayes estimator via the gamma prior, in simulation studies, a relatively large value for b and a small value for d (see the discussion in Section 3.1) are selected. Then, values of a and c are found via
$$a = \frac{L_t}{b}, \qquad c = \frac{\beta_t}{d}.$$
The selections ensure
$$E[L] = a\,b, \qquad E[\beta] = c\,d,$$
and that the variances of both prior distributions, for L and β, are small. As a result, the Bayes estimator using the gamma priors becomes more accurate than the ML estimator and the Bayes estimator using the non-informative prior. In a practical case, where only a sample of the arrival times of an NHPP is available, the MLEs β̂ and θ̂ (see Section 3.1) are found, and the b and d values are selected, as discussed. Then, using L̂ = (τ/θ̂)^β̂, we find the hyperparameter values a = L̂/b, c = β̂/d. Recall that a value for the truncated time τ is given in advance.
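A short Mathematica sketch of this selection (event times illustrative):

τ = 10.; b = 4.; d = 0.4;     (* user-chosen: b large, d small *)
t = {0.8, 1.9, 3.1, 4.6, 7.2, 9.5}; n = Length[t];
βhat = n/Total[Log[τ/t]]; θhat = τ n^(-1/βhat);
Lhat = (τ/θhat)^βhat;         (* equals n by construction of the MLEs *)
{a, c} = {Lhat/b, βhat/d}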
For the selected "true" values (θ_t, β_t), hyperparameters (b, d), truncated time τ, and LINEX parameter ω, Table 4 and Table 5 provide simulation summaries. Recall that the hyperparameters (a, c), as discussed earlier in this section, are selected based on (b, d). The mean and, inside parentheses, the SASE of the three estimators of M, ML, Bayes(g), and Bayes(J), are listed in the tables. For each combination of the selected input variables (θ_t, β_t, b, d, ω, τ), N = 300 samples are generated from the NHPP with the power-law intensity. Examination of the tables reveals that regardless of the value of ω, as τ increases (note that this makes the sample size n increase), the SASE for the MLE and Bayes(g) decreases, while the SASE for Bayes(J) increases. For all three selected values ω = (0.1, 0.5, 1.0), when τ = 10, Bayes(J) has a smaller SASE than ML, but Bayes(g) outperforms both estimators. For all selected values of ω, and τ = 50 and 80, the MLE has a smaller SASE than Bayes(J), and Bayes(g) outperforms both the MLE and Bayes(J) in terms of accuracy. The boldface numbers in parentheses in Table 4 and Table 5 represent the smallest SASE among the three estimators. We can see that M̂_Bayes(g), for all selected input values, has the smallest SASE. For a better visual comparison, Figure 3 and Figure 4 use Table 4, and Figure 5 and Figure 6 use Table 5, to provide a graphical presentation of the SASE values. The graphs also confirm that M̂_Bayes(g) outperforms the MLE and M̂_Bayes(J) in all cases with the smallest SASE value.
Table 6 provides a summary of the sensitivity analysis. Four combinations of distributions for the parameters of the PL intensity function under an NHPP are shown in the table. For each combination, 100 samples are generated from the distributions, and the SASEs of the three estimators of M = E[S_{N(τ)+1}] are listed in the table. The numbers in boldface in each row represent the smallest SASE. We note that the gamma-based Bayes estimator has the smallest SASE (highest precision) in all cases across the table. For each combination of the distributions, we also provide M̄_t, the average of the "true" values of M based on the simulated values of θ and β.

4. Optimal Sample Size: A Minimax-Type Approach

In this section, an algorithm is proposed for determining the optimal value of n, given that an HPP or NHPP is intended to be observed for a specified time T. There are many optimal designs for finding τ and n in the Bayesian framework. For instance, [13] provides optimal designs for Bayesian estimation of an HPP rate λ for a set of loss functions; however, LINEX loss was not considered. The article presents two sequential methods and a non-sequential method to find conditions for the optimal stopping time τ and n. The process involves a substantial number of computations. Ref. [14] considered an algorithm to determine the optimal stopping time of an HPP where the cost of observations grows linearly with τ .
In this article, we approach the problem computationally and propose a minimax-type method. Suppose the process is intended to be observed for time T. We are interested in finding the sufficient/optimal number of events n before time T that meets the specified risk level. This would help avoid waiting for all events to occur by time T.
The steps for computing the optimal sample size are similar for HPP and NHPP cases. Code B (see the Supplementary Materials) provides the computation of the optimal sample size based on the gamma priors for the NHPP case using the PL intensity function. The following steps are implemented in Code B.
  • Select a specified percentage-decrease threshold value δ for the risk. In Code B, we used 0.05. Select ω and the maximum intended time T to observe the events. Choose values for the hyperparameters b and d. T must be large enough so that at least the initial arrival times do not exceed T; otherwise, the value of the optimal n would not converge.
  • Using the current arrival times t_1, …, t_n, compute the MLEs β̂ and θ̂ of the parameters of the PL intensity function. These estimates, along with the hyperparameters a and c, must be updated (see Section 3.3) as more data become available.
  • Compute the risks R_g(n, a, b, c, d, T, ω) and R_g(n+1, a, b, c, d, T, ω) using (17) in Section 4.2. Note that to find the latter risk, the MLEs and the hyperparameter values (a, c) must be updated within the while loop.
  • If [R_g(n, a, b, c, d, T, ω) − R_g(n+1, a, b, c, d, T, ω)]/R_g(n, a, b, c, d, T, ω) < δ, the latest value of n is the optimal value based on the specifications; otherwise, in the while loop, increase the value of n by one and repeat Steps 2–4. The code stops when the best value of n is identified.

4.1. The Case of HPP

Since {N(t), t ≥ 0} is an HPP with rate λ, P(N(T) = n) = (T/θ)^n e^{−T/θ}/n!, and as a result, analogous to (5), the posterior distribution of θ is given by
$$\nu_1(\theta; n) = \frac{\theta^{-(n+\alpha+1)}\, e^{-(T+\beta)/\theta}\, (\beta+T)^{n+\alpha}}{\Gamma(n+\alpha)},$$
which is the pdf of the inverse-gamma(n+α, β+T), with E[θ | n] = (β+T)/(n+α−1).
For an estimator W ( x ̲ ) of a parameter θ , based on data x ̲ , the LINEX loss is
$$L(W(\underline{x}) - \theta) = e^{\omega(W(\underline{x}) - \theta)} - \omega\,(W(\underline{x}) - \theta) - 1.$$
The posterior risk of the estimator with respect to a posterior pdf π ( θ | x ̲ ) is defined as
$$R(W(\underline{x}) \,|\, \underline{x}) = e^{\omega W(\underline{x})} \int e^{-\omega\theta}\, \pi(\theta \,|\, \underline{x})\, d\theta - \omega\, W(\underline{x}) + \omega \int \theta\, \pi(\theta \,|\, \underline{x})\, d\theta - 1.$$
Using the above remark, it can be shown that the risk of the Bayes rule θ̂_Bayes(g) = −ln[M_θ(ω)]/ω simplifies to
$$R_g(n, T, \alpha, \beta) = \ln[M_\theta(\omega)] + \frac{\omega\,(\beta+T)}{n+\alpha-1}, \qquad (14)$$
where
$$M_\theta(\omega) = \frac{2\,[(\beta+T)\,\omega]^{0.5(n+\alpha)}\; K_{n+\alpha}\!\left(\sqrt{4\omega(\beta+T)}\right)}{\Gamma(n+\alpha)}.$$
For a given value of T, R g ( n , T , α , β ) is decreasing in n, with the limit approaching zero as n increases. Since the longest time that the process is to be observed is T, the optimal value for n is proposed to be the smallest n for which the percentage decrease in the risk is less than a specified threshold δ . That is
$$\frac{R_g(n, T, \alpha, \beta) - R_g(n+1, T, \alpha, \beta)}{R_g(n, T, \alpha, \beta)} < \delta, \qquad (15)$$
where δ is a small number such as 0.05. The above criterion guarantees that increasing n to n+1 no longer significantly reduces R_g(n, T, α, β). Computationally, it is easy to find the optimal value of n once α, β, T, and δ are given. It can also be shown that for the Jeffreys prior, the risk of the Bayes rule can be expressed as
$$R_J(n, T) = \ln[M_\theta(\omega)] + \frac{\omega\, T}{n+1},$$
where M_θ(ω) = 2(Tω)^{0.5(n+2)} K_{n+2}(√(4ωT))/Γ(n+2). For this case, the optimal value of n can be obtained using the same criterion as in (15).
Given a data set x_1, …, x_N from Exponential(λ), the Mathematica code (A), based on the gamma prior, uses a while loop to find the optimal value for n. We only provide the code for the gamma-prior case, as Bayes(g) outperforms the Bayes(J) estimator regarding accuracy (see Table 1 and Table 2). Note that the values for α, δ, T, and ω should be selected by the user, and that T must be large enough so that Σ_{k=1}^{j} x_k < T for any value of j reached in the loop while condition (15) is being checked. It is worth noting that the computations for the risks in (15) are sequential. That is, the MLE θ̂ is updated as x_j is added to the sample x_1, …, x_j in the while loop. As a result, the risks in (15) are updated in the loop. Table 7 provides simulation results for the average values of the optimal n for selected values of λ, α, δ, and ω for the gamma-prior case.
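A sketch in the spirit of Code A (not the code itself) is shown below. It assumes, per the sequential updating just described, that the running total s_j = Σ_{k≤j} x_k plays the role of T in the risk (14) at each step, and it evaluates ln M_θ(ω) on the log scale to avoid overflow; the data stream and settings are illustrative.

risk[n_, s_, α_, β_, w_] := Log[2] + 0.5 (n + α) Log[(β + s) w] +
    Log[BesselK[n + α, Sqrt[4 w (β + s)]]] - LogGamma[n + α] +
    w (β + s)/(n + α - 1);
SeedRandom[3];
x = RandomVariate[ExponentialDistribution[0.4], 500];   (* θ = 2.5 *)
α = 2; w = 0.4; δ = 0.05; n = 5;
While[
  s1 = Total[x[[1 ;; n]]];     β1 = (s1/n) (α - 1);     (* β from the current MLE *)
  s2 = Total[x[[1 ;; n + 1]]]; β2 = (s2/(n + 1)) (α - 1);
  (risk[n, s1, α, β1, w] - risk[n + 1, s2, α, β2, w])/risk[n, s1, α, β1, w] >= δ,
  n++];
n   (* compare with Table 7: λ = 0.4, α = 2, ω = 0.4, δ = 0.05 gives n ≈ 20 *)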
Table 7 reveals that for the selected λ , α , and ω , a smaller δ leads to a larger n, as expected. The optimal sample size is smaller for a larger value of the hyperparameter α (which provides a more accurate Bayes estimate; see Section 2.3). Also, for any selected values of λ , α , and δ , the three values of ω provide almost similar optimal sample sizes.

4.2. The Case of NHPP

Recall that the gamma-based and non-informative-based Bayes estimators are computed numerically using (12) and (13). As in Section 4.1, the risk of the Bayes rule under the LINEX loss can be written as
$$\ln\left[E_M\!\left(e^{-\omega M}\right)\right] + \omega\, E[M]. \qquad (16)$$
The expected values in (16), for both gamma and non-informative prior cases, are based on the corresponding posterior distributions (10) and (11). Therefore, the risks based on gamma and non-informative priors, respectively, can be computed via
$$R_g(n, a, b, c, d, T, \omega) = \ln\left[\int_0^\infty\!\!\int_0^\infty e^{-\omega M}\, \Gamma\!\left(\beta \,\Big|\, n+c,\, D\right) \Gamma\!\left(L \,\Big|\, n+a,\, \tfrac{b}{1+b}\right) d\beta\, dL\right] + \omega \int_0^\infty\!\!\int_0^\infty M\, \Gamma\!\left(\beta \,\Big|\, n+c,\, D\right) \Gamma\!\left(L \,\Big|\, n+a,\, \tfrac{b}{1+b}\right) d\beta\, dL, \qquad (17)$$
$$R_{\text{Non-info}}(n, T, \omega) = \ln\left[\int_0^\infty\!\!\int_0^\infty e^{-\omega M}\, \Gamma\!\left(\beta^* \,\Big|\, 0.5p,\, 2\right) \Gamma(L \,|\, n,\, 1)\, d\beta^*\, dL\right] + \omega \int_0^\infty\!\!\int_0^\infty M\, \Gamma\!\left(\beta^* \,\Big|\, 0.5p,\, 2\right) \Gamma(L \,|\, n,\, 1)\, d\beta^*\, dL. \qquad (18)$$
Given a data set t_1, …, t_N of arrival times from the NHPP with the power-law intensity, the Mathematica code (B) uses a while loop to find the optimal value for n. We only provide the code for the gamma-prior case, as Bayes(g) outperforms the Bayes(non-info) estimator regarding accuracy (see Table 4 and Table 5). As in Section 4.1, we are interested in the smallest value of n for which
$$\frac{R_g(n, a, b, c, d, T, \omega) - R_g(n+1, a, b, c, d, T, \omega)}{R_g(n, a, b, c, d, T, \omega)} < \delta. \qquad (19)$$
A user should select the values for the hyperparameters b and d, and T must be large enough so that for the sum of the interarrival times we have Σ_{k=0}^{j−1}(t_{k+1} − t_k) = t_j < T, with t_0 = 0, for any value of j reached in the loop while condition (19) is being checked. It is worth noting that the computations for the risks in (19) are sequential. That is, the MLEs
$$\hat{\beta} = \frac{j}{\sum_{k=1}^{j} \ln(T/t_k)}, \qquad \hat{\theta} = T\, j^{-1/\hat{\beta}},$$
from Section 3.1 are updated as t_j is added to the sample t_1, …, t_j in the while loop. As a result, the risks in (19) are updated in the loop. As mentioned in Section 3.3, to obtain a more accurate Bayes(g) estimator using the data-driven approach, a large value for the hyperparameter b and a small value for d (to ensure that D in Γ(β | n+c, D) is not a negative number) are selected, and then the hyperparameters a and c are found using the MLEs. That is,
$$a = \frac{1}{b}\left(\frac{T}{\hat{\theta}}\right)^{\hat{\beta}}, \qquad c = \frac{\hat{\beta}}{d}.$$
It is noted that the second double integral on the far right of (17) cannot be computed via the numerical integration method in Mathematica. Therefore, we use the Monte Carlo method to simulate m = 5000 independent samples from Γ(β | j+c, D) and Γ(L | j+a, b/(1+b)) for each iteration, based on the sample size j in the loop, to approximate the second integral.
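A sketch of one risk evaluation along these lines (NIntegrate for the first term of (17), a Monte Carlo average for ωE[M]); all inputs are placeholders, and DD again stands for D:

m[L_, β_, τ_] := τ L^(-1/β) Exp[L] Gamma[1 + 1/β, L];
riskG[j_, a_, b_, c_, DD_, T_, ω_] := Module[{em, βs, Ls},
  em = NIntegrate[
    Exp[-ω m[L, β, T]] PDF[GammaDistribution[j + c, DD], β]*
     PDF[GammaDistribution[j + a, b/(1 + b)], L],
    {β, 0, Infinity}, {L, 0, Infinity}];
  βs = RandomVariate[GammaDistribution[j + c, DD], 5000];
  Ls = RandomVariate[GammaDistribution[j + a, b/(1 + b)], 5000];
  Log[em] + ω Mean[MapThread[m[#2, #1, T] &, {βs, Ls}]]];
riskG[20, 2., 1.2, 1., 0.04, 10., 0.5]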
Table 8 provides simulation results for the average values of the optimal n for selected values of b , d , δ , and ω based on (19).
From a practical perspective, when performing data analysis, one must select an appropriate value for ω that matches the specific application. Several options are available, some of which are discussed in the literature but are not covered in this article.
  • ω can be modeled as a random variable with a probability distribution. In this case, the Bayes estimator is obtained by taking the expected value of the LINEX-based estimator; see [9]. With this approach, a specific value for ω is not chosen.
  • A cost function depending on ω can be incorporated with the LINEX loss function, and the value of ω that minimizes this cost function is selected.
For this article, as previously noted, negative values of ω make it computationally impossible to provide Bayes estimates of the expected arrival time past a truncated time for the HPP and NHPP cases; we considered only selected positive values in all tables. Due to this limitation, the proposed methods are applicable when overestimating the expected arrival time beyond a truncated time is more critical than underestimating it. We suggest selecting several potential values ω_1, ω_2, …, ω_k, where the choices can be based on previous similar applications, if available. For each ω_j, j = 1, 2, …, k, compute the risks (14) and (17) for the HPP and NHPP cases, respectively, using the informative priors (which, as shown by the simulations, are more accurate than the non-informative priors), and then choose the ω that yields the smallest risk value.

5. Numerical Example

The following data represent the arrival times of 31 events generated from an NHPP. The purpose of this example is to provide the computational steps for assessing the goodness-of-fit of the data to the NHPP, using the MLEs of the parameters of the NHPP under the PL intensity to determine appropriate values (see Section 4.2) for the hyperparameters a, b, c, d, and for finding the optimal sample size.
t = {0.10, 0.30, 1.05, 1.40, 1.94, 4.55, 5.60, 6.39, 16.87, 18.52, 23.74, 23.92, 27.27, 33.24, 34.17, 42.67, 43.49, 43.94, 44.44, 49.19, 63.13, 71.07, 78.60, 84.15, 85.02, 87.32, 89.98, 90.86, 102.44, 106.18, 117.6}
Here τ = 200. Ref. [15] considered the Laplace statistic (LS)
$$S_n = \sqrt{\frac{12}{n}} \sum_{i=1}^{n} \left[\frac{\Lambda(t_i, \hat{\beta})}{\Lambda(\tau, \hat{\beta})} - 0.5\right]$$
to assess the goodness-of-fit of an NHPP for several intensity functions, including the power-law intensity. Λ(t_i, β̂) denotes the mean-value function evaluated at time t_i. The test rejects an NHPP if
$$|S_n| > z_{\alpha/2}\, \sqrt{\nu(\hat{\beta})},$$
where ν(β̂) is the asymptotic variance of S_n when β is estimated by its MLE; for the PL intensity function, it simplifies to ν(β) = 1/4 = 0.25.
Based on the above data, the MLE for β is β̂ = 0.436625, and S_n = −0.823817. Note that the estimated value of the parameter θ is not involved in the test statistic, since Λ(t_i, β̂)/Λ(τ, β̂) = (t_i/τ)^β̂. Since
$$|S_n| \le z_{\alpha/2}\, \sqrt{1/4} = 1.96/2 = 0.98,$$
the null hypothesis that t is from an NHPP with the PL intensity is not rejected at the 0.05 significance level. Using b = 4 and d = 0.4 for the hyperparameters (see Section 4.2 for the selection of the hyperparameters a and c) and T = 200, the optimal sample sizes based on δ = 0.05 and δ = 0.01 are n = 20 and n = 24, respectively, via the Mathematica Code B.
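The test computation can be reproduced in Mathematica from the data t above, using the fact that Λ(t_i, β̂)/Λ(τ, β̂) = (t_i/τ)^β̂, so θ̂ cancels:

τ = 200.; n = Length[t];
βhat = n/Total[Log[τ/t]];                  (* -> 0.436625 *)
Sn = Sqrt[12./n] Total[(t/τ)^βhat - 0.5];  (* -> -0.823817; |Sn| <= 0.98 *)
{βhat, Sn}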

6. Summary

This article presents a computational method for estimating the expected arrival time of an event beyond a truncated time under the HPP and NHPP models. Two priors are proposed for the HPP case; the gamma prior outperforms the Jeffreys prior in accuracy. For the NHPP case, the power-law intensity function is utilized. Simulations indicate that using gamma priors for the parameters of the intensity function results in more accurate estimates than the non-informative prior discussed in the article. Calculating Bayes estimates for the NHPP case involves a numerical integration method implemented in Mathematica. Given the maximum observation time T for an HPP or NHPP, as data points are collected in real time, it may be useful to determine the optimal sample size so that the percentage reduction in risk, as a function of the sample size, meets a specified threshold. We propose a minimax-type approach to identify this optimal sample size. A sensitivity analysis for the NHPP case confirms that the Bayes estimator based on an informative prior is more precise than the MLE and the Bayes estimator based on the Jeffreys prior. Codes A and B in the Supplementary Materials provide the optimal sample sizes for the HPP and NHPP cases.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/analytics4030020/s1, Appendix: Code A and B.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Ross, S.M. Introduction to Probability Models, 10th ed.; Academic Press: Cambridge, MA, USA, 2009; pp. 433–434. [Google Scholar]
  2. Aminzadeh, M.S.; Deng, M. Bayesian estimation of renewal function based on Pareto-distributed Inter-arrival times via an MCMC algorithm. Variance 2022, 15. [Google Scholar]
  3. Ferraes, S.G. The optimum Bayesian probability procedure and the prediction of strong earthquakes felt in Mexico City. Pure Appl. Geophysics 1988, 127, 561–571. [Google Scholar] [CrossRef]
  4. Timmermann, K.E.; Nowak, R.D. Multiscale modeling and estimation of Poisson processes with application to photon-limited imaging. IEEE Trans. Inf. Theory 1999, 45, 846–862. [Google Scholar] [CrossRef]
  5. Taghipour, S.; Banjevic, D. Trend analysis of the Power Law process with censored data. In Proceedings of the Reliability and Maintainability Symposium Proceedings-Annual, Lake Buena Vista, FL, USA, 24–27 January 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6. [Google Scholar]
  6. Al-Dousari, A.; Ellahi, A.; Hussain, I. Use of non-homogeneous Poisson process for the analysis of new cases, deaths, and recoveries of COVID-19 patients: A case study of Kuwait. Science 2021, 33, 101614. [Google Scholar] [CrossRef]
  7. Jayasinghe, J.; Ellingson, L.; Prematilake, C. Regression models using the LINEX loss to predict lower bounds for the number of points for approximating planar contour shapes. J. Appl. Stat. 2021, 49, 4294–4313. [Google Scholar] [CrossRef] [PubMed]
  8. Chang, Y.; Hung, W.l. LINEX Loss functions with applications to determining the optimum process parameters. Qual. Quant. 2007, 41, 291–301. [Google Scholar] [CrossRef]
  9. Nassar, M.; Alotaibi, R.; Okasha, H.; Wang, L. Bayesian estimation using expected LINEX loss function: A novel approach with applications. Mathematics 2022, 10, 436. [Google Scholar] [CrossRef]
  10. Khatun, M.; Matin, M.A. A study on the LINEX loss function with different estimating methods. Open J. Stat. 2020, 10, 52–63. [Google Scholar] [CrossRef]
  11. Zellner, A. Bayesian estimation and prediction using asymmetric loss function. J. Am. Stat. Assoc. 1986, 81, 446–451. [Google Scholar] [CrossRef]
  12. Aminzadeh, M.S. Bayesian estimation of the expected time of first arrival past a truncated time t—The case of NHPP with Power Law intensity. Comput. Stat. 2013, 28, 2465–2477. [Google Scholar] [CrossRef]
  13. El-Sayyad, G.M. Bayesian sequential estimation of a Poisson rate. Biometrika 1973, 60, 289–296. [Google Scholar] [CrossRef]
  14. Meczarski, M.; Zielinski, R. Bayes optimal stopping of a homogeneous Poisson process under LINEX loss function and variation in the priors. Appl. Math. 1977, 24, 457–463. [Google Scholar] [CrossRef]
  15. Zhao, J.; Jinde, W. A new goodness-of-fit test based on the Laplace statistic for a large class of NHPP models. Commun. Stat. Comput. 2005, 34, 725–736. [Google Scholar] [CrossRef]
Figure 1. Graph of the PL intensity function.
Figure 2. Graph of the LINEX loss function with selected values of ω.
Figure 3. Plot of SASE values for b = 1.2, d = 1.3, θ_t = 0.5, β_t = 0.8. (a): ω = 0.1, τ = 10, 50, 80. (b): ω = 0.5, τ = 10, 50, 80. (c): ω = 1, τ = 10, 50, 80.
Figure 4. Plot of SASE values for b = 0.6, d = 0.4, θ_t = 0.9, β_t = 0.7. (a): ω = 0.1, τ = 10, 50, 80. (b): ω = 0.5, τ = 10, 50, 80. (c): ω = 1, τ = 10, 50, 80.
Figure 5. Plot of SASE values for b = 0.2, d = 0.2, θ_t = 2.2, β_t = 0.85. (a): ω = 0.1, τ = 10, 50, 80. (b): ω = 0.5, τ = 10, 50, 80. (c): ω = 1, τ = 10, 50, 80.
Figure 6. Plot of SASE values for b = 0.5, d = 0.5, θ_t = 0.4, β_t = 0.95. (a): ω = 0.1, τ = 10, 50, 80. (b): ω = 0.5, τ = 10, 50, 80. (c): ω = 1, τ = 10, 50, 80.
Table 1. Mean and SASE of estimators for selected values of hyperparameters (entries: mean with SASE in parentheses).
α = 5, λ = 0.4
ω      n     MLE               Bayes(g)          Bayes(J)
0.02   10    2.5007 (0.7640)   2.4954 (0.5434)   2.2677 (0.7290)
0.02   50    2.4976 (0.3612)   2.4965 (0.3342)   2.4474 (0.3577)
0.02   100   2.5107 (0.2464)   2.5096 (0.2368)   2.4852 (0.2441)
0.4    10    2.5156 (0.7925)   2.4198 (0.5323)   2.1869 (0.7316)
0.4    50    2.5226 (0.3748)   2.4971 (0.3399)   2.4488 (0.3633)
0.4    100   2.5043 (0.2584)   2.4920 (0.2461)   2.4673 (0.2554)
1      10    2.4825 (0.8083)   2.2902 (0.5354)   2.0471 (0.7617)
1      50    2.5062 (0.3512)   2.4491 (0.3150)   2.3994 (0.3436)
1      100   2.4980 (0.2589)   2.4685 (0.2452)   2.4434 (0.2566)

α = 50, λ = 5
ω      n     MLE               Bayes(g)          Bayes(J)
0.02   10    0.2003 (0.0633)   0.2000 (0.0010)   0.1821 (0.0603)
0.02   50    0.2003 (0.0274)   0.2001 (0.0138)   0.1964 (0.0271)
0.02   100   0.1996 (0.0206)   0.1997 (0.0139)   0.1977 (0.0206)
0.4    10    0.2004 (0.0617)   0.1999 (0.0104)   0.1815 (0.0586)
0.4    50    0.2011 (0.0290)   0.2005 (0.0146)   0.1970 (0.0285)
0.4    100   0.2003 (0.0194)   0.1999 (0.0130)   0.1979 (0.0193)
1      10    0.1999 (0.0651)   0.1996 (0.0110)   0.1799 (0.0614)
1      50    0.1994 (0.0279)   0.1995 (0.0140)   0.1951 (0.0276)
1      100   0.1988 (0.0194)   0.1991 (0.0130)   0.1967 (0.0195)
Table 2. Mean and SASE of estimators for selected values of hyperparameters (entries: mean with SASE in parentheses).
α = 5, λ = 20
ω      n     MLE               Bayes(g)          Bayes(J)
0.02   10    0.0500 (0.0159)   0.0500 (0.0113)   0.0455 (0.0151)
0.02   50    0.0501 (0.0071)   0.0501 (0.0066)   0.0491 (0.0070)
0.02   100   0.0500 (0.0049)   0.0500 (0.0048)   0.0495 (0.0049)
0.4    10    0.0498 (0.0155)   0.0498 (0.0011)   0.0452 (0.0149)
0.4    50    0.0499 (0.0069)   0.0499 (0.0064)   0.0489 (0.0068)
0.4    100   0.0501 (0.0048)   0.0501 (0.0046)   0.0496 (0.0048)
1      10    0.0497 (0.0150)   0.0497 (0.0113)   0.0451 (0.0151)
1      50    0.0499 (0.0071)   0.0498 (0.0066)   0.0489 (0.0071)
1      100   0.0499 (0.0049)   0.0499 (0.0047)   0.0494 (0.0049)

α = 50, λ = 20
ω      n     MLE               Bayes(g)          Bayes(J)
0.02   10    0.0501 (0.0160)   0.0500 (0.0027)   0.0456 (0.0152)
0.02   50    0.0501 (0.0070)   0.0500 (0.0035)   0.0491 (0.0070)
0.02   100   0.0497 (0.0050)   0.0498 (0.0034)   0.0492 (0.0050)
0.4    10    0.0499 (0.0152)   0.0499 (0.0025)   0.0449 (0.0147)
0.4    50    0.0497 (0.0069)   0.0498 (0.0035)   0.0487 (0.0069)
0.4    100   0.0501 (0.0051)   0.0500 (0.0034)   0.0496 (0.0050)
1      10    0.0496 (0.0160)   0.0499 (0.0027)   0.0450 (0.0153)
1      50    0.0497 (0.0071)   0.0498 (0.0035)   0.0487 (0.0070)
1      100   0.0498 (0.0048)   0.0499 (0.0032)   0.0493 (0.0047)
Table 3. SASE for estimators: n = 100, ω = 1.
α     β      λ      ML         Bayes(g)   Bayes(J)
5     3      5      (0.0196)   (0.0277)   (0.0195)
5     0.8    5      (0.0199)   (0.0191)   (0.0198)
5     2      25     (0.0040)   (0.0179)   (0.0040)
5     0.16   25     (0.0039)   (0.0036)   (0.0039)
50    1.0    10     (0.0100)   (0.0736)   (0.0100)
50    4.9    10     (0.0101)   (0.0096)   (0.0100)
50    2      0.5    (0.1977)   (0.2002)   (0.1952)
50    98     0.5    (0.2008)   (0.1330)   (0.1992)
Table 4. Mean and SASE of estimators of M (entries: mean with SASE in parentheses).
b = 1.2, d = 1.2, θ_t = 0.5, β_t = 0.8
ω      τ     MLE                Bayes(g)           Bayes(J)
0.10   10    11.315 (1.0088)    11.455 (0.6228)    10.086 (1.0824)
0.10   50    51.567 (0.3765)    51.606 (0.3053)    50.017 (1.5618)
0.10   80    81.773 (0.3422)    81.785 (0.2795)    80.888 (1.7017)
0.50   10    11.350 (1.0439)    11.336 (0.6509)    10.074 (1.0083)
0.50   50    51.641 (0.3694)    51.650 (0.2976)    50.018 (1.5615)
0.50   80    81.768 (0.3337)    81.750 (0.2547)    80.245 (1.5839)
1      10    11.337 (1.0020)    11.264 (0.4986)    10.079 (1.0875)
1      50    51.598 (0.3505)    51.605 (0.2742)    50.0168 (1.5628)
1      80    81.719 (0.3161)    81.722 (0.2490)    80.262 (1.6964)

b = 0.6, d = 0.4, θ_t = 0.9, β_t = 0.7
ω      τ     MLE                Bayes(g)           Bayes(J)
0.10   10    13.877 (5.1169)    13.858 (1.9138)    10.447 (2.4429)
0.10   50    54.646 (1.7685)    54.739 (1.2273)    50.121 (4.2771)
0.10   80    85.446 (2.7012)    85.344 (1.3934)    80.101 (4.9298)
0.50   10    13.581 (2.4807)    13.172 (1.0874)    10.439 (2.4270)
0.50   50    54.828 (2.0907)    54.469 (1.0026)    50.121 (4.2774)
0.50   80    85.246 (1.8805)    84.961 (1.0295)    80.0943 (4.9362)
1      10    11.611 (2.3862)    11.450 (0.5235)    10.148 (1.1486)
1      50    51.527 (0.3403)    51.541 (0.2714)    50.025 (1.4379)
1      80    81.572 (0.2500)    81.575 (0.1955)    80.542 (1.8310)
Table 5. Mean and SASE of estimators of M (entries: mean with SASE in parentheses).
b = 0.2, d = 0.2, θ_t = 2.2, β_t = 0.85
ω      τ     MLE                Bayes(g)           Bayes(J)
0.10   10    13.391 (2.8732)    14.102 (1.2899)    10.967 (2.6422)
0.10   50    54.441 (2.3591)    54.343 (1.0487)    50.207 (3.9872)
0.10   80    84.909 (1.7714)    84.833 (1.0220)    80.116 (4.3568)
0.50   10    13.403 (3.9052)    13.272 (0.9498)    10.687 (2.7362)
0.50   50    54.517 (2.0258)    54.240 (0.7738)    50.170 (4.0143)
0.50   80    84.810 (1.7561)    84.437 (0.8112)    80.123 (4.3501)
1      10    13.370 (2.9280)    12.873 (0.7653)    10.490 (2.0492)
1      50    51.174 (1.7411)    51.781 (0.8125)    50.069 (4.2098)
1      80    84.515 (1.4550)    81.183 (0.7159)    80.081 (4.3922)

b = 0.5, d = 0.5, θ_t = 0.4, β_t = 0.95
ω      τ     MLE                Bayes(g)           Bayes(J)
0.10   10    13.587 (3.9508)    15.330 (2.5415)    11.487 (3.3944)
0.10   50    55.793 (3.5006)    55.6147 (2.0538)   50.371 (4.4365)
0.10   80    85.383 (2.2302)    85.389 (2.5106)    80.177 (4.7285)
0.50   10    13.357 (3.1895)    13.732 (1.4584)    11.005 (3.1140)
0.50   50    55.382 (2.8345)    54.686 (1.2416)    50.321 (4.4807)
0.50   80    85.6450 (2.2392)   84.948 (1.1570)    80.222 (4.6845)
1      10    14.023 (4.2846)    13.189 (1.6032)    10.435 (4.0081)
1      50    54.722 (2.1968)    53.951 (1.2457)    50.190 (4.6089)
1      80    85.572 (2.7352)    84.161 (1.0682)    80.175 (4.7296)
Table 6. Sensitivity analysis for the estimators using selected distributions for θ and β (entries: SASE in parentheses).
b = 0.6, d = 0.4; β ~ Uniform(0.5, 0.9), θ ~ Uniform(1, 3)
ω      τ     M̄_t     ML         Bayes(g)   Bayes(J)
0.8    10    14.81    (6.09)     (3.21)     (6.31)
0.8    30    37.43    (6.66)     (5.57)     (10.19)
0.2    10    15.63    (3.66)     (1.80)     (6.31)
0.2    30    39.09    (7.90)     (4.39)     (9.72)

b = 0.6, d = 0.4; β ~ Uniform(0.5, 0.9), θ ~ Exponential(0.5)
ω      τ     M̄_t     ML         Bayes(g)   Bayes(J)
0.8    10    12.49    (2.36)     (0.77)     (2.59)
0.8    30    39.15    (11.29)    (9.74)     (14.16)
0.2    10    18.41    (9.44)     (7.63)     (11.38)
0.2    30    38.41    (12.44)    (3.07)     (8.57)

b = 0.8, d = 0.3; β ~ Gamma(10, 0.08), θ ~ Uniform(1, 3)
ω      τ     M̄_t     ML         Bayes(g)   Bayes(J)
0.8    10    17.29    (14.54)    (13.85)    (15.33)
0.8    30    39.38    (16.72)    (12.89)    (15.90)
0.2    10    14.98    (5.81)     (3.80)     (7.35)
0.2    30    37.10    (22.29)    (9.95)     (10.61)

b = 0.8, d = 0.3; β ~ Gamma(10, 0.08), θ ~ Exponential(0.67)
ω      τ     M̄_t     ML         Bayes(g)   Bayes(J)
0.8    10    14.39    (7.75)     (6.79)     (8.37)
0.8    30    34.69    (3.96)     (3.42)     (7.28)
0.2    10    14.36    (4.79)     (3.03)     (6.09)
0.2    30    41.34    (11.25)    (9.30)     (16.03)
Table 7. Optimal n for HPP with δ = 0.01, 0.05.
λ      α     ω      δ      n
0.4    2     0.02   0.01   118.5
0.4    2     0.02   0.05   20.3
0.4    2     0.4    0.01   120.4
0.4    2     0.4    0.05   20.4
0.4    2     1      0.01   121.7
0.4    2     1      0.05   20.4
0.2    5     0.02   0.01   114.5
0.2    5     0.02   0.05   17.2
0.2    5     0.4    0.01   115.8
0.2    5     0.4    0.05   17.3
0.2    5     1      0.01   119.2
0.2    5     1      0.05   17.3
20     2     0.02   0.01   99.6
20     2     0.02   0.05   19.0
20     2     0.4    0.01   99.6
20     2     0.4    0.05   19.0
20     2     1      0.01   99.5
20     2     1      0.05   19.0
20     5     0.02   0.01   96.7
20     5     0.02   0.05   16.0
20     5     0.4    0.01   96.5
20     5     0.4    0.05   16.0
20     5     1      0.01   96.5
20     5     1      0.05   16.0
Table 8. Optimal n for NHPP with δ = 0.05.
θ      β      b    d     ω      n
1.2    0.8    2    0.2   0.02   19.8
1.2    0.8    2    0.2   0.4    28.3
1.2    0.8    2    0.2   1.0    14.1
1.2    0.8    2    0.6   0.02   19.6
1.2    0.8    2    0.6   0.4    35.7
1.2    0.8    2    0.6   1.0    15.8
1.2    0.8    2    1.2   0.02   19.3
1.2    0.8    2    1.2   0.4    31.7
1.2    0.8    2    1.2   1.0    13.2
1.2    0.8    2    0.1   0.02   19.8
1.2    0.8    2    0.1   0.4    35.9
1.2    0.8    2    0.1   1.0    15.7
0.8    0.7    4    0.4   0.02   22.7
0.8    0.7    4    0.4   0.4    35.6
0.8    0.7    4    0.4   1.0    17.8
5      1.5    8    0.8   0.02   17.6
5      1.5    8    0.8   0.4    32.6
5      1.5    8    0.8   1.0    10.2
0.8    0.7    3    0.8   0.02   21.2
0.8    0.7    3    0.8   0.4    37.8
0.8    0.7    3    0.8   1.0    15.1
0.8    0.7    3    0.8   0.02   30.3
0.8    0.7    3    0.8   0.4    37.8
0.8    0.7    3    0.8   1.0    15.1
0.8    0.7    4    0.6   0.02   20.6
0.8    0.7    4    0.6   0.4    38.3
0.8    0.7    4    0.6   1.0    18.6
0.8    0.95   4    0.6   0.02   17.0
0.8    0.95   4    0.6   0.4    36.0
0.8    0.95   4    0.6   1.0    13.7
5      1.5    8    0.5   0.02   16.3
5      1.5    8    0.5   0.4    33.6
5      1.5    8    0.5   1.0    10.7
1.5    0.5    8    0.5   0.02   23.3
1.5    0.5    8    0.5   0.4    39.0
1.5    0.5    8    0.5   1.0    18.9

