Article

Objective Posterior Analysis of kth Record Statistics in Gompertz Model

1 Faculty of Education, University of Belgrade, Kraljice Natalije 43, 11000 Belgrade, Serbia
2 School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2025, 14(3), 152; https://doi.org/10.3390/axioms14030152
Submission received: 21 January 2025 / Revised: 13 February 2025 / Accepted: 17 February 2025 / Published: 20 February 2025
(This article belongs to the Special Issue Stochastic Modeling and Its Analysis)

Abstract:
The Gompertz distribution has proven highly valuable in modeling human mortality rates and assessing the impacts of catastrophic events, such as plagues, financial crashes, and famines. Record data, which capture extreme values and critical trends, are particularly relevant for analyzing such phenomena. In this study, we propose an objective Bayesian framework for estimating the parameters of the Gompertz distribution using record data. We analyze the performance of several objective priors, including the reference prior, Jeffreys’ prior, the maximal data information (MDI) prior, and probability matching priors. The suitability and properties of the resulting posterior distributions are systematically examined for each prior. A detailed simulation study is performed to assess the effectiveness of various estimators based on the performance criteria. To demonstrate the practical application of the methodology, it is applied to a real-world dataset. This study contributes to the field by providing a thorough comparative evaluation of objective priors and showcasing their impact and applicability in parameter estimation for the Gompertz distribution based on record values.

1. Introduction and Motivation

Initially developed to model mortality rates, the Gompertz distribution has demonstrated remarkable versatility across various domains. Its primary appeal lies in its ability to capture growth patterns and survival probabilities in contexts where failure rates or hazard functions increase exponentially over time.
Its probability density function (pdf) and cumulative distribution function (cdf) are given by
f(x; α, β) = β e^{αx} e^{−(β/α)(e^{αx} − 1)}, x > 0, (1)
F(x; α, β) = 1 − e^{−(β/α)(e^{αx} − 1)}, x > 0, (2)
where α > 0 is the shape parameter and β > 0 is the scale parameter.
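As a concrete illustration, the pdf, cdf, and an inverse-transform sampler can be written in a few lines (a Python sketch; the paper's own computations were carried out in R). Inverting F(x) = u gives x = (1/α) ln(1 − (α/β) ln(1 − u)) for u ~ U(0, 1):

```python
import math
import random

def gompertz_pdf(x, alpha, beta):
    """Density f(x) = beta * e^{alpha*x} * e^{-(beta/alpha)(e^{alpha*x} - 1)}."""
    return beta * math.exp(alpha * x) * math.exp(-(beta / alpha) * (math.exp(alpha * x) - 1.0))

def gompertz_cdf(x, alpha, beta):
    """Distribution function F(x) = 1 - e^{-(beta/alpha)(e^{alpha*x} - 1)}."""
    return 1.0 - math.exp(-(beta / alpha) * (math.exp(alpha * x) - 1.0))

def gompertz_rvs(alpha, beta, rng=random):
    """Inverse-transform sampling: solve F(x) = u for u ~ U(0, 1)."""
    u = rng.random()
    # ln(1 - u) <= 0, so the argument of the outer log is >= 1 and x >= 0
    return math.log(1.0 - (alpha / beta) * math.log(1.0 - u)) / alpha
```

Repeated calls to `gompertz_rvs` produce the iid stream from which record values can subsequently be extracted.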
For instance, in biology, the Gompertz distribution has been used to model abalone growth increments [1], reflecting the biological growth process where the rate of growth decelerates over time. Similarly, it has been employed in actuarial science and biomedical studies to model human and animal survival rates, tumor growth dynamics, and cell proliferation patterns.
In economics, the Gompertz distribution has found applications in modeling income distribution [2] by capturing the skewed nature of income variations in populations. It has also been used in reliability engineering and risk assessment, particularly in insurance and finance, where it helps model claim severity, lifetime of financial instruments, and default probabilities.
In demography, the Gompertz model is widely applied in fertility rate modeling [3] and population predictions [4,5], as it effectively captures age-dependent mortality and birth rates. It has also been employed in urban planning to forecast population growth and assess the sustainability of resources in growing metropolitan areas.
Beyond these fields, the Gompertz distribution has seen extensive use in engineering, where it helps model system reliability and failure rates in mechanical and electronic components. In climate science, it has been applied to estimate extreme weather event occurrences, such as hurricanes or drought durations. Additionally, in marketing and business analytics, the distribution has been used to model consumer behavior, product adoption curves, and technology diffusion, as it aligns well with the idea of early adopters followed by slower market saturation.
The utility of the Gompertz distribution has inspired numerous extensions and generalizations, broadening its applicability to even more complex datasets. Notable extensions include the following:
  • Generalized Gompertz distribution [6], which provides greater flexibility in modeling hazard functions with non-monotonic characteristics.
  • Beta-Gompertz distribution [7], which incorporates additional shape parameters to better model diverse life phenomena.
  • Gamma-Gompertz distribution [8], which extends the original model by incorporating a gamma-distributed heterogeneity component, making it useful in survival analysis.
  • McDonald–Gompertz distribution [9], which introduces additional shape parameters for greater adaptability in real-world applications.
Given its versatility, the Gompertz distribution continues to be a valuable tool in statistical modeling, offering a robust framework for analyzing time-to-event data and dynamic growth processes across multiple disciplines.
For instance, during an epidemic, high death rates are perceived as dominant, and particular attention is paid to them because their behavior is crucial for understanding the course of the outbreak. Similar circumstances arise in the stock market, where investors are curious about whether a stock will outperform its predecessors. Additionally, reducing the number of records to observe from the entire dataset under consideration can save time and money during the recording and storage process. This suggests that record values in these cases also contain the majority of the information; see [10]. This is one of the explanations for the ongoing interest in record event modeling and observation.
Record values are one of the most useful sampling schemes in reliability and life-testing designs. They were introduced in [11]. In this scheme, interest lies in the units that exceed all previous ones during the investigation. Note that there are two methods of extracting record values from a sample. The first is referred to as random sampling: the number of records is regarded as a random variable, and the sample size is specified in advance. The second approach treats the sample size as a random variable by fixing the number of records. This technique is known as inverse sampling (see [12]), and it is commonly used in practice. Under this setting, m units are placed on a life-testing experiment and only n (n ≤ m) record values are observed. This occurs in a vast number of experiments, where it is easier to collect and store only record values instead of the whole sample. Examples may be found in the breaking strength of wooden beams [13], extreme weather conditions [14], biology [15], etc. Interested readers may refer to [16,17] for the fundamentals of this theory.
This paper deals with so-called kth record values. Let {X_i, i ≥ 1} be a sequence of independent and identically distributed (iid) random variables from which the records are extracted. We set T_{1,k} = k and R_1^{(k)} = X_{1:k} as the first kth record time and the first kth record value, respectively. Immediately following T_{1,k}, the second kth record time is defined as T_{2,k} = min{j : j > k, X_j > X_{1:k}}, and, following T_{2,k}, the third is T_{3,k} = min{j : j > T_{2,k}, X_j > X_{T_{2,k}−k+1:T_{2,k}}}. The process continues until all remaining kth record times are collected. The upper kth record values are then defined recursively as {R_n^{(k)} = X_{T_{n,k}−k+1:T_{n,k}}, n ≥ 1}, providing a sample scheme of records R^{(k)} = (R_1^{(k)}, R_2^{(k)}, …, R_n^{(k)}) useful for inferential purposes, with the observed record values denoted by r^{(k)} = (r_1^{(k)}, r_2^{(k)}, …, r_n^{(k)}). The conventional record values are the special case of kth record values obtained by setting k = 1. For more details on kth record times and values, we refer to [16,18,19]. For recent inference under the kth record sampling scheme, see, for example, [20,21,22,23,24,25,26,27,28].
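The extraction procedure above can be sketched compactly (Python for illustration): the nth kth upper record is the kth largest observation at the nth time the kth maximum of the sequence changes, and k = 1 recovers ordinary upper records.

```python
import heapq

def kth_upper_records(xs, k):
    """Extract the k-th upper record values from a sequence xs.

    The n-th k-record is the k-th largest observation at the n-th time
    the k-th maximum of the sequence changes (k = 1 gives ordinary
    upper records)."""
    if len(xs) < k:
        return []
    topk = sorted(xs[:k])      # min-heap holding the k largest values seen so far
    records = [topk[0]]        # R_1^{(k)}: k-th largest of the first k observations
    for x in xs[k:]:
        if x > topk[0]:        # the k-th maximum changes
            heapq.heapreplace(topk, x)
            records.append(topk[0])
    return records
```

For example, `kth_upper_records([3, 1, 4, 1, 5], 1)` returns the ordinary upper records `[3, 4, 5]`, while k = 2 tracks the successive second-largest values.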
Jaheen [29] used sampling-based computational methods to estimate the parameters of the Gompertz model under the classical record framework. This framework represents a specific case of kth records when k = 1 . It was revealed that there is no tractable form for the maximum likelihood estimator (MLE) of parameter α in the case of classical record values. As a result, a numerical iterative process is required for its evaluation.
In this paper, a similar situation arises, as we demonstrate that there is no tractable form for the MLE of the parameter α in the case of kth upper record values. Therefore, numerical computation methods based on iterations must be applied. However, several challenges can emerge during such iterations, including divergence, convergence to multiple solutions, and dependency on initial values. The nature of these potential issues is discussed in Appendix A.1.
In general, drawing valid conclusions about the parameters becomes difficult when using small samples. Such samples exhibit a high degree of skewness and noticeable asymmetry in their behavior, as record values do. To address these challenges, Bayesian methods are employed to provide clearer and more reliable parameter estimates, as indicated in [30].
Moreover, the Gompertz distribution plays a crucial role in reliability analysis, aligning closely with advancements in statistical inference and degradation modeling. Recent developments propose multivariate models for analyzing dependent tail-weighted degradation data, emphasizing the importance of robust statistical methods in reliability applications [31].
Additionally, statistical inference techniques for k-out-of-n: G systems with switching failures under Poisson shocks highlight the necessity for flexible and accurate modeling in reliability and life testing [32]. These contributions support the current study by emphasizing the significance of advanced statistical approaches, such as those based on record values and Bayesian inference, in addressing reliability challenges.
By integrating these methodologies, this study bridges theoretical advancements with practical needs in reliability modeling.
Here, we consider Bayesian analysis for the Gompertz distribution when record values are present. As is known in the literature, the analysis uses the so-called subjective priors as an illustration of already acquired knowledge that the researcher possesses. Such knowledge is based on past experience, background motivation within the problem, or even some evidence already collected. Let us consider the Bayesian analysis of the model parameters ( α , β ) under the subjective fuzzy prior
π(α, β) ∝ (1/β) α^{τ−1} e^{−α}, (3)
where τ > 0 is the hyperparameter.
In Appendix A, we see that under the prior (3), the posterior distributions π(α, β | R^{(k)}) are proper. For the purpose of illustrating the influence of the prior distribution’s shape on the posterior distributions, we use true values of the parameters (α, β), while the hyperparameter τ is assigned values of 1, 5, and 9, with performance metrics such as RMSE and CP defined in Section 4. The RMSE is used to quantify the estimation accuracy, focusing on α since the results for β are similar and thus omitted for brevity. The estimates α̂ and β̂ are obtained through Bayesian methods as described in Section 4. Specifically, the posterior distributions π(α, β | r^{(k)}) are derived under the prior specified in (3), and the parameter estimates are taken as the posterior means of α and β. This ensures that the estimates reflect the characteristics of the posterior distributions while accounting for the influence of the prior. The record sample sizes considered in this study are n = 3, 4, 7, 9, and 10, and the results are presented graphically in Figure 1, providing a detailed visualization of how τ affects the posterior analysis. These findings emphasize the challenge of selecting an appropriate value of τ that balances prior assumptions with data-driven inference.
When the underlying data are unknown or when we want to highlight the information found in the data alone, we might employ what are known as objective priors. These priors are based on adequate scientific principles that address the interest of the problem at hand. Several authors have utilized objective priors for model parameters based on iid samples and discussed their respective properties and influence on the posteriors. Examples can be found in [33,34,35,36,37,38]. For situations in which the iid structure is compromised, such analysis becomes more complex but at the same time more appealing. This is the reason why there are few papers that deal with this topic in the literature. The case of record values is one such instance. As a result, this paper’s practical value stems from Bayesian inference under record values and objective priors, which builds on the global popularity of records and can find applications in diverse spheres of life. For example, Refs. [39,40] contain such applications.
This paper aims to provide a detailed analysis of objective Bayesian methods for the Gompertz model based on kth record values. We demonstrate that a vague prior exists on the entire positive axis and that the properness of the posterior distributions is satisfied under this prior. Similar conclusions are drawn for objective priors. Bayesian and objective Bayesian analyses for the Gompertz model using independent and identically distributed (iid) samples have been discussed in [41,42,43]. The importance of Bayesian inference for Gompertz model parameters was highlighted in the context of an age mimicry problem in [44]. Objective Bayesian analysis for the Gumbel distribution under a record framework was presented in [40]. Furthermore, as noted in [23], when both parameters of the Gompertz model are unknown, finding a joint prior with support on the positive real axis can lead to computational complexities. Using extensive computer simulations, we compare the performance of Bayesian estimators under various objective priors. Finally, we compute credible intervals and evaluate their correspondence with mean square errors (MSEs).
Overall, this paper contributes to the growing body of work on Bayesian inference for the Gompertz distribution by focusing on record data. The specific contributions are as follows:
  • We develop a comprehensive objective Bayesian framework for estimating the parameters of the Gompertz distribution using kth record values. This includes deriving and evaluating various objective priors such as Jeffreys’ prior, reference prior, maximal data information (MDI) prior, and probability matching priors.
  • We rigorously establish the properness of the posterior distributions under these priors, ensuring the validity of the proposed approach.
  • A detailed comparative analysis of the objective priors is performed through an extensive simulation study, highlighting their influence on Bayesian estimators in terms of mean squared error (MSE) and coverage probabilities (CPs).
  • We address the computational challenges associated with maximum likelihood estimation (MLE) for kth record values and demonstrate the advantages of Bayesian methods, particularly under small sample settings, where MLE methods often struggle.
  • The proposed methodology is applied to real-world record data, showcasing its practical relevance and robustness in modeling and inference.
Compared to existing works, our study is distinguished by its focus on kth record values and the integration of multiple objective priors in the Bayesian framework. While ref. [29] explored Bayesian inference for Gompertz distribution parameters using classical record values ( k = 1 ), this paper generalizes the framework to kth record values and extends the analysis to include objective priors. Furthermore, we address computational issues related to iterative MLE methods, as noted in [29], and demonstrate how Bayesian methods circumvent these challenges. Prior works on objective Bayesian inference, such as [39], primarily focused on iid data or simpler distributions; our approach broadens the scope to include record-based inference for the Gompertz model, which has not been extensively studied in the literature.
The remainder of the paper is structured as follows: Section 2 introduces the derivation and properties of objective priors for the Gompertz distribution under the record framework. Section 3 presents the results of a simulation study comparing the performance of Bayesian estimators under different priors. In Section 4, the proposed methodology is applied to real-world datasets, followed by conclusions and discussions in Section 5 and Section 6. Detailed proofs and additional results are provided in Appendix A.

2. Non-Informative Priors and Their Properties

In this section, we introduce formally derived priors for the Gompertz distribution parameters, which retain as much as possible of the information contained in a record sample. We consider a number of significant non-informative priors for the parameters (α, β), including Jeffreys’, the maximal data information, reference, and probability matching priors.

2.1. Probability Matching Priors

In statistical analysis, the comparison between Bayesian inference and conventional frequentist methodologies remains a critical area of study. A key motivation for these comparisons is the desire to align the behaviors of the two approaches as closely as possible. This is where probability matching priors become particularly valuable. These priors are constructed to meet a fundamental practical requirement: ensuring that the posterior coverage probability of a Bayesian credible interval closely aligns with the frequentist coverage probability. For further details on this approach, refer to [41]. However, within record sampling schemes, selecting an appropriate prior becomes especially challenging due to the inherent skewness of record value distributions.
To formalize this concept, consider a prior π(·) for the parameters (φ, ξ), where φ is the parameter of interest. Let φ^{(1−γ)}(π(·), X) denote the (1−γ)th percentile of the marginal posterior distribution of φ. The prior π(·) is termed a second-order probability matching prior if it satisfies
P_φ{ φ ≤ φ^{(1−γ)}(π(·), X) } = 1 − γ + o(n^{−1}),
for all γ ( 0 , 1 ) . For more detailed insights, see [42].
Expressions like (1) and (2) are instrumental in defining second-order probability matching priors for the parameters ( α , β ) of the Gompertz distribution. These priors ensure that the frequentist coverage probabilities of Bayesian credible intervals align closely with their nominal levels, a desirable property in statistical inference. Their construction carefully balances the interplay between the parameter of interest and the nuisance parameter, enabling a tailored approach to inference. Moreover, the properness of the posterior under these priors ensures their applicability for Bayesian analysis, mitigating computational challenges often associated with such distributions. Consequently, these expressions serve as a foundation for constructing objective priors for the Gompertz distribution within the record framework.
Theorem 1. 
(a) When α is the parameter of interest and β is the nuisance parameter, the second-order probability matching prior has the approximated form
π_{M1}(α, β) ∝ F_1(α) · G_1(β),
where F_1(α) = e^{−∫ ((c_1 + 2)/α) dα} and G_1(β) = β^{c_1}, with c_1 an arbitrary constant.
(b) When β is the parameter of interest and α is the nuisance parameter, the second-order probability matching prior has the approximated form
π_{M2}(β, α) ∝ F_2(α) · G_2(β),
where F_2(α) = e^{−∫ ((2α²(c_2 + 1) − 1)/(2α)) dα} and G_2(β) = β^{c_2}, with c_2 an arbitrary constant. (c) For the case c_1 = c_2 = −1, the posterior distribution under π_{M1} or π_{M2} is proper.

2.2. Maximal Data Information Priors

Lindley [43] developed an information-theoretical study of the Bayesian modeling structure using Shannon entropy. His work aimed to formalize the principles behind Bayesian inference by leveraging entropy-based measures to assess the informativeness of prior distributions. The fundamental idea is to construct a prior distribution π that maximizes the information obtained from the data themselves while effectively incorporating prior knowledge about the parameters. This ensures that the prior is not overly restrictive or uninformative but is instead optimally balanced to make full use of the available information.
Building upon this concept, Zellner [44] introduced a systematic approach to deriving informative priors based on maximizing the information content in the likelihood function of the unknown parameters. He proposed a prior distribution that optimally reflects the structure of the likelihood while maintaining a coherent Bayesian updating framework. This prior, known as the maximal data information (MDI) prior, is formulated to enhance the inferential process by ensuring that the prior contributes meaningfully to parameter estimation without overshadowing the evidence provided by the data. The MDI prior is particularly useful in situations where subjective prior information is limited or unreliable, allowing for a more objective and data-driven approach to Bayesian analysis.
The next theorem provides the MDI prior for parameters α and β of the Gompertz model (1).
Theorem 2. 
(a) 
The MDI prior for the parameters ( α , β ) is selected as
π_{MDI}(α, β) ∝ β e^{−α}.
(b) 
The posterior distribution under π_{MDI}(α, β) is proper.

2.3. Reference Priors

In order to maximize the missing information about the parameters, reference analysis uses information-theoretic ideas to specify an objective prior that should be maximally dominated by the data [45]. In [46], reference priors were first formulated in an essentially informal manner. More detailed explanations of the sequential reference process in continuous multiparameter problems were provided in [47], where a formal, rigorous definition of reference priors for a single block of parameters was also given. As is well known, reference priors divide the parameters into distinct ordered groups of interest. One can find more details in [39].
The following theorem represents the reference priors under different ordering groups for parameters α and β from model (1).
Theorem 3. 
(a) If α is the parameter of interest and β is the nuisance parameter, the reference prior holds the form
π_{R1}(α, β) ∝ 1/(αβ),
and if β is the parameter of interest and α is the nuisance parameter, the reference prior for ( β , α ) is
π_{R2}(β, α) ∝ 1/(αβ).
(b) The posterior distributions under the priors π_{R1} and π_{R2} are proper.

2.4. Jeffreys’ Prior

Jeffreys [48] suggested an objective prior representing a scenario with weak information about the unknown model parameters. This prior is derived from the Fisher information (FI) matrix I, provided in (A18), and is given by
π_J(α, β) ∝ √(det I(α, β)), (6)
where
I(α, β) ≈ [ 2/α² − 2/(αβ) + 1/β²    1/β² − 1/(αβ) ]
          [ 1/β² − 1/(αβ)            1/β²          ]. (7)
Although there is continuous debate on whether the multivariate form of this prior is acceptable, Jeffreys’ prior is commonly utilized because of its invariance property under one-to-one transformations of the parameters. Thus, from (6) and (7), Jeffreys’ prior for (α, β) is stated in the following theorem; a simplified version is obtained by approximation.
Theorem 4. 
(a) Jeffreys’ prior for (α, β) is
π_J(α, β) ∝ 1/(αβ).
(b) The posterior distribution under π_J(α, β) is proper.
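As a quick numerical sanity check (a Python sketch, not part of the original derivation, using the approximated Fisher information entries given above), the determinant of the approximated FI matrix equals 1/(α²β²), so that √(det I(α, β)) = 1/(αβ), in agreement with Theorem 4:

```python
import math

def fisher_info(alpha, beta):
    """Approximated Fisher information matrix of the Gompertz model."""
    i_aa = 2.0 / alpha**2 - 2.0 / (alpha * beta) + 1.0 / beta**2
    i_ab = 1.0 / beta**2 - 1.0 / (alpha * beta)
    i_bb = 1.0 / beta**2
    return [[i_aa, i_ab], [i_ab, i_bb]]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# sqrt(det I) should equal 1/(alpha*beta), i.e. the form of pi_J in Theorem 4
for alpha, beta in [(3.0, 1.0), (2.0, 1.5), (1.5, 5.0)]:
    d = det2(fisher_info(alpha, beta))
    assert abs(math.sqrt(d) - 1.0 / (alpha * beta)) < 1e-12
```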
The proofs of Theorems 1–4 are deferred to Appendix A. Theorems 1–4 imply that posterior inference is possible under all of the objective priors discussed above. On the other hand, while π_{MDI} and π_{R2} are not second-order probability matching priors, π_{R1} and π_J are, and they coincide. In this regard, prospective users are advised to use π_J. Indeed, the following numerical investigations corroborate this.
While this study evaluates multiple priors, the choice of a prior should be context dependent. For instance, Jeffreys’ prior is recommended in small sample settings due to its second-order probability matching property. On the other hand, the maximal data information (MDI) prior may be more suitable for exploratory analyses where the focus is on leveraging the data’s inherent structure. Practitioners should also consider computational trade-offs; priors requiring complex numerical integration may not be ideal for real-time applications. Guidelines for selecting priors in specific domains, such as meteorology or reliability testing, are provided in Section 4.

3. Implementing the MCMC Algorithm

As is standard in Bayesian analysis, we explicitly state the likelihood function and derive the joint posterior distribution for the parameters ( α , β ) of the Gompertz distribution. The likelihood function for the parameters ( α , β ) given the observed record data r ( k ) = ( r 1 ( k ) , r 2 ( k ) , , r n ( k ) ) is defined as (A2) with a prior distribution π ( α , β ) , and the joint posterior distribution is obtained as (A12) with the form
π(α, β | r^{(k)}) ∝ k^n β^n π(α, β) exp( α Σ_{i=1}^{n} r_i^{(k)} − (βk/α)(e^{α r_n^{(k)}} − 1) ).
The joint posterior distribution lacks a closed-form solution, making direct sampling of (α, β) infeasible; hence, we employ a Markov chain Monte Carlo (MCMC) algorithm, a pivotal tool in Bayesian statistical analysis when dealing with posterior distributions that lack a closed-form solution. In this study, we implement the random-walk Metropolis–Hastings (MH) algorithm [49], a widely used MCMC method, in the R software (version 4.4.0), owing to its flexibility and robustness in handling complex posterior landscapes. This approach is particularly advantageous when posterior distributions are challenging to sample from directly due to their intricate structure, when the conditional posteriors lack standard forms and thus preclude Gibbs sampling (see Appendix A.7), and when the parameter space involves strong dependencies between parameters, which the MH algorithm navigates by adjusting the proposal distribution’s step size.
The MH algorithm demonstrates robust performance in estimating parameters for the Gompertz distribution under the kth record framework. The choice of an appropriate proposal distribution and tuning parameters is critical for achieving high acceptance rates and efficient exploration of the parameter space.
In this study, we employ normal distributions truncated at zero as candidate-generating densities for the MH algorithm. The variance matrix of the proposal distribution is constructed to be proportional to the inverse of the Fisher information matrix. This choice is motivated by the Fisher information’s ability to reflect the local geometry of the parameter space, thereby guiding the proposal distribution toward regions of higher posterior density.
Specifically, let the Fisher information matrix for the parameters θ = ( α , β ) be denoted by I ( θ ) . The variance matrix for the proposal distribution is set as
Σ = c · I(θ)^{−1},
where c > 0 is a scaling factor adjusted during preliminary tuning to achieve an acceptance rate in the range of 10–40%. The truncation ensures that all proposed values remain within the parameter space’s feasible bounds, particularly for parameters like α and β , which are constrained to be positive.
Using this approach offers two main advantages: it provides efficient exploration by aligning the variance matrix with the curvature of the posterior distribution, allowing the algorithm to propose moves that better match the underlying geometry and improve convergence; and it ensures robustness through the Fisher information-based variance matrix, which adapts to the data’s properties and maintains flexibility across different scenarios.
This implementation is further validated through diagnostic checks, including trace plots and autocorrelation analysis, ensuring effective mixing of the Markov chain and convergence to the posterior distribution.
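For the 2 × 2 case, the proposal covariance Σ = c · I(θ)^{−1} can be written in closed form. The sketch below (Python for illustration; the function name and the scaling constant `c` are placeholders, with `c` to be tuned toward the 10–40% acceptance range) uses the approximated Fisher information entries from Section 2.4:

```python
def proposal_cov(alpha, beta, c=1.0):
    """Proposal covariance Sigma = c * I(theta)^{-1} for the random-walk MH
    sampler, from the approximated Fisher information of the Gompertz model."""
    i_aa = 2.0 / alpha**2 - 2.0 / (alpha * beta) + 1.0 / beta**2
    i_ab = 1.0 / beta**2 - 1.0 / (alpha * beta)
    i_bb = 1.0 / beta**2
    det = i_aa * i_bb - i_ab * i_ab        # equals 1/(alpha*beta)^2
    # closed-form inverse of the symmetric 2x2 matrix, scaled by c
    return [[c * i_bb / det, -c * i_ab / det],
            [-c * i_ab / det, c * i_aa / det]]
```

By construction, Σ · I(θ) = c · Id, so larger curvature in a direction yields smaller proposal steps in that direction.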
The steps of the random-walk Metropolis algorithm are as follows:
  • Initialize the parameters α 0 and β 0 .
  • Propose new candidate values α* and β* from the truncated normal distribution centered at the current values with variance matrix Σ.
  • Compute the acceptance probability
    A = min{ 1, π(α*, β* | r^{(k)}) / π(α, β | r^{(k)}) }.
  • Accept the candidate with probability A. If rejected, retain the current values.
  • Repeat steps 2–4 for a pre-specified number of iterations.
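The steps above can be sketched as follows (a Python illustration, under Jeffreys’ prior π_J ∝ 1/(αβ); for brevity, this sketch uses independent Gaussian steps instead of the Fisher-information-based truncated normal proposal, and enforces positivity by assigning a −∞ log posterior outside the support, which keeps the Metropolis ratio valid; the record data, step sizes, and starting point are hypothetical):

```python
import math
import random

def log_post(alpha, beta, rec, k):
    """Log joint posterior of (alpha, beta) given k-th upper records `rec`,
    under Jeffreys' prior pi_J ∝ 1/(alpha*beta), up to an additive constant."""
    if alpha <= 0.0 or beta <= 0.0:
        return -math.inf
    n = len(rec)
    return ((n - 1) * math.log(beta) - math.log(alpha)
            + alpha * sum(rec)
            - (beta * k / alpha) * (math.exp(alpha * rec[-1]) - 1.0))

def rw_metropolis(rec, k, n_iter=20000, step=(0.2, 0.2), init=(1.0, 1.0), seed=1):
    """Random-walk Metropolis sampler with independent Gaussian steps."""
    rng = random.Random(seed)
    alpha, beta = init
    lp = log_post(alpha, beta, rec, k)
    chain = []
    for _ in range(n_iter):
        a_new = alpha + rng.gauss(0.0, step[0])
        b_new = beta + rng.gauss(0.0, step[1])
        lp_new = log_post(a_new, b_new, rec, k)
        diff = lp_new - lp
        # accept with probability min(1, exp(diff)); -inf proposals are rejected
        if diff >= 0.0 or rng.random() < math.exp(diff):
            alpha, beta, lp = a_new, b_new, lp_new
        chain.append((alpha, beta))
    return chain
```

After burn-in and thinning, posterior summaries (medians, HPD intervals) are computed from the retained draws.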
The reliance on iterative numerical procedures for parameter estimation, such as the MCMC algorithm, introduces computational challenges. These include potential divergence, multiple solutions, and dependency on initial conditions. These issues are exacerbated in small sample sizes or high-dimensional parameter spaces. The study addresses these challenges by adopting robust Bayesian frameworks, as detailed in the next section, ensuring convergence and computational feasibility. Practitioners are encouraged to adopt hybrid strategies, combining analytic approximations with iterative refinement, to mitigate these issues in practical applications.

4. Simulation Study

In order to evaluate the impact of the various prior distributions on the posterior, a Monte Carlo simulation analysis is carried out in this section. A total of 100,000 random variates are sampled; the first 40,000 are discarded as a burn-in sample, and each process is repeated 500 times. By targeting the 10–40% acceptance rate, the algorithm balances sufficient exploration of the state space against a reasonable number of accepted moves, as proposed in [50]. The chains are then thinned by a factor of 15 to produce low mutual autocorrelation, and the remaining observations are used to estimate the posterior density. In order to provide reliable performance, the median is chosen as the Bayesian estimator; this intuitively makes sense since the median provides a more robust estimate than the sample mean. We select the highest posterior density intervals (HPDs) as the appropriate Bayesian credible intervals (CIs) due to the non-symmetrical form of the marginal posteriors of the parameters α and β based on records, as recognized in [51,52].
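For completeness, the HPD interval can be computed from the retained posterior draws as the shortest interval containing the desired probability mass (a Python sketch assuming a unimodal marginal posterior; the paper's own computations were done in R):

```python
import math

def hpd_interval(samples, prob=0.95):
    """Highest posterior density interval from MCMC draws: the shortest
    window containing `prob` of the sorted sample (assumes unimodality)."""
    s = sorted(samples)
    m = len(s)
    width = max(1, math.ceil(prob * m))
    best = (s[0], s[width - 1])
    for i in range(m - width + 1):
        lo, hi = s[i], s[i + width - 1]
        if hi - lo < best[1] - best[0]:
            best = (lo, hi)
    return best
```

Unlike equal-tailed intervals, this construction shifts toward the mode, which matters for the skewed record-based marginal posteriors.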
All estimates are evaluated based on their computed values and MSEs. For example, the MSE for α is calculated using the formula
MSE(α) = (1/M) Σ_{i=1}^{M} (α̂_i − α)²,
where α̂_i denotes the estimate of α obtained in the ith simulation, α is the true parameter value, and M is the total number of simulations. The root mean square error (RMSE) is defined as the square root of the MSE, i.e., √MSE. Further, the true parameter values are assigned as (α, β) ∈ {(3, 1), (2, 1.5), (1.5, 2), (1.5, 5), (3, 10)}. The value of k is chosen to be 1 and 2. These values of k are intuitively chosen because, for k = 1, they represent ordinary records that are encountered frequently in practice, indicating a pragmatic viewpoint. Meanwhile, k = 2 emphasizes other aspects of records, including the convergence of their moments, which can be more pronounced in real situations (see, for example, [53]). This consideration becomes particularly significant in machine learning techniques, as the selection of k values can profoundly impact the dimensionality reduction of the problem. For instance, in techniques like Principal Component Analysis (PCA), choosing different values of k alters the number of principal components retained, thus affecting the representation and compression of the data. Therefore, the thoughtful selection of k values is crucial for reducing dimensionality and retaining knowledge in machine learning tasks in an efficient manner (see [54,55,56,57]). The sample sizes are selected as n = 5, 10, 15, and 20 in order to illustrate the performance of the Bayesian estimators with respect to different objective priors.
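The MSE and RMSE computations above are straightforward to sketch (Python for illustration; `estimates` stands for the M replication-wise Bayesian estimates collected in the simulation):

```python
import math

def mse(estimates, true_value):
    """Monte Carlo MSE: average squared deviation over M replications."""
    m = len(estimates)
    return sum((e - true_value) ** 2 for e in estimates) / m

def rmse(estimates, true_value):
    """Root mean square error: the square root of the MSE."""
    return math.sqrt(mse(estimates, true_value))
```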
From the reported values found in Table A1 and Table A2, we can derive the following conclusions:
  • The simulations reveal that larger sample sizes consistently reduce the MSEs for all priors and improve the CPs of the HPD intervals, with values converging to the nominal level of 0.95. This is readily apparent for the instances $(\alpha, \beta) = (1.5, 5)$ and $(\alpha, \beta) = (3, 10)$. In contrast, small sample sizes ($n \le 10$) exhibit higher variability and less reliable coverage probabilities, underscoring the challenges of record-based inference with limited data. For k = 2, the MSEs show trends similar to those observed for k = 1, highlighting the flexibility of Bayesian estimation under different record settings. In general, inference based on record values suffers from the low precision of estimators; this limitation is well recognized in the literature, where similar issues have been observed (see, for example, [19]), and it further confirms the need for such an analysis.
  • For the true values $(\alpha, \beta) = (3, 10)$, the MSEs are largest, while the CPs remain stable, except under the $\pi_{MDI}$ prior.
  • The performance of the Bayesian estimators under Jeffreys' prior $\pi_J$ is generally superior to that under the other priors, as indicated by both the MSEs and CPs. This is reasonable since $\pi_J$ is a second-order probability matching prior, in contrast to $\pi_{MDI}$ and $\pi_{R_2}$. See Figure A2 and Figure A3 for a graphical presentation.
The selection of an appropriate estimator involves balancing its performance with computational cost, especially in scenarios with limited resources or stringent time constraints. High-performing estimators, such as those derived from Bayesian methods using Jeffreys’ prior, often necessitate iterative numerical procedures that can be computationally intensive. In contrast, simpler methods like maximum likelihood estimation (MLE) might be computationally cheaper but exhibit higher bias or variance, particularly in small sample settings.
Moreover, iterative methods introduce potential challenges such as non-convergence or sensitivity to initial conditions, as illustrated in Appendix A.1. Practitioners should weigh these trade-offs based on the application context. For instance, during time-critical decision-making scenarios like disaster response, methods with faster convergence may be prioritized over those yielding marginally better statistical accuracy.

5. Data Analysis

In this section, we examine the performance of the Bayesian estimators based on kth upper record values from the Gompertz distribution. We apply the proposed methods to two datasets of ordinary (k = 1) records: a simulated record dataset (Dataset I) found in [29], and records extracted from a 30-year sequence of annual rainfall amounts (in inches) at the Los Angeles Civic Center (Dataset II) found in [22]. These datasets are shown in Table 1, while Table 2 presents the fitted parameters α and β of the Gompertz model.
The Bayesian techniques for estimating the parameters α and β under the $\pi_{MDI}$, $\pi_{M_2}$, $\pi_{R_2}$ and $\pi_J$ priors are provided here for the sake of comparison. Medians are selected as the Bayes estimates and computed using the MH algorithm. We initially sample 300,000 values for the MCMC chains, discard the first 30,000 as burn-in, and thin the remainder by keeping every 15th observation in order to reduce any potential autocorrelation within the chains. We confirm the convergence of the MH samples through graphical examination. Figure A4 and Figure A5 show the trace plots of the MH sequences for α and β under the priors $\pi_J$ and $\pi_{MDI}$, respectively. The trace plots demonstrate that, for both parameters, the sampled values of α and β are randomly scattered around their averages. Furthermore, the histograms of the MH sequences for α and β in Figure A4 and Figure A5 indicate that the truncated normal distribution is a reasonable choice of proposal distribution. The same conclusions hold under the $\pi_{R_2}$ and $\pi_{M_2}$ priors.
Table 3 lists the Bayesian estimates of α and β together with the corresponding standard deviations (SDs) and 95% highest posterior density (HPD) credible intervals [58]. Based on the SDs and the widths of the HPD intervals, the Bayesian estimates of α are slightly more precise than those of β for all priors. In the case of $\pi_{M_2}$, the Bayesian estimates produce the most accurate results. In addition, the four Bayesian estimates perform similarly, although the prior $\pi_J$ performs slightly better than the others in terms of the width of the credible intervals. This is as expected, since $\pi_J$ is also a probability matching prior. Overall, we may choose the $\pi_J$ prior as the optimal choice.
The results obtained in this study bridge the gap between theoretical development and practical implementation. For instance, the use of Jeffreys’ prior consistently demonstrates superior performance in small sample settings, a common scenario in fields like epidemiology and finance. The ability to generate credible intervals and high posterior density regions underpins applications where uncertainty quantification is critical. Such outcomes reinforce the utility of Bayesian methods for operational risk assessment, weather forecasting, and resource optimization in engineering and public health. The MDI prior may be suitable for exploratory analyses where the primary goal is to leverage inherent data structure. Reference priors are advantageous in computationally constrained settings, offering simplicity and robustness. Practitioners should consider computational feasibility and application-specific requirements, such as the need for rapid decision-making or detailed uncertainty quantification.

6. Conclusions

This paper addresses key challenges in Bayesian inference for the Gompertz model under kth record data, contributing to both theoretical and practical aspects. By developing objective priors, establishing the propriety of posteriors, and employing computationally efficient methods like the MH algorithm, it strengthens the Bayesian framework while ensuring practical applicability. Extensive simulations validate the proposed approach, demonstrating robustness in terms of MSEs and CPs.
A significant contribution of this study is extending the application of objective priors to record-based data, a relatively unexplored area. The finding that neither the MDI prior nor the reference prior π R 2 belongs to the class of probability-matching priors provides valuable insights into prior selection. These results have practical implications, particularly in decision-making contexts involving kth record values.
Another key aspect addressed in this study is the trade-off between the estimator performance and computational cost. While Bayesian methods utilizing priors such as Jeffreys’ prior offer high accuracy, they can also be computationally demanding, especially in resource-limited settings. Practitioners must balance these factors based on specific application needs, such as emergency response or resource allocation during crises.
Furthermore, this research highlights the relevance of Bayesian methods in policy development under extreme conditions, such as pandemics or financial disruptions. The ability to obtain robust estimates from limited and skewed data enables informed decision-making in such high-stakes scenarios. Future work could explore refinements to sampling techniques or the development of alternative priors tailored to specific datasets, further advancing Bayesian inference for complex real-world challenges.
Despite its contributions, this study faces several limitations, primarily due to small sample sizes and the inherent properties of record values.
One major challenge is the reliance on small sample sizes. Since record values represent extreme observations, the dataset is naturally reduced, leading to higher variability in parameter estimates. This limitation affects precision, making it difficult to generalize findings or draw robust inferences, as estimates may be biased with wider credible intervals.
Another limitation arises from the high degree of skewness and asymmetry commonly observed in record value distributions. Such skewness complicates inference, making it challenging to apply traditional statistical techniques. Even Bayesian methods may struggle with convergence issues or sensitivity to prior selection when faced with extreme skewness.
Additionally, the computational complexity of the proposed framework can be a constraint, especially with small and highly skewed samples. Iterative algorithms, such as the MCMC methods employed in this study, may encounter difficulties like non-convergence or dependence on initial conditions, further complicating estimation.
Lastly, the accuracy of posterior estimates depends on the choice of prior distributions. While objective priors provide a theoretically sound basis, their suitability for highly skewed and asymmetric datasets remains an open question. Conducting sensitivity analyses is crucial to ensuring the robustness of conclusions drawn from the Bayesian approach.
While this study advances Bayesian inference for the Gompertz model under kth records, several avenues for future research remain.
One promising direction is extending the Bayesian framework to accommodate multivariate or dependent record data. This could be particularly useful in reliability analysis and epidemiological modeling, where relationships between multiple variables play a crucial role.
Another area for exploration is the integration of alternative prior distributions, especially those informed by domain-specific knowledge. The use of hierarchical or empirical priors may enhance flexibility and improve the accuracy of Bayesian estimates, making them more adaptable to diverse datasets.
Developing more computationally efficient algorithms is also a key area for future research. Tailoring Bayesian inference methods to specific priors or data structures could improve scalability, particularly for applications involving large datasets or real-time decision-making.
Finally, there is potential to apply these Bayesian methods in emerging fields such as machine learning and artificial intelligence. Optimizing predictive models that leverage record-based datasets could enhance decision-making processes across various domains.
By addressing these challenges and opportunities, future research can build upon the findings of this study, further refining Bayesian methodologies and expanding their real-world applications.

Author Contributions

Conceptualization, Z.V.; methodology, Z.V. and L.W.; software, Z.V. and L.W.; validation, Z.V. and L.W.; formal analysis, Z.V. and L.W.; investigation, Z.V. and L.W.; resources, Z.V. and L.W.; data curation, Z.V. and L.W.; writing—original draft preparation, Z.V. and L.W.; writing—review and editing, Z.V. and L.W.; visualization, Z.V. and L.W.; supervision, Z.V. and L.W.; project administration, Z.V. and L.W.; funding acquisition, Z.V. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Liang Wang was supported by the National Natural Science Foundation of China (No. 12061091), the Yunnan Fundamental Research Projects (No. 202401AT070116), and the Yunnan Key Laboratory of Modern Analytical Mathematics and Applications (No. 202302AN360007).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. The Problem on the Existence of MLE for α

Under the first n upper kth record values $r^{(k)} = (r_1^{(k)}, r_2^{(k)}, \ldots, r_n^{(k)})$, the likelihood function for $\theta = (\alpha, \beta)$ is given by (see [16,59])
$$L(\theta \mid r^{(k)}) = k^n \left[1 - F\left(r_n^{(k)}\right)\right]^{k} \prod_{i=1}^{n} \frac{f\left(r_i^{(k)}\right)}{1 - F\left(r_i^{(k)}\right)}. \tag{A1}$$
It follows from (1), (2) and (A1) that
$$L(\alpha, \beta \mid r^{(k)}) = k^n \beta^n \exp\left\{\alpha \sum_{i=1}^{n} r_i^{(k)} - \frac{\beta k}{\alpha}\left(e^{\alpha r_n^{(k)}} - 1\right)\right\}. \tag{A2}$$
The log-likelihood function is obtained as
$$\ell(\alpha, \beta \mid r^{(k)}) = n \ln k + n \ln \beta + \alpha \sum_{i=1}^{n} r_i^{(k)} - \frac{k\beta}{\alpha}\left(e^{\alpha r_n^{(k)}} - 1\right). \tag{A3}$$
Taking the derivatives of $\ell(\alpha, \beta \mid r^{(k)})$ with respect to α and β and setting them to zero, the MLEs of α and β can be obtained by solving the equations $\frac{\partial \ell(\alpha, \beta \mid r^{(k)})}{\partial \alpha} = 0$ and $\frac{\partial \ell(\alpha, \beta \mid r^{(k)})}{\partial \beta} = 0$.
We derive the likelihood equations for α and β as
$$\sum_{i=1}^{n} r_i^{(k)} + \frac{k\beta}{\alpha^2}\left(e^{\alpha r_n^{(k)}} - 1\right) - \frac{k\beta}{\alpha}\, e^{\alpha r_n^{(k)}}\, r_n^{(k)} = 0, \tag{A4}$$
$$\frac{n}{\beta} - \frac{k}{\alpha}\left(e^{\alpha r_n^{(k)}} - 1\right) = 0. \tag{A5}$$
From Equations (A4) and (A5), we find the MLE β ^ as
$$\hat{\beta} = \frac{n\alpha}{k\left(e^{\alpha r_n^{(k)}} - 1\right)}, \tag{A6}$$
while the MLE α ^ can be found by solving the nonlinear equation
$$\frac{n}{\alpha} = g_1(\alpha), \tag{A7}$$
where
$$g_1(\alpha) = \frac{n\, r_n^{(k)}\, e^{\alpha r_n^{(k)}}}{e^{\alpha r_n^{(k)}} - 1} - \sum_{i=1}^{n} r_i^{(k)}. \tag{A8}$$
Let us consider the following function
$$h_1(x) = \frac{e^x}{e^x - 1}, \tag{A9}$$
for x > 0. We have $h_1'(x) = -\frac{e^x}{(e^x - 1)^2} < 0$ for x > 0, indicating that $h_1$ is a monotonically decreasing function. Since $g_1$ arises from $h_1$ through the increasing substitution $x = \alpha r_n^{(k)}$ and a positive scaling, $g_1$ is likewise a decreasing function. Next, we can derive $\lim_{\alpha \to \infty} \frac{n}{\alpha} = 0$ and $\lim_{\alpha \to \infty} g_1(\alpha) = n r_n^{(k)} - \sum_{i=1}^{n} r_i^{(k)} > 0$. All this information can be illustrated with an example.
Let us consider a sample of record values
$$r^{(k)} = (11.47,\ 21.00,\ 27.36,\ 31.01,\ 37.25) \tag{A10}$$
drawn from a Gompertz distribution with parameters α = 0.0049 and β = 0.2459, found in [22]; note that here k = 1. Under this record sample, the function $g_1$ takes the form
$$g_1(\alpha) = \frac{186.25\, e^{37.25\,\alpha}}{e^{37.25\,\alpha} - 1} - 128.09. \tag{A11}$$
In Figure A1, we plot the functions $g_1(\alpha)$ and $n/\alpha$.
From the graph, it is evident that the MLE for α may exist, but iterative numerical methods must be employed for its evaluation. Overall, this largely reduces the simplicity of obtaining the MLEs.
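This can also be verified numerically. The sketch below (an illustration assuming k = 1) profiles out β via (A6) and solves the remaining score equation for α by bisection; for this particular record sample the score changes sign on the search interval, so a root is bracketed:

```python
import math

# Record sample (A10); k = 1.
k = 1
r = [11.47, 21.00, 27.36, 31.01, 37.25]
n, S, rn = len(r), sum(r), r[-1]

def score(a):
    # Profile score for alpha after substituting the MLE of beta from (A6):
    # n/a + sum(r_i) - n * r_n * e^{a r_n} / (e^{a r_n} - 1)
    return n / a + S - n * rn * math.exp(a * rn) / math.expm1(a * rn)

lo, hi = 1e-4, 1.0        # score(lo) > 0 > score(hi) for this sample
for _ in range(100):      # bisection
    mid = 0.5 * (lo + hi)
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
alpha_hat = 0.5 * (lo + hi)
beta_hat = n * alpha_hat / (k * math.expm1(alpha_hat * rn))  # from (A6)
print(alpha_hat, beta_hat)
```

Bisection is chosen here precisely because the discussion above guarantees monotone behavior of the score, so any sign change encloses a unique root.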
Figure A1. Functions $g_1$ and $n/\alpha$.

Appendix A.2. On the Properness of the Posterior

The posterior π ( α , β | r ( k ) ) can be represented as
$$\pi(\alpha, \beta \mid r^{(k)}) \propto L(r^{(k)} \mid \alpha, \beta)\, \pi(\alpha, \beta). \tag{A12}$$
Using (3), (A2) and (A12), the propriety of the posterior reduces to the finiteness of
$$\int_0^{\infty}\!\!\int_0^{\infty} \beta^{n-1} \alpha^{\tau-1} \exp\left\{\alpha \sum_{i=1}^{n} r_i^{(k)} - \beta\, \frac{k}{\alpha}\left(e^{\alpha r_n^{(k)}} - 1\right)\right\} d\beta\, d\alpha \propto \int_0^{\infty} \alpha^{n+\tau-1}\, e^{\alpha \sum_{i=1}^{n} r_i^{(k)}} \left(e^{\alpha r_n^{(k)}} - 1\right)^{-n} d\alpha = \int_0^{\infty} \alpha^{n+\tau-1}\, e^{g(\alpha)}\, d\alpha, \tag{A13}$$
where
$$g(\alpha) = \alpha \sum_{i=1}^{n} r_i^{(k)} - n \ln\left(e^{\alpha r_n^{(k)}} - 1\right). \tag{A14}$$
Its derivative is
$$g'(\alpha) = \sum_{i=1}^{n} r_i^{(k)} - \frac{n\, r_n^{(k)}\, e^{\alpha r_n^{(k)}}}{e^{\alpha r_n^{(k)}} - 1}. \tag{A15}$$
Based on the behavior of (A9), together with $\lim_{\alpha \to \infty} g'(\alpha) = \sum_{i=1}^{n} r_i^{(k)} - n r_n^{(k)} < 0$ and $\lim_{\alpha \to 0^+} g'(\alpha) = -\infty$, we conclude that $g'$ is an increasing function with constant sign $g'(\alpha) < 0$ for all α > 0. This implies that g is a decreasing function for all α > 0, with $\lim_{\alpha \to 0^+} g(\alpha) = +\infty$. Moreover, near the origin $e^{g(\alpha)}$ behaves like a constant multiple of $\alpha^{-n}$, so the integrand of (A13) is of order $\alpha^{\tau-1}$, which is integrable for τ > 0, while for large α the integrand decays exponentially, since $g(\alpha) \sim \alpha\left(\sum_{i=1}^{n} r_i^{(k)} - n r_n^{(k)}\right)$. These facts confirm the convergence of the integral (A13).
For illustration purposes, let $\tau = 1/2$. Based on the record sample (A10), we see that the numerical value of
$$\int_0^{\infty} \alpha^{5 + \frac{1}{2} - 1}\, \frac{e^{128.09\,\alpha}}{\left(e^{37.25\,\alpha} - 1\right)^{5}}\, d\alpha \approx 2.06816 \times 10^{-8},$$
as obtained in Mathematica, implying the convergence of (A13).
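The finiteness of this integral can also be checked with elementary quadrature; the sketch below uses the midpoint rule after the substitution $\alpha = u^2$, which removes the integrable $\alpha^{-1/2}$ singularity at the origin (the tail beyond α = 1 is numerically negligible):

```python
import math

# Integrand of (A13) for the record sample (A10) with n = 5, tau = 1/2:
# alpha^{n + tau - 1} e^{128.09 alpha} / (e^{37.25 alpha} - 1)^5
def f(a):
    return a ** 4.5 * math.exp(128.09 * a) / math.expm1(37.25 * a) ** 5

# Midpoint rule over u in (0, 1) after the substitution a = u^2; the
# transformed integrand f(u^2) * 2u is bounded near u = 0.
N = 200_000
h = 1.0 / N
total = 0.0
for i in range(N):
    u = (i + 0.5) * h
    total += f(u * u) * 2.0 * u * h
print(total)  # small and finite, on the order of 1e-8
```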

Appendix A.3. Proof of Theorem 1

For parts (a) and (b), following [60], the solutions of the partial differential equations
$$\frac{\partial}{\partial \alpha}\left\{\alpha\, \pi_{M_1}(\alpha, \beta)\right\} + \frac{\partial}{\partial \beta}\left\{\beta\, \pi_{M_1}(\alpha, \beta)\right\} = 0 \tag{A16}$$
and
$$\frac{\partial}{\partial \alpha}\left\{\alpha^{1/2}\, \pi_{M_2}(\alpha, \beta)\right\} + \frac{\partial}{\partial \beta}\left\{\alpha^{1/2} \beta\, \pi_{M_2}(\alpha, \beta)\right\} = 0 \tag{A17}$$
provide us with the approximate probability matching priors $\pi_{M_1}$ and $\pi_{M_2}$ of Theorem 1 for the parameters of the Gompertz model, which satisfy the second-order property. Equations (A16) and (A17) employ the elements of the inverse of the FI matrix (A18), which can be approximated as
$$I^{-1} \propto \begin{pmatrix} \alpha^2 & \alpha\beta \\ \alpha\beta & \alpha\beta^2 \end{pmatrix}. \tag{A18}$$
Part (c) follows the same steps as in Appendix A.2. This completes the proof.

Appendix A.4. Proof of Theorem 2

The MDI prior for $\theta = (\alpha, \beta)$ takes the following form, in view of Zellner [44]:
$$\pi_{MDI}(\alpha, \beta) \propto \exp\{H(\alpha, \beta)\},$$
where $H(\alpha, \beta) = E\left(\ln f_{R_n^{(k)}}(R_n^{(k)})\right)$, and $f_{R_n^{(k)}}$ is the density function of the nth upper kth record value from the Gompertz distribution, given by (see [16])
$$f_{R_n^{(k)}}(x) = \frac{k^n}{\Gamma(n)}\left(-\ln(1 - F(x))\right)^{n-1}\left(1 - F(x)\right)^{k-1} f(x) \propto \frac{\beta^n}{\alpha^{n-1}}\left(e^{\alpha x} - 1\right)^{n-1} \exp\left\{\alpha x - \frac{\beta k}{\alpha}\left(e^{\alpha x} - 1\right)\right\},$$
for x > 0 . Observe that
$$H(\alpha, \beta) \approx \ln \beta + \frac{\alpha}{\beta}.$$
This can essentially be justified using the relation $E(\phi(X)) \approx \phi(E(X))$, where $\phi(\cdot)$ is a continuous function. Next, it follows that the MDI prior has the form
$$\pi_{MDI}(\alpha, \beta) = \exp\left\{E\left(\ln f_{R_n^{(k)}}(R_n^{(k)})\right)\right\} \approx \beta\, e^{\alpha/\beta}. \tag{A20}$$
However, it should be noted that the researcher's intuition assigns more weight to α alone as the only exponent within this prior. Moreover, the parameter β, which enters as an argument of the exponential function, acts as a location parameter of the Gompertz model, suggesting its modest impact. Consequently, we may alter the earlier MDI prior to
$$\pi_{MDI}(\alpha, \beta) \propto \beta\, e^{\alpha}. \tag{A21}$$
Furthermore, this makes sense because next it will be shown that the posterior under prior (A21) is proper, allowing for the use of numerical analyses in contrast to prior (A20).
For part (b), using (A2), (A12) and (A21), we have that
$$\pi(\alpha, \beta \mid r^{(k)}) \propto \int_0^{\infty}\!\!\int_0^{\infty} \beta^{n+1} \exp\left\{\alpha\left(1 + \sum_{i=1}^{n} r_i^{(k)}\right) - \frac{\beta k}{\alpha}\left(e^{\alpha r_n^{(k)}} - 1\right)\right\} d\beta\, d\alpha \propto \int_0^{\infty} \alpha^{n+2}\, e^{\alpha\left(1 + \sum_{i=1}^{n} r_i^{(k)}\right)} \left(e^{\alpha r_n^{(k)}} - 1\right)^{-(n+2)} d\alpha. \tag{A22}$$
The last integral can easily be seen to converge by slight changes of the argument used for (A13).

Appendix A.5. Proof of Theorem 3

For part (a), when α is the parameter of interest, the conditional prior distribution of β given α can be defined based on the FI matrix (A18) as
$$\pi(\beta \mid \alpha) \propto \frac{1}{\beta}. \tag{A23}$$
Then, by choosing a sequence of compact sets $\Omega_i = (d_{1i}, d_{2i}) \times (d_{3i}, d_{4i})$ for $(\alpha, \beta)$ such that $d_{1i}, d_{3i} \to 0$ and $d_{2i}, d_{4i} \to \infty$ as $i \to \infty$, it follows that
$$k_{1i}(\alpha)^{-1} = \int_{d_{3i}}^{d_{4i}} \pi(\beta \mid \alpha)\, d\beta = \int_{d_{3i}}^{d_{4i}} \frac{1}{\beta}\, d\beta = \ln d_{4i} - \ln d_{3i},$$
and
$$p_i(\beta \mid \alpha) = k_{1i}(\alpha)\, \pi(\beta \mid \alpha) = \frac{1}{\beta}\left(\ln d_{4i} - \ln d_{3i}\right)^{-1}.$$
The marginal reference prior for α can then be defined based on the FI matrix (A18) and (A23) as
$$\pi_i(\alpha) = \exp\left\{\frac{1}{2} \int_{d_{3i}}^{d_{4i}} p_i(\beta \mid \alpha)\, \ln\left(\frac{\det I(\alpha, \beta)}{I_{22}}\right) d\beta\right\} \propto \frac{1}{\alpha},$$
where $I_{22} = \frac{1}{\beta^2}$. Hence, the following reference prior is produced:
$$\pi_{R_1}(\alpha, \beta) = \lim_{i \to \infty} \frac{k_{1i}(\alpha)\, \pi_i(\alpha)}{k_{1i}(\alpha_0)\, \pi_i(\alpha_0)}\, \pi(\beta \mid \alpha) \propto \frac{1}{\alpha \beta},$$
for any fixed point $\alpha_0$. When β is the parameter of interest, the same procedure applies. We may set
$$\pi(\alpha \mid \beta) \propto \frac{1}{\alpha \beta}.$$
Then,
$$k_{2i}(\beta)^{-1} = \int_{d_{1i}}^{d_{2i}} \pi(\alpha \mid \beta)\, d\alpha = \frac{1}{\beta}\left(\ln d_{2i} - \ln d_{1i}\right)$$
and
$$p_i(\alpha \mid \beta) = k_{2i}(\beta)\, \pi(\alpha \mid \beta) \propto \frac{1}{\alpha}.$$
Therefore, the marginal reference prior for β can be produced as
$$\pi_i(\beta) = \exp\left\{\frac{1}{2} \int_{d_{1i}}^{d_{2i}} p_i(\alpha \mid \beta)\, \ln\left(\frac{\det I}{I_{11}}\right) d\alpha\right\} \propto \frac{1}{\beta^{1/2}},$$
where $I_{11} \propto \frac{1}{\alpha \beta}$. With this, we can obtain the reference prior as
$$\pi_{R_2}(\beta, \alpha) = \lim_{i \to \infty} \frac{k_{2i}(\beta)\, \pi_i(\beta)}{k_{2i}(\beta_0)\, \pi_i(\beta_0)}\, \pi(\alpha \mid \beta) \propto \frac{1}{\alpha\, \beta^{1/2}},$$
for any fixed point β 0 .
Part (b) directly holds true because π R 1 = π J (see Section 2), so all properties are inherited.

Appendix A.6. Proof of Theorem 4

Part (a) can be proved quite directly, while part (b) follows by noting that
$$\int_0^{\infty}\!\!\int_0^{\infty} \alpha^{-1} \beta^{n - 1/2} \exp\left\{\alpha \sum_{i=1}^{n} r_i^{(k)} - \frac{\beta k}{\alpha}\left(e^{\alpha r_n^{(k)}} - 1\right)\right\} d\beta\, d\alpha \propto \int_0^{\infty} \alpha^{n - 1/2}\, e^{\alpha \sum_{i=1}^{n} r_i^{(k)}} \left(e^{\alpha r_n^{(k)}} - 1\right)^{-(n + 1/2)} d\alpha.$$
Hence, following the same steps as in the proof in Appendix A.2, we conclude the statement of this Theorem.

Appendix A.7. Conditional Posterior Distributions

1. Conditional posterior for α (given β ) under the prior (3): The conditional posterior for α is derived by treating β as fixed:
$$\pi(\alpha \mid \beta, R^{(k)}) \propto \alpha^{\tau - 1} \exp\left\{\alpha \sum_{i=1}^{n} R_i^{(k)} - \frac{\beta k}{\alpha}\left(e^{\alpha R_n^{(k)}} - 1\right)\right\}.$$
2. Conditional posterior for β (given α ): The conditional posterior for β is derived by treating α as fixed under the prior (3):
$$\pi(\beta \mid \alpha, R^{(k)}) \propto \beta^{n-1} \exp\left\{-\beta\, \frac{k}{\alpha}\left(e^{\alpha R_n^{(k)}} - 1\right)\right\},$$
which is the kernel of a Gamma distribution with shape n and rate $\frac{k}{\alpha}\left(e^{\alpha R_n^{(k)}} - 1\right)$.
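Together, these conditionals suggest a Gibbs-within-MH scheme: β can be drawn exactly from its Gamma conditional, while α is updated with an MH step. The sketch below applies this to the record sample of Appendix A.1 with the illustrative choices k = 1 and τ = 1/2, and uses a Gaussian random walk on ln α as the proposal instead of the truncated normal proposal used in the main analysis; the chain length, burn-in, thinning, and step size are likewise illustrative:

```python
import math, random

random.seed(1)

# Records from Appendix A.1 (k = 1); tau = 1/2 is an illustrative choice.
r = [11.47, 21.00, 27.36, 31.01, 37.25]
n, k, tau = len(r), 1, 0.5
S, rn = sum(r), r[-1]

def log_target_alpha(a, b):
    # Kernel in alpha of the likelihood (A2) times the prior alpha^{tau-1}/beta.
    if a <= 0:
        return -math.inf
    return (tau - 1.0) * math.log(a) + a * S - (b * k / a) * math.expm1(a * rn)

alpha = 0.05
alphas, betas = [], []
for it in range(6000):
    # Exact Gibbs draw: beta | alpha ~ Gamma(n, rate = (k/alpha)(e^{alpha r_n}-1))
    rate = (k / alpha) * math.expm1(alpha * rn)
    beta = random.gammavariate(n, 1.0 / rate)
    # MH update for alpha: Gaussian random walk on log(alpha)
    prop = alpha * math.exp(random.gauss(0.0, 0.3))
    log_acc = (log_target_alpha(prop, beta) - log_target_alpha(alpha, beta)
               + math.log(prop / alpha))  # Jacobian of the log-scale walk
    if math.log(random.random()) < log_acc:
        alpha = prop
    if it >= 1000 and it % 5 == 0:  # burn-in, then thin by 5
        alphas.append(alpha)
        betas.append(beta)

alphas.sort(); betas.sort()
print(alphas[len(alphas) // 2], betas[len(betas) // 2])  # posterior medians
```

The posterior medians printed at the end play the role of the Bayes estimates used throughout the simulation study.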
Figure A2. MSEs for α and β posteriors across all priors for record sample size n = 20 and k = 1 .
Figure A3. CPs for α and β posteriors across all priors for record sample size n = 20 and k = 1 .
Figure A4. Histogram, trace plots, and ACF plots for the α and β posteriors under Jeffreys' prior for Dataset I.
Figure A5. Histogram, trace plots, and ACF plots for the α and β posteriors under the MDI prior for Dataset I.
Table A1. Empirical MSEs (CPs in parentheses) of Bayesian estimators based on priors π J , π R 2 , π M 2 , and π M D I for k = 1 .
Prior | Parameter | n = 5 | n = 10 | n = 15 | n = 20
α = 3, β = 1
$\pi_J$ | α | 1.7362 (0.932) | 1.2102 (0.932) | 1.0539 (0.902) | 0.7827 (0.924)
$\pi_J$ | β | 1.6424 (0.978) | 1.9788 (0.974) | 1.925 (0.948) | 1.6002 (0.972)
$\pi_{MDI}$ | α | 1.7592 (0.97) | 0.9793 (0.932) | 0.8322 (0.92) | 0.7636 (0.904)
$\pi_{MDI}$ | β | 1.1298 (0.998) | 1.7745 (0.974) | 1.8069 (0.978) | 1.6755 (0.98)
$\pi_{R_2}$ | α | 1.8439 (0.842) | 1.4707 (0.79) | 1.0848 (0.85) | 0.8439 (0.878)
$\pi_{R_2}$ | β | 2.377 (0.964) | 2.785 (0.924) | 2.4906 (0.952) | 1.9889 (0.96)
$\pi_{M_2}$ | α | 1.5546 (0.978) | 1.2402 (0.924) | 0.8938 (0.952) | 0.7753 (0.966)
$\pi_{M_2}$ | β | 1.0959 (0.988) | 1.4218 (0.97) | 1.1956 (0.968) | 1.0505 (0.974)
α = 2, β = 1.5
$\pi_J$ | α | 1.4957 (0.942) | 1.048 (0.886) | 0.8575 (0.884) | 0.7146 (0.894)
$\pi_J$ | β | 1.3724 (0.978) | 2.146 (0.938) | 2.1382 (0.94) | 2.1277 (0.93)
$\pi_{MDI}$ | α | 1.3398 (0.984) | 0.7242 (0.922) | 0.6621 (0.9) | 0.5555 (0.94)
$\pi_{MDI}$ | β | 1.1552 (1) | 1.9265 (0.986) | 2.1219 (0.958) | 1.9719 (0.968)
$\pi_{R_2}$ | α | 1.3055 (0.88) | 1.0662 (0.806) | 0.8812 (0.808) | 0.7455 (0.846)
$\pi_{R_2}$ | β | 2.2305 (0.96) | 2.5247 (0.944) | 2.6387 (0.912) | 2.6454 (0.92)
$\pi_{M_2}$ | α | 1.5645 (0.982) | 1.0097 (0.948) | 0.7429 (0.938) | 0.6122 (0.948)
$\pi_{M_2}$ | β | 1.0612 (0.978) | 1.4921 (0.962) | 1.5654 (0.95) | 1.469 (0.946)
α = 1.5, β = 2
$\pi_J$ | α | 0.9934 (0.972) | 0.8581 (0.9) | 0.6904 (0.892) | 0.5961 (0.91)
$\pi_J$ | β | 1.5894 (0.972) | 1.9405 (0.958) | 2.1222 (0.948) | 2.1179 (0.93)
$\pi_{MDI}$ | α | 1.8438 (0.984) | 0.5918 (0.956) | 0.5644 (0.882) | 0.4974 (0.89)
$\pi_{MDI}$ | β | 1.0913 (0.996) | 1.8687 (0.996) | 2.3238 (0.952) | 2.1677 (0.96)
$\pi_{R_2}$ | α | 0.8766 (0.944) | 0.844 (0.842) | 0.7265 (0.81) | 0.6344 (0.806)
$\pi_{R_2}$ | β | 1.9598 (0.97) | 2.3987 (0.956) | 2.6637 (0.938) | 2.7873 (0.908)
$\pi_{M_2}$ | α | 1.3672 (0.99) | 0.8012 (0.974) | 0.6592 (0.93) | 0.5548 (0.93)
$\pi_{M_2}$ | β | 1.1989 (0.982) | 1.434 (0.968) | 1.7181 (0.94) | 1.6663 (0.934)
α = 1.5, β = 5
$\pi_J$ | α | 2.4513 (0.962) | 1.049 (0.946) | 0.8486 (0.91) | 0.7696 (0.868)
$\pi_J$ | β | 3.805 (0.94) | 2.8276 (0.956) | 3.6008 (0.934) | 3.7584 (0.918)
$\pi_{MDI}$ | α | 12.7472 (0.706) | 1.4345 (0.968) | 0.6719 (0.962) | 0.5508 (0.958)
$\pi_{MDI}$ | β | 3.2466 (0.754) | 2.276 (0.98) | 2.6135 (0.986) | 3.117 (0.97)
$\pi_{R_2}$ | α | 1.192 (0.962) | 0.9704 (0.894) | 0.8767 (0.834) | 0.7855 (0.808)
$\pi_{R_2}$ | β | 4.1568 (0.962) | 3.4359 (0.946) | 3.8024 (0.928) | 4.4419 (0.89)
$\pi_{M_2}$ | α | 2.8096 (0.982) | 1.1815 (0.982) | 0.9151 (0.928) | 0.6964 (0.912)
$\pi_{M_2}$ | β | 2.5463 (0.954) | 2.4604 (0.962) | 2.8278 (0.912) | 3.1741 (0.936)
α = 3, β = 10
$\pi_J$ | α | 4.3602 (0.966) | 2.4604 (0.914) | 1.8525 (0.88) | 1.6201 (0.848)
$\pi_J$ | β | 6.3537 (0.954) | 6.8124 (0.942) | 7.7799 (0.906) | 8.325 (0.916)
$\pi_{MDI}$ | α | 44.3649 (0.126) | 6.8346 (0.656) | 2.2841 (0.898) | 1.4796 (0.926)
$\pi_{MDI}$ | β | 9.6124 (0.058) | 6.4391 (0.69) | 4.9064 (0.886) | 4.8237 (0.92)
$\pi_{R_2}$ | α | 2.5706 (0.952) | 1.9613 (0.89) | 1.7866 (0.826) | 1.6454 (0.79)
$\pi_{R_2}$ | β | 7.178 (0.946) | 7.3819 (0.942) | 8.6658 (0.896) | 9.5319 (0.872)
$\pi_{M_2}$ | α | 6.2331 (0.97) | 2.5782 (0.966) | 1.7945 (0.94) | 1.4292 (0.922)
$\pi_{M_2}$ | β | 5.7429 (0.944) | 5.2623 (0.932) | 5.7694 (0.944) | 6.4812 (0.926)
Table A2. Empirical MSEs (CPs in parentheses) of Bayesian estimators based on priors π J , π R 2 , π M 2 , and π M D I for k = 2 .
Prior | Parameter | n = 5 | n = 10 | n = 15 | n = 20
α = 3, β = 1
$\pi_J$ | α | 2.1334 (0.906) | 1.455 (0.904) | 1.1254 (0.912) | 0.9594 (0.91)
$\pi_J$ | β | 1.2199 (0.96) | 1.5031 (0.978) | 1.5222 (0.956) | 1.3702 (0.964)
$\pi_{MDI}$ | α | 3.0018 (0.944) | 1.0543 (0.958) | 0.8895 (0.936) | 0.7713 (0.942)
$\pi_{MDI}$ | β | 0.6066 (0.988) | 1.0881 (0.994) | 1.1939 (0.978) | 1.2017 (0.98)
$\pi_{R_2}$ | α | 1.9617 (0.858) | 1.6118 (0.816) | 1.2976 (0.844) | 1.0139 (0.87)
$\pi_{R_2}$ | β | 1.6104 (0.966) | 1.8605 (0.946) | 1.9594 (0.94) | 1.6754 (0.954)
$\pi_{M_2}$ | α | 2.6099 (0.974) | 1.3474 (0.936) | 1.0111 (0.946) | 0.8385 (0.942)
$\pi_{M_2}$ | β | 0.7729 (0.99) | 1.0413 (0.966) | 1.1612 (0.962) | 1.0505 (0.952)
α = 2, β = 1.5
$\pi_J$ | α | 1.4819 (0.97) | 1.1575 (0.9) | 1.0355 (0.862) | 0.8459 (0.88)
$\pi_J$ | β | 1.1532 (0.976) | 1.4445 (0.948) | 1.7347 (0.898) | 1.6963 (0.922)
$\pi_{MDI}$ | α | 6.8213 (0.892) | 0.9073 (0.972) | 0.7113 (0.94) | 0.6326 (0.914)
$\pi_{MDI}$ | β | 0.7371 (0.962) | 1.183 (0.994) | 1.3923 (0.98) | 1.4029 (0.972)
$\pi_{R_2}$ | α | 1.2328 (0.928) | 1.1683 (0.842) | 1.0485 (0.812) | 0.8664 (0.83)
$\pi_{R_2}$ | β | 1.6365 (0.95) | 1.7396 (0.952) | 2.0795 (0.912) | 1.9476 (0.934)
$\pi_{M_2}$ | α | 2.5595 (0.986) | 1.0307 (0.968) | 0.8997 (0.922) | 0.7498 (0.914)
$\pi_{M_2}$ | β | 1.1424 (0.974) | 1.1226 (0.98) | 1.2761 (0.952) | 1.2239 (0.946)
α = 1.5, β = 2
$\pi_J$ | α | 1.6862 (0.964) | 0.9628 (0.934) | 0.8384 (0.862) | 0.7328 (0.886)
$\pi_J$ | β | 1.1864 (0.956) | 1.3059 (0.956) | 1.7547 (0.936) | 1.7597 (0.94)
$\pi_{MDI}$ | α | 6.7843 (0.852) | 0.8998 (0.982) | 0.624 (0.958) | 0.5602 (0.928)
$\pi_{MDI}$ | β | 1.0036 (0.946) | 1.1376 (0.988) | 1.3229 (0.994) | 1.5534 (0.962)
$\pi_{R_2}$ | α | 0.869 (0.954) | 0.9311 (0.896) | 0.8535 (0.82) | 0.7681 (0.784)
$\pi_{R_2}$ | β | 1.7139 (0.96) | 1.5577 (0.962) | 1.8474 (0.942) | 2.1392 (0.892)
$\pi_{M_2}$ | α | 2.388 (0.988) | 0.9196 (0.984) | 0.8015 (0.95) | 0.6599 (0.922)
$\pi_{M_2}$ | β | 1.0435 (0.972) | 1.0425 (0.976) | 1.3135 (0.952) | 1.3494 (0.946)
α = 1.5, β = 5
$\pi_J$ | α | 2.9618 (0.97) | 1.3554 (0.952) | 0.9985 (0.936) | 0.9052 (0.906)
$\pi_J$ | β | 2.8079 (0.93) | 2.3359 (0.94) | 2.2688 (0.97) | 2.7031 (0.94)
$\pi_{MDI}$ | α | 33.0919 (0.262) | 3.2629 (0.816) | 1.2834 (0.954) | 0.7814 (0.97)
$\pi_{MDI}$ | β | 4.4792 (0.214) | 2.3849 (0.878) | 1.9536 (0.952) | 2.0055 (0.968)
$\pi_{R_2}$ | α | 1.614 (0.974) | 1.0475 (0.954) | 0.9707 (0.89) | 0.873 (0.878)
$\pi_{R_2}$ | β | 3.6881 (0.97) | 2.5887 (0.958) | 2.5859 (0.948) | 3.0233 (0.932)
$\pi_{M_2}$ | α | 4.3126 (0.994) | 1.5492 (0.986) | 1.0858 (0.986) | 0.8529 (0.956)
$\pi_{M_2}$ | β | 2.611 (0.908) | 2.1368 (0.958) | 2.0915 (0.956) | 2.1985 (0.956)
α = 3, β = 10
$\pi_J$ | α | 7.3097 (0.966) | 2.8326 (0.96) | 2.2558 (0.9) | 1.8764 (0.86)
$\pi_J$ | β | 6.8628 (0.922) | 4.4389 (0.946) | 5.1001 (0.944) | 5.6842 (0.924)
$\pi_{MDI}$ | α | 84.4238 (0.004) | 20.9314 (0.27) | 5.8657 (0.634) | 2.7898 (0.824)
$\pi_{MDI}$ | β | 9.9741 (0) | 8.4092 (0.216) | 5.9635 (0.634) | 4.6369 (0.8)
$\pi_{R_2}$ | α | 4.1755 (0.972) | 2.2731 (0.94) | 1.9344 (0.908) | 1.8518 (0.84)
$\pi_{R_2}$ | β | 6.1125 (0.96) | 5.4758 (0.966) | 5.3633 (0.944) | 6.3066 (0.934)
$\pi_{M_2}$ | α | 10.8352 (0.976) | 3.218 (0.988) | 2.1488 (0.964) | 1.7812 (0.946)
$\pi_{M_2}$ | β | 5.7046 (0.88) | 4.1613 (0.952) | 4.3392 (0.952) | 4.5652 (0.94)

References

  1. Troynikov, V.S.; Day, R.W.; Leorke, A.M. Estimation of seasonal growth parameters using a stochastic Gompertz model for tagging data. J. Shellfish Res. 1998, 17, 833–838.
  2. Moura, N.J.; Ribeiro, M.B. Evidence for the Gompertz curve in the income distribution of Brazil 1978–2005. Eur. Phys. J. B 2009, 67, 101–120.
  3. Pollard, J.H.; Valkovics, E.J. The Gompertz distribution and its applications. Genus 1992, 48, 15–28.
  4. Mueller, L.D.; Nusbaum, T.J.; Rose, M.R. The Gompertz equation as a predictive tool in demography. Exp. Gerontol. 1995, 30, 553–569.
  5. Olshansky, S.J.; Carnes, B.A. Ever since Gompertz. Demography 1997, 34, 1–15.
  6. El-Gohary, A.; Alshamrani, A.; Al-Otaibi, A.N. The generalized Gompertz distribution. Appl. Math. Model. 2013, 37, 13–24.
  7. Jafari, A.A.; Tahmasebi, S.; Alizadeh, M. The beta-Gompertz distribution. Rev. Colomb. Estadística 2014, 37, 141–158.
  8. Shama, M.S.; Dey, S.; Altun, E.; Afify, A.Z. The gamma–Gompertz distribution: Theory and applications. Math. Comput. Simul. 2022, 193, 689–712.
  9. Roozegar, R.; Tahmasebi, S.; Jafari, A.A. The McDonald Gompertz distribution: Properties and applications. Commun. Stat. Simul. Comput. 2017, 46, 3341–3355.
  10. Ahmadi, J.; Arghami, N.R. Comparing the Fisher information in record values and iid observations. Statistics 2003, 37, 435–441.
  11. Chandler, K.N. The distribution and frequency of record values. J. R. Stat. Soc. Ser. B Methodol. 1952, 14, 220–228.
  12. Berger, M.; Gulati, S. Record-breaking data: A parametric comparison of the inverse-sampling and the random-sampling schemes. J. Stat. Comput. Simul. 2001, 69, 225–238.
  13. Glick, N. Breaking records and breaking boards. Am. Math. Mon. 1978, 85, 2–26.
  14. Benestad, R.E. How often can we expect a record event? Clim. Res. 2003, 25, 3–13.
  15. Kauffman, S.; Levin, S. Towards a general theory of adaptive walks on rugged landscapes. J. Theor. Biol. 1987, 128, 11–45.
  16. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. Records; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 768.
  17. Nevzorov, V.B. Records: Mathematical Theory; Translations of Mathematical Monographs; AMS: Providence, RI, USA, 2000.
  18. Dziubdziela, W.; Kopociński, B. Limiting properties of the k-th record values. Appl. Math. 1976, 2, 187–190.
  19. Vidović, Z. On MLEs of the parameters of a modified Weibull distribution based on record values. J. Appl. Stat. 2018, 46, 715–724.
  20. Wang, L.; Shi, Y.; Yan, W. Inference for Gompertz distribution under records. J. Syst. Eng. Electron. 2016, 27, 271–278.
  21. Laji, M.; Chacko, M. Inference on Gompertz distribution based on upper k-record values. J. Kerala Stat. Assoc. 2019, 30, 47–63.
  22. Hemmati, A.; Khodadadi, Z.; Zare, K.; Jafarpour, H. Bayesian and Classical Estimation of Strength-Stress Reliability for Gompertz Distribution Based on Upper Record Values. J. Math. Ext. 2022, 16, 1–27.
  23. Tripathi, A.; Singh, U.; Kumar Singh, S. Estimation of P(X < Y) for Gompertz distribution based on upper records. Int. J. Model. Simul. 2022, 42, 388–399.
  24. El-Bassiouny, A.H.; Medhat, E.D.; Mustafa, A.; Eliwa, M.S. Characterization of the generalized Weibull-Gompertz distribution based on the upper record values. Int. J. Math. Its Appl. 2015, 3, 13–22.
  25. Kumar, D.; Wang, L.; Dey, S.; Salehi, M. Inference on generalized inverted exponential distribution based on record values and inter-record times. Afr. Mat. 2022, 33, 73.
  26. Yu, Y.; Wang, L.; Dey, S.; Liu, J. Estimation of stress-strength reliability from unit-Burr III distribution under records data. Math. Biosci. Eng. 2023, 20, 12360–12379.
  27. Bashir, S.; Qureshi, A. Gompertz-Exponential Distribution: Record Value Theory and Applications in Reliability. Istat. J. Turk. Stat. Assoc. 2022, 14, 27–37.
  28. Vidović, Z.; Nikolić, J.; Perić, Z. Properties of k-record posteriors for the Weibull model. Stat. Theory Relat. Fields 2024, 8, 152–162.
  29. Jaheen, Z.F. A Bayesian analysis of record statistics from the Gompertz model. Appl. Math. Comput. 2003, 145, 307–320.
  30. Tian, Q.; Lewis-Beck, C.; Niemi, J.B.; Meeker, W.Q. Specifying prior distributions in reliability applications. Appl. Stoch. Model. Bus. Ind. 2023, 40, 5–62.
  31. Xu, A.; Fang, G.; Zhuang, L.; Gu, C. A multivariate student-t process model for dependent tail-weighted degradation data. IISE Trans. 2024.
  32. Luo, F.; Hu, L.; Wang, Y.; Yu, X. Statistical inference of reliability for a K-out-of-N: G system with switching failure under Poisson shocks. Stat. Theory Relat. Fields 2024, 8, 195–210.
  33. Gugushvili, S.; Spreij, P. Nonparametric Bayesian drift estimation for multidimensional stochastic differential equations. Lith. Math. J. 2014, 54, 127–141.
  34. Kass, R.E.; Wasserman, L. The selection of prior distributions by formal rules. J. Am. Stat. Assoc. 1996, 91, 1343–1370.
  35. Ramos, P.L.; Achcar, J.A.; Moala, F.A.; Ramos, E.; Louzada, F. Bayesian analysis of the generalized gamma distribution using non-informative priors. Statistics 2017, 51, 824–843.
  36. Shakhatreh, M.K.; Dey, S.; Alodat, M.T. Objective Bayesian analysis for the differential entropy of the Weibull distribution. Appl. Math. Model. 2021, 89, 314–332.
  37. Ramos, P.L.; Almeida, M.H.; Louzada, F.; Flores, E.; Moala, F.A. Objective Bayesian inference for the Capability index of the Weibull distribution and its generalization. Comput. Ind. Eng. 2022, 167, 108012.
  38. Xu, A.; Fu, J.; Tang, Y.; Guan, Q. Bayesian analysis of constant-stress accelerated life test for the Weibull distribution using noninformative priors. Appl. Math. Model. 2015, 39, 6183–6195.
  39. Kim, Y.; Seo, J.I. Objective Bayesian Prediction of Future Record Statistics Based on the Exponentiated Gumbel Distribution: Comparison with Time-Series Prediction. Symmetry 2020, 12, 1443.
  40. Vidović, Z.; Nikolić, J.; Perić, Z. Bayesian k-record analysis for the Lomax distribution using objective priors. J. Math. Ext. 2024, 18, 1–23.
  41. Consonni, G.; Fouskakis, D.; Liseo, B.; Ntzoufras, I. Prior distributions for objective Bayesian analysis. Bayesian Anal. 2018, 13, 627–679.
  42. Datta, G.S.; Mukerjee, R. Probability Matching Priors: Higher Order Asymptotics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004; Volume 178.
  43. Lindley, D.V. On a measure of the information provided by an experiment. Ann. Math. Stat. 1956, 27, 986–1005.
  44. Zellner, A. Maximal data information prior distributions. In New Developments in the Applications of Bayesian Methods; Elsevier: Amsterdam, The Netherlands, 1977; pp. 211–232. [Google Scholar]
  45. Berger, J.O.; Bernardo, J.M.; Sun, D. The formal definition of reference priors. arXiv 2009, arXiv:0904.0156. [Google Scholar] [CrossRef]
  46. Bernardo, J.M. Reference posterior distributions for Bayesian inference. J. R. Stat. Soc. Ser. B Methodol. 1979, 41, 113–128. [Google Scholar] [CrossRef]
  47. Berger, J.O.; Bernardo, J.M. Ordered group reference priors with application to the multinomial problem. Biometrika 1992, 79, 25–37. [Google Scholar] [CrossRef]
  48. Jeffreys, H. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1946, 186, 453–461. [Google Scholar]
  49. Gelman, A.; Gilks, W.R.; Roberts, G.O. Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab. 1997, 7, 110–120. [Google Scholar] [CrossRef]
  50. Neal, P.; Roberts, G. Optimal scaling for random walk Metropolis on spherically constrained target densities. Methodol. Comput. Appl. Probab. 2008, 10, 277–297. [Google Scholar] [CrossRef]
  51. Empacher, C.; Kamps, U.; Volovskiy, G. Statistical Prediction of Future Sports Records Based on Record Values. Stats 2023, 6, 131–147. [Google Scholar] [CrossRef]
  52. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar] [CrossRef]
  53. Hofmann, G.; Balakrishnan, N. Fisher information in k-records. Ann. Inst. Stat. Math. 2004, 56, 383–396. [Google Scholar] [CrossRef]
  54. Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2009; Volume 2. [Google Scholar]
  55. Jolliffe, I.T. Principal component analysis. Technometrics 2003, 45, 276. [Google Scholar]
  56. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4. [Google Scholar]
  57. Tibshirani, R.; Walther, G.; Hastie, T. Estimating the number of clusters in a data set via the gap statistic. J. R. Stat. Soc. Ser. B Stat. Methodol. 2001, 63, 411–423. [Google Scholar] [CrossRef]
  58. Turkkan, N.; Pham-Gia, T. Computation of the highest posterior density interval in Bayesian analysis. J. Stat. Comput. Simul. 1993, 44, 243–250. [Google Scholar] [CrossRef]
  59. Ahsanullah, M. Record values of the Lomax distribution. Stat. Neerl. 1991, 45, 21–29. [Google Scholar] [CrossRef]
  60. Peers, H.W. On confidence points and Bayesian probability points in the case of several parameters. J. R. Stat. Soc. Ser. B Methodol. 1965, 27, 9–16. [Google Scholar] [CrossRef]
Figure 1. RMSE and CP of the Bayesian estimates of parameter α based on prior (3) with τ = 1, 5 and 9. The left panel (a) shows the RMSE; the right panel (b) shows the CP.
Table 1. Record datasets.
Dataset I:  0.12528, 0.21211, 0.22784, 0.26063, 0.65258, 0.66056, 0.68255, 0.79385, 0.83778, 0.92206
Dataset II: 4.85, 18.79, 20.44, 22.00, 27.47, 33.44
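Record datasets such as those in Table 1 consist of upper record values: observations that exceed every observation seen before them. A minimal sketch of how upper records are extracted from a raw sequence is given below; the interleaved non-record values in the sample sequence are invented for illustration and are not the paper's underlying data.

```python
def upper_records(seq):
    """Return the upper record values of a sequence (each value that
    strictly exceeds all earlier values)."""
    records = []
    current_max = float("-inf")
    for x in seq:
        if x > current_max:
            records.append(x)
            current_max = x
    return records

# Hypothetical raw sequence whose upper records match Dataset II.
sample = [4.85, 3.10, 18.79, 7.20, 20.44, 15.00, 22.00, 27.47, 5.50, 33.44]
print(upper_records(sample))  # [4.85, 18.79, 20.44, 22.0, 27.47, 33.44]
```

More generally, the kth record statistics studied in the paper track the kth largest value seen so far; the k = 1 case reduces to the ordinary upper records above.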
Table 2. Parameters α and β of the fitted Gompertz model.
            α        β
Dataset I   1.483    5.572
Dataset II  0.0049   0.2659
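For context, a common Gompertz parameterization has density f(x; a, b) = a·b·e^{bx}·exp(−a(e^{bx} − 1)) for x > 0. The sketch below assumes this form (the paper's exact (α, β) roles may differ) and numerically checks that the density with the Dataset I estimates from Table 2 integrates to approximately one.

```python
import math

def gompertz_pdf(x, a, b):
    """Gompertz density under one common parameterization (assumed here
    for illustration): f(x) = a*b*exp(b*x)*exp(-a*(exp(b*x) - 1))."""
    return a * b * math.exp(b * x) * math.exp(-a * (math.exp(b * x) - 1.0))

# Sanity check with the Table 2 Dataset I estimates: the density should
# integrate to ~1 (trapezoid rule on a fine grid over [0, 4]).
a, b = 1.483, 5.572
step = 0.0005
xs = [i * step for i in range(8001)]
ys = [gompertz_pdf(x, a, b) for x in xs]
area = sum(step * (y0 + y1) / 2 for y0, y1 in zip(ys, ys[1:]))
print(round(area, 3))
```

The closed-form CDF under this parameterization, F(x) = 1 − exp(−a(e^{bx} − 1)), confirms that virtually all mass lies in the integration range for these parameter values.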
Table 3. Summary of the Bayesian estimates for parameters (α, β).
Prior     Parameter   Median    SD        95% HDI
Dataset I
π_J       α           0.6191    0.9874    (0, 2.2369)
          β           7.2644    4.1427    (1.3615, 14.5149)
π_MDI     α           2.3136    1.6505    (0, 5.1734)
          β           3.0662    3.5271    (0.0473, 9.9768)
π_R2      α           0.4777    1.0322    (0, 1.8967)
          β           8.2162    4.7862    (2.1967, 15.9144)
π_M2      α           0.9855    0.9589    (0.0002, 2.9002)
          β           6.1871    3.7258    (0.627, 13.4511)
Dataset II
π_J       α           0.0235    0.0368    (0, 0.0823)
          β           0.1041    0.1652    (0.0062, 0.2482)
π_MDI     α           0.0348    0.0565    (0, 0.1319)
          β           0.0905    0.1838    (0.0001, 0.3362)
π_R2      α           0.1967    0.0318    (0, 0.0731)
          β           0.1193    0.1537    (0.0124, 0.2693)
π_M2      α           0.0369    0.0518    (0, 0.1097)
          β           0.0829    0.1835    (0.0009, 0.2171)
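The 95% HDI columns report highest posterior density intervals, which can be computed from MCMC draws by the Chen–Shao approach [52]: among all intervals containing the required fraction of the sorted samples, take the shortest. A self-contained sketch with simulated draws (not the paper's actual posterior samples) follows.

```python
import math
import random

def hpd_interval(samples, cred=0.95):
    """Chen-Shao style HPD interval from posterior draws: scan all
    intervals of sorted samples holding ceil(cred*n) draws and return
    the shortest one."""
    s = sorted(samples)
    n = len(s)
    m = int(math.ceil(cred * n))
    width, start = min((s[i + m - 1] - s[i], i) for i in range(n - m + 1))
    return s[start], s[start + m - 1]

# Illustrative posterior draws from a right-skewed Gamma(2, 1) target.
random.seed(1)
draws = [random.gammavariate(2.0, 1.0) for _ in range(20000)]
lo, hi = hpd_interval(draws)
print(round(lo, 2), round(hi, 2))
```

For skewed posteriors like those in Table 3, the HPD interval is shorter than the equal-tailed credible interval, which is why several intervals above have a lower endpoint at (or near) zero.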
Vidović, Z.; Wang, L. Objective Posterior Analysis of kth Record Statistics in Gompertz Model. Axioms 2025, 14, 152. https://doi.org/10.3390/axioms14030152
