Search Results (26)

Search Parameters:
Keywords = Metropolis–Hastings technique

29 pages, 19296 KB  
Article
Inference for the Chris–Jerry Lifetime Distribution Under Improved Adaptive Progressive Type-II Censoring for Physics and Engineering Data Modelling
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(9), 702; https://doi.org/10.3390/axioms14090702 - 17 Sep 2025
Viewed by 172
Abstract
This paper presents a comprehensive reliability analysis framework for the Chris–Jerry (CJ) lifetime distribution under an improved adaptive progressive Type-II censoring plan. The CJ model, recently introduced to capture skewed lifetime behaviors, is studied under a modified censoring structure designed to provide greater flexibility in terminating life-testing experiments. We derive maximum likelihood estimators for the CJ parameters and key reliability measures, including the reliability and hazard rate functions, and construct approximate confidence intervals using the observed Fisher information matrix and the delta method. To address the intractability of the likelihood function, Bayesian estimators are obtained under independent gamma priors and a squared-error loss function. Because the posterior distributions are not available in closed form, we apply the Metropolis–Hastings algorithm to generate Bayesian estimates and two types of credible intervals. A comprehensive simulation study evaluates the performance of the proposed estimation techniques under various censoring scenarios. The framework is further validated through two real-world datasets: one involving rainfall measurements and another concerning mechanical failure times. In both cases, the CJ model combined with the proposed censoring strategy demonstrates superior fit and reliability inference compared to competing models. These findings highlight the value of the CJ distribution, together with advanced censoring methods, for modeling lifetime data in physics and engineering applications. Full article
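As a point of reference for the Bayesian machinery described above, the sketch below shows a generic random-walk Metropolis–Hastings sampler producing a squared-error-loss estimate (the posterior mean) and an equal-tail credible interval. It uses a simple conjugate exponential–gamma target so the output can be checked; the Chris–Jerry density, priors, and censoring scheme of the paper are not reproduced here.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter=10_000, step=0.3, seed=0):
    """Random-walk Metropolis-Hastings for a scalar parameter, on the log scale."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()      # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

# Illustrative target: exponential lifetimes with a Gamma(a, b) prior on the rate
# (conjugate, so the sampler can be checked against the exact posterior).
a, b = 2.0, 1.0
data = np.random.default_rng(1).exponential(scale=2.0, size=30)

def log_posterior(rate):
    if rate <= 0:
        return -np.inf
    return (a - 1 + data.size) * np.log(rate) - rate * (b + data.sum())

draws = metropolis_hastings(log_posterior, x0=1.0)
kept = draws[2_000:]                                 # discard burn-in
print("squared-error-loss Bayes estimate (posterior mean):", kept.mean())
print("95% equal-tail credible interval:", np.quantile(kept, [0.025, 0.975]))
```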
35 pages, 11039 KB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Viewed by 315
Abstract
A novel unified progressive hybrid censoring is introduced to combine both progressive and hybrid censoring plans to allow flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced from such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build the asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference using informative prior information. Two real-world applications utilizing rare minerals from gold and diamond durability studies are examined to demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science. By applying four different optimality criteria to multiple competing plans, an analysis of various progressive censoring strategies that yield the best performance is conducted. The proposed censoring framework is effectively applied to real-world datasets involving diamonds and gold, demonstrating its practical utility in modeling the reliability and failure behavior of rare and high-value minerals. Full article
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
20 pages, 774 KB  
Article
Robust Variable Selection via Bayesian LASSO-Composite Quantile Regression with Empirical Likelihood: A Hybrid Sampling Approach
by Ruisi Nan, Jingwei Wang, Hanfang Li and Youxi Luo
Mathematics 2025, 13(14), 2287; https://doi.org/10.3390/math13142287 - 16 Jul 2025
Viewed by 495
Abstract
Since the advent of composite quantile regression (CQR), its inherent robustness has established it as a pivotal methodology for high-dimensional data analysis. High-dimensional outlier contamination refers to data scenarios where the number of observed dimensions (p) is much greater than the sample size (n) and there are extreme outliers in the response variables or covariates (e.g., p/n > 0.1). Traditional penalized regression techniques, however, exhibit notable vulnerability to data outliers during high-dimensional variable selection, often leading to biased parameter estimates and compromised resilience. To address this critical limitation, we propose a novel empirical likelihood (EL)-based variable selection framework that integrates a Bayesian LASSO penalty within the composite quantile regression framework. By constructing a hybrid sampling mechanism that incorporates the Expectation–Maximization (EM) algorithm and Metropolis–Hastings (M-H) algorithm within the Gibbs sampling scheme, this approach effectively tackles variable selection in high-dimensional settings with outlier contamination. This innovative design enables simultaneous optimization of regression coefficients and penalty parameters, circumventing the need for ad hoc selection of optimal penalty parameters—a long-standing challenge in conventional LASSO estimation. Moreover, the proposed method imposes no restrictive assumptions on the distribution of random errors in the model. Through Monte Carlo simulations under outlier interference and empirical analysis of two U.S. house price datasets, we demonstrate that the new approach significantly enhances variable selection accuracy, reduces estimation bias for key regression coefficients, and exhibits robust resistance to data outlier contamination. Full article
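The hybrid sampler described above alternates conjugate Gibbs updates with Metropolis–Hastings moves. The toy sketch below illustrates only that Metropolis-within-Gibbs structure on a two-parameter normal model (a conjugate draw for the mean, an M-H update for the scale); the Bayesian LASSO-CQR full conditionals and EM step of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=2.0, size=50)       # toy data, not the CQR setting
n = y.size

def log_cond_log_sigma(log_sigma, mu):
    """Log full conditional of log(sigma) given mu (flat prior on log(sigma))."""
    sigma = np.exp(log_sigma)
    return -n * log_sigma - 0.5 * np.sum((y - mu) ** 2) / sigma**2

mu, log_sigma = 0.0, 0.0
draws = []
for it in range(5_000):
    # Gibbs step: mu | sigma, y is conjugate (flat prior -> normal full conditional)
    sigma = np.exp(log_sigma)
    mu = rng.normal(loc=y.mean(), scale=sigma / np.sqrt(n))
    # Metropolis-Hastings step: log(sigma) | mu, y has no standard form
    prop = log_sigma + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_cond_log_sigma(prop, mu) - log_cond_log_sigma(log_sigma, mu):
        log_sigma = prop
    draws.append((mu, np.exp(log_sigma)))

mu_draws, sigma_draws = np.array(draws[1_000:]).T
print("posterior means (mu, sigma):", mu_draws.mean(), sigma_draws.mean())
```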
32 pages, 1098 KB  
Article
Estimation and Bayesian Prediction for New Version of Xgamma Distribution Under Progressive Type-II Censoring
by Ahmed R. El-Saeed, Molay Kumar Ruidas and Ahlam H. Tolba
Symmetry 2025, 17(3), 457; https://doi.org/10.3390/sym17030457 - 18 Mar 2025
Cited by 1 | Viewed by 379
Abstract
This article introduces a new continuous lifetime distribution within the Gamma family, called the induced Xgamma distribution, and explores its various statistical properties. The proposed distribution’s estimation and prediction are investigated using Bayesian and non-Bayesian approaches under progressively Type-II censored data. The maximum likelihood and maximum product spacing methods are applied for the non-Bayesian approach, and some of their performances are evaluated. In the Bayesian framework, the numerical approximation technique utilizing the Metropolis–Hastings algorithm within the Markov chain Monte Carlo is employed under different loss functions, including the squared error loss, general entropy, and LINEX loss. Interval estimation methods, such as asymptotic confidence intervals, log-normal asymptotic confidence intervals, and highest posterior density intervals, are also developed. A comprehensive numerical study using Monte Carlo simulations is conducted to evaluate the performance of the proposed point and interval estimation methods through progressive Type-II censored data. Furthermore, the applicability and effectiveness of the proposed distribution are demonstrated through three real-world datasets from the fields of medicine and engineering. Full article
(This article belongs to the Special Issue Bayesian Statistical Methods for Forecasting)
18 pages, 418 KB  
Article
Inference with Pólya-Gamma Augmentation for US Election Law
by Adam C. Hall and Joseph Kang
Mathematics 2025, 13(6), 945; https://doi.org/10.3390/math13060945 - 13 Mar 2025
Viewed by 793
Abstract
Pólya-gamma (PG) augmentation has proven to be highly effective for Bayesian MCMC simulation, particularly for models with binomial likelihoods. This data augmentation strategy offers two key advantages. First, the method circumvents the need for analytic approximations or Metropolis–Hastings algorithms, which leads to simpler and more computationally efficient posterior inference. Second, the approach can be successfully applied to several types of models, including nonlinear mixed-effects models for count data. The effectiveness of PG augmentation has led to its widespread adoption and implementation in statistical software packages, such as version 2.1 of the R package BayesLogit. This success has inspired us to apply this method to the implementation of Section 203 of the Voting Rights Act (VRA), a US law that requires certain jurisdictions to provide non-English voting materials for specific language minority groups (LMGs). In this paper, we show how PG augmentation can be used to fit a Bayesian model that estimates the prevalence of each LMG in each US voting jurisdiction, and that uses a variable selection technique called stochastic search variable selection. We demonstrate that this new model outperforms the previous model used for 2021 VRA data with respect to model diagnostic measures. Full article
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)
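For readers unfamiliar with the augmentation, the sketch below is a minimal Pólya-Gamma Gibbs sampler for a plain Bayesian logistic regression, following the standard Polson–Scott–Windle update; every draw of the coefficients is exactly Gaussian, so no Metropolis–Hastings step is needed. It assumes the third-party Python package polyagamma is available (the paper itself cites the R package BayesLogit), and the data, prior, and dimensions are illustrative, not the VRA model.

```python
import numpy as np
from polyagamma import random_polyagamma   # third-party PG sampler; assumed available

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([-0.5, 1.0, -1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))   # synthetic binary outcomes

b0 = np.zeros(p)                  # N(b0, B0) prior on the coefficients
B0_inv = np.eye(p) / 100.0        # weakly informative prior precision
kappa = y - 0.5

beta = np.zeros(p)
draws = []
for it in range(3_000):
    # 1) Polya-Gamma latent variables given the current coefficients
    omega = random_polyagamma(1, X @ beta, random_state=rng)
    # 2) Coefficients given omega are exactly Gaussian, so no M-H step is required
    V = np.linalg.inv(X.T @ (omega[:, None] * X) + B0_inv)
    m = V @ (X.T @ kappa + B0_inv @ b0)
    beta = rng.multivariate_normal(m, V)
    draws.append(beta)

print("posterior means:", np.array(draws[500:]).mean(axis=0))
```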
27 pages, 699 KB  
Article
Estimating the Lifetime Parameters of the Odd-Generalized-Exponential–Inverse-Weibull Distribution Using Progressive First-Failure Censoring: A Methodology with an Application
by Mahmoud M. Ramadan, Rashad M. EL-Sagheer and Amel Abd-El-Monem
Axioms 2024, 13(12), 822; https://doi.org/10.3390/axioms13120822 - 25 Nov 2024
Cited by 2 | Viewed by 1123
Abstract
This paper investigates statistical methods for estimating unknown lifetime parameters using a progressive first-failure censoring dataset. The failure mode’s lifetime distribution is modeled by the odd-generalized-exponential–inverse-Weibull distribution. Maximum-likelihood estimators for the model parameters, including the survival, hazard, and inverse hazard rate functions, are obtained, though they lack closed-form expressions. The Newton–Raphson method is used to compute these estimations. Confidence intervals for the parameters are approximated via the normal distribution of the maximum-likelihood estimation. The Fisher information matrix is derived using the missing information principle, and the delta method is applied to approximate the confidence intervals for the survival, hazard rate, and inverse hazard rate functions. Bayes estimators were calculated with the squared error, linear exponential, and general entropy loss functions, utilizing independent gamma distributions for informative priors. Markov-chain Monte Carlo sampling provides the highest-posterior-density credible intervals and Bayesian point estimates for the parameters and reliability characteristics. This study evaluates these methods through Monte Carlo simulations, comparing Bayes and maximum-likelihood estimates based on mean squared errors for point estimates, average interval widths, and coverage probabilities for interval estimators. A real dataset is also analyzed to illustrate the proposed methods. Full article
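The delta-method construction mentioned above is easy to state in a simpler setting. The sketch below computes a 95% confidence interval for the exponential survival function S(t) = exp(-lambda*t) from the MLE and the inverse Fisher information; the odd-generalized-exponential inverse-Weibull model of the paper is not reproduced, and the data are synthetic and uncensored.

```python
import numpy as np

# Delta-method interval for the exponential survival function S(t) = exp(-lam * t):
# a deliberately simple stand-in chosen so the Fisher information is closed form.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=40)      # synthetic complete (uncensored) lifetimes
n, t0 = x.size, 1.5

lam_hat = n / x.sum()                        # MLE of the rate
var_lam = lam_hat**2 / n                     # inverse observed Fisher information
S_hat = np.exp(-lam_hat * t0)                # plug-in survival estimate at t0
grad = -t0 * np.exp(-lam_hat * t0)           # dS/d(lam) evaluated at the MLE
half = 1.96 * np.sqrt(grad**2 * var_lam)     # delta-method standard error times z_0.975
print(f"S({t0}) = {S_hat:.3f}, 95% CI = ({S_hat - half:.3f}, {S_hat + half:.3f})")
```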
44 pages, 3247 KB  
Article
Enhancing Metaheuristic Algorithm Performance Through Structured Population and Evolutionary Game Theory
by Héctor Escobar-Cuevas, Erik Cuevas, Alberto Luque-Chang, Oscar Barba-Toscano and Marco Pérez-Cisneros
Mathematics 2024, 12(23), 3676; https://doi.org/10.3390/math12233676 - 24 Nov 2024
Cited by 1 | Viewed by 1258
Abstract
Diversity is crucial for metaheuristic algorithms. It prevents early convergence, balances exploration and exploitation, and helps to avoid local optima. Traditional metaheuristic algorithms tend to rely on a single strategy for generating new solutions, often resulting in a lack of diversity. In contrast, employing multiple strategies encourages a variety of search behaviors and a diverse pool of potential solutions, thereby improving the exploration of the search space. Evolutionary Game Theory (EGT) modifies agents’ strategies through competition, promoting successful strategies and eliminating weaker ones. Structured populations, as opposed to unstructured ones, preserve diverse strategies through localized competition, meaning that an individual’s strategy is influenced by only a subset or group of the population and not all elements. This paper presents a novel metaheuristic method based on EGT applied to structured populations. Initially, individuals are positioned near optimal regions using the Metropolis–Hastings algorithm. Subsequently, each individual is endowed with a unique search strategy. Considering a certain number of clusters, the complete population is segmented. Within these clusters, the method enhances search efficiency and solution quality by adapting all strategies through an intra-cluster competition. To assess the effectiveness of the proposed method, it has been compared against several well-known metaheuristic algorithms across a suite of 30 test functions. The results indicated that the new methodology outperformed the established techniques, delivering higher-quality solutions and faster convergence rates. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
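The idea of positioning initial individuals near promising regions with Metropolis–Hastings can be sketched by sampling from a Boltzmann-style density exp(-f(x)/T) built from the objective. The code below is a generic illustration of that seeding step on a toy sphere function; the paper's structured-population and evolutionary-game components are not implemented, and the temperature, step size, and bounds are assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective to minimize; a stand-in for one of the benchmark test functions."""
    return float(np.sum(x**2))

def mh_seed_population(objective, dim, pop_size, temperature=1.0,
                       step=0.5, thin=20, seed=0):
    """Seed a population by sampling exp(-objective/temperature) with random-walk M-H,
    so individuals concentrate in low-cost regions of the search space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=dim)
    energy = objective(x)
    population = []
    while len(population) < pop_size:
        for _ in range(thin):                              # thin to reduce correlation
            prop = x + step * rng.standard_normal(dim)
            e_prop = objective(prop)
            if np.log(rng.uniform()) < (energy - e_prop) / temperature:
                x, energy = prop, e_prop
        population.append(x.copy())
    return np.array(population)

pop = mh_seed_population(sphere, dim=10, pop_size=30)
print("mean objective over the seeded population:", np.mean([sphere(p) for p in pop]))
```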
23 pages, 14253 KB  
Article
Optimal Estimation of Reliability Parameters for Modified Frechet-Exponential Distribution Using Progressive Type-II Censored Samples with Mechanical and Medical Data
by Dina A. Ramadan, Ahmed T. Farhat, M. E. Bakr, Oluwafemi Samson Balogun and Mustafa M. Hasaballah
Symmetry 2024, 16(11), 1476; https://doi.org/10.3390/sym16111476 - 6 Nov 2024
Cited by 1 | Viewed by 1392
Abstract
The aim of this research is to estimate the parameters of the modified Frechet-exponential (MFE) distribution using different methods when applied to progressive type-II censored samples. These methods include using the maximum likelihood technique and the Bayesian approach, which were used to determine the values of parameters in addition to calculating the reliability and failure functions at time t. The approximate confidence intervals (ACIs) and credible intervals (CRIs) are derived for these parameters. Two bootstrap techniques of parametric type are provided to compute the bootstrap confidence intervals. Both symmetric loss functions such as the squared error loss (SEL) and asymmetric loss functions such as the linear-exponential (LINEX) loss are used in the Bayesian method to obtain the estimates. The Markov Chain Monte Carlo (MCMC) technique is utilized in the Metropolis–Hastings sampler approach to obtain the unknown parameters using the Bayes approach. Two actual datasets are utilized to examine the various progressive schemes and different estimation methods considered in this paper. Additionally, a simulation study is performed to compare the schemes and estimation techniques. Full article
(This article belongs to the Section Mathematics)
18 pages, 321 KB  
Article
Estimation and Bayesian Prediction of the Generalized Pareto Distribution in the Context of a Progressive Type-II Censoring Scheme
by Tianrui Ye and Wenhao Gui
Appl. Sci. 2024, 14(18), 8433; https://doi.org/10.3390/app14188433 - 19 Sep 2024
Cited by 1 | Viewed by 1540
Abstract
The generalized Pareto distribution plays a significant role in reliability research. This study concentrates on the statistical inference of the generalized Pareto distribution utilizing progressively Type-II censored data. Estimations are performed using maximum likelihood estimation through the expectation–maximization approach. Confidence intervals are derived using the asymptotic confidence intervals. Bayesian estimations are conducted using the Tierney and Kadane method alongside the Metropolis–Hastings algorithm, and the highest posterior density credible interval estimation is accomplished. Furthermore, Bayesian predictive intervals and future sample estimations are explored. To illustrate these inference techniques, a simulation and practical example are presented for analysis. Full article
(This article belongs to the Special Issue Novel Applications of Machine Learning and Bayesian Optimization)
22 pages, 3992 KB  
Article
Bayesian Modeling for Nonstationary Spatial Point Process via Spatial Deformations
by Dani Gamerman, Marcel de Souza Borges Quintana and Mariane Branco Alves
Entropy 2024, 26(8), 678; https://doi.org/10.3390/e26080678 - 11 Aug 2024
Viewed by 1484
Abstract
Many techniques have been proposed to model space-varying observation processes with a nonstationary spatial covariance structure and/or anisotropy, usually on a geostatistical framework. Nevertheless, there is an increasing interest in point process applications, and methodologies that take nonstationarity into account are welcomed. In this sense, this work proposes an extension of a class of spatial Cox process using spatial deformation. The proposed method enables the deformation behavior to be data-driven, through a multivariate latent Gaussian process. Inference leads to intractable posterior distributions that are approximated via MCMC. The convergence of algorithms based on the Metropolis–Hastings steps proved to be slow, and the computational efficiency of the Bayesian updating scheme was improved by adopting Hamiltonian Monte Carlo (HMC) methods. Our proposal was also compared against an alternative anisotropic formulation. Studies based on synthetic data provided empirical evidence of the benefit brought by the adoption of nonstationarity through our anisotropic structure. A real data application was conducted on the spatial spread of the Spodoptera frugiperda pest in a corn-producing agricultural area in southern Brazil. Once again, the proposed method demonstrated its benefit over alternatives. Full article
(This article belongs to the Special Issue Bayesianism)
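The move from Metropolis–Hastings to Hamiltonian Monte Carlo mentioned above replaces random-walk proposals with gradient-guided leapfrog trajectories. The sketch below implements one standard HMC transition with a unit mass matrix and runs it on a correlated Gaussian stand-in target; the Cox-process posterior and spatial deformation model of the paper are not reproduced.

```python
import numpy as np

def hmc_step(log_post, grad_log_post, x, step=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo transition with a unit mass matrix."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.size)                 # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(x_new)      # leapfrog: half momentum step
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                       # full position step
        p_new += step * grad_log_post(x_new)        # full momentum step
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_post(x_new)      # final half momentum step
    h_old = -log_post(x) + 0.5 * p @ p              # Metropolis correction on total energy
    h_new = -log_post(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

# Correlated Gaussian stand-in for an intractable posterior.
prec = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
log_post = lambda x: -0.5 * x @ prec @ x
grad_log_post = lambda x: -prec @ x

rng = np.random.default_rng(0)
x, samples = np.zeros(2), []
for _ in range(2_000):
    x = hmc_step(log_post, grad_log_post, x, rng=rng)
    samples.append(x)
print("sample covariance:\n", np.cov(np.array(samples).T))
```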
15 pages, 1360 KB  
Article
Parameter Estimation of Breakthrough Curve Models in the Adsorption Process of H2S and CO2 Using the Markov Chain Monte Carlo Method
by Haianny Beatriz Saraiva Lima, Ana Paula Souza de Sousa, Wellington Betencurte da Silva, Deibson Silva da Costa, Emerson Cardoso Rodrigues and Diego Cardoso Estumano
Appl. Sci. 2024, 14(16), 6956; https://doi.org/10.3390/app14166956 - 8 Aug 2024
Cited by 1 | Viewed by 3317
Abstract
The increase in emissions of toxic gases such as hydrogen sulfide (H2S) and carbon dioxide (CO2), resulting from growing urbanization and industrialization, has caused environmental and public health problems, making the implementation of air purification techniques through adsorption important. Thus, modeling the gas adsorption process is fundamental for good agreement with experimental data, employing mathematical models that enable the prediction of adsorption capacity. In this way, the present work aimed to compare different analytical breakthrough curve models (Thomas, Yoon–Nelson, Adams–Bohart, and Yan) for the adsorption of H2S and CO2 in fixed-bed columns, using experimental data from the literature, estimating the curve parameters through the Markov Chain Monte Carlo (MCMC) method with the Metropolis–Hastings algorithm, and ranking using the determination coefficients (R2 and R2Adjusted) and the Bayesian Information Criterion (BIC). The models showed better agreement using the estimation of maximum adsorption capacity (qs, N0) and the constants of each model (kth, kyn, and kba). In the adsorption of H2S, the Yan model stood out for its precision in estimating qs. For the adsorption of CO2, the Adams–Bohart model achieved better results with the estimation of N0, along with the Yoon–Nelson model. Furthermore, the use of this method allows for a reduction in computational effort compared to models based on complex differential equations. Full article
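To make the estimation step concrete, the sketch below fits one common parameterization of the Thomas breakthrough model to synthetic data with a random-walk Metropolis–Hastings sampler under a Gaussian likelihood. The model form, priors, step sizes, and operating conditions are illustrative assumptions, not values from the cited experiments.

```python
import numpy as np

def thomas(t, k_th, q_s, C0=0.1, m=1.0, Q=0.1):
    """One common form of the Thomas model: C/C0 as a function of time t.
    Units and operating conditions here are illustrative assumptions."""
    return 1.0 / (1.0 + np.exp(k_th * q_s * m / Q - k_th * C0 * t))

rng = np.random.default_rng(0)
t_obs = np.linspace(10, 2000, 50)
y_obs = thomas(t_obs, k_th=0.02, q_s=10.0) + rng.normal(0, 0.02, t_obs.size)  # synthetic

def log_post(theta, sigma=0.02):
    k_th, q_s = theta
    if k_th <= 0 or q_s <= 0:                    # flat priors on the positive axis
        return -np.inf
    resid = y_obs - thomas(t_obs, k_th, q_s)
    return -0.5 * np.sum(resid**2) / sigma**2    # Gaussian measurement-error likelihood

theta = np.array([0.01, 5.0])
lp = log_post(theta)
chain = []
for i in range(20_000):
    prop = theta + np.array([5e-4, 0.2]) * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis-Hastings acceptance
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5_000:])
print("posterior means (k_th, q_s):", chain.mean(axis=0))
```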
24 pages, 805 KB  
Article
Bayesian Inference for Inverse Power Exponentiated Pareto Distribution Using Progressive Type-II Censoring with Application to Flood-Level Data Analysis
by Eman H. Khalifa, Dina A. Ramadan, Hana N. Alqifari and Beih S. El-Desouky
Symmetry 2024, 16(3), 309; https://doi.org/10.3390/sym16030309 - 5 Mar 2024
Cited by 4 | Viewed by 2002
Abstract
Progressive type-II (Prog-II) censoring schemes are gaining traction in estimating the parameters and reliability characteristics of lifetime distributions. The focus of this paper is to enhance the accuracy and reliability of such estimations for the inverse power exponentiated Pareto (IPEP) distribution, a flexible extension of the exponentiated Pareto distribution suitable for modeling engineering and medical data. We aim to develop novel statistical inference methods applicable under Prog-II censoring, leading to a deeper understanding of failure time behavior, improved decision-making, and enhanced overall model reliability. Our investigation employs both classical and Bayesian approaches. The classical technique involves constructing maximum likelihood estimators of the model parameters and their bootstrap covariance intervals. Using the Gibbs process constructed by the Metropolis–Hastings sampler technique, the Markov chain Monte Carlo method provides Bayesian estimates of the unknown parameters. In addition, an actual data analysis is carried out to examine the estimation process’s performance under this ideal scheme. Full article
13 pages, 2016 KB  
Article
The Quantum Amplitude Estimation Algorithms on Near-Term Devices: A Practical Guide
by Marco Maronese, Massimiliano Incudini, Luca Asproni and Enrico Prati
Quantum Rep. 2024, 6(1), 1-13; https://doi.org/10.3390/quantum6010001 - 24 Dec 2023
Cited by 4 | Viewed by 5923
Abstract
The Quantum Amplitude Estimation (QAE) algorithm is a major quantum algorithm designed to achieve a quadratic speed-up. Until fault-tolerant quantum computing is achieved, being competitive over classical Monte Carlo (MC) remains elusive. Alternative methods have been developed so as to require fewer resources while maintaining an advantageous theoretical scaling. We compared the standard QAE algorithm with two Noisy Intermediate-Scale Quantum (NISQ)-friendly versions of QAE on a numerical integration task, with the Monte Carlo technique of Metropolis–Hastings as a classical benchmark. The algorithms were evaluated in terms of the estimation error as a function of the number of samples, computational time, and length of the quantum circuits required by the solutions, respectively. The effectiveness of the two QAE alternatives was tested on an 11-qubit trapped-ion quantum computer in order to verify which solution can first provide a speed-up in the integral estimation problems. We concluded that an alternative approach is preferable with respect to employing the phase estimation routine. Indeed, the Maximum Likelihood estimation guaranteed the best trade-off between the length of the quantum circuits and the precision in the integral estimation, as well as greater resistance to noise. Full article
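The classical baseline referred to above, Monte Carlo integration driven by Metropolis–Hastings sampling, can be sketched in a few lines: draw correlated samples from an unnormalized density and average the integrand, with error shrinking roughly as 1/sqrt(N). The target and integrand below are toy choices, not the integration task used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log density on [0, 1]: p(x) proportional to exp(-8 * (x - 0.5)**2)."""
    return -8.0 * (x - 0.5) ** 2 if 0.0 <= x <= 1.0 else -np.inf

f = lambda x: np.sin(np.pi * x)               # integrand whose expectation under p we want

x, lp = 0.5, log_target(0.5)
running = 0.0
checkpoints = (100, 1_000, 10_000, 100_000)
for i in range(1, checkpoints[-1] + 1):
    prop = x + 0.2 * rng.standard_normal()
    lp_prop = log_target(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis-Hastings acceptance
        x, lp = prop, lp_prop
    running += f(x)
    if i in checkpoints:
        print(f"N = {i:>7d}   estimate of E[f(X)] = {running / i:.4f}")
```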
19 pages, 3041 KB  
Article
Reliability Analysis of Kavya Manoharan Kumaraswamy Distribution under Generalized Progressive Hybrid Data
by Refah Alotaibi, Ehab M. Almetwally and Hoda Rezk
Symmetry 2023, 15(9), 1671; https://doi.org/10.3390/sym15091671 - 30 Aug 2023
Cited by 6 | Viewed by 1674
Abstract
Generalized progressive hybrid censoring approaches have been developed to reduce test time and cost. This paper investigates the difficulties associated with estimating the unobserved model parameters and the reliability time functions of the Kavya Manoharan Kumaraswamy (KMKu) distribution based on generalized type-II progressive hybrid censoring using classical and Bayesian estimation techniques. The frequentist estimators’ normal approximations are also used to construct the appropriate estimated confidence intervals for the unknown parameter model. Under symmetrical squared error loss, independent gamma conjugate priors are used to produce the Bayesian estimators. The Bayesian estimators and associated highest posterior density intervals cannot be derived analytically since the joint likelihood function is provided in a complicated form. However, they may be evaluated using Markov chain Monte Carlo (MCMC) techniques. Out of all the censoring choices, the best one is selected using four optimality criteria. Full article
17 pages, 3512 KB  
Article
Bayesian Joint Modeling Analysis of Longitudinal Proportional and Survival Data
by Wenting Liu, Huiqiong Li, Anmin Tang and Zixin Cui
Mathematics 2023, 11(16), 3469; https://doi.org/10.3390/math11163469 - 10 Aug 2023
Cited by 4 | Viewed by 2053
Abstract
This paper focuses on a joint model to analyze longitudinal proportional and survival data. We utilize a logit transformation on the longitudinal proportional data and employ a partially linear mixed-effect model. With this model, we estimate the unknown function of time using the B-splines technique. Additionally, we introduce a centered Dirichlet process mixture model (CDPMM) to capture the random effects, allowing for a flexible distribution. The survival data are assumed using a Cox proportional hazard model, and the sharing random effects joint model is developed for the two types of data. We develop a Bayesian Lasso (BLasso) approach that combines the Gibbs sampler and the Metropolis–Hastings algorithm. The proposed method allows for the estimation of unknown parameters and the selection of significant covariates simultaneously. We evaluate the performance of our proposed methods through simulation studies and also provide an illustration of our methodologies using an example from the MA.5 research experiment. Full article
(This article belongs to the Special Issue Bayesian Statistical Analysis of Big Data and Complex Data)