Search Results (153)

Search Parameters:
Keywords = Fisher Information Matrix

28 pages, 835 KiB  
Article
Progressive First-Failure Censoring in Reliability Analysis: Inference for a New Weibull–Pareto Distribution
by Rashad M. EL-Sagheer and Mahmoud M. Ramadan
Mathematics 2025, 13(15), 2377; https://doi.org/10.3390/math13152377 - 24 Jul 2025
Abstract
This paper explores statistical techniques for estimating unknown lifetime parameters using data from a progressive first-failure censoring scheme. The failure times are modeled with a new Weibull–Pareto distribution. Maximum likelihood estimators are derived for the model parameters, as well as for the survival and hazard rate functions, although these estimators do not have explicit closed-form solutions. The Newton–Raphson algorithm is employed for the numerical computation of these estimates. Confidence intervals for the parameters are approximated based on the asymptotic normality of the maximum likelihood estimators. The Fisher information matrix is calculated using the missing information principle, and the delta technique is applied to approximate confidence intervals for the survival and hazard rate functions. Bayesian estimators are developed under squared error, linear exponential, and general entropy loss functions, assuming independent gamma priors. Markov chain Monte Carlo sampling is used to obtain Bayesian point estimates and the highest posterior density credible intervals for the parameters and reliability measures. Finally, the proposed methods are demonstrated through the analysis of a real dataset. Full article
(This article belongs to the Section D1: Probability and Statistics)
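The estimation pipeline in this abstract (Newton–Raphson for the MLE, then delta-method intervals from the Fisher information) can be sketched in miniature. The new Weibull–Pareto density is not reproduced here, so this illustrative sketch uses a one-parameter exponential lifetime model instead; the function names and data are invented for illustration.

```python
import math

def exp_mle_newton(data, lam0=1.0, tol=1e-10, max_iter=100):
    # Newton-Raphson on the exponential log-likelihood:
    # score U(lam) = n/lam - sum(x), observed information I(lam) = n/lam^2.
    n, s = len(data), sum(data)
    lam = lam0
    for _ in range(max_iter):
        step = (n / lam - s) / (n / lam**2)
        lam += step
        if abs(step) < tol:
            break
    return lam, n / lam**2          # MLE and observed Fisher information

def survival_ci(lam, info, t, z=1.959964):
    # Delta method for S(t) = exp(-lam * t):
    # Var(S_hat) ~ (dS/dlam)^2 / info, with dS/dlam = -t * exp(-lam * t).
    s_hat = math.exp(-lam * t)
    se = t * s_hat * math.sqrt(1.0 / info)
    return s_hat, (s_hat - z * se, s_hat + z * se)

data = [0.3, 1.2, 0.7, 2.5, 0.9, 1.8, 0.4, 1.1]   # toy lifetimes
lam_hat, info = exp_mle_newton(data)
s_hat, (lo, hi) = survival_ci(lam_hat, info, t=1.0)
```

For the exponential case the Newton iteration converges to the closed-form MLE n/Σx; the paper's Weibull–Pareto case follows the same pattern with a vector parameter and a matrix-valued information.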

35 pages, 11039 KiB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Abstract
A novel unified progressive hybrid censoring is introduced to combine both progressive and hybrid censoring plans to allow flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced from such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build the asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference using informative prior information. Two real-world applications utilizing rare minerals from gold and diamond durability studies are examined to demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science. By applying four different optimality criteria to multiple competing plans, an analysis of various progressive censoring strategies that yield the best performance is conducted. The proposed censoring framework is effectively applied to real-world datasets involving diamonds and gold, demonstrating its practical utility in modeling the reliability and failure behavior of rare and high-value minerals. Full article
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
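The Metropolis–Hastings step with a gamma prior described above can be sketched for a simple conjugate case, where the sampler's output is checkable against the analytic posterior. The inverted Nadarajah–Haghighi likelihood is not reproduced here; this assumed toy setup uses an exponential likelihood with a gamma(a, b) prior, and all names are illustrative.

```python
import math
import random

def log_post(lam, data, a=2.0, b=1.0):
    # Unnormalized log-posterior: gamma(a, b) prior x exponential likelihood.
    if lam <= 0:
        return -math.inf
    return (a - 1 + len(data)) * math.log(lam) - (b + sum(data)) * lam

def metropolis_hastings(data, n_iter=20000, step=0.5, seed=1):
    # Random-walk Metropolis-Hastings on log(lam); the log-scale proposal
    # is asymmetric, so the acceptance ratio needs a Jacobian term
    # log(cand) - log(lam).
    rng = random.Random(seed)
    lam = 1.0
    draws = []
    for _ in range(n_iter):
        cand = lam * math.exp(step * rng.gauss(0.0, 1.0))
        log_alpha = (log_post(cand, data) - log_post(lam, data)
                     + math.log(cand) - math.log(lam))
        if math.log(rng.random()) < log_alpha:
            lam = cand
        draws.append(lam)
    return draws

data = [0.3, 1.2, 0.7, 2.5, 0.9, 1.8, 0.4, 1.1]   # toy lifetimes
draws = metropolis_hastings(data)
post_mean = sum(draws[5000:]) / len(draws[5000:])  # discard burn-in
```

With this conjugate pair the exact posterior is gamma(a + n, b + Σx), so the chain's posterior mean should sit near (a + n)/(b + Σx); credible intervals come from quantiles of the retained draws.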

25 pages, 1169 KiB  
Article
DPAO-PFL: Dynamic Parameter-Aware Optimization via Continual Learning for Personalized Federated Learning
by Jialu Tang, Yali Gao, Xiaoyong Li and Jia Jia
Electronics 2025, 14(15), 2945; https://doi.org/10.3390/electronics14152945 - 23 Jul 2025
Abstract
Federated learning (FL) enables multiple participants to collaboratively train models while efficiently mitigating the issue of data silos. However, large-scale heterogeneous data distributions result in inconsistent client objectives and catastrophic forgetting, leading to model bias and slow convergence. To address these challenges under non-independent and identically distributed (non-IID) data, we propose DPAO-PFL, a Dynamic Parameter-Aware Optimization framework that leverages continual learning principles to improve Personalized Federated Learning under non-IID conditions. We decompose the model parameters into two components: local personalized parameters tailored to client characteristics, and global shared parameters that capture the accumulated marginal effects of parameter updates over historical rounds. Specifically, we leverage the Fisher information matrix to estimate parameter importance online, integrate the path sensitivity scores within a time-series sliding window to construct a dynamic regularization term, and adaptively adjust the constraint strength to mitigate conflicts across tasks. We evaluate the effectiveness of DPAO-PFL through extensive experiments on several benchmarks under IID and non-IID data distributions. Comprehensive experimental results indicate that DPAO-PFL outperforms baselines with improvements from 5.41% to 30.42% in average classification accuracy. By decoupling model parameters and incorporating an adaptive regularization mechanism, DPAO-PFL effectively balances generalization and personalization. Furthermore, DPAO-PFL exhibits superior performance in convergence and collaborative optimization compared to state-of-the-art FL methods. Full article
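The core primitive here, Fisher-information-weighted parameter importance feeding a quadratic regularizer, is the same one used in elastic weight consolidation. DPAO-PFL's specific path-sensitivity scores and sliding window are not reproduced; this is a generic sketch with invented function names, using the diagonal empirical Fisher (mean squared per-sample gradient) as the importance estimate.

```python
import numpy as np

def empirical_fisher_diag(per_sample_grads):
    # Diagonal empirical Fisher: mean of squared per-sample gradients,
    # a cheap online proxy for how important each parameter is.
    g = np.asarray(per_sample_grads, dtype=float)
    return (g ** 2).mean(axis=0)

def importance_penalty(theta, theta_anchor, fisher_diag, strength):
    # Quadratic regularizer pulling important (high-Fisher) parameters
    # back toward anchor values from previous rounds; unimportant
    # parameters remain free to personalize.
    diff = np.asarray(theta) - np.asarray(theta_anchor)
    return 0.5 * strength * float(np.sum(fisher_diag * diff ** 2))

grads = [[1.0, 0.0], [3.0, 0.0]]           # two samples, two parameters
fisher = empirical_fisher_diag(grads)       # -> [5.0, 0.0]
penalty = importance_penalty([1.0, 1.0], [0.0, 0.0], fisher, strength=2.0)
```

Note that the second parameter, having zero gradient signal, contributes nothing to the penalty: drifting it is "free", which is exactly the mechanism that lets shared parameters stay stable while personalized ones adapt.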

25 pages, 4682 KiB  
Article
Visual Active SLAM Method Considering Measurement and State Uncertainty for Space Exploration
by Yao Zhao, Zhi Xiong, Jingqi Wang, Lin Zhang and Pascual Campoy
Aerospace 2025, 12(7), 642; https://doi.org/10.3390/aerospace12070642 - 20 Jul 2025
Abstract
This paper presents a visual active SLAM method that accounts for measurement and state uncertainty during space exploration in urban search and rescue environments. An uncertainty evaluation method based on the Fisher Information Matrix (FIM) is developed to assess the localization uncertainty of SLAM systems. With the aid of the FIM, the Cramér–Rao Lower Bound (CRLB) of the pose uncertainty in the stereo visual SLAM system is derived to bound the pose uncertainty. Optimality criteria are introduced to quantitatively evaluate the localization uncertainty. An odometry information selection method and a local bundle adjustment information selection method based on Fisher information are proposed to select low-uncertainty measurements for localization and mapping during the search and rescue process. With these methods, the computational efficiency of the system is improved while the localization accuracy remains comparable to classical ORB-SLAM2. Moreover, using the quantified uncertainty of local poses and map points, the generalized unary node and generalized unary edge are defined to improve the efficiency of computing local state uncertainty. In addition, an active loop closing planner that considers local state uncertainty is proposed to exploit uncertainty in assisting the space exploration and decision-making of the MAV, which benefits MAV localization performance in search and rescue environments. Simulations and field tests in different challenging scenarios are conducted to verify the effectiveness of the proposed method. Full article
(This article belongs to the Section Aeronautics)
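The FIM-to-CRLB step and the optimality criteria mentioned above are standard and compact enough to sketch. This is a generic illustration (not the paper's stereo-SLAM derivation): the covariance of any unbiased estimator is bounded below by the inverse FIM, and scalar criteria such as A- and D-optimality summarize that bound for ranking candidate measurements.

```python
import numpy as np

def crlb(fim):
    # Cramér-Rao lower bound: Cov(estimator) >= FIM^{-1}
    # (in the positive-semidefinite ordering).
    return np.linalg.inv(fim)

def a_optimality(fim):
    # A-optimality: trace of the CRLB, a total-variance lower bound
    # (smaller is better).
    return float(np.trace(np.linalg.inv(fim)))

def d_optimality(fim):
    # D-optimality: log det of the FIM; larger means a smaller
    # uncertainty-ellipsoid volume. slogdet avoids overflow.
    sign, logdet = np.linalg.slogdet(fim)
    return logdet if sign > 0 else float("-inf")

F = np.diag([4.0, 25.0])     # toy 2-DOF pose information matrix
bound = crlb(F)              # diag([0.25, 0.04]): per-axis variance floor
```

Ranking measurements by the D-optimality of the information they contribute is one common way to "select the measurements with low uncertainty" as the abstract describes.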

19 pages, 342 KiB  
Article
Fisher Information in Helmholtz–Boltzmann Thermodynamics of Mechanical Systems
by Marco Favretti
Foundations 2025, 5(3), 24; https://doi.org/10.3390/foundations5030024 - 4 Jul 2025
Abstract
In this paper, we review Helmholtz–Boltzmann thermodynamics for mechanical systems depending on parameters, and we compute the Fisher information matrix for the associated probability density. The divergence of Fisher information has been used as a signal for the existence of phase transitions in finite systems even in the absence of a thermodynamic limit. We investigate through examples if qualitative changes in the dynamic of mechanical systems described by Helmholtz–Boltzmann thermodynamic formalism can be detected using Fisher information. Full article
(This article belongs to the Section Physical Sciences)
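Computing the Fisher information of a parametric density, as this paper does for the Helmholtz–Boltzmann density, can be sketched numerically: F(θ) = E[(∂θ log p(X; θ))²], approximated by a central difference in θ and a simple quadrature over x. The Helmholtz–Boltzmann density itself is not reproduced; the sketch below checks the recipe on a Gaussian location family, where the exact answer is 1/σ².

```python
import math

def fisher_information(logpdf, theta, xs, h=1e-5):
    # F(theta) = integral of (d/dtheta log p(x; theta))^2 * p(x; theta) dx,
    # with the score from a central difference and a Riemann sum over xs.
    dx = xs[1] - xs[0]
    total = 0.0
    for x in xs:
        score = (logpdf(x, theta + h) - logpdf(x, theta - h)) / (2.0 * h)
        total += score * score * math.exp(logpdf(x, theta)) * dx
    return total

def gauss_logpdf(x, mu, sigma=1.0):
    # Log-density of N(mu, sigma^2); exact Fisher information is 1/sigma^2.
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

xs = [-8.0 + 0.01 * i for i in range(1601)]
F = fisher_information(gauss_logpdf, 0.0, xs)   # should be close to 1.0
```

Tracking such a numeric F(θ) along a parameter sweep, and watching for divergence, is the kind of phase-transition signal the abstract describes.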

17 pages, 572 KiB  
Article
Statistical Analysis Under a Random Censoring Scheme with Applications
by Mustafa M. Hasaballah and Mahmoud M. Abdelwahab
Symmetry 2025, 17(7), 1048; https://doi.org/10.3390/sym17071048 - 3 Jul 2025
Abstract
The Gumbel Type-II distribution is a widely recognized and frequently utilized lifetime distribution, playing a crucial role in reliability engineering. This paper focuses on the statistical inference of the Gumbel Type-II distribution under a random censoring scheme. From a frequentist perspective, point estimates for the unknown parameters are derived using the maximum likelihood estimation method, and confidence intervals are constructed based on the Fisher information matrix. From a Bayesian perspective, Bayes estimates of the parameters are obtained using the Markov Chain Monte Carlo method, and the average lengths of credible intervals are calculated. The Bayesian inference is performed under both the squared error loss function and the general entropy loss function. Additionally, a numerical simulation is conducted to evaluate the performance of the proposed methods. To demonstrate their practical applicability, a real-world example is provided, illustrating the application and development of these inference techniques. In conclusion, the Bayesian method appears to outperform other approaches, although each method offers unique advantages. Full article

27 pages, 1182 KiB  
Article
The New Gompertz Distribution Model and Applications
by Ayşe Metin Karakaş and Fatma Bulut
Symmetry 2025, 17(6), 843; https://doi.org/10.3390/sym17060843 - 28 May 2025
Abstract
The Gompertz distribution has long been a cornerstone for analyzing growth processes and mortality patterns across various scientific disciplines. However, as the intricacies of real-world phenomena evolve, there is a pressing need for more versatile probability distributions that can accurately capture a wide array of data characteristics. In response to this demand, we introduce the Marshall–Olkin Power Gompertz (MOPG) distribution, an innovative and powerful extension of the traditional Gompertz model. The MOPG distribution is crafted by enhancing the Power Gompertz cumulative distribution function through the Marshall–Olkin transformation. This distribution offers two pivotal contributions: a power parameter (c) that significantly increases the model's adaptability to diverse data patterns, and the Marshall–Olkin transformation, which modifies tail behavior to enhance predictive accuracy. Furthermore, we derive the distribution's essential statistical properties and evaluate its performance through extensive Monte Carlo simulations, along with maximum likelihood estimation of the model parameters. Our empirical validation, utilizing three real-world data sets, demonstrates that the MOPG distribution not only surpasses several well-established lifetime distributions but is also superior in flexibility and tail behavior characterization. The results highlight the proposed MOPG as a superior choice, delivering the most precise fit to the data among the competing models considered. Full article
(This article belongs to the Section Mathematics)
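The Marshall–Olkin transformation applied to a survival function is a one-liner and worth making concrete. The paper's exact Power Gompertz parameterization is not given in this listing, so the base distribution below is an assumed form (plain Gompertz survival with the power applied to x); only the `marshall_olkin_sf` transform itself is the standard, well-known construction.

```python
import math

def gompertz_sf(x, b=1.0, c=0.5, p=1.0):
    # Assumed (power-)Gompertz survival: exp(-(b/c) * (exp(c * x^p) - 1)).
    # p = 1 recovers the plain Gompertz; the exact MOPG form may differ.
    return math.exp(-(b / c) * (math.exp(c * x**p) - 1.0))

def marshall_olkin_sf(base_sf, alpha):
    # Standard Marshall-Olkin transform of a survival function:
    # G_bar(x) = alpha * F_bar(x) / (1 - (1 - alpha) * F_bar(x)), alpha > 0.
    # alpha = 1 returns the base distribution; alpha reshapes the tail.
    def sf(x):
        fb = base_sf(x)
        return alpha * fb / (1.0 - (1.0 - alpha) * fb)
    return sf

mopg_sf = marshall_olkin_sf(lambda x: gompertz_sf(x, p=1.5), alpha=2.0)
```

The transform preserves the support and monotonicity of the base survival function while introducing the tilt parameter alpha that governs tail heaviness, which is the flexibility gain the abstract emphasizes.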

27 pages, 993 KiB  
Article
Statistical Inference of Inverse Weibull Distribution Under Joint Progressive Censoring Scheme
by Jinchen Xiang, Yuanqi Wang and Wenhao Gui
Symmetry 2025, 17(6), 829; https://doi.org/10.3390/sym17060829 - 26 May 2025
Abstract
In recent years, there has been an increasing interest in the application of progressive censoring as a means to reduce both cost and experiment duration. In the absence of explanatory variables, the present study employs a statistical inference approach for the inverse Weibull distribution, using a progressive type II censoring strategy with two independent samples. The article expounds on the maximum likelihood estimation method, utilizing the Fisher information matrix to derive approximate confidence intervals. Moreover, interval estimations are computed by the bootstrap method. We explore the application of Bayesian methods for estimating model parameters under both the squared error and LINEX loss functions. The Bayesian estimates and corresponding credible intervals are calculated via Markov chain Monte Carlo (MCMC). Finally, comprehensive simulation studies and real data analysis are carried out to validate the precision of the proposed estimation methods. Full article
(This article belongs to the Section Mathematics)
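The bootstrap interval estimation mentioned alongside the Fisher-matrix intervals can be sketched in a few lines. This is the generic percentile bootstrap, not the paper's specific censored-data variant; names and data are illustrative.

```python
import random
import statistics

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample with replacement, re-estimate on each
    # resample, and take empirical quantiles of the bootstrap distribution.
    rng = random.Random(seed)
    stats = sorted(
        estimator([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * (alpha / 2.0))]
    hi = stats[int(n_boot * (1.0 - alpha / 2.0)) - 1]
    return lo, hi

data = [1.0, 2.0, 3.0, 4.0, 5.0]          # toy sample
lo, hi = bootstrap_ci(data, statistics.mean)
```

Unlike the asymptotic intervals built from the Fisher information matrix, the bootstrap makes no normality assumption, which is why papers such as this one report both for comparison.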

25 pages, 873 KiB  
Article
Statistical Inference for Two Lomax Populations Under Balanced Joint Progressive Type-II Censoring Scheme
by Yuanqi Wang, Jinchen Xiang and Wenhao Gui
Mathematics 2025, 13(9), 1536; https://doi.org/10.3390/math13091536 - 7 May 2025
Abstract
In recent years, joint censoring schemes have gained significant attention in lifetime experiments and reliability analysis. A refined approach, known as the balanced joint progressive censoring scheme, has been introduced in statistical studies. This research focuses on statistical inference for two Lomax populations under this censoring framework. Maximum likelihood estimation is employed to derive parameter estimates, and asymptotic confidence intervals are constructed using the observed Fisher information matrix. From a Bayesian standpoint, posterior estimates of the unknown parameters are obtained under informative prior assumptions. To evaluate the effectiveness and precision of these estimators, a numerical study is conducted. Additionally, a real dataset is analyzed to demonstrate the practical application of these estimation methods. Full article
(This article belongs to the Section D1: Probability and Statistics)

24 pages, 1017 KiB  
Article
Parametric Estimation and Analysis of Lifetime Models with Competing Risks Under Middle-Censored Data
by Shan Liang and Wenhao Gui
Appl. Sci. 2025, 15(8), 4288; https://doi.org/10.3390/app15084288 - 13 Apr 2025
Abstract
Middle-censoring is a general censoring mechanism. In middle-censoring, the exact lifetimes are observed only for a portion of the units; for the others, only the random interval within which the failure occurs is known. In this study, we focus on statistical inference for middle-censored data with competing risks. The latent failure times are assumed to be independent and to follow Burr-XII distributions with distinct parameters. To begin with, we derive the maximum likelihood estimators for the unknown parameters, proving their existence and uniqueness. Additionally, asymptotic confidence intervals are constructed using the observed Fisher information matrix. Furthermore, Bayesian estimates under the squared error loss function and the corresponding highest posterior density intervals are obtained through the Gibbs sampling method. A simulation study is carried out to assess the performance of all proposed estimators. Lastly, an analysis of a practical dataset is provided to demonstrate the inferential processes developed. Full article

32 pages, 3983 KiB  
Article
Parameter Estimation Precision with Geocentric Gravitational Wave Interferometers: Monochromatic Signals
by Manoel Felipe Sousa, Tabata Aira Ferreira and Massimo Tinto
Universe 2025, 11(4), 122; https://doi.org/10.3390/universe11040122 - 7 Apr 2025
Abstract
We present a Fisher information matrix study of the parameter estimation precision achievable by a class of future space-based, “mid-band”, gravitational wave interferometers observing monochromatic signals. The mid-band is the frequency region between that accessible by the Laser Interferometer Space Antenna (LISA) and ground-based interferometers. We analyze monochromatic signals observed by the TianQin mission, gLISA (a LISA-like interferometer in a geosynchronous orbit) and a descoped gLISA mission, gLISAd, characterized by an acceleration noise level that is three orders of magnitude worse than that of gLISA. We find that all three missions achieve their best angular source reconstruction precision in the higher part of their accessible frequency band, with an error box better than 10^-10 sr in the frequency band [10^-1, 10] Hz when observing a monochromatic gravitational wave signal of amplitude h_0 = 10^-21 that is incoming from a given direction. In terms of their reconstructed frequencies and amplitudes, TianQin achieves its best precision values in both quantities in the frequency band [10^-2, 4×10^-1] Hz, with a frequency precision σ_fgw = 2×10^-11 Hz and an amplitude precision σ_h0 = 2×10^-24. gLISA matches these precisions in a frequency band slightly higher than that of TianQin, [3×10^-2, 1] Hz, as a consequence of its smaller arm length. gLISAd, on the other hand, matches the performance of gLISA only over the narrower frequency region, [7×10^-1, 1] Hz, as a consequence of its higher acceleration noise at lower frequencies. The angular, frequency, and amplitude precisions as functions of the source sky location are then derived by assuming an average signal-to-noise ratio of 10 at a selected number of gravitational wave frequencies covering the operational bandwidth of TianQin and gLISA. Similar precision functions are then derived for gLISAd by using the amplitudes resulting in the gLISA average SNR being equal to 10 at the selected frequencies. We find that, for any given source location, all three missions display a marked precision improvement in the three reconstructed parameters at higher gravitational wave frequencies. Full article
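The machinery behind such studies is compact: build the Fisher matrix from the partial derivatives of the signal template with respect to its parameters, invert it, and read 1-sigma precisions off the diagonal. The sketch below is a deliberately simplified stand-in (white noise, a single data stream, numerical derivatives), not the missions' actual response functions; all names are illustrative.

```python
import numpy as np

def fisher_matrix(model, params, t, sigma_n, h=1e-7):
    # White-noise Fisher matrix for a sampled template:
    # F_ij = (1/sigma_n^2) * sum_t (dh/dtheta_i)(dh/dtheta_j),
    # with derivatives taken by central differences.
    p = np.asarray(params, dtype=float)
    derivs = []
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = h
        derivs.append((model(p + dp, t) - model(p - dp, t)) / (2.0 * h))
    D = np.vstack(derivs)
    return D @ D.T / sigma_n**2

def monochromatic(p, t):
    # h(t) = A * sin(2 pi f t + phi): amplitude, frequency, phase.
    A, f, phi = p
    return A * np.sin(2.0 * np.pi * f * t + phi)

t = np.linspace(0.0, 10.0, 2000)
F = fisher_matrix(monochromatic, (1.0, 0.5, 0.3), t, sigma_n=0.1)
precisions = np.sqrt(np.diag(np.linalg.inv(F)))  # 1-sigma errors (A, f, phi)
```

The paper's σ_fgw and σ_h0 are exactly such square-rooted diagonal entries of the inverse Fisher matrix, computed with the full detector response and noise spectral densities in place of the white-noise toy model here.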

18 pages, 9803 KiB  
Article
Probabilistic Small-Signal Modeling and Stability Analysis of the DC Distribution System
by Wenlong Liu, Bo Zhang, Zimeng Lu, Yuming Liao and Heng Nian
Energies 2025, 18(5), 1196; https://doi.org/10.3390/en18051196 - 28 Feb 2025
Abstract
With the advent of large-scale transportation electrification, the construction of electric vehicle charging stations (EVCSs) has increased. The stochastic nature of EVCS charging power creates a risk of destabilizing the DC distribution network when the degree of power electronification is high. Existing deterministic stability analysis methods are too complicated to concisely describe the effect of the probabilistic characteristics of EVCSs on stability. This paper develops a probabilistic small-signal stability analysis method. Firstly, the probabilistic information of the system is obtained by combining the s-domain nodal impedance matrix with the point estimation method. Then, the probability function of stability is fitted using the Cornish–Fisher expansion method. Finally, a comparison experiment using Monte Carlo simulation demonstrates that this method balances accuracy and computational efficiency well. The effects of line parameters and system control parameters on stability are investigated within the framework of probabilistic stability. This provides a probabilistic perspective on the design of more complex power systems in the future. Full article
(This article belongs to the Section F1: Electrical Power System)
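The Cornish–Fisher step used above has a standard closed form: correct a normal quantile z for skewness and excess kurtosis, then rescale by the mean and standard deviation. The sketch below is the textbook third/fourth-order expansion, not the paper's full fitting procedure; the function name is illustrative.

```python
def cornish_fisher_quantile(z, mean, std, skew, ex_kurt):
    # Fourth-order Cornish-Fisher expansion: adjust the standard-normal
    # quantile z for skewness (skew) and excess kurtosis (ex_kurt),
    # then map back to the target distribution's scale.
    w = (z
         + (z**2 - 1.0) * skew / 6.0
         + (z**3 - 3.0 * z) * ex_kurt / 24.0
         - (2.0 * z**3 - 5.0 * z) * skew**2 / 36.0)
    return mean + std * w
```

With zero skewness and zero excess kurtosis the expansion collapses to the plain normal quantile; positive skewness stretches the right tail, which is how the method captures the asymmetric stability-margin distributions induced by stochastic EVCS loads.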

49 pages, 3741 KiB  
Review
Optimal Sensor Placement for Structural Health Monitoring: A Comprehensive Review
by Zhiyan Sun, Mojtaba Mahmoodian, Amir Sidiq, Sanduni Jayasinghe, Farham Shahrivar and Sujeeva Setunge
J. Sens. Actuator Netw. 2025, 14(2), 22; https://doi.org/10.3390/jsan14020022 - 20 Feb 2025
Abstract
The structural health monitoring (SHM) of bridge infrastructure has become essential for ensuring safety, serviceability, and long-term functionality amid aging structures and increasing load demands. SHM leverages sensor networks to enable real-time data acquisition, damage detection, and predictive maintenance, offering a more reliable alternative to traditional visual inspection methods. A key challenge in SHM is optimal sensor placement (OSP), which directly impacts monitoring accuracy, cost-efficiency, and overall system performance. This review explores recent advancements in SHM techniques, sensor technologies, and OSP methodologies, with a primary focus on bridge infrastructure. It evaluates sensor configuration strategies based on criteria such as the modal assurance criterion (MAC) and mean square error (MSE) while examining optimisation approaches like the Effective Independence (EI) method, Kinetic Energy Optimisation (KEO), and their advanced variants. Despite these advancements, several research gaps remain. Future studies should focus on scalable OSP strategies for large-scale bridge networks, integrating machine learning (ML) and artificial intelligence (AI) for adaptive sensor deployment. The implementation of digital twin (DT) technology in SHM can enhance predictive maintenance and real-time decision-making, improving long-term infrastructure resilience. Additionally, research on sensor robustness against environmental noise and external disturbances, as well as the integration of edge computing and wireless sensor networks (WSNs) for efficient data transmission, will be critical in advancing SHM applications. This review provides critical insights and recommendations to bridge the gap between theoretical innovations and real-world implementation, ensuring the effective monitoring and maintenance of bridge infrastructure in modern civil engineering. Full article
(This article belongs to the Section Actuators, Sensors and Devices)

40 pages, 5018 KiB  
Article
Global Dense Vector Representations for Words or Items Using Shared Parameter Alternating Tweedie Model
by Taejoon Kim and Haiyan Wang
Mathematics 2025, 13(4), 612; https://doi.org/10.3390/math13040612 - 13 Feb 2025
Abstract
In this article, we present a model for analyzing co-occurrence count data from practical fields such as user–item or item–item data on online shopping platforms and co-occurring word–word pairs in text sequences. Such data contain important information for developing recommender systems or studying the relevance of items or words from non-numerical sources. Unlike traditional regression models, there are no observed covariates. Additionally, the co-occurrence matrix is typically of such high dimension that it does not fit into a computer's memory for modeling. We extract numerical data by defining windows of co-occurrence using weighted counts on the continuous scale. Positive probability mass is allowed for zero observations. We present the Shared Parameter Alternating Tweedie (SA-Tweedie) model and an algorithm to estimate its parameters. We introduce a learning rate adjustment used along with the Fisher scoring method in the inner loop to keep the algorithm on track with the optimization direction. Gradient descent with the Adam update was also considered as an alternative estimation method. Simulation studies showed that our algorithm with Fisher scoring and learning rate adjustment outperforms the other two methods. We applied SA-Tweedie to English-language Wikipedia dump data to obtain dense vector representations for WordPiece tokens. The vector representation embeddings were then used in an application to the Named Entity Recognition (NER) task. The SA-Tweedie embeddings significantly outperform GloVe, random, and BERT embeddings in the NER task. A notable strength of the SA-Tweedie embedding is that its number of parameters and training cost are only a tiny fraction of those for BERT. Full article
(This article belongs to the Special Issue High-Dimensional Data Analysis and Applications)
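Fisher scoring with a learning rate, the inner-loop update this abstract describes, takes the form beta ← beta + lr · I(beta)⁻¹ U(beta), where U is the score and I the expected Fisher information. The SA-Tweedie updates themselves are not reproduced; this assumed sketch applies the same scheme to logistic regression, where the expected and observed information coincide, with invented names and toy data.

```python
import numpy as np

def fisher_scoring_logistic(X, y, lr=1.0, n_iter=50):
    # Fisher scoring: beta <- beta + lr * I(beta)^{-1} U(beta).
    # For logistic regression: U = X^T (y - p), I = X^T W X with
    # W = diag(p * (1 - p)); lr < 1 damps steps that overshoot.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        score = X.T @ (y - p)
        W = p * (1.0 - p)
        info = X.T @ (X * W[:, None])
        beta = beta + lr * np.linalg.solve(info, score)
    return beta

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])      # non-separable toy labels
beta = fisher_scoring_logistic(X, y, lr=0.8)
```

At convergence the score vanishes; the damped learning rate trades a slower per-step contraction for robustness early in the optimization, which matches the abstract's motivation for adjusting it.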

25 pages, 3165 KiB  
Article
Estimation and Bayesian Prediction for the Generalized Exponential Distribution Under Type-II Censoring
by Wei Wang and Wenhao Gui
Symmetry 2025, 17(2), 222; https://doi.org/10.3390/sym17020222 - 2 Feb 2025
Abstract
This research focuses on the prediction and estimation problems for the generalized exponential distribution under Type-II censoring. Firstly, maximum likelihood estimations for the parameters of the generalized exponential distribution are computed using the EM algorithm. Additionally, confidence intervals derived from the Fisher information matrix are developed and analyzed alongside two bootstrap confidence intervals for comparison. Compared to classical maximum likelihood estimation, Bayesian inference proves to be highly effective in handling censored data. This study explores Bayesian inference for estimating the unknown parameters, considering both symmetrical and asymmetrical loss functions. Utilizing Gibbs sampling to produce Markov Chain Monte Carlo samples, we employ an importance sampling approach to obtain Bayesian estimates and compute the corresponding highest posterior density (HPD) intervals. Furthermore, for one-sample prediction and, separately, for the two-sample case, we provide the corresponding posterior distributions, along with methods for computing point predictions and predictive intervals. Through Monte Carlo simulations, we evaluate the performance of Bayesian estimation in contrast to maximum likelihood estimation. Finally, we conduct an analysis of a real dataset derived from deep groove ball bearings, calculating Bayesian point predictions and predictive intervals for future samples. Full article
(This article belongs to the Special Issue Bayesian Statistical Methods for Forecasting)
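The EM algorithm for Type-II censored data has a particularly transparent form in the exponential special case, which makes a good sketch of the E-step/M-step loop this abstract applies to the generalized exponential model (whose updates are not reproduced here; names and data below are illustrative).

```python
def em_censored_exponential(obs, n_total, lam0=1.0, tol=1e-12, max_iter=500):
    # Type-II censoring: only the r smallest lifetimes in obs are observed;
    # the remaining n_total - r units survived past x_r = max(obs).
    # E-step: impute each censored lifetime by its conditional mean,
    # x_r + 1/lam (memorylessness of the exponential).
    # M-step: complete-data MLE, lam = n / (sum of all lifetimes).
    r, s, x_r = len(obs), sum(obs), max(obs)
    lam = lam0
    for _ in range(max_iter):
        imputed = (n_total - r) * (x_r + 1.0 / lam)   # E-step
        new_lam = n_total / (s + imputed)             # M-step
        if abs(new_lam - lam) < tol:
            return new_lam
        lam = new_lam
    return lam

obs = [0.2, 0.5, 0.9]                   # three smallest of six lifetimes
lam_hat = em_censored_exponential(obs, n_total=6)
# For this case the fixed point has the closed form
# r / (sum(obs) + (n_total - r) * max(obs)) = 3 / 4.3.
```

The iteration converges to the known censored-data MLE, which is a useful sanity check before moving to models like the generalized exponential where no closed form exists.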
