Search Results (22)

Search Parameters:
Keywords = Bayes–Fisher information

17 pages, 572 KiB  
Article
Statistical Analysis Under a Random Censoring Scheme with Applications
by Mustafa M. Hasaballah and Mahmoud M. Abdelwahab
Symmetry 2025, 17(7), 1048; https://doi.org/10.3390/sym17071048 - 3 Jul 2025
Cited by 1 | Viewed by 259
Abstract
The Gumbel Type-II distribution is a widely recognized and frequently utilized lifetime distribution, playing a crucial role in reliability engineering. This paper focuses on the statistical inference of the Gumbel Type-II distribution under a random censoring scheme. From a frequentist perspective, point estimates for the unknown parameters are derived using the maximum likelihood estimation method, and confidence intervals are constructed based on the Fisher information matrix. From a Bayesian perspective, Bayes estimates of the parameters are obtained using the Markov chain Monte Carlo method, and the average lengths of credible intervals are calculated. The Bayesian inference is performed under both the squared error loss function and the general entropy loss function. Additionally, a numerical simulation is conducted to evaluate the performance of the proposed methods. To demonstrate their practical applicability, a real-world example is provided, illustrating the application and development of these inference techniques. In conclusion, the Bayesian method appears to outperform other approaches, although each method offers unique advantages. Full article

22 pages, 3351 KiB  
Article
Distinguishing Compact Objects in Extreme-Mass-Ratio Inspirals by Gravitational Waves
by Lujia Xu, Shucheng Yang, Wenbiao Han, Xingyu Zhong, Rundong Tang and Yuanhao Zhang
Universe 2025, 11(1), 18; https://doi.org/10.3390/universe11010018 - 13 Jan 2025
Cited by 2 | Viewed by 1378
Abstract
Extreme-mass-ratio inspirals (EMRIs) are promising gravitational-wave (GW) sources for space-based GW detectors. EMRI signals typically have long durations, ranging from several months to several years, necessitating highly accurate GW signal templates for detection. In most waveform models, compact objects in EMRIs are treated as test particles without accounting for their spin, mass quadrupole, or tidal deformation. In this study, we simulate GW signals from EMRIs by incorporating the spin and mass quadrupole moments of the compact objects. We evaluate the accuracy of parameter estimation for these simulated waveforms using the Fisher Information Matrix (FIM) and find that the spin, tidal-induced quadrupole, and spin-induced quadrupole can all be measured with precision ranging from 10⁻² to 10⁻¹, particularly for a mass ratio of ∼10⁴. Assuming the “true” GW signals originate from an extended body inspiraling into a supermassive black hole, we compute the signal-to-noise ratio (SNR) and Bayes factors between a test-particle waveform template and our model, which includes the spin and quadrupole of the compact object. Our results show that the spin of compact objects can produce detectable deviations in the waveforms across all object types, while tidal-induced quadrupoles are only significant for white dwarfs, especially in cases approaching an intermediate-mass ratio. Spin-induced quadrupoles, however, have negligible effects on the waveforms. Therefore, our findings suggest that it is possible to distinguish primordial black holes from white dwarfs, and, under certain conditions, neutron stars can also be differentiated from primordial black holes. Full article
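A minimal Fisher-information-matrix forecast of parameter precision, in the spirit of the FIM analysis above but on a toy sinusoid in white noise rather than an EMRI waveform; the signal model, noise level, and finite-difference step are assumptions. Γ_ij = Σ_t (∂h/∂θ_i)(∂h/∂θ_j)/σ², and the inverse gives the Cramér–Rao error forecast.

```python
import numpy as np

# Toy "waveform" h(t; A, f, phi) sampled in white noise of std sigma.
t = np.linspace(0.0, 10.0, 4000)
sigma = 0.1

def h(theta):
    A, f, phi = theta
    return A * np.sin(2 * np.pi * f * t + phi)

def fisher(theta, eps=1e-6):
    grads = []
    for i in range(len(theta)):
        dp = np.array(theta, float); dm = np.array(theta, float)
        dp[i] += eps; dm[i] -= eps
        grads.append((h(dp) - h(dm)) / (2 * eps))   # central difference
    G = np.array(grads)
    return G @ G.T / sigma**2                        # Fisher matrix

theta0 = [1.0, 0.5, 0.3]
F = fisher(theta0)
cov = np.linalg.inv(F)
errors = np.sqrt(np.diag(cov))                       # 1-sigma forecast errors
print(errors)
```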

15 pages, 331 KiB  
Article
Analyzing Sample Size in Information-Theoretic Models
by D. Bernal-Casas and J. M. Oller
Mathematics 2024, 12(24), 4018; https://doi.org/10.3390/math12244018 - 21 Dec 2024
Viewed by 735
Abstract
In this paper, we delve into the complexities of information-theoretic models, specifically focusing on how we can model sample size and how it affects our previous findings. This question is fundamental and intricate, posing a significant intellectual challenge to our research. While previous studies have considered a fixed sample size, this work explores other possible alternatives to assess its impact on the mathematical approach. To ensure that our framework aligns with the principles of quantum theory, specific conditions related to sample size must be met, as they are inherently linked to information quantities. The arbitrary nature of sample size presents a significant challenge in achieving this alignment, which we thoroughly investigate in this study. Full article
27 pages, 699 KiB  
Article
Estimating the Lifetime Parameters of the Odd-Generalized-Exponential–Inverse-Weibull Distribution Using Progressive First-Failure Censoring: A Methodology with an Application
by Mahmoud M. Ramadan, Rashad M. EL-Sagheer and Amel Abd-El-Monem
Axioms 2024, 13(12), 822; https://doi.org/10.3390/axioms13120822 - 25 Nov 2024
Cited by 2 | Viewed by 968
Abstract
This paper investigates statistical methods for estimating unknown lifetime parameters using a progressive first-failure censoring dataset. The failure mode’s lifetime distribution is modeled by the odd-generalized-exponential–inverse-Weibull distribution. Maximum-likelihood estimators for the model parameters, including the survival, hazard, and inverse hazard rate functions, are obtained, though they lack closed-form expressions. The Newton–Raphson method is used to compute these estimates. Confidence intervals for the parameters are approximated via the normal distribution of the maximum-likelihood estimation. The Fisher information matrix is derived using the missing information principle, and the delta method is applied to approximate the confidence intervals for the survival, hazard rate, and inverse hazard rate functions. Bayes estimators are calculated with the squared error, linear exponential, and general entropy loss functions, utilizing independent gamma distributions for informative priors. Markov-chain Monte Carlo sampling provides the highest-posterior-density credible intervals and Bayesian point estimates for the parameters and reliability characteristics. This study evaluates these methods through Monte Carlo simulations, comparing Bayes and maximum-likelihood estimates based on mean squared errors for point estimates, average interval widths, and coverage probabilities for interval estimators. A real dataset is also analyzed to illustrate the proposed methods. Full article
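The delta-method step described above can be sketched generically: Var(S(t; θ̂)) ≈ ∇S·Σ·∇S, where Σ is the MLE covariance. Here a two-parameter Weibull survival function and a made-up covariance matrix stand in for the odd-generalized-exponential–inverse-Weibull fit; all numbers are illustrative.

```python
import numpy as np

# Generic delta-method interval for a survival estimate S(t; theta_hat).
def S(t, theta):
    shape, scale = theta
    return np.exp(-(t / scale) ** shape)

def grad_S(t, theta, eps=1e-6):
    g = np.zeros(len(theta))
    for i in range(len(theta)):
        tp = np.array(theta, float); tm = np.array(theta, float)
        tp[i] += eps; tm[i] -= eps
        g[i] = (S(t, tp) - S(t, tm)) / (2 * eps)   # numerical gradient
    return g

theta_hat = np.array([1.8, 2.5])                 # hypothetical MLEs
Sigma = np.array([[0.02, 0.004], [0.004, 0.03]]) # hypothetical inverse Fisher info
t0 = 2.0
s_hat = S(t0, theta_hat)
g = grad_S(t0, theta_hat)
se = np.sqrt(g @ Sigma @ g)                      # delta-method standard error
lo, hi = s_hat - 1.96 * se, s_hat + 1.96 * se    # 95% Wald interval
print(round(s_hat, 3), round(lo, 3), round(hi, 3))
```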

16 pages, 533 KiB  
Article
On a Randomly Censoring Scheme for Generalized Logistic Distribution with Applications
by Mustafa M. Hasaballah, Oluwafemi Samson Balogun and Mahmoud E. Bakr
Symmetry 2024, 16(9), 1240; https://doi.org/10.3390/sym16091240 - 21 Sep 2024
Cited by 4 | Viewed by 945
Abstract
In this paper, we investigate the inferential procedures within both classical and Bayesian frameworks for the generalized logistic distribution under a random censoring model. For randomly censored data, our main goals were to develop maximum likelihood estimators and construct confidence intervals using the Fisher information matrix for the unknown parameters. Additionally, we developed Bayes estimators with gamma priors, addressing both squared error and general entropy loss functions. We also calculated Bayesian credible intervals for the parameters. These methods were applied to two real datasets with random censoring to provide valuable insights. Finally, we conducted a simulation analysis to assess the effectiveness of the estimated values. Full article
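The two loss functions mentioned lead to different point estimates from the same posterior draws: squared error gives the posterior mean, while the general entropy loss with shape c gives (E[θ^−c])^−1/c. A sketch with Gamma draws standing in for MCMC output; the posterior and c = 1 are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of a positive parameter theta
# (a Gamma(5, scale=0.4) posterior, mean 2.0, stands in for MCMC output).
draws = rng.gamma(shape=5.0, scale=0.4, size=100_000)

theta_sel = draws.mean()                          # squared-error loss: posterior mean
c = 1.0
theta_ge = np.mean(draws ** (-c)) ** (-1.0 / c)   # general entropy loss, shape c

print(theta_sel, theta_ge)
```

For c = 1 the general entropy estimator reduces to the reciprocal of the posterior mean of 1/θ, which is smaller than the posterior mean, so the two losses genuinely disagree.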

32 pages, 1967 KiB  
Article
Different Statistical Inference Algorithms for the New Pareto Distribution Based on Type-II Progressively Censored Competing Risk Data with Applications
by Essam A. Ahmed, Tariq S. Alshammari and Mohamed S. Eliwa
Mathematics 2024, 12(13), 2136; https://doi.org/10.3390/math12132136 - 7 Jul 2024
Cited by 2 | Viewed by 1335
Abstract
In this research, the statistical inference of unknown lifetime parameters is proposed in the presence of independent competing risks using a progressive Type-II censored dataset. The lifetime distribution associated with a failure mode is assumed to follow the new Pareto distribution, with consideration given to two distinct competing failure reasons. Maximum likelihood estimators (MLEs) for the unknown model parameters, as well as reliability and hazard functions, are derived, noting that they are not expressible in closed form. The Newton–Raphson, expectation maximization (EM), and stochastic expectation maximization (SEM) methods are employed to generate maximum likelihood (ML) estimations. Approximate confidence intervals for the unknown parameters, reliability, and hazard rate functions are constructed using the normal approximation of the MLEs and the normal approximation of the log-transformed MLEs. Additionally, the missing information principle is utilized to derive the closed form of the Fisher information matrix, which, in turn, is used with the delta approach to calculate confidence intervals for reliability and hazards. Bayes estimators are derived under both symmetric and asymmetric loss functions, with informative and non-informative priors considered, including independent gamma distributions for informative priors. The Markov chain Monte Carlo sampling approach is employed to obtain the highest posterior density credible intervals and Bayesian point estimates for unknown parameters and reliability characteristics. A Monte Carlo simulation is conducted to assess the effectiveness of the proposed techniques, with the performances of the Bayes and maximum likelihood estimations examined using average values and mean squared errors as benchmarks. Interval estimations are compared in terms of average lengths and coverage probabilities. Real datasets are considered and examined for each topic to provide illustrative examples. Full article
(This article belongs to the Special Issue Application of the Bayesian Method in Statistical Modeling)
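The EM idea used above can be illustrated on a simpler model than the new Pareto distribution: for right-censored exponential data, the E-step imputes each censored lifetime by its conditional expectation t_i + 1/λ (memorylessness), and the M-step is the complete-data MLE. Simulation settings are assumed for the sketch; the fixed point coincides with the closed-form censored MLE.

```python
import numpy as np

rng = np.random.default_rng(2)

# Right-censored exponential sample: rate lam_true, exponential censoring.
lam_true = 0.8
n = 500
x = rng.exponential(1 / lam_true, n)
c = rng.exponential(2.0, n)
t = np.minimum(x, c)
d = x <= c                      # True = failure observed

lam = 1.0                       # starting value
for _ in range(200):
    # E-step: expected complete-data total time-on-test
    total = t[d].sum() + (t[~d] + 1.0 / lam).sum()
    # M-step: complete-data exponential MLE
    lam = n / total

# The EM fixed point equals the closed-form censored MLE d.sum()/t.sum()
print(lam, d.sum() / t.sum())
```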

14 pages, 310 KiB  
Article
Intrinsic Information-Theoretic Models
by D. Bernal-Casas and J. M. Oller
Entropy 2024, 26(5), 370; https://doi.org/10.3390/e26050370 - 28 Apr 2024
Cited by 2 | Viewed by 2005
Abstract
With this follow-up paper, we continue developing a mathematical framework based on information geometry for representing physical objects. The long-term goal is to lay down informational foundations for physics, especially quantum physics. We assume that we can now model information sources as univariate normal probability distributions N(μ, σ₀), as before, but with a constant σ₀ not necessarily equal to 1. Then, we also relaxed the independence condition when modeling m sources of information. Now, we model m sources with a multivariate normal probability distribution Nₘ(μ, Σ₀) with a constant variance–covariance matrix Σ₀ not necessarily diagonal, i.e., with covariance values different from 0, which leads to the concept of modes rather than sources. Invoking Schrödinger’s equation, we can still break the information into m quantum harmonic oscillators, one for each mode, and with energy levels independent of the values of σ₀, altogether leading to the concept of “intrinsic”. Similarly, as in our previous work with the estimator’s variance, we found that the expectation of the quadratic Mahalanobis distance to the sample mean equals the energy levels of the quantum harmonic oscillator, being the minimum quadratic Mahalanobis distance at the minimum energy level of the oscillator and reaching the “intrinsic” Cramér–Rao lower bound at the lowest energy level. Also, we demonstrate that the global probability density function of the collective mode of a set of m quantum harmonic oscillators at the lowest energy level still equals the posterior probability distribution calculated using Bayes’ theorem from the sources of information for all data values, taking as a prior the Riemannian volume of the informative metric. While these new assumptions certainly add complexity to the mathematical framework, the results proven are invariant under transformations, leading to the concept of “intrinsic” information-theoretic models, which are essential for developing physics.
Full article
10 pages, 283 KiB  
Article
Information-Theoretic Models for Physical Observables
by D. Bernal-Casas and J. M. Oller
Entropy 2023, 25(10), 1448; https://doi.org/10.3390/e25101448 - 14 Oct 2023
Cited by 3 | Viewed by 2077
Abstract
This work addresses J.A. Wheeler’s critical idea that all things physical are information-theoretic in origin. In this paper, we introduce a novel mathematical framework based on information geometry, using the Fisher information metric as a particular Riemannian metric, defined in the parameter space of a smooth statistical manifold of normal probability distributions. Following this approach, we study the stationary states with the time-independent Schrödinger’s equation to discover that the information could be represented and distributed over a set of quantum harmonic oscillators, one for each independent source of data, whose coordinate for each oscillator is a parameter of the smooth statistical manifold to estimate. We observe that the estimator’s variance equals the energy levels of the quantum harmonic oscillator, proving that the estimator’s variance is definitively quantized, being the minimum variance at the minimum energy level of the oscillator. Interestingly, we demonstrate that quantum harmonic oscillators reach the Cramér–Rao lower bound on the estimator’s variance at the lowest energy level. In parallel, we find that the global probability density function of the collective mode of a set of quantum harmonic oscillators at the lowest energy level equals the posterior probability distribution calculated using Bayes’ theorem from the sources of information for all data values, taking as a prior the Riemannian volume of the informative metric. Interestingly, the opposite is also true, as the prior is constant. Altogether, these results suggest that we can break the sources of information into little elements: quantum harmonic oscillators, with the square modulus of the collective mode at the lowest energy representing the most likely reality, supporting A. Zeilinger’s recent statement that the world is not broken into physical but informational parts. Full article
21 pages, 2923 KiB  
Article
Estimation and Prediction for Alpha-Power Weibull Distribution Based on Hybrid Censoring
by Ehab M. Almetwally, Refah Alotaibi and Hoda Rezk
Symmetry 2023, 15(9), 1687; https://doi.org/10.3390/sym15091687 - 2 Sep 2023
Cited by 5 | Viewed by 1457
Abstract
This work discusses the issues of estimation and prediction when lifespan data following alpha-power Weibull distribution are observed under Type II hybrid censoring. We calculate point and related interval estimates for both issues using both non-Bayesian and Bayesian methods. Using the Newton–Raphson technique under the classical approach, we compute maximum likelihood estimates for point estimates in the estimation problem. Under the Bayesian approach, we compute Bayes estimates under informative and non-informative priors using the symmetric loss function. Using the Fisher information matrix under classical and Bayesian techniques, the corresponding interval estimates are derived. Additionally, using the best unbiased and conditional median predictors under the classical approach, as well as Bayesian predictive and associated Bayesian predictive interval estimates in the prediction approach, the predictive point estimates and associated predictive interval estimates are computed. We compare several suggested approaches of estimation and prediction using real data sets and Monte Carlo simulation studies. A conclusion is provided. Full article
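The Type-II hybrid censoring mechanism itself can be sketched in a few lines: the test stops at T* = max(x₍ᵣ₎, T), which guarantees at least r observed failures. Exponential lifetimes and the values of n, r, T below are illustrative assumptions, not the paper's alpha-power Weibull setting.

```python
import numpy as np

rng = np.random.default_rng(3)

# n units on test; Type-II hybrid censoring stops at max(r-th failure, T).
n, r, T = 30, 10, 1.5
x = np.sort(rng.exponential(2.0, n))   # ordered failure times
T_star = max(x[r - 1], T)              # stopping time: at least r failures seen
observed = x[x <= T_star]              # exact failure times recorded
m = observed.size                      # number of failures (always >= r)
print(m, round(T_star, 3))
```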

21 pages, 10003 KiB  
Article
Analysis of WE Parameters of Life Using Adaptive-Progressively Type-II Hybrid Censored Mechanical Equipment Data
by Ahmed Elshahhat, Ehab M. Almetwally, Sanku Dey and Heba S. Mohammed
Axioms 2023, 12(7), 690; https://doi.org/10.3390/axioms12070690 - 16 Jul 2023
Cited by 3 | Viewed by 1507
Abstract
A new two-parameter weighted-exponential (WE) distribution, as a beneficial competitor model to other lifetime distributions, namely: generalized exponential, gamma, and Weibull distributions, is studied in the presence of adaptive progressive Type-II hybrid data. Thus, based on different frequentist and Bayesian estimation methods, we study the inferential problem of the WE parameters as well as related reliability indices, including survival and failure functions. In frequentist setups, besides the standard likelihood-based estimation, the product of spacing (PS) approach is also taken into account for estimating all unknown parameters of life. Making use of the delta method and the observed Fisher information of the frequentist estimators, approximated asymptotic confidence intervals for all unknown parameters are acquired. In Bayes methodology, from the squared-error loss with independent gamma density priors, the point and interval estimates of the unknown parameters are offered using both joint likelihood and the product of spacings functions. Because a closed solution to the Bayes estimators is not accessible, the Metropolis–Hastings sampler is presented to approximate the Bayes estimates and also to create their associated highest posterior density interval estimates. To figure out the effectiveness of the developed approaches, extensive Monte Carlo experiments are implemented. To highlight the applicability of the offered methodologies in practice, one real-life data set consisting of 30 failure times of repairable mechanical equipment is analyzed. This application demonstrated that the offered WE model provides a better fit compared to the other eight lifetime models. Full article
(This article belongs to the Special Issue Mathematical and Statistical Methods and Their Applications)
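The product-of-spacings idea can be sketched in one dimension: maximize the mean log of the CDF spacings F(x₍ᵢ₎) − F(x₍ᵢ₋₁₎), with F(x₍₀₎) = 0 and F(x₍ₙ₊₁₎) = 1. An exponential model stands in here for the two-parameter WE distribution, and the data are simulated, so this is an illustration of the PS principle only.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(4)

x = np.sort(rng.exponential(scale=2.0, size=200))   # ordered sample

def neg_log_spacings(scale):
    F = expon.cdf(x, scale=scale)
    # spacings of the fitted CDF, padded with 0 and 1 at the ends
    spacings = np.diff(np.concatenate([[0.0], F, [1.0]]))
    return -np.mean(np.log(np.clip(spacings, 1e-300, None)))

res = minimize_scalar(neg_log_spacings, bounds=(0.1, 10.0), method="bounded")
print(res.x)   # product-of-spacings estimate of the scale
```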

25 pages, 19007 KiB  
Article
Statistical Analysis and Applications of Adaptive Progressively Type-II Hybrid Poisson–Exponential Censored Data
by Ahmed Elshahhat and Heba S. Mohammed
Axioms 2023, 12(6), 533; https://doi.org/10.3390/axioms12060533 - 29 May 2023
Cited by 2 | Viewed by 1371
Abstract
A new two-parameter extended exponential lifetime distribution with an increasing failure rate called the Poisson–exponential (PE) model was explored. In the reliability experiments, an adaptive progressively Type-II hybrid censoring strategy is presented to improve the statistical analysis efficiency and reduce the entire test duration on a life-testing experiment. To benefit from this mechanism, this paper sought to infer the unknown parameters, as well as the reliability and failure rate function of the PE distribution using both the likelihood and product of spacings estimation procedures as a conventional view. For each unknown parameter, from both classical approaches, an approximate confidence interval based on Fisher’s information was also created. Additionally, in the Bayesian paradigm, the given classical approaches were extended to Bayes’ continuous theorem to develop the Bayes (or credible interval) estimates of the same unknown quantities. Employing the squared error loss, the Bayesian inference was developed based on independent gamma assumptions. Because of the complex nature of the posterior density, the Markov chain with the Monte Carlo methodology was used to obtain data from the whole conditional distributions and, therefore, evaluate the acquired Bayes point/interval estimates. Via extensive numerical comparisons, the performance of the estimates provided was evaluated with respect to various criteria. Among different competing progressive mechanisms, using four optimality criteria, the best censoring was suggested. Two real chemical engineering datasets were also analyzed to highlight the applicability of the acquired point and interval estimators in an actual practical scenario. Full article
(This article belongs to the Special Issue Statistical Signal Processing: Recent Advances)

10 pages, 2287 KiB  
Article
Hellinger Information Matrix and Hellinger Priors
by Arkady Shemyakin
Entropy 2023, 25(2), 344; https://doi.org/10.3390/e25020344 - 13 Feb 2023
Cited by 2 | Viewed by 1628
Abstract
Hellinger information as a local characteristic of parametric distribution families was first introduced in 2011. It is related to the much older concept of the Hellinger distance between two points in a parametric set. Under certain regularity conditions, the local behavior of the Hellinger distance is closely connected to Fisher information and the geometry of Riemann manifolds. Nonregular distributions (non-differentiable distribution densities, undefined Fisher information or densities with support depending on the parameter), including uniform, require using analogues or extensions of Fisher information. Hellinger information may serve to construct information inequalities of the Cramér–Rao type, extending the lower bounds of the Bayes risk to the nonregular case. A construction of non-informative priors based on Hellinger information was also suggested by the author in 2011. Hellinger priors extend the Jeffreys rule to nonregular cases. For many examples, they are identical or close to the reference priors or probability matching priors. Most of the paper was dedicated to the one-dimensional case, but the matrix definition of Hellinger information was also introduced for higher dimensions. Conditions of existence and the nonnegative definite property of Hellinger information matrix were not discussed. Hellinger information for the vector parameter was applied by Yin et al. to problems of optimal experimental design. A special class of parametric problems was considered, requiring the directional definition of Hellinger information, but not a full construction of Hellinger information matrix. In the present paper, a general definition, the existence and nonnegative definite property of Hellinger information matrix is considered for nonregular settings. Full article
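The regular/nonregular contrast driving Hellinger information can be checked numerically. With affinity ρ(θ, θ′) = ∫√(f_θ f_θ′) dx and H² = 1 − ρ, the squared Hellinger distance shrinks like δ² in a regular family (matching Fisher information, here 1/8·δ² for a unit-variance normal location family), but only like |δ| for Uniform(0, θ), where Fisher information is undefined. Both affinities below use closed forms.

```python
import numpy as np

def H2_normal(delta):
    # N(0,1) vs N(delta,1): closed-form affinity exp(-delta^2/8)
    return 1.0 - np.exp(-delta**2 / 8.0)

def H2_uniform(theta, delta):
    # Uniform(0,theta) vs Uniform(0,theta+delta), delta > 0:
    # affinity = theta / sqrt(theta*(theta+delta)) over the overlap
    return 1.0 - np.sqrt(theta / (theta + delta))

for delta in (0.1, 0.01):
    # regular: H^2/delta^2 -> 1/8;  nonregular: H^2/|delta| -> 1/(2*theta)
    print(delta, H2_normal(delta) / delta**2, H2_uniform(1.0, delta) / delta)
```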

22 pages, 1007 KiB  
Article
Survival Analysis of Type-II Lehmann Fréchet Parameters via Progressive Type-II Censoring with Applications
by Ahmed Elshahhat, Ritwik Bhattacharya and Heba S. Mohammed
Axioms 2022, 11(12), 700; https://doi.org/10.3390/axioms11120700 - 7 Dec 2022
Cited by 6 | Viewed by 1741
Abstract
A new three-parameter Type-II Lehmann Fréchet distribution (LFD-TII), as a reparameterized version of the Kumaraswamy–Fréchet distribution, is considered. In this study, using progressive Type-II censoring, different estimation methods of the LFD-TII parameters and its lifetime functions, namely, reliability and hazard functions, are considered. In a frequentist setup, both the likelihood and product of the spacing estimators of the considered parameters are obtained utilizing the Newton–Raphson method. From the normality property of the proposed classical estimators, based on Fisher’s information and the delta method, the asymptotic confidence interval for any unknown parametric function is obtained. In the Bayesian paradigm via likelihood and spacings functions, using independent gamma conjugate priors, the Bayes estimators of the unknown parameters are obtained against the squared-error and general-entropy loss functions. Since the proposed posterior distributions cannot be explicitly expressed, by combining two Markov-chain Monte-Carlo techniques, namely, the Gibbs and Metropolis–Hastings algorithms, the Bayes point/interval estimates are approximated. To examine the performance of the proposed estimation methodologies, extensive simulation experiments are conducted. In addition, based on several criteria, the optimum censoring plan is proposed. In real-life practice, to show the usefulness of the proposed estimators, two applications based on two different data sets taken from the engineering and physics fields are analyzed. Full article
(This article belongs to the Special Issue Probability, Statistics and Estimation)

28 pages, 4991 KiB  
Article
Machine Vision Approach for Diagnosing Tuberculosis (TB) Based on Computerized Tomography (CT) Scan Images
by Inayatul Haq, Tehseen Mazhar, Qandeel Nasir, Saqib Razzaq, Syed Agha Hassnain Mohsan, Mohammed H. Alsharif, Hend Khalid Alkahtani, Ayman Aljarbouh and Samih M. Mostafa
Symmetry 2022, 14(10), 1997; https://doi.org/10.3390/sym14101997 - 23 Sep 2022
Cited by 14 | Viewed by 4801
Abstract
Tuberculosis is curable, yet it remains the world’s second most deadly infectious disease, ranked 13th (in 2020) by the World Health Organization on the list of leading causes of death. One reason for its fatality is the unavailability of modern technology and human experts for early detection. This study presents a precise and reliable machine-vision approach for detecting tuberculosis in the lung from CT scan images. TB spreads irregularly: it might not affect both lungs equally, and it might affect only part of a lung. Regions of interest (ROIs) from TB-infected and normal CT scan images of lungs were therefore selected after pre-processing, i.e., selection/cropping, grayscale conversion, and filtration. Statistical texture features were extracted, 30 optimized features were selected using F (Fisher) + PA (probability of error + average correlation) + MI (mutual information), and only the 6 most optimized features were retained. Several supervised learning classifiers were used to classify normal versus TB-infected images. The artificial-neural-network-based Multi-Layer Perceptron (MLP) classifier showed the best accuracy of 99% with an execution time of less than a second, followed by Random Forest (98.83%), J48 (98.67%), LogitBoost (98%), AdaBoostM1 (97.16%), and Bayes Net (96.83%). Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Computer Vision and Image Processing)
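The final classification stage can be sketched with synthetic stand-ins for the six optimized texture features; the feature distributions, network size, and train/test split below are assumptions for illustration, not the paper's CT data or its Weka-based classifiers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic 6-feature vectors for "normal" vs "TB" ROIs (shifted Gaussians).
n = 600
X_normal = rng.normal(0.0, 1.0, (n // 2, 6))
X_tb = rng.normal(1.5, 1.0, (n // 2, 6))
X = np.vstack([X_normal, X_tb])
y = np.array([0] * (n // 2) + [1] * (n // 2))

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(Xtr, ytr)
print(round(clf.score(Xte, yte), 3))   # held-out accuracy
```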

21 pages, 425 KiB  
Article
Statistical Inference and Optimal Design of Accelerated Life Testing for the Chen Distribution under Progressive Type-II Censoring
by Wenjie Zhang and Wenhao Gui
Mathematics 2022, 10(9), 1609; https://doi.org/10.3390/math10091609 - 9 May 2022
Cited by 8 | Viewed by 2554
Abstract
This paper discusses statistical inference and optimal design of constant-stress accelerated life testing for the Chen distribution under progressive Type-II censoring. The scale parameter of the life distribution is assumed to be a logarithmic linear function of the stress level. The maximum likelihood estimates of the parameters are obtained. Then, the observed Fisher information matrix is derived and utilized to construct asymptotic confidence intervals. Meanwhile, the parametric bootstrap methods are provided for the interval estimation. In addition, the Bayes estimates under the squared error loss function are obtained by applying the Tierney and Kadane technique and Lindley’s approximation. As for the optimal design, D- and A-optimality criteria are considered to determine the optimal transformed stress level. Finally, the simulation is carried out to demonstrate the proposed estimation techniques and the optimal criteria, and a real data set is discussed. Full article
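The D- and A-optimality criteria mentioned above reduce to determinant and trace functionals of the Fisher matrix: D-optimality maximizes det(F), A-optimality minimizes trace(F⁻¹). A toy two-parameter linear-regression design over stress levels illustrates the comparison; the designs and model are assumptions, not the constant-stress ALT setup of the paper.

```python
import numpy as np

def fisher_linear(levels):
    # Fisher information for (intercept, slope) with unit-variance noise
    Xd = np.column_stack([np.ones(len(levels)), levels])
    return Xd.T @ Xd

designs = {
    "spread": np.array([-1.0, -1.0, 1.0, 1.0]),      # points at the extremes
    "clustered": np.array([-0.2, -0.1, 0.1, 0.2]),   # points near the center
}
for name, lv in designs.items():
    F = fisher_linear(lv)
    # D-criterion: det(F) (bigger is better); A-criterion: tr(F^-1) (smaller is better)
    print(name, np.linalg.det(F), np.trace(np.linalg.inv(F)))
```

The spread design wins on both criteria, matching the classical result that extreme stress levels are most informative for a linear stress-life relationship.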