Computational Statistics & Data Analysis

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 48997

Special Issue Editors


Guest Editor
Department of Statistics, Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519087, China
Interests: multivariate statistical inference; statistical computation and simulation; graphical statistical methods; statistical inference based on representative points; EM algorithm; structural equation modeling

Guest Editor
Department of Statistics, Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519087, China
Interests: experimental design; multivariate statistical inference; statistical computation and simulation; Monte Carlo; quasi Monte Carlo; number-theoretic methods in statistics; data mining; statistical applications in industry

Special Issue Information

Dear Colleagues,

The rapid development of modern computer science has made it possible for statisticians to implement computationally intensive statistical methods. Well-designed algorithms are essential for obtaining better solutions to many complicated statistical problems. In the era of information and big data, statistics faces increasingly challenging computational problems. This Special Issue, entitled “Computational Statistics and Data Analysis”, invites prospective mathematicians, statisticians, data analysts, and computer scientists to submit their research. Axioms is an international, peer-reviewed, open access journal of mathematics, mathematical logic and mathematical physics, published monthly by MDPI. Axioms has high visibility, with a CiteScore ranking in Q1 (Algebra and Number Theory), and is indexed in Scopus, SCIE (Web of Science), dblp, and many other databases.

Dr. Jiajuan Liang 
Prof. Dr. Kaitai Fang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • parametric and nonparametric inference
  • goodness of fit
  • representative points
  • resampling
  • EM algorithm
  • Markov Chain Monte Carlo algorithm
  • Monte Carlo simulation
  • survival data
  • financial data
  • medical data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (24 papers)


Research

37 pages, 7148 KiB  
Article
Longitudinal Data Analysis Based on Bayesian Semiparametric Method
by Guimei Jiao, Jiajuan Liang, Fanjuan Wang, Xiaoli Chen, Shaokang Chen, Hao Li, Jing Jin, Jiali Cai and Fangjie Zhang
Axioms 2023, 12(5), 431; https://doi.org/10.3390/axioms12050431 - 27 Apr 2023
Cited by 2 | Viewed by 1637
Abstract
A Bayesian semiparametric model framework is proposed to analyze multivariate longitudinal data. The new framework leads to simple explicit posterior distributions of model parameters. It results in easy implementation of the MCMC algorithm for estimation of model parameters and demonstrates fast convergence. The proposed model framework associated with the MCMC algorithm is validated by four covariance structures and a real-life dataset. A simple Monte Carlo study of the model under four covariance structures and an analysis of the real dataset show that the new model framework and its associated Bayesian posterior inferential method through the MCMC algorithm perform fairly well in the sense of easy implementation, fast convergence, and smaller root mean square errors compared with the same model without the specified autoregression structure. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
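The "explicit posteriors make the MCMC easy to implement" point can be illustrated with a minimal Gibbs sampler for a toy conjugate normal model. This is only a sketch of the general idea, not the paper's longitudinal model; the priors and hyperparameter names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=200)   # synthetic data, true mean 2.0
n, ybar = len(y), y.mean()

# Semi-conjugate priors (assumed for illustration): mu ~ N(m0, 1/p0), tau ~ Gamma(a0, b0)
m0, p0, a0, b0 = 0.0, 1e-2, 1.0, 1.0

mu, tau = 0.0, 1.0
draws = []
for it in range(2000):
    # Explicit conditional posterior of mu given tau: normal
    prec = p0 + n * tau
    mean = (p0 * m0 + tau * n * ybar) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    # Explicit conditional posterior of tau given mu: gamma (numpy uses shape/scale)
    tau = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
    if it >= 500:                    # discard burn-in
        draws.append(mu)

post_mean = np.mean(draws)           # should sit close to the sample mean
```

Because both full conditionals are available in closed form, each iteration is a pair of direct draws and the chain mixes quickly, which is the property the abstract highlights.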

25 pages, 3838 KiB  
Article
Mixture of Shanker Distributions: Estimation, Simulation and Application
by Tahani A. Abushal, Tabassum Naz Sindhu, Showkat Ahmad Lone, Marwa K. H. Hassan and Anum Shafiq
Axioms 2023, 12(3), 231; https://doi.org/10.3390/axioms12030231 - 22 Feb 2023
Cited by 6 | Viewed by 1572
Abstract
The Shanker distribution, a one-parameter lifetime distribution with an increasing hazard rate function, is recommended by Shanker for modelling lifespan data. In this study, we examine the theoretical and practical implications of a 2-component mixture of the Shanker model (2-CMSM). A significant feature of the proposed model’s hazard rate function is that it can take increasing, decreasing, and upside-down bathtub shapes. We investigate the statistical characteristics of the mixed model, such as the probability-generating function, the factorial-moment-generating function, cumulants, the characteristic function, the Mills ratio, the mean residual life, and the mean time to failure. There is a graphic representation of the density, mean, and hazard rate functions, the coefficient of variation, skewness, and kurtosis. Our final approach is to estimate the parameters of the mixture model using appropriate approaches such as maximum likelihood, least squares, and weighted least squares. Using a simulation analysis, we examined how the estimates behaved graphically. The simulation results demonstrated that, in the majority of cases, the maximum likelihood estimates have the smallest mean square errors among all other estimates. Finally, we observed that as the sample size rises, the precision measures decrease for all of the estimation techniques, indicating that all of the estimation approaches are consistent. Through two real data analyses, the suggested model’s validity and adaptability are contrasted with those of other models, including mixtures of the exponential and Lindley distributions. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

19 pages, 424 KiB  
Article
Partially Linear Additive Hazards Regression for Bivariate Interval-Censored Data
by Ximeng Zhang, Shishun Zhao, Tao Hu and Jianguo Sun
Axioms 2023, 12(2), 198; https://doi.org/10.3390/axioms12020198 - 13 Feb 2023
Cited by 1 | Viewed by 1573
Abstract
In this paper, we discuss regression analysis of bivariate interval-censored failure time data that often occur in biomedical and epidemiological studies. To solve this problem, we propose a general and flexible class of copula-based semiparametric partially linear additive hazards models that can allow for both time-dependent covariates and possible nonlinear effects. For inference, a sieve maximum likelihood estimation approach based on Bernstein polynomials is proposed to estimate the baseline hazard functions and nonlinear covariate effects. The resulting estimators of the regression parameters are shown to be consistent, asymptotically efficient and normal. A simulation study is conducted to assess the finite-sample performance of this method and the results show that it is effective in practice. Moreover, an illustration is provided. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
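As a rough illustration of the Bernstein-polynomial device used in sieve estimation, the sketch below evaluates a Bernstein basis and a monotone (nondecreasing) function built from it. The helper names are hypothetical and this is a simplified 1-D stand-in, not the paper's bivariate baseline-hazard estimator.

```python
import numpy as np
from math import comb

def bernstein_basis(t, m):
    """Evaluate the m+1 Bernstein basis polynomials of degree m at points t in [0, 1]."""
    t = np.asarray(t, dtype=float)
    return np.array([comb(m, k) * t**k * (1 - t)**(m - k) for k in range(m + 1)])

def monotone_bernstein(t, phi):
    """Nondecreasing function on [0, 1]: using cumulative sums of exp(phi) as
    coefficients guarantees monotonicity, a standard device when a sieve MLE
    must produce a monotone baseline (e.g., a cumulative hazard)."""
    coefs = np.cumsum(np.exp(phi))           # increasing, positive coefficients
    return coefs @ bernstein_basis(t, len(phi) - 1)

t = np.linspace(0.0, 1.0, 11)
vals = monotone_bernstein(t, np.zeros(5))    # coefficients 1..5 -> increasing curve
```

In an actual sieve fit, `phi` would be chosen by maximizing the (log-)likelihood; the Bernstein parameterization keeps the estimated baseline smooth and monotone by construction.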

12 pages, 1071 KiB  
Article
On the Estimation of the Binary Response Model
by Muhammad Amin, Muhammad Nauman Akram, B. M. Golam Kibria, Huda M. Alshanbari, Nahid Fatima and Ahmed Elhassanein
Axioms 2023, 12(2), 175; https://doi.org/10.3390/axioms12020175 - 8 Feb 2023
Cited by 1 | Viewed by 1600
Abstract
The binary logistic regression model (LRM) is practical in situations where the response variable (RV) is dichotomous. The maximum likelihood estimator (MLE) is generally used to estimate the LRM parameters. However, in the presence of multicollinearity (MC), the MLE is not the correct choice due to the inflated standard deviation (SD) and standard errors (SE) of the estimates. To combat MC, commonly used biased estimators, i.e., the ridge estimators (RE) and Liu estimators (LE), are preferred. However, most of the time, the traditional LE attains a negative value for its Liu parameter (LP), which is considered to be a major drawback. To overcome this issue, we propose a new adjusted LE for the binary LRM. For numerical evaluation, a Monte Carlo simulation (MCS) study is performed under different conditions, with bias and mean squared error as the performance criteria. The findings showed the superiority of the proposed estimator over the other estimation methods in the presence of high but imperfect multicollinearity, which indicates that it is consistent when the regressors are multicollinear. Furthermore, the findings demonstrated that whenever there is MC, the MLE is not the best choice. Finally, a real application is considered as evidence of the advantage of the proposed estimator. The MCS and application findings indicate that the considered adjusted LE for the binary logistic regression model is a more efficient estimation method whenever the regressors are highly multicollinear. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
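The general idea behind shrinkage estimation for a multicollinear binary LRM can be sketched with a ridge-penalized IRLS fit. This illustrates the family of biased estimators the abstract discusses, not the paper's adjusted Liu estimator, whose exact form is not given here; all names and the penalty choice are assumptions.

```python
import numpy as np

def neg_loglik(X, y, beta, lam):
    """Penalized negative log-likelihood, computed stably via logaddexp."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta) + 0.5 * lam * np.sum(beta**2)

def ridge_logistic(X, y, lam, iters=200):
    """Damped Newton (IRLS) for logistic regression with an L2 (ridge) penalty.
    lam = 0 gives the MLE; lam > 0 shrinks the coefficients, stabilizing them
    when the columns of X are nearly collinear."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        w = mu * (1.0 - mu)
        grad = X.T @ (y - mu) - lam * beta
        hess = X.T @ (X * w[:, None]) + lam * np.eye(p)
        step = np.linalg.solve(hess, grad)
        t = 1.0   # backtracking line search keeps Newton stable under ill-conditioning
        while neg_loglik(X, y, beta + t * step, lam) > neg_loglik(X, y, beta, lam) and t > 1e-8:
            t *= 0.5
        beta = beta + t * step
        if np.max(np.abs(t * step)) < 1e-9:
            break
    return beta

rng = np.random.default_rng(1)
x1 = rng.normal(size=300)
x2 = x1 + 0.01 * rng.normal(size=300)        # nearly collinear regressor
X = np.column_stack([np.ones(300), x1, x2])
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-(0.5 + x1)))).astype(float)

b_mle = ridge_logistic(X, y, lam=0.0)
b_ridge = ridge_logistic(X, y, lam=1.0)
```

At the respective optima the penalized fit always has the smaller coefficient norm, which is exactly the stabilizing effect that motivates ridge- and Liu-type estimators under MC.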

30 pages, 1386 KiB  
Article
Statistical Inference of Truncated Cauchy Power-Inverted Topp–Leone Distribution under Hybrid Censored Scheme with Applications
by Rania A. H. Mohamed, Mohammed Elgarhy, Manal H. Alabdulhadi, Ehab M. Almetwally and Taha Radwan
Axioms 2023, 12(2), 148; https://doi.org/10.3390/axioms12020148 - 31 Jan 2023
Cited by 8 | Viewed by 1497
Abstract
In this article, a new two-parameter model called the truncated Cauchy power-inverted Topp–Leone (TCP-ITL) is constructed by merging the truncated Cauchy power -G (TCP-G) family with the inverted Topp–Leone (ITL) distribution. Some structural properties of the newly suggested model are obtained. Different types of entropies are proposed under the TCP-ITL distribution. Under the complete and hybrid censored data, the maximum likelihood (ML), maximum product of spacing (MPSP), and Bayesian estimate approaches are explored. A simulation study is developed to test the proposed distribution’s restricted sample attributes. In the majority of cases, the numerical data revealed that the Bayesian estimates provided more accurate outcomes than the equivalent alternative estimates. The adaptability of the proposed approach is proven using examples from dependability, medicine, and engineering. A real-world data set is utilized to demonstrate the potential of the TCP-ITL distribution in comparison to other well-known distributions. The results of the model selection revealed that the proposed distribution is the best choice for the data sets under consideration. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

20 pages, 401 KiB  
Article
A Method for Detecting Outliers from the Gamma Distribution
by Xiou Liao, Tongtong Wang and Guohua Zou
Axioms 2023, 12(2), 107; https://doi.org/10.3390/axioms12020107 - 19 Jan 2023
Cited by 1 | Viewed by 2282
Abstract
Outliers often occur during data collection, which could impact the result seriously and lead to a large inference error; therefore, it is important to detect outliers before data analysis. Gamma distribution is a popular distribution in statistics; this paper proposes a method for detecting multiple upper outliers from gamma (m,θ). For computing the critical value of the test statistic in our method, we derive the density function for the case of a single outlier and design two algorithms based on the Monte Carlo and the kernel density estimation for the case of multiple upper outliers. A simulation study shows that the test statistic proposed in this paper outperforms some common test statistics. Finally, we propose an improved testing method to reduce the impact of the swamping effect, which is demonstrated by real data analyses. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

13 pages, 295 KiB  
Article
Direct Constructions of Uniform Designs under the Weighted Discrete Discrepancy
by Jiaqi Wang and Yu Tang
Axioms 2022, 11(12), 747; https://doi.org/10.3390/axioms11120747 - 19 Dec 2022
Viewed by 1517
Abstract
Uniform designs are widely used in many fields in view of their ability to reduce experimental costs. In many practical cases, different factors may take different numbers of values, so a mixed-level uniform design is needed. Since it is not reasonable to use a uniformity measure with the same weight for factors with different levels, the weighted discrete discrepancy was proposed in the existing literature. This paper discusses the construction of mixed-level uniform designs under the weighted discrete discrepancy. The underlying method is to utilize some properties of partitioned difference families (PDFs) to obtain an infinite class of uniformly resolvable weighted balanced designs (URWBDs), which can directly produce corresponding uniform designs. Some examples are presented to illustrate the methods. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
27 pages, 699 KiB  
Article
Statistical Inference of the Beta Binomial Exponential 2 Distribution with Application to Environmental Data
by Osama H. Mahmoud Hassan, Ibrahim Elbatal, Abdullah H. Al-Nefaie and Ahmed R. El-Saeed
Axioms 2022, 11(12), 740; https://doi.org/10.3390/axioms11120740 - 17 Dec 2022
Cited by 1 | Viewed by 2056
Abstract
A new four-parameter lifetime distribution called the beta binomial exponential 2 (BBE2) distribution is proposed. Some mathematical features of the BBE2 distribution, including the quantile function, moments, generating function and characteristic function, are computed. When the life test is truncated at a predetermined time, acceptance sampling plans (ASP) are constructed for the BBE2 distribution. The truncation time is taken to be the median lifetime of the BBE2 distribution with predetermined factors for the smallest sample size required to guarantee that the prescribed life test is achieved at a given consumer’s risk. Some numerical results for a given consumer’s risk, BBE2 distribution parameters and truncation time are derived. Classical (maximum likelihood and maximum product of spacing) and Bayesian estimation approaches are utilized to estimate the model parameters. The performance of the model parameter estimates is examined through a simulation study using the three different approaches of estimation. Subsequently, we examine real-world data applications to demonstrate the versatility and potential of the BBE2 model. A real-world application demonstrates that the new distribution can offer a better fit than other competitive lifetime models. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

18 pages, 1647 KiB  
Article
Discrete Single-Factor Extension of the Exponential Distribution: Features and Modeling
by Mahmoud El-Morshedy, Hend S. Shahen, Bader Almohaimeed and Mohamed S. Eliwa
Axioms 2022, 11(12), 737; https://doi.org/10.3390/axioms11120737 - 16 Dec 2022
Viewed by 1540
Abstract
The importance of counting data modeling and its applications to real-world phenomena has been highlighted in several research studies. The present study focuses on a one-parameter discrete distribution that can be derived via the survival discretization approach. The proposed model has explicit forms for its statistical properties. It can be applied to discuss asymmetric “right skewed” data with long “heavy” tails. Its failure rate function can be used to discuss the phenomena with a monotonically decreasing or unimodal failure rate shape. Further, it can be utilized as a probability tool to model and discuss over- and under-dispersed data. Various estimation techniques are reported and discussed in detail. A simulation study is performed to test the property of the estimator. Finally, three real data sets are analyzed to prove the notability of the introduced model. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
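The survival discretization approach mentioned in the abstract can be sketched directly: given a continuous survival function S, define a pmf by p(k) = S(k) − S(k+1). As a known sanity check, discretizing an exponential parent this way reproduces a geometric distribution; the helper name below is illustrative.

```python
import numpy as np

def discretize_survival(S, kmax):
    """Survival discretization: for a continuous lifetime with survival function S,
    define a discrete pmf p(k) = S(k) - S(k+1) for k = 0, 1, ..., kmax."""
    k = np.arange(kmax + 1)
    return S(k) - S(k + 1)

lam = 0.8
S_exp = lambda t: np.exp(-lam * t)            # exponential survival function
pmf = discretize_survival(S_exp, kmax=50)

# For comparison: the geometric pmf with success probability p = 1 - e^{-lam}
p = 1 - np.exp(-lam)
k = np.arange(51)
geom = p * (1 - p) ** k
```

The identity p(k) = e^{-λk}(1 − e^{-λ}) = (1 − p)^k p holds term by term, which is why the discretized exponential is exactly geometric; the paper applies the same device to a different parent distribution.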

12 pages, 315 KiB  
Article
Projection Uniformity of Asymmetric Fractional Factorials
by Kang Wang, Zujun Ou, Hong Qin and Na Zou
Axioms 2022, 11(12), 716; https://doi.org/10.3390/axioms11120716 - 10 Dec 2022
Cited by 1 | Viewed by 2412
Abstract
The objective of this paper is to study the projection uniformity of asymmetric fractional factorials. On the basis of level permutation and mixture discrepancy, the average projection mixture discrepancy is defined to measure the uniformity of low-dimensional projection designs, and the uniformity pattern and minimum projection uniformity criterion are presented for evaluating and comparing any asymmetric factorials. Moreover, lower bounds on the uniformity pattern are obtained, and some illustrative examples are provided. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
19 pages, 1329 KiB  
Article
Representative Points Based on Power Exponential Kernel Discrepancy
by Zikang Xiong, Yao Xiao, Jianhui Ning and Hong Qin
Axioms 2022, 11(12), 711; https://doi.org/10.3390/axioms11120711 - 9 Dec 2022
Viewed by 1747
Abstract
Representative points (rep-points) are a set of points that are optimally chosen for representing a big original data set or a target distribution in terms of a statistical criterion, such as mean square error and discrepancy. Most of the existing criteria can only assure the representing properties in the whole variable space. In this paper, a new kernel discrepancy, named power exponential kernel discrepancy (PEKD), is proposed to measure the representativeness of the point set with respect to the general multivariate distribution. Different from the commonly used criteria, PEKD can improve the projection properties of the point set, which is important in high-dimensional circumstances. Some theoretical results are presented for understanding the new discrepancy better and guiding the hyperparameter setting. An efficient algorithm for searching rep-points under the PEKD criterion is presented and its convergence has also been proven. Examples are given to illustrate its potential applications in the numerical integration, uncertainty propagation, and reduction of Markov Chain Monte Carlo chains. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
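For context, classical mean-square-error rep-points (the baseline criterion the abstract contrasts with PEKD) can be computed with Lloyd's algorithm. The sketch below approximates them from a large i.i.d. sample in one dimension; the helper name and setup are illustrative assumptions, not the paper's PEKD search algorithm.

```python
import numpy as np

def lloyd_rep_points(sample, k, iters=100, seed=0):
    """Lloyd's algorithm for mean-square-error representative points of a
    distribution, approximated from a large i.i.d. sample (1-D for simplicity)."""
    rng = np.random.default_rng(seed)
    pts = np.sort(rng.choice(sample, size=k, replace=False))
    for _ in range(iters):
        # Assign each observation to its nearest representative point...
        idx = np.argmin(np.abs(sample[:, None] - pts[None, :]), axis=1)
        # ...then move each point to the mean of its cell (keep it if the cell is empty)
        new = np.array([sample[idx == j].mean() if np.any(idx == j) else pts[j]
                        for j in range(k)])
        if np.allclose(new, pts):
            break
        pts = np.sort(new)
    return pts

rng = np.random.default_rng(3)
sample = rng.normal(0.0, 1.0, size=50000)
reps = lloyd_rep_points(sample, k=2)   # k = 2 rep-points of the standard normal
```

For the standard normal with k = 2, the MSE rep-points are known to be ±√(2/π) ≈ ±0.798, so the output gives a quick correctness check on the sketch.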

18 pages, 469 KiB  
Article
A Unified Test for the AR Error Structure of an Autoregressive Model
by Xinyi Wei, Xiaohui Liu, Yawen Fan, Li Tan and Qing Liu
Axioms 2022, 11(12), 690; https://doi.org/10.3390/axioms11120690 - 1 Dec 2022
Cited by 2 | Viewed by 1725
Abstract
A direct application of autoregressive (AR) models with independent and identically distributed (iid) errors is sometimes inadequate to fit the time series data well. A natural alternative is further to assume the model errors following an AR process, whose structure however has essential impacts on the statistical inferences related to the autoregressive models. In this paper, we construct a new unified test for checking the AR error structure based on the empirical likelihood method. The proposed test is desirable because its limit distribution is always chi-squared regardless of whether the autoregressive model is stationary or non-stationary, with or without an intercept term. Some simulations are also provided to illustrate the finite sample performance of this test. Finally, we apply the proposed test to a financial real data set. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

19 pages, 675 KiB  
Article
Confidence Intervals for the Ratio of Variances of Delta-Gamma Distributions with Applications
by Wansiri Khooriphan, Sa-Aat Niwitpong and Suparat Niwitpong
Axioms 2022, 11(12), 689; https://doi.org/10.3390/axioms11120689 - 30 Nov 2022
Cited by 3 | Viewed by 1474
Abstract
Since rainfall data often contain zero observations, the ratio of the variances of delta-gamma distributions can be used to compare the rainfall dispersion between two rainfall datasets. To this end, we constructed the confidence interval for the ratio of the variances of two delta-gamma distributions by using the fiducial quantity method, Bayesian credible intervals based on the Jeffreys, uniform, or normal-gamma-beta priors, and highest posterior density (HPD) intervals based on the Jeffreys, uniform, or normal-gamma-beta priors. The performances of the proposed confidence interval methods were evaluated in terms of their coverage probabilities and average lengths via Monte Carlo simulation. Our findings show that the HPD intervals based on the Jeffreys prior and the normal-gamma-beta prior are suitable for datasets with a small and a large probability of containing zeros, respectively. Rainfall data from Phrae province, Thailand, are used to illustrate the practicability of the proposed methods with real data. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
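An HPD interval of the kind compared in the abstract is simply the shortest interval containing the desired posterior mass. A minimal sketch for unimodal posteriors, computed from Monte Carlo draws (the gamma "posterior" below is a stand-in, not the delta-gamma posterior of the paper):

```python
import numpy as np

def hpd_interval(draws, cred=0.95):
    """Highest posterior density interval from MCMC/Monte Carlo draws:
    the shortest window containing a fraction `cred` of the sorted draws
    (valid for unimodal posteriors)."""
    x = np.sort(np.asarray(draws))
    n = len(x)
    m = int(np.ceil(cred * n))             # number of draws inside the interval
    widths = x[m - 1:] - x[:n - m + 1]     # width of every candidate window
    i = np.argmin(widths)                  # shortest window wins
    return x[i], x[i + m - 1]

rng = np.random.default_rng(7)
draws = rng.gamma(5.0, 2.0, size=100000)   # a right-skewed 'posterior' (mode at 8)
lo, hi = hpd_interval(draws)
```

For skewed posteriors like this one, the HPD interval is shorter than the equal-tailed credible interval, which is why HPD intervals are often preferred in comparisons such as the paper's.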

18 pages, 723 KiB  
Article
A Unit Half-Logistic Geometric Distribution and Its Application in Insurance
by Ahmed T. Ramadan, Ahlam H. Tolba and Beih S. El-Desouky
Axioms 2022, 11(12), 676; https://doi.org/10.3390/axioms11120676 - 28 Nov 2022
Cited by 16 | Viewed by 1966
Abstract
A new one-parameter distribution, called the half logistic-geometric (HLG) distribution, was recently proposed for modelling lifetime data. In this paper, an appropriate transformation of the HLG distribution is considered and a new distribution, called the unit half logistic-geometric (UHLG) distribution, is derived for modelling bounded data in the interval (0, 1). Some important statistical properties are investigated, with a closed-form quantile function. Several methods of parameter estimation are introduced to evaluate the distribution parameter, and a simulation study is conducted to compare these methods. A real data application in the insurance field is presented to show the flexibility of the new distribution in modelling such data compared with other distributions. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

27 pages, 2059 KiB  
Article
New Robust Estimators for Handling Multicollinearity and Outliers in the Poisson Model: Methods, Simulation and Applications
by Issam Dawoud, Fuad A. Awwad, Elsayed Tag Eldin and Mohamed R. Abonazel
Axioms 2022, 11(11), 612; https://doi.org/10.3390/axioms11110612 - 3 Nov 2022
Cited by 9 | Viewed by 2463
Abstract
The Poisson maximum likelihood (PML) method is used to estimate the coefficients of the Poisson regression model (PRM). Since the resulting estimators are sensitive to outliers, different studies have provided robust Poisson regression estimators to alleviate this problem. Additionally, the PML estimator is sensitive to multicollinearity. Therefore, several biased Poisson estimators have been provided to cope with this problem, such as the Poisson ridge estimator, Poisson Liu estimator, Poisson Kibria–Lukman estimator, and Poisson modified Kibria–Lukman estimator. Although different biased Poisson regression estimators have been proposed, there has been no analysis of robust versions of these estimators that deal with the two above-mentioned problems simultaneously, except for the robust Poisson ridge regression estimator. We extend this line of work by proposing three new robust Poisson one-parameter regression estimators: the robust Poisson Liu (RPL), the robust Poisson Kibria–Lukman (RPKL), and the robust Poisson modified Kibria–Lukman (RPMKL) estimators. Theoretical comparisons and Monte Carlo simulations were conducted to compare the performance of the proposed estimators with that of the other estimators. The simulation results indicated that the proposed RPL, RPKL, and RPMKL estimators outperformed the other estimators in different scenarios where both problems existed. Finally, we analyzed two real datasets to confirm the results. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

34 pages, 5402 KiB  
Article
Amoud Class for Hazard-Based and Odds-Based Regression Models: Application to Oncology Studies
by Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa, Christophe Chesneau, Huda M. Alshanbari and Abdal-Aziz H. El-Bagoury
Axioms 2022, 11(11), 606; https://doi.org/10.3390/axioms11110606 - 1 Nov 2022
Cited by 5 | Viewed by 4277
Abstract
The purpose of this study is to propose a novel, general, tractable, fully parametric class for hazard-based and odds-based models of survival regression for the analysis of censored lifetime data, named as the “Amoud class (AM)” of models. This generality was attained using a structure resembling the general class of hazard-based regression models, with the addition that the baseline odds function is multiplied by a link function. The class is broad enough to cover a number of widely used models, including the proportional hazard model, the general hazard model, the proportional odds model, the general odds model, the accelerated hazards model, the accelerated odds model, and the accelerated failure time model, as well as combinations of these. The proposed class incorporates the analysis of crossing survival curves. Based on a versatile parametric distribution (generalized log-logistic) for the baseline hazard, we introduced a technique for applying these various hazard-based and odds-based regression models. This distribution allows us to cover the most common hazard rate shapes in practice (decreasing, constant, increasing, unimodal, and reversible unimodal), and various common survival distributions (Weibull, Burr-XII, log-logistic, exponential) are its special cases. The proposed model has good inferential features, and it performs well when different information criteria and likelihood ratio tests are used to select hazard-based and odds-based regression models. The proposed model’s utility is demonstrated by an application to a right-censored lifetime dataset with crossing survival curves. Full article
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
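As a hedged illustration of the hazard-based structure such classes build on, the sketch below evaluates the general hazard (GH) form h(t | x) = h0(t · e^{x'α}) · e^{x'β} with a Weibull baseline (one of the special cases the abstract lists); the Weibull choice and parameter names are illustrative assumptions, not the paper's generalized log-logistic implementation.

```python
import numpy as np

def weibull_hazard(t, shape=1.5, scale=2.0):
    # Baseline hazard h0(t) = (shape/scale) * (t/scale)**(shape - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def general_hazard(t, x, alpha, beta):
    # GH structure: h(t | x) = h0(t * exp(x'alpha)) * exp(x'beta).
    # alpha = 0 recovers proportional hazards; alpha = beta recovers AFT.
    time_scale = np.exp(x @ alpha)
    return weibull_hazard(t * time_scale) * np.exp(x @ beta)
```

Setting alpha to zero makes the time rescaling drop out, recovering the proportional hazards special case the abstract mentions.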

13 pages, 399 KiB  
Article
Enhanced Lot Acceptance Testing Based on Defect Counts and Posterior Odds Ratios
by Arturo J. Fernández
Axioms 2022, 11(11), 604; https://doi.org/10.3390/axioms11110604 - 29 Oct 2022
Viewed by 1461
Abstract
Optimal defects-per-unit test plans based on posterior odds ratios are developed for the disposition of product lots. The number of nonconformities per unit is modeled by the Conway–Maxwell–Poisson distribution rather than the typical Poisson model. In essence, a submitted batch is conforming if its posterior acceptability is sufficiently large. First, a useful closed-form approximation of the optimal test plan is derived using the asymptotic normality of the log ratio. A mixed-integer nonlinear programming problem is then solved via Monte Carlo simulation to find the smallest number of inspected items per lot and the maximum tolerable posterior odds ratio. The methodology is applied to the manufacturing of paper and glass. The suggested sampling plan for lot sentencing provides the specified protections to both manufacturers and customers and minimizes the needed sample size. In terms of inspection effort and accuracy, the proposed approach is an advantageous extension of the classical frequentist perspective. In many practical cases, it yields more precise assessments of the current consumer and producer risks, as well as more realistic decision rules.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

20 pages, 1649 KiB  
Article
Testing Multivariate Normality Based on t-Representative Points
by Jiajuan Liang, Ping He and Jun Yang
Axioms 2022, 11(11), 587; https://doi.org/10.3390/axioms11110587 - 24 Oct 2022
Cited by 5 | Viewed by 1810
Abstract
Testing multivariate normality has been of long-standing interest in the goodness-of-fit area since the classical Pearson chi-squared test. Among the numerous approaches to constructing tests for multivariate normality, normal characterization is one of the most common; characterizations can be divided into necessary-and-sufficient and necessary-only ones. In this paper, we construct a test for multivariate normality by combining a necessary-only characterization with the idea of statistical representative points. The main idea is to transform a high-dimensional sample into a one-dimensional one through the necessary normal characterization and then employ the representative-point-based Pearson chi-squared test. A limited Monte Carlo study shows a considerable power improvement of the representative-point-based chi-squared test over the traditional one. An illustrative example shows how the new test supplements existing ones in the literature.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
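The representative-point construction itself is paper-specific, but the final step, a Pearson chi-squared test on the projected one-dimensional sample, can be sketched with equiprobable N(0,1) cells; the cell count k is an illustrative choice, not the paper's.

```python
import numpy as np
from statistics import NormalDist

def pearson_chisq_normal(z, k=8):
    # Pearson chi-squared statistic for H0: z ~ N(0,1),
    # using k equiprobable cells cut at standard-normal quantiles.
    nd = NormalDist()
    edges = [-np.inf] + [nd.inv_cdf(i / k) for i in range(1, k)] + [np.inf]
    counts, _ = np.histogram(np.asarray(z, dtype=float), bins=edges)
    expected = len(z) / k
    return float(((counts - expected) ** 2 / expected).sum())
```

The statistic is then referred to a chi-squared distribution with k − 1 degrees of freedom in the classical setup.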

22 pages, 1472 KiB  
Article
Statistical Analysis of Alpha Power Exponential Parameters Using Progressive First-Failure Censoring with Applications
by Mazen Nassar, Refah Alotaibi and Ahmed Elshahhat
Axioms 2022, 11(10), 553; https://doi.org/10.3390/axioms11100553 - 13 Oct 2022
Cited by 3 | Viewed by 1771
Abstract
This paper investigates estimation problems for the unknown parameters and some reliability measures of the alpha power exponential distribution in the presence of progressive first-failure censored data. Both classical and Bayesian approaches are considered to acquire point and interval estimates of the different quantities. The maximum likelihood approach is used to obtain estimates of the unknown parameters and of the reliability and hazard rate functions, and approximate confidence intervals are also considered. The Bayes estimates are obtained under both symmetric and asymmetric loss functions, and these estimates and the associated highest posterior density credible intervals are computed via the Markov chain Monte Carlo technique. Because the resulting estimators cannot be compared theoretically, a simulation study is implemented to compare the performance of the different procedures. In addition, diverse optimality criteria are employed to pick the best progressive censoring plans. Two engineering applications illustrate the applicability of the offered estimators. The numerical outcomes showed that the Bayes estimates based on symmetric or asymmetric loss functions outperform the other estimates in terms of root mean square errors and interval lengths.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
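For reference, the alpha power exponential CDF at the heart of the analysis can be sketched directly, assuming the usual alpha power transformation F(x) = (α^{G(x)} − 1)/(α − 1) applied to an exponential baseline G; parameter names here are illustrative.

```python
import math

def ape_cdf(x, alpha, lam):
    # Alpha power exponential CDF: (alpha**G(x) - 1) / (alpha - 1) for alpha != 1,
    # where G(x) = 1 - exp(-lam * x) is the exponential baseline CDF.
    g = 1.0 - math.exp(-lam * x)
    if alpha == 1.0:
        return g  # alpha = 1 reduces to the plain exponential
    return (alpha ** g - 1.0) / (alpha - 1.0)
```

The alpha = 1 branch makes the exponential distribution an explicit special case.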

16 pages, 3731 KiB  
Article
A New 3-Parameter Bounded Beta Distribution: Properties, Estimation, and Applications
by Faiza A. Althubyani, Ahmed M. T. Abd El-Bar, Mohamad A. Fawzy and Ahmed M. Gemeay
Axioms 2022, 11(10), 504; https://doi.org/10.3390/axioms11100504 - 26 Sep 2022
Cited by 8 | Viewed by 2402
Abstract
This study presents a new three-parameter beta distribution defined on the unit interval, which can have increasing, decreasing, left-skewed, right-skewed, approximately symmetric, bathtub, and upside-down-bathtub-shaped densities, and increasing, U-shaped, and bathtub-shaped hazard rates. This model includes well-known distributions with various parameters and supports as special cases, such as the Kumaraswamy, beta exponential, exponential, exponentiated exponential, uniform, generalized beta of the first kind, and beta power distributions. We present a comprehensive account of the mathematical features of the new model. Maximum likelihood estimation and Bayesian estimation under squared error and linear exponential loss functions are presented, and approximate confidence intervals are obtained. A simulation study compares all the results. Two real-world data sets are analyzed to demonstrate the utility and adaptability of the proposed model.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
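The two loss functions mentioned map to simple functionals of posterior draws: squared error gives the posterior mean, and linear exponential (LINEX) with shape c gives −(1/c)·log E[e^{−cθ}]. A minimal sketch, assuming draws are available from some posterior sampler:

```python
import numpy as np

def bayes_point_estimates(draws, c=0.5):
    # Squared-error loss   -> posterior mean.
    # LINEX loss (shape c) -> -(1/c) * log E[exp(-c * theta)].
    theta = np.asarray(draws, dtype=float)
    est_se = theta.mean()
    est_linex = -np.log(np.exp(-c * theta).mean()) / c
    return est_se, est_linex
```

For a degenerate posterior the two estimates coincide; for c > 0 the LINEX estimate sits below the posterior mean, guarding against overestimation.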

19 pages, 404 KiB  
Article
Statistical Inference for Odds Ratio of Two Proportions in Bilateral Correlated Data
by Zhiming Li and Changxing Ma
Axioms 2022, 11(10), 502; https://doi.org/10.3390/axioms11100502 - 25 Sep 2022
Cited by 1 | Viewed by 1695
Abstract
Bilateral correlated data frequently arise in medical clinical studies such as otolaryngology and ophthalmology. Based on an equal correlation coefficient model, this paper investigates statistical inference for the odds ratio of two proportions in bilateral correlated data, covering three test procedures and four confidence interval (CI) constructions. Through iterative algorithms, all unknown parameters are estimated in order to construct the likelihood ratio, score, and Wald-type tests. The profile likelihood CI, score CI, and Wald-type CI are obtained by the bisection root-finding algorithm, and another Wald-type CI is provided based on an asymptotic normality property. The performance of the proposed tests was investigated with regard to empirical type I error rate and power, and the CI methods were compared in terms of mean coverage probability and mean interval width. Numerical simulations show that the score test is more robust and has higher power than the other tests. The score CI also has a shorter interval width, and its coverage probability is closer to 0.95. A real example illustrates the proposed methods.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
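The paper's CIs account for the within-subject correlation of bilateral data; as a simplified sketch only, the basic Wald-type construction for the odds ratio of two independent proportions (ignoring the correlation parameter the paper models) is:

```python
import math
from statistics import NormalDist

def odds_ratio_wald_ci(x1, n1, x2, n2, level=0.95):
    # Sample odds ratio and a Wald CI built on the log-odds-ratio scale.
    or_hat = (x1 * (n2 - x2)) / (x2 * (n1 - x1))
    se = math.sqrt(1 / x1 + 1 / (n1 - x1) + 1 / x2 + 1 / (n2 - x2))
    z = NormalDist().inv_cdf((1 + level) / 2)
    log_or = math.log(or_hat)
    return or_hat, (math.exp(log_or - z * se), math.exp(log_or + z * se))
```

Exponentiating the endpoints of the log-scale interval keeps the CI on the positive odds-ratio scale.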

27 pages, 1890 KiB  
Article
Inferences of a Mixture Bivariate Alpha Power Exponential Model with Engineering Application
by Refah Alotaibi, Mazen Nassar, Indranil Ghosh, Hoda Rezk and Ahmed Elshahhat
Axioms 2022, 11(9), 459; https://doi.org/10.3390/axioms11090459 - 7 Sep 2022
Cited by 4 | Viewed by 1743
Abstract
The univariate alpha power exponential (APE) distribution has several appealing characteristics. With two parameters, it behaves similarly to the Weibull, gamma, and generalized exponential distributions. In this paper, we consider bivariate mixture models built first from two independent univariate APE models and then from two dependent APE models. Several useful structural properties of the mixture model under the assumption of two independent APE distributions are discussed. The bivariate APE (BAPE) model under the dependent setup is also discussed in the context of a copula-based construction. Inferential aspects under the classical and the Bayesian paradigms are considered to estimate the model parameters, and a simulation study is conducted for this purpose. For illustrative purposes, a well-known motor dataset is re-analyzed to exhibit the flexibility of the proposed bivariate mixture model.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)

23 pages, 4666 KiB  
Article
Bayesian and Non-Bayesian Analysis of Exponentiated Exponential Stress–Strength Model Based on Generalized Progressive Hybrid Censoring Process
by Manal M. Yousef, Amal S. Hassan, Huda M. Alshanbari, Abdal-Aziz H. El-Bagoury and Ehab M. Almetwally
Axioms 2022, 11(9), 455; https://doi.org/10.3390/axioms11090455 - 5 Sep 2022
Cited by 13 | Viewed by 2054
Abstract
In many real-life scenarios, systems frequently perform badly in difficult operating situations. The multiple failures that occur when systems reach their lower, upper, or extreme functioning states typically receive little attention from researchers. This study uses generalized progressive hybrid censoring to discuss inference on R=P(X&lt;Y&lt;Z) for a component exposed to two stresses, Y and Z, with a single strength X. We assume that both the stress and strength variables follow an exponentiated exponential distribution with a common scale parameter. We obtain the maximum likelihood estimator of R and approximate confidence intervals. In addition, Bayesian estimators are developed under symmetric (squared error) and asymmetric (linear exponential) loss functions, and highest posterior density credible intervals are established. Monte Carlo simulations are used to evaluate and compare the effectiveness of the many proposed estimators. The procedure is then illustrated using an analysis of real data.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)
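The target quantity R = P(X &lt; Y &lt; Z) has a closed form under the paper's exponentiated exponential setup; as a distribution-agnostic illustration only, a plain Monte Carlo estimate with independent unit-rate exponentials (not the paper's model) is:

```python
import random

def mc_prob_ordered(rate_x, rate_y, rate_z, n=200_000, seed=7):
    # Monte Carlo estimate of R = P(X < Y < Z) for independent exponentials.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.expovariate(rate_x)
        y = rng.expovariate(rate_y)
        z = rng.expovariate(rate_z)
        if x < y < z:
            hits += 1
    return hits / n
```

For unit rates the exact value is 1/2 − 1/3 = 1/6, which gives a quick sanity check on the simulation.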

19 pages, 897 KiB  
Article
Inference of Reliability Analysis for Type II Half Logistic Weibull Distribution with Application of Bladder Cancer
by Rania A. H. Mohamed, Ahlam H. Tolba, Ehab M. Almetwally and Dina A. Ramadan
Axioms 2022, 11(8), 386; https://doi.org/10.3390/axioms11080386 - 6 Aug 2022
Cited by 11 | Viewed by 1638
Abstract
The estimation of the unknown parameters of the Type II Half Logistic Weibull (TIIHLW) distribution is analyzed in this paper. The maximum likelihood and Bayes methods are used for estimation. These estimators are used to estimate the fuzzy reliability function, and the best estimator of the fuzzy reliability function is chosen by comparing mean square errors (MSE). The simulation results showed that the fuzzy reliability estimates outperform the crisp ones for all sample sizes, and the Bayes estimates of fuzzy reliability outperform those of the maximum likelihood technique, producing the lowest average MSE up to a sample size of n = 50. A simulated data set is used to diagnose the performance of the two techniques. A real data set is then analyzed, and the maximum likelihood fit of the TIIHLW model is compared with alternative models: Topp–Leone inverted Kumaraswamy, modified Kies inverted Topp–Leone, Kumaraswamy Weibull–Weibull, Marshall–Olkin alpha power inverse Weibull, and odd Weibull inverted Topp–Leone. We conclude that the TIIHLW distribution provides the best fit for these data.
(This article belongs to the Special Issue Computational Statistics & Data Analysis)