Search Results (12)

Search Parameters:
Keywords = discrete censored data

25 pages, 7825 KiB  
Article
A New Hjorth Distribution in Its Discrete Version
by Hanan Haj Ahmad and Ahmed Elshahhat
Mathematics 2025, 13(5), 875; https://doi.org/10.3390/math13050875 - 6 Mar 2025
Cited by 1 | Viewed by 544
Abstract
The Hjorth distribution is flexible in modeling various hazard rate shapes, including increasing, decreasing, and bathtub shapes. This makes it highly useful in reliability analysis and survival studies, where different failure rate behaviors must be captured effectively. In some practical experiments, the observed data may appear to be continuous, but their intrinsic discreteness requires the development of specialized techniques for constructing discrete counterparts to continuous distributions. This study extends this methodology by discretizing the Hjorth distribution using the survival function approach. The proposed discrete Hjorth distribution preserves the essential statistical characteristics of its continuous counterpart, such as percentiles and quantiles, making it a valuable tool for modeling lifetime data. The complexity of the transformation requires numerical techniques to ensure accurate estimation and analysis. A key feature of this study is the incorporation of Type-II censored samples. We also derive key statistical properties, including the quantile function and order statistics, and then employ maximum likelihood and Bayesian inference methods. A comparative analysis of these estimation techniques is conducted through simulation studies. Furthermore, the proposed model is validated using two real-world datasets, on electronic device failure times and ball-bearing failures, by applying goodness-of-fit tests against alternative discrete models. The findings emphasize the versatility and applicability of the discrete Hjorth distribution in reliability studies, engineering, and survival analysis, offering a robust framework for modeling discrete data in practical scenarios. To our knowledge, no prior research has explored the use of censored data in analyzing discrete Hjorth-distributed data. This study fills that gap, providing new insights into discrete reliability modeling and broadening the application of the Hjorth distribution in real-world scenarios. Full article
(This article belongs to the Special Issue New Advances in Distribution Theory and Its Applications)
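The survival-discretization approach mentioned in the abstract has a compact form: if S(t) is the survival function of the continuous distribution, the discrete analogue puts mass P(X = k) = S(k) − S(k + 1) on each non-negative integer k. A minimal sketch, assuming the standard Hjorth survival function S(t) = exp(−δt²/2)(1 + βt)^(−θ/β) and illustrative parameter values:

```python
import math

def hjorth_sf(t, delta, theta, beta):
    """Survival function of the continuous Hjorth distribution:
    S(t) = exp(-delta*t^2/2) * (1 + beta*t)^(-theta/beta)."""
    return math.exp(-delta * t * t / 2.0) * (1.0 + beta * t) ** (-theta / beta)

def discrete_pmf(k, delta, theta, beta):
    """Survival discretization: P(X = k) = S(k) - S(k + 1), k = 0, 1, 2, ..."""
    return hjorth_sf(k, delta, theta, beta) - hjorth_sf(k + 1, delta, theta, beta)

# Illustrative parameters; the pmf telescopes, so it sums to S(0) = 1.
total = sum(discrete_pmf(k, 0.5, 1.0, 2.0) for k in range(100))
```

Because the sum telescopes to S(0) − S(100) ≈ 1, the construction automatically yields a proper discrete distribution that shares the survival function of its continuous parent at the integers.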

14 pages, 457 KiB  
Article
Proportional Log Survival Model for Discrete Time-to-Event Data
by Tiago Chandiona Ernesto Franque, Marcílio Ramos Pereira Cardial and Eduardo Yoshio Nakano
Mathematics 2025, 13(5), 800; https://doi.org/10.3390/math13050800 - 27 Feb 2025
Viewed by 446
Abstract
The aim of this work is to propose a proportional log survival model (PLSM) as a discrete alternative to the proportional hazards (PH) model. This paper presents the formulation of PLSM as well as the procedures for verifying its assumption. The parameters of the PLSM are inferred using the maximum likelihood method, and a simulation study was carried out to investigate the usual asymptotic properties of the estimators. The PLSM was illustrated using data on the survival time of leukemia patients, and it was shown to be a viable alternative for modeling discrete survival data in the presence of covariates. Full article
(This article belongs to the Section D1: Probability and Statistics)

22 pages, 420 KiB  
Article
Estimation of Marshall–Olkin Extended Generalized Extreme Value Distribution Parameters under Progressive Type-II Censoring by Using a Genetic Algorithm
by Rasha Abd El-Wahab Attwa, Shimaa Wasfy Sadk and Taha Radwan
Symmetry 2024, 16(6), 669; https://doi.org/10.3390/sym16060669 - 29 May 2024
Cited by 4 | Viewed by 1347
Abstract
In this article, we consider the statistical analysis of the parameter estimation of the Marshall–Olkin extended generalized extreme value under linear normalization distribution (MO-GEVL) within the context of progressively type-II censored data. The progressively type-II censored data are considered for three specific removal patterns: fixed, discrete uniform, and binomial random removal. The challenge lies in the computation of maximum likelihood estimates (MLEs), as there is no straightforward analytical solution. Classical numerical methods are inadequate for solving the complex MLE equation system, making artificial intelligence algorithms necessary; this article utilizes the genetic algorithm (GA) to overcome this difficulty. Parameter estimation is considered through both maximum likelihood and Bayesian methods. For the MLE, confidence intervals of the parameters are calculated using the Fisher information matrix. In the Bayesian estimation, the Lindley approximation is applied under LINEX and squared error loss functions, suitable for both non-informative and informative contexts. The effectiveness and applicability of these proposed methods are demonstrated through numerical simulations and practical real-data examples. Full article
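The genetic-algorithm idea above (maximize a likelihood with no closed-form solution by evolving a population of candidate parameter values) can be sketched in a few lines. The example below is a hypothetical toy, not the MO-GEVL likelihood: it fits a one-parameter exponential rate, whose analytic MLE (1/mean) makes the GA's answer easy to check.

```python
import math
import random

random.seed(1)

# Toy data: the GA searches for the rate of an exponential model.
data = [0.3, 1.2, 0.7, 2.5, 0.9, 1.6, 0.4, 1.1]

def log_lik(rate):
    """Exponential log-likelihood; invalid rates get -inf fitness."""
    if rate <= 0:
        return -math.inf
    return sum(math.log(rate) - rate * x for x in data)

def genetic_mle(pop_size=40, generations=200, mut_sd=0.1):
    """Minimal GA: keep the fittest half, breed children by
    arithmetic crossover plus Gaussian mutation."""
    pop = [random.uniform(0.01, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=log_lik, reverse=True)
        elite = pop[: pop_size // 2]           # elitism: best half survives
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = (a + b) / 2.0              # arithmetic crossover
            child += random.gauss(0.0, mut_sd) # Gaussian mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=log_lik)

rate_hat = genetic_mle()
```

The same select-crossover-mutate loop extends to a multi-parameter likelihood such as MO-GEVL by evolving parameter vectors instead of scalars.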

17 pages, 1463 KiB  
Article
Proportional Odds Hazard Model for Discrete Time-to-Event Data
by Maria Gabriella Figueiredo Vieira, Marcílio Ramos Pereira Cardial, Raul Matsushita and Eduardo Yoshio Nakano
Axioms 2023, 12(12), 1102; https://doi.org/10.3390/axioms12121102 - 6 Dec 2023
Cited by 3 | Viewed by 2015
Abstract
In this article, we present the development of the proportional odds hazard model for discrete time-to-event data. In this work, inferences about the model’s parameters were formulated considering the presence of right censoring and the discrete Weibull and log-logistic distributions. Simulation studies were carried out to check the asymptotic properties of the estimators. In addition, procedures for checking the proportional odds assumption were proposed, and the proposed model is illustrated using a dataset on the survival time of patients with low back pain. Full article
(This article belongs to the Special Issue New Trends in Discrete Probability and Statistics)
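One common way to write a proportional odds model on a discrete hazard, assumed here purely for illustration (the paper's exact parameterization is not reproduced), multiplies the baseline hazard odds h0(t)/(1 − h0(t)) by a covariate effect ψ:

```python
def po_hazard(h0, psi):
    """Discrete hazard under a proportional-odds covariate effect:
    psi scales the baseline hazard odds h0 / (1 - h0)."""
    odds = psi * h0 / (1.0 - h0)
    return odds / (1.0 + odds)
```

With psi = 1 the baseline hazard is recovered, while psi > 1 raises the hazard at every time point; verifying the proportional odds assumption amounts to checking that the fitted odds ratio is constant over time.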

16 pages, 423 KiB  
Article
Joint Model for Estimating the Asymmetric Distribution of Medical Costs Based on a History Process
by Simeng Li, Dianliang Deng, Yuecai Han and Dingwen Zhang
Symmetry 2023, 15(12), 2130; https://doi.org/10.3390/sym15122130 - 30 Nov 2023
Cited by 1 | Viewed by 1255
Abstract
In this paper, we modify a semiparametric estimation of the joint model for the mean medical cost function with time-dependent covariates so that it can describe the nonlinear relationship between the longitudinal variable and time points by using polynomial approximation. The observation time points are discrete and not exactly the same for all subjects; in order to use all of the information, we first estimate the mean medical cost at the same observed time points for all subjects, and then we weight these values using the kernel method. A smooth mean function of medical costs can therefore be obtained. The proposed estimation method can be used for asymmetric distribution statistics. The consistency of the estimator is demonstrated by theoretical analysis. For the simulation study, we first set the values of the parameters and non-parametric functions, then generated random samples for covariates and censored survival times, and finally produced the longitudinal data of the response variables from the covariates and survival times. Numerical simulation experiments were conducted by applying the proposed method and the JM package in R to the generated data, and the estimated results for parameters and non-parametric functions were compared across different settings. Numerical results illustrate that the standard deviations of the parametric estimators decrease as the sample size increases and are much smaller than the preassigned threshold value. The estimates of the non-parametric functions almost coincide with the true functions, as shown in the figures of the simulation study. We apply the proposed model to a real data set from a multicenter automatic defibrillator implantation trial. Full article
(This article belongs to the Section Mathematics)

25 pages, 3225 KiB  
Article
The Discrete Exponentiated-Chen Model and Its Applications
by Refah Alotaibi, Hoda Rezk, Chanseok Park and Ahmed Elshahhat
Symmetry 2023, 15(6), 1278; https://doi.org/10.3390/sym15061278 - 18 Jun 2023
Cited by 5 | Viewed by 2098
Abstract
A novel discrete exponentiated Chen (DEC) distribution, a discrete analogue of the continuous exponentiated Chen distribution, is proposed. The offered model is more adaptable for analyzing a wide range of data than traditional and recently published models. Several important statistical and reliability characteristics of the DEC model are introduced. In the presence of Type-II censored data, the maximum likelihood and asymptotic confidence interval estimators of the model parameters are acquired. Two bootstrapping estimators of the DEC parameters are also obtained. To examine the efficacy of the adopted methods, several simulations are implemented. To further illustrate the offered model in a life-testing scenario, two applications, based on the number of vehicle fatalities in South Carolina in 2012 and the final exam marks in 2004 at the Indian Institute of Technology Kanpur, are analyzed. The analysis findings showed that the DEC model is the most effective model for fitting the supplied data sets compared to eleven well-known models in the literature, including the Poisson, geometric, negative binomial, discrete Weibull, discrete Burr Type XII, discrete generalized exponential, discrete gamma, discrete Burr Hatke, discrete Nadarajah-Haghighi, discrete modified Weibull, and exponentiated discrete Weibull models. Ultimately, the new model is recommended for application in many fields of real practice. Full article

17 pages, 1940 KiB  
Article
Measuring the Recovery Performance of a Portfolio of NPLs
by Alessandra Carleo, Roberto Rocci and Maria Sole Staffa
Computation 2023, 11(2), 29; https://doi.org/10.3390/computation11020029 - 7 Feb 2023
Cited by 4 | Viewed by 3748
Abstract
The objective of the present paper is to propose a new method to measure the recovery performance of a portfolio of non-performing loans (NPLs) in terms of recovery rate and time to liquidate. The fundamental idea is to draw a curve representing the recovery rates over time, here assumed discretized, for example, in years. In this way, the user can simultaneously obtain information about the recovery rate and the time to liquidate of the portfolio. In particular, it is discussed how to estimate such a curve in the presence of right-censored data, e.g., when the NPLs composing the portfolio have been observed in different time periods, with a method based on an algorithm that is usually used in the construction of survival curves. The curves obtained are smoothed with nonparametric statistical learning techniques. The effectiveness of the proposal is shown by applying the method to simulated and real financial data. The latter concern portfolios of Italian unsecured NPLs taken over by a specialized operator. Full article
(This article belongs to the Special Issue Computational Issues in Insurance and Finance)
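The "algorithm usually used in the construction of survival curves" is presumably the Kaplan-Meier product-limit estimator; a minimal sketch for right-censored liquidation years follows (the paper's exact procedure, which also weighs recovery amounts and smooths the curve, is not reproduced here).

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t) = P(not yet liquidated by year t).
    times[i]  : year loan i was liquidated or last observed
    events[i] : True if liquidated, False if right-censored"""
    event_years = sorted({t for t, e in zip(times, events) if e})
    surv, s = {}, 1.0
    for t in event_years:
        at_risk = sum(1 for ti in times if ti >= t)        # still unresolved at t
        closed = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        s *= 1.0 - closed / at_risk                        # product-limit update
        surv[t] = s
    return surv

# Five loans: liquidated in years 1, 2, 3; censored in years 2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [True, True, False, True, False])
```

Here 1 − S(t) tracks the cumulative proportion of loans liquidated by year t, which is the time-to-liquidate side of the performance measure.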

19 pages, 509 KiB  
Article
Estimation of the Generalized Logarithmic Transformation Exponential Distribution under Progressively Type-II Censored Data with Application to the COVID-19 Mortality Rates
by Olayan Albalawi, Naresh Chandra Kabdwal, Qazi J. Azhad, Rashi Hora and Basim S. O. Alsaedi
Mathematics 2022, 10(7), 1015; https://doi.org/10.3390/math10071015 - 22 Mar 2022
Cited by 4 | Viewed by 2581
Abstract
In this paper, classical and Bayesian estimation of the parameters and the reliability function of the generalized logarithmic transformation exponential (GLTE) distribution is proposed when the lifetimes are progressively censored. The maximum likelihood estimators of the unknown parameters and the corresponding reliability function are obtained under the classical setup. The Bayes estimators are obtained for symmetric (squared error) and asymmetric (LINEX and general entropy) loss functions, by considering a discrete prior for the scale parameter and a conditional gamma prior for the shape parameter. Interval estimation of the unknown parameters and reliability function is also considered for both classical and Bayesian schemes. The performances of the various derived estimators are assessed using a simulation study for different sample sizes and progressive censoring schemes. Finally, COVID-19 mortality data sets are provided to illustrate the computation of the various estimators. Full article

11 pages, 381 KiB  
Article
Subgroup Identification and Regression Analysis of Clustered and Heterogeneous Interval-Censored Data
by Xifen Huang and Jinfeng Xu
Mathematics 2022, 10(6), 862; https://doi.org/10.3390/math10060862 - 8 Mar 2022
Cited by 1 | Viewed by 2526
Abstract
Clustered and heterogeneous interval-censored data occur in many fields such as medical studies. For example, in a migraine study with the Netherlands Twin Registry, information including the time to diagnosis of migraine and gender was collected for 3975 monozygotic and dizygotic twins. Since each study subject is observed only at discrete and periodic follow-up time points, the failure times of interest (i.e., the time when the individual first had a migraine) are known only to belong to certain intervals and hence are interval-censored. Furthermore, these twins come from different genetic backgrounds and may be associated with differential risks for developing migraines. For simultaneous subgroup identification and regression analysis of such data, we propose a latent Cox model where the number of subgroups is not assumed a priori but rather estimated from the data. A nonparametric maximum likelihood method and an EM algorithm with the monotone ascent property are also developed for estimating the model parameters. Simulation studies are conducted to assess the finite-sample performance of the proposed estimation procedure. We further illustrate the proposed methodologies with an empirical analysis of the migraine data. Full article
(This article belongs to the Special Issue Advances in Computational Statistics and Applications)

15 pages, 486 KiB  
Article
Linked Lives: Does Disability and Marital Quality Influence Risk of Marital Dissolution among Older Couples?
by Kenzie Latham-Mintus, Jeanne Holcomb and Andrew P. Zervos
Soc. Sci. 2022, 11(1), 27; https://doi.org/10.3390/socsci11010027 - 15 Jan 2022
Cited by 5 | Viewed by 4294
Abstract
Using fourteen waves of data from the Health and Retirement Study (HRS), a longitudinal panel survey with respondents in the United States, this research explores whether marital quality—as measured by reports of enjoyment of time together—influences the risk of divorce or separation when either spouse acquires a basic care disability. Discrete-time event history models with multiple competing events were estimated using multinomial logistic regression. Respondents were followed until they experienced the focal event (i.e., divorce or separation) or right-hand censoring (i.e., a competing event, or being still married at the end of observation). Disability among wives was predictive of divorce/separation in the main-effects model. Low levels of marital quality (i.e., enjoyment of time together) were associated with marital dissolution. An interaction between marital quality and disability yielded a significant association among couples where at least one spouse acquired a basic care disability. For couples who acquired disability, those who reported low enjoyment were more likely to divorce/separate than those with high enjoyment; however, the group with the highest predicted probability was couples with low enjoyment but no acquired disability. Full article
(This article belongs to the Special Issue Divorce and Life Course)
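Discrete-time event history models with competing events are typically fit by expanding each couple into person-period records and running a multinomial logistic regression on the expanded rows. A sketch of the expansion step, with hypothetical field names (the HRS variables are not reproduced here):

```python
def person_period(spells):
    """Expand one summary row per couple into person-period records
    for a discrete-time model. Each spell is (couple_id, last_wave,
    outcome), where outcome is 0 = censored/still married,
    1 = divorce/separation, 2 = competing event."""
    rows = []
    for couple_id, last_wave, outcome in spells:
        for wave in range(1, last_wave + 1):
            # The outcome code appears only in the final observed wave;
            # every earlier wave is a "no event yet" row.
            y = outcome if wave == last_wave else 0
            rows.append({"id": couple_id, "wave": wave, "y": y})
    return rows

# Couple 1 divorces at wave 3; couple 2 is censored after wave 2.
rows = person_period([(1, 3, 1), (2, 2, 0)])
```

A multinomial logit on these rows, with wave dummies and covariates such as disability and marital quality, then estimates the per-wave probabilities of divorce/separation versus each competing event.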

18 pages, 657 KiB  
Article
A Class of Exponentiated Regression Model for Non Negative Censored Data with an Application to Antibody Response to Vaccine
by Guillermo Martínez-Flórez, Sandra Vergara-Cardozo and Roger Tovar-Falón
Symmetry 2021, 13(8), 1419; https://doi.org/10.3390/sym13081419 - 3 Aug 2021
Cited by 1 | Viewed by 2026
Abstract
In this paper, an asymmetric regression model for censored non-negative data, based on a mixture of the centred exponentiated log-skew-normal and Bernoulli distributions, is introduced. To connect the discrete part with the continuous distribution, the logit link function is used. The parameters of the model are estimated using the maximum likelihood method. The score function and the information matrix are shown in detail. Antibody data from a study of the measles vaccine are used to illustrate the applicability of the proposed model, which was found to give the best fit to the data compared with other models used in the literature. Full article

28 pages, 8517 KiB  
Article
A 10-Year Statistical Analysis of Heavy Metals in River and Sediment in Hengyang Segment, Xiangjiang River Basin, China
by Jingwen Tang, Liyuan Chai, Huan Li, Zhihui Yang and Weichun Yang
Sustainability 2018, 10(4), 1057; https://doi.org/10.3390/su10041057 - 3 Apr 2018
Cited by 24 | Viewed by 4199
Abstract
Heavy metal elements in water and surface sediments were characterized in the Hengyang segment of the Xiangjiang River basin, one of China’s most important heavy metal control and treatment regions. Ten years of heavy metal monitoring data for water and sediment were acquired from an environmental monitoring program in the main channel of the studied area. Descriptive and exploratory statistical procedures were performed to reveal the characteristics of the sample distributions of the heavy metal elements, which were largely right-skewed. Data censoring and overly severe rounding in the water monitoring data were identified as causes of discretization in the sample distributions. Temporal and spatial characteristics of the data sets are addressed. Chromium (Cr) in the sediment exhibited unique behavior, which could be caused by a rapid deposition and release process. Full article
