Search Results (60)

Search Parameters:
Keywords = multiple censored

20 pages, 356 KB  
Article
Exact Inference and Prediction for Exponential Models Under General Progressive Censoring with Application to Tire Wear Data
by Chien-Tai Lin
Mathematics 2025, 13(22), 3627; https://doi.org/10.3390/math13223627 - 12 Nov 2025
Abstract
General progressive Type-II censoring is widely applied in life-testing experiments to enhance efficiency by allowing early removal of surviving units, thereby reducing experimental time and cost. This paper develops exact inference and prediction procedures for one- and two-parameter exponential models based on multiple independent general progressively Type-II censored samples. By repeated application of a recursive algorithm, exact confidence intervals for the model parameters and exact prediction intervals for unobserved failure times are constructed. The proposed methods are illustrated with simulated and real (tire wear) data, demonstrating their practical applicability to partially censored reliability experiments.
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)
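To give a concrete flavor of the exact-inference machinery this abstract describes, here is a minimal sketch for the simpler special case of ordinary progressive Type-II censoring of a one-parameter exponential lifetime: the total time on test T = Σ(Rᵢ+1)xᵢ satisfies 2T/θ ~ χ²(2m), which yields an exact confidence interval for the mean θ. This is a standard textbook pivot, not the paper's general procedure, and all names in the snippet are illustrative.

```python
# Sketch: exact CI for the exponential mean under ordinary progressive
# Type-II censoring (a simpler relative of the paper's general scheme).
# x: ordered observed failure times; R: units removed at each failure.
import numpy as np
from scipy.stats import chi2

def exact_ci_exponential(x, R, level=0.95):
    x, R = np.asarray(x, float), np.asarray(R, int)
    m = len(x)                          # number of observed failures
    T = np.sum((R + 1) * x)             # total time on test
    theta_hat = T / m                   # MLE of the mean lifetime
    a = (1 - level) / 2
    # Pivot: 2*T/theta ~ chi-square with 2m degrees of freedom.
    lo = 2 * T / chi2.ppf(1 - a, 2 * m)
    hi = 2 * T / chi2.ppf(a, 2 * m)
    return theta_hat, (lo, hi)

# Example: 5 failures observed out of 10 units on test.
theta_hat, ci = exact_ci_exponential([0.4, 0.9, 1.7, 2.8, 4.1],
                                     [1, 0, 2, 0, 2])
print(theta_hat, ci)
```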
21 pages, 1895 KB  
Article
Computational Testing Procedure for the Overall Lifetime Performance Index of Multi-Component Exponentially Distributed Products
by Shu-Fei Wu and Chia-Chi Hsu
Stats 2025, 8(4), 104; https://doi.org/10.3390/stats8040104 - 2 Nov 2025
Abstract
In addition to products with a single component, this study examines products composed of multiple components whose lifetimes follow a one-parameter exponential distribution. An overall lifetime performance index is developed to assess products under the progressive type I interval censoring scheme. This study establishes the relationship between the overall and individual lifetime performance indices and derives the corresponding maximum likelihood estimators along with their asymptotic distributions. Based on the asymptotic distributions, lower confidence bounds for all indices are also established. Furthermore, a hypothesis testing procedure is formulated to evaluate whether the overall lifetime performance index achieves the specified target level, using the maximum likelihood estimator as the test statistic under a progressive type I interval censored sample. Moreover, a power analysis is carried out, and two numerical examples are presented to demonstrate the practical implementation of the testing procedure for the overall lifetime performance index. This research can be applied to the fields of life testing and reliability analysis.
13 pages, 275 KB  
Article
Generalized Gamma Frailty and Symmetric Normal Random Effects Model for Repeated Time-to-Event Data
by Kai Liu, Yan Qiao Wang, Xiaojun Zhu and Narayanaswamy Balakrishnan
Symmetry 2025, 17(10), 1760; https://doi.org/10.3390/sym17101760 - 17 Oct 2025
Abstract
Clustered time-to-event data are quite common in survival analysis, and finding a suitable model that accounts for both dispersion and censoring is an important issue. In this article, we present a flexible model for repeated, overdispersed time-to-event data with right-censoring. We present a general model incorporating generalized gamma and normal random effects in a Weibull distribution to accommodate overdispersion and data hierarchies, respectively. The normal random effect is symmetric in the sense that its probability density function is symmetric around its mean. Although the random effects are symmetrically distributed, the resulting frailty model has an asymmetric survival function, because the random effects enter the model multiplicatively through the hazard function, and exponentiating a symmetric normal variable yields a lognormal distribution, which is right-skewed. Due to the intractable integrals involved in the likelihood function and its derivatives, a Monte Carlo approach is used to approximate them. The maximum likelihood estimates of the model parameters are then determined numerically. An extensive simulation study is conducted to evaluate the performance of the proposed model and the method of inference developed here. Finally, the usefulness of the model is demonstrated by analyzing data on recurrent asthma attacks in children and a recurrent bladder cancer data set well known in the survival analysis literature.
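The Monte Carlo approximation step can be sketched in a few lines. The toy below marginalizes a Weibull likelihood over a single normal random effect acting multiplicatively on the hazard, using log-mean-exp for stability; it is a schematic under assumed parameter names, not the authors' estimation code (which also involves a generalized gamma frailty).

```python
# Toy sketch of Monte Carlo marginalization over a normal random effect b:
# L_i = E_b[ prod_j f(t_ij | b) ] is approximated by an average over draws.
import numpy as np

rng = np.random.default_rng(0)

def weibull_loglik_given_b(t, delta, shape, scale, b):
    # Weibull hazard scaled multiplicatively by exp(b) (lognormal frailty).
    haz = np.exp(b) * (shape / scale) * (t / scale) ** (shape - 1)
    cumhaz = np.exp(b) * (t / scale) ** shape
    return np.sum(delta * np.log(haz) - cumhaz)   # delta = 0 means censored

def mc_marginal_loglik(t, delta, shape, scale, sigma, n_draws=2000):
    b = rng.normal(0.0, sigma, n_draws)           # random-effect draws
    logL = np.array([weibull_loglik_given_b(t, delta, shape, scale, bi)
                     for bi in b])
    m = logL.max()                                # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(logL - m)))

# One cluster: two events and one right-censored time.
t, delta = np.array([1.2, 0.7, 3.0]), np.array([1, 1, 0])
print(mc_marginal_loglik(t, delta, shape=1.5, scale=2.0, sigma=0.5))
```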
16 pages, 1097 KB  
Article
The Role of Antifibrotic Therapy in Pulmonary Fibrosis and Lung Cancer: A Multicenter Retrospective Analysis
by Francesco Rocco Bertuccio, Nicola Baio, Fabio Perrotta, Donato Lacedonia, Vito D’Agnano, Andrea Bianco, Giulia Scioscia, Pasquale Tondo, Maria Pia Foschino Barbaro, Chandra Bortolotto, Angelo Guido Corsico and Giulia Maria Stella
Biomedicines 2025, 13(9), 2310; https://doi.org/10.3390/biomedicines13092310 - 21 Sep 2025
Abstract
Background: Patients with fibrotic interstitial lung disease (ILD) are at increased risk of lung cancer, yet the impact of antifibrotic therapy on oncologic outcomes remains unclear. Objective: This study aimed to explore associations between antifibrotic therapy and overall survival (OS) and acute exacerbations of ILD (AE-ILD) in patients with fibrotic ILD who develop lung cancer. Methods: We retrospectively analyzed 61 patients from multiple Italian centers: 35 received antifibrotic therapy (pirfenidone or nintedanib) and 26 did not. Outcomes included OS from cancer diagnosis and post-treatment AE-ILD. Results: Mean OS was 17.9 months in the antifibrotic group and 33.2 months in the non-antifibrotic group; no adjusted survival analyses were possible due to missing censoring data, and these descriptive values should not be overinterpreted. AE-ILD occurred in 11.4% of antifibrotic-treated patients and 11.5% of those without antifibrotics. PD-L1 expression was detected in 24.1% vs. 21.8% of tumors in the two groups, and autoantibody positivity was observed in 22.8% vs. 30.7%, respectively, reflecting differences in ILD subtypes. Conclusions: In this heterogeneous real-world cohort, antifibrotic therapy was not associated with increased AE-ILD risk, and descriptive OS comparisons showed no clear survival advantage. These exploratory findings warrant confirmation in larger, prospective studies.
31 pages, 12350 KB  
Article
Statistical Evaluation of Beta-Binomial Probability Law for Removal in Progressive First-Failure Censoring and Its Applications to Three Cancer Cases
by Ahmed Elshahhat, Osama E. Abo-Kasem and Heba S. Mohammed
Mathematics 2025, 13(18), 3028; https://doi.org/10.3390/math13183028 - 19 Sep 2025
Abstract
Progressive first-failure censoring is a flexible and cost-efficient strategy that captures real-world testing scenarios where only the first failure is observed at each stage while the remaining units are randomly removed, making it ideal for biomedical and reliability studies. By applying the α-power transformation to the exponential baseline, the proposed model introduces an additional flexibility parameter that enriches the family of lifetime distributions, enabling it to better capture varying failure rates and the diverse hazard-rate behaviors commonly observed in biomedical data, thus extending the classical exponential model. This study develops a novel computational framework for analyzing an α-powered exponential model under beta-binomial random removals within the proposed censoring scheme. To address the inherent complexity of the likelihood function arising from simultaneous random removals and progressive censoring, we derive closed-form expressions for the likelihood, survival, and hazard functions and propose efficient estimation strategies based on both maximum likelihood and Bayesian inference. For the Bayesian approach, gamma and beta priors are adopted, and a tailored Metropolis–Hastings algorithm is implemented to approximate posterior distributions under symmetric and asymmetric loss functions. To evaluate the empirical performance of the proposed estimators, extensive Monte Carlo simulations are conducted, examining bias, mean squared error, and credible-interval coverage under varying censoring levels and removal probabilities. Furthermore, the practical utility of the model is illustrated through three oncological datasets covering multiple myeloma, lung cancer, and breast cancer patients, demonstrating superior goodness of fit and predictive reliability compared to traditional models. The results show that the proposed lifespan model, under the beta-binomial probability law and within the examined censoring mechanism, offers a flexible and computationally tractable framework for reliability and biomedical survival analysis, providing new insights into censored data structures with random withdrawals.
(This article belongs to the Special Issue New Advance in Applied Probability and Statistical Inference)
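The Metropolis–Hastings machinery mentioned here is generic enough to sketch. The snippet below is a minimal random-walk sampler applied to a toy exponential-rate posterior with a gamma prior; the authors' tailored sampler for the α-powered exponential model would differ, so treat every target and tuning choice as an assumption.

```python
# Minimal random-walk Metropolis-Hastings sketch (generic, not the
# authors' tailored sampler): sample from a log-posterior `logpost`.
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(logpost, theta0, step, n_iter=5000):
    theta = np.asarray(theta0, float)
    lp = logpost(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, size=theta.shape)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy target: exponential-rate posterior with a gamma(2, 1) prior on the
# rate, parameterized on log(rate) so proposals stay unconstrained.
x = np.array([0.5, 1.1, 0.3, 2.2, 0.9])

def logpost(phi):
    lam = np.exp(phi[0])
    loglik = len(x) * np.log(lam) - lam * x.sum()
    logprior = np.log(lam) - lam          # gamma(2, 1) prior on lambda
    return loglik + logprior + phi[0]     # + log-Jacobian of lambda = e^phi

chain = metropolis_hastings(logpost, [0.0], step=0.5)
print(np.exp(chain[1000:, 0]).mean())     # posterior mean after burn-in
```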
14 pages, 792 KB  
Article
Reliability Assessment for Small-Sample Accelerated Life Tests with Normal Distribution
by Jianchao Guo and Huimin Fu
Machines 2025, 13(9), 850; https://doi.org/10.3390/machines13090850 - 14 Sep 2025
Abstract
A significant challenge in accelerated life testing (ALT) is the reliance on large sample sizes and multiple stress levels, which results in high costs and long test durations. To address this issue, this paper develops a new reliability assessment method for small-sample ALTs with a normal (or lognormal) distribution and censoring. The method enables a high-confidence evaluation of the percentile lifetime (reliable lifetime) under the normal operating stress level using censored data from only two accelerated stress levels. First, a relationship is established between the percentile lifetime at the normal stress level and the distribution parameters at the accelerated stress levels. Next, an initial estimate of the percentile lifetime is obtained from the failure data, and its confidence is refined via a Bayesian update using the nonfailures. Finally, an exact one-sided lower confidence limit (LCL) for the percentile lifetime and reliability is determined. This paper derives an analytical formula for LCLs under Type-II censoring scenarios and further extends the method to accommodate Type-I censored and general incomplete data. Monte Carlo simulations and case studies show that the proposed methods significantly reduce the required sample size and testing duration while offering greater theoretical rigor and accuracy than conventional methods.
(This article belongs to the Section Machines Testing and Maintenance)
13 pages, 1415 KB  
Article
The Impact of Sarcopenia, Myosteatosis, and Visceral Adiposity on Renal Transplantation Outcomes
by Esin Olcucuoglu, Utku Eren Ozkaya, Muhammed Emin Polat, Mehmet Yılmaz, Sedat Tastemur, Rıza Sarper Okten and Erkan Olcucuoglu
Medicina 2025, 61(9), 1608; https://doi.org/10.3390/medicina61091608 - 5 Sep 2025
Abstract
Background and Objectives: The impact of sarcopenia and myosteatosis on renal transplantation (RT) outcomes has yet to be explained, likely due to differences in assessment methods. The role of visceral adiposity is also not clearly defined. This retrospective study aimed to evaluate pretransplant body composition, including sarcopenia, myosteatosis, and the visceral-to-subcutaneous adipose tissue ratio (VSR), using computed tomography (CT), and to analyze their relationship with short- and long-term graft outcomes. Materials and Methods: A total of 94 patients who underwent RT between 2019 and 2023 and had pretransplant non-contrast abdominal CT scans were included. Skeletal muscle area (SMA) was assessed at the L3 vertebral level, including multiple muscle groups. Sarcopenia was defined by a low skeletal muscle index (SMI), while myosteatosis was defined by high intramuscular adipose tissue content (IMAC). Visceral adiposity was evaluated by the VSR. These parameters were compared with post-transplant outcomes. Results: The mean age was 42.69 ± 12.47 years, with 54.3% male patients. High IMAC was significantly associated with early graft failure (p = 0.026), delayed graft function (p = 0.005), death-censored graft failure (p = 0.036), and overall graft failure (p = 0.047). One-year mortality was also higher in the high-IMAC group (14.8% vs. 0.0%, p = 0.012). SMI and VSR were not significantly associated with outcomes. Myosteatosis emerged as a significant risk factor in univariate analysis but was not independently predictive in multivariate analysis. Among the established risk factors examined, recipient age was a significant predictor of overall graft failure (OR: 1.068, 95% CI: 1.001–1.141, p = 0.049), donation type (cadaveric vs. living) of death-censored graft failure (OR: 147.7, 95% CI: 2.1–10,427.0, p = 0.021), and cold ischemia time of delayed graft function (OR: 1.003, 95% CI: 1.001–1.006, p = 0.023). Conclusions: Myosteatosis correlates with worse graft outcomes and higher mortality, but its independent prognostic value requires further investigation.
(This article belongs to the Section Urology & Nephrology)
20 pages, 437 KB  
Article
A Copula-Driven CNN-LSTM Framework for Estimating Heterogeneous Treatment Effects in Multivariate Outcomes
by Jong-Min Kim
Mathematics 2025, 13(15), 2384; https://doi.org/10.3390/math13152384 - 24 Jul 2025
Cited by 2
Abstract
Estimating heterogeneous treatment effects (HTEs) across multiple correlated outcomes poses significant challenges due to complex dependency structures and diverse data types. In this study, we propose a novel deep learning framework integrating empirical copula transformations with a CNN-LSTM (Convolutional Neural Network and Long Short-Term Memory network) architecture to capture nonlinear dependencies and temporal dynamics in multivariate treatment effect estimation. The empirical copula transformation, a rank-based nonparametric approach, preprocesses the input covariates to better represent the underlying joint distributions before modeling. We compare this method with a baseline CNN-LSTM model lacking copula preprocessing and with the Causal Forest, a nonparametric tree-based approach to HTE estimation grounded in generalized random forests. Our framework accommodates continuous, count, and censored survival outcomes simultaneously through a multitask learning setup with customized loss functions, including the Cox partial likelihood for survival data. We evaluate model performance under varying treatment perturbation rates via extensive simulation studies, demonstrating that the empirical copula CNN-LSTM achieves superior accuracy and robustness in average treatment effect (ATE) and conditional average treatment effect (CATE) estimation. These results highlight the potential of copula-based deep learning models for causal inference in complex multivariate settings, offering valuable insights for personalized treatment strategies.
(This article belongs to the Special Issue Current Developments in Theoretical and Applied Statistics)
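The empirical copula transformation itself is a one-liner on ranks and is easy to reproduce. A hedged sketch (the function name and tie handling are illustrative choices, not the paper's):

```python
# Sketch of the empirical copula transformation: replace each covariate
# column by its normalized ranks, u_ij = rank(x_ij) / (n + 1), so the
# model sees the joint dependence structure on the copula scale.
import numpy as np
from scipy.stats import rankdata

def empirical_copula_transform(X):
    X = np.asarray(X, float)
    n = X.shape[0]
    # rankdata's 'average' default handles ties; dividing by n + 1
    # keeps all transformed values strictly inside (0, 1).
    return np.column_stack([rankdata(X[:, j]) / (n + 1)
                            for j in range(X.shape[1])])

X = np.array([[1.0, 10.0], [3.0, 7.0], [2.0, 7.0], [5.0, 1.0]])
print(empirical_copula_transform(X))
```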
35 pages, 11039 KB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Abstract
A novel unified progressive hybrid censoring scheme is introduced that combines progressive and hybrid censoring plans to allow flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced from such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference using informative prior information. Two real-world applications drawn from gold and diamond durability studies are examined to demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious-materials science. By applying four different optimality criteria to multiple competing plans, the progressive censoring strategies that yield the best performance are identified. The proposed censoring framework is effectively applied to real-world datasets involving diamonds and gold, demonstrating its practical utility in modeling the reliability and failure behavior of rare and high-value minerals.
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
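A schematic of the likelihood workflow described here, with a plain exponential log-likelihood standing in for the inverted Nadarajah–Haghighi model: maximize numerically, read off an approximate inverse observed information, and form Wald-type intervals. Everything model-specific below is assumed for illustration.

```python
# Generic ML + Wald-interval sketch. BFGS's inverse-Hessian estimate
# approximates the inverse observed Fisher information; the paper's
# INH log-likelihood would replace `negloglik`.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

x = np.array([0.8, 1.3, 0.2, 2.5, 0.6, 1.9])

def negloglik(theta):
    lam = np.exp(theta[0])              # log-parameterization of the rate
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize(negloglik, x0=[0.0], method="BFGS")
se = np.sqrt(res.hess_inv[0, 0])        # asymptotic standard error
lo, hi = res.x[0] + norm.ppf([0.025, 0.975]) * se
# Delta method: back-transform the interval for log(lambda) to lambda.
print(np.exp(res.x[0]), (np.exp(lo), np.exp(hi)))
```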
18 pages, 361 KB  
Article
Analyzing Competing Risks with Progressively Type-II Censored Data in Dagum Distributions
by Raghd Badwan and Reza Pakyari
Axioms 2025, 14(7), 508; https://doi.org/10.3390/axioms14070508 - 30 Jun 2025
Cited by 1
Abstract
Competing risk models are essential in survival analysis for studying systems with multiple mutually exclusive failure events. This study investigates the application of competing risk models to progressively Type-II censored data from the Dagum distribution, a flexible distribution suited to modeling data with heavy tails and varying skewness and kurtosis. The methodology includes maximum likelihood estimation of the unknown parameters, with a focus on the special case of a common shape parameter, which allows a closed-form expression for the relative risks. A hypothesis test is developed to assess the validity of this assumption, and both asymptotic and bootstrap confidence intervals are constructed. The performance of the proposed methods is evaluated through Monte Carlo simulations, and their applicability is demonstrated with a real-world example.
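The bootstrap confidence intervals mentioned here follow the usual percentile recipe; a generic sketch (the paper's version resamples progressively censored competing-risks data, which this toy does not attempt):

```python
# Percentile-bootstrap sketch for a parameter estimate.
import numpy as np

rng = np.random.default_rng(2)

def percentile_bootstrap_ci(data, estimator, n_boot=2000, level=0.95):
    n = len(data)
    # Resample with replacement and re-estimate on each replicate.
    stats = np.array([estimator(data[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    a = (1 - level) / 2
    return np.quantile(stats, [a, 1 - a])

data = rng.weibull(1.4, size=50)
print(percentile_bootstrap_ci(data, np.mean))
```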
34 pages, 18712 KB  
Article
Statistical Computation of Hjorth Competing Risks Using Binomial Removals in Adaptive Progressive Type II Censoring
by Refah Alotaibi, Mazen Nassar and Ahmed Elshahhat
Mathematics 2025, 13(12), 2010; https://doi.org/10.3390/math13122010 - 18 Jun 2025
Abstract
In complex reliability applications, it is common for the failure of an individual or an item to be attributed to multiple causes, known as competing risks. This paper explores estimation of the Hjorth competing risks model based on an adaptive progressive Type II censoring scheme with a binomial removal mechanism. Both frequentist and Bayesian methodologies are developed for parameter and reliability-metric estimation. Maximum likelihood estimates for the Hjorth parameters are computed numerically due to their intricate form, while the binomial removal parameter is derived explicitly. Confidence intervals are constructed using asymptotic approximations. Within the Bayesian paradigm, gamma priors are assigned to the Hjorth parameters and a beta prior to the binomial parameter, facilitating posterior analysis. Markov chain Monte Carlo techniques yield Bayesian estimates and credible intervals for the parameters and reliability measures. The performance of the proposed methods is compared using Monte Carlo simulations. Finally, to illustrate the practical applicability of the proposed methodology, two real-world competing risks data sets are analyzed: one on the breaking strength of jute fibers and the other on the failure modes of electrical appliances.
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)
24 pages, 755 KB  
Article
Inference for Dependent Competing Risks with Partially Observed Causes from Bivariate Inverted Exponentiated Pareto Distribution Under Generalized Progressive Hybrid Censoring
by Rani Kumari, Yogesh Mani Tripathi, Rajesh Kumar Sinha and Liang Wang
Axioms 2025, 14(3), 217; https://doi.org/10.3390/axioms14030217 - 16 Mar 2025
Abstract
In this paper, inference for dependent competing risks data with multiple causes of failure is considered. We discuss both classical and Bayesian methods for estimating the model parameters under the assumption that the data are observed under generalized progressive hybrid censoring. The maximum likelihood estimators of the model parameters are obtained when the latent failure times follow a bivariate inverted exponentiated Pareto distribution. The existence and uniqueness properties of these estimators are established, and asymptotic interval estimators are also constructed. Further, Bayes estimates and highest posterior density intervals are derived using flexible priors, and a Monte Carlo sampling algorithm is proposed for the posterior computations. The performance of all proposed methods is evaluated through extensive simulations. Moreover, a real-life example is presented to illustrate the practical application of our inferential procedures.
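Highest posterior density intervals, once an MCMC sample is in hand, reduce to a narrowest-window search; a minimal helper under assumed names (generic, not the paper's model-specific derivation):

```python
# Sketch: highest posterior density (HPD) interval from an MCMC sample.
import numpy as np

def hpd_interval(sample, level=0.95):
    s = np.sort(np.asarray(sample))
    n = len(s)
    m = int(np.ceil(level * n))            # points inside the interval
    widths = s[m - 1:] - s[: n - m + 1]    # all candidate window widths
    i = np.argmin(widths)                  # the narrowest window wins
    return s[i], s[i + m - 1]

rng = np.random.default_rng(5)
print(hpd_interval(rng.gamma(3.0, 1.0, size=10000)))
```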
21 pages, 2734 KB  
Article
The Computational Assessment on the Performance of Products with Multi-Parts Using the Gompertz Distribution
by Shu-Fei Wu and Chieh-Hsin Peng
Symmetry 2025, 17(3), 363; https://doi.org/10.3390/sym17030363 - 27 Feb 2025
Abstract
The lifetime performance index is widely used in the manufacturing industry to assess the capability and effectiveness of production processes. A new overall lifetime performance index is proposed for products whose multiple parts are produced on multiple dependent production lines. Each individual lifetime performance index for a single production line is connected to the overall lifetime performance index for multiple independent or dependent production lines, and the overall index increases with the overall process yield. We analyze the maximum likelihood estimators of the individual lifetime performance indices using progressively type I interval-censored samples when the lifetime of the i-th part follows a Gompertz distribution, for both the independent and the dependent case. To determine whether the overall lifetime performance index meets the desired target value, the maximum likelihood estimator of each individual index is used to conduct the testing procedures for the overall index in both cases. Power analysis of the multiple testing procedure is illustrated with figures, and key findings are summarized. A simulation study of the test powers is also conducted. Lastly, a practical example involving products with two parts demonstrates the application of the proposed testing algorithm. Given the asymmetry of the lifetime distribution, this research aligns with the study of asymmetric probability distributions and their diverse applications across various fields.
20 pages, 2869 KB  
Article
Censoring Sensitivity Analysis for Benchmarking Survival Machine Learning Methods
by János Báskay, Tamás Mezei, Péter Banczerowski, Anna Horváth, Tamás Joó and Péter Pollner
Sci 2025, 7(1), 18; https://doi.org/10.3390/sci7010018 - 13 Feb 2025
Cited by 2
Abstract
(1) Background: Survival analysis models in clinical research must effectively handle censored data, where complete survival times are unknown for some subjects. While established methodologies exist for validating standard machine learning models, current benchmarking approaches rarely assess model robustness under varying censoring conditions. This limitation creates uncertainty about model reliability in real-world applications where censoring patterns may differ from training data. We address this gap by introducing a systematic benchmarking methodology focused on censoring sensitivity. (2) Methods: We developed a benchmarking framework that assesses survival models through controlled modification of censoring conditions. Five models were evaluated: Cox proportional hazards, survival tree, random survival forest, gradient-boosted survival analysis, and mixture density networks. The framework systematically reduced observation periods and increased censoring rates while measuring performance through multiple metrics following Bayesian hyperparameter optimization. (3) Results: Model performance showed greater sensitivity to increased censoring rates than to reduced observation periods. Non-linear models, especially mixture density networks, exhibited higher vulnerability to data quality degradation. Statistical comparisons became increasingly challenging with higher censoring rates due to widened confidence intervals. (4) Conclusions: Our methodology provides a new standard for evaluating survival analysis models, revealing the critical impact of censoring on model performance. These findings offer practical guidance for model selection and development in clinical applications, emphasizing the importance of robust censoring handling strategies.
(This article belongs to the Section Computer Sciences, Mathematics and AI)
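The benchmarking idea, stripped to essentials, can be mimicked by tightening an administrative censoring cutoff and refitting a model at each level. The sketch below assumes the lifelines package and synthetic data; it is a schematic of the framework's core loop, not the authors' pipeline, and all column names are illustrative.

```python
# Sketch: progressively tighten the follow-up cutoff tau, refit a Cox
# model, and track how the concordance index degrades with censoring.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
t = rng.exponential(1.0 / np.exp(0.7 * x))       # true event times
for tau in [np.inf, 2.0, 1.0, 0.5]:              # tighter follow-up
    obs = np.minimum(t, tau)
    event = (t <= tau).astype(int)                # 0 = censored at tau
    df = pd.DataFrame({"T": obs, "E": event, "x": x})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    # Report cutoff, achieved censoring rate, and concordance index.
    print(tau, round(1 - event.mean(), 2), round(cph.concordance_index_, 3))
```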
80 pages, 858 KB  
Article
Uniform in Number of Neighbor Consistency and Weak Convergence of k-Nearest Neighbor Single Index Conditional Processes and k-Nearest Neighbor Single Index Conditional U-Processes Involving Functional Mixing Data
by Salim Bouzebda
Symmetry 2024, 16(12), 1576; https://doi.org/10.3390/sym16121576 - 25 Nov 2024
Cited by 6
Abstract
U-statistics are fundamental in modeling statistical measures that involve responses from multiple subjects. They generalize the concept of the empirical mean of a random variable $X$ to summations over each $m$-tuple of distinct observations of $X$. W. Stute introduced conditional U-statistics, extending the Nadaraya–Watson estimates for regression functions, and demonstrated their strong pointwise consistency with the conditional expectation $r^{(m)}(\varphi,\mathbf{t}) = \mathbb{E}[\varphi(Y_1,\ldots,Y_m) \mid (X_1,\ldots,X_m) = \mathbf{t}]$ for $\mathbf{t} \in \mathcal{X}^m$. This paper focuses on estimating functional single index (FSI) conditional U-processes for regular time series data. We propose a novel, automatic, and location-adaptive procedure for estimating these processes based on $k$-Nearest Neighbor ($k$NN) principles. Our asymptotic analysis includes data-driven neighbor selection, making the method highly practical. The local nature of the $k$NN approach improves predictive power compared to traditional kernel estimates. Additionally, we establish new uniform results in bandwidth selection for kernel estimates in FSI conditional U-processes, including almost-complete convergence rates and weak convergence under general conditions. These results apply to both bounded and unbounded function classes satisfying certain moment conditions and are proven under standard Vapnik–Chervonenkis structural conditions and mild model assumptions. Furthermore, we demonstrate uniform consistency for the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship. This result is independently valuable and has potential applications in areas such as set-indexed conditional U-statistics, the Kendall rank correlation coefficient, and discrimination problems.
(This article belongs to the Section Mathematics)
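For intuition, here is a toy kNN estimator of a degree-two conditional U-statistic in a scalar-covariate setting, a drastic simplification of the paper's functional single-index, mixing-data framework; all names and choices are illustrative.

```python
# Toy kNN version of a conditional U-statistic of degree m = 2:
# estimate r(phi, (t1, t2)) = E[phi(Y_i, Y_j) | X_i = t1, X_j = t2]
# by averaging phi over pairs drawn from the k nearest neighbors of
# t1 and t2.
import numpy as np

def knn_conditional_u(X, Y, t1, t2, k, phi):
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    nn1 = np.argsort(np.abs(X - t1))[:k]       # k nearest to t1
    nn2 = np.argsort(np.abs(X - t2))[:k]       # k nearest to t2
    # Average phi over distinct-index pairs, mirroring the U-statistic sum.
    vals = [phi(Y[i], Y[j]) for i in nn1 for j in nn2 if i != j]
    return np.mean(vals)

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, 200)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, 200)
# phi(y1, y2) = y1 * y2 targets E[Y | X = t1] * E[Y | X = t2] here.
print(knn_conditional_u(X, Y, 0.2, 0.7, k=15, phi=lambda a, b: a * b))
```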