Search Results (31)

Search Parameters:
Keywords = penalized likelihood function

14 pages, 5562 KB  
Article
Does Q.Clear Processing Change PET Ratios? Quantitative Evidence Using BTXBrain-DAT
by Ari Chong, Jung-Min Ha and Ji Yeon Chung
Brain Sci. 2025, 15(10), 1036; https://doi.org/10.3390/brainsci15101036 - 24 Sep 2025
Viewed by 44
Abstract
Introduction: Bayesian penalized likelihood (BPL) reconstruction algorithms, commercially implemented as Q.Clear (GE Healthcare), enhance image quality but may alter quantitative metrics. The impact of BPL on dopamine transporter (DAT) PET quantification, including ratios, remains unclear. This study investigates whether Q.Clear processing alters key metrics such as specific binding ratios (SBRs) and interregional ratios. Methods: We retrospectively analyzed 170 paired F-18 FP-CIT PET datasets reconstructed with conventional 3D-OSEM (baseline-DICOM) and Q.Clear (Q.Clear-DICOM). Quantification was performed using BTXBrain-DAT (Brightonix Imaging), yielding 57 specific binding ratios (SBRs), three asymmetry indices, and nine interregional ratios. Paired statistical tests, Bland–Altman plots, and reproducibility checks were conducted. Visual reads by two nuclear medicine physicians were also compared between datasets. Results: Q.Clear processing significantly altered all quantitative metrics (p < 0.001). SBR values changed in all 57 regions, with most high-uptake regions showing an increase and low-uptake regions showing a decrease. Striatal and caudate asymmetry indices showed significant differences (p < 0.0001), whereas the putamen index remained stable. All interregional ratios differed significantly, although Bland–Altman analysis indicated relative stability for ratios compared with asymmetric indices. BTXBrain-DAT showed perfect reproducibility on repeat analysis, and visual interpretation was unaffected by reconstruction method. Conclusions: Q.Clear (BPL) reconstruction substantially influences F-18 FP-CIT PET quantification, including ratios and asymmetry indices, while leaving visual interpretation unchanged. These findings highlight the need for caution when using image enhancement functions for quantitative analysis, particularly in clinical studies involving low-uptake regions or multicenter data comparisons. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
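
For orientation, the quantitative metrics at stake here are commonly defined as follows (the study's exact definitions and reference region may differ):
\[
\mathrm{SBR}_r = \frac{C_r - C_{\mathrm{ref}}}{C_{\mathrm{ref}}},
\qquad
\mathrm{AI} = \frac{\lvert C_L - C_R \rvert}{(C_L + C_R)/2}\times 100\%,
\]
where \(C_r\) is the mean uptake in striatal region \(r\), \(C_{\mathrm{ref}}\) is the mean uptake in a low-binding reference region (often occipital cortex), and \(C_L\), \(C_R\) are the left and right values of a paired region.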

25 pages, 434 KB  
Article
The Impact of Digitalization on Carbon Emission Efficiency: An Intrinsic Gaussian Process Regression Approach
by Yongtong Hu, Jiaqi Xu and Tao Liu
Sustainability 2025, 17(14), 6551; https://doi.org/10.3390/su17146551 - 17 Jul 2025
Cited by 1 | Viewed by 553
Abstract
This study introduces an intrinsic Gaussian Process Regression (iGPR) model for the first time, which incorporates non-Euclidean spatial covariates via a Gaussian process prior to analyzing the relationship between digitalization and carbon emission efficiency. The iGPR model’s hierarchical design embeds a Gaussian process as a flexible spatial random effect with a heat-kernel-based covariance function to capture the manifold geometry of spatial features. To enable tractable inference, we employ a penalized maximum-likelihood estimation (PMLE) approach to jointly estimate regression coefficients and covariance hyperparameters. Using a panel dataset linking a national digitalization (modernization) index to carbon emission efficiency, the empirical analysis demonstrates that digitalization has a significantly positive impact on carbon emission efficiency while accounting for spatial heterogeneity. The iGPR model also exhibits superior predictive accuracy compared to state-of-the-art machine learning methods (including XGBoost, random forest, support vector regression, ElasticNet, and a standard Gaussian process regression), achieving the lowest mean squared error (MSE = 0.0047) and an average prediction error near zero. Robustness checks include instrumental-variable GMM estimation to address potential endogeneity across the efficiency distribution and confirm the stability of the estimated positive effect of digitalization. Full article
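
As a rough illustration of penalized maximum-likelihood estimation of regression coefficients jointly with Gaussian-process hyperparameters, the sketch below substitutes an ordinary squared-exponential kernel for the paper's heat-kernel covariance on a manifold; the variable names, ridge-type penalty, and synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_penalized_loglik(params, y, X, S, ridge=1e-2):
    """Negative penalized Gaussian log-likelihood for y = X @ beta + f(S) + noise."""
    p = X.shape[1]
    beta = params[:p]
    log_sf, log_ell, log_sn = params[p:]             # log signal sd, length-scale, noise sd
    sf, ell, sn = np.exp(log_sf), np.exp(log_ell), np.exp(log_sn)
    D2 = cdist(S, S, "sqeuclidean")                  # stand-in for the heat-kernel distance
    K = sf**2 * np.exp(-0.5 * D2 / ell**2) + sn**2 * np.eye(len(y))
    r = y - X @ beta
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    loglik = -0.5 * r @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)
    return -loglik + 0.5 * ridge * (beta @ beta)     # simple ridge-type penalty on beta

rng = np.random.default_rng(0)
n = 60
S = rng.uniform(size=(n, 2))                         # spatial coordinates
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + np.sin(4 * S[:, 0]) + 0.1 * rng.normal(size=n)
start = np.zeros(X.shape[1] + 3)
fit = minimize(neg_penalized_loglik, start, args=(y, X, S), method="L-BFGS-B")
print("estimated beta:", fit.x[:X.shape[1]])
```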

25 pages, 646 KB  
Article
Exponential Squared Loss-Based Robust Variable Selection with Prior Information in Linear Regression Models
by Hejun Wei, Tian Jin and Yunquan Song
Axioms 2025, 14(7), 516; https://doi.org/10.3390/axioms14070516 - 4 Jul 2025
Viewed by 353
Abstract
This paper proposes a robust variable selection method that incorporates prior information through linear constraints. For more than a decade, penalized likelihood frameworks have been the predominant approach for variable selection, where appropriate loss and penalty functions are selected to formulate unconstrained optimization problems. However, in many specific applications, some prior information can be obtained. In this paper, we reformulate variable selection by incorporating prior knowledge as linear constraints. In addition, the loss function adopted in this paper is a robust exponential squared loss function, which ensures that the estimation of model parameter coefficient will not have a great impact when there are a few outliers in the dataset. This paper uses the designed solution algorithm to calculate the estimated values of coefficients and some other parameters, and finally conducts numerical simulations and a real-data experiment. Experimental results demonstrate that our model significantly improves estimation robustness compared to existing methods, even in outlier-contaminated scenarios. Full article
(This article belongs to the Special Issue Computational Statistics and Its Applications, 2nd Edition)
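
A schematic form of the constrained, penalized objective described above (notation assumed here rather than taken from the paper) is
\[
\min_{\beta}\; \sum_{i=1}^{n}\Bigl\{1 - \exp\!\bigl(-(y_i - x_i^{\top}\beta)^2/\gamma\bigr)\Bigr\}
\;+\; \sum_{j=1}^{p} p_{\lambda}(\lvert\beta_j\rvert)
\quad \text{subject to } A\beta \le b,
\]
where \(\gamma\) controls robustness (each residual contributes at most 1 to the loss, so outliers cannot dominate), \(p_{\lambda}\) is a sparsity-inducing penalty, and \(A\beta \le b\) encodes the linear constraints that carry the prior information.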

23 pages, 472 KB  
Article
Variable Selection for Multivariate Failure Time Data via Regularized Sparse-Input Neural Network
by Bin Luo and Susan Halabi
Bioengineering 2025, 12(6), 596; https://doi.org/10.3390/bioengineering12060596 - 31 May 2025
Viewed by 835
Abstract
This study addresses the problem of simultaneous variable selection and model estimation in multivariate failure time data, a common challenge in clinical trials with multiple correlated time-to-event endpoints. We propose a unified framework that identifies predictors shared across outcomes, applicable to both low- and high-dimensional settings. For linear marginal hazard models, we develop a penalized pseudo-partial likelihood approach with a group LASSO-type penalty applied to the ℓ2 norms of coefficients corresponding to the same covariates across marginal hazard functions. To capture potential nonlinear effects, we further extend the approach to a sparse-input neural network model with structured group penalties on input-layer weights. Both methods are optimized using a composite gradient descent algorithm combining standard gradient steps with proximal updates. Simulation studies demonstrate that the proposed methods yield superior variable selection and predictive performance compared to traditional and outcome-specific approaches, while remaining robust to violations of the common predictor assumption. In an application to advanced prostate cancer data, the framework identifies both established clinical factors and potentially novel prognostic single-nucleotide polymorphisms for overall and progression-free survival. This work provides a flexible and robust tool for analyzing complex multivariate survival data, with potential utility in prognostic modeling and personalized medicine. Full article
(This article belongs to the Section Biosignal Processing)
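
The proximal update at the heart of such composite gradient algorithms is group soft-thresholding; a minimal self-contained sketch is given below (the toy group structure, step size, and names are illustrative assumptions, not the authors' code).

```python
import numpy as np

def group_soft_threshold(w, lam):
    """Shrink a coefficient group w toward zero; returns 0 when ||w||_2 <= lam."""
    norm = np.linalg.norm(w)
    if norm <= lam:
        return np.zeros_like(w)
    return (1.0 - lam / norm) * w

def proximal_step(weights, grad, step, lam, groups):
    """One composite step: gradient move on the smooth loss, then group-wise prox."""
    z = weights - step * grad
    out = np.empty_like(z)
    for idx in groups:                       # each idx is an array of coefficient positions
        out[idx] = group_soft_threshold(z[idx], step * lam)
    return out

# toy usage: 6 coefficients in 3 groups of 2
w = np.array([0.9, -0.4, 0.05, 0.02, 1.5, -1.2])
g = np.array([0.1, 0.0, 0.0, 0.0, -0.2, 0.1])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(proximal_step(w, g, step=0.5, lam=0.3, groups=groups))
```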

22 pages, 1752 KB  
Article
Bias Reduction of Modified Maximum Likelihood Estimates for a Three-Parameter Weibull Distribution
by Adriana da Silva, Felipe Quintino, Frederico Almeida and Dióscoros Aguiar
Entropy 2025, 27(5), 485; https://doi.org/10.3390/e27050485 - 30 Apr 2025
Cited by 1 | Viewed by 779
Abstract
In this work, we investigate the parameter estimation problem based on the three-parameter Weibull models, for which non-finite estimates may be obtained for the log-likelihood function in some regions of the parametric space. Based on an information criterion with penalization of the modified log-likelihood function, we propose a new class of estimators for this distribution model. In addition to providing finite estimates for the model parameters, this procedure reduces the bias of the modified estimator. The performance of the new estimator is evaluated through simulations and real-life data set modeling. An economic application on a real data set is discussed, as well as an engineering one. Full article
(This article belongs to the Special Issue Information-Theoretic Criteria for Statistical Model Selection)
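
To see why penalization is needed, recall the three-parameter Weibull log-likelihood (standard parametrization assumed here):
\[
\ell(\mu,\sigma,k)=\sum_{i=1}^{n}\Bigl[\log k - k\log\sigma + (k-1)\log(x_i-\mu) - \Bigl(\tfrac{x_i-\mu}{\sigma}\Bigr)^{k}\Bigr],
\qquad x_i>\mu .
\]
When \(k<1\), the term \((k-1)\log(x_i-\mu)\) for the smallest observation diverges to \(+\infty\) as \(\mu\to\min_i x_i\), so unpenalized maximization can return non-finite or degenerate estimates; a penalty on the (modified) log-likelihood keeps the maximizer in the interior of the parameter space.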

26 pages, 387 KB  
Article
Identification and Empirical Likelihood Inference in Nonlinear Regression Model with Nonignorable Nonresponse
by Xianwen Ding and Xiaoxia Li
Mathematics 2025, 13(9), 1388; https://doi.org/10.3390/math13091388 - 24 Apr 2025
Viewed by 445
Abstract
The identification of model parameters is a central challenge in the analysis of nonignorable nonresponse data. In this paper, we propose a novel penalized semiparametric likelihood method to obtain sparse estimators for a parametric nonresponse mechanism model. Based on these sparse estimators, an instrumental variable is introduced, enabling the identification of the observed likelihood. Two classes of estimating equations for the nonlinear regression model are constructed, and the empirical likelihood approach is employed to make inferences about the model parameters. The oracle properties of the sparse estimators in the nonresponse mechanism model are systematically established. Furthermore, the asymptotic normality of the maximum empirical likelihood estimators is derived. It is also shown that the empirical log-likelihood ratio functions are asymptotically weighted chi-squared distributed. Simulation studies are conducted to validate the effectiveness of the proposed estimation procedure. Finally, the practical utility of our approach is demonstrated through the analysis of ACTG 175 data. Full article
(This article belongs to the Special Issue Modeling, Control and Optimization of Biological Systems)
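
For orientation, the profile empirical likelihood ratio for a parameter \(\theta\) defined through estimating equations \(\mathrm{E}[g(Z;\theta)]=0\) takes the familiar form (generic notation, assumed here rather than quoted from the paper):
\[
R(\theta)=\max\Bigl\{\prod_{i=1}^{n} n p_i \;:\; p_i\ge 0,\ \sum_{i=1}^{n}p_i=1,\ \sum_{i=1}^{n}p_i\,g(Z_i;\theta)=0\Bigr\},
\]
and \(-2\log R(\theta)\) is asymptotically (weighted) chi-squared, which is the property exploited for inference in this setting.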

29 pages, 4082 KB  
Article
A Two-Stage Estimation Approach to Cox Regression Under the Five-Parameter Spline Model
by Ren Teranishi, Kyoji Furukawa and Takeshi Emura
Mathematics 2025, 13(4), 616; https://doi.org/10.3390/math13040616 - 13 Feb 2025
Viewed by 1017
Abstract
The Cox proportional hazards model is one of the most popular regression models for censored survival data. In the Cox model, the baseline hazard function is often modeled by cubic spline functions. However, the penalized likelihood estimation for fitting cubic spline models is computationally challenging. In this paper, we propose a computationally simple approach to implement the cubic spline model without penalizing the likelihood. The proposed method consists of two stages under the five-parameter spline model. The first stage estimates a scale parameter for a given shape model. The second stage adopts a model selection from 13 candidate shape models. We implement the proposed methods in our new R package “splineCox” (version 0.0.3) and it has been made available in CRAN. We conduct simulation studies to assess the performance of the proposed method. To illustrate the advantage of the proposed model, we analyze a life test dataset on electrical insulations and a gene expression dataset on lung cancer patients. Full article
(This article belongs to the Section D1: Probability and Statistics)
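
The model class can be summarized as the usual Cox hazard with a low-dimensional spline baseline (generic basis notation assumed; the paper's five-parameter basis and 13 candidate shapes are particular choices within this form):
\[
h(t\mid x)=h_0(t)\exp(x^{\top}\beta),\qquad
h_0(t)=\sum_{j=1}^{5}\gamma_j\,M_j(t),\quad \gamma_j\ge 0 .
\]
The first stage then fixes the shape (the relative pattern of the \(\gamma_j\)) and estimates a scale, and the second stage selects among the candidate shapes by comparing the fitted likelihoods, avoiding penalized estimation altogether.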

17 pages, 348 KB  
Article
Maximum Penalized-Likelihood Structured Covariance Estimation for Imaging Extended Objects, with Application to Radio Astronomy
by Aaron Lanterman
Stats 2024, 7(4), 1496-1512; https://doi.org/10.3390/stats7040088 - 17 Dec 2024
Viewed by 1518
Abstract
Image formation in radio astronomy is often posed as a problem of constructing a nonnegative function from sparse samples of its Fourier transform. We explore an alternative approach that reformulates the problem in terms of estimating the entries of a diagonal covariance matrix from Gaussian data. Maximum-likelihood estimates of the covariance cannot be readily computed analytically; hence, we investigate an iterative algorithm originally proposed by Snyder, O’Sullivan, and Miller in the context of radar imaging. The resulting maximum-likelihood estimates tend to be unacceptably rough due to the ill-posed nature of the maximum-likelihood estimation of functions from limited data, so some kind of regularization is needed. We explore penalized likelihoods based on entropy functionals, a roughness penalty proposed by Silverman, and an information-theoretic formulation of Good’s roughness penalty crafted by O’Sullivan. We also investigate algorithm variations that perform a generic smoothing step at each iteration. The results illustrate that tuning parameters allow for a tradeoff between the noise and blurriness of the reconstruction. Full article
(This article belongs to the Section Computational Statistics)
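
In generic form (the measurement structure and notation below are assumptions, not quoted from the paper), the task is penalized maximum likelihood for the diagonal entries \(\lambda\) of a structured covariance from zero-mean Gaussian snapshots \(y_1,\dots,y_M\):
\[
\hat{\lambda}=\arg\max_{\lambda\ge 0}\;\Bigl[-\sum_{m=1}^{M}\bigl(\log\det R(\lambda)+y_m^{\mathsf H}R(\lambda)^{-1}y_m\bigr)\Bigr]-\alpha\,\Phi(\lambda),
\qquad
R(\lambda)=A\,\mathrm{diag}(\lambda)\,A^{\mathsf H}+\sigma^2 I,
\]
where \(\Phi\) is one of the roughness penalties discussed (entropy-based, Silverman's, or Good's) and \(\alpha\) trades off fidelity against smoothness of the reconstruction.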

27 pages, 1961 KB  
Article
Pspatreg: R Package for Semiparametric Spatial Autoregressive Models
by Román Mínguez, Roberto Basile and María Durbán
Mathematics 2024, 12(22), 3598; https://doi.org/10.3390/math12223598 - 17 Nov 2024
Cited by 2 | Viewed by 2018
Abstract
This article introduces the R package pspatreg, which is publicly available for download from the Comprehensive R Archive Network, for estimating semiparametric spatial econometric penalized spline (P-Spline) models. These models can incorporate a nonparametric spatiotemporal trend, a spatial lag of the dependent variable, independent variables, noise, and time-series autoregressive noise. The primary functions in this package cover the estimation of P-Spline spatial econometric models using either Restricted Maximum Likelihood (REML) or Maximum Likelihood (ML) methods, as well as the computation of marginal impacts for both parametric and nonparametric terms. Additionally, the package offers methods for the graphical display of estimated nonlinear functions and spatial or spatiotemporal trend maps. Applications to cross-sectional and panel spatial data are provided to illustrate the package’s functionality. Full article
(This article belongs to the Special Issue Nonparametric Regression Models: Theory and Applications)
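
The class of models handled by the package can be written compactly (generic notation assumed here) as a spatial-lag model with a P-spline spatiotemporal trend:
\[
y=\rho\,W y + X\beta + f(s_1,s_2,t) + \varepsilon,
\]
where \(W\) is the spatial weights matrix, \(\rho\) the spatial autoregressive parameter, \(f(\cdot)\) a penalized-spline smooth of the spatial coordinates and, optionally, time, and \(\varepsilon\) a noise term that may itself follow a time-series autoregressive process.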

19 pages, 1058 KB  
Article
Maximum Penalized Likelihood Estimation of the Skew–t Link Model for Binomial Response Data
by Omar Chocotea-Poca, Orietta Nicolis and Germán Ibacache-Pulgar
Axioms 2024, 13(11), 749; https://doi.org/10.3390/axioms13110749 - 30 Oct 2024
Viewed by 1327
Abstract
A critical aspect of modeling binomial response data is selecting an appropriate link function, as an improper choice can significantly affect model precision. This paper introduces the skew–t link model, an extension of the skew–probit model, offering increased flexibility by incorporating both asymmetry and heavy tails, making it suitable for asymmetric and complex data structures. A penalized likelihood-based estimation method is proposed to stabilize parameter estimation, particularly for the asymmetry parameter. Extensive simulation studies demonstrate the model’s superior performance in terms of lower bias, root mean squared error (RMSE), and robustness compared to traditional symmetric models like probit and logit. Furthermore, the model is applied to two real-world datasets: one concerning women’s labor participation and another related to cardiovascular disease outcomes, both showing superior fitting capabilities compared to more traditional models (with probit and the skew–probit links). These findings highlight the model’s applicability to socioeconomic and medical research, characterized by skew and asymmetric data. Moreover, the proposed model could be applied in various domains where data exhibit asymmetry and complex structures. Full article
(This article belongs to the Special Issue Advances in Statistical Simulation and Computing)
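
Schematically (notation assumed here), the model replaces the probit/logit link with the distribution function of a skew–t variable and stabilizes the asymmetry parameter through a penalty:
\[
\Pr(y_i=1\mid x_i)=F_{\mathrm{ST}}(x_i^{\top}\beta;\,\lambda,\nu),
\qquad
\ell_p(\beta,\lambda,\nu)=\ell(\beta,\lambda,\nu)-J(\lambda),
\]
where \(F_{\mathrm{ST}}\) is the skew–t CDF with asymmetry \(\lambda\) and degrees of freedom \(\nu\), \(\ell\) is the binomial log-likelihood, and \(J(\lambda)\) is the penalty introduced so that the estimate of \(\lambda\) remains well behaved.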

23 pages, 426 KB  
Article
A Penalized Empirical Likelihood Approach for Estimating Population Sizes under the Negative Binomial Regression Model
by Yulu Ji and Yang Liu
Mathematics 2024, 12(17), 2674; https://doi.org/10.3390/math12172674 - 28 Aug 2024
Viewed by 1446
Abstract
In capture–recapture experiments, the presence of overdispersion and heterogeneity necessitates the use of the negative binomial regression model for inferring population sizes. However, within this model, existing methods based on likelihood and ratio regression for estimating the dispersion parameter often face boundary and nonidentifiability issues. These problems can result in nonsensically large point estimates and unbounded upper limits of confidence intervals for the population size. We present a penalized empirical likelihood technique for solving these two problems by imposing a half-normal prior on the population size. Based on the proposed approach, a maximum penalized empirical likelihood estimator with asymptotic normality and a penalized empirical likelihood ratio statistic with asymptotic chi-square distribution are derived. To improve numerical performance, we present an effective expectation-maximization (EM) algorithm. In the M-step, optimization for the model parameters could be achieved by fitting a standard negative binomial regression model via the R basic function glm.nb(). This approach ensures the convergence and reliability of the numerical algorithm. Using simulations, we analyze several synthetic datasets to illustrate three advantages of our methods in finite-sample cases: complete mitigation of the boundary problem, more efficient maximum penalized empirical likelihood estimates, and more precise penalized empirical likelihood ratio interval estimates compared to the estimates obtained without penalty. These advantages are further demonstrated in a case study estimating the abundance of black bears (Ursus americanus) at the U.S. Army’s Fort Drum Military Installation in northern New York. Full article
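
In outline (notation assumed; the paper's penalty may be parameterized differently), the approach maximizes an empirical log-likelihood in the abundance \(N\) plus a half-normal log-prior that rules out unbounded estimates:
\[
\ell_p(N,\beta,\phi)=\ell_{\mathrm{EL}}(N,\beta,\phi)+\log\pi(N),
\qquad
\pi(N)\propto\exp\!\Bigl(-\frac{N^{2}}{2\sigma^{2}}\Bigr),\ N>0,
\]
so the penalty term pulls the profile log-likelihood down for very large \(N\), mitigating the boundary and nonidentifiability problems while leaving moderate values of \(N\) essentially unaffected.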

20 pages, 866 KB  
Article
Local Influence for the Thin-Plate Spline Generalized Linear Model
by Germán Ibacache-Pulgar, Pablo Pacheco, Orietta Nicolis and Miguel Angel Uribe-Opazo
Axioms 2024, 13(6), 346; https://doi.org/10.3390/axioms13060346 - 23 May 2024
Cited by 1 | Viewed by 1293
Abstract
Thin-Plate Spline Generalized Linear Models (TPS-GLMs) are an extension of Semiparametric Generalized Linear Models (SGLMs), because they allow a smoothing spline to be extended to two or more dimensions. This class of models allows modeling a set of data in which it is desired to incorporate the non-linear joint effects of some covariates to explain the variability of a certain variable of interest. In the spatial context, these models are quite useful, since they allow the effects of locations to be included, both in trend and dispersion, using a smooth surface. In this work, we extend the local influence technique for the TPS-GLM model in order to evaluate the sensitivity of the maximum penalized likelihood estimators against small perturbations in the model and data. We fit our model through a joint iterative process based on Fisher Scoring and weighted backfitting algorithms. In addition, we obtained the normal curvature for the case-weight perturbation and response variable additive perturbation schemes, in order to detect influential observations on the model fit. Finally, two data sets from different areas (agronomy and environment) were used to illustrate the methodology proposed here. Full article
(This article belongs to the Special Issue Mathematical Models and Simulations, 2nd Edition)
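
The estimators whose sensitivity is assessed are maximum penalized likelihood estimators of the form (generic notation assumed here):
\[
\ell_p(\beta,f)=\ell(\beta,f)-\frac{\alpha}{2}\,J(f),
\qquad
J(f)=\iint\bigl(f_{xx}^{2}+2f_{xy}^{2}+f_{yy}^{2}\bigr)\,dx\,dy,
\]
where \(\ell\) is the GLM log-likelihood, \(f\) the bivariate smooth surface, \(J\) the thin-plate roughness functional, and \(\alpha\) the smoothing parameter; local influence then examines how small perturbations of case weights or responses move the maximizer of \(\ell_p\).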

21 pages, 1457 KB  
Article
Variable Selection for Sparse Logistic Regression with Grouped Variables
by Mingrui Zhong, Zanhua Yin and Zhichao Wang
Mathematics 2023, 11(24), 4979; https://doi.org/10.3390/math11244979 - 17 Dec 2023
Viewed by 1953
Abstract
We present a new penalized method for estimation in sparse logistic regression models with a group structure. Group sparsity implies that we should consider the Group Lasso penalty. In contrast to penalized log-likelihood estimation, our method can be viewed as a penalized weighted score function method. Under some mild conditions, we provide non-asymptotic oracle inequalities promoting the group sparsity of predictors. A modified block coordinate descent algorithm based on a weighted score function is also employed. The net advantage of our algorithm over existing Group Lasso-type procedures is that the tuning parameter can be pre-specified. The simulations show that this algorithm is considerably faster and more stable than competing methods. Finally, we illustrate our methodology with two real data sets. Full article
(This article belongs to the Section D1: Probability and Statistics)
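
For context, the group-sparse logistic regression objective such methods target is (standard notation, assumed here):
\[
\min_{\beta}\ -\frac{1}{n}\sum_{i=1}^{n}\Bigl[y_i\,x_i^{\top}\beta-\log\bigl(1+e^{x_i^{\top}\beta}\bigr)\Bigr]
+\lambda\sum_{g=1}^{G}w_g\,\lVert\beta_g\rVert_2,
\]
with coefficients partitioned into groups \(\beta_1,\dots,\beta_G\) and group weights \(w_g\) (often the square root of the group size); the contribution here is to replace the log-likelihood term with a weighted score-function criterion for which the tuning parameter \(\lambda\) can be fixed in advance.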

20 pages, 362 KB  
Article
Variable Selection for Length-Biased and Interval-Censored Failure Time Data
by Fan Feng, Guanghui Cheng and Jianguo Sun
Mathematics 2023, 11(22), 4576; https://doi.org/10.3390/math11224576 - 8 Nov 2023
Cited by 1 | Viewed by 1525
Abstract
Length-biased failure time data occur often in various biomedical fields, including clinical trials, epidemiological cohort studies and genome-wide association studies, and their analyses have been attracting a surge of interest. In practical applications, because one may collect a large number of candidate covariates for the failure event of interest, variable selection becomes a useful tool to identify the important risk factors and enhance the estimation accuracy. In this paper, we consider Cox’s proportional hazards model and develop a penalized variable selection technique with various popular penalty functions for length-biased data, in which the failure event of interest suffers from interval censoring. Specifically, a computationally stable and reliable penalized expectation-maximization algorithm via two-stage data augmentation is developed to overcome the challenge in maximizing the intractable penalized likelihood. We establish the oracle property of the proposed method and present some simulation results, suggesting that the proposed method outperforms the traditional variable selection method based on the conditional likelihood. The proposed method is then applied to a set of real data arising from the Prostate, Lung, Colorectal and Ovarian cancer screening trial. The analysis results show that African Americans and having immediate family members with prostate cancer significantly increase the risk of developing prostate cancer, while having diabetes exhibited a significantly lower risk of developing prostate cancer. Full article
(This article belongs to the Section D1: Probability and Statistics)
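
Among the popular penalty functions such methods typically consider, the SCAD penalty is a representative example; it is defined through its derivative (standard form, assumed here):
\[
p'_{\lambda}(t)=\lambda\Bigl\{I(t\le\lambda)+\frac{(a\lambda-t)_{+}}{(a-1)\lambda}\,I(t>\lambda)\Bigr\},
\qquad t\ge 0,\ a>2\ (\text{commonly }a=3.7),
\]
which penalizes small coefficients like the LASSO but levels off for large ones, so large effects are not over-shrunk and the resulting estimator can enjoy the oracle property established in the paper.
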
29 pages, 2475 KB  
Article
Consistent Model Selection Procedure for Random Coefficient INAR Models
by Kaizhi Yu and Tielai Tao
Entropy 2023, 25(8), 1220; https://doi.org/10.3390/e25081220 - 16 Aug 2023
Cited by 1 | Viewed by 1626
Abstract
In the realm of time series data analysis, information criteria constructed on the basis of likelihood functions serve as crucial instruments for determining the appropriate lag order. However, the intricate structure of random coefficient integer-valued time series models, which are founded on thinning operators, complicates the establishment of likelihood functions. Consequently, employing information criteria such as AIC and BIC for model selection becomes problematic. This study introduces an innovative methodology that formulates a penalized criterion by utilizing the estimation equation within conditional least squares estimation, effectively addressing the aforementioned challenge. Initially, the asymptotic properties of the penalized criterion are derived, followed by a numerical simulation study and a comparative analysis. The findings from both theoretical examinations and simulation investigations reveal that this novel approach consistently selects variables under relatively relaxed conditions. Lastly, the applications of this method to infectious disease data and seismic frequency data produce satisfactory outcomes. Full article
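
As a sketch of the idea (notation assumed; the paper's criterion differs in its details), for a random-coefficient INAR(1) model the conditional mean and the conditional least squares criterion are available even though the full likelihood is intractable:
\[
\mathrm{E}[X_t\mid X_{t-1}]=\phi\,X_{t-1}+\mu_{\varepsilon},
\qquad
Q_n(\theta)=\sum_{t=2}^{n}\bigl(X_t-\phi X_{t-1}-\mu_{\varepsilon}\bigr)^{2},
\]
and the proposed procedure builds a penalized selection criterion from the estimating equations of \(Q_n\) rather than from a likelihood, playing the role that AIC or BIC play when a likelihood can be written down.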