Search Results (8)

Search Parameters:
Keywords = minimax concave penalty (MCP)

26 pages, 6617 KiB  
Article
Penalty Strategies in Semiparametric Regression Models
by Ayuba Jack Alhassan, S. Ejaz Ahmed, Dursun Aydin and Ersin Yilmaz
Math. Comput. Appl. 2025, 30(3), 54; https://doi.org/10.3390/mca30030054 - 12 May 2025
Viewed by 1091
Abstract
This study presents a comprehensive evaluation of six penalty estimation strategies for partially linear regression models (PLRMs), focusing on their performance in the presence of multicollinearity and their ability to handle both parametric and nonparametric components. The methods under consideration include Ridge regression, Lasso, Adaptive Lasso (aLasso), smoothly clipped absolute deviation (SCAD), ElasticNet, and the minimax concave penalty (MCP). In addition to these established methods, we also incorporate Stein-type shrinkage estimation techniques, namely the standard and positive shrinkage estimators, and assess their effectiveness in this context. To estimate the PLRMs, we consider a kernel smoothing technique grounded in penalized least squares. Our investigation involves a theoretical analysis of the estimators’ asymptotic properties and a detailed simulation study designed to compare their performance under a variety of conditions, including different sample sizes, numbers of predictors, and levels of multicollinearity. The simulation results reveal that aLasso and the shrinkage estimators, particularly the positive shrinkage estimator, consistently outperform the other methods in terms of mean squared error (MSE) relative efficiencies (REs), especially when the sample size is small and multicollinearity is high. Furthermore, we present a real-data analysis using the Hitters dataset to demonstrate the applicability of these methods in a practical setting. The results of the real-data analysis align with the simulation findings, highlighting the superior predictive accuracy of aLasso and the shrinkage estimators in the presence of multicollinearity. The findings of this study offer valuable insights into the strengths and limitations of these penalty and shrinkage strategies, guiding their application in future research and practice involving semiparametric regression.
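
Since every result in this listing turns on the MCP, the standard definition (Zhang, 2010) is worth recalling. For a coefficient t, regularization level λ > 0, and concavity parameter γ > 1, the penalty is

```latex
p_{\lambda,\gamma}(t) =
\begin{cases}
  \lambda\lvert t\rvert - \dfrac{t^{2}}{2\gamma}, & \lvert t\rvert \le \gamma\lambda,\\[4pt]
  \dfrac{\gamma\lambda^{2}}{2},                   & \lvert t\rvert > \gamma\lambda.
\end{cases}
```

As γ → ∞, the MCP approaches the Lasso penalty λ|t|; small γ moves it toward hard thresholding, which is how the MCP avoids the Lasso's bias on large coefficients.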

10 pages, 479 KiB  
Article
The Capped Separable Difference of Two Norms for Signal Recovery
by Zhiyong Zhou and Gui Wang
Mathematics 2024, 12(23), 3717; https://doi.org/10.3390/math12233717 - 27 Nov 2024
Viewed by 618
Abstract
This paper introduces a novel capped separable difference of two norms (CSDTN) method for sparse signal recovery, which generalizes the well-known minimax concave penalty (MCP) method. The CSDTN method incorporates two shape parameters and one scale parameter, with their appropriate selection being crucial for ensuring robustness and achieving superior reconstruction performance. We provide a detailed theoretical analysis of the method and propose an efficient iteratively reweighted ℓ1 (IRL1)-based algorithm for solving the corresponding optimization problem. Extensive numerical experiments, including electrocardiogram (ECG) and synthetic signal recovery tasks, demonstrate the effectiveness of the proposed CSDTN method. Our method outperforms existing methods in terms of recovery accuracy and robustness. These results highlight the potential of CSDTN in various signal-processing applications.
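
The IRL1 scheme is easy to sketch apart from the CSDTN specifics: each outer iteration solves a weighted Lasso whose weights come from the penalty's derivative at the current iterate. Below is a minimal sketch for the plain MCP, not the authors' CSDTN algorithm; all function names and parameter values are illustrative.

```python
import numpy as np

def mcp_weight(t, lam, gamma):
    # MCP derivative: lam - t/gamma on [0, gamma*lam], zero beyond it.
    return np.maximum(lam - t / gamma, 0.0)

def weighted_lasso_ista(A, b, w, x0, n_iter=200):
    # Minimize 0.5*||Ax - b||^2 + sum_i w_i*|x_i| by proximal gradient (ISTA).
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = x0.copy()
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)  # soft threshold
    return x

def irl1_mcp(A, b, lam=0.05, gamma=3.0, n_outer=10):
    # IRL1: reweight the l1 problem with the MCP derivative and re-solve.
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        w = mcp_weight(np.abs(x), lam, gamma)
        x = weighted_lasso_ista(A, b, w, x)
    return x

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[:5] = [1.0, -2.0, 1.5, -1.0, 0.8]
b = A @ x_true + 0.01 * rng.normal(size=80)
print("support:", np.flatnonzero(np.abs(irl1_mcp(A, b)) > 1e-3))
```

Starting from x = 0 makes the first pass an ordinary Lasso; later passes shrink large coefficients less, which is the MCP's debiasing effect.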

26 pages, 9869 KiB  
Article
Pruning Deep Neural Network Models via Minimax Concave Penalty Regression
by Xinggu Liu, Lin Zhou and Youxi Luo
Appl. Sci. 2024, 14(9), 3669; https://doi.org/10.3390/app14093669 - 25 Apr 2024
Cited by 2 | Viewed by 1688
Abstract
In this study, we propose a filter pruning method based on minimax concave penalty (MCP) regression. The convolutional process is conceptualized as a linear regression procedure, and the regression coefficients serve as indicators to assess the redundancy of channels. In feature selection, sparse penalized regression with nonconvex penalties has gradually come to outperform Lasso regression. Building on this insight, MCP regression is introduced to screen convolutional channels and, coupled with the coordinate descent method, to achieve model compression. In single-layer pruning and global pruning analyses, the Top1 loss value associated with the MCP regression compression method is consistently smaller than that of the Lasso regression compression method across diverse models. Specifically, when the global pruning ratio is set to 0.3, the Top1 accuracy of the MCP regression compression method, in comparison with that of the Lasso regression compression method, exhibits improvements of 0.21% and 1.67% under the VGG19_Simple and VGG19 models, respectively. Similarly, for ResNet34, at two distinct pruning ratios, the Top1 accuracy demonstrates enhancements of 0.33% and 0.26%. Lastly, we compare and discuss the methods introduced in this study in terms of both time and space resource consumption.
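
The regression screen the abstract describes has a well-known closed form: with standardized columns, the MCP coordinate-descent update is the "firm thresholding" operator (Breheny and Huang, 2011). The sketch below shows that generic update, not the authors' full pruning pipeline; variable names are illustrative.

```python
import numpy as np

def firm_threshold(z, lam, gamma):
    # Closed-form MCP coordinate update for a unit-scaled column (gamma > 1).
    az = abs(z)
    if az <= lam:
        return 0.0
    if az <= gamma * lam:
        return np.sign(z) * (az - lam) / (1.0 - 1.0 / gamma)
    return z

def mcp_coordinate_descent(X, y, lam=0.1, gamma=3.0, n_sweeps=100):
    # Coordinate descent for (1/2n)*||y - X @ beta||^2 + sum_j MCP(beta_j),
    # assuming columns of X are centered and scaled to norm sqrt(n).
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()               # residual for beta = 0
    for _ in range(n_sweeps):
        for j in range(p):
            zj = X[:, j] @ r / n + beta[j]   # partial-residual correlation
            new = firm_threshold(zj, lam, gamma)
            if new != beta[j]:
                r -= X[:, j] * (new - beta[j])
                beta[j] = new
    return beta  # zeroed entries mark channels a pruner could drop
```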

15 pages, 1248 KiB  
Article
Automatic Variable Selection Algorithms in Prognostic Factor Research in Neck Pain
by Bernard X. W. Liew, Francisco M. Kovacs, David Rügamer and Ana Royuela
J. Clin. Med. 2023, 12(19), 6232; https://doi.org/10.3390/jcm12196232 - 27 Sep 2023
Cited by 1 | Viewed by 1500
Abstract
This study aims to compare the variable selection strategies of different machine learning (ML) and statistical algorithms in the prognosis of neck pain (NP) recovery. A total of 3001 participants with NP were included. Three dichotomous outcomes of an improvement in NP, arm pain (AP), and disability at 3-month follow-up were used. Twenty-five variables (twenty-eight parameters) were included as predictors. There were more parameters than variables, as some categorical variables had >2 levels. Eight modelling techniques were compared: stepwise regression based on unadjusted p-values (stepP), on adjusted p-values (stepPAdj), and on the Akaike information criterion (stepAIC); best subset regression (BestSubset); least absolute shrinkage and selection operator (LASSO); minimax concave penalty (MCP); model-based boosting (mboost); and multivariate adaptive regression splines (MuARS). The algorithm that selected the fewest predictors was stepPAdj (number of predictors, p = 4 to 8). MuARS was the algorithm with the second fewest predictors selected (p = 9 to 14). The predictor selected by all algorithms with the largest coefficient magnitude was “having undergone a neuroreflexotherapy intervention” for NP (β = 1.987 to 2.296) and AP (β = 2.639 to 3.554), and “imaging findings: spinal stenosis” (β = −1.331 to −1.763) for disability. Stepwise regression based on adjusted p-values resulted in the sparsest models, which enhanced clinical interpretability. MuARS appeared to offer the best balance between model sparsity and predictive performance across outcomes. Different algorithms produced similar performances but selected different numbers of variables. Rather than relying on any single algorithm, confidence in the variable selection may be increased by using multiple algorithms.
(This article belongs to the Special Issue Clinical Advances in Musculoskeletal Disorders)
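
The closing advice of this abstract, cross-checking selection across algorithms, is straightforward to emulate. MCP and SCAD selectors have no standard scikit-learn implementation (R's ncvreg covers them), so this hypothetical sketch uses an ℓ1-penalized logistic regression as one of several selectors whose chosen predictor sets one would compare; the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 28 candidate parameters, one dichotomous outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 28))
y = (X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=300) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)

# L1-penalized logistic regression with cross-validated penalty strength.
sel = LogisticRegressionCV(Cs=20, penalty="l1", solver="liblinear", cv=5)
sel.fit(Xs, y)
print("predictors retained:", np.flatnonzero(sel.coef_.ravel() != 0.0))
```

Repeating this with other selectors (stepwise AIC, boosting, MARS) and intersecting the retained sets is the kind of agreement check the authors recommend.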

13 pages, 1884 KiB  
Article
Modeling the Cigarette Consumption of Poor Households Using Penalized Zero-Inflated Negative Binomial Regression with Minimax Concave Penalty
by Yudhie Andriyana, Rinda Fitriani, Bertho Tantular, Neneng Sunengsih, Kurnia Wahyudi, I Gede Nyoman Mindra Jaya and Annisa Nur Falah
Mathematics 2023, 11(14), 3192; https://doi.org/10.3390/math11143192 - 20 Jul 2023
Cited by 3 | Viewed by 1465
Abstract
Cigarettes are the second-largest contributor to the food poverty line, and poor people consume them despite having a minimal income. In this study, we investigate the factors that influence poor people to be active smokers. Since the consumption number is count data with excess zeros, we face an overdispersion problem, which means a standard Poisson regression technique cannot be applied. At the same time, the factors involved in the model need to be selected simultaneously. Therefore, we propose using zero-inflated negative binomial (ZINB) regression with a minimax concave penalty (MCP) to determine the dominant factors influencing cigarette consumption in poor households. The data used in this study were microdata from the National Socioeconomic Survey (SUSENAS) conducted in March 2019 in East Java Province, Indonesia. The results show that poor households with a male head of household, no education, informal-sector employment, many adult household members, and receipt of social assistance tend to consume more cigarettes than others. Additionally, cigarette consumption decreases as the age of the head of household increases.
(This article belongs to the Special Issue Statistical Modeling for Analyzing Data with Complex Structures)
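
An MCP-penalized ZINB has no off-the-shelf Python implementation (in R, the mpath package covers penalized zero-inflated models), but the unpenalized ZINB that the MCP term would be attached to is available in statsmodels. A minimal sketch on synthetic stand-in data, not the SUSENAS microdata:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Synthetic zero-heavy counts standing in for household cigarette consumption.
rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))
lam = np.exp(0.4 * X[:, 0] - 0.3 * X[:, 1] + 1.0)    # NB mean
y = rng.negative_binomial(2, 2.0 / (2.0 + lam))      # NB(2, p) with mean lam
y[rng.random(n) < 0.4] = 0                           # structural zeros

exog = sm.add_constant(X)
zinb = ZeroInflatedNegativeBinomialP(y, exog, exog_infl=exog, p=2)
res = zinb.fit(method="bfgs", maxiter=500, disp=False)
print(res.summary())
```

Selecting among many household covariates, as the paper does, would add the MCP term to this likelihood and optimize it with a coordinate-descent-style solver.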

22 pages, 354 KiB  
Article
Simultaneous Feature Selection and Classification for Data-Adaptive Kernel-Penalized SVM
by Xin Liu, Bangxin Zhao and Wenqing He
Mathematics 2020, 8(10), 1846; https://doi.org/10.3390/math8101846 - 20 Oct 2020
Cited by 6 | Viewed by 3064
Abstract
Simultaneous feature selection and classification have been explored in the literature as extensions of support vector machine (SVM) techniques that add penalty terms directly to the loss function. However, it is the kernel function that controls the performance of the SVM, and imbalanced data will degrade an SVM's performance. In this paper, we examine a new method of simultaneous feature selection and binary classification. Instead of penalizing the standard loss function of the SVM, a penalty is added directly to the data-adaptive kernel function to control the performance of the SVM, by first conformally transforming the kernel function of the SVM and then refitting an SVM classifier based on the sparse features selected. Both convex and non-convex penalties, such as the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP), are explored, and the oracle property of the estimator is established accordingly. An iterative optimization procedure is applied, as no analytic form of the estimated coefficients is available. Numerical comparisons show that the proposed method outperforms the competitors considered when data are imbalanced, and it performs similarly to the competitors when data are balanced. The method can be easily applied to medical images from different platforms.
(This article belongs to the Special Issue Advances in Machine Learning Prediction Models)
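
For the "simultaneous feature selection and classification" baseline that this paper builds on, an ℓ1-penalized linear SVM is the standard off-the-shelf version: the penalty zeroes uninformative weights while the classifier is trained. A minimal scikit-learn sketch on synthetic data; the paper's data-adaptive kernel penalization and its non-convex SCAD/MCP variants would need a custom solver.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler

# Synthetic, mildly imbalanced binary data.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 50))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=400) > 1.2).astype(int)

Xs = StandardScaler().fit_transform(X)

# l1 penalty performs the selection; class_weight counters the imbalance.
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                C=0.5, class_weight="balanced", max_iter=5000)
clf.fit(Xs, y)
kept = np.flatnonzero(clf.coef_.ravel() != 0.0)
print(f"{kept.size} of {X.shape[1]} features kept:", kept)
```
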
16 pages, 961 KiB  
Article
Regularization Methods Based on the Lq-Likelihood for Linear Models with Heavy-Tailed Errors
by Yoshihiro Hirose
Entropy 2020, 22(9), 1036; https://doi.org/10.3390/e22091036 - 16 Sep 2020
Cited by 1 | Viewed by 2876
Abstract
We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood using a power function. Regularization methods are popular for estimation in the normal linear model, but heavy-tailed errors are also important in statistics and machine learning. We assume q-normal distributions for the errors in linear models. A q-normal distribution is heavy-tailed and is defined using a power function rather than the exponential function. We find that the proposed methods for linear models with q-normal errors coincide with the ordinary regularization methods applied to the normal linear model. The proposed methods can be computed using existing packages because they are penalized least squares methods. We examine the proposed methods using numerical experiments, which show that they perform well even when the error is heavy-tailed. The numerical experiments also illustrate that our methods work well in model selection and generalization, especially when the error is slightly heavy-tailed.
(This article belongs to the Section Information Theory, Probability and Statistics)
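
In the usual formulation of the Lq-likelihood (Ferrari and Yang), the logarithm is replaced by the q-logarithm, a power function that recovers the ordinary log as q → 1; the penalized estimators the abstract describes then maximize it minus a regularizer p_λ (Lasso, SCAD, MCP, and so on):

```latex
\ln_q(u) = \frac{u^{1-q} - 1}{1 - q} \quad (q \neq 1), \qquad
\lim_{q \to 1} \ln_q(u) = \ln u,
\qquad
\hat{\theta} = \arg\max_{\theta} \sum_{i=1}^{n} \ln_q f(x_i; \theta)
             - \sum_{j} p_{\lambda}\left(\lvert \theta_j \rvert\right).
```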

23 pages, 633 KiB  
Article
Combination of Ensembles of Regularized Regression Models with Resampling-Based Lasso Feature Selection in High Dimensional Data
by Abhijeet R Patil and Sangjin Kim
Mathematics 2020, 8(1), 110; https://doi.org/10.3390/math8010110 - 10 Jan 2020
Cited by 24 | Viewed by 4945
Abstract
In high-dimensional data, the performance of various classifiers depends largely on the selection of important features. Most individual classifiers with existing feature selection (FS) methods do not perform well on highly correlated data. Obtaining important features with an FS method and selecting the best-performing classifier are challenging tasks with high-throughput data. In this article, we propose a combination of resampling-based least absolute shrinkage and selection operator (LASSO) feature selection (RLFS) and ensembles of regularized regression models (ERRM) capable of handling data with high correlation structures. The ERRM boosts prediction accuracy using the top-ranked features obtained from the RLFS. The RLFS utilizes the lasso penalty under a sure independence screening (SIS) condition to select the top k ranked features. The ERRM includes five individual penalty-based classifiers: LASSO, adaptive LASSO (ALASSO), elastic net (ENET), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP); it is built on the ideas of bagging and rank aggregation. Through simulation studies and an application to smokers’ cancer gene-expression data, we demonstrate that the proposed combination of ERRM with RLFS achieves superior accuracy and geometric mean.
(This article belongs to the Special Issue Uncertainty Quantification Techniques in Statistics)
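
The resampling-based ranking at the heart of RLFS can be sketched generically: refit the Lasso on bootstrap resamples and rank features by how often they receive a non-zero coefficient (stability-selection style). A hypothetical sketch on synthetic data; it omits the SIS screening condition and the five-classifier ensemble of the authors' full pipeline.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.utils import resample

# Synthetic high-dimensional stand-in for the gene-expression matrix.
rng = np.random.default_rng(3)
n, p = 120, 1000
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.array([2.0, -1.5, 1.0, -1.0, 0.5]) + rng.normal(size=n)

# Bootstrap the lasso and count how often each feature is selected.
B = 100
counts = np.zeros(p)
for b in range(B):
    Xb, yb = resample(X, y, random_state=b)
    counts += Lasso(alpha=0.2, max_iter=5000).fit(Xb, yb).coef_ != 0.0

top_k = np.argsort(counts)[::-1][:20]   # top-ranked features for the ensemble
print("selection frequency of top features:", counts[top_k] / B)
```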
