Search Results (39)

Search Parameters:
Keywords = Stein’s method

23 pages, 741 KiB  
Article
Empirical Bayes Estimators for Mean Parameter of Exponential Distribution with Conjugate Inverse Gamma Prior Under Stein’s Loss
by Zheng Li, Ying-Ying Zhang and Ya-Guang Shi
Mathematics 2025, 13(10), 1658; https://doi.org/10.3390/math13101658 - 19 May 2025
Viewed by 241
Abstract
A Bayes estimator for a mean parameter of an exponential distribution is calculated using Stein’s loss, which equally penalizes gross overestimation and underestimation. A corresponding Posterior Expected Stein’s Loss (PESL) is also determined. Additionally, a Bayes estimator for a mean parameter is obtained under a squared error loss along with its corresponding PESL. Furthermore, two methods are used to derive empirical Bayes estimators for the mean parameter of the exponential distribution with an inverse gamma prior. Numerical simulations are conducted to illustrate five aspects. Finally, theoretical studies are illustrated using Static Fatigue 90% Stress Level data. Full article
(This article belongs to the Special Issue Bayesian Statistical Analysis of Big Data and Complex Data)
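
The closed forms behind this conjugate setup are compact enough to sketch. The snippet below is a minimal illustration (not the authors' code), assuming an exponential likelihood with mean θ and an inverse-gamma IG(α, β) prior: the posterior is IG(α + n, β + Σxᵢ), the Bayes estimator under Stein's loss is 1/E[θ⁻¹ | x] = (β + Σxᵢ)/(α + n), and the Bayes estimator under squared error loss is the posterior mean (β + Σxᵢ)/(α + n − 1). Hyperparameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: exponential data with mean theta, conjugate IG(alpha, beta) prior.
alpha, beta, theta_true, n = 3.0, 4.0, 2.0, 50
x = rng.exponential(scale=theta_true, size=n)

# Posterior is inverse-gamma with updated shape and scale.
a_post = alpha + n
b_post = beta + x.sum()

# Bayes estimator under Stein's loss: 1 / E[1/theta | x].
theta_stein = b_post / a_post
# Bayes estimator under squared error loss: posterior mean.
theta_se = b_post / (a_post - 1)

print(f"Stein's-loss estimator: {theta_stein:.3f}")
print(f"Squared-error estimator: {theta_se:.3f}")
```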

26 pages, 6617 KiB  
Article
Penalty Strategies in Semiparametric Regression Models
by Ayuba Jack Alhassan, S. Ejaz Ahmed, Dursun Aydin and Ersin Yilmaz
Math. Comput. Appl. 2025, 30(3), 54; https://doi.org/10.3390/mca30030054 - 12 May 2025
Viewed by 1016
Abstract
This study includes a comprehensive evaluation of six penalty estimation strategies for partially linear regression models (PLRMs), focusing on their performance in the presence of multicollinearity and their ability to handle both parametric and nonparametric components. The methods under consideration include Ridge regression, Lasso, Adaptive Lasso (aLasso), smoothly clipped absolute deviation (SCAD), ElasticNet, and minimax concave penalty (MCP). In addition to these established methods, we also incorporate Stein-type shrinkage estimation techniques, namely standard and positive shrinkage, and assess their effectiveness in this context. To estimate the PLRMs, we consider a kernel smoothing technique grounded in penalized least squares. Our investigation involves a theoretical analysis of the estimators’ asymptotic properties and a detailed simulation study designed to compare their performance under a variety of conditions, including different sample sizes, numbers of predictors, and levels of multicollinearity. The simulation results reveal that aLasso and the shrinkage estimators, particularly the positive shrinkage estimator, consistently outperform the other methods in terms of relative efficiency (RE) based on Mean Squared Error (MSE), especially when the sample size is small and multicollinearity is high. Furthermore, we present a real data analysis using the Hitters dataset to demonstrate the applicability of these methods in a practical setting. The results of the real data analysis align with the simulation findings, highlighting the superior predictive accuracy of aLasso and the shrinkage estimators in the presence of multicollinearity. The findings of this study offer valuable insights into the strengths and limitations of these penalty and shrinkage strategies, guiding their application in future research and practice involving semiparametric regression. Full article
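
As a rough illustration of the Stein-type shrinkage idea mentioned above, the sketch below applies a positive-part shrinkage rule to a plain linear model rather than the paper's partially linear setting; the Wald statistic and the (k2 − 2) factor follow the generic textbook form and are assumptions, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model y = X beta + eps; the last k2 coefficients are suspected to be zero.
n, k1, k2 = 100, 3, 4
X = rng.normal(size=(n, k1 + k2))
beta_true = np.r_[np.ones(k1), np.zeros(k2)]
y = X @ beta_true + rng.normal(size=n)

# Full (unrestricted) and restricted least-squares estimates.
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
beta_sub = np.zeros(k1 + k2)
beta_sub[:k1] = np.linalg.lstsq(X[:, :k1], y, rcond=None)[0]

# Wald-type statistic for the suspected-zero block (assumed generic form).
resid = y - X @ beta_full
sigma2 = resid @ resid / (n - k1 - k2)
XtX_inv = np.linalg.inv(X.T @ X)
d = beta_full[k1:]
T = d @ np.linalg.solve(XtX_inv[k1:, k1:], d) / sigma2

# Positive-part Stein-type shrinkage: pull the full estimate toward the submodel.
shrink = max(0.0, 1.0 - (k2 - 2) / T)
beta_ps = beta_sub + shrink * (beta_full - beta_sub)
print(np.round(beta_ps, 3))
```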

21 pages, 1101 KiB  
Article
On Data-Enriched Logistic Regression
by Cheng Zheng, Sayan Dasgupta, Yuxiang Xie, Asad Haris and Ying-Qing Chen
Mathematics 2025, 13(3), 441; https://doi.org/10.3390/math13030441 - 28 Jan 2025
Viewed by 721
Abstract
Biomedical researchers typically investigate the effects of specific exposures on disease risks within a well-defined population. The gold standard for such studies is to design a trial with an appropriately sampled cohort. However, due to the high cost of such trials, the collected sample sizes are often limited, making it difficult to accurately estimate the effects of certain exposures. In this paper, we discuss how to leverage the information from external “big data” (datasets with significantly larger sample sizes) to improve the estimation accuracy at the risk of introducing a small amount of bias. We propose a family of weighted estimators to balance the bias increase and variance reduction when incorporating the big data. We establish a connection between our proposed estimator and the well-known penalized regression estimators. We derive optimal weights using both second-order and higher-order asymptotic expansions. Through extensive simulation studies, we demonstrate that the improvement in mean squared error (MSE) for the regression coefficient can be substantial even with finite sample sizes, and our weighted method outperforms existing approaches such as penalized regression and the James–Stein estimator. Additionally, we provide a theoretical guarantee that, in general, the proposed estimators will never yield an asymptotic MSE larger than that of the maximum likelihood estimator using the small data only. Finally, we apply our proposed methods to the Asia Cohort Consortium China cohort data to estimate the relationships between age, BMI, smoking, alcohol use, and mortality. Full article
(This article belongs to the Special Issue Statistical Methods in Bioinformatics and Health Informatics)
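
A minimal sketch of the weighted-combination idea described above, not the authors' estimator: two logistic fits, one on a small well-targeted cohort and one on a large but slightly biased source, are blended coefficient-wise. The weight w here is a fixed illustrative value; the paper derives an optimal data-driven weight.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def simulate(n, beta, bias_shift=0.0):
    """Simulate logistic data; bias_shift perturbs the coefficients for the 'big' source."""
    X = rng.normal(size=(n, len(beta)))
    p = 1.0 / (1.0 + np.exp(-(X @ (beta + bias_shift))))
    y = rng.binomial(1, p)
    return X, y

beta_true = np.array([1.0, -0.5, 0.25])
X_small, y_small = simulate(200, beta_true)                 # well-targeted but small cohort
X_big, y_big = simulate(20000, beta_true, bias_shift=0.1)   # large but slightly biased source

# Large C makes the fits effectively unpenalized.
fit_small = LogisticRegression(C=1e6).fit(X_small, y_small)
fit_big = LogisticRegression(C=1e6).fit(X_big, y_big)

# Weighted estimator: trade a little bias for a large variance reduction.
w = 0.5  # illustrative; the paper derives an optimal weight
beta_weighted = (1 - w) * fit_small.coef_ + w * fit_big.coef_
print(np.round(beta_weighted, 3))
```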

15 pages, 273 KiB  
Article
Density Formula in Malliavin Calculus by Using Stein’s Method and Diffusions
by Hyun-Suk Park
Mathematics 2025, 13(2), 323; https://doi.org/10.3390/math13020323 - 20 Jan 2025
Viewed by 726
Abstract
Let G be a random variable given by a functional of an isonormal Gaussian process X defined on some probability space. Studies have been conducted to determine the exact form of the density function of the random variable G. In this paper, unlike previous studies, we use Stein’s method for invariant measures of diffusions to obtain the density formula of G. By comparing the density function obtained in this paper with that of the diffusion invariant measure, we find that the diffusion coefficient of an Itô diffusion with an invariant measure having a density can be expressed in terms of operators in Malliavin calculus. Full article
24 pages, 499 KiB  
Article
Constrained Bayesian Method for Testing Equi-Correlation Coefficient
by Kartlos Kachiashvili and Ashis SenGupta
Axioms 2024, 13(10), 722; https://doi.org/10.3390/axioms13100722 - 17 Oct 2024
Viewed by 679
Abstract
The problem of testing the equi-correlation coefficient of a standard symmetric multivariate normal distribution is considered. Constrained Bayesian and classical Bayes methods, using maximum likelihood estimation and Stein’s approach, are examined. To investigate the obtained theoretical results and choose the best among them, different practical examples are analyzed. The simulation results showed that the constrained Bayesian method (CBM) using Stein’s approach makes decisions with higher reliability when testing hypotheses about the equi-correlation coefficient than the Bayes method. Also, using this approach with the probability distribution of linear combinations of chi-square random variables gives better results than using the integrated probability distributions, in terms of both the required precision and convenience of practical implementation. Recommendations on the use of the proposed methods for solving practical problems are given. Full article
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)

19 pages, 386 KiB  
Article
Optimal Investment Strategy for DC Pension Plan with Stochastic Salary and Value at Risk Constraint in Stochastic Volatility Model
by Zilan Liu, Huanying Zhang, Yijun Wang and Ya Huang
Axioms 2024, 13(8), 543; https://doi.org/10.3390/axioms13080543 - 10 Aug 2024
Cited by 1 | Viewed by 1031
Abstract
This paper studies the optimal asset allocation problem of a defined contribution (DC) pension plan with a stochastic salary and a Value-at-Risk constraint within a stochastic volatility model. It is assumed that the financial market contains a risk-free asset and a risky asset whose price process satisfies the Stein–Stein stochastic volatility model. To comply with regulatory standards and offer a risk management tool, we integrate the dynamic versions of Value-at-Risk (VaR), Conditional Value-at-Risk (CVaR), and worst-case CVaR (wcCVaR) constraints into the DC pension fund management model. The salary is assumed to be stochastic and characterized by geometric Brownian motion. In the dynamic setting, a CVaR/wcCVaR constraint is equivalent to a VaR constraint under a higher confidence level. By using the Lagrange multiplier method and the dynamic programming method to maximize the constant absolute risk aversion (CARA) utility of terminal wealth, we obtain closed-form expressions of optimal investment strategies with and without a VaR constraint. Several numerical examples are provided to illustrate the impact of a dynamic VaR/CVaR/wcCVaR constraint and other parameters on the optimal strategy. Full article
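
For readers unfamiliar with the Stein–Stein model referenced above, here is a minimal Euler–Maruyama simulation sketch of one common formulation, in which an Ornstein–Uhlenbeck process drives the volatility of the risky asset; all parameter values are arbitrary, and the sketch does not implement the paper's constrained optimization.

```python
import numpy as np

rng = np.random.default_rng(3)

# One common Stein–Stein formulation:
#   dv_t = kappa * (m - v_t) dt + sigma dW^v_t      (OU volatility)
#   dS_t = mu * S_t dt + |v_t| * S_t dW^S_t          (risky asset)
kappa, m, sigma = 2.0, 0.2, 0.3
mu, S0, v0 = 0.05, 100.0, 0.2
T, steps = 1.0, 252
dt = T / steps

S, v = S0, v0
for _ in range(steps):
    dWv, dWs = rng.normal(0.0, np.sqrt(dt), size=2)  # independent Brownian increments here
    v = v + kappa * (m - v) * dt + sigma * dWv
    S = S * (1.0 + mu * dt + abs(v) * dWs)
print(f"Terminal asset price: {S:.2f}, terminal volatility: {v:.3f}")
```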

15 pages, 3450 KiB  
Article
Adaptive Truncation Threshold Determination for Multimode Fiber Single-Pixel Imaging
by Yangyang Xiang, Junhui Li, Mingying Lan, Le Yang, Xingzhuo Hu, Jianxin Ma and Li Gao
Appl. Sci. 2024, 14(16), 6875; https://doi.org/10.3390/app14166875 - 6 Aug 2024
Viewed by 1067
Abstract
Truncated singular value decomposition (TSVD) is a popular recovery algorithm for multimode fiber single-pixel imaging (MMF-SPI), and it uses truncation thresholds to suppress noise influences. However, due to the sensitivity of MMF to stochastic disturbances, the threshold requires frequent re-determination as noise levels dynamically fluctuate. In response, we design an adaptive truncation threshold determination (ATTD) method for TSVD-based MMF-SPI in disturbed environments. Simulations and experiments reveal that ATTD approaches the performance of the ideal clairvoyant benchmark, i.e., the best possible image recovery at a given noise level, and surpasses two traditional truncation threshold determination methods, the fixed threshold and Stein’s unbiased risk estimator (SURE), with less computation, especially under high noise levels. Moreover, target insensitivity is demonstrated via numerical simulations, and the robustness of the self-contained parameters is explored. Finally, we compare and discuss the performance of TSVD-based MMF-SPI using ATTD and machine-learning-based MMF-SPI using diffusion models to provide a comprehensive understanding of ATTD. Full article
(This article belongs to the Special Issue Optical Imaging and Sensing: From Design to Its Practical Use)
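
A minimal sketch of TSVD recovery with a truncation threshold on an illustrative linear inverse problem; it is not the MMF-SPI pipeline, and the fixed threshold stands in for what the paper's ATTD rule would adapt to the noise level.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative ill-conditioned forward model y = A x + noise.
m, n = 80, 60
A = rng.normal(size=(m, n)) @ np.diag(np.logspace(0, -6, n)) @ rng.normal(size=(n, n))
x_true = rng.normal(size=n)
y = A @ x_true + 1e-3 * rng.normal(size=m)

def tsvd_recover(A, y, threshold):
    """Truncated-SVD pseudoinverse: keep only singular values above `threshold`."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > threshold
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# A fixed threshold; the paper's ATTD would instead adapt this to the noise level.
x_hat = tsvd_recover(A, y, threshold=1e-2)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```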

15 pages, 296 KiB  
Article
On Some Multipliers Related to Discrete Fractional Integrals
by Jinhua Cheng
Mathematics 2024, 12(10), 1545; https://doi.org/10.3390/math12101545 - 15 May 2024
Viewed by 1291
Abstract
This paper explores the properties of multipliers associated with discrete analogues of fractional integrals, revealing intriguing connections with Dirichlet characters, Euler’s identity, and Dedekind zeta functions of quadratic imaginary fields. Employing Fourier transform techniques, the Hardy–Littlewood circle method, and a discrete analogue of the Stein–Weiss inequality on product space through implication methods, we establish ℓ^p → ℓ^q bounds for these operators. Our results contribute to a deeper understanding of the intricate relationship between number theory and harmonic analysis in discrete domains, offering insights into the convergence behavior of these operators. Full article
(This article belongs to the Special Issue Fractional Calculus and Mathematical Applications, 2nd Edition)
19 pages, 1790 KiB  
Article
Operator Smith Algorithm for Coupled Stein Equations from Jump Control Systems
by Bo Yu, Ning Dong and Baiquan Hu
Axioms 2024, 13(4), 249; https://doi.org/10.3390/axioms13040249 - 10 Apr 2024
Viewed by 978
Abstract
Consider a class of coupled Stein equations arising from jump control systems. An operator Smith algorithm is proposed for calculating the solution of the system. Convergence of the algorithm is established under certain conditions. For large-scale systems, the operator Smith algorithm is extended to a low-rank structured format, and the error of the algorithm is analyzed. Numerical experiments demonstrate that the operator Smith iteration outperforms existing linearly convergent iterative methods in terms of computation time and accuracy. The low-rank structured iterative format is highly effective in approximating the solutions of large-scale structured problems. Full article
(This article belongs to the Special Issue The Numerical Analysis and Its Application)
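
To make the object concrete, here is a hedged sketch of the plain fixed-point (Smith-type) iteration for a single, uncoupled discrete-time Stein equation X = A X Aᵀ + Q; the paper's operator Smith algorithm treats the coupled case and a low-rank format, which this toy loop does not.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 5
# Stable A (spectral radius < 1) so the Stein equation X = A X A^T + Q has a unique solution.
A = 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)
Q = np.eye(n)

# Plain Smith-type fixed-point iteration: X_{k+1} = Q + A X_k A^T.
X = Q.copy()
for _ in range(200):
    X_next = Q + A @ X @ A.T
    if np.linalg.norm(X_next - X, ord="fro") < 1e-12:
        X = X_next
        break
    X = X_next

# Residual check: X - A X A^T - Q should be close to zero.
print(np.linalg.norm(X - A @ X @ A.T - Q, ord="fro"))
```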

20 pages, 6191 KiB  
Article
Fail-Safe Topology Optimization Using Damage Scenario Filtering
by Wuhe Sun, Yong Zhang, Yunfei Liu, Kai Cheng and Fei Cheng
Appl. Sci. 2024, 14(2), 878; https://doi.org/10.3390/app14020878 - 19 Jan 2024
Cited by 1 | Viewed by 1537
Abstract
Within the framework of isotropic materials, this paper introduces an efficient topology optimization method that incorporates fail-safe design considerations using a penalty function approach. Existing methods are either computationally expensive or overlook fail-safe requirements during optimization. This approach not only achieves optimized structures with fail-safe characteristics, but also significantly enhances the computational efficiency of fail-safe topology optimization. In this method, the minimization of worst-case compliance serves as the optimization objective, employing the Kreisselmeier–Steinhauser function to approximate the non-differentiable maximum operator. A sensitivity analysis, derived through the adjoint method, is utilized, and a universal fail-safe optimization criterion is developed to update the design variables. During the optimization process for fail-safe strategies, a density-based filtering method is applied, effectively reducing the number of damage scenarios. Finally, the effectiveness and computational efficiency of this method are validated through several numerical examples. Full article
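
The Kreisselmeier–Steinhauser (KS) aggregation used above is simple to state. The sketch below shows how it smoothly approximates the maximum of several constraint values; the compliance numbers are invented and rho is the aggregation parameter (larger rho tracks the true maximum more closely).

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier–Steinhauser smooth approximation of max(g).

    Uses the shifted form for numerical stability:
    KS(g) = max(g) + (1/rho) * log(sum(exp(rho * (g - max(g))))).
    """
    g = np.asarray(g, dtype=float)
    g_max = g.max()
    return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

# Compliance values of several damage scenarios (illustrative numbers).
compliances = [1.8, 2.4, 2.1, 2.39]
print(max(compliances), ks_aggregate(compliances))  # KS slightly overestimates the max
```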

15 pages, 1105 KiB  
Article
Learning Kernel Stein Discrepancy for Training Energy-Based Models
by Lu Niu, Shaobo Li and Zhenping Li
Appl. Sci. 2023, 13(22), 12293; https://doi.org/10.3390/app132212293 - 14 Nov 2023
Cited by 1 | Viewed by 1480
Abstract
The primary challenge in unsupervised learning is training unnormalized density models and then generating similar samples. Few traditional unnormalized models provide a direct measure of the quality of the trained model, as most are evaluated on downstream tasks and often involve complex sampling processes. Kernel Stein Discrepancy (KSD), a goodness-of-fit test method, can measure the discrepancy between generated samples and the theoretical distribution; therefore, it can be employed to measure the quality of trained models. We first demonstrate that, under certain constraints, KSD is equal to Maximum Mean Discrepancy (MMD), a two-sample test method. PT KSD GAN (Kernel Stein Discrepancy Generative Adversarial Network with a Pulling-Away Term) is proposed to compel generated samples to approximate the theoretical distribution. The generator, functioning as an implicit generative model, employs KSD as its loss to avoid tedious sampling processes. In contrast, the discriminator is trained to identify the data manifold, acting as an explicit energy-based model. To demonstrate the effectiveness of our approach, we undertook experiments on two-dimensional toy datasets. Our results highlight that our generator adeptly captures the accurate density distribution, while the discriminator proficiently recognizes the shape of the unnormalized approximate distribution. When applied to linear Independent Component Analysis datasets, the log likelihoods of PT KSD GAN improve by about 5‰ over existing methods when the data dimension is less than 30. Furthermore, our tests on image datasets reveal that PT KSD GAN excels in navigating high-dimensional challenges, yielding genuinely realistic samples. Full article
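
As a concrete reference for the KSD quantity discussed above, here is a hedged one-dimensional sketch: a V-statistic estimate with an RBF kernel and a standard-normal target. The paper's GAN training setup is far more involved; this only shows what the discrepancy measures.

```python
import numpy as np

def ksd_vstat(x, score, bandwidth=1.0):
    """V-statistic estimate of squared Kernel Stein Discrepancy in 1-D.

    Uses the RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h^2)) and the Stein kernel
    u_p(x, y) = s(x) s(y) k + s(x) dk/dy + s(y) dk/dx + d^2 k / (dx dy).
    """
    x = np.asarray(x, dtype=float)
    h2 = bandwidth ** 2
    diff = x[:, None] - x[None, :]
    k = np.exp(-diff ** 2 / (2 * h2))
    dk_dx = -diff / h2 * k
    dk_dy = diff / h2 * k
    d2k = (1.0 / h2 - diff ** 2 / h2 ** 2) * k
    s = score(x)
    u = s[:, None] * s[None, :] * k + s[:, None] * dk_dy + s[None, :] * dk_dx + d2k
    return u.mean()

def score_std_normal(x):
    """Score of the standard normal target: d/dx log p(x) = -x."""
    return -x

rng = np.random.default_rng(6)
print("N(0,1) samples:  ", ksd_vstat(rng.normal(size=500), score_std_normal))
print("shifted samples: ", ksd_vstat(rng.normal(loc=1.0, size=500), score_std_normal))
```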

14 pages, 441 KiB  
Article
Shrinking the Variance in Experts’ “Classical” Weights Used in Expert Judgment Aggregation
by Gayan Dharmarathne, Gabriela F. Nane, Andrew Robinson and Anca M. Hanea
Forecasting 2023, 5(3), 522-535; https://doi.org/10.3390/forecast5030029 - 23 Aug 2023
Cited by 2 | Viewed by 2188
Abstract
Mathematical aggregation of probabilistic expert judgments often involves weighted linear combinations of experts’ elicited probability distributions of uncertain quantities. Experts’ weights are commonly derived from calibration experiments based on the experts’ performance scores, where performance is evaluated in terms of the calibration and the informativeness of the elicited distributions. This is referred to as Cooke’s method, or the classical model (CM), for aggregating probabilistic expert judgments. The performance scores are derived from experiments, so they are uncertain and, therefore, can be represented by random variables. As a consequence, the experts’ weights are also random variables. We focus on addressing the underlying uncertainty when calculating experts’ weights to be used in a mathematical aggregation of expert elicited distributions. This paper investigates the potential of applying an empirical Bayes development of the James–Stein shrinkage estimation technique on the CM’s weights to derive shrinkage weights with reduced mean squared errors. We analyze 51 professional CM expert elicitation studies. We investigate the differences between the classical and the (new) shrinkage CM weights and the benefits of using the new weights. In theory, the outcome of a probabilistic model using the shrinkage weights should be better than that obtained when using the classical weights because shrinkage estimation techniques reduce the mean squared errors of estimators in general. In particular, the empirical Bayes shrinkage method used here reduces the assigned weights for those experts with larger variances in the corresponding sampling distributions of weights in the experiment. We measure improvement of the aggregated judgments in a cross-validation setting using two studies that can afford such an approach. Contrary to expectations, the results are inconclusive. However, in practice, we can use the proposed shrinkage weights to increase the reliability of derived weights when only small-sized experiments are available. We demonstrate the latter on 49 post-2006 professional CM expert elicitation studies. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2023)
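
For orientation, here is a hedged sketch of a generic James–Stein-type shrinkage toward a common mean, of the kind the paper adapts to CM weights; the actual empirical Bayes development, which uses each expert's own sampling variance, is more involved, and the weights and variance below are invented.

```python
import numpy as np

def js_shrink_to_mean(w, var):
    """Shrink estimates w toward their grand mean, James-Stein style.

    `var` is an (assumed common) sampling variance of each entry; the shrinkage
    factor is clipped at zero (positive-part rule). Requires len(w) >= 4.
    """
    w = np.asarray(w, dtype=float)
    k = len(w)
    w_bar = w.mean()
    ss = ((w - w_bar) ** 2).sum()
    factor = max(0.0, 1.0 - (k - 3) * var / ss)
    return w_bar + factor * (w - w_bar)

# Illustrative calibration weights for six experts and a guessed sampling variance.
weights = np.array([0.05, 0.10, 0.15, 0.20, 0.20, 0.30])
print(np.round(js_shrink_to_mean(weights, var=0.01), 3))
```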

10 pages, 975 KiB  
Article
What Is Phenomenological Thomism? Its Principles and an Application: The Anthropological Square
by Jadwiga Helena Guerrero van der Meijden
Religions 2023, 14(7), 938; https://doi.org/10.3390/rel14070938 - 20 Jul 2023
Viewed by 2890
Abstract
In the debates over various kinds and traditions of Thomism, the term “Phenomenological Thomism” does not appear often. However, once uttered, it is instantly linked to two figures: Edith Stein and Karol Wojtyła. In her attempt at contrasting and bringing together Husserl’s phenomenology and the philosophy of St. Thomas Aquinas, the founder of the new approach, Edith Stein, pioneered a philosophy that innovatively united phenomenological and Thomistic methods. This article analyses the essential features of her method, proposing to call it “Phenomenological Thomism”. In order to demonstrate the internal logic of this approach, I apply it to one topic, that of the human being, construing the Anthropological Square. The thesis of the article holds that Phenomenological Thomism is sui generis, yet not an estranged tradition in the history of philosophy. Full article

19 pages, 345 KiB  
Article
A Strong Limit Theorem of the Largest Entries of Sample Correlation Matrices under a Strong Mixing Assumption
by Haozhu Zhao and Yong Zhang
Axioms 2023, 12(7), 657; https://doi.org/10.3390/axioms12070657 - 2 Jul 2023
Cited by 1 | Viewed by 1241
Abstract
We are interested in an n by p matrix X_n where the n rows are strictly stationary α-mixing random vectors and each of the p columns is an independent and identically distributed random vector; p = p_n goes to infinity as n → ∞, satisfying 0 < c_1 ≤ p_n/n^τ ≤ c_2 < ∞, where τ > 0 and c_2 ≥ c_1 > 0. We obtain a logarithmic law for L_n = max_{1 ≤ i < j ≤ p_n} |ρ_ij| using the Chen–Stein Poisson approximation method, where ρ_ij denotes the sample correlation coefficient between the ith column and the jth column of X_n. Full article
(This article belongs to the Special Issue Probability, Statistics and Estimation)
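
To visualize the statistic L_n studied above, a hedged simulation sketch with i.i.d. Gaussian rows (rather than the paper's α-mixing setting): it computes the largest off-diagonal entry of the sample correlation matrix and compares it with the typical sqrt(log p / n) magnitude.

```python
import numpy as np

rng = np.random.default_rng(7)

def largest_offdiag_corr(n, p):
    """L_n = max_{1 <= i < j <= p} |rho_ij| for an n x p matrix of i.i.d. N(0, 1) entries."""
    X = rng.normal(size=(n, p))
    R = np.corrcoef(X, rowvar=False)   # p x p sample correlation matrix
    iu = np.triu_indices(p, k=1)       # strictly upper-triangular indices
    return np.abs(R[iu]).max()

for n in (100, 400, 1600):
    p = n  # let p grow with n for illustration
    Ln = largest_offdiag_corr(n, p)
    print(f"n = {n:5d}  L_n = {Ln:.3f}  2*sqrt(log(p)/n) = {2 * np.sqrt(np.log(p) / n):.3f}")
```
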
14 pages, 298 KiB  
Article
Percolation Problems on N-Ary Trees
by Tianxiang Ren and Jinwen Wu
Mathematics 2023, 11(11), 2571; https://doi.org/10.3390/math11112571 - 4 Jun 2023
Viewed by 1245
Abstract
Percolation theory is a subject that has been flourishing in recent decades. Because of its simple expression and rich connotation, it is widely used in chemistry, ecology, physics, materials science, infectious diseases, and complex networks. Consider an infinite rooted N-ary tree where each vertex is assigned an i.i.d. random variable. When the random variables follow a Bernoulli distribution, a path is called a head run if all the random variables assigned on the path equal 1. We obtain the weak law of large numbers for the length of the longest head run. In addition, when the random variables follow a continuous distribution, a path is called an increasing path if the sequence of random variables on the path is increasing. By Stein’s method and other probabilistic methods, we prove that, with probability one, the length of the longest increasing path concentrates on three points. We also consider limiting behaviours for the longest increasing path in a special tree. Full article
(This article belongs to the Special Issue Probability Distributions and Their Applications)