Search Results (10)

Search Parameters:
Keywords = statistical unbounded convergence

80 pages, 858 KiB  
Article
Uniform in Number of Neighbor Consistency and Weak Convergence of k-Nearest Neighbor Single Index Conditional Processes and k-Nearest Neighbor Single Index Conditional U-Processes Involving Functional Mixing Data
by Salim Bouzebda
Symmetry 2024, 16(12), 1576; https://doi.org/10.3390/sym16121576 - 25 Nov 2024
Cited by 4 | Viewed by 1359
Abstract
U-statistics are fundamental in modeling statistical measures that involve responses from multiple subjects. They generalize the concept of the empirical mean of a random variable X to include summations over each m-tuple of distinct observations of X. W. Stute introduced conditional U-statistics, extending the Nadaraya–Watson estimates for regression functions, and demonstrated their strong pointwise consistency with the conditional expectation r^(m)(φ, t), defined as E[φ(Y_1, …, Y_m) | (X_1, …, X_m) = t] for t ∈ X^m. This paper focuses on estimating functional single index (FSI) conditional U-processes for regular time series data. We propose a novel, automatic, and location-adaptive procedure for estimating these processes based on k-Nearest Neighbor (kNN) principles. Our asymptotic analysis includes data-driven neighbor selection, making the method highly practical. The local nature of the kNN approach improves predictive power compared to traditional kernel estimates. Additionally, we establish new uniform results in bandwidth selection for kernel estimates in FSI conditional U-processes, including almost complete convergence rates and weak convergence under general conditions. These results apply to both bounded and unbounded function classes satisfying certain moment conditions, and are proven under standard Vapnik–Chervonenkis structural conditions and mild model assumptions. Furthermore, we demonstrate uniform consistency for the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship. This result is independently valuable and has potential applications in areas such as set-indexed conditional U-statistics, the Kendall rank correlation coefficient, and discrimination problems.
(This article belongs to the Section Mathematics)
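For readers who want the object behind r^(m)(φ, t) made concrete, the following is a minimal LaTeX sketch of a Stute-type kernel-weighted conditional U-statistic; the kernel K, semi-metric d, and bandwidth h_n are generic placeholders rather than the paper's own notation, and the kNN variant simply replaces h_n by a data-driven neighbor distance.

```latex
% Stute-type conditional U-statistic estimator of
%   r^{(m)}(\varphi,\mathbf{t}) = E[\varphi(Y_1,\dots,Y_m) \mid (X_1,\dots,X_m) = \mathbf{t}],
% with kernel K, (semi-)metric d on the covariate space, and bandwidth h_n;
% the sums run over all m-tuples of distinct indices.
\widehat{r}^{(m)}_n(\varphi,\mathbf{t}) =
  \frac{\sum_{(i_1,\dots,i_m)\in I_n^m}
        \varphi(Y_{i_1},\dots,Y_{i_m})\,
        \prod_{j=1}^{m} K\!\bigl(d(t_j, X_{i_j})/h_n\bigr)}
       {\sum_{(i_1,\dots,i_m)\in I_n^m}
        \prod_{j=1}^{m} K\!\bigl(d(t_j, X_{i_j})/h_n\bigr)},
\qquad
I_n^m = \bigl\{(i_1,\dots,i_m) : 1 \le i_j \le n,\ i_j \neq i_k \text{ for } j \neq k \bigr\}.
% In a kNN version, h_n is replaced by H_{n,k}(t_j), the distance from t_j to its
% k-th nearest covariate, which is what makes the estimator location-adaptive;
% in the functional single-index setting, d is applied to projections of X rather than to X itself.
```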
29 pages, 2965 KiB  
Article
The Robust Supervised Learning Framework: Harmonious Integration of Twin Extreme Learning Machine, Squared Fractional Loss, Capped L2,p-norm Metric, and Fisher Regularization
by Zhenxia Xue, Yan Wang, Yuwen Ren and Xinyuan Zhang
Symmetry 2024, 16(9), 1230; https://doi.org/10.3390/sym16091230 - 19 Sep 2024
Cited by 1 | Viewed by 1607
Abstract
As a novel learning algorithm for feedforward neural networks, the twin extreme learning machine (TELM) boasts advantages such as simple structure, few parameters, low complexity, and excellent generalization performance. However, it employs the squared L2-norm metric and an unbounded hinge loss function, which tends to overstate the influence of outliers and subsequently diminishes the robustness of the model. To address this issue, scholars have proposed the bounded capped L2,p-norm metric, which can be flexibly adjusted by varying the p value to adapt to different data and reduce the impact of noise. Therefore, we substitute the metric in the TELM with the capped L2,p-norm metric in this paper. Furthermore, we propose a bounded, smooth, symmetric, and noise-insensitive squared fractional loss (SF-loss) function to replace the hinge loss function in the TELM. Additionally, the TELM neglects statistical information in the data; thus, we incorporate the Fisher regularization term into our model to fully exploit the statistical characteristics of the data. Drawing upon these merits, a squared fractional loss-based robust supervised twin extreme learning machine (SF-RSTELM) model is proposed by integrating the capped L2,p-norm metric, SF-loss, and Fisher regularization term. The model shows significant effectiveness in decreasing the impacts of noise and outliers. However, the proposed model’s non-convexity poses a formidable challenge in the realm of optimization. We use an efficient iterative algorithm to solve it based on the concave-convex procedure (CCCP) algorithm and demonstrate the convergence of the proposed algorithm. Finally, to verify the algorithm’s effectiveness, we conduct experiments on artificial datasets, UCI datasets, image datasets, and NDC large datasets. The experimental results show that our model is able to achieve higher ACC and F1 scores across most datasets, with improvements ranging from 0.28% to 4.5% compared to other state-of-the-art algorithms.
(This article belongs to the Section Mathematics)
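The concave-convex procedure mentioned above has a simple generic form; the following LaTeX sketch shows the standard CCCP iteration under the assumption that the non-convex objective J splits into a convex and a concave part (the split shown is generic, not the paper's specific decomposition of the SF-RSTELM objective).

```latex
% Generic CCCP iteration: decompose the non-convex objective as
J(w) = J_{\mathrm{vex}}(w) + J_{\mathrm{cav}}(w),
% with J_{vex} convex and J_{cav} concave, then at each step linearize the
% concave part at the current iterate and solve the remaining convex subproblem:
w^{(t+1)} \in \operatorname*{arg\,min}_{w}\;
  J_{\mathrm{vex}}(w) + \bigl\langle \nabla J_{\mathrm{cav}}(w^{(t)}),\, w \bigr\rangle .
```

Because the linearization upper-bounds the concave part, the objective value is non-increasing along the iterates; this monotonicity is the usual basis for CCCP convergence arguments.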
23 pages, 426 KiB  
Article
A Penalized Empirical Likelihood Approach for Estimating Population Sizes under the Negative Binomial Regression Model
by Yulu Ji and Yang Liu
Mathematics 2024, 12(17), 2674; https://doi.org/10.3390/math12172674 - 28 Aug 2024
Viewed by 1312
Abstract
In capture–recapture experiments, the presence of overdispersion and heterogeneity necessitates the use of the negative binomial regression model for inferring population sizes. However, within this model, existing methods based on likelihood and ratio regression for estimating the dispersion parameter often face boundary and nonidentifiability issues. These problems can result in nonsensically large point estimates and unbounded upper limits of confidence intervals for the population size. We present a penalized empirical likelihood technique for solving these two problems by imposing a half-normal prior on the population size. Based on the proposed approach, a maximum penalized empirical likelihood estimator with asymptotic normality and a penalized empirical likelihood ratio statistic with asymptotic chi-square distribution are derived. To improve numerical performance, we present an effective expectation-maximization (EM) algorithm. In the M-step, optimization for the model parameters could be achieved by fitting a standard negative binomial regression model via the R basic function glm.nb(). This approach ensures the convergence and reliability of the numerical algorithm. Using simulations, we analyze several synthetic datasets to illustrate three advantages of our methods in finite-sample cases: complete mitigation of the boundary problem, more efficient maximum penalized empirical likelihood estimates, and more precise penalized empirical likelihood ratio interval estimates compared to the estimates obtained without penalty. These advantages are further demonstrated in a case study estimating the abundance of black bears (Ursus americanus) at the U.S. Army’s Fort Drum Military Installation in northern New York.
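The half-normal prior mentioned above acts as an additive quadratic penalty on the log-likelihood scale; the following LaTeX sketch illustrates this in generic notation (scale parameter τ, regression coefficients β, dispersion parameter k), which is an assumption rather than the paper's exact formulation.

```latex
% Half-normal prior on the population size N (support N > 0, scale \tau):
%   \pi(N) \propto \exp\{-N^2 / (2\tau^2)\},
% so maximizing the penalized (empirical) log-likelihood amounts to
\ell_{\mathrm{pen}}(N, \boldsymbol{\beta}, k)
  = \ell(N, \boldsymbol{\beta}, k) - \frac{N^2}{2\tau^2},
\qquad
\widehat{N}_{\mathrm{pen}} \in \operatorname*{arg\,max}_{N, \boldsymbol{\beta}, k}
  \ell_{\mathrm{pen}}(N, \boldsymbol{\beta}, k).
```

Because the penalty grows with N, the profiled objective can no longer drift toward arbitrarily large population sizes, which is how the unboundedly large point estimates described above are ruled out.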
81 pages, 866 KiB  
Article
Limit Theorems in the Nonparametric Conditional Single-Index U-Processes for Locally Stationary Functional Random Fields under Stochastic Sampling Design
by Salim Bouzebda
Mathematics 2024, 12(13), 1996; https://doi.org/10.3390/math12131996 - 27 Jun 2024
Cited by 10 | Viewed by 1260
Abstract
In his work published in (Ann. Probab. 19, No. 2 (1991), 812–825), W. Stute introduced the notion of conditional U-statistics, expanding upon the Nadaraya–Watson estimates used for regression functions, and established the pointwise consistency and asymptotic normality of these statistics. Our research extends these concepts to a broader scope, establishing, for the first time, an asymptotic framework for single-index conditional U-statistics applicable to locally stationary random fields {X_{s,A_n} : s ∈ R_n} observed at irregularly spaced locations in R_n, a subset of R^d. We introduce an estimator for the single-index conditional U-statistics operator that accommodates the nonstationary nature of the data-generating process. Our method employs a stochastic sampling approach that allows for the flexible creation of irregularly spaced sampling sites, covering both pure and mixed increasing domain frameworks. We establish the uniform convergence rate and weak convergence of the single-index conditional U-processes. Specifically, we examine weak convergence under bounded or unbounded function classes that satisfy specific moment conditions. These findings are established under general structural conditions on the function classes and underlying models. The theoretical advancements outlined in this paper form essential foundations for potential breakthroughs in functional data analysis, laying the groundwork for future research in this field. Moreover, in the same context, we show uniform consistency for the nonparametric inverse probability of censoring weighted (I.P.C.W.) estimators of the regression function under random censorship, which is of interest in its own right. Potential applications of our findings encompass, among many others, set-indexed conditional U-statistics, the Kendall rank correlation coefficient, and discrimination problems.
(This article belongs to the Section D1: Probability and Statistics)
20 pages, 400 KiB  
Article
Randomly Shifted Lattice Rules with Importance Sampling and Applications
by Hejin Wang and Zhan Zheng
Mathematics 2024, 12(5), 630; https://doi.org/10.3390/math12050630 - 21 Feb 2024
Viewed by 1337
Abstract
In financial and statistical computations, calculating expectations often requires evaluating integrals with respect to a Gaussian measure. Monte Carlo methods are widely used for this purpose due to their dimension-independent convergence rate. Quasi-Monte Carlo is the deterministic analogue of Monte Carlo and has the potential to substantially enhance the convergence rate. Importance sampling is a widely used variance reduction technique. However, research into the specific impact of importance sampling on the integrand, as well as the conditions for convergence, is relatively scarce. In this study, we combine the randomly shifted lattice rule with importance sampling. We prove that, for unbounded functions, randomly shifted lattice rules combined with a suitably chosen importance density can achieve a convergence rate of O(N^{-1+ε}) for arbitrarily small ε > 0, given N samples, under certain conditions. We also prove that the conditions of convergence for Laplace importance sampling are stricter than those for optimal drift importance sampling. Furthermore, using a generalized linear mixed model and the Rendleman–Bartter model, we provide the conditions under which functions utilizing Laplace importance sampling achieve convergence rates of nearly O(N^{-1+ε}) for arbitrarily small ε > 0.
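A minimal Python sketch of the two ingredients the abstract combines: a randomly shifted rank-1 lattice rule and a drift-type importance sampling change of measure for a Gaussian expectation E[f(Z)], Z ~ N(0, I_d). The generating vector z and the drift mu are illustrative placeholders; the paper's component-by-component construction and its specific choice of importance density are not reproduced here.

```python
import numpy as np
from scipy.stats import norm


def shifted_lattice_is(f, z, n, mu, n_shifts=8, rng=None):
    """Estimate E[f(Z)] for Z ~ N(0, I_d) with a randomly shifted rank-1 lattice
    rule combined with drift-type importance sampling under N(mu, I_d).

    f  : callable taking an (m, d) array of points and returning (m,) values
    z  : integer generating vector of length d (illustrative, not CBC-optimized)
    n  : number of lattice points per random shift
    mu : drift vector of the importance density N(mu, I_d)
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(z, dtype=np.int64)
    mu = np.asarray(mu, dtype=float)
    i = np.arange(n)[:, None]                    # lattice point indices 0..n-1
    estimates = []
    for _ in range(n_shifts):
        delta = rng.random(z.size)               # random shift in [0, 1)^d
        u = (i * z[None, :] / n + delta) % 1.0   # shifted rank-1 lattice points
        y = norm.ppf(u) + mu                     # map to samples from N(mu, I_d)
        lr = np.exp(-y @ mu + 0.5 * mu @ mu)     # likelihood ratio phi(y) / phi_mu(y)
        estimates.append(np.mean(f(y) * lr))
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_shifts)


# Toy check: E[exp(Z_1 + ... + Z_d)] = exp(d/2) for d = 4.
if __name__ == "__main__":
    d = 4
    est, err = shifted_lattice_is(lambda y: np.exp(y.sum(axis=1)),
                                  z=[1, 182667, 469891, 498753],
                                  n=2**10, mu=np.ones(d))
    print(est, err, np.exp(d / 2))
```

With a well-chosen generating vector, the shifted lattice points cover [0,1)^d far more evenly than i.i.d. uniforms, which is where near-O(N^{-1+ε}) rates come from; the random shift restores unbiasedness and provides a cheap error estimate across shifts.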
20 pages, 371 KiB  
Article
Probability Distributions Approximation via Fractional Moments and Maximum Entropy: Theoretical and Computational Aspects
by Pier Luigi Novi Inverardi and Aldo Tagliani
Axioms 2024, 13(1), 28; https://doi.org/10.3390/axioms13010028 - 30 Dec 2023
Cited by 8 | Viewed by 1911
Abstract
In the literature, the use of fractional moments to express the available information in the framework of the maximum entropy (MaxEnt) approximation of a distribution F with finite or unbounded positive support has essentially been considered a computational tool for improving the performance of the analogous procedure based on integer moments. No attention has been paid to two formal aspects of fractional moments: the conditions for the existence of the maximum entropy approximation based on them, and the convergence in entropy of this approximation to F. This paper aims to fill this gap by providing proofs of these two fundamental results. Convergence in entropy can in turn be exploited in the optimal selection of the orders of the fractional moments to accelerate the convergence of the MaxEnt approximation to F, to clarify how this type of convergence relates to other types of convergence useful in statistical applications, and to preserve some important prior features of the underlying distribution F.
(This article belongs to the Special Issue Statistical Methods and Applications)
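For orientation, a minimal LaTeX sketch of a MaxEnt approximant constrained by M fractional moments, written in generic notation for a density f on a positive (finite or unbounded) support; the specific parametrization is not taken from the paper.

```latex
% MaxEnt approximant f_M of a density f, constrained by M fractional moments
% of orders \alpha_1, \dots, \alpha_M:
f_M(x) = \exp\Bigl(-\lambda_0 - \sum_{j=1}^{M} \lambda_j\, x^{\alpha_j}\Bigr),
\qquad
\int f_M(x)\,dx = 1,
\qquad
\int x^{\alpha_j} f_M(x)\,dx = \int x^{\alpha_j} f(x)\,dx, \quad j = 1,\dots,M.
```

For approximants of this exponential form that match the constrained moments of f, the entropy gap H(f_M) - H(f) coincides with the Kullback–Leibler divergence of f from f_M, so convergence in entropy controls convergence in total variation via Pinsker's inequality.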
69 pages, 751 KiB  
Article
Non-Parametric Conditional U-Processes for Locally Stationary Functional Random Fields under Stochastic Sampling Design
by Salim Bouzebda and Inass Soukarieh
Mathematics 2023, 11(1), 16; https://doi.org/10.3390/math11010016 - 20 Dec 2022
Cited by 21 | Viewed by 2055
Abstract
Stute presented the so-called conditional U-statistics generalizing the Nadaraya–Watson estimates of the regression function. Stute demonstrated their pointwise consistency and asymptotic normality. In this paper, we extend the results to a more abstract setting. We develop an asymptotic theory of conditional U-statistics for locally stationary random fields {X_{s,A_n} : s ∈ R_n} observed at irregularly spaced locations in R_n = [0, A_n]^d, a subset of R^d. We employ a stochastic sampling scheme that may create irregularly spaced sampling sites in a flexible manner and includes both pure and mixed increasing domain frameworks. We specifically examine the rate of the strong uniform convergence and the weak convergence of conditional U-processes when the explicative variable is functional. We examine the weak convergence where the class of functions is either bounded or unbounded and satisfies specific moment conditions. These results are achieved under somewhat general structural conditions pertaining to the classes of functions and the underlying models. The theoretical results developed in this paper are (or will be) essential building blocks for several future breakthroughs in functional data analysis.
(This article belongs to the Special Issue Current Developments in Theoretical and Applied Statistics)
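The stochastic sampling design referred to above is typically of the following form; this LaTeX sketch follows the usual convention in the locally stationary random field literature and is given as an illustrative assumption rather than the paper's exact definition.

```latex
% Sampling sites on the expanding region R_n = [0, A_n]^d with A_n \to \infty:
s_i = A_n U_i, \qquad i = 1, \dots, n,
\qquad U_1, \dots, U_n \ \text{i.i.d. with density } g \ \text{on } [0,1]^d.
% Pure increasing domain:  n / A_n^d \to c \in (0, \infty);
% mixed increasing domain: n / A_n^d \to \infty, so the sites also become locally dense.
```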
12 pages, 305 KiB  
Article
Applications for Unbounded Convergences in Banach Lattices
by Zhangjun Wang and Zili Chen
Fractal Fract. 2022, 6(4), 199; https://doi.org/10.3390/fractalfract6040199 - 1 Apr 2022
Cited by 4 | Viewed by 2292
Abstract
Several recent papers investigated unbounded convergences in Banach lattices. The focus of this paper is to apply the results of unbounded convergence to the classical Banach lattice theory from a new perspective. Combining all unbounded convergences, including unbounded order (norm, absolute weak, absolute weak*) convergence, we characterize L-weakly compact sets, L-weakly compact operators and M-weakly compact operators on Banach lattices. For applications, we introduce so-called statistical-unbounded convergence and use these convergences to describe KB-spaces and reflexive Banach lattices.
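Since this is the entry behind the search keywords, the two ingredients it combines can be written down explicitly; the following LaTeX sketch gives the standard definitions of unbounded order convergence and of statistical convergence, together with a schematic form of their combination (the authors' precise definition may differ).

```latex
% Unbounded order convergence of a net (x_\alpha) in a Banach lattice X:
x_\alpha \xrightarrow{\,uo\,} x
  \iff
  |x_\alpha - x| \wedge u \xrightarrow{\,o\,} 0
  \quad \text{for every } u \in X_+ .
% Statistical convergence of a real sequence (a_n), via natural density:
a_n \xrightarrow{\,st\,} a
  \iff
  \lim_{n\to\infty} \frac{1}{n}\,
  \bigl|\{\, k \le n : |a_k - a| \ge \varepsilon \,\}\bigr| = 0
  \quad \text{for every } \varepsilon > 0 .
% Schematically, statistical unbounded norm convergence of (x_n) to x asks that
% \bigl\| \, |x_n - x| \wedge u \, \bigr\| \to 0 statistically, for every u \in X_+ .
```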
17 pages, 706 KiB  
Article
Mean Square Convergent Non-Standard Numerical Schemes for Linear Random Differential Equations with Delay
by Julia Calatayud, Juan Carlos Cortés, Marc Jornet and Francisco Rodríguez
Mathematics 2020, 8(9), 1417; https://doi.org/10.3390/math8091417 - 24 Aug 2020
Cited by 3 | Viewed by 2517
Abstract
In this paper, we are concerned with the construction of numerical schemes for linear random differential equations with discrete delay. For the linear deterministic differential equation with discrete delay, a recent contribution proposed a family of non-standard finite difference (NSFD) methods from an exact numerical scheme on the whole domain. The family of NSFD schemes had increasing order of accuracy, was dynamically consistent, and possessed simple computational properties compared to the exact scheme. In the random setting, when the two equation coefficients are bounded random variables and the initial condition is a regular stochastic process, we prove that the randomized NSFD schemes converge in the mean square (m.s.) sense. M.s. convergence allows for approximating the expectation and the variance of the solution stochastic process. In practice, the NSFD scheme is applied with symbolic inputs, and afterward the statistics are explicitly computed by using the linearity of the expectation. This procedure permits retaining the increasing order of accuracy of the deterministic counterpart. Some numerical examples illustrate the approach. The theoretical m.s. convergence rate is supported numerically, even when the two equation coefficients are unbounded random variables. M.s. dynamic consistency is assessed numerically. A comparison with Euler’s method is performed. Finally, an example dealing with the time evolution of a photosynthetic bacterial population is presented.
(This article belongs to the Special Issue Models of Delay Differential Equations)
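As a concrete instance of the setting, the following LaTeX sketch shows a scalar linear random differential equation with one discrete delay, one exact-scheme-inspired NSFD step, and the mean square convergence statement; the particular denominator function is illustrative of the family the abstract refers to, not a reproduction of the paper's schemes.

```latex
% Linear random delay differential equation (a, b random variables, delay \tau > 0):
%   x'(t) = a\,x(t) + b\,x(t - \tau), \quad t \ge 0, \qquad x(t) = g(t), \ -\tau \le t \le 0.
% One NSFD-type step on the grid t_n = n h with \tau = m h, treating the delayed
% term as constant over [t_n, t_{n+1}]:
x_{n+1} = e^{a h}\, x_n + \frac{e^{a h} - 1}{a}\, b\, x_{n - m}.
% Mean square convergence of the randomized scheme means
\lim_{h \to 0} \mathbb{E}\bigl[\, |x_n - x(t_n)|^2 \,\bigr] = 0
\quad \text{at the grid points of a fixed time interval,}
% which is what justifies approximating the mean and variance of the solution process.
```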
10 pages, 3718 KiB  
Article
Properties of Branch Length Similarity Entropy on the Network in R^k
by Oh Sung Kwon and Sang-Hee Lee
Entropy 2014, 16(1), 557-566; https://doi.org/10.3390/e16010557 - 16 Jan 2014
Cited by 8 | Viewed by 6160
Abstract
Branching networks are among the most universal phenomena in living and non-living systems, such as river systems and the bronchial trees of mammals. To topologically characterize branching networks, the Branch Length Similarity (BLS) entropy was suggested, and statistical methods based on this entropy have been applied to shape identification and pattern recognition. However, the mathematical properties of the BLS entropy have not yet been explored in depth, because few applications have required an advanced mathematical understanding of it. Regarding the mathematical study, it was reported, as a theorem, that all BLS entropy values obtained for simple networks created by connecting pixels along the boundary of a shape are exactly unity when the shape has infinite resolution. In the present study, we extended the theorem to networks created by linking infinitely many nodes distributed on a bounded or unbounded domain in R^k for k ≥ 1. We proved that all BLS entropies of the nodes in the network converge to one as the number of nodes, n, goes to infinity, with convergence rate 1 − O(1/ln n), which was confirmed by numerical tests.
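A minimal LaTeX sketch of the Branch Length Similarity entropy of a single node, as it is usually defined in the BLS literature; the notation is generic and not copied from the paper, but the normalization by ln n is what makes a limiting value of one plausible.

```latex
% A node connected to n branches of lengths L_1, \dots, L_n:
p_i = \frac{L_i}{\sum_{j=1}^{n} L_j}, \qquad
S_{\mathrm{BLS}} = -\frac{1}{\ln n} \sum_{i=1}^{n} p_i \ln p_i \ \in (0, 1].
% If all branch lengths were equal, p_i = 1/n and S_{BLS} = 1; the theorem discussed
% above states that S_{BLS} \to 1 as n \to \infty, at rate 1 - O(1/\ln n),
% for the networks considered.
```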