Search Results (127)

Search Parameters:
Keywords = central limit theorems

20 pages, 1391 KB  
Article
A Hybrid Model of Elephant and Moran Random Walks: Exact Distribution and Symmetry Properties
by Rafik Aguech and Mohamed Abdelkader
Symmetry 2025, 17(10), 1709; https://doi.org/10.3390/sym17101709 - 11 Oct 2025
Viewed by 113
Abstract
This work introduces a hybrid memory-based random walk model that combines the Elephant Random Walk with a modified Moran Random Walk. The model incorporates a sequence of independent and identically distributed random variables with mean 1, representing step sizes. A particle starts at the origin and moves upward with probability r or remains stationary with probability 1 − r. From the second step onward, the particle decides its next action based on its previous movement, repeating it with probability p or taking the opposite action with probability 1 − p. The novelty of our approach lies in integrating a short-memory mechanism with variable step sizes, which allows us to derive exact distributions, recurrence relations, and central limit theorems. Our main contributions include (i) establishing explicit expressions for the moment-generating function and the exact distribution of the process, (ii) analyzing the number of stops through a symmetry phenomenon between repetition and inversion, and (iii) providing asymptotic results supported by simulations. Full article
(This article belongs to the Section Mathematics)
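The walk described in this abstract is simple to simulate. Below is a minimal sketch, assuming exponential step sizes (one admissible mean-1 choice) and illustrative values of r and p; none of these specifics come from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def hybrid_walk(n_steps, r, p):
    """Simulate the hybrid elephant/Moran walk sketched in the abstract.

    Each step k carries an i.i.d. size Z_k with mean 1 (here Exp(1)).
    Step 1: move up with probability r, else stay. From step 2 on:
    repeat the previous action with probability p, else invert it.
    """
    steps = rng.exponential(1.0, size=n_steps)   # i.i.d. step sizes, mean 1
    move = rng.random() < r                      # first action: up or stay
    position = steps[0] if move else 0.0
    stops = 0 if move else 1
    for k in range(1, n_steps):
        if rng.random() >= p:                    # invert the previous action
            move = not move
        if move:
            position += steps[k]
        else:
            stops += 1
    return position, stops

# Monte Carlo check of mean displacement and the number of stops
samples = np.array([hybrid_walk(1000, r=0.6, p=0.7) for _ in range(2000)])
print("mean position:", samples[:, 0].mean(), " mean stops:", samples[:, 1].mean())
```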
17 pages, 2845 KB  
Article
Poisson Mean Homogeneity: Single-Observation Framework with Applications
by Xiaoping Shi, Augustine Wong and Kai Kaletsch
Symmetry 2025, 17(10), 1702; https://doi.org/10.3390/sym17101702 - 10 Oct 2025
Viewed by 136
Abstract
Practical problems often drive the development of new statistical methods by presenting real-world challenges. Testing the homogeneity of n independent Poisson means when only one observation per population is available is considered in this paper. This scenario is common in fields where limited data from multiple sources must be analyzed to determine whether different groups share the same underlying event rate or mean. These settings often exhibit underlying structural or spatial symmetries that influence statistical behavior. Traditional methods that rely on large sample sizes are not applicable. Hence, it is crucial to develop techniques tailored to the constraints of single observations. Under the null hypothesis, with large n and a fixed common mean λ, the likelihood ratio test statistic (LRTS) is shown to be asymptotically normally distributed, with the mean and variance being approximated by a truncation method and a parametric bootstrap method. Moreover, with fixed n and large λ, under the null hypothesis, the LRTS is shown to be asymptotically distributed as a chi-square with n − 1 degrees of freedom. The Bartlett correction method is applied to improve the accuracy of the asymptotic distribution of the LRTS. We highlight the practical relevance of the proposed method through applications to wildfire and radioactive event data, where correlated observations and sparse sampling are common. Simulation studies further demonstrate the accuracy and robustness of the test under various scenarios, making it well-suited for modern applications in environmental science and risk assessment. Full article
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)
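For one observation x_i per population, the unconstrained MLE is λ̂_i = x_i and the null MLE is λ̂ = x̄, so the likelihood ratio statistic reduces to the closed form LRTS = 2 Σ_i x_i log(x_i / x̄), with the convention 0 log 0 = 0. A minimal sketch of the statistic and a parametric-bootstrap calibration in the spirit of the abstract; the counts and bootstrap size are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lrts(x):
    """LRT statistic for H0: equal Poisson means, one observation per
    population: 2 * sum x_i * log(x_i / xbar), with 0 * log 0 = 0."""
    xbar = x.mean()
    xpos = x[x > 0]
    return 2.0 * np.sum(xpos * np.log(xpos / xbar))

# Illustrative data: n populations, one count each (hypothetical values)
x = np.array([3, 5, 2, 4, 6, 3, 1, 4, 5, 2])
t_obs = lrts(x)

# Parametric bootstrap under H0, with the common mean estimated by xbar
B = 10_000
boot = np.array([lrts(rng.poisson(x.mean(), size=x.size)) for _ in range(B)])
p_value = np.mean(boot >= t_obs)
print(f"LRTS = {t_obs:.3f}, bootstrap p-value = {p_value:.3f}")
```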
7 pages, 224 KB  
Article
On Relative Stability for Strongly Mixing Sequences
by Adam Jakubowski and Zbigniew Stanisław Szewczak
Foundations 2025, 5(4), 33; https://doi.org/10.3390/foundations5040033 - 25 Sep 2025
Viewed by 193
Abstract
We consider a class of strongly mixing sequences with infinite second moment. This class contains important GARCH processes that are applied in econometrics. We show the relative stability for such processes and construct a counterexample. We apply these results and obtain a new CLT without the requirement of exponential decay of mixing coefficients, and provide a counterexample to this as well. Full article
(This article belongs to the Section Mathematical Sciences)
16 pages, 563 KB  
Article
Practical Test and Inference on the Inheritance of Dual Multi-Factors and Tri-Normal Distributions of Quantitative Characters
by Tingzhen Zhang, Xiaoming Jia, Zhao Xu and Zhiwu Cao
Agronomy 2025, 15(9), 2203; https://doi.org/10.3390/agronomy15092203 - 17 Sep 2025
Viewed by 323
Abstract
The multi-factorial hypothesis of quantitative trait inheritance originated from Nilsson-Ehle's wheat hybridization experiments. It takes unit traits as its object and rests mathematically on the binomial distribution. Because it requires all factors to follow the same distribution, it cannot accommodate genes with other distributions; this is its limitation. Moreover, it does not incorporate the environmental effects that contribute to the phenotype, so it is not comprehensive enough. Starting from the wholeness of quantitative traits, building on the central limit theorem, and analyzing both genotype and environment, this article proposes the hypothesis of the inheritance of dual multi-factors and tri-normal distributions of quantitative traits. This genetic model was tested with practical examples, and three inferences were drawn. Method and Results: First, the wholeness of quantitative traits was discussed, leading to the above hypothesis. Next, using many examples of normally distributed quantitative characters in homogeneous populations, environments without genotype-by-environment interaction (GEI) were identified. Then, examples of normal distributions of the same quantitative characters in homogeneous populations and in segregating populations of the same family were used. The normal distribution of quantitative characters in the homogeneous populations indicated that the test locations were environments without GEI. Using the properties of the normal distribution and of variance, it was proven that the normal distribution of phenotypic values for a quantitative trait in a segregating population arises as the sum of normally distributed genotypic values and environmental effects, which enables the genetic model to be tested in practice. Three types of normal distributions of quantitative traits were inferred, indicating that the quantitative characters of a considerable number of organisms in nature follow a normal distribution, expressing continuous variation. Full article
(This article belongs to the Section Crop Breeding and Genetics)
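The step the abstract says was proven "by utilizing the properties of normal distribution and variance" is the standard stability of the normal family under independent addition. Stated compactly, with G the genotypic value and E the environmental effect (labels introduced here for clarity, not the paper's notation):

```latex
% Phenotype P as the sum of independent genotypic value G and environmental effect E
G \sim \mathcal{N}(\mu_G, \sigma_G^2), \qquad
E \sim \mathcal{N}(\mu_E, \sigma_E^2), \qquad G \perp E
\;\Longrightarrow\;
P = G + E \sim \mathcal{N}\!\left(\mu_G + \mu_E,\; \sigma_G^2 + \sigma_E^2\right).
```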
32 pages, 5167 KB  
Article
Limiting Loss Distribution of Default and Prepayment for Loan Portfolios and Its Application in RMBS
by Chenxi Xia, Xin Zang, Lan Bu, Qinhan Duan and Jingping Yang
Risks 2025, 13(8), 153; https://doi.org/10.3390/risks13080153 - 15 Aug 2025
Viewed by 609
Abstract
This paper studies the joint distribution of the default and prepayment losses for a large portfolio of loans, based on a bottom-up approach. The repayment behaviors of loans in the portfolio are determined by both systematic and idiosyncratic risk factors and are conditionally independent given the systematic factors. The joint two-dimensional limit distributions of the portfolio default and prepayment losses are obtained, including the strong law of large numbers and the central limit theorem. A numerical study of the portfolio losses is performed for some simplified models. Finally, we conduct an empirical analysis of residential mortgage-backed securities (RMBS) based on Freddie Mac's dataset. The empirical results reveal the impacts of different factors on the default and prepayment behaviors, and the distributions of the portfolio losses are simulated based on the empirical estimates to show how they differ from log-normal distributions. Full article
(This article belongs to the Special Issue Applied Financial and Actuarial Risk Analytics)
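A toy version of the bottom-up construction: conditional on a systematic factor Z, loans independently default, prepay, or survive, and the portfolio's loss fractions concentrate around their conditional means as the portfolio grows (the strong-law part of the abstract). The probit links and coefficients below are purely illustrative, not the paper's specification:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def loss_fractions(n, z, a_d=-2.0, b_d=0.8, a_p=-1.5, b_p=-0.5):
    """One portfolio draw: conditional on systematic factor z, loans are
    i.i.d. with default prob p_d(z) and prepayment prob p_p(z)
    (probit links with hypothetical coefficients)."""
    p_d = norm.cdf(a_d + b_d * z)
    p_p = norm.cdf(a_p + b_p * z)
    u = rng.random(n)
    defaults = np.mean(u < p_d)
    prepays = np.mean((u >= p_d) & (u < p_d + p_p))
    return defaults, prepays

z = rng.standard_normal()           # one realization of the systematic factor
for n in (100, 10_000, 1_000_000):  # fractions settle near (p_d(z), p_p(z))
    print(n, loss_fractions(n, z))
```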
23 pages, 365 KB  
Article
Optimal Convergence of Slow–Fast Stochastic Reaction–Diffusion–Advection Equation with Hölder-Continuous Coefficients
by Li Yang and Lin Liu
Mathematics 2025, 13(16), 2550; https://doi.org/10.3390/math13162550 - 8 Aug 2025
Viewed by 337
Abstract
This paper investigates a slow–fast stochastic reaction–diffusion–advection equation with Hölder-continuous coefficients, where the irregularity of the coefficients presents significant analytical challenges. Our approach fundamentally relies on techniques from Poisson equations in Hilbert spaces, through which we establish optimal strong convergence rates for the approximation of the averaged solution by the slow component. A key advantage of this work is that the coefficients are merely Hölder continuous, yet the optimal rate can still be obtained; this is crucial for subsequent central limit theorems and numerical approximations. Full article
19 pages, 2366 KB  
Article
Data Augmentation and Machine Learning for Heavy Metal Detection in Mulberry Leaves Using Laser-Induced Breakdown Spectroscopy (LIBS) Spectral Data
by Heiner Castro Gutiérrez, Carlos Robles-Algarín and Aura Polo
Processes 2025, 13(6), 1688; https://doi.org/10.3390/pr13061688 - 28 May 2025
Cited by 1 | Viewed by 935
Abstract
Laser-induced breakdown spectroscopy (LIBS) is a rapid, cost-effective technique for elemental analysis that enables real-time measurements with minimal sample preparation. However, LIBS datasets are often high-dimensional and imbalanced, limiting the performance of conventional machine-learning models due to small sample sizes. To address this, we propose a novel data augmentation method that generates synthetic samples using normal distribution sampling. This approach is justified by the central limit theorem, since each spectrum in the dataset used in this study results from averaging over 80 measurements per sample, yielding approximately Gaussian-distributed features. We also apply a dimensionality reduction method based on random forest feature importance, selecting features that account for 95% of cumulative importance. This selection reduces model complexity while preserving performance. Using random forest for both feature selection and modeling, our approach achieves superior accuracy for copper and competitive performance for chromium detection in mulberry leaves. Additionally, the selected wavelengths partially match reference lines reported by NIST, supporting model interpretability. These findings highlight the potential of combining data augmentation and machine learning for more robust and interpretable LIBS-based heavy metal detection. Full article
(This article belongs to the Special Issue 1st SUSTENS Meeting: Advances in Sustainable Engineering Systems)
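A sketch of the normal-sampling augmentation described above: each class's features are resampled from a fitted Gaussian, which the abstract justifies via the CLT because every original spectrum averages roughly 80 shots. The array names and shapes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def augment_class(X_class, n_new):
    """Generate synthetic spectra for one class by sampling each feature
    (wavelength) from a normal distribution fitted to that class."""
    mu = X_class.mean(axis=0)
    sigma = X_class.std(axis=0, ddof=1)
    return rng.normal(mu, sigma, size=(n_new, X_class.shape[1]))

# Hypothetical: 12 spectra x 4000 wavelengths for one contamination class
X_cu = rng.gamma(2.0, 1.0, size=(12, 4000))
X_synthetic = augment_class(X_cu, n_new=50)
print(X_synthetic.shape)  # (50, 4000)
```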
25 pages, 380 KB  
Article
Limit Theorems for the Non-Convex Multispecies Curie–Weiss Model
by Francesco Camilli, Emanuele Mingione and Godwin Osabutey
Mathematics 2025, 13(8), 1343; https://doi.org/10.3390/math13081343 - 19 Apr 2025
Viewed by 662
Abstract
We study the thermodynamic properties of the generalized non-convex multispecies Curie–Weiss model, where interactions among different types of particles (forming the species) are encoded in a generic matrix. For spins with a generic prior distribution, we compute the thermodynamic limit of the generating functional for the moments of the Boltzmann–Gibbs measure using simple interpolation techniques. For Ising spins, we further analyze the fluctuations of the magnetization in the thermodynamic limit under the Boltzmann–Gibbs measure. It is shown that a central limit theorem (CLT) holds for a rescaled and centered vector of species magnetizations, which converges to either a centered or non-centered multivariate normal distribution, depending on the rate of convergence of the relative sizes of the species. Full article
(This article belongs to the Section E4: Mathematical Physics)
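For the Ising-spin case, the fluctuation statement can be probed numerically with a Metropolis chain on the mean-field Hamiltonian H = −(N/2) Σ_{r,s} Δ_{rs} α_r α_s m_r m_s, where m_r is the magnetization of species r and α_r its relative size. A hedged sketch; the sizes, coupling matrix (deliberately not positive definite), and temperature are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

N_sp = np.array([400, 600])                   # species sizes N_1, N_2
N = N_sp.sum()
alpha = N_sp / N                              # relative sizes alpha_r
Delta = np.array([[1.0, -0.4],
                  [-0.4, 0.6]])               # symmetric, non-convex coupling
beta = 0.5                                    # inverse temperature

spins = rng.choice([-1.0, 1.0], size=N)
species = np.repeat([0, 1], N_sp)
m = np.array([spins[species == r].mean() for r in (0, 1)])

def delta_H(i):
    """Energy change from flipping spin i under
    H = -(N/2) * sum_{r,s} Delta[r,s] * alpha[r] * alpha[s] * m[r] * m[s]."""
    r = species[i]
    d = -2.0 * spins[i] / N_sp[r]             # induced change in m[r]
    g = (Delta[r] * alpha) @ m                # mean field acting on species r
    return -(N / 2.0) * alpha[r] * (2.0 * d * g + Delta[r, r] * alpha[r] * d * d)

mags = []
for it in range(200_000):                     # single-spin Metropolis updates
    i = rng.integers(N)
    if rng.random() < np.exp(-beta * delta_H(i)):
        m[species[i]] -= 2.0 * spins[i] / N_sp[species[i]]
        spins[i] *= -1.0
    if it % 200 == 0:
        mags.append(m.copy())

mags = np.array(mags[200:])                   # crude burn-in discard
print("mean m:", mags.mean(axis=0), "\ncov:\n", np.cov(mags.T))
```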
26 pages, 2247 KB  
Article
Bifurcation Analysis of a Class of Food Chain Model with Two Time Delays
by Xiuling Li, Siyu Dong and Haotian Fan
Mathematics 2025, 13(8), 1307; https://doi.org/10.3390/math13081307 - 16 Apr 2025
Cited by 1 | Viewed by 463
Abstract
This paper investigates the Hopf bifurcation of a three-dimensional food chain model with two time delays, focusing on the synergistic effect of time delays in energy transfer between different trophic levels on the stability of the system. By analyzing the distribution of the roots of the characteristic equation, the stability conditions of the internal equilibrium point and the criterion for the existence of the Hopf bifurcation are established. Using normal form theory and the center manifold theorem, explicit formulas for determining the bifurcation direction and the stability of the bifurcating periodic solution are obtained. Numerical simulations verify the theoretical results. This study shows that increasing the time delay leads to instability of the food chain model through a Hopf bifurcation, producing limit cycle oscillations. This work simulates the asymmetric propagation mode of population fluctuations observed in natural ecosystems, providing a theoretical basis for analyzing the coevolution of complex food webs. Full article
18 pages, 314 KB  
Article
The POVM Theorem in Bohmian Mechanics
by Christian Beck and Dustin Lazarovici
Entropy 2025, 27(4), 391; https://doi.org/10.3390/e27040391 - 7 Apr 2025
Viewed by 1381
Abstract
The POVM theorem is a central result in Bohmian mechanics, grounding the measurement formalism of standard quantum mechanics in a statistical analysis based on the quantum equilibrium hypothesis (the Born rule for Bohmian particle positions). It states that the outcome statistics of an experiment are described by a positive operator-valued measure (POVM) acting on the Hilbert space of the measured system. In light of recent debates about the scope and status of this result, we provide a systematic presentation of the POVM theorem and its underlying assumptions with a focus on their conceptual foundations and physical justifications. We conclude with a brief discussion of the scope of the POVM theorem—especially the sense in which it does (and does not) place limits on what is “measurable” in Bohmian mechanics. Full article
(This article belongs to the Special Issue Quantum Foundations: 100 Years of Born’s Rule)
31 pages, 8488 KB  
Review
The Rise of the Brown–Twiss Effect
by David Charles Hyland
Photonics 2025, 12(4), 301; https://doi.org/10.3390/photonics12040301 - 25 Mar 2025
Viewed by 371
Abstract
Despite the simplicity of flux collecting hardware, robustness to misalignments, and immunity to seeing conditions, Intensity Correlation Imaging arrays using the Brown–Twiss effect to determine two-dimensional images have been burdened with very long integration times. The root cause is that the essential phase retrieval algorithms must use image domain constraints, and the traditional signal-to-noise calculations do not account for these. Thus, the conventional formulations are not efficient estimators. Recently, the long integration times have been emphatically removed by a sequence of papers. This paper reviews the theoretical work that removes the long integration times, making Intensity Correlation Imaging a practical and inexpensive method for high-resolution astronomy. Full article
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)
20 pages, 2504 KB  
Article
Sensory or Intelligence Data Compression Can Drive the Yerkes–Dodson Effect
by Rodrick Wallace
Symmetry 2025, 17(2), 235; https://doi.org/10.3390/sym17020235 - 6 Feb 2025
Viewed by 1109
Abstract
New probability models of inherently embodied cognition derived from the asymptotic limit theorems of information and control theories show, where the Weber–Fechner, Stevens, Hick–Hyman, and Pieron’s psychophysics laws—and analogous processes of sensory data rate compression—operate, that sufficient arousal will engender the classic Yerkes–Dodson effect responses for ‘easy’ and ‘difficult’ challenges, depending on the level of ‘noise’ impeding the cognition rate. A ‘hallucination’ mode is found to arise at low arousal, and, in the face of sufficient noise, a ‘panic’ mode at high arousal. Systems that are ‘ductile’ in a formal sense, however, are not afflicted by such hallucination, although panic remains for difficult challenges. Similar dynamics that surround organized conflict on ‘Clausewitz landscapes’ of fog, friction, and deadly adversarial intent have long been studied. We find a central mechanism for cognitive failure under increasing stress across a very broad range of modalities to be enough—usually badly needed—compression of sensory/intelligence and internal information transmission rates. It seems possible, with some effort, to convert the probability models developed here into robust statistical tools for the study and limited control of critical real-time, real-world embodied cognitive phenomena associated with cellular, neural, individual, machine, and institutional systems and their many composites. Full article
66 pages, 8492 KB  
Review
An Overview of Underwater Optical Wireless Communication Channel Simulations with a Focus on the Monte Carlo Method
by Intesar Ramley, Hamdah M. Alzayed, Yas Al-Hadeethi, Mingguang Chen and Abeer Z. Barasheed
Mathematics 2024, 12(24), 3904; https://doi.org/10.3390/math12243904 - 11 Dec 2024
Cited by 7 | Viewed by 3503
Abstract
Building a reliable and optimum underwater optical wireless communication (UOWC) system requires identifying all potential factors that cause the attenuation and dispersion of the optical signal. The radiative transfer equation (RTE) solution can be utilised to determine these essential design parameters to build an optimum UOWC system. The RTE has various numerical and simplified analytical solutions with varying reliability and capability scope. Many scientists consider the Monte Carlo simulation (MCS) method to be a consistent and widely accepted approach to formulating an RTE solution, which models the propagation of photons through various underwater channel environments. MCS has recently attracted attention because it allows reliable models of underwater environments to be built. Based on such a model, this report demonstrates the resulting received optical power distribution as an output for an array of emulation inputs, including transmitted light's spatial and temporal distribution, channel link regimes, and associated impairments. This study includes a survey component, which presents the required framework's foundation to establish a valid RTE model, leading to solutions with different scopes and depths that can be drawn for practical UOWC use cases. Hence, this work shows how underlying modelling elements can influence a solution technique, including inherent optical properties (IOPs), apparent optical properties (AOPs), and the potential limitations of various photon scattering function formats. The work introduces a novel derivation of mathematical equations for single- and multiple-light-pulse propagation in homogeneous and inhomogeneous channels, forming the basis for MCS-based UOWC studies. The reliability of the MCS implementation is assessed using compliance with the Central Limit Theorem (CLT) and leveraging the Henyey–Greenstein phase function with full-scale random selection. As part of the tutorial component of this work, the inner workings of the MCS are manifested using an object-oriented design method. This work therefore targets researchers interested in using MCS for UOWC research in general, and UOWC photon propagation in seawater channel modelling in particular. Full article
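One concrete ingredient named in the abstract is Henyey–Greenstein scattering with full-scale random selection. The standard inverse-CDF sampler for the scattering-angle cosine is a few lines; the asymmetry value g below is a typical seawater-like choice, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_hg_costheta(g, size):
    """Inverse-CDF sampling of cos(theta) from the Henyey-Greenstein
    phase function with asymmetry parameter g in (-1, 1)."""
    xi = rng.random(size)
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0                 # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

# Sanity check: for HG, E[cos(theta)] equals g by construction
cos_t = sample_hg_costheta(g=0.924, size=1_000_000)
print(cos_t.mean())   # should be close to 0.924
```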
28 pages, 4219 KB  
Article
Angle Expansion Estimation and Correction Based on the Lindeberg–Feller Central Limit Theorem Under Multi-Pulse Integration
by Jiong Cai, Rui Wang and Handong Yang
Remote Sens. 2024, 16(23), 4535; https://doi.org/10.3390/rs16234535 - 3 Dec 2024
Cited by 3 | Viewed by 885
Abstract
The radar monopulse angle measurement can obtain a target’s angle information within a single pulse, meaning that factors such as target motion and amplitude fluctuations, which vary over time, do not affect the angle measurement accuracy. However, in practical applications, when a target’s signal-to-noise ratio (SNR) is low, the single pulse signal is severely affected by noise, leading to a significant deterioration in angle measurement accuracy. Therefore, it is usually necessary to coherently integrate multiple pulses before estimating the angle. This paper constructs an angle expansion model for a multi-pulse angle measurement under coherent integration. The analysis reveals that even under noise-free conditions, after coherently integrating multiple pulses, the coupling of target amplitude fluctuations and motion state can still cause significant errors in the angle measurement. Subsequently, this paper conducts a detailed analysis of the impact of the amplitude fluctuations and target maneuvers on the random angle measurement error. It also derives approximate probability density functions of angle measurement errors under various fluctuation and motion scenarios based on the Lindeberg–Feller central limit theorem. In addition, based on the angle expansion model and the random error distribution, this paper proposes an angle correction algorithm based on multi-pulse integration and long-term estimation. Numerical experiments and radar data in the field verify the impact of target characteristics on the angle measurement under multi-pulse integration and the effectiveness of the angle correction algorithm. Full article
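The Lindeberg–Feller version of the CLT is the right tool here because the per-pulse error terms are independent but not identically distributed once the amplitude fluctuates from pulse to pulse. A self-contained numerical illustration; the per-pulse error model below is hypothetical, not the paper's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Independent, zero-mean, non-identically distributed per-pulse errors:
# the standard deviation drifts with pulse index, mimicking amplitude
# fluctuation across a coherent processing interval.
n_pulses, n_trials = 256, 20_000
sig = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(n_pulses) / 64)
errs = rng.uniform(-1.0, 1.0, size=(n_trials, n_pulses)) * (sig * np.sqrt(3.0))

# Lindeberg-Feller: the sum, standardized by s_n = sqrt(sum of variances),
# converges to N(0, 1) even though the terms are not identically distributed.
s_n = np.sqrt(np.sum(sig ** 2))
z = errs.sum(axis=1) / s_n
print(stats.kstest(z, "norm"))  # large p-value expected: z is close to N(0, 1)
```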
17 pages, 1561 KB  
Article
Scrutinizing the Statistical Distribution of a Composite Index of Soil Degradation as a Measure of Early Desertification Risk in Advanced Economies
by Vito Imbrenda, Marco Maialetti, Adele Sateriano, Donato Scarpitta, Giovanni Quaranta, Francesco Chelli and Luca Salvati
Environments 2024, 11(11), 246; https://doi.org/10.3390/environments11110246 - 6 Nov 2024
Viewed by 1218
Abstract
Using descriptive and inferential techniques together with simplified metrics derived from the ecological discipline, we offer a long-term investigation of the Environmental Sensitive Area Index (ESAI) as a proxy of land degradation vulnerability in Italy. This assessment was specifically carried out on a decadal scale from 1960 to 2020 at the province (NUTS-3 sensu Eurostat) level and benefited from a short-term forecast for 2030, based on four simplified assumptions grounded on a purely deterministic (‘what … if’) approach. The spatial distribution of the ESAI was investigated at each observation year (1960, 1970, 1980, 1990, 2000, 2010, 2020, 2030) calculating descriptive statistics (central tendency, variability, and distribution shape), deviation from normality, and the increase (or decrease) in diversification in the index scores. Based on nearly 300 thousand observations all over Italy, provinces were considered representative spatial units because they include a relatively broad number of ESAI measures. Assuming a large sample size as a pre-requisite for the stable distribution of the most relevant moments of any statistical distribution—because of the convergence law underlying the central limit theorem—we found that the ESAI scores have increased significantly over time in both central values (i.e., means or medians) and variability across the central tendency (i.e., coefficient of variation). Additionally, ecological metrics reflecting diversification trends in the vulnerability scores delineated a latent shift toward a less diversified (statistical) distribution with a concentration of the observed values toward the highest ESAI scores—possibly reflecting a net increase in the level of soil degradation, at least in some areas. Multiple exploratory techniques (namely, a Principal Component Analysis and a two-way hierarchical clustering) were run on the two-way (data) matrix including distributional metrics (by columns) and temporal observations (by rows). The empirical findings of these techniques delineate the consolidation of worse predisposing conditions to soil degradation in recent times, as reflected in a sudden increase in the ESAI scores—both average and maximum values. These trends underline latent environmental dynamics leading to an early desertification risk, thus representing a valid predictive tool both in the present conditions and in future scenarios. A comprehensive scrutiny of past, present, and future trends in the ESAI scores using mixed (parametric and non-parametric) statistical tools proved to be an original contribution to the study of soil degradation in advanced economies. Full article
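The screening pipeline the abstract outlines (central tendency, variability, distribution shape, and deviation from normality per observation year) is straightforward to reproduce. A sketch on hypothetical stand-in scores; ESAI is bounded in [1, 2], but the drift and sample size here are illustrative, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Hypothetical stand-in for the ~300k ESAI scores per observation year
years = (1960, 1990, 2020)
esai = {y: np.clip(rng.normal(1.2 + 0.004 * (y - 1960), 0.15, 300_000), 1.0, 2.0)
        for y in years}

for y, x in esai.items():
    cv = x.std(ddof=1) / x.mean()            # coefficient of variation
    skew, kurt = stats.skew(x), stats.kurtosis(x)
    # D'Agostino-Pearson normality test (Shapiro-Wilk p-values are
    # unreliable at sample sizes this large)
    k2, p = stats.normaltest(x)
    print(f"{y}: mean={x.mean():.3f} median={np.median(x):.3f} "
          f"CV={cv:.3f} skew={skew:.2f} kurt={kurt:.2f} normal-p={p:.3g}")
```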