Search Results (19)

Search Parameters:
Keywords = discrete beta distribution

36 pages, 2186 KB  
Article
On a Beta-Gamma Discrete Distribution for Thunderstorm Count Modeling with Risk Analysis
by Tassaddaq Hussain, Enrique Villamor, Mohammad Shakil, Mohammad Ahsanullah and B. M. Golam Kibria
Mathematics 2025, 13(24), 3913; https://doi.org/10.3390/math13243913 - 7 Dec 2025
Viewed by 605
Abstract
Risk management is vital for financial institutions to evaluate and mitigate potential losses. Thunderstorm count modeling with risk analysis is used by various sectors, such as insurance and utility companies, to forecast storm recurrence, analyze risk, and estimate financial losses based on factors like wind speed, hail size, and tornado potential. This paper introduces a novel discrete distribution, the Beta-Gamma Discrete (BGD) distribution, designed for modeling count data that inherently excludes zero values. Developed through the compounding of a discrete gamma distribution with a beta distribution, the BGD offers significant flexibility in handling overdispersion and complex data characteristics. The study derives key statistical properties of the BGD, including its probability mass function, moments, hazard rate function, moment generating function, and mean residual life. A comprehensive characterization theorem is also established. The model’s practical utility is demonstrated through an application to thunderstorm event data from the Kennedy Space Center (KSC), where the frequency of thunderstorms per event is a critical operational concern. The performance of the BGD is thoroughly assessed against established zero-truncated models—namely, the Zero-Truncated Generalized Poisson (ZTGP), Size-Biased Negative Binomial (SBNB), and Zero-Truncated Generalized Negative Binomial (ZTGNB)—using evaluation criteria such as the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Chi-square goodness-of-fit, and the Vuong test. The results consistently show that the BGD provides a superior and more accurate fit for the thunderstorm data, which can help NASA and other space agencies and establishes the BGD as a robust and effective tool for modeling positive count data in meteorological and other applied contexts with risk analysis. Full article
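The compounding construction the abstract describes (a discrete gamma mixed by a beta distribution, with zero excluded) can be illustrated with a small Monte Carlo sketch. The BGD's exact parameterization and probability mass function are given in the paper; the roles assumed below — the beta draw acting as the gamma rate, discretization by flooring — are illustrative assumptions, not the authors' definitions.

```python
import math
import random

# Hypothetical sketch of a beta-gamma discrete compound sampler.
# Assumption: p ~ Beta(a, b) mixes into the rate of a gamma draw,
# which is discretized by flooring; zeros are resampled to mimic the
# zero-excluding support described in the abstract.

def discretized_gamma(shape, rate, rng):
    """Floor of a Gamma(shape, scale=1/rate) draw gives a count."""
    return math.floor(rng.gammavariate(shape, 1.0 / rate))

def bgd_like_sample(shape, a, b, rng=None):
    """Compound sampler: p ~ Beta(a, b), then a discretized gamma with rate p."""
    rng = rng or random.Random()
    p = rng.betavariate(a, b)
    while True:
        k = discretized_gamma(shape, p, rng)
        if k >= 1:  # zero-truncation by rejection
            return k
```

Samples from such a compound are positive integers whose dispersion exceeds that of a single gamma, since the beta mixing adds a second layer of variability.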
(This article belongs to the Special Issue Statistical Analysis and Data Science for Complex Data, 2nd Edition)

36 pages, 560 KB  
Review
A Review: Construction of Statistical Distributions
by Kai-Tai Fang, Yu-Xuan Lin and Yu-Hui Deng
Entropy 2025, 27(12), 1188; https://doi.org/10.3390/e27121188 - 23 Nov 2025
Cited by 1 | Viewed by 1682
Abstract
Statistical modeling is fundamentally based on probability distributions, which can be discrete or continuous and univariate or multivariate. This review focuses on the methods used to construct these distributions, covering both traditional and newly developed approaches. We first examine classic distributions such as the normal, exponential, gamma, and beta for univariate data, and the multivariate normal, elliptical, and Dirichlet for multidimensional data. We then address how, in recent decades, the demand for more flexible modeling tools has led to the creation of complex meta-distributions built using copula theory. Full article
(This article belongs to the Special Issue Number Theoretic Methods in Statistics: Theory and Applications)

24 pages, 2473 KB  
Article
An Approximate Solution for M/G/1 Queues with Pure Mixture Service Time Distributions
by Melik Koyuncu and Nuşin Uncu
Symmetry 2025, 17(10), 1753; https://doi.org/10.3390/sym17101753 - 17 Oct 2025
Viewed by 1285
Abstract
This study introduces an approximate solution for the M/G/1 queueing model in scenarios where the service time distribution follows a pure mixture distribution. The derivation of the proposed approximation leverages the analytical tractability of the variance for certain mixture distributions. By incorporating this variance into the Pollaczek–Khinchine equation, an approximate closed-form expression for the M/G/1 queue is obtained. The formulation is extended to service-time distributions composed of two or more components, specifically Gamma, Gaussian, and Beta mixtures. To assess the accuracy of the proposed approach, a discrete-event simulation of an M/G/1 system was conducted using random variates generated from these mixture distributions. The comparative analysis reveals that the approximation yields results in close agreement with simulation outputs, with particularly high accuracy observed for Gaussian mixture cases. Full article
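The key step the abstract describes — feeding the (analytically tractable) moments of a mixture service-time distribution into the Pollaczek–Khinchine equation — can be sketched directly. The moment combination below is exact for any finite mixture; the paper's specific approximations for Gamma, Gaussian, and Beta components are not reproduced here.

```python
# Sketch: Pollaczek-Khinchine mean queueing delay for an M/G/1 queue
# whose service time S is a finite mixture. For mixture weights w_i,
# component means m_i and variances v_i:
#   E[S]   = sum w_i * m_i
#   E[S^2] = sum w_i * (v_i + m_i^2)
# and Wq = lam * E[S^2] / (2 * (1 - rho)) with rho = lam * E[S].

def mixture_moments(weights, means, variances):
    """First and second raw moments of a finite mixture distribution."""
    m1 = sum(w * m for w, m in zip(weights, means))
    m2 = sum(w * (v + m * m) for w, m, v in zip(weights, means, variances))
    return m1, m2

def pk_mean_wait(lam, weights, means, variances):
    """Mean waiting time in queue (excluding service) for M/G/1."""
    es, es2 = mixture_moments(weights, means, variances)
    rho = lam * es
    assert rho < 1.0, "queue must be stable (rho < 1)"
    return lam * es2 / (2.0 * (1.0 - rho))
```

As a sanity check, a single exponential component (mean 1, variance 1) at arrival rate 0.5 recovers the familiar M/M/1 value Wq = ρ/(μ(1 − ρ)) = 1.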
(This article belongs to the Section Mathematics)

17 pages, 7385 KB  
Article
Time-Division Subbands Beta Distribution Random Space Vector Pulse Width Modulation Method for the High-Frequency Harmonic Dispersion
by Jian Wen and Xiaobin Cheng
Electronics 2025, 14(14), 2852; https://doi.org/10.3390/electronics14142852 - 16 Jul 2025
Cited by 1 | Viewed by 664
Abstract
Conventional space vector pulse width modulation (CSVPWM) with a fixed switching frequency generates significant sideband harmonics in the three-phase voltage. Discrete random switching frequency SVPWM (DRSF-SVPWM) methods have been widely applied in motor control systems for the suppression of tonal harmonic energy. To further reduce the amplitude of high-frequency harmonics within a limited switching-frequency variation range, this paper proposes a time-division subbands beta distribution random SVPWM (TSBDR-SVPWM) method. The overall frequency band of the switching frequency is equally divided into N subbands, and each fundamental cycle of the line voltage is segmented into 2*(N-1) equal time intervals. Additionally, within each time segment, the switching frequency is randomly selected from the corresponding subband and follows the optimal discrete beta distribution. The switching frequency harmonic energy in the line voltage spectrum spreads across multiple frequency subbands and discrete frequency components, thereby forming a more uniform power spectrum of the line voltage. Both simulation and experimental results validate that, compared with CSVPWM, the sideband harmonic amplitude is reduced by more than 8.5 dB across the entire range of speed and torque conditions in the TSBDR-SVPWM. Furthermore, with the same variation range of the switching frequency, the proposed method achieves the lowest switching frequency harmonic amplitude and flattest line voltage spectrum compared with several state-of-the-art random modulation methods. Full article
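The subband scheme described above can be sketched as follows. Everything specific here is an assumption: the triangular mapping of time segments onto subbands, the use of a continuous beta draw with shape parameters a = b = 2, and the band limits; the paper derives an optimal *discrete* beta distribution whose parameters are not reproduced.

```python
import random

# Illustrative sketch: split [f_min, f_max] into n_subbands equal bands,
# map each of the 2*(N-1) time segments per fundamental cycle onto a band
# via a triangular sweep (0, 1, ..., N-1, N-2, ..., 1), then draw the
# switching frequency from a beta distribution inside that band.

def random_switching_freq(f_min, f_max, n_subbands, segment,
                          a=2.0, b=2.0, rng=None):
    rng = rng or random.Random()
    width = (f_max - f_min) / n_subbands
    idx = segment % (2 * (n_subbands - 1))
    band = idx if idx < n_subbands else 2 * (n_subbands - 1) - idx
    lo = f_min + band * width
    return lo + rng.betavariate(a, b) * width  # beta-distributed within the band
```

Randomizing within per-segment subbands, rather than over the whole band at once, is what spreads the harmonic energy across multiple distinct spectral regions.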
(This article belongs to the Section Power Electronics)

11 pages, 245 KB  
Article
Formulae for Generalization of Touchard Polynomials with Their Generating Functions
by Ayse Yilmaz Ceylan and Yilmaz Simsek
Symmetry 2025, 17(7), 1126; https://doi.org/10.3390/sym17071126 - 14 Jul 2025
Cited by 1 | Viewed by 1148
Abstract
One of the main motivations of this paper is to construct generating functions for a generalization of the Touchard polynomials (or generalized exponential functions) and certain special numbers. Many novel formulas and relations for these polynomials are found by using the Euler derivative operator and functional equations of these functions. Some novel relations among these polynomials, beta polynomials, and Bernstein polynomials, which are related to the binomial distribution from the class of discrete probability distributions, are given. Full article
(This article belongs to the Section Mathematics)
21 pages, 4561 KB  
Article
Lateral Loaded Pile Reliability Analysis Using the Random Set Method
by Marek Wyjadłowski
Buildings 2025, 15(6), 882; https://doi.org/10.3390/buildings15060882 - 12 Mar 2025
Cited by 2 | Viewed by 1564
Abstract
This study presents a procedure applied to design problems for lateral loaded piles. Calculations for a rigid concrete pile in non-cohesive soil are conducted with the aim of estimating the allowable horizontal force using the methods of Broms and Petrasovit. Random sets are applied to represent the uncertainties of soil parameters, including the internal friction angle and unit weight. Random variables are described using log-normal and beta distributions. Random set theory is utilised to represent variability in the form of probability boxes, possibility distributions, cumulative distribution functions, or intervals. Based on the assumed distributions of the subsoil, the lower and upper bounds for the precise probability of fulfilment of the limit state function of a laterally loaded pile are estimated. The reliability calculation procedure is implemented using the R package (R Studio v2024.12.1+563), and the limit forces and reliability indicators calculated using the two considered methods are compared. The presented procedure serves as an example of the use of a probabilistic approach for the assessment of the capacity of a laterally loaded pile, using a setup for the task involving set-based data and discrete probability distributions. Full article

24 pages, 1946 KB  
Article
Network Diffusion-Constrained Variational Generative Models for Investigating the Molecular Dynamics of Brain Connectomes Under Neurodegeneration
by Jiajia Xie, Raghav Tandon and Cassie S. Mitchell
Int. J. Mol. Sci. 2025, 26(3), 1062; https://doi.org/10.3390/ijms26031062 - 26 Jan 2025
Cited by 4 | Viewed by 3099
Abstract
Alzheimer’s disease (AD) is a complex and progressive neurodegenerative condition with significant societal impact. Understanding the temporal dynamics of its pathology is essential for advancing therapeutic interventions. Empirical and anatomical evidence indicates that network decoupling occurs as a result of gray matter atrophy. However, the scarcity of longitudinal clinical data presents challenges for computer-based simulations. To address this, a first-principles-based, physics-constrained Bayesian framework is proposed to model time-dependent connectome dynamics during neurodegeneration. This temporal diffusion network framework segments pathological progression into discrete time windows and optimizes connectome distributions for biomarker Bayesian regression, conceptualized as a learning problem. The framework employs a variational autoencoder-like architecture with computational enhancements to stabilize and improve training efficiency. Experimental evaluations demonstrate that the proposed temporal meta-models outperform traditional static diffusion models. The models were evaluated using both synthetic and real-world MRI and PET clinical datasets that measure amyloid beta, tau, and glucose metabolism. The framework successfully distinguishes normative aging from AD pathology. Findings provide novel support for the “decoupling” hypothesis and reveal eigenvalue-based evidence of pathological destabilization in AD. Future optimization of the model, integrated with real-world clinical data, is expected to improve applications in personalized medicine for AD and other neurodegenerative diseases. Full article

31 pages, 402 KB  
Article
Hidden Variable Models in Text Classification and Sentiment Analysis
by Pantea Koochemeshkian, Eddy Ihou Koffi and Nizar Bouguila
Electronics 2024, 13(10), 1859; https://doi.org/10.3390/electronics13101859 - 10 May 2024
Cited by 2 | Viewed by 1998
Abstract
In this paper, we propose extensions to the multinomial principal component analysis (MPCA) framework, which is a Dirichlet (Dir)-based model widely used in text document analysis. The MPCA is a discrete analogue of standard PCA (which operates on continuous data using Gaussian distributions). With the extensive use of count data in modeling nowadays, the current limitations of the Dir prior (the assumption of independence among its components and a very restricted covariance structure) tend to prevent efficient processing. As a result, we propose alternatives with flexible priors, such as the generalized Dirichlet (GD) and Beta-Liouville (BL), leading to the GDMPCA and BLMPCA models, respectively. Besides using these priors because they generalize the Dir, importantly, we also implement a deterministic method that uses variational Bayesian inference for the fast convergence of the proposed algorithms. Additionally, we use collapsed Gibbs sampling to estimate the model parameters, providing a computationally efficient method for inference. These two variational models offer higher flexibility while assigning each observation to a distinct cluster. We create several multitopic models and evaluate their strengths and weaknesses using real-world applications such as text classification and sentiment analysis. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)

15 pages, 3284 KB  
Article
Comments on the Bernoulli Distribution and Hilbe’s Implicit Extra-Dispersion
by Daniel A. Griffith
Stats 2024, 7(1), 269-283; https://doi.org/10.3390/stats7010016 - 5 Mar 2024
Cited by 4 | Viewed by 3177
Abstract
For decades, conventional wisdom maintained that binary 0–1 Bernoulli random variables cannot contain extra-binomial variation. Taking an unorthodox stance, Hilbe actively disagreed, especially for correlated observation instances, arguing that the universally adopted diagnostic Pearson or deviance dispersion statistics are insensitive to a variance anomaly in a binary context, and hence simply fail to detect it. However, having the intuition and insight to sense the existence of this departure from standard mathematical statistical theory, but being unable to effectively isolate it, he classified this particular over-/under-dispersion phenomenon as implicit. This paper explicitly exposes his hidden quantity by demonstrating that the variance in/deflation it represents occurs in an underlying predicted beta random variable whose real number values are rounded to their nearest integers to convert to a Bernoulli random variable, with this discretization masking any materialized extra-Bernoulli variation. In doing so, asymptotics linking the beta-binomial and Bernoulli distributions show another conventional wisdom misconception, namely a mislabeling substitution involving the quasi-Bernoulli random variable; this undeniably is not a quasi-likelihood situation. A public bell pepper disease dataset exhibiting conspicuous spatial autocorrelation furnishes empirical examples illustrating various features of this advocated proposition. Full article
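The masking mechanism the abstract describes — an underlying beta random variable rounded to the nearest integer becomes a Bernoulli variable, hiding the beta's own variation — is easy to see numerically. This is an illustrative sketch, not the paper's derivation; the shape parameters below are arbitrary.

```python
import random

# Illustrative sketch: rounding beta draws to the nearest integer
# produces 0/1 Bernoulli outcomes with success probability P(B > 0.5).
# The discretization discards the beta's own variance, which is the
# "masked" extra-Bernoulli variation the abstract refers to.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def round_beta_to_bernoulli(a, b, n, rng=None):
    rng = rng or random.Random()
    beta_draws = [rng.betavariate(a, b) for _ in range(n)]
    bern = [1 if x > 0.5 else 0 for x in beta_draws]  # round to nearest integer
    return beta_draws, bern
```

For a symmetric Beta(2, 2), for instance, the underlying variance is 1/20 = 0.05, while the rounded Bernoulli variable has variance near the maximal 0.25; nothing in the 0/1 outcomes reveals the beta layer underneath.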

17 pages, 2377 KB  
Article
A Comparative Study of Item Response Theory Models for Mixed Discrete-Continuous Responses
by Cengiz Zopluoglu and J. R. Lockwood
J. Intell. 2024, 12(3), 26; https://doi.org/10.3390/jintelligence12030026 - 25 Feb 2024
Viewed by 4093
Abstract
Language proficiency assessments are pivotal in educational and professional decision-making. With the integration of AI-driven technologies, these assessments can more frequently use item types, such as dictation tasks, producing response features with a mixture of discrete and continuous distributions. This study evaluates novel measurement models tailored to these unique response features. Specifically, we evaluated the performance of the zero-and-one-inflated extensions of the Beta, Simplex, and Samejima’s Continuous item response models and incorporated collateral information into the estimation using latent regression. Our findings highlight that while all models provided highly correlated results regarding item and person parameters, the Beta item response model showcased superior out-of-sample predictive accuracy. However, a significant challenge was the absence of established benchmarks for evaluating model and item fit for these novel item response models. There is a need for further research to establish benchmarks for evaluating the fit of these innovative models to ensure their reliability and validity in real-world applications. Full article
(This article belongs to the Topic Psychometric Methods: Theory and Practice)

12 pages, 2375 KB  
Article
Physics-Based Signal Analysis of Genome Sequences: An Overview of GenomeBits
by Enrique Canessa
Microorganisms 2023, 11(11), 2733; https://doi.org/10.3390/microorganisms11112733 - 9 Nov 2023
Viewed by 1861
Abstract
A comprehensive overview of the recent physics-inspired genome analysis tool, GenomeBits, is presented. This is based on traditional signal processing methods such as discrete Fourier transform (DFT). GenomeBits can be used to extract underlying genomics features from the distribution of nucleotides, and can be further used to analyze the mutation patterns in viral genomes. Examples of the main GenomeBits findings outlining the intrinsic signal organization of genomics sequences for different SARS-CoV-2 variants along the pandemic years 2020–2022 and Monkeypox cases in 2021 are presented to show the usefulness of GenomeBits. GenomeBits results for DFT of SARS-CoV-2 genomes in different geographical regions are discussed, together with the GenomeBits analysis of complete genome sequences for the first coronavirus variants reported: Alpha, Beta, Gamma, Epsilon and Eta. Interesting features of the Delta and Omicron variants in the form of a unique ‘order–disorder’ transition are uncovered from these samples, as well as from their cumulative distribution function and scatter plots. This class of transitions might reveal the cumulative outcome of mutations on the spike protein. A salient feature of GenomeBits is the mapping of the nucleotide bases (A,T,C,G) into an alternating spin-like numerical sequence via a series having binary (0,1) indicators for each A,T,C,G. This leads to the derivation of a set of statistical distribution curves. Furthermore, the quantum-based extension of the GenomeBits model to an analogous probability measure is shown to identify properties of genome sequences as wavefunctions via a superposition of states. An association of the integral of the GenomeBits coding and a binding-like energy can, in principle, also be established. The relevance of these different results in bioinformatics is analyzed. Full article
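The mapping the abstract sketches — per-base binary (0, 1) indicators combined into an alternating spin-like series — can be written in a few lines. This is a hypothetical reconstruction in the spirit of that description; the published GenomeBits series and its normalization may differ.

```python
# Hypothetical sketch of an alternating-sign indicator series for one
# nucleotide base: x_s is the 0/1 indicator that position s carries the
# base, and successive positions contribute with alternating signs,
# giving a spin-like cumulative signal along the sequence.

def alternating_series(seq, base):
    """Cumulative alternating sum of the 0/1 indicator for `base`."""
    total, out = 0, []
    for s, nt in enumerate(seq):
        x = 1 if nt == base else 0
        total += (-1) ** s * x  # alternating spin-like sign
        out.append(total)
    return out
```

Running this for each of A, T, C, G turns a genome string into four numerical signals to which transform methods such as the DFT can then be applied.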

24 pages, 13149 KB  
Article
An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
by Qianhao Xiao, Li Jiang, Manman Wang and Xin Zhang
Sensors 2023, 23(13), 6101; https://doi.org/10.3390/s23136101 - 2 Jul 2023
Cited by 15 | Viewed by 4756
Abstract
Traditional path planning is mainly utilized for path planning in discrete action space, which results in incomplete ship navigation power propulsion strategies during the path search process. Moreover, reinforcement learning suffers from low success rates due to unbalanced sample collection and unreasonable reward function design. In this paper, an environment framework is designed, constructed using the Box2D physics engine, which employs a reward function in which the distance between the agent and the arrival point is the main term, supplemented by the potential field superimposed by boundary control, obstacles, and the arrival point. We also employ the state-of-the-art PPO (Proximal Policy Optimization) algorithm as a baseline for global path planning to address the issue of incomplete ship navigation power propulsion strategies. Additionally, a Beta policy-based distributed sample collection PPO algorithm is proposed to overcome the problem of unbalanced sample collection in path planning by dividing sub-regions to achieve distributed sample collection. The experimental results show the following: (1) The distributed sample collection training policy exhibits stronger robustness in the PPO algorithm; (2) The introduced Beta policy for action sampling results in a higher path planning success rate and reward accumulation than the Gaussian policy at the same training time; (3) When planning a path of the same length, the proposed Beta policy-based distributed sample collection PPO algorithm generates a smoother path than traditional path planning algorithms, such as A*, IDA*, and Dijkstra. Full article
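The core of the Beta-policy idea for bounded continuous actions is a sample on (0, 1) mapped affinely into the action interval. A minimal sketch, assuming the policy network outputs the two concentration parameters (the names and bounds below are illustrative, not the paper's implementation):

```python
import random

# Sketch of Beta-policy action sampling for a bounded continuous action:
# u ~ Beta(alpha, beta) lies in (0, 1), so the affine map below always
# lands inside [low, high] - no clipping is required, unlike with a
# Gaussian policy.

def beta_policy_action(alpha, beta, low, high, rng=None):
    rng = rng or random.Random()
    u = rng.betavariate(alpha, beta)
    return low + u * (high - low)
```

Because the support matches the action bounds exactly, the policy avoids the boundary bias a clipped Gaussian introduces, which is one common motivation for Beta policies in PPO.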
(This article belongs to the Topic Artificial Intelligence in Navigation)

19 pages, 1538 KB  
Article
A New Soft-Clipping Discrete Beta GARCH Model and Its Application on Measles Infection
by Huaping Chen
Stats 2023, 6(1), 293-311; https://doi.org/10.3390/stats6010018 - 9 Feb 2023
Cited by 2 | Viewed by 2302
Abstract
In this paper, we develop a novel soft-clipping discrete beta GARCH (ScDBGARCH) model that provides an available method to model bounded time series with under-dispersion, equi-dispersion or over-dispersion. The new model not only allows positive dependence, but also negative dependence. The stochastic properties of the models are established, and these results are, in turn, used in the analysis of the asymptotic properties of the conditional maximum likelihood (CML) estimator of the new model. In addition, we apply the new model to measles infection to show its improved performance. Full article

24 pages, 482 KB  
Article
Modelling Coronavirus and Larvae Pyrausta Data: A Discrete Binomial Exponential II Distribution with Properties, Classical and Bayesian Estimation
by Mohamed S. Eliwa, Abhishek Tyagi, Bader Almohaimeed and Mahmoud El-Morshedy
Axioms 2022, 11(11), 646; https://doi.org/10.3390/axioms11110646 - 16 Nov 2022
Cited by 7 | Viewed by 2424
Abstract
In this article, we propose the discrete version of the binomial exponential II distribution for modelling count data. Some of its statistical properties including hazard rate function, mode, moments, skewness, kurtosis, and index of dispersion are derived. The shape of the failure rate function is increasing. Moreover, the proposed model is appropriate for modelling equi-, over- and under-dispersed data. The parameter estimation through the classical point of view has been done using the method of maximum likelihood, whereas, in the Bayesian framework, assuming independent beta priors of model parameters, the Metropolis–Hastings algorithm within Gibbs sampler is used to obtain sample-based Bayes estimates of the unknown parameters of the proposed model. A detailed simulation study is carried out to examine the outcomes of maximum likelihood and Bayesian estimators. Finally, two distinctive real data sets are analyzed using the proposed model. These applications showed the flexibility of the new distribution. Full article
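The Bayesian machinery mentioned above — a Metropolis–Hastings step, inside a Gibbs sweep, for a parameter carrying an independent beta prior — can be sketched generically. The model's actual likelihood is not reproduced; `loglik` is a placeholder, and the random-walk step size is an arbitrary assumption.

```python
import math
import random

# Generic sketch of one random-walk Metropolis-Hastings step for a
# parameter theta on (0, 1) with a Beta(a, b) prior, as would be used
# within a Gibbs sampler. `loglik` stands in for the model's
# log-likelihood, which is specific to the paper and not reproduced.

def mh_step(theta, loglik, a, b, step=0.05, rng=None):
    rng = rng or random.Random()
    prop = theta + rng.uniform(-step, step)
    if not 0.0 < prop < 1.0:
        return theta  # proposal outside the support: reject

    def logpost(t):
        # log-likelihood plus Beta(a, b) log-prior (up to a constant)
        return loglik(t) + (a - 1) * math.log(t) + (b - 1) * math.log(1 - t)

    if math.log(rng.random()) < logpost(prop) - logpost(theta):
        return prop
    return theta
```

Iterating such steps (one per parameter, conditioning on the others) produces the sample-based Bayes estimates the abstract describes.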

17 pages, 2171 KB  
Article
Analyzing the Data Completeness of Patients’ Records Using a Random Variable Approach to Predict the Incompleteness of Electronic Health Records
by Varadraj P. Gurupur, Paniz Abedin, Sahar Hooshmand and Muhammed Shelleh
Appl. Sci. 2022, 12(21), 10746; https://doi.org/10.3390/app122110746 - 24 Oct 2022
Cited by 10 | Viewed by 4888
Abstract
The purpose of this article is to illustrate an investigation of methods that can be effectively used to predict the data incompleteness of a dataset. Here, the investigators have conceptualized data incompleteness as a random variable, the overall goal of the experimentation being to provide a 360-degree view of this concept by modeling the incompleteness of a dataset as either a continuous or a discrete random variable, depending on the aspect of the required analysis. During the course of the experiments, the investigators identified the Kolmogorov–Smirnov goodness-of-fit test, the Mielke distribution, and beta distributions as key methods for analyzing the incompleteness of the datasets used for experimentation. A comparison of these methods with a mixture density network was also performed. Overall, the investigators have provided key insights into the use of methods and algorithms that can be used to predict data incompleteness and have provided a pathway for further exploration and prediction of data incompleteness. Full article
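One simple way to model per-record completeness fractions with a beta distribution, in the spirit of the approach above, is a method-of-moments fit. This sketch is not the authors' pipeline (their Kolmogorov–Smirnov testing and Mielke-distribution analysis are not reproduced); it only shows the beta-fitting step under that assumption.

```python
# Sketch: method-of-moments fit of a beta distribution to completeness
# fractions (the share of fields populated per record, in (0, 1)).
# Matching the sample mean m and variance v to the beta moments gives
#   alpha = m * c,  beta = (1 - m) * c,  where c = m(1 - m)/v - 1.

def beta_mom_fit(fractions):
    """Return (alpha, beta) matching the sample mean and variance."""
    n = len(fractions)
    m = sum(fractions) / n
    v = sum((x - m) ** 2 for x in fractions) / n
    assert 0.0 < v < m * (1.0 - m), "moments incompatible with a beta law"
    c = m * (1.0 - m) / v - 1.0
    return m * c, (1.0 - m) * c
```

By construction the fitted distribution reproduces the sample mean and variance exactly, which makes it a convenient baseline before any goodness-of-fit testing.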
(This article belongs to the Special Issue Recent Advances in Bioinformatics and Health Informatics)
