Search Results (122)

Search Parameters:
Keywords = Bayesian mixture estimation

22 pages, 1726 KB  
Article
Prenatal Phthalate Exposures and Adiposity Outcomes Trajectories: A Multivariate Bayesian Factor Regression Approach
by Phuc H. Nguyen, Stephanie M. Engel and Amy H. Herring
Int. J. Environ. Res. Public Health 2025, 22(10), 1466; https://doi.org/10.3390/ijerph22101466 - 23 Sep 2025
Viewed by 299
Abstract
Experimental animal evidence and a growing body of observational studies suggest that prenatal exposure to phthalates may be a risk factor for childhood obesity. Using data from the Mount Sinai Children’s Environmental Health Study (MSCEHS), which measured urinary phthalate metabolites (including MEP, MnBP, MiBP, MCPP, MBzP, MEHP, MEHHP, MEOHP, and MECPP) during the third trimester of pregnancy (between 25 and 40 weeks) of 382 mothers, we examined adiposity outcomes—body mass index (BMI), fat mass percentage, waist-to-hip ratio, and waist circumference—of 180 children between ages 4 and 9. Our aim was to assess the effects of prenatal exposure to phthalates on these adiposity outcomes, with potential time-varying and sex-specific effects. We applied a novel Bayesian multivariate factor regression (BMFR) that (1) represents phthalate mixtures as latent factors—a DEHP and a non-DEHP factor, (2) borrows information across highly correlated adiposity outcomes to improve estimation precision, (3) models potentially non-linear time-varying effects of the latent factors on adiposity outcomes, and (4) fully quantifies uncertainty using state-of-the-art prior specifications. The results show that in boys, at younger ages (4–6), all phthalate components are associated with lower adiposity outcomes; however, after age 7, they are associated with higher outcomes. In girls, there is no evidence of associations between phthalate factors and adiposity outcomes. Full article
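The full BMFR model involves structured priors and MCMC; purely as a hedged, non-Bayesian sketch of the factor-then-regress idea, here is a minimal Python version with synthetic stand-in data and hypothetical names (not the MSCEHS data or the authors' implementation):

```python
# Simplified sketch (not the authors' BMFR): extract two latent factors from
# nine log-scaled phthalate metabolites, then regress an adiposity outcome on
# factor-by-age interactions to allow age-varying effects. All data synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 180
log_metabolites = rng.normal(size=(n, 9))   # stand-in for MEP ... MECPP
age = rng.uniform(4, 9, size=n)
bmi_z = rng.normal(size=n)                  # stand-in adiposity outcome

# Two latent factors, mirroring the paper's DEHP / non-DEHP split.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(log_metabolites)  # (n, 2) factor scores

# Outcome model with factor x age interactions.
X = np.column_stack([scores, age, scores * age[:, None]])
fit = LinearRegression().fit(X, bmi_z)
print("factor main effects:", fit.coef_[:2])
print("factor x age interactions:", fit.coef_[3:])
```

A sex-specific analysis, as in the paper, would simply fit this separately for boys and girls or add sex interactions.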

28 pages, 16152 KB  
Article
A Smooth-Delayed Phase-Type Mixture Model for Human-Driven Process Duration Modeling
by Dongwei Wang, Sally McClean, Lingkai Yang, Ian McChesney and Zeeshan Tariq
Algorithms 2025, 18(9), 575; https://doi.org/10.3390/a18090575 - 11 Sep 2025
Viewed by 329
Abstract
Activities in business processes primarily depend on human behavior for completion. Due to human agency, the behavior underlying individual activities may occur in multiple phases and can vary in execution. As a result, the execution duration and nature of such activities may exhibit complex multimodal characteristics. Phase-type distributions are useful for analyzing the underlying behavioral structure, which may consist of multiple sub-activities. The phenomenon of delayed start is also common in such activities, possibly due to a minimum task completion time or prerequisite tasks. As a result, the distribution of durations or certain components does not start at zero but has a minimum value, below which the probability is zero. When using phase-type models to fit such distributions, a large number of phases is often required, exceeding the actual number of sub-activities. This reduces the interpretability of the parameters and may also lead to optimization difficulties due to overparameterization. In this paper, we propose a smooth-delayed phase-type mixture model that introduces delay parameters to address the difficulty of fitting this kind of distribution. Since durations shorter than the delay should have zero probability, such hard truncation renders the delay parameter inestimable under the Expectation–Maximization (EM) framework. To overcome this, we design a soft-truncation mechanism to improve model convergence. We further develop an inference framework that combines the EM algorithm, Bayesian inference, and Sequential Least Squares Programming for comprehensive and efficient parameter estimation. The method is validated on a synthetic dataset and two real-world datasets. Results demonstrate that the proposed approach achieves performance comparable to purely data-driven methods while providing good interpretability, revealing the potential underlying structure behind human-driven activities. Full article
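As a hedged toy illustration of why a smooth gate helps (my own construction, not the paper's exact density): a sigmoid at the delay replaces the hard step, so the likelihood stays differentiable in the delay parameter and gradient-based or EM updates can move it:

```python
# Soft truncation sketch: a sigmoid gate at delay d smooths the hard cutoff
# of a delayed duration density. tau controls gate sharpness; tau -> 0
# recovers the hard truncation that breaks EM estimation of d.
import numpy as np
from scipy.stats import expon
from scipy.integrate import quad

def soft_truncated_pdf(x, delay, tau, scale):
    """Sigmoid-gated exponential density, renormalized to integrate to 1."""
    gate = 1.0 / (1.0 + np.exp(-(x - delay) / tau))
    unnormalized = gate * expon.pdf(x, scale=scale)
    Z, _ = quad(lambda t: expon.pdf(t, scale=scale)
                / (1.0 + np.exp(-(t - delay) / tau)), 0, np.inf)
    return unnormalized / Z

x = np.linspace(0, 10, 5)
print(soft_truncated_pdf(x, delay=2.0, tau=0.1, scale=3.0))
```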

10 pages, 2564 KB  
Proceeding Paper
Multipath Characterization of GNSS Ground Stations Using RINEX Observations and Machine Learning
by Gerardo Allende-Alba, Stefano Caizzone and Ernest Ofosu Addo
Eng. Proc. 2025, 88(1), 72; https://doi.org/10.3390/engproc2025088072 - 22 Aug 2025
Viewed by 529
Abstract
Multipath is one of the most challenging factors to model and/or characterize in the GNSS observation error budget. In the case of ground stations, code phase static multipath is typically the largest contributor of local observation errors. Current approaches for multipath characterization include the analysis of code-minus-carrier (CMC) observables and the exploitation of multipath repeatability. This contribution presents an alternative strategy for multipath detection and characterization based on unsupervised and self-supervised machine learning methods. The proposed strategy makes use of observations in the Receiver Independent Exchange Format (RINEX), typically generated by GNSS receivers in ground stations, for model training and testing, without requiring the availability of labeled data. To assess the performance of the proposed (data-based) strategy, a comparison with a model-based methodology for multipath error prediction using a digital twin model is carried out. Results from a test case using data from a monitoring station of the International GNSS Service (IGS) show consistency between the two approaches. The proposed methodology is applicable to similar characterization of any GNSS ground station. Full article
(This article belongs to the Proceedings of European Navigation Conference 2024)
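For readers unfamiliar with the CMC observable mentioned above, a minimal synthetic sketch (no RINEX parsing; all signals are toy stand-ins) shows how it isolates code multipath plus noise once the constant carrier ambiguity is removed:

```python
# Code-minus-carrier (CMC) sketch on synthetic single-frequency data:
# CMC = pseudorange - carrier range; after removing its mean (ambiguity and
# constant biases), the residual is dominated by twice the ionospheric delay
# plus code multipath and receiver noise.
import numpy as np

rng = np.random.default_rng(1)
n_epochs = 2880                       # e.g., 24 h at 30 s sampling
geometry = 2.0e7 + 1e3 * np.sin(np.linspace(0, 2 * np.pi, n_epochs))
iono = 5.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, n_epochs))
multipath = 0.8 * np.sin(np.linspace(0, 40 * np.pi, n_epochs))
ambiguity_m = 1234.5                  # carrier ambiguity, in meters

code = geometry + iono + multipath + rng.normal(0, 0.3, n_epochs)
carrier = geometry - iono + ambiguity_m + rng.normal(0, 0.003, n_epochs)

cmc = code - carrier                  # = 2*iono + multipath + noise - ambiguity
cmc -= cmc.mean()                     # absorb ambiguity and constant biases
print("CMC std [m]:", cmc.std())      # feature an ML model could then ingest
```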

20 pages, 10452 KB  
Article
Nonlocal Prior Mixture-Based Bayesian Wavelet Regression with Application to Noisy Imaging and Audio Data
by Nilotpal Sanyal
Mathematics 2025, 13(16), 2642; https://doi.org/10.3390/math13162642 - 17 Aug 2025
Viewed by 347
Abstract
We propose a novel Bayesian wavelet regression approach using a three-component spike-and-slab prior for wavelet coefficients, combining a point mass at zero, a moment (MOM) prior, and an inverse moment (IMOM) prior. This flexible prior supports small and large coefficients differently, offering advantages for highly dispersed data where wavelet coefficients span multiple scales. The IMOM prior’s heavy tails capture large coefficients, while the MOM prior is better suited for smaller non-zero coefficients. Further, our method introduces innovative hyperparameter specifications for mixture probabilities and scale parameters, including generalized logit, hyperbolic secant, and generalized normal decay for probabilities, and double exponential decay for scaling. Hyperparameters are estimated via an empirical Bayes approach, enabling posterior inference tailored to the data. Extensive simulations demonstrate significant performance gains over two-component wavelet methods. Applications to electroencephalography and noisy audio data illustrate the method’s utility in capturing complex signal characteristics. We implement our method in an R package, NLPwavelet (≥1.1). Full article
(This article belongs to the Special Issue Bayesian Statistics and Applications)
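A hedged sketch of the three-component shrinkage idea follows. The first-order MOM density theta^2 * N(theta; 0, tau) / tau is exactly normalized; a Student-t slab is deliberately substituted here for the paper's IMOM prior to keep the example short, so this illustrates the mechanism rather than the paper's exact priors:

```python
# Three-component spike-and-slab sketch: point mass at zero, a MOM-type slab
# for moderate coefficients, and a heavy-tailed slab (Student-t stand-in for
# the IMOM prior) for large ones. Posterior weights show how each regime of
# an observed wavelet coefficient is captured by a different component.
import numpy as np
from scipy.stats import norm, t as student_t
from scipy.integrate import trapezoid

def posterior_weights(d, sigma=1.0, tau=4.0, w=(0.5, 0.25, 0.25)):
    """Posterior component probabilities for one observed coefficient d."""
    grid = np.linspace(-20, 20, 4001)
    m0 = norm.pdf(d, 0, sigma)                            # spike marginal
    mom = grid**2 * norm.pdf(grid, 0, np.sqrt(tau)) / tau  # normalized MOM
    m1 = trapezoid(norm.pdf(d - grid, 0, sigma) * mom, grid)
    m2 = trapezoid(norm.pdf(d - grid, 0, sigma) * student_t.pdf(grid, df=1), grid)
    post = np.array([w[0] * m0, w[1] * m1, w[2] * m2])
    return post / post.sum()

for d in (0.3, 3.0, 12.0):
    print(d, posterior_weights(d).round(3))  # spike -> MOM -> heavy tail
```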

23 pages, 374 KB  
Article
Empirical Lossless Compression Bound of a Data Sequence
by Lei M. Li
Entropy 2025, 27(8), 864; https://doi.org/10.3390/e27080864 - 14 Aug 2025
Viewed by 1170
Abstract
We consider the lossless compression bound of any individual data sequence. Conceptually, its Kolmogorov complexity is such a bound, yet it is uncomputable. According to Shannon’s source coding theorem, the average compression bound is $nH$, where $n$ is the number of words and $H$ is the entropy of an oracle probability distribution characterizing the data source. The quantity $nH(\hat{\theta}_n)$ obtained by plugging in the maximum likelihood estimate is an underestimate of the bound. Shtarkov showed that the normalized maximum likelihood (NML) distribution is optimal in a minimax sense for any parametric family. Fitting a data sequence—without any a priori distributional assumption—by a relevant exponential family, we apply local asymptotic normality to show that the NML code length is $nH(\hat{\theta}_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\int_{\Theta}|I(\theta)|^{1/2}\,d\theta + o(1)$, where $d$ is the dictionary size, $|I(\theta)|$ is the determinant of the Fisher information matrix, and $\Theta$ is the parameter space. We demonstrate that sequentially predicting the optimal code length for the next word via a Bayesian mechanism leads to the mixture code whose length is given by $nH(\hat{\theta}_n) + \frac{d}{2}\log\frac{n}{2\pi} + \log\frac{|I(\hat{\theta}_n)|^{1/2}}{w(\hat{\theta}_n)} + o(1)$, where $w(\theta)$ is a prior. The asymptotics apply not only to discrete symbols but also to continuous data if the code length for the former is replaced by the description length for the latter. The analytical result is exemplified by calculating compression bounds of protein-encoding DNA sequences under different parsing models. Typically, compression is maximized when parsing aligns with amino acid codons, while pseudo-random sequences remain incompressible, as predicted by Kolmogorov complexity. Notably, the empirical bound becomes more accurate as the dictionary size increases. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
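The memoryless (multinomial) parsing case makes the NML formula concrete: there $d = K - 1$ for $K$ dictionary words, and the Jeffreys integral over the probability simplex is $\int_{\Theta}|I(\theta)|^{1/2}\,d\theta = \pi^{K/2}/\Gamma(K/2)$, so the bound can be evaluated directly from word counts. A minimal sketch under those assumptions:

```python
# Empirical NML compression bound for a memoryless multinomial word model:
# n*H(theta_hat) + (d/2)*log(n/2pi) + log(pi^{K/2}/Gamma(K/2)), in bits.
import numpy as np
from collections import Counter
from scipy.special import gammaln

def nml_bound_bits(sequence):
    counts = np.array(list(Counter(sequence).values()), dtype=float)
    n, K = counts.sum(), len(counts)
    nH = -np.sum(counts * np.log(counts / n))        # n * H(theta_hat), nats
    d = K - 1
    penalty = 0.5 * d * np.log(n / (2 * np.pi))
    log_jeffreys = 0.5 * K * np.log(np.pi) - gammaln(K / 2)
    return (nH + penalty + log_jeffreys) / np.log(2)  # nats -> bits

# Codon-style parsing of a toy DNA string into 3-letter words:
dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA" * 50
codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
print("empirical NML bound:", round(nml_bound_bits(codons), 1), "bits")
```

Comparing this bound across parsing models (1-, 2-, 3-letter words) mirrors the paper's codon-alignment experiment.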

27 pages, 532 KB  
Article
Bayesian Binary Search
by Vikash Singh, Matthew Khanzadeh, Vincent Davis, Harrison Rush, Emanuele Rossi, Jesse Shrader and Pietro Liò
Algorithms 2025, 18(8), 452; https://doi.org/10.3390/a18080452 - 22 Jul 2025
Cited by 2 | Viewed by 1477
Abstract
We present Bayesian Binary Search (BBS), a novel framework that bridges statistical learning theory/probabilistic machine learning and binary search. BBS utilizes probabilistic methods to learn the underlying probability density of the search space. This learned distribution then informs a modified bisection strategy, where the split point is determined by probability density rather than the conventional midpoint. This learning process for search space density estimation can be achieved through various supervised probabilistic machine learning techniques (e.g., Gaussian Process Regression, Bayesian Neural Networks, and Quantile Regression) or unsupervised statistical learning algorithms (e.g., Gaussian Mixture Models, Kernel Density Estimation (KDE), and Maximum Likelihood Estimation (MLE)). Our results demonstrate substantial efficiency improvements using BBS on both synthetic data with diverse distributions and in a real-world scenario involving Bitcoin Lightning Network channel balance probing (3–6% efficiency gain), where BBS is currently in production. Full article
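The core idea is compact enough to sketch: split each interval at the point that halves the remaining probability mass rather than the remaining width. A minimal version, assuming a Gaussian model of the search space (my simplification, not the production implementation):

```python
# Density-informed bisection sketch: the split point is the conditional
# median of the learned distribution restricted to [lo, hi], found via the
# CDF/quantile function, instead of the conventional midpoint.
import numpy as np
from scipy.stats import norm

def bayesian_binary_search(is_left_of, lo, hi, dist, tol=1e-6):
    """Locate the boundary point using probability-mass bisection."""
    while hi - lo > tol:
        mid_mass = 0.5 * (dist.cdf(lo) + dist.cdf(hi))
        split = dist.ppf(mid_mass)       # halves the remaining probability
        if not (lo < split < hi):        # guard against numerical edge cases
            split = 0.5 * (lo + hi)
        if is_left_of(split):
            hi = split
        else:
            lo = split
    return 0.5 * (lo + hi)

target = 7.3
dist = norm(loc=8.0, scale=2.0)          # learned density of likely targets
found = bayesian_binary_search(lambda x: target <= x, 0.0, 100.0, dist)
print(round(found, 4))
```

When the density is well calibrated, most of the mass sits near the target, so each probe discards far more than half of the plausible region; with a uniform density this reduces exactly to classical binary search.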

22 pages, 5236 KB  
Article
Research on Slope Stability Based on Bayesian Gaussian Mixture Model and Random Reduction Method
by Jingrong He, Tao Deng, Shouxing Peng, Xing Pang, Daochun Wan, Shaojun Zhang and Xiaoqiang Zhang
Appl. Sci. 2025, 15(14), 7926; https://doi.org/10.3390/app15147926 - 16 Jul 2025
Cited by 1 | Viewed by 543
Abstract
Slope stability analysis is conventionally performed using the strength reduction method with the proportional reduction in shear strength parameters. However, during actual slope failure processes, the attenuation characteristics of rock mass cohesion (c) and internal friction angle (φ) are often inconsistent, and their reduction paths exhibit clear nonlinearity. Relying solely on proportional reduction paths to calculate safety factors may therefore lack scientific rigor and fail to reflect true slope behavior. To address this limitation, this study proposes a novel approach that considers the non-proportional reduction of c and φ, without dependence on predefined reduction paths. The method begins with an analysis of slope stability states based on energy dissipation theory. A Bayesian Gaussian Mixture Model (BGMM) is employed for intelligent interpretation of the dissipated energy data, and, combined with energy mutation theory, is used to identify instability states under various reduction parameter combinations. To compute the safety factor, the concept of a “reference slope” is introduced. This reference slope represents the state at which the slope reaches limit equilibrium under strength reduction. The safety factor is then defined as the ratio of the shear strength of the target analyzed slope to that of the reference slope, providing a physically meaningful and interpretable safety index. Compared with traditional proportional reduction methods, the proposed approach offers more accurate estimation of safety factors, demonstrates superior sensitivity in identifying critical slopes, and significantly improves the reliability and precision of slope stability assessments. These advantages contribute to enhanced safety management and risk control in slope engineering practice. Full article
(This article belongs to the Special Issue Slope Stability and Earth Retaining Structures—2nd Edition)
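As a hedged sketch of the interpretation step only, with a toy surrogate in place of the paper's finite-element dissipated-energy computation: a Bayesian Gaussian mixture separates "stable" from "unstable" responses over a grid of non-proportional (c, phi) reduction combinations:

```python
# BGMM sketch for labeling instability across reduction combinations. The
# dissipated energy is a synthetic surrogate that jumps past a critical
# combined reduction, mimicking the energy-mutation signature of failure.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
c_red, phi_red = np.meshgrid(np.linspace(0.5, 1.5, 20), np.linspace(0.5, 1.5, 20))
combo = np.column_stack([c_red.ravel(), phi_red.ravel()])

# Toy dissipated energy: small until the combined reduction crosses a
# threshold, then an order-of-magnitude jump signals instability.
severity = combo.sum(axis=1)
energy = np.where(severity > 2.2, 50.0, 1.0) + rng.normal(0, 0.3, len(combo))

bgmm = BayesianGaussianMixture(n_components=2, random_state=0)
labels = bgmm.fit_predict(energy.reshape(-1, 1))
unstable = labels == labels[np.argmax(energy)]   # cluster holding the jump
print("flagged unstable combinations:", unstable.sum(), "of", len(combo))
```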

30 pages, 8543 KB  
Article
Multi-Channel Coupled Variational Bayesian Framework with Structured Sparse Priors for High-Resolution Imaging of Complex Maneuvering Targets
by Xin Wang, Jing Yang and Yong Luo
Remote Sens. 2025, 17(14), 2430; https://doi.org/10.3390/rs17142430 - 13 Jul 2025
Viewed by 529
Abstract
High-resolution ISAR (Inverse Synthetic Aperture Radar) imaging plays a crucial role in dynamic target monitoring for aerospace, maritime, and ground surveillance. Among various remote sensing techniques, ISAR is distinguished by its ability to produce high-resolution images of non-cooperative maneuvering targets. To meet the increasing demands for resolution and robustness, modern ISAR systems are evolving toward wideband and multi-channel architectures. In particular, multi-channel configurations based on large-scale receiving arrays have gained significant attention. In such systems, each receiving element functions as an independent spatial channel, acquiring observations from distinct perspectives. These multi-angle measurements enrich the available echo information and enhance the robustness of target imaging. However, this setup also brings significant challenges, including inter-channel coupling, high-dimensional joint signal modeling, and non-Gaussian, mixed-mode interference, which often degrade image quality and hinder reconstruction performance. To address these issues, this paper proposes a Hybrid Variational Bayesian Multi-Interference (HVB-MI) imaging algorithm based on a hierarchical Bayesian framework. The method jointly models temporal correlations and inter-channel structure, introducing a coupled processing strategy to reduce dimensionality and computational complexity. To handle complex noise environments, a Gaussian mixture model (GMM) is used to represent nonstationary mixed noise. A variational Bayesian inference (VBI) approach is developed for efficient parameter estimation and robust image recovery. Experimental results on both simulated and real-measured data demonstrate that the proposed method achieves significantly improved image resolution and noise robustness compared with existing approaches, particularly under conditions of sparse sampling or strong interference. Quantitative evaluation further shows that under the continuous sparse mode with a 75% sampling rate, the proposed method achieves a significantly higher Laplacian Variance (LV), outperforming PCSBL and CPESBL by 61.7% and 28.9%, respectively, thereby demonstrating its superior ability to preserve fine image details. Full article
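Isolating just the noise-modeling ingredient named above, a minimal sketch fits a two-component Gaussian mixture to synthetic residuals (real/imaginary parts as features) to capture mixed-mode interference; the full HVB-MI inference is well beyond this fragment:

```python
# GMM noise-modeling sketch: background receiver noise plus a sparse,
# high-power impulsive component, separated by a two-component mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
background = rng.normal(0, 0.1, (5000, 2))    # (real, imag) residuals
impulses = rng.normal(0, 2.0, (250, 2))       # mixed-mode interference
residuals = np.vstack([background, impulses])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(residuals)
print("mixture weights:", gmm.weights_.round(3))
print("component stds:", np.sqrt(gmm.covariances_[:, 0, 0]).round(3))
```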

19 pages, 2703 KB  
Article
Identifying Risk Regimes in a Sectoral Stock Index Through a Multivariate Hidden Markov Framework
by Akara Kijkarncharoensin
Risks 2025, 13(7), 135; https://doi.org/10.3390/risks13070135 - 9 Jul 2025
Viewed by 1422
Abstract
This study explores the presence of hidden market regimes in a sector-specific stock index within the Thai equity market. The behavior of such indices often deviates from broader macroeconomic trends, making it difficult for conventional models to detect regime changes. To overcome this limitation, the study employs a multivariate Gaussian mixture hidden Markov model, which enables the identification of unobservable states based on daily and intraday return patterns. These patterns include open-to-close, open-to-high, and low-to-open returns. The model is estimated using various specifications, and the best-performing structure is chosen based on the Akaike Information Criterion and the Bayesian Information Criterion. The final model reveals three statistically distinct regimes that correspond to bullish, sideways, and bearish conditions. Statistical tests, particularly the Kruskal–Wallis method, confirm that return distributions, trading volume, and open interest differ significantly across these regimes. Additionally, the analysis incorporates risk measures, including expected shortfall, maximum drawdown, and the coefficient of variation. The results indicate that the bearish regime carries the highest risk, whereas the bullish regime is relatively stable. These findings offer practical insights for regime-aware portfolio management in sectoral equity markets. Full article
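A minimal sketch of the selection workflow, assuming the hmmlearn package and synthetic return columns in place of the Thai sector-index data, with BIC computed manually from an approximate free-parameter count:

```python
# Gaussian HMM regime sketch: fit 2-4 hidden states on multivariate return
# features and compare BIC. Columns stand in for open-to-close, open-to-high,
# and low-to-open returns.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.01, (300, 3)) for m in (-0.005, 0.0, 0.006)])

def bic(model, X):
    k, d = model.n_components, X.shape[1]
    # start probs + transition rows + means + full covariances (approximate).
    n_params = (k - 1) + k * (k - 1) + k * d + k * d * (d + 1) // 2
    return n_params * np.log(len(X)) - 2 * model.score(X)

for k in (2, 3, 4):
    m = GaussianHMM(n_components=k, covariance_type="full",
                    n_iter=200, random_state=0).fit(X)
    print(k, "states, BIC:", round(bic(m, X), 1))
```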

26 pages, 3112 KB  
Article
Pre-Warning for the Remaining Time to Alarm Based on Variation Rates and Mixture Entropies
by Zijiang Yang, Jiandong Wang, Honghai Li and Song Gao
Entropy 2025, 27(7), 736; https://doi.org/10.3390/e27070736 - 9 Jul 2025
Viewed by 422
Abstract
Alarm systems play crucial roles in industrial process safety. To support tackling the accident that is about to occur after an alarm, a pre-warning method is proposed for a special class of industrial process variables to alert operators about the remaining time to alarm. The main idea of the proposed method is to estimate the remaining time to alarm based on variation rates and mixture entropies of qualitative trends in univariate variables. If the remaining time to alarm is no longer than the pre-warning threshold and its mixture entropy is small enough, then a warning is generated to alert the operators. One challenge for the proposed method is how to determine an optimal pre-warning threshold by considering the uncertainties induced by the sample distribution of the remaining time to alarm, subject to the constraint of the required false warning rate. This challenge is addressed by utilizing Bayesian estimation theory to estimate the confidence intervals for all candidates of the pre-warning threshold, and the optimal one is selected as the one whose upper bound of the confidence interval is nearest to the required false warning rate. Another challenge is how to measure the possibility of the current trend segment increasing to the alarm threshold, and this challenge is overcome by adopting the mixture entropy as a possibility measurement. Numerical and industrial examples illustrate the effectiveness of the proposed method and its advantages over existing methods. Full article
(This article belongs to the Special Issue Failure Diagnosis of Complex Systems)
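The threshold-selection rule lends itself to a short sketch: model false-warning counts as binomial, form a Bayesian (Jeffreys Beta) posterior per candidate threshold, and pick the candidate whose upper credible bound is nearest the required rate. Counts here are synthetic and the paper's exact construction may differ in detail:

```python
# Bayesian pre-warning threshold selection sketch: per candidate threshold,
# the false-warning rate gets a Beta posterior (Jeffreys prior) from observed
# counts; choose the candidate whose 95% upper bound is nearest the target.
import numpy as np
from scipy.stats import beta

required_rate = 0.05
candidates = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # thresholds (seconds)
n_trials = 500                                          # evaluation windows
false_warnings = np.array([3, 8, 15, 26, 40])           # per candidate

upper = beta.ppf(0.95, false_warnings + 0.5, n_trials - false_warnings + 0.5)
best = candidates[np.argmin(np.abs(upper - required_rate))]
print("95% upper bounds:", upper.round(4))
print("selected pre-warning threshold:", best, "s")
```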

24 pages, 3869 KB  
Article
Recursive Bayesian Decoding in State Observation Models: Theory and Application in Quantum-Based Inference
by Branislav Rudić, Markus Pichler-Scheder and Dmitry Efrosinin
Mathematics 2025, 13(12), 2012; https://doi.org/10.3390/math13122012 - 18 Jun 2025
Viewed by 554
Abstract
Accurately estimating a sequence of latent variables in state observation models remains a challenging problem, particularly when maintaining coherence among consecutive estimates. While forward filtering and smoothing methods provide coherent marginal distributions, they often fail to maintain coherence in marginal MAP estimates. Existing methods efficiently handle discrete-state or Gaussian models. However, general models remain challenging. Recently, a recursive Bayesian decoder has been discussed, which effectively infers coherent state estimates in a wide range of models, including Gaussian and Gaussian mixture models. In this work, we analyze the theoretical properties and implications of this method, drawing connections to classical inference frameworks. The versatile applicability of mixture models and the prevailing advantage of the recursive Bayesian decoding method are demonstrated using the double-slit experiment. Rather than inferring the state of a quantum particle itself, we utilize interference patterns from the slit experiments to decode the movement of a non-stationary particle detector. Our findings indicate that, by appropriate modeling and inference, the fundamental uncertainty associated with quantum objects can be leveraged to decrease the induced uncertainty of states associated with classical objects. We thoroughly discuss the interpretability of the simulation results from multiple perspectives. Full article
(This article belongs to the Special Issue Mathematics Methods of Robotics and Intelligent Systems)

22 pages, 2323 KB  
Article
Finite Mixture Model-Based Analysis of Yarn Quality Parameters
by Esra Karakaş, Melik Koyuncu and Mülayim Öngün Ükelge
Appl. Sci. 2025, 15(12), 6407; https://doi.org/10.3390/app15126407 - 6 Jun 2025
Viewed by 528
Abstract
This study investigates the applicability of finite mixture models (FMMs) for accurately modeling yarn quality parameters in 28/1 Ne ring-spun polyester/viscose yarns, focusing on both yarn imperfections and mechanical properties. The research addresses the need for advanced statistical modeling techniques to better capture the inherent heterogeneity in textile production data. To this end, the Poisson mixture model is employed to represent count-based defects, such as thin places, thick places, and neps, while the gamma mixture model is used to model continuous variables, such as tenacity and elongation. Model parameters are estimated using the expectation–maximization (EM) algorithm, and model selection is guided by the Akaike and Bayesian information criteria (AIC and BIC). The results reveal that thin places are optimally modeled using a two-component Poisson mixture distribution, whereas thick places and neps require three components to reflect their variability. Similarly, a two-component gamma mixture distribution best describes the distributions of tenacity and elongation. These findings highlight the robustness of FMMs in capturing complex distributional patterns in yarn data, demonstrating their potential in enhancing quality assessment and control processes in the textile industry. Full article
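A compact EM sketch for the two-component Poisson mixture the paper selects for thin places, with the BIC used for model selection (synthetic counts, not the authors' code):

```python
# EM for a two-component Poisson mixture on count data, plus BIC.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
counts = np.concatenate([rng.poisson(4, 300), rng.poisson(15, 100)])

w, lam = np.array([0.5, 0.5]), np.array([2.0, 10.0])    # initial guesses
for _ in range(200):
    # E-step: responsibility of each component for each observation.
    like = w * poisson.pmf(counts[:, None], lam)         # (n, 2)
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: update weights and component rates.
    w = resp.mean(axis=0)
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

loglik = np.log((w * poisson.pmf(counts[:, None], lam)).sum(axis=1)).sum()
n_params = 3                                             # two rates, one weight
print("weights:", w.round(3), "rates:", lam.round(2))
print("BIC:", round(n_params * np.log(len(counts)) - 2 * loglik, 1))
```

The gamma mixture for tenacity and elongation follows the same EM pattern with gamma component densities and method-of-moments M-step updates.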

18 pages, 3211 KB  
Article
Combined Effect of Metals, PFAS, Phthalates, and Plasticizers on Cardiovascular Disease Risk
by Doreen Jehu-Appiah and Emmanuel Obeng-Gyasi
Toxics 2025, 13(6), 476; https://doi.org/10.3390/toxics13060476 - 5 Jun 2025
Viewed by 1148
Abstract
This study assessed the relationship between environmental chemical mixtures—including metals, per- and polyfluoroalkyl substances (PFAS), phthalates, and plasticizers—and key cardiovascular health markers using data from the 2013–2014 National Health and Nutrition Examination Survey (NHANES). The combined effects of these pollutants on cardiovascular markers were evaluated using Bayesian Kernel Machine Regression (BKMR), a flexible, non-parametric modeling approach that accommodates nonlinear and interactive relationships among exposures. BKMR was applied to assess both the joint and individual associations of the chemical mixture with systolic blood pressure (SBP), high-density lipoprotein (HDL), low-density lipoprotein (LDL), diastolic blood pressure (DBP), total cholesterol, and triglycerides. As part of the BKMR analysis, posterior inclusion probabilities (PIPs) were estimated to identify the relative importance of each exposure within the mixture. These results highlighted phthalates as major contributors to LDL, SBP, total cholesterol, HDL, and triglycerides, while plasticizers were associated with LDL, SBP, HDL, and triglycerides. Metals and PFAS were most strongly linked to LDL, DBP, total cholesterol, and SBP. The overall mixture effect indicated that cumulative exposures were associated with lower LDL and SBP and elevated DBP, suggesting an increased cardiovascular risk. Triglycerides exhibited a complex quantile-dependent trend, with higher exposures associated with reduced levels. These findings underscore the importance of mixture-based risk assessments that reflect real-world exposure scenarios. Full article
(This article belongs to the Section Human Toxicology and Epidemiology)
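BKMR itself is distributed as the R package bkmr; as a rough Python analogue of its kernel-machine core only (no spike-and-slab variable selection, hence no PIPs), one can fit a Gaussian process on the exposure matrix and read off the joint mixture effect at common exposure quantiles. Synthetic data; a sketch rather than the paper's analysis:

```python
# Kernel-machine analogue sketch: GP regression of an outcome on an exposure
# mixture, then the joint effect with all exposures at a common quantile.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
Z = rng.normal(size=(300, 4))            # metals, PFAS, phthalates, plasticizers
sbp = 120 + 3 * Z[:, 2] - 2 * Z[:, 0] * Z[:, 1] + rng.normal(0, 2, 300)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(4)) + WhiteKernel(),
                              normalize_y=True).fit(Z, sbp)
for q in (0.25, 0.5, 0.75):
    z_q = np.quantile(Z, q, axis=0).reshape(1, -1)
    print(f"q={q}: predicted SBP {gp.predict(z_q)[0]:.1f}")
```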

16 pages, 1319 KB  
Article
Dirichlet Mixed Process Integrated Bayesian Estimation for Individual Securities
by Phan Dinh Khoi, Thai Minh Trong and Christopher Gan
J. Risk Financial Manag. 2025, 18(6), 304; https://doi.org/10.3390/jrfm18060304 - 4 Jun 2025
Viewed by 795
Abstract
Bayesian nonparametric methods, particularly the Dirichlet process (DP), have gained increasing popularity in both theoretical and applied research, driven by advances in computing power. Traditional Bayesian estimation, which often relies on Gaussian priors, struggles to dynamically integrate evolving prior beliefs into the posterior distribution for decision-making in finance. This study addresses that limitation by modeling daily security price fluctuations using a Dirichlet process mixture (DPM) model. Our results demonstrate the DPM’s effectiveness in identifying the optimal number of clusters within time series data, leading to more accurate density estimation. Unlike kernel methods, the DPM continuously updates the prior density based on observed data, enabling it to better capture the dynamic nature of security prices. This adaptive feature positions the DPM as a superior estimation technique for time series data with complex, multimodal distributions. Full article
(This article belongs to the Special Issue Featured Papers in Mathematics and Finance, 2nd Edition)
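sklearn's BayesianGaussianMixture provides a truncated variational approximation to a Dirichlet process mixture; a hedged sketch on synthetic returns (the paper's DPM sampler and data differ) shows how the data determine the effective number of clusters:

```python
# DP-style density estimation sketch for daily returns: a truncated
# variational DP mixture prunes unused components automatically.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(7)
returns = np.concatenate([rng.normal(0.0005, 0.01, 800),
                          rng.normal(-0.002, 0.03, 200)]).reshape(-1, 1)

dpm = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                   # DP concentration alpha
    random_state=0,
).fit(returns)

active = dpm.weights_ > 0.01                          # clusters the data support
print("effective number of clusters:", active.sum())
print("active weights:", dpm.weights_[active].round(3))
```

Unlike a fixed-bandwidth kernel estimate, the fitted mixture updates its effective complexity as more observations arrive, which is the adaptivity the abstract emphasizes.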

17 pages, 910 KB  
Article
The Combined Effects of Per- and Polyfluoroalkyl Substances, Metals, and Behavioral and Social Factors on Depressive Symptoms
by Olamide Ogundare and Emmanuel Obeng-Gyasi
Med. Sci. 2025, 13(2), 69; https://doi.org/10.3390/medsci13020069 - 1 Jun 2025
Cited by 1 | Viewed by 1594
Abstract
Background: This study investigates the combined effects of PFAS (PFOA and PFOS), heavy metals (lead, cadmium, and mercury), behavioral factors (smoking and alcohol consumption), and social factors (income and education) on depressive symptoms. Methods: Using cross-sectional data from the National Health and Nutrition Examination Survey (NHANES 2017–2018), blood samples were analyzed to determine the exposure levels of PFOA, PFOS, lead, cadmium, and mercury, and self-reported behavioral and social factors were evaluated in relation to PHQ-9 scores among 181 adults. Results: Education was associated with lower odds of depressive symptoms (OR = 0.68, 95% CI: 0.43–1.07). Although the result was not statistically significant, the estimate suggested a potential protective effect that warranted further investigation. Bayesian Kernel Machine Regression demonstrated that heavy metals collectively had the strongest evidence for influencing depression (group PIP = 0.6508), followed by socioeconomic factors (group PIP = 0.642). Bivariate exposure–response analyses revealed complex interaction patterns whereby exposure effects varied substantially depending on co-exposure contexts. Conclusions: These findings highlight that depressive symptoms are shaped by complex interplays between environmental contaminants, behavior, and social determinants, underscoring the importance of mixture-based approaches in environmental mental health research and the need for integrated interventions addressing both environmental and social factors. Full article