Search Results (478)

Search Parameters:
Keywords = Markov Chain-Monte Carlo methods

21 pages, 798 KB  
Article
A Bayesian Inference Algorithm for Equipment Software Price Estimation Based on Nonlinear Contribution Models
by Tian Meng and Guoping Jiang
Algorithms 2026, 19(5), 396; https://doi.org/10.3390/a19050396 - 15 May 2026
Abstract
To address the challenges of difficult value quantification, lack of market benchmarks, and scarcity of historical data for embedded software amidst the intelligent transformation of equipment systems, this study develops a scientific price estimation method based on functional capability contribution. A nonlinear pricing model is constructed to accurately characterize the two-stage evolution of software price: diminishing marginal utility during the mature technology accumulation stage and exponential growth during the technical bottleneck breakthrough stage. To ensure the consistency of pricing logic between hardware and software, a penalty function is innovatively designed to modify the standard likelihood function, effectively transforming practical business logic into a model regularization term. Parameter estimation is achieved by employing a Bayesian inference framework integrated with operational constraints, utilizing Markov Chain Monte Carlo (MCMC) sampling to realize robust posterior inference under small-sample constraints. Empirical analysis demonstrates that the proposed method achieves superior cross-domain data transfer performance compared to traditional baseline models, with a Leave-One-Out Cross-Validation (LOOCV) Mean Absolute Percentage Error (MAPE) of 21.2%. This research provides a practical value-oriented price estimation method for embedded equipment software pricing.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
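
The penalty-modified likelihood described above is straightforward to prototype. Below is a minimal sketch, not the authors' implementation: a toy power-law pricing model, a quadratic penalty standing in for the paper's business-logic term, and a random-walk Metropolis sampler. All data, priors, and penalty weights are illustrative assumptions.

```python
# Random-walk Metropolis over a penalized log-posterior (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 12)                 # capability scores (toy data)
y = 2.0 * x**1.5 + rng.normal(0, 0.1, 12)     # observed prices (toy data)

def log_post(theta):
    a, b = theta
    if a <= 0 or b <= 0:
        return -np.inf
    resid = y - a * x**b
    loglik = -0.5 * np.sum(resid**2) / 0.1**2
    penalty = -5.0 * max(0.0, b - 3.0)**2     # assumed business-logic constraint: b <= 3
    logprior = -0.5 * (a**2 + b**2) / 10.0    # weak Gaussian prior (assumed)
    return loglik + penalty + logprior

theta, samples = np.array([1.0, 1.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)     # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta.copy())
post = np.array(samples[5000:])               # discard burn-in
print("posterior means:", post.mean(axis=0))
```
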
21 pages, 6539 KB  
Article
Molecular Phylogeny, Divergence Time Estimation, and Biogeography of Moelleriella (Clavicipitaceae, Hypocreales) with Taxonomic Insights
by Yongsheng Lin, Jiao Yang, Nemat O. Keyhani, Luxiao Wang, Yuhang Yao, Xiuyan Wei, Feifei Song, Zhenxing Qiu, Shouping Cai, Xiayu Guan, Lin Zhao and Junzhi Qiu
Biology 2026, 15(10), 739; https://doi.org/10.3390/biology15100739 - 7 May 2026
Abstract
The Clavicipitaceae family, including saprobes and insect and myco-pathogens, is widely distributed in nature across various trophic regions and plays important roles in insect population control, plant interactions, and symbiotic evolution. Members of the genus Moelleriella within this family primarily specialize in infecting scale insects and whiteflies. Using five genomic loci (SSU, LSU, tef1-α, rpb1, and rpb2), we report on the inferred divergence times among members of Clavicipitaceae using molecular dating analyses. Molecular clock estimates revealed that the ancestor of Moelleriella likely emerged in the Late Cretaceous (91.60 Mya; 95% highest posterior density of 79.29–100.13 Mya). Historical biogeographic reconstruction of Moelleriella, performed using the Bayesian Binary Markov chain Monte Carlo (BBM) method, indicates that it most likely originated in Asia. Moreover, based on taxonomic and phylogenetic analyses, we describe three species within the genus Moelleriella, including one new species (Moelleriella microstroma) and two new records for China (Moelleriella chiangmaiensis and Moelleriella phukhiaoensis).

35 pages, 11787 KB  
Article
A New One-Parameter Model Supports an Upside-Down Bathtub Failure Rate: Theory, Inference, and Real-World Applications
by Ohud A. Alqasem and Ahmed Elshahhat
Mathematics 2026, 14(9), 1566; https://doi.org/10.3390/math14091566 - 6 May 2026
Abstract
Researchers often develop ordinal hazard distributions, whether increasing or decreasing, into multi-parameter distributions to derive various forms of the hazard function. This process necessitates the formulation of a multi-parameter hazard function, which involves a more complex mathematical expression. In contrast, this study introduces a new one-parameter lifetime model, termed the Inverted Z–Lindley (IZL) distribution, which is capable of capturing an upside-down bathtub-shaped failure rate without sacrificing analytical simplicity. Fundamental distributional properties of the IZL model are rigorously established, including closed-form expressions for the probability density, cumulative distribution, reliability, and hazard rate functions. Theoretical analysis shows that the density is strictly positive, unimodal, positively skewed, and heavy-tailed, while the hazard rate is unimodal with vanishing limits at both extremes. Fractional moments are obtained, and the non-existence of classical moments is formally justified, motivating the use of quantile-based and inactivity-time reliability measures. Besides the quantile function, several key reliability measures, including the mean inactivity time and strong mean inactivity time functions, and order statistics, are also developed. Inferential procedures are constructed under Type-II censoring using both likelihood-based and Bayesian frameworks. The existence and uniqueness of the frequentist estimator are established, while Bayesian estimation is implemented via Markov chain Monte Carlo methods under informative gamma priors. Several interval estimation techniques, including asymptotic, bootstrap, Bayesian credible, and highest posterior density intervals, are developed and compared through extensive Monte Carlo simulations. The practical relevance of the proposed model is demonstrated using real datasets from environmental health and communication engineering, where the IZL distribution consistently outperforms fifteen well-established inverted lifetime models according to likelihood-based criteria, information measures, and goodness-of-fit diagnostics. Overall, the IZL model offers a powerful, interpretable, and computationally efficient alternative for modeling heavy-tailed lifetime data with non-monotone failure behavior, contributing meaningfully to modern distribution theory and applied reliability analysis.
(This article belongs to the Special Issue Computational Statistics: Analysis and Applications for Mathematics)

43 pages, 22952 KB  
Article
Parameters Estimation and Reliability Analysis for Burr XII Distribution Under Adaptive Progressive First-Failure Censoring: Systematic Techniques with Application
by Rashad M. EL-Sagheer, Mohamed H. El-Menshawy, Mahmoud E. Bakr, Noha A. Tashkandi, Oluwafemi Samson Balogun and Mahmoud M. Ramadan
Mathematics 2026, 14(9), 1556; https://doi.org/10.3390/math14091556 - 4 May 2026
Abstract
An adaptive progressive first-failure censoring scheme is used to enhance the efficiency of statistical analyses and minimize test time in life-testing experiments. This paper focuses on statistical inferences for the unknown parameters, survival, and hazard rate functions of the Burr XII distribution under this censoring scheme. Since the maximum likelihood estimates for the model parameters and reliability characteristics cannot be obtained explicitly, the Newton–Raphson method is employed for numerical derivation. The delta method is used to determine the variances of reliability characteristics and is applied to construct confidence intervals. Bayesian estimates of the unknown parameters and reliability characteristics are derived under the squared error and linear exponential loss functions. As these estimates are not explicitly obtainable, the Lindley and Markov chain Monte Carlo methods are used as approximation techniques. Additionally, asymptotic confidence intervals and highest posterior density credible intervals are developed for the parameters and reliability characteristics. A Monte Carlo simulation is performed to evaluate the proposed estimators, and the methodology is validated through a real dataset analysis on arthritic patients.
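
Since the Burr XII likelihood equations have no closed-form solution, the Newton–Raphson step is the workhorse here. A minimal sketch of that iteration on a complete (uncensored) toy Burr XII sample, using finite-difference derivatives rather than the paper's analytic expressions or its censoring scheme:

```python
# Newton-Raphson MLE for Burr XII (c, k) on a complete toy sample (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=200)
c_true, k_true = 2.0, 3.0
x = ((1 - u) ** (-1 / k_true) - 1) ** (1 / c_true)   # Burr XII inverse-CDF sampling

def loglik(p):
    c, k = p
    if c <= 0 or k <= 0:
        return -np.inf
    # log f(x; c, k) = log c + log k + (c - 1) log x - (k + 1) log(1 + x^c)
    return np.sum(np.log(c) + np.log(k) + (c - 1) * np.log(x)
                  - (k + 1) * np.log1p(x ** c))

def grad_hess(p, h=1e-5):
    # central finite differences; the paper would use analytic derivatives
    g, H = np.zeros(2), np.zeros((2, 2))
    for i in range(2):
        ei = np.eye(2)[i] * h
        g[i] = (loglik(p + ei) - loglik(p - ei)) / (2 * h)
        for j in range(2):
            ej = np.eye(2)[j] * h
            H[i, j] = (loglik(p + ei + ej) - loglik(p + ei - ej)
                       - loglik(p - ei + ej) + loglik(p - ei - ej)) / (4 * h ** 2)
    return g, H

p = np.array([1.5, 2.0])                  # starting values (assumed)
for _ in range(50):
    g, H = grad_hess(p)
    step = np.linalg.solve(H, g)
    p = np.maximum(p - step, 1e-3)        # Newton-Raphson step with positivity guard
    if np.max(np.abs(step)) < 1e-8:
        break
print("MLE (c, k):", p)                   # should land near the true (2.0, 3.0)
```
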

64 pages, 788 KB  
Article
From Biased to Unbiased: Theory and Benchmarks for a New Monte Carlo Solver of Fredholm Integral Equations
by Venelin Todorov and Ivan Dimov
Axioms 2026, 15(5), 338; https://doi.org/10.3390/axioms15050338 - 4 May 2026
Abstract
We investigate biased and unbiased Monte Carlo algorithms for solving Fredholm integral equations of the second kind and for estimating linear functionals of their solutions. Fredholm integral equations provide a common mathematical framework in uncertainty quantification, Bayesian inference, physics, finance, engineering modeling, telecommunication systems, signal processing, and other applied problems where system responses depend on distributed, uncertain, or noise-affected inputs. The comparison covers Crude Monte Carlo and Markov Chain Monte Carlo baselines, modified Sobol quasi–Monte Carlo schemes (MSS variants), the classical Unbiased Stochastic Algorithm (USA), and a new variance-controlled unbiased estimator, the Novel Unbiased Stochastic Algorithm (NUSA). NUSA preserves unbiasedness via a randomized-trajectory representation while improving stability through two mechanisms: adaptive absorption control, governed by a parameter P_d that regulates the effective trajectory length, and kernel-weight normalization based on an auxiliary proposal density to curb heavy-tailed weight products. Extensive experiments in one- and multi-dimensional settings (including regular and discontinuous kernels and weak/strong coupling regimes) show that NUSA consistently reduces dispersion and achieves smaller errors than USA under identical sampling budgets. In representative tests, NUSA attains relative errors below 10⁻³ and improves average accuracy by approximately 30–50% compared with USA, while maintaining near-linear runtime scaling in N and competitive scaling with dimension. Although NUSA is moderately more expensive per run than USA, the variance reduction yields a superior accuracy–cost trade-off, especially near strong-coupling regimes and in higher dimensions where standard unbiased estimators become variance-limited.
(This article belongs to the Special Issue Numerical Analysis and Applied Mathematics, 2nd Edition)
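
For readers unfamiliar with unbiased estimators of this type, the classical construction behind the USA baseline can be sketched in a few lines: a random walk with per-step absorption whose accumulated kernel weights reproduce the Neumann series. The toy kernel, source term, and absorption probability below are illustrative assumptions; NUSA's adaptive absorption control and weight normalization are not reproduced.

```python
# Unbiased random-walk ("collision") estimator for u(x) = f(x) + int_0^1 K(x,y) u(y) dy.
# Toy problem: K(x,y) = 0.5*x*y, f(x) = x, whose exact solution is u(x) = 1.2*x.
import numpy as np

rng = np.random.default_rng(2)
K = lambda x, y: 0.5 * x * y
f = lambda x: x
q = 0.5            # absorption probability per step
x0 = 0.5           # evaluation point; exact u(x0) = 0.6

def one_walk():
    x, w, total = x0, 1.0, f(x0)
    while rng.uniform() > q:           # survive with probability 1 - q
        y = rng.uniform()              # transition density p(y) = 1 on [0, 1]
        w *= K(x, y) / ((1.0 - q) * 1.0)   # survival-corrected kernel weight
        total += w * f(y)              # score the collision
        x = y
    return total

est = np.mean([one_walk() for _ in range(200000)])
print(f"estimate {est:.4f} vs exact 0.6000")
```
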

29 pages, 3181 KB  
Article
The Interaction Between Fiscal and Monetary Policy Under Political Turmoil in Myanmar: New Keynesian DSGE Model
by Ai Kar Pao, Charuk Singhapreecha and Nisit Panthamit
Economies 2026, 14(5), 157; https://doi.org/10.3390/economies14050157 - 4 May 2026
Abstract
This paper examines the interaction between fiscal and monetary policies in Myanmar under ongoing political and economic uncertainty. We estimate a small open-economy New Keynesian DSGE model using Bayesian methods, combining the Kalman filter with Markov Chain Monte Carlo sampling on quarterly data from 2013Q1 to 2022Q1. The results show a persistent regime of monetary and fiscal policy conflict. While the central bank follows an active anti-inflationary interest rate rule that satisfies the Taylor principle, fiscal policy shows weak responsiveness to public debt, providing limited fiscal backing for monetary stabilization. As a result, monetary tightening aimed at controlling inflation exacerbates fiscal stress through the debt-service channel, undermining the overall effectiveness of macroeconomic stabilization. Political instability emerges as a key structural driver of macroeconomic fragility. Political shocks are highly persistent and are transmitted primarily through increases in the country risk premium, accounting for more than 50% of real exchange rate volatility and generating exchange rate depreciation, higher inflation, and output contraction. Overall, the findings indicate that monetary tightening alone is insufficient to restore macroeconomic stability in fragile and conflict-affected economies. Credible fiscal adjustment and improvements in political stability are necessary to contain external vulnerabilities and restore the effectiveness of monetary policy.
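
The estimation machinery named above pairs a Kalman filter, which scores each parameter draw, with an MCMC sampler over the structural parameters. A minimal sketch of the filtering step on a scalar AR(1)-plus-noise system (the paper's state space is multivariate; only the recursion is shown, with all values assumed):

```python
# Kalman-filter log-likelihood for a scalar linear Gaussian state space (illustrative).
import numpy as np

def kalman_loglik(y, a, sig_w, sig_v):
    """Log-likelihood of y under x_t = a*x_{t-1} + N(0, sig_w^2),
    y_t = x_t + N(0, sig_v^2), with a diffuse-ish initial state."""
    m, P, ll = 0.0, 1e4, 0.0
    for yt in y:
        m_pred = a * m                     # state prediction
        P_pred = a * a * P + sig_w**2
        S = P_pred + sig_v**2              # innovation variance
        innov = yt - m_pred
        ll += -0.5 * (np.log(2 * np.pi * S) + innov**2 / S)
        Kg = P_pred / S                    # Kalman gain
        m = m_pred + Kg * innov            # measurement update
        P = (1 - Kg) * P_pred
    return ll

rng = np.random.default_rng(3)
x, ys = 0.0, []
for _ in range(200):                       # simulate a toy observed series
    x = 0.9 * x + rng.normal(0, 0.5)
    ys.append(x + rng.normal(0, 0.3))
print(kalman_loglik(np.array(ys), 0.9, 0.5, 0.3))
```
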

4 pages, 874 KB  
Proceeding Paper
Detection of Deteriorated Areas in Water Distribution Networks Exploiting Chlorine Measurements in a Bayesian Framework
by Benedetta Sansone, Alfonso Cozzolino, Roberta Padulano, Cristiana Di Cristo and Giuseppe Del Giudice
Eng. Proc. 2026, 135(1), 7; https://doi.org/10.3390/engproc2026135007 - 29 Apr 2026
Abstract
This study proposes a methodology to identify deteriorated pipes in water distribution networks using prior system information and routine chlorine residual data. While bulk chlorine decay k_bulk can be measured in laboratories, wall decay k_wall depends on pipe material, diameter, and ageing, particularly in unlined metallic pipes. Empirical data were used to estimate k_wall, which was integrated into a Bayesian inference framework solved with Markov Chain Monte Carlo. Applied to an Italian network with synthetic chlorine data, this method demonstrated effectiveness across three test scenarios, exploiting the contrast between k_wall and k_bulk to detect deteriorated pipes within a computationally efficient environment.

29 pages, 1864 KB  
Article
Confidence Intervals for Parameter Variance of Zero-Inflated Two-Parameter Rayleigh Distribution
by Sasipong Kijsason, Sa-Aat Niwitpong and Suparat Niwitpong
Symmetry 2026, 18(5), 765; https://doi.org/10.3390/sym18050765 - 29 Apr 2026
Abstract
This study develops confidence and credible intervals for the variance of the zero-inflated two-parameter Rayleigh distribution, a flexible model for non-negative data with excess zeros. Seven approaches are proposed: Bayesian Markov chain Monte Carlo (MCMC), Bayesian highest posterior density (HPD), the standard confidence interval, the normal approximation, the percentile bootstrap, the bootstrap method with standard error, and the generalized confidence interval (GCI). Their performance is assessed through Monte Carlo simulation using coverage probability (CP) and expected length (EL). The results show that the Bayesian HPD interval performs best overall, attaining coverage close to the nominal level while yielding shorter intervals than the alternatives, especially for small samples. The methods are illustrated with road traffic fatality data from Chiang Mai Province, Thailand, recorded in March 2024. These findings support the practical usefulness of the HPD approach for variance interval estimation in zero-inflated continuous models.
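
The HPD interval that performs best in this comparison is easy to read off MCMC output: among all windows covering the desired fraction of the sorted draws, keep the shortest. A minimal sketch, with a gamma sample standing in for the actual posterior draws:

```python
# Highest posterior density interval from MCMC draws (illustrative sketch).
import numpy as np

def hpd_interval(draws, cred=0.95):
    s = np.sort(np.asarray(draws))
    n = len(s)
    m = int(np.ceil(cred * n))            # number of draws the interval must cover
    widths = s[m - 1:] - s[: n - m + 1]   # width of every candidate window
    i = np.argmin(widths)                 # shortest such window
    return s[i], s[i + m - 1]

rng = np.random.default_rng(4)
draws = rng.gamma(shape=2.0, scale=1.0, size=50_000)  # stand-in posterior sample
lo, hi = hpd_interval(draws)
print(f"95% HPD: ({lo:.3f}, {hi:.3f})")   # shorter than the equal-tailed interval for skewed posteriors
```
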

30 pages, 2325 KB  
Article
Efficient Estimation Methods for the QR Distribution with Type-II Censored Data: An Empirical Validation on Lung Cancer Prognosis
by Qasim Ramzan, Muhammad Amin, Shuhrah Alghamdi and Randa Alharbi
Entropy 2026, 28(5), 502; https://doi.org/10.3390/e28050502 - 29 Apr 2026
Abstract
The QR distribution, recently introduced for modeling lifetime data under Type-II censoring, offers a flexible framework for survival and reliability analysis. This study provides the first comprehensive evaluation of multiple modern estimation techniques for the QR distribution under Type-II censoring. We systematically compare classical maximum likelihood estimation with stochastic gradient descent variants (Momentum and Adam), Bayesian approaches including Maximum A Posteriori estimation, Markov Chain Monte Carlo, and Variational Inference, as well as machine learning-integrated methods such as amortized neural network inference. Using both synthetic data and the real Veterans' Administration Lung Cancer dataset, we evaluate these methods in terms of parameter estimation accuracy, computational efficiency, and convergence behavior. The results demonstrate the strengths of optimization-based, Bayesian, and neural approaches, highlighting their practical utility in handling complex censored survival data. This research validates the distribution's effectiveness in capturing survival dynamics, offering valuable insights for clinical applications and highlighting areas for methodological improvement.
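
As a flavor of the gradient-based estimators compared above, here is a hand-rolled Adam loop maximizing a Type-II censored log-likelihood. An exponential model stands in for the QR distribution (whose density is not reproduced here) so the result can be checked against a known closed-form MLE; the censoring structure, not the density, is the point.

```python
# Adam on a Type-II censored exponential log-likelihood (illustrative stand-in).
import numpy as np

rng = np.random.default_rng(5)
n, r = 100, 60
x = np.sort(rng.exponential(scale=2.0, size=n))[:r]   # observe the r smallest of n

def neg_loglik_grad(lmbda):
    # log L = r*log(lmbda) - lmbda*(sum(x) + (n - r)*x[-1])
    T = x.sum() + (n - r) * x[-1]
    return -(r / lmbda - T)               # d(-log L)/d lmbda

# Adam on the single parameter, optimized in log space for positivity
log_l, m, v = 0.0, 0.0, 0.0
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8
for t in range(1, 2001):
    lmbda = np.exp(log_l)
    g = neg_loglik_grad(lmbda) * lmbda    # chain rule through exp
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    mhat, vhat = m / (1 - beta1**t), v / (1 - beta2**t)
    log_l -= lr * mhat / (np.sqrt(vhat) + eps)

T = x.sum() + (n - r) * x[-1]
print("Adam:", np.exp(log_l), "closed form:", r / T)   # the two should agree
```
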

23 pages, 1815 KB  
Article
Scalable Bayesian–XAI Framework for Multi-Objective Decision-Making in Uncertain Dynamic Systems
by Mostafa Aboulnour Salem and Zeyad Aly Khalil
Algorithms 2026, 19(5), 340; https://doi.org/10.3390/a19050340 - 28 Apr 2026
Abstract
This study proposes a scalable Explainable Artificial Intelligence (XAI)–driven Bayesian–AI decision–control framework for multi-objective optimisation in uncertain and dynamic systems. The framework integrates Bayesian networks, stochastic control, and expected utility theory within a unified probabilistic architecture. Unlike traditional black-box models, the proposed framework provides intrinsic interpretability through probabilistic reasoning and dependency-aware modelling. This allows users to understand how decisions are formed and how variables influence outcomes. To further strengthen explainability, the framework incorporates post hoc XAI techniques, including SHAP-based feature attribution and sensitivity-based local explanations. These methods quantify the contribution of each variable and provide clear explanations at both global and local levels. The system is formulated as a stochastic state-space model and implemented as a closed-loop adaptive architecture. It updates decisions continuously as new data becomes available. Scalable inference is achieved using variational inference, Markov Chain Monte Carlo, and Sequential Monte Carlo methods. This ensures efficient performance in complex and high-dimensional environments. A simulation study based on 370 observations shows that the proposed framework improves decision quality, robustness under uncertainty, and transparency compared to conventional methods. Explainability is evaluated using Fidelity, Stability, and Transparency metrics. The results confirm that the model produces consistent and reliable explanations. The framework supports human-centred decision-making by providing visual analytics and clear probabilistic explanations. This makes it suitable for high-stakes applications such as cyber–physical systems, intelligent platforms, and real-time AI systems. The main contribution of this study is the integration of intrinsic probabilistic interpretability with post hoc XAI techniques into a single, scalable framework. This approach bridges a key gap in XAI research and offers a practical and transparent solution for decision-making under uncertainty.

38 pages, 636 KB  
Article
Interval Estimation for the Difference and Ratio of Variances Under the Zero-Inflated Two-Parameter Rayleigh Distribution
by Sasipong Kijsason, Sa-Aat Niwitpong and Suparat Niwitpong
Mathematics 2026, 14(9), 1440; https://doi.org/10.3390/math14091440 - 24 Apr 2026
Abstract
The zero-inflated two-parameter Rayleigh (ZITR) distribution provides a flexible framework for modeling data with excess zeros and positive observations following a two-parameter Rayleigh distribution. It is particularly suitable for right-skewed data and has applications in areas such as road traffic mortality and survival analysis. This study develops and compares several methods for constructing confidence intervals for the difference and ratio of variances from two independent ZITR populations. The considered methods include Bayesian approaches based on Markov Chain Monte Carlo (MCMC) and highest posterior density (HPD) intervals, as well as the generalized confidence interval (GCI), method of variance estimates recovery (MOVER), approximate normal (AN), percentile bootstrap (PB), and bootstrap with standard error (BS). The performance of these methods is evaluated via Monte Carlo simulations under various parameter settings and sample sizes, using coverage probability and expected interval length as performance criteria. The results indicate that the Bayesian HPD method generally performs well across a wide range of scenarios. A real-data application using road traffic mortality data from January 2025 in Chanthaburi and Narathiwat provinces is also presented, demonstrating the practical usefulness of the proposed approaches for comparing the variance structure between the two regions.
(This article belongs to the Special Issue Statistical Inference: Methods and Applications)
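
Of the methods listed, the percentile bootstrap (PB) is the simplest to sketch: resample each sample with replacement, recompute the variance ratio, and take empirical percentiles. The zero-inflated toy data below are illustrative assumptions, not a fitted ZITR model.

```python
# Percentile bootstrap interval for a ratio of variances (illustrative sketch).
import numpy as np

rng = np.random.default_rng(6)
# zero-inflated toy samples: Rayleigh draws zeroed out with some probability
x1 = rng.rayleigh(scale=2.0, size=80) * (rng.uniform(size=80) > 0.2)
x2 = rng.rayleigh(scale=1.5, size=90) * (rng.uniform(size=90) > 0.3)

B = 5000
ratios = np.empty(B)
for b in range(B):
    r1 = rng.choice(x1, size=x1.size, replace=True)   # resample with replacement
    r2 = rng.choice(x2, size=x2.size, replace=True)
    ratios[b] = r1.var(ddof=1) / r2.var(ddof=1)

lo, hi = np.percentile(ratios, [2.5, 97.5])           # 95% percentile interval
print(f"var ratio: {x1.var(ddof=1) / x2.var(ddof=1):.3f}, 95% PB CI: ({lo:.3f}, {hi:.3f})")
```
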

31 pages, 551 KB  
Article
Frequentist and Bayesian Predictive Inference for the Log-Logistic Distribution Under Progressive Type-II Censoring
by Ziteng Zhang and Wenhao Gui
Entropy 2026, 28(4), 466; https://doi.org/10.3390/e28040466 - 18 Apr 2026
Abstract
This paper investigates the prediction of unobserved future failure times for the heavy-tailed Log-Logistic distribution under Progressive Type-II censoring. We first develop point and interval estimates for the unknown parameters using both frequentist maximum likelihood and Bayesian approaches. For predicting future failures, we derive three distinct point predictors: the Best Unbiased Predictor (BUP), the Conditional Median Predictor (CMP), and the Bayesian Predictor (BP). Corresponding prediction intervals are constructed using frequentist pivotal quantities, Bayesian Equal-Tailed Intervals (ETIs), and Highest Posterior Density (HPD) methods. The Bayesian procedures are implemented via Markov chain Monte Carlo (MCMC) sampling. We evaluate the finite-sample performance of the proposed methodologies through a Monte Carlo simulation study and further validate them using two real-world datasets, namely bladder cancer remission times and guinea pig survival times. The numerical results indicate that the proposed BP, particularly under the empirical prior, provides the most accurate and stable overall performance for point prediction, while the frequentist predictors become less reliable in extreme heavy-tailed settings. For interval prediction, the Bayesian HPD method consistently outperforms the alternatives, substantially reducing interval lengths for right-skewed data while maintaining the nominal coverage probability.

22 pages, 4411 KB  
Article
Mineral Inversion Constrained by Lithofacies for Prediction of Ga-Rich Laminations in Coal Seams from the Haerwusu Mine, Jungar Coalfield
by Wan Li, Tongjun Chen, Xuanyu Liu, Haicheng Xu and Haiyang Yin
Minerals 2026, 16(4), 387; https://doi.org/10.3390/min16040387 - 7 Apr 2026
Abstract
Gallium (Ga) in coal is a nationally emerging strategic mineral resource, yet research on using petrophysical methods to detect the spatial variation in critical metals in coal seams remains limited. Analyzing the distribution characteristics of Ga-rich coal using geophysical well-logging methods is of great significance for the development and utilization of Ga. This study introduces a quantitative method for predicting Ga-rich laminations in ultra-thick bituminous coal seams by integrating: (i) wireline-log-based lithofacies classification, (ii) lithofacies-constrained mineral inversion, and (iii) lithofacies-constrained and laboratory-established Ga–mineral correlations. The coal seam was first classified into four distinct lithofacies types, (i) parting, (ii) medium-ash coal (MA), (iii) low-ash coal (LA), and (iv) extra-low-ash coal (ELA), through integration of conventional wireline log interpretation, cluster analysis, and XGBoost machine learning. Second, lithofacies-constrained Ga–host mineral associations were established by integrating core sample analysis, correlation analysis, and linear regression modeling. Third, mineral content predictions for each lithofacies were obtained through wireline-log-based mineral inversion, constrained by petrophysical boundaries. Finally, prediction uncertainties were evaluated using Markov Chain Monte Carlo (MCMC) simulation, while Ga-rich laminations were predicted by integrating log-derived mineral inversion results with regressed Ga prediction models. The results demonstrate strong agreement between mineral inversion and XRD analyses within uncertainty ranges, achieving a prediction accuracy of 73.6% for Ga. This validated methodology presents a novel approach for quantifying Ga concentrations in coal, as demonstrated through a case study.
(This article belongs to the Section Mineral Exploration Methods and Applications)

15 pages, 1217 KB  
Article
Detecting Phase Transitions from Data Using Generative Learning
by Xiyu Zhou, Yan Mi and Pan Zhang
Entropy 2026, 28(4), 406; https://doi.org/10.3390/e28040406 - 3 Apr 2026
Abstract
Identifying phase transitions in complex many-body systems traditionally necessitates the definition of specific order parameters, a task often requiring prior knowledge of the statistical model and the symmetry-breaking mechanism. In this work, we propose a framework for detecting phase transitions directly from raw (experimental) data without requiring knowledge of the underlying model Hamiltonian, parameters, or pre-defined labels. Inspired by generative modeling in machine learning, our method utilizes autoregressive networks to estimate the normalized probability distribution of the system from raw configuration data. We then quantify the intrinsic sensitivity of this learned distribution to control parameters (such as temperature) to construct a robust indicator of phase transitions. This indicator is based on the expectation of the change in absolute logarithmic probability, derived entirely from the raw data. Our approach is purely data-driven: it takes raw data across varying control parameters as input and outputs the most likely estimate of the phase transition point. To validate our approach, we conduct extensive numerical experiments on the 2D Ising model on both triangular and square lattices, and on the Sherrington–Kirkpatrick (SK) model utilizing raw data generated via Markov Chain Monte Carlo and Tensor Network methods. The results demonstrate that our generative approach accurately identifies phase transitions using only raw data. Our framework provides a general tool for exploring critical phenomena in model systems, with the potential to be extended to realistic experimental data where theoretical descriptions remain incomplete.
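
The indicator described above, the expected absolute change in log-probability under models fitted at neighboring control parameters, can be sketched with a stand-in generative model whose log-probabilities are exact. Here an independent-spin toy with a sharp crossover near T = 1 replaces the trained autoregressive network; everything below is an illustrative assumption.

```python
# Transition indicator E|log p_{k+1}(s) - log p_k(s)| over samples at each T_k (sketch).
import numpy as np

rng = np.random.default_rng(7)
Ts = np.linspace(0.5, 1.5, 21)
N = 64                                    # spins per configuration

def field(T):                             # toy effective field, changing sharply at T = 1
    return 2.0 * np.tanh(20.0 * (1.0 - T))

def logp(s, T):                           # exact log-prob of +/-1 spins, independent model
    h = field(T)
    return np.sum(h * s - np.logaddexp(h, -h), axis=-1)

def sample(T, n=2000):                    # stand-in for "raw data" at control parameter T
    p_up = 1.0 / (1.0 + np.exp(-2.0 * field(T)))
    return np.where(rng.uniform(size=(n, N)) < p_up, 1.0, -1.0)

indicator = []
for k in range(len(Ts) - 1):
    s = sample(Ts[k])
    indicator.append(np.mean(np.abs(logp(s, Ts[k + 1]) - logp(s, Ts[k]))))
print("estimated transition near T =", Ts[np.argmax(indicator)])
```
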

51 pages, 2241 KB  
Review
Mathematical Analysis Methods for Quantitative Scenario Generation of Renewable Power Output: A Comprehensive Review
by Tong Ma, Boyu Qin, Shidong Hong and Yiwei Su
Energies 2026, 19(7), 1701; https://doi.org/10.3390/en19071701 - 31 Mar 2026
Abstract
As the proportion of renewable power continues to increase, its inherent intermittency and volatility pose serious challenges to the security and stability of power systems. Scenario generation technology serves as a key tool supporting decision-making methods such as stochastic optimization and risk analysis. By generating representative power output scenarios, it can effectively characterize the uncertainty of renewable power output. This paper systematically reviews mainstream methods for the scenario generation of renewable power output, categorizing them into two major classes: sampling-based methods and model-based methods. Among them, sampling-based methods include Monte Carlo sampling, Latin hypercube sampling (LHS), Markov chains (MCs), and Copula functions. Model-based methods encompass artificial neural networks (ANNs), long short-term memory networks (LSTMs), autoregressive moving average models (ARMAs), generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, and transformer-based models. This paper elaborates on the principles and characteristics of each type of method. Moreover, scenario quality is evaluated from three dimensions: output-based metrics for numerical accuracy, distribution-based metrics for statistical consistency, and event-based metrics for key operational event representation. The current research challenges and future research directions are also summarized to provide a reference for modeling the uncertainty of renewable output.