Search Results (16)

Search Parameters:
Keywords = Full Bayesian Significance Test

43 pages, 4725 KB  
Article
Graph-FEM/ML Framework for Inverse Load Identification in Thick-Walled Hyperelastic Pressure Vessels
by Nasser Firouzi, Ramy M. Hafez, Kareem N. Salloomi, Mohamed A. Abdelkawy and Raja Rizwan Hussain
Symmetry 2025, 17(12), 2021; https://doi.org/10.3390/sym17122021 - 23 Nov 2025
Cited by 1 | Viewed by 784
Abstract
The accurate identification of internal and external pressures in thick-walled hyperelastic vessels is a challenging inverse problem with significant implications for structural health monitoring, biomedical devices, and soft robotics. Conventional analytical and numerical approaches address the forward problem effectively but offer limited means for recovering unknown load conditions from observable deformations. In this study, we introduce a Graph-FEM/ML framework that couples high-fidelity finite element simulations with machine learning models to infer normalized internal and external pressures from measurable boundary deformations. A dataset of 1386 valid samples was generated through Latin Hypercube Sampling of geometric and loading parameters and simulated using finite element analysis with a Neo-Hookean constitutive model. Two complementary neural architectures were explored: graph neural networks (GNNs), which operate directly on resampled and feature-enriched boundary data, and convolutional neural networks (CNNs), which process image-based representations of undeformed and deformed cross-sections. The GNN models consistently achieved low root-mean-square errors (≈0.021) and stable correlations across training, validation, and test sets, particularly when augmented with displacement and directional features. In contrast, CNN models exhibited limited predictive accuracy: quarter-section inputs regressed toward mean values, while full-ring and filled-section inputs improved after Bayesian optimization but remained inferior to GNNs, with higher RMSEs (0.023–0.030) and modest correlations (R2). To the best of our knowledge, this is the first work to combine boundary deformation observations with graph-based learning for inverse load identification in hyperelastic vessels. The results highlight the advantages of boundary-informed GNNs over CNNs and establish a reproducible dataset and methodology for future investigations. 
This framework represents an initial step toward a new direction in mechanics-informed machine learning, with the expectation that future research will refine and extend the approach to improve accuracy, robustness, and applicability in broader engineering and biomedical contexts. Full article
(This article belongs to the Special Issue Symmetries in Machine Learning and Artificial Intelligence)

35 pages, 17372 KB  
Article
Machine Learning Prediction on Progressive Collapse Resistance of Purely Welded Steel Frames Considering Weld Defects
by Zikang Guo, Peng Yu, Xinheng Huang, Yingkang Yao and Chunwei Zhang
Buildings 2025, 15(22), 4174; https://doi.org/10.3390/buildings15224174 - 19 Nov 2025
Viewed by 617
Abstract
This study proposes a machine learning (ML) framework to predict the progressive collapse resistance of purely welded steel frames considering weld defects. A finite element model (FEM) incorporating weld weakening degree at joints was developed and validated against push-down tests. A parametric modelling program, combined with Latin Hypercube Sampling (LHS), was used to generate 700 samples from 27 design features across 8 categories, establishing a progressive collapse database containing full-process resistance curves. Five ML algorithms—DNN, SVR, RF, XGBoost, and LightGBM—were trained and evaluated. SVR was identified as the optimal model through Bayesian hyperparameter optimization and K-fold cross-validation, achieving an R2 = 0.988 and sMAPE = 5.096% in predicting the full-process resistance response. SHAP analysis was employed to examine feature interpretations both locally and globally, revealing that the failure scenario, beam span-to-height ratio, and weld quality are the three most significant factors affecting structural resistance, accounting for 22.6%, 22.5%, and 16% of the overall influence, respectively. For practical design, a steel frame with a beam span-to-height ratio of approximately 15, a weld joint relative position ratio between 0.15 and 0.18, a circular stub diameter-to-beam width ratio around 1.8, and a stub diameter-to-thickness ratio near 13 can achieve superior progressive collapse robustness, provided that weld quality is ensured. Full article
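The database-generation step described above, Latin Hypercube Sampling over a bounded design space, can be sketched with SciPy's QMC module. The feature count and parameter bounds below are illustrative stand-ins, not the paper's actual 27 design features:

```python
# Hedged sketch of LHS design generation; bounds and feature names are
# hypothetical, only the sample count (700) comes from the abstract.
import numpy as np
from scipy.stats import qmc

n_samples = 700      # matches the sample count quoted in the abstract
n_features = 4       # illustrative subset, not the paper's 27 features

sampler = qmc.LatinHypercube(d=n_features, seed=0)
unit_samples = sampler.random(n=n_samples)   # stratified points in [0, 1)^d

# Hypothetical bounds for four design parameters (e.g., span-to-height
# ratio, weld position ratio, diameter ratios).
lower = np.array([10.0, 0.10, 1.0, 8.0])
upper = np.array([20.0, 0.20, 2.5, 18.0])
design = qmc.scale(unit_samples, lower, upper)

print(design.shape)  # (700, 4)
```

Each column of `design` is stratified so that every 1/700-width slice of its range contains exactly one sample, which is what makes LHS more space-filling than plain random sampling for a fixed budget.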

10 pages, 632 KB  
Proceeding Paper
Nonparametric Full Bayesian Significance Testing for Bayesian Histograms
by Fernando Corrêa, Julio Michael Stern and Rafael Bassi Stern
Phys. Sci. Forum 2025, 12(1), 11; https://doi.org/10.3390/psf2025012011 - 20 Oct 2025
Viewed by 457
Abstract
In this article, we present an extension of the Full Bayesian Significance Test (FBST) for nonparametric settings, termed NP-FBST, which is constructed using the limit of finite dimension histograms. The test statistics for NP-FBST are based on a plug-in estimate of the cross-entropy between the null hypothesis and a histogram. This method shares similarities with Kullback–Leibler and entropy-based goodness-of-fit tests, but it can be applied to a broader range of hypotheses and is generally less computationally intensive. We demonstrate that when the number of histogram bins increases slowly with the sample size, the NP-FBST is consistent for Lipschitz continuous data-generating densities. Additionally, we propose an algorithm to optimize the NP-FBST. Through simulations, we compare the performance of the NP-FBST to traditional methods for testing uniformity. Our results indicate that the NP-FBST is competitive in terms of power, even surpassing the most powerful likelihood-ratio-based procedures for very small sample sizes. Full article
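A statistic in the family the abstract describes, a plug-in divergence between a histogram estimate and the null density, can be sketched as follows. This illustrates the general construction only; it is not the paper's exact NP-FBST test statistic:

```python
# Plug-in KL-type statistic between a histogram and a null density
# (illustrative only; the NP-FBST statistic itself is more involved).
import numpy as np

def plugin_kl_to_null(x, null_pdf, n_bins, support=(0.0, 1.0)):
    """KL divergence from the histogram bin probabilities to the null
    bin probabilities, a plug-in measure of departure from the null."""
    counts, edges = np.histogram(x, bins=n_bins, range=support)
    p_hat = counts / counts.sum()              # empirical bin probabilities
    mids = 0.5 * (edges[:-1] + edges[1:])
    p_null = null_pdf(mids) * np.diff(edges)   # null bin probabilities
    mask = p_hat > 0                           # empty bins contribute zero
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / p_null[mask])))

rng = np.random.default_rng(0)
uniform_null = lambda t: np.ones_like(t)       # Uniform(0, 1) null density

d_unif = plugin_kl_to_null(rng.uniform(size=2000), uniform_null, n_bins=20)
d_beta = plugin_kl_to_null(rng.beta(2, 5, size=2000), uniform_null, n_bins=20)
# d_unif stays near zero; d_beta is clearly larger for non-uniform data.
```

The abstract's consistency condition (bins growing slowly with sample size) corresponds to choosing `n_bins` as a slowly increasing function of `len(x)` rather than a constant.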

10 pages, 761 KB  
Proceeding Paper
Nonparametric FBST for Validating Linear Models
by Rodrigo F. L. Lassance, Julio M. Stern and Rafael B. Stern
Phys. Sci. Forum 2025, 12(1), 2; https://doi.org/10.3390/psf2025012002 - 24 Sep 2025
Viewed by 502
Abstract
In Bayesian analysis, testing for linearity requires placing a prior on the entire space of potential regression functions. This poses a problem for many standard tests, as assigning positive prior probability to such a hypothesis is challenging. The Full Bayesian Significance Test (FBST) sidesteps this issue, standing out for also being logically coherent and offering a measure of evidence against H0, although its application to nonparametric settings is still limited. In this work, we use Gaussian process priors to derive FBST procedures that evaluate general linearity assumptions, such as testing the adherence of data to linear models and performing variable selection among them. We also make use of pragmatic hypotheses to verify whether the data might be compatible with a linear model once factors such as measurement errors or utility judgments are accounted for. This contribution extends the theory of the FBST, allowing its application in nonparametric settings while requiring, at most, simple optimization procedures to reach the desired conclusion. Full article

12 pages, 349 KB  
Article
Surrogate Model for In-Medium Similarity Renormalization Group Method Using Dynamic Mode Decomposition
by Sota Yoshida
Particles 2025, 8(1), 13; https://doi.org/10.3390/particles8010013 - 9 Feb 2025
Viewed by 1591
Abstract
I propose a data-driven surrogate model for the In-Medium Similarity Renormalization Group (IMSRG) method using Dynamic Mode Decomposition (DMD). First, the Magnus formulation of the IMSRG is leveraged to represent the unitary transformation of many-body operators of interest. Then, snapshots of these operators at different flow parameters are decomposed by DMD to approximate the IMSRG flow in a latent space. The resulting emulator accurately reproduces the asymptotic flow behavior while lowering computational costs. I demonstrate that the DMD-based emulator results in a three to five times speedup compared to the full IMSRG calculation in a few test cases based on the ground state properties of 56Ni, 16O, and 40Ca in realistic nuclear interactions. While this is still not an acceleration that is significant enough to enable us to fully quantify, e.g., statistical uncertainties using Bayesian methods, this work offers a starting point for constructing efficient surrogate models for the IMSRG. Full article
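The core DMD step the emulator relies on, fitting a linear operator to snapshot pairs via a truncated SVD, can be sketched on toy data (a known 2-D linear system, not IMSRG snapshots):

```python
# Minimal exact-DMD sketch on a toy linear system; not the paper's
# Magnus/IMSRG setup, just the underlying decomposition.
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD of a snapshot matrix with shape (n_dim, n_times)."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Reduced operator A_tilde = U* Y V S^{-1}; division by s scales columns.
    A_tilde = (U.conj().T @ Y @ Vh.conj().T) / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.conj().T / s) @ W     # exact DMD modes
    return eigvals, modes

# Toy dynamics x_{k+1} = A x_k; triangular A has eigenvalues 0.9 and 0.5.
A = np.array([[0.9, -0.4],
              [0.0,  0.5]])
snaps = np.empty((2, 20))
snaps[:, 0] = [1.0, -1.0]
for k in range(19):
    snaps[:, k + 1] = A @ snaps[:, k]

eigvals, modes = dmd(snaps, rank=2)
# The DMD eigenvalues recover the true dynamics eigenvalues {0.9, 0.5}.
```

Once the eigenvalues and modes are known, the state at any later "time" is a cheap sum of exponentially weighted modes, which is the source of the speedup the abstract reports.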

16 pages, 1051 KB  
Article
Risk of Microvascular Complications in Newly Diagnosed Type 2 Diabetes Patients Using Automated Machine Learning Prediction Models
by Amar Khamis, Fatima Abdul, Stafny Dsouza, Fatima Sulaiman, Muhammad Farooqi, Fatheya Al Awadi, Mohammed Hassanein, Fayha Salah Ahmed, Mouza Alsharhan, Ayesha AlOlama, Noorah Ali, Aaesha Abdulaziz, Alia Mohammad Rafie, Nandu Goswami and Riad Bayoumi
J. Clin. Med. 2024, 13(23), 7422; https://doi.org/10.3390/jcm13237422 - 5 Dec 2024
Cited by 2 | Viewed by 2527
Abstract
Background/Objectives: In type 2 diabetes (T2D), collective damage to the eyes, kidneys, and peripheral nerves constitutes microvascular complications, which significantly affect patients’ quality of life. This study aimed to prospectively evaluate the risk of microvascular complications in newly diagnosed T2D patients in Dubai, UAE. Methods: Supervised automated machine learning in the Auto-Classifier model of the IBM SPSS Modeler package was used to predict microvascular complications in a training data set of 348 long-term T2D patients with complications using 24 independent variables as predictors and complications as targets. Three automated model scenarios were tested: Full All-Variable Model; Univariate-Selected Model, and Backward Stepwise Logistic Regression Model. An independent cohort of 338 newly diagnosed T2D patients with no complications was used for the model validation. Results: Long-term T2D patients with complications (duration = ~14.5 years) were significantly older (mean age = 56.3 ± 10.9 years) than the newly diagnosed patients without complications (duration = ~2.5 years; mean age = 48.9 ± 9.6 years). The Bayesian Network was the most reliable algorithm for predicting microvascular complications in all three scenarios with an area under the curve (AUC) of 77–87%, accuracy of 68–75%, sensitivity of 86–95%, and specificity of 53–75%. Among newly diagnosed T2D patients, 22.5% were predicted positive and 49.1% negative across all models. Logistic regression applied to the 16 significant predictors between the two sub-groups showed that BMI, HDL, adjusted for age at diagnosis of T2D, age at visit, and urine albumin explained >90% of the variation in microvascular measures. Conclusions: the Bayesian Network model effectively predicts microvascular complications in newly diagnosed T2D patients, highlighting the significant roles of BMI, HDL, age at diagnosis, age at visit, and urine albumin. Full article
(This article belongs to the Section Endocrinology & Metabolism)

20 pages, 1088 KB  
Article
Model-Based Sequential Design of Experiments with Machine Learning for Aerospace Systems
by Tim Gerling, Kai Dresia, Jan Deeken and Günther Waxenegger-Wilfing
Aerospace 2024, 11(11), 934; https://doi.org/10.3390/aerospace11110934 - 11 Nov 2024
Cited by 1 | Viewed by 2055
Abstract
Traditional experimental design methods often face challenges in handling complex aerospace systems due to the high dimensionality and nonlinear behavior of such systems, resulting in nonoptimal experimental designs. To address these challenges, machine learning techniques can be used to further increase the application areas of modern Bayesian Optimal Experimental Design (BOED) approaches, enhancing their efficiency and accuracy. The proposed method leverages neural networks as surrogate models to approximate the underlying physical processes, thereby reducing computational costs and allowing for full differentiability. Additionally, the use of reinforcement learning enables the optimization of sequential designs and essential real-time capability. Our framework is validated by optimizing experimental designs that are used for the efficient characterization of turbopumps for liquid propellant rocket engines. The reinforcement learning approach yields superior results in terms of the expected information gain related to a sequence of 15 experiments, exhibiting mean performance increases of 9.07% compared to random designs and 6.47% compared to state-of-the-art approaches. Therefore, the results demonstrate significant improvements in experimental efficiency and accuracy compared to conventional methods. This work provides a robust framework for the application of advanced BOED methods in aerospace testing, with implications for broader engineering applications. Full article
(This article belongs to the Section Aeronautics)

15 pages, 1540 KB  
Article
Objective Priors for Invariant e-Values in the Presence of Nuisance Parameters
by Elena Bortolato and Laura Ventura
Entropy 2024, 26(1), 58; https://doi.org/10.3390/e26010058 - 9 Jan 2024
Viewed by 1878
Abstract
This paper aims to contribute to refining the e-values for testing precise hypotheses, especially when dealing with nuisance parameters, leveraging the effectiveness of asymptotic expansions of the posterior. The proposed approach offers the advantage of bypassing the need for elicitation of priors and reference functions for the nuisance parameters and the multidimensional integration step. For this purpose, starting from a Laplace approximation, a posterior distribution for the parameter of interest is only considered and then a suitable objective matching prior is introduced, ensuring that the posterior mode aligns with an equivariant frequentist estimator. Consequently, both Highest Probability Density credible sets and the e-value remain invariant. Some targeted and challenging examples are discussed. Full article
(This article belongs to the Special Issue Bayesianism)

20 pages, 557 KB  
Article
Adaptive Significance Levels in Tests for Linear Regression Models: The e-Value and P-Value Cases
by Alejandra E. Patiño Hoyos, Victor Fossaluza, Luís Gustavo Esteves and Carlos Alberto de Bragança Pereira
Entropy 2023, 25(1), 19; https://doi.org/10.3390/e25010019 - 22 Dec 2022
Cited by 3 | Viewed by 3024
Abstract
The full Bayesian significance test (FBST) for precise hypotheses is a Bayesian alternative to the traditional significance tests based on p-values. The FBST is characterized by the e-value as an evidence index in favor of the null hypothesis (H). An important practical issue for the implementation of the FBST is to establish how small the evidence against H must be in order to decide for its rejection. In this work, we present a method to find a cutoff value for the e-value in the FBST by minimizing the linear combination of the averaged type-I and type-II error probabilities for a given sample size and also for a given dimensionality of the parameter space. Furthermore, we compare our methodology with the results obtained from the test with adaptive significance level, which presents the capital-P P-value as a decision-making evidence measure. For this purpose, the scenario of linear regression models with unknown variance under the Bayesian approach is considered. Full article
(This article belongs to the Special Issue Data Science: Measuring Uncertainties II)
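For readers unfamiliar with the e-value this paper thresholds, a minimal Monte Carlo sketch in the simplest conjugate setting (testing the precise hypothesis H: mu = 0 under a normal posterior) is given below. The posterior parameters are hypothetical, and the paper's linear-regression setting is not reproduced:

```python
# Hedged sketch of the FBST e-value for a precise hypothesis under a
# normal posterior; the loc/scale values are illustrative assumptions.
import numpy as np
from scipy import stats

posterior = stats.norm(loc=0.3, scale=0.15)   # hypothetical posterior for mu

# H: {mu = 0} is a single point, so the supremum of the posterior
# density over H is just the density at zero.
sup_density_on_H = posterior.pdf(0.0)

# Tangential set T: parameter values with posterior density above that
# supremum. The e-value is the posterior mass OUTSIDE T, estimated here
# by Monte Carlo; small e-values are evidence against H.
rng = np.random.default_rng(2)
draws = posterior.rvs(size=200_000, random_state=rng)
e_value = float(np.mean(posterior.pdf(draws) <= sup_density_on_H))
```

For this symmetric posterior the e-value coincides with the two-sided tail probability 2*Phi(-0.3/0.15), roughly 0.046; the paper's contribution is precisely how to pick the cutoff below which such a value warrants rejecting H.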

14 pages, 3144 KB  
Article
Prediction of Casing Collapse Strength Based on Bayesian Neural Network
by Dongfeng Li, Heng Fan, Rui Wang, Shangyu Yang, Yating Zhao and Xiangzhen Yan
Processes 2022, 10(7), 1327; https://doi.org/10.3390/pr10071327 - 6 Jul 2022
Cited by 2 | Viewed by 2180
Abstract
With the adoption of complex fracturing and other demanding completion technologies, external extrusion has become the main cause of casing damage, driving the continued use of non-API high-collapse-resistance casing in unconventional oil and gas exploitation. Because the collapse strength of such casing is strongly sensitive to string ovality, uneven wall thickness, residual stress, and other factors, the API formula predicts its collapse strength with large errors. A Bayesian regularization artificial neural network (BRANN) is therefore used to predict the external collapse strength of high-collapse-resistance casing. Full-scale physical data, including initial defect measurements, geometric dimensions, and mechanical parameters, were collected and preprocessed to build a casing collapse strength data set for model training and blind testing. A classical three-layer neural network was trained with the Bayesian regularization algorithm, and an empirical formula combined with trial and error indicated that a model with 12 hidden neurons gives the best predictions for high-collapse-resistance casing. On the blind test data, the BRANN predictions of casing collapse strength achieved a coincidence rate of 96.67%. Error analysis against the API formula and against a KT formula refined by least-squares fitting shows that the BRANN-based prediction has higher accuracy and stability. Compared with traditional prediction methods, the model can be applied to casing strength prediction under more complicated working conditions and offers useful practical guidance. Full article
(This article belongs to the Section Process Control and Monitoring)

16 pages, 5782 KB  
Article
Bovine Milk Fat Intervention in Early Life and Its Impact on Microbiota, Metabolites and Clinical Phenotype: A Multi-Omics Stacked Regularization Approach
by João Pereira, Lucas R. F. Bresser, Natal van Riel, Ellen Looijesteijn, Ruud Schoemaker, Laurien H. Ulfman, Prescilla Jeurink, Eva Karaglani, Yannis Manios, Rutger W. W. Brouwer, Wilfred F. J. van Ijcken and Evgeni Levin
BioMedInformatics 2022, 2(2), 281-296; https://doi.org/10.3390/biomedinformatics2020018 - 24 May 2022
Cited by 2 | Viewed by 2680
Abstract
The integration and analysis of multi-omics modalities is an important challenge in bioinformatics and data science in general. A standard approach is to conduct a series of univariate tests to determine the significance for each parameter, but this underestimates the connected nature of biological data and thus increases the number of false-negative errors. To mitigate this issue and to understand how different omics’ data domains are jointly affected, we used the Stacked Regularization model with Bayesian optimization over its full parameter space. We applied this approach to a multi-omics data set consisting of microbiota, metabolites and clinical data from two recent clinical studies aimed at detecting the impact of replacing part of the vegetable fat in infant formula with bovine milk fat on healthy term infants. We demonstrate how our model achieves a high discriminative performance, show the advantages of univariate testing and discuss the detected outcome in its biological context. Full article

10 pages, 557 KB  
Proceeding Paper
On Two Measure-Theoretic Aspects of the Full Bayesian Significance Test for Precise Bayesian Hypothesis Testing
by Riko Kelter
Phys. Sci. Forum 2021, 3(1), 10; https://doi.org/10.3390/psf2021003010 - 17 Dec 2021
Viewed by 1781
Abstract
The Full Bayesian Significance Test (FBST) has been proposed as a convenient method to replace frequentist p-values for testing a precise hypothesis. Although the FBST enjoys various appealing properties, the purpose of this paper is to investigate two aspects of the FBST which are sometimes observed as measure-theoretic inconsistencies of the procedure and have not been discussed rigorously in the literature. First, the FBST uses the posterior density as a reference for judging the Bayesian statistical evidence against a precise hypothesis. However, under absolutely continuous prior distributions, the posterior density is defined only up to Lebesgue null sets which renders the reference criterion arbitrary. Second, the FBST statistical evidence seems to have no valid prior probability. It is shown that the former aspect can be circumvented by fixing a version of the posterior density before using the FBST, and the latter aspect is based on its measure-theoretic premises. An illustrative example demonstrates the two aspects and their solution. Together, the results in this paper show that both of the two aspects which are sometimes observed as measure-theoretic inconsistencies of the FBST are not tenable. The FBST thus provides a measure-theoretically coherent Bayesian alternative for testing a precise hypothesis. Full article

23 pages, 664 KB  
Article
Cointegration and Unit Root Tests: A Fully Bayesian Approach
by Marcio A. Diniz, Carlos A. B. Pereira and Julio M. Stern
Entropy 2020, 22(9), 968; https://doi.org/10.3390/e22090968 - 31 Aug 2020
Cited by 6 | Viewed by 4343
Abstract
To perform statistical inference for time series, one should be able to assess if they present deterministic or stochastic trends. For univariate analysis, one way to detect stochastic trends is to test if the series has unit roots, and for multivariate studies it is often relevant to search for stationary linear relationships between the series, or if they cointegrate. The main goal of this article is to briefly review the shortcomings of unit root and cointegration tests proposed under the Bayesian approach to statistical inference and to show how they can be overcome by the Full Bayesian Significance Test (FBST), a procedure designed to test sharp or precise hypotheses. We compare its performance with the most used frequentist alternatives, namely the Augmented Dickey–Fuller test for unit roots and the maximum eigenvalue test for cointegration. Full article
(This article belongs to the Special Issue Data Science: Measuring Uncertainties)

6 pages, 229 KB  
Article
The Structure of the Class of Maximum Tsallis–Havrda–Chavát Entropy Copulas
by Jesús E. García, Verónica A. González-López and Roger B. Nelsen
Entropy 2016, 18(7), 264; https://doi.org/10.3390/e18070264 - 19 Jul 2016
Cited by 4 | Viewed by 5411
Abstract
A maximum entropy copula is the copula associated with the joint distribution, with prescribed marginal distributions on [0, 1], which maximizes the Tsallis–Havrda–Chavát entropy with q = 2. We find necessary and sufficient conditions for each maximum entropy copula to be a copula in the class introduced in Rodríguez-Lallena and Úbeda-Flores (2004), and we also show that each copula in that class is a maximum entropy copula. Full article
(This article belongs to the Special Issue Statistical Significance and the Logic of Hypothesis Testing)
20 pages, 373 KB  
Article
Bayesian Test of Significance for Conditional Independence: The Multinomial Model
by Pablo De Morais Andrade, Julio Michael Stern and Carlos Alberto De Bragança Pereira
Entropy 2014, 16(3), 1376-1395; https://doi.org/10.3390/e16031376 - 7 Mar 2014
Cited by 6 | Viewed by 7728
Abstract
Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypothesis, as an alternative to the frequentist’s significance tests (characterized by the calculation of the p-value). Full article
