Bayesian Inference, Prediction and Model Selection

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Probability and Statistics".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 2993

Special Issue Editor

Department of Statistics, St. Anthony's College, Shillong 793001, India
Interests: Bayesian inference; classical inference; reliability theory; statistical quality control; distribution theory

Special Issue Information

Dear Colleagues,

The applicability of Bayesian methods in statistics and other sciences has increased markedly over the past few decades, driven by the availability of large datasets and the complexity of modern models. In many respects, Bayesian methods outperform classical statistical methods, especially when dealing with uncertainty or when future observations are not yet available. In such scenarios, predictive performance can be estimated using an assumed model for future observations, and a set of candidate models can be compared according to their expected predictive performance. Model selection is therefore an important part of tackling practical modeling problems; however, selecting a more restricted model often means ignoring uncertainties inherent in the initial model specification.
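
As a concrete illustration of comparing candidate models by their expected predictive performance, the following Python sketch scores two simple conjugate Gaussian models by the summed log posterior-predictive density of held-out observations. The data, priors, and known-variance simplification are illustrative assumptions, not drawn from any particular submission.

    # Minimal sketch: score two candidate models by the log posterior-predictive
    # density of held-out data (higher is better). Priors and data are assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    y = rng.normal(loc=0.7, scale=1.0, size=200)      # synthetic data
    y_train, y_test = y[:150], y[150:]
    sigma = 1.0                                       # observation sd, assumed known

    def log_pred_density(y_tr, y_te, prior_mean, prior_sd):
        # Conjugate Normal model for the mean: summed log posterior-predictive
        # density of the held-out points y_te.
        n = len(y_tr)
        post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
        post_mean = post_var * (prior_mean / prior_sd**2 + y_tr.sum() / sigma**2)
        pred_sd = np.sqrt(sigma**2 + post_var)        # posterior-predictive sd
        return stats.norm.logpdf(y_te, loc=post_mean, scale=pred_sd).sum()

    # Model A: a restricted model with the mean fixed at zero
    lpd_restricted = stats.norm.logpdf(y_test, loc=0.0, scale=sigma).sum()
    # Model B: the mean given a vague Normal(0, 10^2) prior, updated on y_train
    lpd_full = log_pred_density(y_train, y_test, prior_mean=0.0, prior_sd=10.0)

    print(f"restricted model lpd: {lpd_restricted:.2f}")
    print(f"full model lpd:       {lpd_full:.2f}")

The model with the higher expected log predictive density better anticipates future observations, while the restricted model illustrates how ignoring part of the specification can cost predictive accuracy.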

We invite our colleagues to submit papers on topics related to the following keywords, as well as on any closely related field of study not specifically mentioned here.

Dr. Sanku Dey
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bayesian estimation
  • Bayesian predictive inference
  • Bayesian model selection

Published Papers (4 papers)


Research

23 pages, 1767 KiB  
Article
Simultaneous Bayesian Clustering and Model Selection with Mixture of Robust Factor Analyzers
by Shan Feng, Wenxian Xie and Yufeng Nie
Mathematics 2024, 12(7), 1091; https://doi.org/10.3390/math12071091 - 4 Apr 2024
Viewed by 473
Abstract
Finite Gaussian mixture models are powerful tools for modeling distributions of random phenomena and are widely used for clustering tasks. However, their interpretability and efficiency are often degraded by the impact of redundancy and noise, especially on high-dimensional datasets. In this work, we propose a generative graphical model for parsimonious modeling of the Gaussian mixtures and robust unsupervised learning. The model assumes that the data are generated independently and identically from a finite mixture of robust factor analyzers, where the features’ salience is adjusted by an active set of latent factors to allow a violation of the local independence assumption. For the model inference, we propose a structured variational Bayes inference framework to realize simultaneous clustering, model selection and outlier processing. Performance of the proposed algorithm is evaluated by conducting experiments on artificial and real-world datasets. Moreover, an application on the high-dimensional machine learning task of handwritten alphabet recognition is introduced.
(This article belongs to the Special Issue Bayesian Inference, Prediction and Model Selection)
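
The paper's structured variational Bayes algorithm for robust factor analyzers is not reproduced here; as a rough illustration of the general idea of letting variational inference perform clustering and model selection at the same time, scikit-learn's BayesianGaussianMixture shrinks the weights of superfluous components towards zero, so the effective number of clusters is chosen during fitting. The dataset and hyperparameters below are arbitrary assumptions.

    # Illustration only: variational Bayes mixture fitting that prunes unused
    # components, so clustering and (a simple form of) model selection happen
    # together. This is not the authors' robust factor analyzer model.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.mixture import BayesianGaussianMixture

    X, _ = make_blobs(n_samples=500, centers=3, cluster_std=1.0, random_state=0)

    bgm = BayesianGaussianMixture(
        n_components=10,                     # deliberately over-specified
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
        random_state=0,
    )
    labels = bgm.fit_predict(X)

    # Components with non-negligible weight are the clusters that survive.
    active = np.flatnonzero(bgm.weights_ > 0.01)
    print("effective number of clusters:", len(active))
    print("component weights:", np.round(bgm.weights_, 3))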

19 pages, 635 KiB  
Article
Ridge-Type Pretest and Shrinkage Estimation Strategies in Spatial Error Models with an Application to a Real Data Example
by Marwan Al-Momani and Mohammad Arashi
Mathematics 2024, 12(3), 390; https://doi.org/10.3390/math12030390 - 25 Jan 2024
Viewed by 636
Abstract
Spatial regression models are widely used across several disciplines, such as functional magnetic resonance imaging analysis, econometrics, and house price analysis. In nature, sparsity occurs when a limited number of factors strongly impact overall variation. Sparse covariance structures are common in spatial regression models. The spatial error model is a significant spatial regression model that focuses on the geographical dependence present in the error terms rather than the response variable. This study proposes an effective approach using the pretest and shrinkage ridge estimators for estimating the vector of regression coefficients in the spatial error model, considering insignificant coefficients and multicollinearity among regressors. The study compares the performance of the proposed estimators with the maximum likelihood estimator and assesses their efficacy using real-world data and bootstrapping techniques for comparison purposes.
(This article belongs to the Special Issue Bayesian Inference, Prediction and Model Selection)
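
As a rough, non-spatial illustration of the pretest idea (not the authors' ridge-type estimators for the spatial error model), the sketch below keeps the restricted ridge fit when a Wald-type test fails to reject the restriction that a subset of coefficients is zero, and otherwise falls back to the unrestricted fit. The penalty, test level, and data are assumptions.

    # Pretest estimation sketch for an ordinary linear model (illustrative only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, p = 200, 6
    X = rng.normal(size=(n, p))
    beta_true = np.array([2.0, -1.5, 1.0, 0.0, 0.0, 0.0])   # sparse truth
    y = X @ beta_true + rng.normal(scale=1.0, size=n)

    lam = 1.0                                                # ridge penalty (assumed)

    def ridge(X, y, lam):
        k = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

    beta_full = ridge(X, y, lam)                   # unrestricted ridge estimate
    beta_restricted = np.zeros(p)
    beta_restricted[:3] = ridge(X[:, :3], y, lam)  # last three coefficients set to zero

    # Wald-type pretest of H0: beta[3:] = 0, based on the OLS fit
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)
    cov_sub = sigma2 * np.linalg.inv(X.T @ X)[3:, 3:]
    wald = beta_ols[3:] @ np.linalg.solve(cov_sub, beta_ols[3:])
    critical = stats.chi2.ppf(0.95, df=3)

    beta_pretest = beta_restricted if wald <= critical else beta_full
    print("Wald statistic:", round(wald, 3), "critical value:", round(critical, 3))
    print("pretest estimate:", np.round(beta_pretest, 3))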

13 pages, 1287 KiB  
Article
Bayesian Identification Procedure for Triple Seasonal Autoregressive Models
by Ayman A. Amin and Saeed A. Alghamdi
Mathematics 2023, 11(18), 3823; https://doi.org/10.3390/math11183823 - 6 Sep 2023
Cited by 1 | Viewed by 609
Abstract
Triple seasonal autoregressive (TSAR) models have been introduced to model time series data with three layers of seasonality; however, the Bayesian identification problem of these models has not been tackled in the literature. Therefore, in this paper, we have the objective of filling this gap by presenting a Bayesian procedure to identify the best order of TSAR models. Assuming that the TSAR model errors are normally distributed along with employing three priors, i.e., normal-gamma, Jeffreys’ and g priors, on the model parameters, we derive the marginal posterior distributions of the TSAR model parameters. In particular, we show that the marginal posteriors are multivariate t and gamma distributions for the TSAR model coefficients vector and precision, respectively. Using the marginal posterior distribution of the TSAR model coefficients vector, we present an identification procedure for the TSAR models based on a sequence of t-tests of significance. We evaluate the accuracy of the proposed Bayesian identification procedure by conducting an extensive simulation study, followed by a real application to hourly electricity load datasets in six European countries.
(This article belongs to the Special Issue Bayesian Inference, Prediction and Model Selection)
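
A heavily simplified sketch of this type of identification step, using a single seasonal lag structure instead of a triple one and illustrative priors: under a normal-gamma prior the marginal posterior of the AR coefficient vector is multivariate t, so each candidate lag can be kept or dropped according to a posterior t-test. All settings below are assumptions for demonstration.

    # Bayesian lag identification sketch for a (single) seasonal AR model.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Simulate an AR process with lags 1 and 12 as a crude seasonal stand-in
    T, s = 600, 12
    y = np.zeros(T)
    for t in range(s, T):
        y[t] = 0.5 * y[t - 1] + 0.3 * y[t - s] + rng.normal()

    lags = [1, 2, 12, 24]                      # candidate lags to test (assumed)
    X = np.column_stack([y[2 * s - l:T - l] for l in lags])
    z = y[2 * s:]
    n, k = X.shape

    # Normal-gamma prior: beta | tau ~ N(0, (tau V0)^-1), tau ~ Gamma(a0, b0)
    V0, a0, b0 = 0.01 * np.eye(k), 0.01, 0.01
    Vn = V0 + X.T @ X
    mn = np.linalg.solve(Vn, X.T @ z)          # posterior mean (prior mean is zero)
    an = a0 + n / 2
    bn = b0 + 0.5 * (z @ z - mn @ Vn @ mn)

    # Marginal posterior of beta is multivariate t with 2*an degrees of freedom
    scale_diag = (bn / an) * np.diag(np.linalg.inv(Vn))
    t_stats = mn / np.sqrt(scale_diag)
    crit = stats.t.ppf(0.975, df=2 * an)

    for lag, t_val in zip(lags, t_stats):
        decision = "keep" if abs(t_val) > crit else "drop"
        print(f"lag {lag:>2}: t = {t_val:+.2f} -> {decision}")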

13 pages, 345 KiB  
Article
Bayesian Subset Selection of Seasonal Autoregressive Models
by Ayman A. Amin, Walid Emam, Yusra Tashkandy and Christophe Chesneau
Mathematics 2023, 11(13), 2878; https://doi.org/10.3390/math11132878 - 27 Jun 2023
Cited by 2 | Viewed by 766
Abstract
Seasonal autoregressive (SAR) models have many applications in different fields, such as economics and finance. It is well known in the literature that these models are nonlinear in their coefficients and that their Bayesian analysis is complicated. Accordingly, choosing the best subset of these models is a challenging task. Therefore, in this paper, we tackled this problem by introducing a Bayesian method for selecting the most promising subset of the SAR models. In particular, we introduced latent variables for the SAR model lags, assumed model errors to be normally distributed, and adopted and modified the stochastic search variable selection (SSVS) procedure for the SAR models. Thus, we derived full conditional posterior distributions of the SAR model parameters in closed form, and we then introduced the Gibbs sampler, along with SSVS, to present an efficient algorithm for the Bayesian subset selection of the SAR models. In this work, we employed mixture-normal, inverse gamma, and Bernoulli priors for the SAR model coefficients, variance, and latent variables, respectively. Moreover, we introduced a simulation study and a real-world application to evaluate the accuracy of the proposed algorithm.
(This article belongs to the Special Issue Bayesian Inference, Prediction and Model Selection)
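
The flavor of stochastic search variable selection (SSVS) with a Gibbs sampler can be sketched on an ordinary linear regression rather than the seasonal autoregressive setting of the paper; the spike and slab variances, prior inclusion probability, and data below are assumptions chosen for demonstration only.

    # SSVS Gibbs sampler sketch for a linear regression (illustrative only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, p = 300, 8
    X = rng.normal(size=(n, p))
    beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0])
    y = X @ beta_true + rng.normal(scale=1.0, size=n)

    tau2, c2, pi0 = 0.01, 100.0, 0.5   # spike/slab variances, prior inclusion prob.
    a0, b0 = 2.0, 2.0                  # inverse-gamma prior on the error variance
    n_iter, burn = 3000, 1000

    beta, gamma, sigma2 = np.zeros(p), np.ones(p, dtype=int), 1.0
    incl_counts = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y

    for it in range(n_iter):
        # 1) beta | gamma, sigma^2: Gaussian full conditional
        prior_var = np.where(gamma == 1, c2 * tau2, tau2)
        cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / prior_var))
        beta = rng.multivariate_normal(cov @ (Xty / sigma2), cov)

        # 2) gamma_j | beta_j: Bernoulli, spike density vs. slab density
        slab = pi0 * stats.norm.pdf(beta, scale=np.sqrt(c2 * tau2))
        spike = (1 - pi0) * stats.norm.pdf(beta, scale=np.sqrt(tau2))
        gamma = rng.binomial(1, slab / (slab + spike))

        # 3) sigma^2 | beta: inverse-gamma full conditional
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))

        if it >= burn:
            incl_counts += gamma

    print("posterior inclusion probabilities:",
          np.round(incl_counts / (n_iter - burn), 2))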
