Robust Bayesian Regression with Synthetic Posterior Distributions
Abstract
1. Introduction
2. Robust Bayesian Regression via Synthetic Posterior
2.1. Settings and Synthetic Posterior
2.2. Posterior Computation
2.3. Incorporating Shrinkage Priors
MCMC Algorithm under Shrinkage Prior
- (Sampling from $\beta$ and $\sigma^2$) Generate $w_1, \dots, w_n \sim \mathrm{Exp}(1)$ independently, and set initial values $\beta^{(0)}$ and $\sigma^{2(0)}$. Repeat the following procedures for $s = 0, 1, 2, \dots$ until convergence (a runnable sketch of this step is given after the algorithm):
  - Compute the following weights:
    $$v_i^{(s)} = w_i \exp\left\{-\frac{\gamma\,(y_i - x_i^\top \beta^{(s)})^2}{2\sigma^{2(s)}}\right\}, \qquad i = 1, \dots, n.$$
  - Update the parameter values: $\beta^{(s+1)}$ minimizes the weighted residual sum of squares $\sum_{i=1}^n v_i^{(s)} (y_i - x_i^\top \beta)^2$ (together with the ridge-type term induced by the conditional normal prior on $\beta$), and
    $$\sigma^{2(s+1)} = (1+\gamma)\,\frac{\sum_{i=1}^n v_i^{(s)} (y_i - x_i^\top \beta^{(s+1)})^2}{\sum_{i=1}^n v_i^{(s)}}.$$

  We adopt the final values as sampled values from the full conditional distribution.
- (Sampling from $u_1, \dots, u_p$ and $\lambda$)
  - (Laplace prior) The full conditional distribution of $u_k^{-1}$ is inverse-Gaussian with parameters $\mu_k' = \sqrt{\lambda^2 \sigma^2 / \beta_k^2}$ and $\lambda' = \lambda^2$ in the parametrization of the inverse-Gaussian density given by
    $$f(x; \mu', \lambda') = \sqrt{\frac{\lambda'}{2\pi x^3}}\, \exp\left\{-\frac{\lambda'(x - \mu')^2}{2(\mu')^2 x}\right\}, \qquad x > 0.$$
    The full conditional distribution of $\lambda^2$ is $\mathrm{Ga}\left(p + a, \sum_{k=1}^p u_k/2 + b\right)$ under a $\mathrm{Ga}(a, b)$ prior (a sampling sketch is given below).
  - (Horseshoe prior) The full conditional distributions of $u_k$, the auxiliary variables $\nu_k$ and $\xi$, and $\lambda$ are all inverse-gamma under the standard auxiliary-variable representation of the half-Cauchy priors (a sampling sketch is given below).
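To make the first step concrete, below is a minimal Python sketch of one weighted-bootstrap draw of $(\beta, \sigma^2)$ by the MM iteration. It is a reconstruction under stated assumptions, not code from the paper: it uses a flat prior on $(\beta, \sigma^2)$ for readability (in the Gibbs step above, the conditional normal prior on $\beta$ would add a ridge-type term to the weighted least-squares update), and the function name `draw_beta_sigma2` and the $(1+\gamma)$ factor in the $\sigma^2$ update follow the standard MM treatment of the $\gamma$-divergence for the normal model.

```python
import numpy as np

def draw_beta_sigma2(y, X, gamma, max_iter=200, tol=1e-8, rng=None):
    """One approximate draw from the synthetic full conditional of (beta, sigma^2):
    randomize the gamma-divergence objective with Exp(1) bootstrap weights,
    then maximize the randomized objective by MM iterations."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    w = rng.exponential(1.0, size=n)              # bootstrap weights w_i ~ Exp(1)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting value
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(max_iter):
        r = y - X @ beta
        v = w * np.exp(-gamma * r ** 2 / (2.0 * sigma2))  # MM weights: w_i * f(y_i)^gamma, up to a constant
        v /= v.sum()
        XtV = X.T * v                                     # scale column i of X^T by v_i
        beta_new = np.linalg.solve(XtV @ X, XtV @ y)      # weighted least squares
        sigma2_new = (1.0 + gamma) * np.sum(v * (y - X @ beta_new) ** 2)
        done = np.max(np.abs(beta_new - beta)) < tol and abs(sigma2_new - sigma2) < tol
        beta, sigma2 = beta_new, sigma2_new
        if done:
            break
    return beta, sigma2
```

Calling this function $B$ times, each call drawing fresh weights $w$, yields $B$ approximate posterior samples; the exponential downweighting by $f^\gamma$ is what makes grossly outlying observations nearly irrelevant to each update.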
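The Laplace-prior conditionals can be sampled directly with NumPy's inverse-Gaussian (Wald) generator. A minimal sketch, assuming a $\mathrm{Ga}(a_0, b_0)$ hyperprior on $\lambda^2$ as in the standard Bayesian lasso of Park and Casella (2008); the names `a0`, `b0`, and `update_laplace_latents` are ours:

```python
import numpy as np

def update_laplace_latents(beta, sigma2, lam2, a0, b0, rng):
    """Gibbs updates of u_1,...,u_p and lambda^2 under the Laplace prior."""
    p = beta.size
    # 1/u_k | rest ~ inverse-Gaussian(mu'_k, lambda') with
    # mu'_k = sqrt(lam2 * sigma2 / beta_k^2) and lambda' = lam2.
    mu = np.sqrt(lam2 * sigma2 / beta ** 2)
    u = 1.0 / rng.wald(mu, lam2)
    # lam2 | rest ~ Ga(p + a0, rate = sum_k u_k / 2 + b0);
    # NumPy's gamma generator is parametrized by scale = 1/rate.
    lam2_new = rng.gamma(shape=p + a0, scale=1.0 / (np.sum(u) / 2.0 + b0))
    return u, lam2_new
```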
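For the horseshoe step, one convenient scheme (an assumption here, following the usual inverse-gamma auxiliary-variable expansion of half-Cauchy scale priors) introduces auxiliaries $\nu_1, \dots, \nu_p$ and $\xi$ so that every full conditional is inverse-gamma; the variable names `nu` and `xi` and the $\sigma^2$-scaling of the prior are ours:

```python
import numpy as np

def update_horseshoe_latents(beta, sigma2, lam2, nu, xi, rng):
    """Gibbs updates under beta_k | u_k, lam2 ~ N(0, sigma2 * u_k * lam2),
    with half-Cauchy priors on sqrt(u_k) and sqrt(lam2) expanded via
    inverse-gamma auxiliaries nu_k and xi."""
    p = beta.size

    def inv_gamma(shape, rate):
        # draw InvGamma(shape, rate) as the reciprocal of Gamma(shape, scale = 1/rate)
        return 1.0 / rng.gamma(shape, 1.0 / rate)

    u = inv_gamma(1.0, 1.0 / nu + beta ** 2 / (2.0 * lam2 * sigma2))    # local scales u_k
    nu = inv_gamma(1.0, 1.0 + 1.0 / u)                                  # local auxiliaries
    lam2 = inv_gamma((p + 1.0) / 2.0,
                     1.0 / xi + np.sum(beta ** 2 / u) / (2.0 * sigma2)) # global scale
    xi = inv_gamma(1.0, 1.0 + 1.0 / lam2)                               # global auxiliary
    return u, nu, lam2, xi
```

Initializing `nu` and `xi` at 1 suffices; all four updates are closed-form draws, so the shrinkage layer adds little cost per Gibbs scan.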
3. Numerical Studies
3.1. Bayesian Robustness Properties
3.2. Simulation Study
3.3. Real Data Examples
4. Conclusions and Discussion
Supplementary Materials
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Basu, A.; Harris, I.R.; Hjort, N.L.; Jones, M.C. Robust and efficient estimation by minimising a density power divergence. Biometrika 1998, 85, 549–559.
- Fujisawa, H.; Eguchi, S. Robust parameter estimation with a small bias against heavy contamination. J. Multivar. Anal. 2008, 99, 2053–2081.
- Jones, M.C.; Hjort, N.L.; Harris, I.R.; Basu, A. A comparison of related density-based minimum divergence estimators. Biometrika 2001, 88, 865–873.
- Kawashima, T.; Fujisawa, H. Robust and sparse regression via γ-divergence. Entropy 2017, 19, 608.
- Kawashima, T.; Fujisawa, H. Robust and sparse regression in GLM by stochastic optimization. Jpn. J. Stat. Data Sci. 2019, 2, 465–489.
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
- Bissiri, P.G.; Holmes, C.; Walker, S.G. A general framework for updating belief distributions. J. R. Stat. Soc. Ser. B 2016, 78, 1103–1130.
- Jewson, J.; Smith, J.Q.; Holmes, C. Principles of Bayesian inference using general divergence criteria. Entropy 2018, 20, 442.
- Bhattacharya, A.; Pati, D.; Yang, Y. Bayesian fractional posteriors. Ann. Stat. 2019, 47, 39–66.
- Miller, J.W.; Dunson, D.B. Robust Bayesian inference via coarsening. J. Am. Stat. Assoc. 2019, 114, 1113–1125.
- Nakagawa, T.; Hashimoto, S. Robust Bayesian inference via γ-divergence. Commun. Stat. Theory Methods 2020, 49, 343–360.
- Park, T.; Casella, G. The Bayesian lasso. J. Am. Stat. Assoc. 2008, 103, 681–686.
- Carvalho, C.M.; Polson, N.G.; Scott, J.G. The horseshoe estimator for sparse signals. Biometrika 2010, 97, 465–480.
- Rubin, D.B. The Bayesian bootstrap. Ann. Stat. 1981, 9, 130–134.
- Newton, M.A.; Raftery, A.E. Approximate Bayesian inference with the weighted likelihood bootstrap. J. R. Stat. Soc. Ser. B 1994, 56, 3–48.
- Lyddon, S.; Walker, S.G.; Holmes, C. Nonparametric learning from Bayesian models with randomized objective functions. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, QC, Canada, 3–8 December 2018.
- Newton, M.A.; Polson, N.G.; Xu, J. Weighted Bayesian bootstrap for scalable Bayes. arXiv 2018, arXiv:1803.04559.
- Gagnon, P.; Desgagné, A.; Bédard, M. A new Bayesian approach to robustness against outliers in linear regression. Bayesian Anal. 2020, 15, 389–414.
- Hunter, D.R.; Lange, K. A tutorial on MM algorithms. Am. Stat. 2004, 58, 30–37.
- Welling, M.; Teh, Y.W. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, USA, 28 June–2 July 2011.
- Harrison, D.; Rubinfeld, D.L. Hedonic prices and the demand for clean air. J. Environ. Econ. Manag. 1978, 5, 81–102.
- Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
- Hung, H.; Jou, Z.; Huang, S. Robust mislabel logistic regression without modeling mislabel probabilities. Biometrics 2018, 74, 145–154.
- Warwick, J.; Jones, M.C. Choosing a robustness tuning parameter. J. Stat. Comput. Simul. 2005, 75, 581–588.
- von der Linden, W.; Dose, V.; Padayachee, J.; Prozesky, V. Signal and background separation. Phys. Rev. E 1999, 59, 6527–6534.
- von der Linden, W.; Dose, V.; Fischer, R.; Preuss, R. Maximum Entropy and Bayesian Methods, Garching, Germany 1998: Proceedings of the 18th International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis; Springer: London, UK, 1999; Volume 105.
- von Toussaint, U. Bayesian inference in physics. Rev. Mod. Phys. 2011, 83, 943.
- Kanamori, T.; Fujisawa, H. Robust estimation under heavy contamination using unnormalized models. Biometrika 2015, 102, 559–572.
In the table below, the first five method columns are for a = 10 and the last five for a = 20.

| ω | LM | c-LM | t-LM | RBR1 | RBR2 | LM | c-LM | t-LM | RBR1 | RBR2 |
|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 0.05 | 1.689 | 0.704 | 0.256 | 0.188 | 0.318 | 2.883 | 0.682 | 0.238 | 0.172 | 0.313 |
| 0.1 | 2.212 | 0.605 | 0.292 | 0.229 | 0.328 | 3.471 | 0.552 | 0.257 | 0.171 | 0.297 |
| 0.15 | 2.886 | 0.587 | 0.422 | 0.341 | 0.386 | 4.214 | 0.537 | 0.418 | 0.240 | 0.367 |
| 0.2 | 3.074 | 0.576 | 0.579 | 0.429 | 0.393 | 4.445 | 0.547 | 0.683 | 0.297 | 0.398 |
Panels (I)-Homo (first five method columns) and (II)-Homo (last five method columns):

| ω | BL | c-BL | t-BL | RBL | RHS | BL | c-BL | t-BL | RBL | RHS |
|------|------|------|------|------|------|------|------|------|------|------|
| 0 | 96.0 | 92.6 | 94.8 | 93.5 | 93.4 | 96.0 | 92.6 | 94.8 | 93.5 | 93.4 |
| 0.05 | 96.0 | 93.9 | 96.6 | 94.2 | 94.0 | 96.4 | 94.2 | 97.2 | 93.7 | 93.7 |
| 0.1 | 96.4 | 94.5 | 97.3 | 94.6 | 94.2 | 96.5 | 95.4 | 98.5 | 94.0 | 93.9 |
| 0.15 | 96.6 | 95.4 | 98.1 | 95.2 | 95.2 | 96.7 | 96.6 | 99.0 | 94.4 | 94.1 |
| 0.2 | 96.6 | 96.1 | 98.5 | 96.3 | 96.0 | 96.7 | 97.4 | 98.6 | 95.3 | 94.8 |

Panels (I)-Hetero (first five method columns) and (II)-Hetero (last five method columns):

| δ | BL | c-BL | t-BL | RBL | RHS | BL | c-BL | t-BL | RBL | RHS |
|---|------|------|------|------|------|------|------|------|------|------|
| 0 | 96.0 | 92.6 | 94.8 | 93.5 | 93.4 | 96.0 | 92.6 | 94.8 | 93.5 | 93.4 |
| 1 | 95.9 | 93.4 | 96.3 | 94.3 | 94.1 | 95.9 | 94.0 | 97.1 | 93.7 | 93.6 |
| 2 | 96.0 | 94.8 | 97.6 | 95.1 | 94.9 | 95.0 | 95.3 | 98.5 | 94.0 | 93.7 |
| 3 | 96.7 | 95.1 | 98.4 | 95.5 | 95.3 | 94.3 | 96.3 | 98.9 | 94.4 | 94.1 |
| 4 | 96.6 | 95.6 | 98.8 | 96.2 | 96.2 | 92.7 | 96.9 | 96.7 | 95.0 | 94.5 |