Informative g-Priors for Mixed Models
Abstract
1. Introduction
2. Prior for Linear Regression Models
2.1. The Prior in [23]
2.2. New Prior Development
2.3. Hyper-Prior Elicitation for g
2.4. Comparing to the Mixture of g-Priors
2.5. Simple Example
2.6. Variable Selection
Information Paradox
3. Mixed Models
3.1. One-Way Random Effects ANOVA
3.2. Linear Mixed Models
3.3. Hyper-Prior Elicitation for g in Mixed Models
3.4. Rats Data Example
3.5. Model Fitting via Block MCMC
4. Simulation Study
4.1. Simulation I: Fixed Effects Model
4.1.1. Parameter Estimation
4.1.2. Variable Selection
4.2. Simulation II: Random One-Way ANOVA
4.3. Simulation III: Random Intercept Model
4.4. Simulation IV: Linear Mixed Model
5. Discussion
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Sun, C.Q.; Prajna, N.V.; Krishnan, T.; Mascarenhas, J.; Rajaraman, R.; Srinivasan, M.; Raghavan, A.; O’Brien, K.S.; Ray, K.J.; McLeod, S.D.; et al. Expert Prior Elicitation and Bayesian Analysis of the Mycotic Ulcer Treatment Trial I. Investig. Ophthalmol. Vis. Sci. 2013, 54, 4167–4173.
- Hampson, L.V.; Whitehead, J.; Eleftheriou, D.; Brogan, P. Bayesian methods for the design and interpretation of clinical trials in very rare diseases. Stat. Med. 2014, 33, 4186–4201.
- Zhang, G.; Thai, V.V. Expert elicitation and Bayesian Network modeling for shipping accidents: A literature review. Saf. Sci. 2016, 87, 53–62.
- Food and Drug Administration. Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials; Guidance for Industry and FDA Staff; 2010; pp. 1–50.
- O’Hagan, A. Eliciting expert beliefs in substantial practical applications. J. R. Stat. Soc. Ser. D 1998, 47, 21–35.
- Kinnersley, N.; Day, S. Structured approach to the elicitation of expert beliefs for a Bayesian-designed clinical trial: A case study. Pharm. Stat. 2013, 12, 104–113.
- Dallow, N.; Best, N.; Montague, T.H. Better decision making in drug development through adoption of formal prior elicitation. Pharm. Stat. 2018, 17, 301–316.
- Hartmann, M.; Agiashvili, G.; Bürkner, P.; Klami, A. Flexible Prior Elicitation via the Prior Predictive Distribution. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), Virtual, 3–6 August 2020; Peters, J., Sontag, D., Eds.; PMLR: London, UK, 2020; Volume 124, pp. 1129–1138.
- Zellner, A. Applications of Bayesian Analysis in Econometrics. Statistician 1983, 32, 23–34.
- Zellner, A. On Assessing Prior Distributions and Bayesian Regression Analysis with g-Prior Distributions. In Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti; North-Holland/Elsevier: Amsterdam, The Netherlands, 1986; pp. 233–243.
- Li, Y.; Clyde, M.A. Mixtures of g-priors in generalized linear models. J. Am. Stat. Assoc. 2018, 113, 1828–1845.
- Liang, F.; Paulo, R.; Molina, G.; Clyde, M.A.; Berger, J.O. Mixtures of g priors for Bayesian variable selection. J. Am. Stat. Assoc. 2008, 103, 410–423.
- Bedrick, E.J.; Christensen, R.; Johnson, W. A New Perspective on Priors for Generalized Linear Models. J. Am. Stat. Assoc. 1996, 91, 1450–1460.
- Hosack, G.R.; Hayes, K.R.; Barry, S.C. Prior elicitation for Bayesian generalised linear models with application to risk control option assessment. Reliab. Eng. Syst. Saf. 2017, 167, 351–361.
- Ibrahim, J.G.; Chen, M.H. Power prior distributions for regression models. Stat. Sci. 2000, 15, 46–60.
- Ibrahim, J.G.; Chen, M.H.; Sinha, D. On optimality properties of the power prior. J. Am. Stat. Assoc. 2003, 98, 204–213.
- Hobbs, B.P.; Carlin, B.P.; Mandrekar, S.J.; Sargent, D.J. Hierarchical commensurate and power prior models for adaptive incorporation of historical information in clinical trials. Biometrics 2011, 67, 1047–1056.
- Ibrahim, J.G.; Chen, M.H.; Gwon, Y.; Chen, F. The power prior: Theory and applications. Stat. Med. 2015, 34, 3724–3749.
- Agliari, A.; Parisetti, C.C. A-g Reference Informative Prior: A Note on Zellner’s g-Prior. J. R. Stat. Soc. Ser. D 1988, 37, 271–275.
- van Zwet, E. A default prior for regression coefficients. Stat. Methods Med. Res. 2019, 28, 3799–3807.
- Plummer, M. JAGS: A Program for Analysis of Bayesian Graphical Models Using Gibbs Sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), Vienna, Austria, 20–22 March 2003; Hornik, K., Leisch, F., Zeileis, A., Eds.; ISSN 1609-395X.
- Su, Y.S.; Yajima, M. R2jags: Using R to Run ‘JAGS’; R Package Version 0.5-7; 2015.
- Hanson, T.E.; Branscum, A.J.; Johnson, W.O. Informative g-Priors for Logistic Regression. Bayesian Anal. 2014, 9, 597–612.
- Lally, N.R. The Informative g-Prior vs. Common Reference Priors for Binomial Regression with an Application to Hurricane Electrical Utility Asset Damage Prediction. Master’s Thesis, University of Connecticut, Mansfield, CT, USA, 31 July 2015.
- Carlin, B.P.; Gelfand, A.E. An iterative Monte Carlo method for nonconjugate Bayesian analysis. Stat. Comput. 1991, 1, 119–128.
- Liu, C.; Martin, R.; Syring, N. Efficient simulation from a gamma distribution with small shape parameter. Comput. Stat. 2017, 32, 1767–1775.
- Gabry, J.; Simpson, D.; Vehtari, A.; Betancourt, M.; Gelman, A. Visualization in Bayesian workflow. J. R. Stat. Soc. Ser. A 2019, 182, 389–402.
- Gelman, A.; Simpson, D.; Betancourt, M. The Prior Can Often Only Be Understood in the Context of the Likelihood. Entropy 2017, 19, 555.
- Wesner, J.S.; Pomeranz, J.P.F. Choosing priors in Bayesian ecological models by simulating from the prior predictive distribution. Ecosphere 2021, 12, e03739.
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021.
- Murphy, K.P. Conjugate Bayesian Analysis of the Gaussian Distribution; Technical Report; University of British Columbia: Vancouver, BC, Canada, 3 October 2007.
- Berger, J.O.; Pericchi, L.R.; Ghosh, J.; Samanta, T.; De Santis, F. Objective Bayesian methods for model selection: Introduction and comparison. Lect. Notes Monogr. Ser. 2001, 38, 135–207.
- Gelman, A. Prior distributions for variance parameters in hierarchical models. Bayesian Anal. 2006, 1, 515–533.
- Box, G.E.P.; Tiao, G.C. Bayesian Inference in Statistical Analysis; Addison-Wesley: Reading, MA, USA, 1973.
- Daniels, M.J. A prior for the variance in hierarchical models. Can. J. Stat. 1999, 27, 567–578.
- Wang, M. Mixture of g-priors for analysis of variance models with a diverging number of parameters. Bayesian Anal. 2017, 12, 511–532.
- Lin, P.E. Some characterizations of the multivariate t distribution. J. Multivar. Anal. 1972, 2, 339–344.
- Kass, R.E.; Natarajan, R. A default conjugate prior for variance components in generalized linear mixed models (Comment on article by Browne and Draper). Bayesian Anal. 2006, 1, 535–542.
- Natarajan, R.; Kass, R.E. Reference Bayesian methods for generalized linear mixed models. J. Am. Stat. Assoc. 2000, 95, 227–237.
- Huang, A.; Wand, M.P. Simple marginally noninformative prior distributions for covariance matrices. Bayesian Anal. 2013, 8, 439–452.
- Demirhan, H.; Kalaylioglu, Z. Joint prior distributions for variance parameters in Bayesian analysis of normal hierarchical models. J. Multivar. Anal. 2015, 135, 163–174.
- Bates, D.; Mächler, M.; Bolker, B.; Walker, S. Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Softw. 2015, 67, 1–48.
- Burdick, R.K.; Borror, C.M.; Montgomery, D.C. Design and Analysis of Gauge R&R Studies: Making Decisions with Confidence Intervals in Random and Mixed ANOVA Models; SIAM: Philadelphia, PA, USA, 2005.
- Spiegelhalter, D.; Thomas, A.; Best, N.; Lunn, D. WinBUGS User Manual, Version 1.4; Medical Research Council Biostatistics Unit: Cambridge, UK, 2003.
- Sargent, D.J.; Hodges, J.S.; Carlin, B.P. Structured Markov Chain Monte Carlo. J. Comput. Graph. Stat. 2000, 9, 217–234.
- Haario, H.; Saksman, E.; Tamminen, J. An Adaptive Metropolis Algorithm. Bernoulli 2001, 7, 223–242.
- Browne, W.J.; Draper, D. A comparison of Bayesian and likelihood-based methods for fitting multilevel models. Bayesian Anal. 2006, 1, 473–514.
- Zitzmann, S.; Helm, C.; Hecht, M. Prior specification for more stable Bayesian estimation of multilevel latent variable models in small samples: A comparative investigation of two different approaches. Front. Psychol. 2021, 11, 611267.
Method | Bias (MSE) | | | Coverage (Width) | | | |
---|---|---|---|---|---|---|---|
new-true | 0.041 (0.0196) | −0.050 (0.0894) | −0.012 (0.0092) | 0.94 (0.55) | 0.97 (0.86) | 0.96 (0.86) | 0.98 (0.43) | |
new-hist | 0.038 (0.0202) | −0.045 (0.0906) | −0.011 (0.0157) | 0.95 (0.56) | 0.98 (0.88) | 0.97 (0.87) | 0.96 (0.51) | |
new-none | 0.022 (0.0205) | −0.031 (0.0948) | 0.011 (0.0206) | 0.95 (0.57) | 0.98 (0.90) | 0.97 (0.90) | 0.95 (0.57) | |
benchmark | −0.012 (0.0194) | 0.003 (0.1049) | 0.006 (0.0205) | 0.96 (0.57) | 0.97 (0.91) | 0.96 (0.92) | 0.95 (0.57) | |
EB | 0.018 (0.0212) | −0.027 (0.0961) | 0.023 (0.0219) | 0.94 (0.56) | 0.97 (0.90) | 0.96 (0.90) | 0.95 (0.58) | |
hyper-g | 0.037 (0.0231) | −0.047 (0.0906) | 0.034 (0.0229) | 0.95 (0.59) | 0.97 (0.89) | 0.97 (0.89) | 0.95 (0.60) | |
new-true | 0.010 (0.0043) | −0.012 (0.0216) | 0.004 (0.0034) | 0.96 (0.25) | 0.95 (0.40) | 0.94 (0.40) | 0.96 (0.24) | |
new-hist | 0.010 (0.0044) | −0.012 (0.0217) | −0.001 (0.0038) | 0.95 (0.25) | 0.94 (0.40) | 0.94 (0.40) | 0.96 (0.24) | |
new-none | 0.007 (0.0044) | −0.008 (0.0219) | 0.003 (0.0041) | 0.95 (0.25) | 0.95 (0.40) | 0.94 (0.40) | 0.96 (0.25) | |
benchmark | 0.000 (0.0043) | −0.002 (0.0223) | 0.003 (0.0041) | 0.95 (0.25) | 0.94 (0.40) | 0.94 (0.40) | 0.96 (0.25) | |
EB | 0.006 (0.0044) | −0.007 (0.0219) | 0.006 (0.0042) | 0.95 (0.25) | 0.94 (0.40) | 0.94 (0.40) | 0.96 (0.25) | |
hyper-g | 0.009 (0.0045) | −0.011 (0.0217) | 0.008 (0.0042) | 0.95 (0.26) | 0.95 (0.40) | 0.94 (0.40) | 0.95 (0.25) |
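The bias, MSE, coverage, and interval-width summaries in the table above are standard Monte Carlo operating characteristics, averaged over simulated replicates. The following is a minimal Python sketch of how such summaries are computed (it is not the authors' R/JAGS code; the normal-mean setting and all constants are illustrative assumptions):

```python
import random
import statistics

random.seed(1)

def simulate_metrics(true_mu=1.0, sigma=1.0, n=50, reps=2000):
    """Monte Carlo estimates of bias, MSE, 95% coverage, and mean interval
    width for the sample mean of N(true_mu, sigma^2) data."""
    errors, covered, widths = [], [], []
    z = 1.96  # normal critical value for a 95% interval
    for _ in range(reps):
        data = [random.gauss(true_mu, sigma) for _ in range(n)]
        est = statistics.fmean(data)
        se = statistics.stdev(data) / n ** 0.5
        lo, hi = est - z * se, est + z * se
        errors.append(est - true_mu)
        covered.append(lo <= true_mu <= hi)
        widths.append(hi - lo)
    bias = statistics.fmean(errors)
    mse = statistics.fmean(e * e for e in errors)
    coverage = statistics.fmean(covered)  # fraction of intervals covering true_mu
    width = statistics.fmean(widths)
    return bias, mse, coverage, width

bias, mse, coverage, width = simulate_metrics()
print(f"bias={bias:.3f}  MSE={mse:.4f}  coverage={coverage:.2f}  width={width:.2f}")
```

In the paper's simulations the point estimates and intervals are posterior summaries under each prior, rather than the closed-form normal intervals used in this sketch.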
Method | Size = 1 | Size = 2 | Size = 3 | Size = 4 | Size = 7 | Size = 10 | Size = 13 | Size = 16 |
---|---|---|---|---|---|---|---|---|
(C1) OLS estimation using the selected model | ||||||||
new-true | 0.064 | 0.076 | 0.079 | 0.095 | 0.108 | 0.116 | 0.130 | 0.138 |
new-hist | 0.062 | 0.074 | 0.078 | 0.095 | 0.107 | 0.116 | 0.130 | 0.137 |
new-none | 0.056 | 0.072 | 0.079 | 0.094 | 0.107 | 0.116 | 0.130 | 0.137 |
benchmark | 0.025 | 0.046 | 0.052 | 0.071 | 0.100 | 0.126 | 0.145 | 0.159 |
EB | 0.110 | 0.087 | 0.081 | 0.095 | 0.108 | 0.115 | 0.130 | 0.137 |
hyper-g | 0.094 | 0.081 | 0.079 | 0.093 | 0.107 | 0.114 | 0.131 | 0.138 |
(C2) Bayesian estimation using the true model | ||||||||
new-true | 0.000 | 0.012 | 0.020 | 0.029 | 0.042 | 0.058 | 0.069 | 0.078 |
new-hist | 0.007 | 0.013 | 0.021 | 0.030 | 0.044 | 0.061 | 0.072 | 0.083 |
new-none | 0.010 | 0.015 | 0.023 | 0.032 | 0.047 | 0.063 | 0.074 | 0.085 |
benchmark | 0.010 | 0.016 | 0.024 | 0.034 | 0.057 | 0.079 | 0.103 | 0.127 |
EB | 0.010 | 0.016 | 0.025 | 0.034 | 0.049 | 0.065 | 0.076 | 0.088 |
hyper-g | 0.010 | 0.016 | 0.025 | 0.034 | 0.048 | 0.064 | 0.075 | 0.086 |
(C3) Bayesian estimation using the full model | ||||||||
new-true | 0.017 | 0.050 | 0.060 | 0.069 | 0.072 | 0.078 | 0.079 | 0.078 |
new-hist | 0.028 | 0.056 | 0.065 | 0.073 | 0.077 | 0.082 | 0.083 | 0.083 |
new-none | 0.026 | 0.058 | 0.067 | 0.075 | 0.079 | 0.083 | 0.084 | 0.085 |
benchmark | 0.154 | 0.132 | 0.126 | 0.131 | 0.131 | 0.127 | 0.129 | 0.127 |
EB | 0.018 | 0.057 | 0.069 | 0.078 | 0.082 | 0.087 | 0.087 | 0.088 |
hyper-g | 0.023 | 0.057 | 0.067 | 0.075 | 0.080 | 0.084 | 0.085 | 0.086 |
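One mechanism behind these estimation-risk comparisons is that, conditional on the error variance, Zellner's g-prior yields a posterior mean that shrinks the least-squares estimate toward the prior mean by the factor g/(1 + g). A small Python sketch under assumed illustrative data (a centered covariate, zero prior mean, and the unit-information choice g = n — all assumptions, not the paper's setup):

```python
import random

random.seed(2)

# Illustrative data: y = 2*x + noise, with x centered around zero.
n = 100
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [2.0 * xi + random.gauss(0.0, 1.0) for xi in x]

# OLS slope for a no-intercept (centered) regression.
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)
beta_ols = sxy / sxx

# Under a zero-mean g-prior, the conditional posterior mean shrinks the
# OLS estimate toward 0 by g / (1 + g); g = n is the unit-information choice.
g = n
beta_post = g / (1.0 + g) * beta_ols

print(f"OLS slope: {beta_ols:.3f}, g-prior posterior mean: {beta_post:.3f}")
```

With g = n the shrinkage factor is n/(n + 1), so the Bayes estimate stays close to OLS for moderate samples while still pulling small, noisy coefficients toward zero.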
Method | Bias (MSE) | | | | Coverage (Width) | | | |
---|---|---|---|---|---|---|---|---|
, | |||||||||
new-true | −0.000 (0.000) | −0.015 (0.619) | −0.010 (0.007) | 0.010 (0.007) | - | 0.94 (0.94) | 0.97 (0.34) | 0.97 (0.34) | |
new-hist | −0.000 (0.018) | −0.0003 (0.711) | −0.0001 (0.012) | 0.055 (0.019) | 0.96 (0.55) | 0.95 (1.05) | 0.95 (0.46) | 0.97 (0.57) | |
new-none | 0.018 (0.035) | −0.145 (0.796) | 0.002 (0.015) | 0.094 (0.034) | 0.96 (0.78) | 0.95 (1.15) | 0.96 (0.51) | 0.98 (0.85) | |
unif | 0.017 (0.035) | −0.139 (0.814) | 0.019 (0.016) | 0.140 (0.067) | 0.97 (0.84) | 0.95 (1.20) | 0.95 (0.52) | 0.96 (1.13) | |
unif | 0.018 (0.035) | −0.145 (0.801) | 0.015 (0.016) | 0.267 (0.140) | 0.98 (0.96) | 0.98 (1.30) | 0.95 (0.52) | 0.95 (1.64) | |
gamma | 0.017 (0.035) | −0.133 (0.846) | 0.026 (0.017) | 0.057 (0.040) | 0.94 (0.77) | 0.93 (1.12) | 0.96 (0.53) | 0.93 (0.88) | |
shrink | 0.018 (0.035) | −0.142 (0.796) | 0.007 (0.015) | 0.125 (0.046) | 0.97 (0.83) | 0.96 (1.19) | 0.96 (0.51) | 0.98 (0.99) | |
, | |||||||||
new-true | −0.0000 (0.000) | −0.0006 (0.685) | 0.012 (0.011) | −0.0012 (0.011) | - | 0.94 (1.01) | 0.96 (0.43) | 0.96 (0.43) | |
new-hist | −0.0000 (0.029) | 0.003 (0.892) | 0.010 (0.013) | 0.056 (0.045) | 0.96 (0.71) | 0.95 (1.20) | 0.96 (0.49) | 0.95 (0.89) | |
new-none | 0.024 (0.059) | −0.206 (1.092) | 0.007 (0.015) | 0.109 (0.079) | 0.96 (0.98) | 0.96 (1.34) | 0.95 (0.51) | 0.98 (1.32) | |
unif | 0.023 (0.059) | −0.199 (1.099) | 0.018 (0.016) | 0.264 (0.211) | 0.97 (1.14) | 0.97 (1.46) | 0.96 (0.52) | 0.97 (2.02) | |
unif | 0.023 (0.059) | −0.195 (1.092) | 0.016 (0.016) | 0.471 (0.434) | 0.99 (1.27) | 0.98 (1.57) | 0.96 (0.52) | 0.95 (2.90) | |
gamma | 0.022 (0.059) | −0.185 (1.115) | 0.021 (0.016) | 0.135 (0.127) | 0.96 (1.05) | 0.96 (1.38) | 0.95 (0.53) | 0.95 (1.61) | |
shrink | 0.024 (0.059) | −0.205 (1.092) | 0.011 (0.015) | 0.168 (0.112) | 0.97 (1.07) | 0.96 (1.41) | 0.95 (0.52) | 0.98 (1.57) |
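For the one-way random effects model y_ij = mu + a_i + e_ij with a_i ~ N(0, sigma_a^2), the between-group variance targeted by the priors compared above can be checked against the classical ANOVA (method-of-moments) estimate sigma_a^2 = (MSA - MSE)/n. A Python sketch with assumed illustrative values (mu = 5, sigma_a = 1, sigma = 0.5, 100 groups of 10 — not the paper's simulation settings):

```python
import random
import statistics

random.seed(3)

# One-way random effects model: y_ij = mu + a_i + e_ij,
# a_i ~ N(0, sigma_a^2), e_ij ~ N(0, sigma^2).  Values are illustrative.
mu, sigma_a, sigma = 5.0, 1.0, 0.5
k, n = 100, 10  # k groups, n observations per group

groups = []
for _ in range(k):
    a_i = random.gauss(0.0, sigma_a)
    groups.append([mu + a_i + random.gauss(0.0, sigma) for _ in range(n)])

# ANOVA (method-of-moments) estimates of the variance components.
grand = statistics.fmean(y for grp in groups for y in grp)
means = [statistics.fmean(grp) for grp in groups]
msa = n * sum((m - grand) ** 2 for m in means) / (k - 1)   # between-group MS
mse = sum((y - m) ** 2
          for grp, m in zip(groups, means) for y in grp) / (k * (n - 1))
sigma_a2_hat = max((msa - mse) / n, 0.0)  # truncated at zero

print(f"sigma_a^2 estimate: {sigma_a2_hat:.3f}  (true {sigma_a ** 2})")
```

The Bayesian methods in the table place priors on these same variance components and summarize the resulting posteriors, rather than using the moment estimator shown here.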
Method | Bias (MSE) | | | | | Coverage (Width) | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|
, | ||||||||||||
new-true | 0.025 (0.034) | −0.0032 (0.085) | 0.053 (0.772) | 0.038 (0.017) | 0.084 (0.022) | 0.98 (0.81) | 0.95 (0.82) | 0.95 (0.82) | 0.96 (1.15) | 0.96 (0.52) | 0.98 (0.64) | |
new-hist | 0.028 (0.035) | −0.0034 (0.085) | 0.046 (0.787) | 0.029 (0.017) | 0.083 (0.026) | 0.97 (0.82) | 0.94 (0.81) | 0.94 (0.81) | 0.96 (1.14) | 0.96 (0.52) | 0.98 (0.70) | |
new-none | 0.025 (0.043) | −0.0034 (0.085) | 0.076 (0.828) | 0.027 (0.017) | 0.106 (0.039) | 0.94 (0.83) | 0.94 (0.81) | 0.94 (0.81) | 0.95 (1.15) | 0.97 (0.53) | 0.97 (0.88) | |
unif | −0.0011 (0.042) | 0.002 (0.090) | 0.074 (0.846) | 0.027 (0.018) | 0.154 (0.075) | 0.96 (0.93) | 0.94 (0.83) | 0.94 (0.83) | 0.96 (1.22) | 0.96 (0.53) | 0.96 (1.15) | |
unif | −0.0011 (0.041) | 0.002 (0.090) | 0.072 (0.832) | 0.022 (0.017) | 0.273 (0.146) | 0.98 (1.03) | 0.94 (0.82) | 0.95 (0.83) | 0.97 (1.31) | 0.97 (0.53) | 0.93 (1.55) | |
gamma | −0.0010 (0.042) | 0.001 (0.090) | 0.075 (0.941) | 0.052 (0.022) | 0.039 (0.043) | 0.94 (0.84) | 0.94 (0.84) | 0.95 (0.84) | 0.92 (1.12) | 0.95 (0.57) | 0.96 (0.94) | |
shrink | −0.0010 (0.041) | 0.001 (0.090) | 0.072 (0.827) | 0.015 (0.016) | 0.138 (0.052) | 0.96 (0.93) | 0.94 (0.82) | 0.94 (0.82) | 0.96 (1.21) | 0.97 (0.52) | 0.97 (1.00) | |
, | ||||||||||||
new-true | 0.023 (0.046) | −0.0032 (0.086) | 0.071 (0.976) | 0.035 (0.017) | 0.055 (0.028) | 0.97 (0.94) | 0.94 (0.82) | 0.95 (0.82) | 0.96 (1.28) | 0.96 (0.52) | 0.99 (0.82) | |
new-hist | 0.025 (0.050) | −0.0032 (0.086) | 0.061 (1.018) | 0.028 (0.017) | 0.069 (0.048) | 0.97 (0.96) | 0.94 (0.81) | 0.94 (0.82) | 0.96 (1.29) | 0.96 (0.52) | 0.98 (1.00) | |
new-none | 0.019 (0.066) | −0.0031 (0.086) | 0.109 (1.129) | 0.029 (0.018) | 0.131 (0.090) | 0.93 (0.99) | 0.94 (0.82) | 0.95 (0.82) | 0.95 (1.31) | 0.96 (0.53) | 0.96 (1.38) | |
unif | −0.0014 (0.065) | 0.002 (0.091) | 0.107 (1.134) | 0.025 (0.018) | 0.292 (0.238) | 0.97 (1.21) | 0.94 (0.83) | 0.95 (0.83) | 0.97 (1.48) | 0.97 (0.53) | 0.95 (2.07) | |
unif | −0.0015 (0.065) | 0.002 (0.091) | 0.107 (1.127) | 0.023 (0.017) | 0.481 (0.445) | 0.98 (1.33) | 0.94 (0.83) | 0.94 (0.83) | 0.98 (1.59) | 0.97 (0.53) | 0.93 (2.73) | |
gamma | −0.0014 (0.066) | 0.001 (0.091) | 0.106 (1.208) | 0.042 (0.021) | 0.133 (0.145) | 0.94 (1.11) | 0.94 (0.84) | 0.95 (0.84) | 0.95 (1.40) | 0.95 (0.57) | 0.96 (1.74) | |
shrink | −0.0013 (0.065) | 0.002 (0.091) | 0.102 (1.127) | 0.019 (0.017) | 0.192 (0.126) | 0.96 (1.15) | 0.94 (0.83) | 0.95 (0.83) | 0.96 (1.43) | 0.97 (0.53) | 0.97 (1.60) |
Method | Bias (MSE) | | | | | Coverage (Width) | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
, | |||||||||||||
new-hist | 0.084 (0.048) | −0.0071 (0.020) | −0.160 (2.078) | 0.037 (0.019) | 0.093 (0.063) | 0.96 (0.90) | 0.94 (0.49) | 0.96 (0.23) | 0.93 (1.40) | 0.91 (0.80) | 0.95 (0.55) | 0.95 (0.98) | |
new-hist-i | 0.077 (0.049) | −0.0062 (0.018) | −0.184 (2.045) | 0.030 (0.018) | 0.038 (0.043) | 0.94 (0.86) | 0.92 (0.45) | 0.96 (0.23) | 0.94 (1.46) | 0.94 (0.90) | 0.96 (0.55) | 0.96 (0.87) | |
new-none | 0.086 (0.056) | −0.0069 (0.021) | −0.189 (2.129) | 0.035 (0.019) | 0.119 (0.091) | 0.94 (0.92) | 0.94 (0.50) | 0.95 (0.23) | 0.93 (1.41) | 0.91 (0.81) | 0.95 (0.56) | 0.94 (1.16) | |
new-none-i | 0.081 (0.057) | −0.0063 (0.019) | −0.211 (2.101) | 0.026 (0.019) | 0.045 (0.053) | 0.92 (0.87) | 0.91 (0.46) | 0.96 (0.23) | 0.94 (1.47) | 0.94 (0.90) | 0.96 (0.55) | 0.96 (0.97) | |
KN | 0.010 (0.050) | 0.005 (0.019) | −0.167 (2.209) | 0.046 (0.019) | - | 0.96 (0.95) | 0.95 (0.53) | 0.96 (0.23) | 0.93 (1.40) | 0.91 (0.81) | 0.96 (0.55) | - | |
, | |||||||||||||
new-hist | 0.091 (0.074) | −0.103 (0.035) | 0.096 (2.735) | 0.019 (0.019) | 0.125 (0.165) | 0.96 (1.11) | 0.89 (0.60) | 0.95 (0.23) | 0.94 (1.62) | 0.90 (0.94) | 0.94 (0.55) | 0.93 (1.47) | |
new-hist-i | 0.080 (0.070) | −0.0089 (0.028) | 0.068 (2.544) | 0.013 (0.019) | 0.010 (0.114) | 0.95 (1.06) | 0.91 (0.57) | 0.95 (0.23) | 0.95 (1.66) | 0.94 (1.01) | 0.94 (0.54) | 0.95 (1.32) | |
new-none | 0.086 (0.091) | −0.0096 (0.036) | 0.083 (2.848) | 0.021 (0.020) | 0.240 (0.330) | 0.94 (1.16) | 0.90 (0.63) | 0.95 (0.23) | 0.94 (1.66) | 0.91 (0.97) | 0.94 (0.55) | 0.91 (1.98) | |
new-none-i | 0.080 (0.087) | −0.0088 (0.029) | 0.055 (2.670) | 0.013 (0.019) | 0.057 (0.172) | 0.92 (1.09) | 0.90 (0.58) | 0.95 (0.23) | 0.94 (1.67) | 0.94 (1.02) | 0.94 (0.55) | 0.92 (1.61) | |
KN | −0.0014 (0.088) | 0.002 (0.035) | 0.104 (3.050) | 0.053 (0.023) | - | 0.95 (1.23) | 0.93 (0.70) | 0.95 (0.23) | 0.93 (1.68) | 0.91 (1.00) | 0.94 (0.58) | - |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite

Chien, Y.-F.; Zhou, H.; Hanson, T.; Lystig, T. Informative g-Priors for Mixed Models. Stats 2023, 6, 169–191. https://doi.org/10.3390/stats6010011