# A Novel Method to Use Coordinate Based Meta-Analysis to Determine a Prior Distribution for Voxelwise Bayesian Second-Level fMRI Analysis

## Abstract


## 1. Introduction

## 2. Experimental Procedure

### 2.1. Materials

#### 2.1.1. Statistical Image Datasets for Analyses

#### 2.1.2. Meta-Analysis Results for Prior Determination and Performance Evaluation

### 2.2. Basis of Voxelwise Second-Level fMRI Analysis

### 2.3. Voxelwise Bayesian Second-Level fMRI Analysis

A Bayes factor (BF_ab) indicates to what extent evidence supports a specific hypothesis of interest (H_a) over another (H_b) [8,43]. To calculate a Bayes factor, we need to examine the posterior probability of each hypothesis by updating its prior probability through observing data. Let us assume that P(H_a) indicates the prior probability of H_a, associated with our belief about whether H_a is the case before observing data. In the same way, the prior probability of H_b, P(H_b), can also be determined. Through observation, the posterior probability of each hypothesis is updated from its prior probability. The posterior probability of H_a, P(H_a | D), denotes the probability that H_a is the case given the data (D). In the same vein, the posterior probability of H_b, P(H_b | D), can also be defined. To calculate the posterior probabilities, the Bayesian updating process is performed following Bayes' theorem:

P(H_a | D) = P(D | H_a) × P(H_a) / P(D)

where P(D) is the marginal probability of the observed data.
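
As a concrete illustration of this updating step, the sketch below applies Bayes' theorem to two competing hypotheses; all priors and likelihoods are hypothetical numbers, not values from the study:

```python
def posterior(prior_a, prior_b, lik_a, lik_b):
    """Update two hypotheses' priors via Bayes' theorem.

    P(H|D) = P(D|H) * P(H) / P(D), where P(D) is obtained by
    marginalizing the likelihood over both hypotheses.
    """
    p_d = lik_a * prior_a + lik_b * prior_b  # P(D), the normalizing constant
    return lik_a * prior_a / p_d, lik_b * prior_b / p_d

# Hypothetical inputs: equal priors, data twice as likely under H_a.
post_a, post_b = posterior(0.5, 0.5, 0.2, 0.1)
print(round(post_a, 4), round(post_b, 4))  # the two posteriors sum to 1
```

Because P(D) is shared by both hypotheses, it cancels when the two posteriors are compared, which is what makes the odds formulation below convenient.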

Once the posterior probabilities, P(H_a | D) and P(H_b | D), are acquired, the Bayes factor BF_ab can be calculated. We can start by considering the posterior odds, P(H_a | D)/P(H_b | D), which indicate the relative ratio of P(H_a | D) versus P(H_b | D). These odds can be calculated as follows [6]:

P(H_a | D) / P(H_b | D) = [P(D | H_a) / P(D | H_b)] × [P(H_a) / P(H_b)]

The first term on the right-hand side, P(D | H_a)/P(D | H_b), is the Bayes factor, BF_ab, and the second term, P(H_a)/P(H_b), is the prior odds. We can use the value BF_ab = P(D | H_a)/P(D | H_b) to examine to what extent the evidence (D) supports H_a over H_b. If BF_ab exceeds 1, the evidence favors H_a over H_b. If BF_ab < 1, H_b is deemed more likely to be the case given the evidence.
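
The decomposition above can be checked numerically. In this sketch (all quantities hypothetical), the posterior odds computed directly equal the Bayes factor multiplied by the prior odds:

```python
# Hypothetical likelihoods and priors.
lik_a, lik_b = 0.2, 0.1      # P(D|H_a), P(D|H_b)
prior_a, prior_b = 0.3, 0.7  # P(H_a), P(H_b)

bf_ab = lik_a / lik_b                  # Bayes factor BF_ab
prior_odds = prior_a / prior_b
post_a = lik_a * prior_a               # unnormalized posteriors; the shared
post_b = lik_b * prior_b               # P(D) cancels when taking the odds
posterior_odds = post_a / post_b

assert abs(posterior_odds - bf_ab * prior_odds) < 1e-12
print(bf_ab)  # 2.0: the evidence favors H_a over H_b
```

Note that BF_ab > 1 here even though the prior odds favor H_b; the Bayes factor isolates what the data contribute.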

Conventional cutoffs guide the interpretation of the magnitude of BF_ab. When 1 ≤ BF_ab < 3, the evidence is deemed anecdotal, so it remains unclear whether the evidence significantly supports one hypothesis over the other. When BF_ab ≥ 3, the evidence positively supports H_a over H_b. In the same vein, BF_ab ≥ 10, ≥ 30, and ≥ 100 have been used as indicators of the presence of strong, very strong, and extremely strong evidence supporting H_a over H_b, respectively. Conversely, if BF_ab becomes smaller than 1/3, the evidence is more likely to support H_b over H_a; BF_ab ≤ 1/3, 1/10, 1/30, and 1/100 are deemed to indicate the presence of positive, strong, very strong, and extremely strong evidence supporting H_b over H_a, respectively.
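
These conventional cutoffs can be encoded as a small lookup. The function below is an illustration, not part of the study's pipeline; it maps a BF_ab value to its evidence label, checking the most extreme cutoffs first:

```python
def evidence_label(bf):
    """Map a Bayes factor BF_ab to a conventional evidence category."""
    # Cutoffs for evidence supporting H_a over H_b; their reciprocals
    # indicate the corresponding strength of support for H_b.
    cuts = [(100, "extremely strong"), (30, "very strong"),
            (10, "strong"), (3, "positive")]
    for cut, label in cuts:
        if bf >= cut:
            return f"{label} evidence for H_a"
        if bf <= 1 / cut:
            return f"{label} evidence for H_b"
    return "anecdotal evidence"

print(evidence_label(5.2))   # positive evidence for H_a
print(evidence_label(0.02))  # very strong evidence for H_b
```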

In voxelwise second-level fMRI analysis, the hypothesis of interest, H_1, is about whether there is a significant non-zero effect in a voxel when two conditions are compared; the null hypothesis, H_0, is about whether there is no such effect. If we conduct frequentist analysis, the resultant p-value speaks to P(D | H), whether the observed data are likely given a hypothesis, rather than P(H | D), whether the hypothesis is likely given the data, which is what fMRI researchers are primarily interested in, in most cases, unless they intend to examine null effects. In fact, p-values do not inform us about whether H_1, the alternative hypothesis, should be accepted; they are only related to whether H_0, the null hypothesis, should be rejected. Interpreting p-values is also challenging: unlike Bayes factors, which quantify the extent to which evidence supports a hypothesis of interest, p-values quantify the extremity of the observed data given the null hypothesis. Because fMRI researchers are primarily interested in testing the presence of a significant non-zero effect (H_1) rather than its absence (H_0), at the epistemological level, Bayes factors are more useful to interpret than p-values.
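
The contrast between the two quantities can be made concrete with a one-sample test. The sketch below is not the SPM/BayesFactorFMRI computation used in this study; it uses the well-known BIC approximation to the Bayes factor (Wagenmakers, 2007) on made-up data, purely to illustrate that BF_10 quantifies support for H_1 rather than the extremity of the data under H_0:

```python
import math

def bf10_bic(xs):
    """Approximate BF_10 for H1: mean != 0 vs. H0: mean = 0,
    via the BIC approximation BF_10 ~= exp((BIC_0 - BIC_1) / 2)."""
    n = len(xs)
    mean = sum(xs) / n
    sse1 = sum((x - mean) ** 2 for x in xs)  # residuals under H1
    sse0 = sum(x ** 2 for x in xs)           # residuals under H0 (mean fixed at 0)
    bic1 = n * math.log(sse1 / n) + math.log(n)  # H1 pays for one extra parameter
    bic0 = n * math.log(sse0 / n)
    return math.exp((bic0 - bic1) / 2)

xs = [0.8, 1.1, 0.5, 1.4, 0.9, 1.2, 0.7, 1.0]  # hypothetical effect estimates
print(bf10_bic(xs) >= 3)  # True: positive evidence for a non-zero effect
```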

For each voxel, a Bayes factor, BF_10, regarding to what extent evidence supports the presence of a significant effect (activity difference) in the voxel, was calculated with the input images. Then, to identify voxels that reported significant activity, the resultant BF_10 values were thresholded at BF_10 ≥ 3, indicating the presence of positive evidence supporting a non-zero effect in each voxel.
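
As a minimal sketch of this thresholding step (using a made-up one-dimensional list of BF_10 values in place of a real three-dimensional statistical image):

```python
# Hypothetical per-voxel BF_10 values; a real analysis would load a
# 3-D array from a NIfTI image instead.
bf10_map = [0.4, 2.9, 3.0, 17.5, 1.0, 240.0]

THRESHOLD = 3.0  # BF_10 >= 3: positive evidence for a non-zero effect
significant = [i for i, bf in enumerate(bf10_map) if bf >= THRESHOLD]
print(significant)  # indices of voxels with positive evidence: [2, 3, 5]
```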

### 2.4. Prior Determination Based on Results from Meta-Analyses

### 2.5. Performance Evaluation

#### 2.5.1. Overlap Index for Evaluation

For each analysis, an overlap index, I_ovl, was calculated from three voxel counts [47]: V_ovl, the number of voxels that were significant in both the fMRI analysis result image and the meta-analysis result image; V_res, the number of significant voxels in the fMRI analysis result image; and V_met, the number of significant voxels in the meta-analysis result image. I_ovl was calculated with a customized R code.
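
The three voxel counts entering I_ovl can be obtained from two binarized maps as below. This is a Python sketch with made-up data; the study itself used customized R code, and the exact formula combining the counts follows the definition in [47]:

```python
# Hypothetical binarized maps: 1 = significant voxel, 0 = not significant.
result_map = [1, 1, 0, 1, 0, 0, 1, 0]  # fMRI analysis result image
meta_map   = [1, 0, 0, 1, 1, 0, 1, 0]  # meta-analysis result image

v_res = sum(result_map)  # V_res: significant in the analysis result
v_met = sum(meta_map)    # V_met: significant in the meta-analysis result
v_ovl = sum(r & m for r, m in zip(result_map, meta_map))  # V_ovl: both

print(v_ovl, v_res, v_met)  # 3 4 4
```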

#### 2.5.2. Statistical Analysis of Performance Outcomes

Mixed-effects analyses were performed to statistically compare the analysis methods in terms of I_ovl. Frequentist mixed-effects analysis was performed with the R package lmerTest. In addition to ordinary frequentist mixed-effects analysis, which reports p-values of tested predictors, Bayesian mixed-effects analysis was performed with the R package BayesFactor. Bayesian mixed-effects analysis is suitable for identifying the best regression model that predicts the dependent variable of interest in simple linear regression [50] as well as in multilevel modeling [51]. With this method, it was examined whether the best regression model identified through Bayesian mixed-effects analysis included the analysis type as a predictor. If the analysis type was included, the employment of different analysis methods was deemed significantly associated with the difference in performance outcomes.

The main mixed-effects analyses were conducted with I_ovl as the dependent variable; the analysis method (Bayesian analysis with a prior distribution determined by meta-analysis vs. Bayesian analysis with a default prior distribution vs. frequentist analysis) as the fixed effect; and the analyzed dataset, task condition category, and type of meta-analysis result used for performance evaluation as random effects. As explained previously, whether the best regression model included the analysis type as a predictor, and whether evidence significantly supported inclusion of the analysis type in the model, were tested. Moreover, as an auxiliary analysis, the same mixed-effects analyses were performed with a different fixed effect, the analysis type further differentiated by the four P values used for prior distribution determination (Bayesian analysis with a prior distribution determined by meta-analysis with four different P values (80%, 85%, 90%, 95%) vs. Bayesian analysis with a default prior distribution vs. frequentist analysis).

In an additional set of mixed-effects analyses, I_ovl was set as the dependent variable; the analysis type (Bayesian analysis with a prior distribution determined by meta-analysis vs. Bayesian analysis with a default prior vs. frequentist analysis) and the type of meta-analysis used for prior determination (coordinate-based meta-analysis vs. image-based meta-analysis) were the two fixed effects; and the analyzed dataset and the type of meta-analysis result used for performance evaluation were the random effects.

## 3. Results

### 3.1. Voxelwise Second-Level fMRI Analyses

The results of the performance evaluations, in terms of I_ovl, are presented in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. The results are presented for each dataset, each type of meta-analysis used for prior determination, and each analysis type. Figure 2, Figure 3 and Figure 4 present the results of the analyses of the three working memory datasets, DeYoung et al. (2009), Henson et al. (2002), and Pinho et al. (2020), respectively. In these cases, three subplots report results from Bayesian analyses with prior distributions determined by image-based meta-analysis (Figure 2A, Figure 3A and Figure 4A), BrainMap and Ginger ALE (Figure 2B, Figure 3B and Figure 4B), and NeuroQuery (Figure 2C, Figure 3C and Figure 4C), respectively. Figure 5A,B present the results from the analyses of Pinho et al.'s (2020) speech dataset. The results of the analyses of Gordon et al.'s (2017) face dataset are shown in Figure 6. In the analyses of these two datasets, two information sources, BrainMap and Ginger ALE (Figure 6A) and NeuroQuery (Figure 6B), were employed for prior determination.

Across all analyses, the I_ovl values resulting from meta-analysis-informed Bayesian analyses were significantly higher than those resulting from Bayesian analyses with a default Cauchy prior distribution or from frequentist analyses. These higher I_ovl values of Bayesian analyses with prior distributions determined by meta-analyses were observed in the analyses of all datasets, regardless of which type of meta-analysis was used for prior determination.

### 3.2. Statistical Analyses of Performance Outcomes

In the first main analysis, the best regression model, associated with a Bayes factor on the order of 10^84, included the analysis type as a predictor. In addition, the inclusion of the analysis type was significantly substantiated by evidence, BF = 5.44 × 10^64. Frequentist mixed-effects analysis reported that meta-analysis-informed Bayesian analysis outperformed both Bayesian analysis with a default prior distribution, t (259.97) = −14.43, B = −0.07, se = 0.00, p < 0.001, Cohen's d = −1.79, and frequentist analysis, t (260.32) = −27.35, B = −0.14, se = 0.01, p < 0.001, Cohen's d = −3.39.

In the auxiliary analysis, the best model, associated with a Bayes factor on the order of 10^81, also included the analysis type. Inclusion of the analysis type was also supported by evidence, BF = 1.69 × 10^62. When meta-analysis-informed Bayesian analysis with P = 80% was set as the reference group, frequentist mixed-effects analysis indicated that it outperformed meta-analysis-informed Bayesian analysis with P = 95%, t (257.00) = −2.45, B = −0.01, se = 0.00, p = 0.01, Cohen's d = −0.31; Bayesian analysis with a default prior distribution, t (257.00) = −12.30, B = −0.07, se = 0.00, p < 0.001, Cohen's d = −1.53; and frequentist analysis, t (257.20) = −23.27, B = −0.15, se = 0.00, p < 0.001, Cohen's d = −2.90. However, such differences were not found when it was compared with meta-analysis-informed Bayesian analysis with the other P values: 85%, t (257.00) = −0.16, B = −0.00, se = 0.00, p = 0.87, Cohen's d = −0.02, and 90%, t (257.00) = −0.64, B = −0.00, se = 0.00, p = 0.52, Cohen's d = −0.08.

In the additional analysis that included the type of meta-analysis used for prior determination as a second fixed effect, the best model, associated with a Bayes factor on the order of 10^67, included the analysis type but not the meta-analysis type. Similarly, the inclusion of the analysis type was substantiated by evidence, BF = 7.34 × 10^58, while that of the meta-analysis type was not, BF = 0.06. When the best model, which excluded the type of meta-analysis used for prior determination, was examined, both Bayesian analysis with a default prior distribution, t (202.00) = −13.67, B = −0.08, se = 0.01, p < 0.001, Cohen's d = −1.92, and frequentist analysis, t (202.00) = −27.36, B = −0.16, se = 0.01, p < 0.001, Cohen's d = −3.85, showed worse performance than Bayesian analysis with meta-analysis-informed prior determination.

In the corresponding auxiliary analysis, the best model (BF on the order of 10^64) included the type of analysis but not the type of meta-analysis used for prior determination. Although the inclusion of the analysis type in the best model was substantiated by evidence, BF = 6.7 × 10^55, that of the type of meta-analysis employed for prior determination was not, BF = 0.05. When the best model was examined, Bayesian analysis with P = 80% outperformed Bayesian analysis with P = 95%, t (257.00) = −2.45, B = −0.01, se = 0.01, p = 0.02, Cohen's d = −0.23; Bayesian analysis with a default prior distribution, t (257.00) = −12.30, B = −0.07, se = 0.01, p < 0.001, Cohen's d = −1.60; and frequentist analysis, t (257.20) = −23.27, B = −0.15, se = 0.01, p < 0.001, Cohen's d = −3.14. However, it did not show better performance than Bayesian analysis with P = 85%, t (257.00) = −0.16, B = −0.00, se = 0.01, p = 0.87, Cohen's d = −0.01, or P = 90%, t (257.00) = −0.64, B = −0.00, se = 0.01, p = 0.52, Cohen's d = −0.04.

## 4. Discussion

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Bennett, C.M.; Miller, M.B.; Wolford, G.L. Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument for Multiple Comparisons Correction. NeuroImage **2009**, 47, S125.
- Eklund, A.; Nichols, T.E.; Knutsson, H. Cluster Failure: Why FMRI Inferences for Spatial Extent Have Inflated False-Positive Rates. Proc. Natl. Acad. Sci. USA **2016**, 113, 7900–7905.
- Mueller, K.; Lepsien, J.; Möller, H.E.; Lohmann, G. Commentary: Cluster Failure: Why FMRI Inferences for Spatial Extent Have Inflated False-Positive Rates. Front. Hum. Neurosci. **2017**, 11, 345.
- Nichols, T.E.; Eklund, A.; Knutsson, H. A Defense of Using Resting-State FMRI as Null Data for Estimating False Positive Rates. Cogn. Neurosci. **2017**, 8, 144–149.
- Cox, R.W.; Chen, G.; Glen, D.R.; Reynolds, R.C.; Taylor, P.A. FMRI Clustering in AFNI: False-Positive Rates Redux. Brain Connect. **2017**, 7, 152–171.
- Wagenmakers, E.-J.; Marsman, M.; Jamil, T.; Ly, A.; Verhagen, J.; Love, J.; Selker, R.; Gronau, Q.F.; Šmíra, M.; Epskamp, S.; et al. Bayesian Inference for Psychology. Part I: Theoretical Advantages and Practical Ramifications. Psychon. Bull. Rev. **2018**, 25, 35–57.
- Han, H.; Park, J.; Thoma, S.J. Why Do We Need to Employ Bayesian Statistics and How Can We Employ It in Studies of Moral Education?: With Practical Guidelines to Use JASP for Educators and Researchers. J. Moral Educ. **2018**, 47, 519–537.
- Wagenmakers, E.-J.; Love, J.; Marsman, M.; Jamil, T.; Ly, A.; Verhagen, J.; Selker, R.; Gronau, Q.F.; Dropmann, D.; Boutin, B.; et al. Bayesian Inference for Psychology. Part II: Example Applications with JASP. Psychon. Bull. Rev. **2018**, 25, 58–76.
- Rouder, J.N.; Speckman, P.L.; Sun, D.; Morey, R.D.; Iverson, G. Bayesian t Tests for Accepting and Rejecting the Null Hypothesis. Psychon. Bull. Rev. **2009**, 16, 225–237.
- Gelman, A.; Hill, J.; Yajima, M. Why We (Usually) Don't Have to Worry about Multiple Comparisons. J. Res. Educ. Eff. **2012**, 5, 189–211.
- Woolrich, M.W. Bayesian Inference in FMRI. NeuroImage **2012**, 62, 801–810.
- Han, H.; Park, J. Using SPM 12's Second-Level Bayesian Inference Procedure for FMRI Analysis: Practical Guidelines for End Users. Front. Neuroinform. **2018**, 12, 1.
- Mejia, A.F.; Yue, Y.; Bolin, D.; Lindgren, F.; Lindquist, M.A. A Bayesian General Linear Modeling Approach to Cortical Surface FMRI Data Analysis. J. Am. Stat. Assoc. **2020**, 115, 501–520.
- Han, H. BayesFactorFMRI: Implementing Bayesian Second-Level FMRI Analysis with Multiple Comparison Correction and Bayesian Meta-Analysis of FMRI Images with Multiprocessing. J. Open Res. Softw. **2021**, 9, 1.
- Han, H. Implementation of Bayesian Multiple Comparison Correction in the Second-Level Analysis of FMRI Data: With Pilot Analyses of Simulation and Real FMRI Datasets Based on Voxelwise Inference. Cogn. Neurosci. **2020**, 11, 157–169.
- de Jong, T. A Bayesian Approach to the Correction for Multiplicity; The Society for the Improvement of Psychological Science: Charlottesville, VA, USA, 2019.
- Westfall, P.H.; Johnson, W.O.; Utts, J.M. A Bayesian Perspective on the Bonferroni Adjustment. Biometrika **1997**, 84, 419–427.
- Liu, C.C.; Aitkin, M. Bayes Factors: Prior Sensitivity and Model Generalizability. J. Math. Psychol. **2008**, 52, 362–375.
- Sinharay, S.; Stern, H.S. On the Sensitivity of Bayes Factors to the Prior Distributions. Am. Stat. **2002**, 56, 196–201.
- Han, H. A Method to Adjust a Prior Distribution in Bayesian Second-Level FMRI Analysis. PeerJ **2021**, 9, e10861.
- Kruschke, J.K.; Liddell, T.M. Bayesian Data Analysis for Newcomers. Psychon. Bull. Rev. **2018**, 25, 155–177.
- van de Schoot, R.; Sijbrandij, M.; Depaoli, S.; Winter, S.D.; Olff, M.; van Loey, N.E. Bayesian PTSD-Trajectory Analysis with Informed Priors Based on a Systematic Literature Search and Expert Elicitation. Multivar. Behav. Res. **2018**, 53, 267–291.
- Avci, E. Using Informative Prior from Meta-Analysis in Bayesian Approach. J. Data Sci. **2017**, 15, 575–588.
- Zondervan-Zwijnenburg, M.; Peeters, M.; Depaoli, S.; van de Schoot, R. Where Do Priors Come From? Applying Guidelines to Construct Informative Priors in Small Sample Research. Res. Hum. Dev. **2017**, 14, 305–320.
- Han, H.; Park, J. Bayesian Meta-Analysis of FMRI Image Data. Cogn. Neurosci. **2019**, 10, 66–76.
- Salimi-Khorshidi, G.; Smith, S.M.; Keltner, J.R.; Wager, T.D.; Nichols, T.E. Meta-Analysis of Neuroimaging Data: A Comparison of Image-Based and Coordinate-Based Pooling of Studies. NeuroImage **2009**, 45, 810–823.
- Eickhoff, S.B.; Bzdok, D.; Laird, A.R.; Roski, C.; Caspers, S.; Zilles, K.; Fox, P.T. Co-Activation Patterns Distinguish Cortical Modules, Their Connectivity and Functional Differentiation. NeuroImage **2011**, 57, 938–949.
- Eickhoff, S.B.; Bzdok, D.; Laird, A.R.; Kurth, F.; Fox, P.T. Activation Likelihood Estimation Meta-Analysis Revisited. NeuroImage **2012**, 59, 2349–2361.
- Eickhoff, S.B.; Laird, A.R.; Grefkes, C.; Wang, L.E.; Zilles, K.; Fox, P.T. Coordinate-Based Activation Likelihood Estimation Meta-Analysis of Neuroimaging Data: A Random-Effects Approach Based on Empirical Estimates of Spatial Uncertainty. Hum. Brain Mapp. **2009**, 30, 2907–2926.
- Dockès, J.; Poldrack, R.A.; Primet, R.; Gözükan, H.; Yarkoni, T.; Suchanek, F.; Thirion, B.; Varoquaux, G. NeuroQuery, Comprehensive Meta-Analysis of Human Brain Mapping. eLife **2020**, 9, e53385.
- DeYoung, C.G.; Shamosh, N.A.; Green, A.E.; Braver, T.S.; Gray, J.R. Intellect as Distinct from Openness: Differences Revealed by FMRI of Working Memory. J. Personal. Soc. Psychol. **2009**, 97, 883–892.
- Henson, R.N.A.; Shallice, T.; Gorno-Tempini, M.L.; Dolan, R.J. Face Repetition Effects in Implicit and Explicit Memory Tests as Measured by FMRI. Cereb. Cortex **2002**, 12, 178–186.
- Kragel, P.A.; Kano, M.; van Oudenhove, L.; Ly, H.G.; Dupont, P.; Rubio, A.; Delon-Martin, C.; Bonaz, B.L.; Manuck, S.B.; Gianaros, P.J.; et al. Generalizable Representations of Pain, Cognitive Control, and Negative Emotion in Medial Frontal Cortex. Nat. Neurosci. **2018**, 21, 283–289.
- Pinho, A.L.; Amadon, A.; Gauthier, B.; Clairis, N.; Knops, A.; Genon, S.; Dohmatob, E.; Torre, J.J.; Ginisty, C.; Becuwe-Desmidt, S.; et al. Individual Brain Charting Dataset Extension, Second Release of High-Resolution FMRI Data for Cognitive Mapping. Sci. Data **2020**, 7, 353.
- Gordon, E.M.; Laumann, T.O.; Gilmore, A.W.; Newbold, D.J.; Greene, D.J.; Berg, J.J.; Ortega, M.; Hoyt-Drazen, C.; Gratton, C.; Sun, H.; et al. Precision Functional Mapping of Individual Human Brains. Neuron **2017**, 95, 791–807.e7.
- Laird, A.R.; Lancaster, J.L.; Fox, P.T. BrainMap: The Social Evolution of a Human Brain Mapping Database. Neuroinformatics **2005**, 3, 65–78.
- Turkeltaub, P.E.; Eickhoff, S.B.; Laird, A.R.; Fox, M.; Wiener, M.; Fox, P. Minimizing within-Experiment and within-Group Effects in Activation Likelihood Estimation Meta-Analyses. Hum. Brain Mapp. **2012**, 33, 1–13.
- Laird, A.R.; Fox, P.M.; Price, C.J.; Glahn, D.C.; Uecker, A.M.; Lancaster, J.L.; Turkeltaub, P.E.; Kochunov, P.; Fox, P.T. ALE Meta-Analysis: Controlling the False Discovery Rate and Performing Statistical Contrasts. Hum. Brain Mapp. **2005**, 25, 155–164.
- Yarkoni, T.; Poldrack, R.A.; Nichols, T.E.; van Essen, D.C.; Wager, T.D. Large-Scale Automated Synthesis of Human Functional Neuroimaging Data. Nat. Methods **2011**, 8, 665–670.
- Poldrack, R.A. Inferring Mental States from Neuroimaging Data: From Reverse Inference to Large-Scale Decoding. Neuron **2011**, 72, 692–697.
- Glymour, C.; Hanson, C. Reverse Inference in Neuropsychology. Br. J. Philos. Sci. **2016**, 67, 1139–1153.
- Dockès, J.; Poldrack, R.A.; Primet, R.; Gözükan, H.; Yarkoni, T.; Suchanek, F.; Thirion, B.; Varoquaux, G. About NeuroQuery. Available online: https://neuroquery.org/about (accessed on 13 January 2022).
- Kass, R.E.; Raftery, A.E. Bayes Factors. J. Am. Stat. Assoc. **1995**, 90, 773–795.
- Stefan, A.M.; Gronau, Q.F.; Schönbrodt, F.D.; Wagenmakers, E.-J. A Tutorial on Bayes Factor Design Analysis Using an Informed Prior. Behav. Res. Methods **2019**, 51, 1042–1058.
- Han, H. Neural Correlates of Moral Sensitivity and Moral Judgment Associated with Brain Circuitries of Selfhood: A Meta-Analysis. J. Moral Educ. **2017**, 46, 97–113.
- Cremers, H.R.; Wager, T.D.; Yarkoni, T. The Relation between Statistical Power and Inference in FMRI. PLoS ONE **2017**, 12, e0184923.
- Han, H.; Glenn, A.L. Evaluating Methods of Correcting for Multiple Comparisons Implemented in SPM12 in Social Neuroscience FMRI Studies: An Example from Moral Psychology. Soc. Neurosci. **2018**, 13, 257–267.
- Han, H.; Glenn, A.L.; Dawson, K.J. Evaluating Alternative Correction Methods for Multiple Comparison in Functional Neuroimaging Research. Brain Sci. **2019**, 9, 198.
- Ashburner, J.; Barnes, G.; Chen, C.-C.; Daunizeau, J.; Flandin, G.; Friston, K.; Kiebel, S.; Kilner, J.; Litvak, V.; Moran, R.; et al. SPM 12 Manual; Wellcome Trust Centre for Neuroimaging: London, UK, 2016.
- Han, H.; Dawson, K.J. Improved Model Exploration for the Relationship between Moral Foundations and Moral Judgment Development Using Bayesian Model Averaging. J. Moral Educ. **2021**, 1–5.
- Han, H. Exploring the Association between Compliance with Measures to Prevent the Spread of COVID-19 and Big Five Traits with Bayesian Generalized Linear Model. Personal. Individ. Differ. **2021**, 176, 110787.
- Gorgolewski, K.J.; Varoquaux, G.; Rivera, G.; Schwarz, Y.; Ghosh, S.S.; Maumet, C.; Sochat, V.V.; Nichols, T.E.; Poldrack, R.A.; Poline, J.-B.; et al. NeuroVault.Org: A Web-Based Repository for Collecting and Sharing Unthresholded Statistical Maps of the Human Brain. Front. Neuroinform. **2015**, 9, 8.
- Laird, A.R.; Eickhoff, S.B.; Fox, P.M.; Uecker, A.M.; Ray, K.L.; Saenz, J.J.; McKay, D.R.; Bzdok, D.; Laird, R.W.; Robinson, J.L.; et al. The BrainMap Strategy for Standardization, Sharing, and Meta-Analysis of Neuroimaging Data. BMC Res. Notes **2011**, 4, 349.
- Poldrack, R.A. Can Cognitive Processes Be Inferred from Neuroimaging Data? Trends Cogn. Sci. **2006**, 10, 59–63.
- Ly, A.; Stefan, A.; van Doorn, J.; Dablander, F.; van den Bergh, D.; Sarafoglou, A.; Kucharský, S.; Derks, K.; Gronau, Q.F.; Raj, A.; et al. The Bayesian Methodology of Sir Harold Jeffreys as a Practical Alternative to the P Value Hypothesis Test. Comput. Brain Behav. **2020**, 3, 153–161.

**Figure 1.** Results from the analyses of DeYoung et al.'s (2009) working memory dataset. Red: voxels that survived thresholding. (**A**) Bayesian analysis with a prior distribution determined by image-based meta-analysis. (**B**) Bayesian analysis with a prior distribution determined by coordinate-based meta-analysis with BrainMap and Ginger ALE. (**C**) Bayesian analysis with a prior distribution determined by coordinate-based meta-analysis with NeuroQuery. (**D**) Bayesian analysis with an adjusted default Cauchy prior distribution. (**E**) Voxelwise frequentist analysis with familywise error correction.

**Figure 2.** Performance evaluation with DeYoung et al.'s (2009) dataset. (**A**) Analysis results when image-based meta-analysis was used for prior determination. (**B**) Analysis results when coordinate-based meta-analysis with BrainMap and Ginger ALE was used for prior determination. (**C**) Analysis results when coordinate-based meta-analysis with NeuroQuery was used for prior determination.

**Figure 3.** Performance evaluation with Henson et al.'s (2002) dataset. (**A**) Analysis results when image-based meta-analysis was used for prior determination. (**B**) Analysis results when coordinate-based meta-analysis with BrainMap and Ginger ALE was used for prior determination. (**C**) Analysis results when coordinate-based meta-analysis with NeuroQuery was used for prior determination.

**Figure 4.** Performance evaluation with Pinho et al.'s (2020) working memory dataset. (**A**) Analysis results when image-based meta-analysis was used for prior determination. (**B**) Analysis results when coordinate-based meta-analysis with BrainMap and Ginger ALE was used for prior determination. (**C**) Analysis results when coordinate-based meta-analysis with NeuroQuery was used for prior determination.

**Figure 5.** Performance evaluation with Pinho et al.'s (2020) speech dataset. (**A**) Analysis results when coordinate-based meta-analysis with BrainMap and Ginger ALE was used for prior determination. (**B**) Analysis results when coordinate-based meta-analysis with NeuroQuery was used for prior determination.

**Figure 6.** Performance evaluation with Gordon et al.'s (2017) dataset. (**A**) Analysis results when coordinate-based meta-analysis with BrainMap and Ginger ALE was used for prior determination. (**B**) Analysis results when coordinate-based meta-analysis with NeuroQuery was used for prior determination.

| Category | Dataset Name | Sample Size | Compared Task Conditions | Link to Open Repository |
| --- | --- | --- | --- | --- |
| Working memory | DeYoung et al. (2009) (in Kragel et al.'s (2018) repository) | 15 | 3-back vs. fixation | https://neurovault.org/collections/3324/ (under "CBM" subfolder) (accessed on 13 January 2022) |
| Working memory | Henson et al. (2002) | 12 | Famous vs. non-famous face memory | http://www.fil.ion.ucl.ac.uk/spm/download/data/face_rfx/face_rfx.zip (under "cons_can" subfolder) (accessed on 13 January 2022) |
| Working memory | Pinho et al. (2020) | 13 | 2-back vs. 0-back | https://neurovault.org/collections/6618/ (accessed on 13 January 2022) |
| Speech | Pinho et al. (2020) | 18 | Speech vs. natural sound listening | https://neurovault.org/collections/2138/ (accessed on 13 January 2022) |
| Face | Gordon et al. (2017) | 10 | Face vs. word identification | https://neurovault.org/collections/2447/ (accessed on 13 January 2022) |

| Category | Type | Acquisition Method |
| --- | --- | --- |
| Working memory | Bayesian meta-analysis | Han and Park's (2018) meta-analysis, acquired from Han (2021) GitHub: https://github.com/hyemin-han/Prior-Adjustment-BayesFactorFMRI/tree/master/Working_memory_fMRI/Performance_evaluation (accessed on 13 January 2022) |
| Working memory | BrainMap + Ginger ALE | Sleuth: Normal Mapping & Activations Only & Paradigm Class = n-back; Ginger ALE: cluster-forming p < 0.001, cluster-level FWE p < 0.01 |
| Working memory | NeuroSynth | term = "working memory": https://neurosynth.org/analyses/terms/working%20memory/ (accessed on 13 January 2022) |
| Working memory | NeuroQuery | term = "working memory": https://neuroquery.org/query?text=working+memory+ (accessed on 13 January 2022) |
| Speech | BrainMap + Ginger ALE | Sleuth: Normal Mapping & Activations Only & Keywords = speaking \| Speech \| speech \| speech processing; Ginger ALE: cluster-forming p < 0.001, cluster-level FWE p < 0.01 |
| Speech | NeuroSynth | term = "speech": https://neurosynth.org/analyses/terms/speech/ (accessed on 13 January 2022) |
| Speech | NeuroQuery | term = "speech": https://neuroquery.org/query?text=speech+ (accessed on 13 January 2022) |
| Face | BrainMap + Ginger ALE | Sleuth: Normal Mapping & Activations Only & Keywords = face \| faces \| face recognition \| facial recognition; Ginger ALE: cluster-forming p < 0.001, cluster-level FWE p < 0.01 |
| Face | NeuroSynth | term = "face": https://neurosynth.org/analyses/terms/face/ (accessed on 13 January 2022) |
| Face | NeuroQuery | term = "face": https://neuroquery.org/query?text=face+ (accessed on 13 January 2022) |

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Han, H. A Novel Method to Use Coordinate Based Meta-Analysis to Determine a Prior Distribution for Voxelwise Bayesian Second-Level fMRI Analysis. *Mathematics* **2022**, *10*, 356.
https://doi.org/10.3390/math10030356

**AMA Style**

Han H. A Novel Method to Use Coordinate Based Meta-Analysis to Determine a Prior Distribution for Voxelwise Bayesian Second-Level fMRI Analysis. *Mathematics*. 2022; 10(3):356.
https://doi.org/10.3390/math10030356

**Chicago/Turabian Style**

Han, Hyemin. 2022. "A Novel Method to Use Coordinate Based Meta-Analysis to Determine a Prior Distribution for Voxelwise Bayesian Second-Level fMRI Analysis" *Mathematics* 10, no. 3: 356.
https://doi.org/10.3390/math10030356