Abstract
In psychology, small sample sizes are a frequent challenge, particularly when studying specific expert populations or using complex, cost-intensive methods such as human scoring of creative answers, as they reduce statistical power, bias results, and limit generalizability. Small samples also hinder frequentist confirmatory factor analysis (CFA), which depends on larger samples for reliable estimation, making problems such as non-convergence, inadmissible parameter estimates, and poor model fit more likely. In contrast, Bayesian methods offer a robust alternative: they are less sensitive to sample size and allow prior knowledge to be integrated through parameter priors. In the present study, we introduce small-sample structural equation modeling (SEM) to creativity research by investigating the relationship of creative fluency and nested creative cleverness with right-wing authoritarianism (RWA), starting from a sample of N = 198. We compare the stability of frequentist and Bayesian SEM results while gradually reducing the sample in steps of n = 25. Common frequentist fit indices degrade below N = 100, whereas Bayesian multivariate Rhat values indicate stable convergence down to N = 50. Standard errors of the fluency loadings inflate 40–50% faster under frequentist estimation than under Bayesian estimation, and the regression coefficients linking RWA to cleverness remain significant across all reductions. Based on these findings, we discuss (1) the critical role of Bayesian priors in stabilizing small-sample SEM, (2) the robustness of the RWA-cleverness relationship despite sample constraints, and (3) practical guidelines for minimum sample sizes in bifactor modeling.
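To make the stepwise reduction design concrete, the sketch below shows one way such a loop could be organized in Python. It is a minimal sketch under several assumptions: the semopy package and its Model/calc_stats interface, placeholder indicator names (flu1, clev1, rwa, etc.), and nested random subsamples. It is not the authors' analysis pipeline; the Bayesian refits compared in the study (e.g., via blavaan or Stan) would slot into the same loop so that Rhat values can be tracked alongside the frequentist fit indices.

```python
import numpy as np
import pandas as pd
import semopy  # assumed SEM package; the study itself may use other software

# Illustrative model syntax: indicator names (flu1..., clev1..., rwa) are
# placeholders, not the instruments or scoring used in the study.
MODEL_DESC = """
fluency    =~ flu1 + flu2 + flu3 + flu4
cleverness =~ clev1 + clev2 + clev3 + clev4
cleverness ~ rwa
"""

def stepwise_reduction(data: pd.DataFrame, step: int = 25, floor: int = 50,
                       seed: int = 1) -> pd.DataFrame:
    """Refit the frequentist SEM on nested subsamples shrinking by `step` cases."""
    rng = np.random.default_rng(seed)
    sub = data.copy()                       # start from the full sample, e.g. N = 198
    rows = []
    while len(sub) >= floor:
        model = semopy.Model(MODEL_DESC)
        model.fit(sub)
        stats = semopy.calc_stats(model)    # one-row frame with CFI, TLI, RMSEA, ...
        rows.append({"n": len(sub),
                     "cfi": stats["CFI"].iloc[0],
                     "rmsea": stats["RMSEA"].iloc[0]})
        # model.inspect() additionally exposes loadings and standard errors,
        # which tracking SE inflation across the reduction steps would rely on.
        dropped = rng.choice(sub.index, size=step, replace=False)
        sub = sub.drop(index=dropped)       # next, smaller nested subsample
    return pd.DataFrame(rows)
```

In this sketch the subsamples are nested (each step drops 25 cases from the previous subsample); whether the reductions are nested or drawn independently is an assumption not specified in the abstract.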