Journal of Intelligence
  • Article
  • Open Access

17 November 2025

Small Samples, Big Insights: A Methodological Comparison of Estimation Techniques for Latent Divergent Thinking Models

1 Institute of Psychology, University of Hildesheim, Universitätsplatz 1, 31141 Hildesheim, Germany
2 Hector Research Institute of Education Sciences and Psychology, University of Tübingen, 72074 Tübingen, Germany
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
J. Intell. 2025, 13(11), 150; https://doi.org/10.3390/jintelligence13110150
This article belongs to the Special Issue Analysis of a Divergent Thinking Dataset

Abstract

In psychology, small sample sizes are a frequent challenge, particularly when studying specific expert populations or using complex and cost-intensive methods such as human scoring of creative answers, because they reduce statistical power, bias results, and limit generalizability. They also hinder the use of frequentist confirmatory factor analysis (CFA), which depends on larger samples for reliable estimation; problems such as non-convergence, inadmissible parameter estimates, and poor model fit become more likely. Bayesian methods, in contrast, offer a robust alternative: they are less sensitive to sample size and allow the integration of prior knowledge through parameter priors. In the present study, we introduce small-sample structural equation modeling (SEM) to creativity research by investigating the relationship of creative fluency and nested creative cleverness with right-wing authoritarianism (RWA), starting from a sample of N = 198. We compare the stability of results in frequentist and Bayesian SEM while gradually reducing the sample in steps of n = 25. We find that common frequentist fit indices degrade below N = 100, whereas Bayesian multivariate Rhat values indicate stable convergence down to N = 50. Standard errors for fluency loadings inflate 40–50% faster in frequentist SEM than in Bayesian estimation, and regression coefficients linking RWA to cleverness remain significant across all reductions. Based on these findings, we discuss (1) the critical role of Bayesian priors in stabilizing small-sample SEM, (2) the robustness of the RWA-cleverness relationship despite sample constraints, and (3) practical guidelines for minimum sample sizes in bifactor modeling.
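A minimal sketch of the shrinking-sample comparison described above, on the frequentist side only, might look as follows in Python. The semopy package, the lavaan-style model string, the variable names (item1 to item5, rwa), and the extracted column labels (CFI, RMSEA, Std. Err) are illustrative assumptions, not the authors' actual analysis code or data.

```python
# Illustrative sketch only: shrink a dataset in steps of 25 cases and refit a
# frequentist SEM each time, recording fit indices and standard errors.
# The semopy calls, model string, variable names, and column labels are
# assumptions made for this example, not the authors' analysis code.
import pandas as pd
import semopy

# Hypothetical nested-factor specification: all items load on fluency, a
# subset also loads on a nested cleverness factor regressed on an observed
# RWA score. (A real nested/bifactor model would also constrain the two
# factors to be orthogonal.)
MODEL_DESC = """
fluency    =~ item1 + item2 + item3 + item4 + item5
cleverness =~ item1 + item2 + item3
cleverness ~ rwa
"""

def fit_once(data: pd.DataFrame) -> dict:
    """Fit the model to one (sub)sample and pull out a few summaries."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    stats = semopy.calc_stats(model)   # fit indices (CFI, RMSEA, ...)
    params = model.inspect()           # parameter table incl. standard errors
    return {
        "n": len(data),
        "cfi": float(stats["CFI"].iloc[0]),
        "rmsea": float(stats["RMSEA"].iloc[0]),
        "mean_se": pd.to_numeric(params["Std. Err"], errors="coerce").mean(),
    }

def shrinking_sample_study(data: pd.DataFrame, step: int = 25,
                           floor: int = 50, seed: int = 1) -> pd.DataFrame:
    """Refit repeatedly while dropping `step` random cases down to `floor`."""
    results, current = [], data.copy()
    while len(current) >= floor:
        results.append(fit_once(current))
        current = current.sample(n=len(current) - step, random_state=seed)
    return pd.DataFrame(results)
```

A Bayesian counterpart would refit the same specification with parameter priors (for instance via blavaan in R or a hand-built PyMC model) and track multivariate Rhat rather than frequentist fit indices, which is the contrast the study draws.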
