The history of vocabulary research has revealed a rich and complex construct, prompting calls for vocabulary research, assessment, and instruction to take into account the complex problem space of vocabulary. At the intersection of vocabulary theory and assessment modeling, this paper suggests a suite of modeling techniques that capture the complex structures present in vocabulary data in ways that can build an understanding of vocabulary development and its links to instruction. In particular, we highlight models that can help researchers and practitioners identify and understand construct-relevant and construct-irrelevant aspects of assessing vocabulary knowledge. Drawing on examples from recent research and from our own three-year project to develop a standardized measure of language and vocabulary, we present four types of confirmatory factor analysis (CFA) models: single-factor, correlated-traits, bi-factor, and tri-factor models. We highlight how each of these approaches offers particular insights into the complex problem space of assessing vocabulary in ways that can inform vocabulary assessment, theory, research, and instruction. Examples include identifying construct-relevant general or specific factors, such as skills or different aspects of word knowledge that could link to instruction, while preventing an overly narrow focus on construct-irrelevant factors such as task-specific or word-specific demands. Implications for theory, research, and practice are discussed.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.