Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models
Abstract
This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The ANSIG is founded on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package, eleven of which were evaluated in this study. The 16-item ICAR (International Cognitive Ability Resource) short-form ability test was used to evaluate construct validity. The Rasch Model and two Linear Logistic Test Models (LLTMs) were employed to estimate and predict the item parameters. Results indicate that a single factor determines performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with the highest correlation found for the ICAR Letter-Numeric-Series type items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.
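The LLTM decomposes each item's Rasch difficulty into a weighted sum of contributions from the cognitive operators involved in solving it. The sketch below illustrates this decomposition with a hypothetical Q-matrix and operator weights; none of the values are estimates from the study, and the study itself used R rather than Python.

```python
import numpy as np

# LLTM assumption: item difficulty beta_i = sum_k q_ik * eta_k,
# where q_ik indicates whether item i requires cognitive operator k
# and eta_k is that operator's difficulty weight.
# All values below are hypothetical illustrations.
Q = np.array([
    [1, 0, 0, 0, 0],  # item requiring operator 1 only
    [1, 1, 0, 0, 0],  # item requiring operators 1 and 2
    [0, 1, 1, 1, 0],  # item requiring operators 2, 3, and 4
    [1, 0, 1, 0, 1],  # item requiring operators 1, 3, and 5
])
eta = np.array([0.4, 0.8, 1.1, 0.6, 1.5])  # hypothetical operator weights

beta = Q @ eta  # predicted item difficulties
print(beta)  # -> [0.4 1.2 2.5 3. ]
```

Because difficulty is fully determined by the operators an item contains, an automatic generator can, in principle, target a desired difficulty by choosing which operators to combine.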
Share & Cite This Article
Loe, B.S.; Sun, L.; Simonfy, F.; Doebler, P. Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models. J. Intell. 2018, 6, 20.
Loe BS, Sun L, Simonfy F, Doebler P. Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models. Journal of Intelligence. 2018; 6(2):20.

Chicago/Turabian Style
Loe, Bao S.; Sun, Luning; Simonfy, Filip; Doebler, Philipp. 2018. "Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models." J. Intell. 6, no. 2: 20.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.