Abstract: This study addresses the relationship between item response time and item accuracy (i.e., the response time-accuracy correlation, RTAC) in figural matrices tests. The dual processing account of response time effects predicts negative RTACs in tasks that allow for relatively automatic processing and positive RTACs in tasks that require controlled processing. Contrary to these predictions, several studies found negative RTACs for reasoning tests. Nevertheless, it was demonstrated that the RTAC is moderated by task complexity (i.e., the interaction between person ability and item difficulty) and that under conditions of high complexity (i.e., low ability and high difficulty) the RTAC was even slightly positive. The goal of this study was to demonstrate that, depending on task complexity, the direction of the RTAC (positive vs. negative) can change substantially even within a single task paradigm (i.e., figural matrices). These predictions were tested using a figural matrices test that employs a constructed response format and has a broad range of item difficulties, in a sample with a broad range of ability. Confirming predictions, strongly negative RTACs were observed when task complexity was low (i.e., fast responses tended to be correct). With increasing task complexity, the RTAC flipped to be strongly positive (i.e., slow responses tended to be correct). This flip occurred earlier for people with lower ability and later for people with higher ability. The cognitive load of the items is suggested as an explanation for this phenomenon.
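The sign flip described above can be illustrated with a deliberately simplified simulation. Everything here is a toy assumption, not the study's actual model: we generate responses where, under low load, fast responses tend to be correct (automatic processing), and under high load, accuracy requires slow, controlled processing, then compute the point-biserial RTAC in each condition.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000  # hypothetical number of simulated responses per condition

def rtac(load):
    """Point-biserial correlation between RT and accuracy under a toy model."""
    rt = rng.exponential(1.0, n) + 0.3          # simulated response times (seconds)
    # toy assumption: with load < 0, slower responses are LESS likely correct
    # (fast automatic processing succeeds); with load > 0, slower responses
    # are MORE likely correct (controlled processing takes time)
    p_correct = 1.0 / (1.0 + np.exp(-(rt - 1.0) * load))
    acc = (rng.random(n) < p_correct).astype(float)
    return np.corrcoef(rt, acc)[0, 1]

low_load, high_load = rtac(-2.0), rtac(+2.0)
# low_load is negative (fast = correct), high_load is positive (slow = correct)
```

The sketch only shows that a single task paradigm can produce RTACs of either sign once the accuracy-by-time function changes direction; the load values and RT distribution are arbitrary.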
Abstract: Response times may constitute an important additional source of information about cognitive ability, as they enable distinguishing between different intraindividual response processes. In this paper, we present a method to disentangle interindividual variation from intraindividual variation in the responses and response times of 978 subjects on the 14 items of the Hungarian WISC-IV Block Design test. It is found that faster and slower responses differ in their measurement properties, suggesting that there are intraindividual differences in the response processes adopted by the subjects.
Abstract: Worst performance in cognitive processing tasks shows larger relationships to general intelligence than mean or best performance. This so-called Worst Performance Rule (WPR) is of major theoretical interest for the field of intelligence research, especially for research on mental speed. In previous research, the increases in correlations between task performance and general intelligence from best to worst performance were mostly described rather than tested statistically. We conceptualized the WPR as moderation, since the magnitude of the relation between general intelligence and performance in a cognitive processing task depends on the performance band (i.e., the percentile of performance). On the one hand, this approach allows testing the WPR for statistical significance; on the other hand, it may simplify the investigation of constructs that may influence the WPR. The application of two possible implementations of this approach is shown and compared to the results of a traditional worst performance analysis. The results mostly replicate the WPR. Beyond that, a comparison of results on the level of unstandardized relationships (e.g., covariances or unstandardized regression weights) to results on the level of standardized relationships (i.e., correlations) indicates that increases in the inter-individual standard deviation from best to worst performance may play a crucial role in the WPR. Altogether, conceptualizing the WPR as moderation provides a new and straightforward way to conduct worst performance analysis and may help to incorporate the WPR more prominently into the empirical practice of intelligence research.
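The moderation idea can be sketched with simulated data. This is a minimal illustration, not either of the paper's actual implementations: performance band is entered as a moderator of the g-RT relation, and the WPR then corresponds to a negative g x band interaction (the relation grows stronger toward worse bands). All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, bands = 200, 5      # hypothetical: 200 subjects, RTs summarized in 5 bands

g = rng.normal(0.0, 1.0, n)                   # z-scored general intelligence
rows = []
for b in range(bands):                        # band 0 = best, band 4 = worst
    slope = -0.20 - 0.10 * b                  # WPR built in: worse bands relate more to g
    rt = 1.0 + 0.10 * b + slope * g + rng.normal(0.0, 0.2, n)
    rows.append(np.column_stack([np.ones(n), g, np.full(n, float(b)), g * b, rt]))
data = np.vstack(rows)

X, y = data[:, :4], data[:, 4]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # [intercept, g, band, g x band]
interaction = beta[3]
# a negative interaction means the (negative) g-RT relation strengthens
# from best to worst performance band -- the WPR expressed as moderation
```

In practice one would test the interaction term for significance (e.g., with a multilevel model, since bands are nested within persons); the least-squares fit here only shows the structure of the moderation formulation.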
Abstract: Cognitive modeling of response time distributions has seen a huge rise in popularity in individual differences research. In particular, several studies have shown that individual differences in the drift rate parameter of the diffusion model, which reflects the speed of information uptake, are substantially related to individual differences in intelligence. However, if diffusion model parameters are to reflect trait-like properties of cognitive processes, they have to qualify as trait-like variables themselves, i.e., they have to be stable across time and consistent over different situations. To assess their trait characteristics, we conducted a latent state-trait analysis of diffusion model parameters estimated from three response time tasks that 114 participants completed at two laboratory sessions eight months apart. Drift rate, boundary separation, and non-decision time parameters showed great temporal stability over this eight-month period. However, the coefficients of consistency and reliability were only low to moderate, and highest for drift rate parameters. These results show that the variance of diffusion model parameters that is consistent across tasks can be regarded as reflecting temporally stable abilities. Moreover, they illustrate the need for using broader batteries of response time tasks in future studies on the relationship between diffusion model parameters and intelligence.
Abstract: Mean reaction times (RT) and the intra-subject variability of RT in simple RT tasks have been shown to predict higher-order cognitive abilities measured with psychometric intelligence tests. To further explore this relationship and to examine its generalizability to a sample of children and adolescents, we administered different choice RT tasks and Cattell’s Culture Fair Intelligence Test (CFT 20-R) to n = 362 participants aged eight to 18 years. The parameters derived from applying Ratcliff’s diffusion model and an ex-Gaussian model to age-residualized RT data were used to predict fluid intelligence in structural equation models. The drift rate parameter of the diffusion model, as well as σ of the ex-Gaussian model, showed substantial predictive validity regarding fluid intelligence. Our findings demonstrate that stability of performance, more than its mere speed, is relevant for fluid intelligence, and we challenge the universality of the worst performance rule observed in adult samples.
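The ex-Gaussian decomposition mentioned above models each RT as a Gaussian component (parameters μ, σ) plus an exponential tail (parameter τ). As a hedged sketch of how such parameters can be recovered, the toy code below simulates ex-Gaussian RTs with hypothetical parameter values and recovers them by the standard method of moments (τ from the skewness); the study itself used proper model fitting, not this shortcut.

```python
import numpy as np

rng = np.random.default_rng(1)
# simulate ex-Gaussian RTs: Normal(mu, sigma) component plus Exponential(tau) tail
mu, sigma, tau = 0.40, 0.05, 0.15             # hypothetical true values (seconds)
rt = rng.normal(mu, sigma, 10_000) + rng.exponential(tau, 10_000)

# method-of-moments recovery: the ex-Gaussian skewness equals
# 2*tau^3 / (sigma^2 + tau^2)^(3/2), so tau follows from the sample skewness
m, s = rt.mean(), rt.std()
skew = np.mean(((rt - m) / s) ** 3)
tau_hat = s * (skew / 2.0) ** (1.0 / 3.0)
sigma_hat = np.sqrt(max(s**2 - tau_hat**2, 0.0))  # variance = sigma^2 + tau^2
mu_hat = m - tau_hat                               # mean = mu + tau
```

Maximum-likelihood estimation is preferable in practice (moment estimators of skewness are noisy in small samples), but the moment relations make the roles of μ, σ, and τ transparent.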
Abstract: Blacks generally score significantly lower on intelligence tests than Whites. Spearman’s hypothesis predicts large Black/White differences on subtests of high cognitive complexity and smaller Black/White differences on subtests of lower cognitive complexity. Spearman’s hypothesis has consistently been confirmed in many studies on children and adolescents, but far fewer studies exist on adults. We carried out a meta-analysis in which we collected the existing tests of Spearman’s hypothesis on adults and gathered additional datasets on Black and White adults that could be used to test the hypothesis. Our meta-analytical search resulted in a total of 10 studies with a total of 15 data points, with participants numbering 251,085 Whites and 22,326 Blacks in total. For each of these data points, we computed the correlation between subtests’ loadings on the general factor (g), which manifests in individual differences on all mental tests regardless of content, and the standardized group differences. The analysis of all 15 data points yields a mean vector correlation of 0.57. Spearman’s hypothesis is thus confirmed for Black and White adults. The differences between Black and White adults are strongly in line with those previously found for children and adolescents; however, because of lack of access to the original data, we could not test for measurement invariance.
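The "vector correlation" used in this literature is simply the Pearson correlation between two vectors of subtest-level statistics: the subtests' g-loadings and their standardized group differences. The sketch below uses entirely hypothetical numbers for six fictitious subtests, purely to show the computation; it does not reproduce any data point from the meta-analysis.

```python
import numpy as np

# Hypothetical g-loadings and standardized group differences (d) for six
# fictitious subtests -- illustrative values only, not data from any study
g_loadings = np.array([0.45, 0.55, 0.62, 0.70, 0.78, 0.85])
d_values   = np.array([0.30, 0.42, 0.55, 0.61, 0.72, 0.80])

r = np.corrcoef(g_loadings, d_values)[0, 1]   # the vector correlation
# a positive r is the pattern Spearman's hypothesis predicts: subtests with
# higher g-loadings show larger group differences
```

Note that the method of correlated vectors operates on subtest-level summary statistics, which is why, as the abstract notes, measurement invariance cannot be tested without access to the item- or person-level data.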