Article

Fluid Intelligence Emerges from Representing Relations

Cognitive Science Department, Institute of Philosophy, Jagiellonian University in Krakow, PL-31007 Kraków, Poland
J. Intell. 2022, 10(3), 51; https://doi.org/10.3390/jintelligence10030051
Submission received: 26 April 2022 / Revised: 21 June 2022 / Accepted: 27 July 2022 / Published: 2 August 2022
(This article belongs to the Special Issue Cognitive Flexibility: Concepts, Issues and Assessment)

Abstract

Based on recent findings in cognitive neuroscience and psychology as well as computational models of working memory and reasoning, I argue that fluid intelligence (fluid reasoning) can amount to representing in the mind the key relation(s) for the task at hand. Effective representation of relations allows for enormous flexibility of thinking but depends on the validity and robustness of the dynamic patterns of argument–object (role–filler) bindings, which encode relations in the brain. Such a reconceptualization of the fluid intelligence construct allows for the simplification and purification of its models, tests, and potential brain mechanisms.

1. Introduction

Intelligence (general cognitive ability) constitutes one of the central constructs in psychology, originating from the late nineteenth century (e.g., Galton 1883; Gilbert 1894). Research on this construct serves two main purposes. On the one hand, intelligence research attempts to explain the enormous variability in intellectual powers found in any human population (and, recently, even in dogs; Arden and Adams 2016). On the other hand, this research needs to explain the considerable stability of these powers in individuals, meaning that their scores on one kind of intellectual test strongly correspond to their scores on other kinds of tests (a phenomenon called positive manifold). The structure of human intelligence (various general and specific abilities and their mutual relations) as well as its predictive power have been a topic of lively debate (e.g., Carroll 1993; Horn and Cattell 1966; Jäger 1982; Kovacs and Conway 2016; Spearman 1927; Thomson 1919; Thurstone 1938; Van Der Maas et al. 2006; Vernon 1964), suggesting that intelligence constitutes a complex, entangled, multilevel construct (McGrew 2009) reflecting brain structure and function (Deary et al. 2010; Haier 2016).
However, as a comprehensive measurement of intelligence with diverse batteries of tests (knowledge use, verbal and memory skills, visual and auditory processing, mental speed, etc.) is not feasible, research on cognitive abilities frequently focuses on fluid intelligence (Cattell 1963), also called fluid reasoning (Carroll 1993) or reasoning ability (Kyllonen and Christal 1990). According to the Cattell–Horn–Carroll model of abilities (McGrew 2009), fluid intelligence is best reflected by novel reasoning problems solved in a deliberate and controlled way, which cannot be automatized. In this model, fluid intelligence comprises at least three narrow abilities, namely deductive (also called general sequential), inductive, and quantitative reasoning. Whether these three abilities rely on separable processes, or stem from a single mechanism, such as mental model construction and verification (Johnson-Laird 2006) or Bayesian inference (Oaksford and Chater 2007), remains an open question; however, the fact that deductive and inductive subfactors typically correlate almost perfectly (Wilhelm 2005) suggests the latter. Other authors also differentiated content types of fluid intelligence, specifically its verbal, numerical, and figural facets (Jäger 1982). Evidence is stronger for content than for process facets (Lakin and Gambrell 2012; Schulze et al. 2005), suggesting that a specific test content also engages a corresponding ability beyond fluid reasoning (e.g., figural and spatial tasks may also require visual processing ability). In practice, content is frequently confounded with task type, as most deductive tests are verbal, most inductive tests are figural–spatial, and quantitative tests by definition need to rely on numerical material (Wilhelm 2005). Figural inductive tests seem to be the most widely administered.
The rationale for operationalizing cognitive ability as fluid intelligence is at least threefold. First, compared to all other abilities, fluid intelligence loads the general intelligence factor (g) most strongly, with loadings reaching unity (Arendasy et al. 2008; Gustafsson 1984; Kan et al. 2011), at least in homogenous samples (Kvist and Gustafsson 2008). Second, fluid intelligence tests display especially high validity and reliability, which have been refined for a century (Cattell 1949; Raven 1938). Third, considerable efforts have been devoted to understanding the processes captured by fluid reasoning tests (e.g., Carpenter et al. 1990). Therefore, even though in principle studying only fluid intelligence could narrow our understanding of broader concepts of intellect, in this paper I will focus on fluid intelligence as a highly feasible and valid way to study general human cognitive ability.
The goal of this work is to provide a selective discussion of existing knowledge on the cognitive mechanisms potentially responsible for individual differences in fluid intelligence. Based on available evidence and plausible models, I propose that effective solving of a fluid intelligence task in the mind can amount to representing the key task relation(s) in a valid and robust way, by linking respective elements with the roles that these elements play in the relation(s), using flexible patterns of bindings.

2. Psychometric Studies on Fluid Intelligence

Just after the advent of cognitive psychology—a discipline devoted to understanding the architecture and mechanisms of human cognition using precise experimentation (Neisser 1967)—researchers began to search for elementary cognitive processes (ECPs)—the ones captured by tasks lasting from hundreds of milliseconds to several seconds—that could predict intelligence (Hunt et al. 1975; Jensen and Munro 1979). Initial efforts comprised applying a selected cognitive task (e.g., a short-term memory task, a visual search task, a forced choice task) and correlating its scores with the scores on a selected intelligence test (e.g., Raven’s Progressive Matrices, Cattell CFT-3, Wechsler’s Adult Intelligence Scale). Typically observed correlations were relatively low (10% of variance shared), very rarely approaching 50% of variance (e.g., Neubauer 1990). Around the 1990s, progress in psychometrics indicated that the use of single tasks strongly underestimated the relationships between ECPs and intelligence. Task batteries tapping a construct by means of various tasks, combined with latent variable modeling, allowed for higher reliability by filtering out unwanted sources of variance (e.g., method-specific ones). This research identified several ECPs related to intelligence.
Processing speed, measured by response times on forced choice tasks, by performance on clerical tasks, and by inspection time, has been considered a promising candidate from the 1970s onwards (Jensen and Munro 1979; Salthouse 1993; see Jensen 2006). However, subsequent meta-analyses suggested that processing speed indices correlate only moderately with intelligence (Doebler and Scheffler 2016; Schubert 2019; Sheppard and Vernon 2008), and studies showed that this moderate contribution is fully mediated by other factors (Conway et al. 2002; Jastrzębski et al. 2021; Troche and Rammsayer 2009; for a defense of the speed account, see Schubert and Frischkorn 2020).
Since the 1980s (e.g., Stankov 1983), various forms of attention have been associated with intelligence, but the reported correlations were also low (Schweizer 2010; Unsworth et al. 2010). Even though intelligence was not well predicted by attention functioning per se, in the 1990s it was linked with control over attention (attention control), understood as the mechanism responsible for focusing on task-relevant information while blocking distraction and interference (Engle et al. 1999), or, more generally, as goal-related processing (Diamond 2013). Although some studies reported significant correlations between attention control and fluid intelligence (Unsworth et al. 2014), this line of research noted problems with the reliability and validity of presumed measures of attention control (Draheim et al. 2021; Hedge et al. 2018). Moreover, correlations with intelligence pertained to a single test of attention control (the anti-saccade task) and barely to its other established measures (Chuderski et al. 2012; Frischkorn and von Bastian 2021; Rey-Mermet et al. 2019).
Also in the 1990s, interest in sensorimotor discrimination, initially considered at the dawn of intelligence research (Galton 1883), was revived (Deary 1994). However, despite early positive evidence that the efficiency of discriminating stimuli in the visual, auditory, and even tactile modalities can predict intelligence (Deary et al. 2004; Li et al. 1998; Meyer et al. 2010), its contribution ultimately proved low and fully mediated by working memory (Jastrzębski et al. 2021; Troche et al. 2014).
Working memory capacity (WMC), considered in intelligence research around that time, is reflected in the number of briefly maintained and then recalled/recognized items; typically, these are items difficult to articulate or presented under additional load and/or encoding requirements. WMC appears to be the strongest predictor of intelligence (e.g., Colom et al. 2008; Engle et al. 1999), explaining from two- to three-quarters of its variance (Oberauer et al. 2005). Other types of memory, such as long-term (Unsworth 2019) and associative memory (Williams and Pearlberg 2006), yielded much weaker contributions, so WMC seems to hold a special status for intelligence.
One could summarize the existing correlational research on ECPs and fluid intelligence as bringing us closer to understanding the cognitive mechanisms underlying the latter construct. Specifically, some potential mechanisms (bare attention and stimulus discrimination) can be discarded, others (processing speed, attention control, and possibly long-term memory) should still be considered but require additional research, whereas active maintenance of and access to task-related information, as reflected by WMC, looks like a fundamental mechanism for fluid intelligence. While I agree that correlational studies have provided a multitude of results, I see at least three problems with their unequivocal interpretation, and thus with their applicability to advancing fluid intelligence theory.

3. Theoretical Limits of Psychometric Studies

First, the strength of relationships between fluid intelligence and its relatively strong predictors, especially working memory, might have been overestimated. For instance, measurement of distinct cognitive abilities may confound their true relations with some contextual factors (e.g., motivation, boredom, testing settings, etc.) that can boost shared variance. In line with this, when Chuderski and Jastrzębski (2018) controlled for motivation, anxiety, openness to experience, and age in a relatively large psychometric study, an initially almost-isomorphic relationship between the broad working memory factor and the fluid intelligence factor, equaling r = .94, dropped to r = .74; that is, as much as 38% of the initially shared variance could actually be explained by other factors.
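The arithmetic behind this estimate can be made explicit using only the two correlations reported above, with shared variance taken as the squared correlation:

$$ .94^2 \approx .88, \qquad .74^2 \approx .55, \qquad \frac{.88 - .55}{.88} \approx .38 $$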
Moreover, although most fluid intelligence tests were initially designed as power tests, typical testing conditions (large samples, long procedures) entice researchers to cut the original administration times. Doing so affects the tests’ reliability only a little but may greatly alter their validity (Lu and Sireci 2007), for example, increasing the role of attention and immediate memory, while decreasing the impact of longer-lasting processes such as counterexample construction, solution verification, and schema learning—all identified as crucial stages of deductive and inductive reasoning (Holyoak 2012; Johnson-Laird 2006). Moreover, under high time pressure, the late test items, which require the most advanced reasoning, are rarely attempted by participants (Estrada et al. 2017). Indeed, when Chuderski (2013) manipulated the administration time of two intelligence tests, their shared variance with a working memory factor dropped from 100% for a strictly limited time to only 36% for a virtually unlimited time (see also Chuderski 2015a; Ren et al. 2018). Under strict time pressure, the variance in the most difficult test items was virtually null. If the truly shared variance between fluid intelligence and working memory in fact falls below 50% instead of approaching unity, then the connection between these two constructs, even though still substantial, is no longer close to perfect and leaves room for alternative explanations of the underpinnings of fluid intelligence.
Second, no cognitive task developed so far can capture a unique “elementary cognitive process”. In claiming that a given task captures a given elementary process, researchers incorrectly transfer the task’s intended requirements, as typically defined by the task instruction, onto their interpretation (naive model) of the information flow during task solution, based on coarse-grained psychological concepts. At the same time, theoretical results from computational modeling in psychology and neuroscience show that this information flow needs to be described in terms of much finer-grained entities and their transformations. The picture is further complicated by the fact that the models proposed so far largely differ in how they describe such flow at the fine-grained level.
For example, the well-known Stroop task is typically defined in psychometrics as an attention-control task, or a response-inhibition task, which requires blocking word reading while focusing attention on color naming (as asked by the task instructions). In computational cognitive neuroscience, a number of Stroop-like task models have been developed, which assume entities that can hardly match the above high-level description, including such terms as “energy”, “utility”, “activation”, “dimensional uncertainty”, and “inhibitory conductance”, to name only a few. Moreover, these models assume differing mechanisms responsible for effective color naming, including activation spread among concepts, lemmas, and word forms (Roelofs 2003), action utility learning (Lovett 2005), reinforcement learning (Holroyd et al. 2005), conflict adaptation (Botvinick et al. 2001), conflict-based Hebbian learning (Verguts and Notebaert 2008), contingency learning (Schmidt 2013), and outcome-action prediction (Alexander and Brown 2011), as well as some combination of these (Chuderski and Smoleń 2016). No model explicitly assumes the processes of word-reading inhibition and attentional control over color naming. Moreover, no model assumes a single central controlling process of attention, but rather a complex interaction of diverse information flows (with adaptation and learning being important components). Finally, typical models include entities occupying various levels of abstraction, such as low-level associations (e.g., connection weights) linking higher-level objects (e.g., lemmas, production rules). As most of these models predict data from Stroop-like task experiments quite validly, we should accept that at least to some extent they capture the neurocognitive machinery generating the resulting behavior. Yet, they definitely do not match the naive models developed in psychometrics. So, it is fair to say that we do not yet know what exactly our minds do when they perform the Stroop task and multiple other ECP tasks (e.g., the anti-saccade and vigilance tasks); therefore, any unitary psychometric interpretation of them would be dubious.
An analogous situation pertains to working-memory tasks, with the one exception that they are even more complex than attention control tasks. Actually, from the perspective of cognitive modeling, a complex-span task, say (e.g., the operation span, which requires verifying a series of arithmetic equations and later recalling their solutions in serial order), is comparably complex to some reasoning tasks presumably tapping fluid intelligence (e.g., number series, the Latin square task, etc.). Moreover, there is an ongoing debate about what limits WMC: is it the number of available slots to maintain separate memory objects (Vogel et al. 2001); the number of the objects’ features that can be concurrently maintained (Fougnie and Alvarez 2011); the number of relations among objects and features (Clevenger and Hummel 2014); the size of the entire structure of such relations (Brady et al. 2011); some continuous resource that can feed memory representations (Bays 2015); interference among overlapping memory representations (Oberauer and Lin 2017); or an inability to desynchronize too many dynamic oscillatory patterns (Horn and Usher 1991; Raffone and Wolters 2001), to name only a few accounts. One consequence of the low consensus on the actual mechanisms underlying WMC (see Cowan 2022; Oberauer et al. 2018) is the fact that quite distinct tasks have been applied to measure one and the same WMC construct (see Wilhelm et al. 2013), while one and the same working-memory task (e.g., the change-detection task; Vogel et al. 2001) has been used to reflect many constructs beyond WMC, such as iconic memory (with rapid stimuli presentation; Sligte et al. 2010) and attention control (with a need to ignore some stimuli; Draheim et al. 2021). Due to our poor understanding of WMC, even the strong WMC–intelligence correlations help us little in advancing a theory of fluid intelligence.
Third, it seems that the fact that correlational studies are correlational has been neglected when researchers draw conclusions from the correlations they observe. Even though every statistics handbook notes that a correlation between X and Y can, in principle, be caused either by X or by Y, by their reciprocal interaction, or by another (typically unknown) variable Z (or more such variables), the superficial simplicity of ECPs relative to intelligence tests (shorter trials and simpler stimuli), as well as an implicit but ill-conceived reductionist stance, has led many researchers to draw the causal conclusion that intelligence varies because of ECPs. However, given the findings in cognitive modeling and cognitive neuroscience, it is likely that both X and Y are caused (or, better to say, modulated; Buzsaki 2006) by a large number of Zs—which can be understood as parameters of the cognitive system (Oberauer 2016), in part emerging from the underlying structural and functional brain architecture (Barbey 2018; Haier 2016). It is likely that intelligence tests can capture a larger number of such parameters, or capture them more reliably, than can tasks intended to capture ECPs. So, even though to most researchers it would sound heretical, instead of ECPs underpinning intelligence, it is equally plausible that it is intelligence, understood as a set of neurocognitive parameters validly captured by fluid intelligence tests, which translates onto the scores on ECPs (as suggested by Spearman 1927). At the least, current psychometric studies can say little about the causality linking ECPs and intelligence.
To summarize this section, since neither experimental manipulation of individual intelligence levels nor fine-grained measurement and interpretation of the neurophysiology underlying these levels are possible at the current stage of scientific development, most research on fluid intelligence has had to resort to psychometric analyses of the correlational patterns between intelligence tests and various cognitive tasks. This research tool is unavoidable and has provided a highly informative (even though far from conclusive) “map” of relationships between cognitive constructs. However, this very tool seems strongly limited in the depth of explanation of fluid intelligence it can yield.
In the next sections, I briefly review two alternative, more process-oriented approaches to developing a theory of fluid intelligence, which may offer insights beyond those offered by the psychometric approach. The first approach examines experimentally the properties of established fluid intelligence tests, trying to discover their crucial requirements—what minimal cognitive task is sufficient to capture the same variance in fluid intelligence that is captured by the tests used to date. It seems that little is required—just to validly represent a relatively simple relation. The second approach analyzes neurocomputational models of deductive and inductive reasoning in order to identify the sources of intrinsic limitations in representing relations.

4. What Is Needed for a Task to Become a Fluid Intelligence Test?

Typical fluid intelligence tests involve identifying abstract rules governing relatively complex figural stimuli patterns and selecting the response option that best matches these rules. Probably the most widely applied test is Raven’s Advanced Progressive Matrices (RAPM; Raven et al. 1983), which presents 3 × 3 matrices of geometric patterns, with the bottom-right pattern missing. RAPM requires inducing this missing pattern from the structure of the row- and column-wise variation among the remaining patterns, including permutation, increase in number or value, and logical relations such as AND, OR, and XOR. Depending on the type and number of rules (Carpenter et al. 1990) and the number of figural elements (perceptual complexity; Primi 2002), accuracy on consecutive RAPM items decreases from 90% to 10%. However, which RAPM features make it such a good fluid intelligence test (i.e., one correlating so strongly with other such tests)? When analyzed in more detail, RAPM involves at least: abstracting the key geometric transformations from the perceptual input, which first must be identified (not easy in the case of overlaid complex figures); discovering the rules governing these transformations; constructing the missing element and/or inspecting the response options in search of a cue; actively maintaining the rules; and, finally, comparing the most plausible options in order to select the correct one. Are all of these processes (and perhaps others) necessary? Although only a few reliable studies address this question, the answer is no: most of the above requirements are completely dispensable when measuring fluid intelligence.
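To give a concrete sense of what a single matrix rule involves, the following minimal Python sketch (my own illustration, not the test’s actual scoring procedure; the function and element names are hypothetical) checks whether the third cell of a row follows an XOR rule over sets of figural elements:

```python
# Minimal sketch of one RAPM-style logical rule (a hypothetical simplification):
# under an XOR rule, the third cell of a row contains exactly those elements
# that appear in one, but not both, of the first two cells.

def satisfies_xor_rule(cell1: set, cell2: set, cell3: set) -> bool:
    """Return True if cell3 equals the symmetric difference of cell1 and cell2."""
    return cell3 == (cell1 ^ cell2)

# Example row: {square, dot} and {dot, cross} imply a third cell {square, cross}.
row = ({"square", "dot"}, {"dot", "cross"}, {"square", "cross"})
print(satisfies_xor_rule(*row))  # True
```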
Crucially, discovery of rules seems irrelevant for capturing fluid intelligence. Notably, in many other established fluid intelligence tests, including geometric analogies, paper folding, and necessary arithmetic operations (all very close to RAPM in the multidimensional scaling model of Snow et al. (1984)), the rules are trivial or revealed to participants. In line with this, in a convincing study, Carlstedt et al. (2000) administered three fluid intelligence tests in either the blocked order (one test applied after another) or with all the test items mixed. They assumed that in the blocked order the participants quickly discovered and learned all the rules required by a given test due to the homogeneous sequence of items, whereas in the mixed order discovery of the test rules was much more demanding, as the rules varied enormously. However, the blocked order, in which the participants already knew the rules for the middle and late items of each test, yielded higher loadings on the general intelligence factor than did the mixed item order.
Studies of RAPM also failed to find that rule discovery matters. Loesche et al. (2015) applied RAPM either in a typical administration requiring rule discovery or after intensive training on the rules, finding that in two out of three experiments the correlation with WMC (a proxy for cognitive ability in that study) increased after rule training (in the third experiment, the correlation was comparable). Thus, the need to discover rules might distort fluid intelligence measurement instead of improving it (see also Levacher et al. 2022). Although initial work (Wiley et al. 2011) reported stronger item-wise RAPM correlations with the complex-span task when a specific combination of rules was used for the first time in the test than when it was repeated, suggesting a role for rule discovery, these correlations were overall very weak. Two later studies that observed stronger correlations (due to using the WMC factor instead of a single task) reported comparable correlations for new vs. old rule combinations (Smoleń and Chuderski 2015; Little et al. 2014). Finally, using a design that prevented potential confounds, Harrison et al. (2015) found that the correlation with WMC is actually higher for the old-combination items than for the new ones. Therefore, rule discovery seems to contribute little to fluid intelligence measurement.
These findings were supported by statistical models that separated the item-position effect, assumed to reflect the learning of test rules from item to item, from the “pure” reasoning ability (i.e., with the item-position effect eliminated). Both for RAPM (Lozano 2015) and for another fluid intelligence test (Schweizer et al. 2018), the item-position effect was not significantly related to other markers of cognitive ability. Moreover, some studies (Hayes et al. 2015; Lozano and Revuelta 2020) suggested that the item-position effect in RAPM is not related to rule learning at all but mainly reflects more basic practice effects of optimizing perceptual and spatial strategies in the test.
Loesche et al. (2015) additionally observed that after rule training the participants more often used the constructive strategy while coping with the RAPM items, as opposed to the response-elimination strategy (see Bethell-Fox et al. 1984; Vigneau et al. 2006). In the former strategy, the participants analyze the matrix, trying to fully reconstruct the missing pattern, and then look for its potential match among the response options. In the latter strategy, the participants develop only a partial pattern using the most salient cues and then use it to eliminate the most obviously incorrect options (and select from the remaining ones), toggling back and forth between the matrix and the response options. Therefore, large perceptual complexity and substantial variation in response options (the case of RAPM) can actually help to bypass the presumed cognitive requirements of the test, by facilitating the use of perceptual cues and simpler heuristics that allow choosing correct solutions above chance. Consequently, test variants in which correct and incorrect response options were hardly distinguishable (Arendasy and Sommer 2013; Chuderski 2015b; Jarosz and Wiley 2012), or in which the responses had to be constructed from scratch (Becker et al. 2016), showed increased validity as fluid intelligence measures. In consequence, perceptual complexity and the number of response options contribute little—tests lean in perceptual content and lacking response options (so that responses have to be constructed; e.g., Jastrzębski et al. 2022; Thissen et al. 2018) seem to promote a uniform solution strategy and capture fluid intelligence more precisely (Levacher et al. 2022).
Besides the perceptual complexity and response-option diversity of a fluid reasoning test, it is interesting what level of problem scope and abstraction is actually required to tap fluid intelligence. Chuderski (2019) examined these questions in a series of six experiments using the transitive-reasoning task (Goodwin and Johnson-Laird 2008). By systematically simplifying the task, he tested how trivial and concrete its variants could become while still being apt measures of fluid intelligence (see Shokri-Kojori and Krawczyk 2018). The original task variant presented an abstract problem: “Three pairs of objects bound by relations ‘<’ or ‘>’ unequivocally define the monotonic linear order of four symbols. Organize the symbols in your mind into the valid order”. For example, the pairs could be “A > B”, “C < B”, and “C > D”, and the response to be selected could be either “D < A” or “D > A”. To respond correctly (“D < A”), all the pairs had to be integrated. This variant yielded low accuracy and correlated strongly with the fluid intelligence factor (r = .55). However, this correlation remained strong (r = .67) in a variant in which the participants were not given an abstract instruction to organize the symbols into the linear order, but only to decide which relation with the reversed symbols was the same as the relation in one of the presented pairs (e.g., “B > A” or “B < A”?). The correlation held strong (r = .58) when no concept of relation was mentioned, and the participants were just asked to identify exactly the same pair as one presented (e.g., “A > B” or “A < B”?), and dropped only a little (to r = .46) when the task was simply to match a three-symbol string including a middle slash (e.g., does “A/B” match “A/B” or “A\B”?). The correlation only disappeared when the task was to match unbound pairs of symbols (is “AB” the same as “AB” or “BA”?). It was concluded that full-blown abstract reasoning is not needed to capture fluid intelligence, and that as little as a single trivial binding of simple information in the mind is critical.
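As an illustration of what integrating the three pairs amounts to in the original, abstract variant, here is a minimal Python sketch (my own illustration, not the procedure or materials used in the cited study; the function name is hypothetical):

```python
from itertools import permutations

# Minimal sketch of the abstract transitive-reasoning variant: three pairs such as
# "A > B", "C < B", "C > D" jointly determine one linear order of the four symbols.

def find_order(pairs):
    """Return the unique ordering (greatest to smallest) consistent with all pairs."""
    symbols = {s for left, _, right in pairs for s in (left, right)}
    for order in permutations(symbols):
        rank = {s: i for i, s in enumerate(order)}  # lower index = greater value
        if all((rank[l] < rank[r]) == (rel == ">") for l, rel, r in pairs):
            return order
    return None

pairs = [("A", ">", "B"), ("C", "<", "B"), ("C", ">", "D")]
order = find_order(pairs)
print(order)                                  # ('A', 'B', 'C', 'D')
print(order.index("D") > order.index("A"))    # True, so "D < A" is the correct response
```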
These findings were compatible with the results on the so-called relation-monitoring task developed by Oberauer (1993), which required responding when the stimuli on the screen satisfied a simple predefined relation. In one task variant, people observed a constantly changing matrix of symbol strings and decided whether the three strings in one row, column, or diagonal line ended with the same symbol. The task, thus, required identifying relations among the symbols, while imposing relatively low storage requirements (all information was constantly available on-screen) and involving no form of reasoning (information did not need to be transformed in any way). Despite the task’s simplicity, several of its variants have been shown to capture fluid reasoning comparably to the hallmark reasoning and the working-memory tests (Bateman et al. 2019; Chuderski 2014; Jastrzębski et al. 2020; Oberauer et al. 2008).
Finally, Jastrzębski et al. (2020) found that the factor loaded by the above-mentioned task of comparing the “>” and “<” relations, by the relation-monitoring task in which three symbols in a row or column had to be mutually different, as well as by a novel simple task that required mapping two nodes between two structurally isomorphic but perceptually different graphs (Jastrzębski et al. 2022), was statistically indistinguishable from the factor loaded by typical fluid intelligence tests such as RAPM, Cattell’s CFT-3, and figural analogies. The two factors shared over 90% of variance, suggesting that the rank order of fluid intelligence can be reproduced with tasks devoid of perceptual complexity, rule discovery, abstraction, and multiple response alternatives. Actually, these tasks involved neither complex rules nor multiple-rule integration, required no inference steps, and captured faster cognitive processes (a response delivered in several seconds), as compared to typical matrix and analogical-reasoning tests (up to a minute required for a response). In essence, each of these tasks required processing a single predefined relation that bound simple elements, such as symbols with their positions, letters with the “<” or “>” relation sign, or nodes and arrows in a graph. This is an important clue regarding what fluid reasoning can be about.

5. Computational Models Which Process Structures and Relations

Another source of insight on what is crucial for fluid intelligence comes from the computational models of structured information processing developed in cognitive neuroscience and cognitive psychology. In cognitive neuroscience, the rhythmic activation of neuronal groups encoding task elements was proposed as a mechanism underlying the brain’s processing of ordered and structured information. Probably the most widely cited electrophysiological study of the rat brain (O’Keefe and Recce 1993) showed that, as a rat approached food, it used the successive firing of distinct hippocampal neurons in the gamma band, synchronized with the rat’s theta rhythm, to encode the consecutive steps on its route. Such phase precession has recently also been reported in people (Qasim et al. 2021). Siegel et al. (2009) showed that a monkey was able to remember two pictures in the correct order only if a sequential spiking pattern was present in its frontal cortex and the order of the spikes matched the order of presentation of the corresponding pictures (for recent analogous evidence regarding humans, see Bahramisharif et al. 2018).
Inspired by O’Keefe and Recce’s (1993) findings, Lisman and Idiart (1995) developed a computational model in which lists of items in human short-term memory were encoded analogously to how rats encode consecutive locations—by a sequence of gamma cycles desynchronized by a global inhibitory signal, which forced each item representation to peak in a distinct phase of the theta cycle (see also models by Chuderski et al. 2013; Horn and Usher 1991; Koene and Hasselmo 2007; Raffone and Wolters 2001). The model showed that only a few elements can be held in order at one time, due to intrinsic limitations of neural oscillatory synchronization and desynchronization.
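The capacity implication of this class of models can be illustrated with a toy Python sketch (a deliberate simplification of the idea, not a reimplementation of the published model; the frequencies and function names are my own assumptions): if each item occupies one gamma subcycle nested within a theta cycle, the number of items that can be held in order is bounded by the ratio of the two periods.

```python
# Toy illustration of the capacity limit implied by theta-gamma nesting
# (a simplification of the Lisman & Idiart idea, not their actual model):
# each item fires in its own gamma subcycle of a theta cycle, so the number of
# items that can be kept in order is bounded by the theta/gamma period ratio.

THETA_HZ = 6.0    # assumed theta frequency (Hz)
GAMMA_HZ = 40.0   # assumed gamma frequency (Hz)

def max_items_in_order(theta_hz: float = THETA_HZ, gamma_hz: float = GAMMA_HZ) -> int:
    """Number of non-overlapping gamma subcycles that fit into one theta cycle."""
    return int(gamma_hz / theta_hz)

def assign_phases(items):
    """Assign each item the theta phase (in degrees) at which its gamma burst peaks."""
    slots = max_items_in_order()
    if len(items) > slots:
        raise ValueError(f"only {slots} items can be held in order at these frequencies")
    width = 360.0 / slots
    return {item: i * width for i, item in enumerate(items)}

print(max_items_in_order())                  # 6 with the assumed frequencies
print(assign_phases(["A", "B", "C", "D"]))   # successive, non-overlapping phases
```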
Recent EEG (see Sauseng et al. 2019) and transcranial stimulation research (see Hanslmayr et al. 2019) supported this category of short-term-memory models. Analogous evidence for phase synchronization was reported for directing visual attention to consecutive objects (see Jensen et al. 2021). Even for response latency and hit rate, research showed that rhythmic asynchronous variation (so-called behavioral oscillations) can be aligned to items (Pomper and Ansorge 2021). Overall, the studies suggest that the brain encodes structures by dynamically organizing activation patterns in time, including the coupling of subsequent elements to phase (Cohen 2011).
In cognitive modeling, multiple models of problem-solving and reasoning assumed that representing and transforming relations, as well as mapping their structures across situations, is the core process (called structure mapping; Gentner 1983) leading to valid solutions and conclusions. These models described two tasks most strongly involving fluid reasoning (explicitly called relational reasoning; Holyoak 2012): analogical reasoning (Forbus et al. 1994; Halford et al. 2010; Keane et al. 1994) and inductive reasoning in matrix problems (e.g., the RAPM test; Carpenter et al. 1990; Lovett and Forbus 2017). Crucially, a group of reasoning models represented the key relational structures using rhythmic patterns of activations (see Shastri and Ajjanagadde 1993), providing convergence with the above-mentioned cognitive neuroscience studies on short-term memory and attention.
For example, the LISA model of analogy-making (Hummel and Holyoak 1997, 2003) mapped and transferred the relations and their arguments between a familiar and a novel situation by discovering the structural and semantic correspondences between the two situations. LISA’s most important feature was that relations and their arguments had to be represented in the model’s working memory, which was limited to several role–object pairs. More complex relations had to be divided into smaller fragments (mapping was incremental). Each role of a relation was a distinct oscillation, and an object was bound in phase to the role it played. By contrast, the relation’s predicate oscillated for the total time during which all its pairs oscillated, binding them into the complete relation (see Figure 1). The relation cycle was associated with the theta rhythm, while the cycles for particular pairs reflected gamma oscillations (Knowlton et al. 2012). With a more capacious working memory, LISA was able to process more complex analogies. The model was also adapted to explain the development of relational reasoning in children (Doumas et al. 2008).
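A schematic Python sketch of the timing structure just described (my own simplification for illustration, not the published model’s code; the example relation, slot duration, and unit names are hypothetical): a role and its filler fire in the same time slot (synchrony encodes the binding), different role–filler pairs occupy different slots, and the predicate unit stays active across all of its pairs’ slots.

```python
# Schematic sketch of the LISA-style timing structure described above (an illustration):
# a role and its filler fire in the same time slot (synchrony = binding),
# different role-filler pairs occupy different slots (asynchrony keeps them separate),
# and the predicate unit is active across all slots of its pairs.

def binding_schedule(predicate, role_filler_pairs, slot_ms=25):
    """Return (unit, start_ms, end_ms) activations for one relation."""
    schedule = []
    for i, (role, filler) in enumerate(role_filler_pairs):
        start, end = i * slot_ms, (i + 1) * slot_ms
        schedule.append((role, start, end))    # role active in slot i
        schedule.append((filler, start, end))  # its filler fires in synchrony with it
    # the predicate oscillates for the total time spanned by all its pairs
    schedule.append((predicate, 0, len(role_filler_pairs) * slot_ms))
    return schedule

for unit, start, end in binding_schedule("Gave", [("giver", "Tom"),
                                                  ("recipient", "Ann"),
                                                  ("gift", "cat")]):
    print(f"{unit:10s} active {start:3d}-{end:3d} ms")
```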
As LISA has never been used for explicit simulations of individual differences in reasoning performance, Chuderski and Andrelczyk (2015) developed an oscillatory model of a figural analogies test in which variation in the parameters governing the oscillations of bindings between consecutive figures and the respective geometric transformations allowed for reconstructing the distribution of analogical reasoning scores in a human sample, including the types of errors made by high- vs. low-performing participants. Rasmussen and Eliasmith (2014) developed a model, based on a spiking-neuron architecture, which used dynamic patterns of activity to model reasoning on RAPM. This model simulated performance differences between younger and older participants. All these models constitute a proof of concept that representing relations with flexible patterns of bindings can explain both the processing in fluid (i.e., relational) reasoning tasks and the individual differences therein.

6. Fluid Intelligence and Relational Representations

On the basis of the findings described above, I propose that fluid reasoning can amount to representing in the mind, in a valid and robust way, the key relation(s) for the task at hand. Such relations could be encoded in the brain by an asynchronous pattern of a required number of correct role–filler bindings. I start elaborating this proposal by specifying more precisely its three main elements: relation, validity, and robustness.
A relation is defined as a labeled, ordered list (tuple) of arguments. A label identifies a relation and allows other entities to refer to it (e.g., a relation can be an argument of some other relation). In cognitive science, and in psychology specifically, arguments are more commonly called relational roles, and their values are called fillers (Doumas and Hummel 2005; Halford et al. 2010; Holyoak 2012). Each relational role is grounded in the human conceptual system, which provides knowledge on what fillers can play that role and what they can “do”. A label is associated with a relation’s intension. A relation divides the Cartesian product of all possible fillers that can be assigned to its roles into the subset for which the relation is satisfied (“true”) and its complement, for which the relation is not satisfied (“false”). The ability to represent relations in the mind allows humans to abstract from the perceptual and semantic properties of fillers (such properties can be misleading in abstract, formal reasoning; see Markman and Gentner 1993), supporting the compositionality and productivity of human thinking (Hofstadter 2001).
However, in order to do so, mental representations of relations need to be valid, that is, they must include the correct bindings between respective fillers and their roles as well as preserve these bindings during consecutive mental operations. First, fillers need to satisfy the categorial constraints for particular roles (Keane et al. 1994)—for instance in the relation Gave (giver, recipient, gift), the role of giver assumes a person or a group. The relation’s instance Gave (Tom, Ann, cat) clearly satisfies this constraint; but, in a creative story on a magic cat in which the constraints can be violated, it would also be acceptable that Gave (cat, Tom, Ann). However, as many fluid intelligence tests are semantically lean (include meaningless symbols or shapes), the above categorial constraints help little in processing relations. Second, and more important, the relation representation must be structurally consistent within the entire system of relations a person holds (Halford et al. 2010), so when considering the relation Received (recipient, giver, gift), the actors and objects must play the corresponding roles as in Gave, that is, Received (Ann, Tom, cat). Third, the categorial and structural properties of relations together allow for making multiple inferences. For instance, knowing that “A is above B” and “C is below B”, one can infer that “A is on top”, “C is at bottom”, “B is between them”, etc.
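To make these requirements concrete, here is a minimal Python sketch of a relation as a labeled set of role–filler bindings, with a simple categorial constraint check and a structural-consistency mapping (the encoding, the category sets, and the names are my own illustration, not a fragment of any cited model):

```python
# Minimal sketch of a relation as a labeled set of role-filler bindings
# (an illustration of the definitions above, not code from any cited model).

PERSONS = {"Tom", "Ann"}
THINGS = {"cat", "book"}

# each role names the category of fillers it accepts (categorial constraint)
GAVE_ROLES = {"giver": PERSONS, "recipient": PERSONS, "gift": THINGS}

def instantiate(label, roles, bindings):
    """Bind fillers to roles, checking each filler against its role's category."""
    for role, filler in bindings.items():
        if filler not in roles[role]:
            raise ValueError(f"{filler!r} cannot fill the role {role!r} in {label}")
    return (label, bindings)

gave = instantiate("Gave", GAVE_ROLES, {"giver": "Tom", "recipient": "Ann", "gift": "cat"})

# structural consistency: Received must assign the corresponding roles to the same fillers
received = ("Received", {"recipient": gave[1]["recipient"],
                         "giver": gave[1]["giver"],
                         "gift": gave[1]["gift"]})
print(gave)      # ('Gave', {'giver': 'Tom', 'recipient': 'Ann', 'gift': 'cat'})
print(received)  # ('Received', {'recipient': 'Ann', 'giver': 'Tom', 'gift': 'cat'})
```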
Simple fluid intelligence tests may capture fluid reasoning so aptly because they require a valid relation representation. For instance, in the relation-monitoring task, such a representation must include only the last symbols bound correctly to the consecutive matrix positions (e.g., the first, second, and third column), such as Row1 (Ψ, Φ, Ω). Following the structural consistency principle, it then needs to be transformed (in a process sometimes called relational integration; Oberauer et al. 2008) into some form of representation that explicitly reflects the three symbol differences, for instance Diff (Ψ ≠ Φ, Φ ≠ Ω, Ψ ≠ Ω).
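A corresponding minimal sketch of this integration step (again an illustration with hypothetical names, not code from the cited work):

```python
from itertools import combinations

# Row1 binds the last symbols to their column positions; Diff makes the three
# pairwise "not equal" relations explicit (relational integration).
row1 = {"col1": "Ψ", "col2": "Φ", "col3": "Ω"}   # Row1(Ψ, Φ, Ω)

diff = {(a, b): row1[a] != row1[b] for a, b in combinations(sorted(row1), 2)}
print(diff)                 # the three pairwise comparisons
print(all(diff.values()))   # True: respond, since all three symbols differ
```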
The condition that a relation representation has to be robust simply means that this representation has to remain accessible for as long as it is required while coping with a task. For instance, the relation Row1 cannot break down before the relation Diff is constructed (but can be dispensed with afterwards). Failure to access the key relation, either because of a failure to construct it or to maintain it after construction, likely leads to failure on the task.
The power to represent relations in a valid and robust way potentially provides an individual with crucial flexibility in dealing with cognitive tasks, because relational representations can encode arbitrary knowledge structures of any form. In particular, they enable novel structures, including ones that are not possible in the real world (“a mouse bigger than an elephant”), underlying innovative and creative thinking. This very flexibility stems from the fact that relational roles can, in principle, be paired with any fillers (see Doumas and Hummel 2005). Moreover, dynamic bindings can potentially encode diverse types of structures (e.g., lists, stacks, trees, networks), because relational roles can easily define particular placeholders in a specific structure (“a node”, “a root”, etc.). Finally, the intrinsic trade-off between size (many asynchronous bindings) and stability/precision (only one or two bindings, but each of them binding together rich information) can potentially be resolved by adapting the internal organization of bindings to the requirements of a task. For instance, in some situations (e.g., a change-detection task) it is more important to encode a large number of objects, even at the risk of losing some of them, while in other situations (e.g., attention-control tasks) it is crucial to focus on a single, maximally undistorted representation for as long as possible (see the adjustable-attention hypothesis by Cowan). Dynamic bindings seem to allow all that flexibility.
The effective maintenance of relation representations may drive individual differences in fluid reasoning, because evidence from cognitive neuroscience and cognitive modeling suggests that processing relations in a valid and robust way is very difficult for a biological system such as the human brain. Specifically, if the oscillatory models of working memory and reasoning are right, a relation representation requires maintaining an arbitrary and precise pattern of dynamics: the representations of a given relational role and its corresponding filler must be maintained in synchrony, while consecutive role–filler bindings must be active asynchronously. Both data from brain recordings (Bahramisharif et al. 2018; Qasim et al. 2021) and computational models (Chuderski and Andrelczyk 2015; Usher et al. 2001) suggest that maintaining such patterns is a demanding process that may become unstable with three or more bindings. Individual differences in fluid intelligence may, thus, stem from whether one succeeds or fails to process simple relations in a valid form (e.g., without missing or mixing roles and fillers) for the entire interval during which a relation is needed.
The above proposal should not be treated as a novel theory of fluid intelligence, but rather as a working hypothesis. In the present state of cognitive (neuro)science, we still await more precise data and models on how humans process relations, which are not easily extractable from the black box of the human mind. Even though psychometrics (new simple fluid intelligence tests), brain imaging (growing evidence for the role of rhythmic neural patterns in cognition), and cognitive modeling (development of models of relational reasoning yielding insight into the underlying mechanisms) have provided initial cues for the conceptualization of fluid intelligence in terms of flexible relational representations, future research is needed to develop a causal theoretical model explaining how dynamic patterns in the brain could translate into relational representations, which themselves would translate into the processing of cognitive tasks (including fluid intelligence tests), which, in turn, translates into academic, professional, socioeconomic, and life status. However, even without being such a theory, the current proposal helps to purify the fluid intelligence construct and to develop new ways of pursuing it scientifically.
Even though a direct method to observe how relations are represented in the brain is still lacking (although we can already identify simple list structures encoded by phase precession; Qasim et al. 2021), several kinds of less direct evidence could either support or undermine the role of relation representation for fluid intelligence. On the psychometric level, as the work by Jastrzębski et al. (2020, 2022) counts only as initial evidence, it should be systematically examined whether novel relation-processing tests can substitute for the traditional tests in fluid intelligence measurement (e.g., explaining the same variance, surpassing competitor predictors such as working-memory tasks) and can equally strongly predict phenomena known to relate substantially to fluid intelligence (e.g., academic achievement, learning). On the cognitive level, computational models that allow variation in the effectiveness of relation representation should be able to replicate the main findings from the reasoning ability literature (distributions of scores, patterns of errors, interrelations between various tasks, developmental patterns, etc.), even though successful simulations can never act as decisive proof. On the neural level, assuming that temporal patterns of bindings are encoded via their coupling to the phase of some brain rhythm, certain parameters of coupling (yet to be identified) are expected to predict scores on both simple relation-processing and typical fluid intelligence tests more strongly than alternative brain markers do (for initial evidence pertaining to delta–gamma coupling, see Gągol et al. 2018). Finally, the holy grail of intelligence research is cognitive training capable of increasing individual intelligence levels; however, a recent second-order meta-analysis indicated that no visible far-transfer effects are achieved by working-memory training (Sala et al. 2019). The plausibility of the relation-representation hypothesis would be strongly supported if, instead, training programs based on trivial relation-processing tasks resulted in persistent increases in fluid intelligence, operationalized at the latent level (for initial evidence of successful relation-processing training in adolescents, see McLoughlin et al. 2020).
Similar relation-based proposals in the literature, as well as their differences from the current proposal, need to be highlighted. The concept of relation appeared a century ago in Spearman’s (1927) idea of the eduction of relations and correlates (an analogous notion of discriminating and perceiving relations was also considered by Cattell (1943)). However, the psychometric evidence cited above suggests that neither identification of the valid relation when the values of its arguments are known (rule discovery) nor assigning the valid values to the arguments of a known relation (relation instantiation) is crucial for capturing fluid intelligence. Instead, a more basic process of relation representation is considered here: even when both the relation and its arguments are provided by the task, it is still demanding for the brain to maintain the resulting pattern of bindings for the required duration. Moreover, this representation is necessary for processes that are conceptually simpler than the eduction of relations, such as the bare validation of a relation (checking whether the current assignment of fillers to roles satisfies that relation) or even just the reporting of a relation (recalling an ordered list of elements in the complex-span task).
Compared to Oberauer’s (1993; Oberauer et al. 2008) idea of relational integration, and to the analogous conception of constructing relational representations by Halford et al. (2015), both of which located the crucial source of fluid intelligence differences in the process of integrating basic elements of relations into more complex relational structures (a process that may succeed or fail), the current proposal focuses primarily on the very maintenance of those basic elements, which can be demanding even in the absence of the need for relational integration. However, Duncan et al. (2017) showed that a fluid intelligence test with reduced rule integration (each rule could be processed separately) became feasible even for low IQ participants, implying the key role of relational integration (compositionality, in their terms). It is likely that both these sources of variance, mutually entangled to a large extent, do concurrently contribute to fluid intelligence, as suggested by initial research that contrasted these two sources (Shokri-Kojori and Krawczyk 2018). Definitely, more detailed research is needed here.
As to Halford et al.’s (1998) idea of relational complexity—the maximum number of independent variables or relation arguments that one can grasp simultaneously without segmentation and chunking, which limits the reasoning process one can handle (Andrews and Halford 2002)—the current proposal is also somewhat more basic: according to it, individual differences in fluid intelligence can also emerge from failures to robustly maintain relations lower in relational complexity than the maximum complexity of relation an individual can process.
Finally, relational frame theory (Hayes et al. 2001), which describes how relational processing and behavior can emerge from operant conditioning (so-called arbitrarily applicable relational responding), seems compatible with the current proposal, yet it is formulated in a much more abstract way. Moreover, this theory primarily pertains to the development of relational processing, instead of the adult differences in it.
In short, the present proposal claims that neither the process of integrating bindings into a complete relational representation nor the available relational complexity of such a representation is crucial for individual differences in fluid reasoning, even though they may contribute some of its variance. By contrast, such differences primarily reflect the validity and robustness of the more basic role–filler bindings representing relations. An inability to maintain such bindings in the brain/mind validly and robustly limits, as a result, the construction and processing of relation representations, even simple ones.

7. Conclusions

In this work, I focused on the importance of relation representation and processing for the construct of fluid intelligence. I argued that when the inessential features of fluid intelligence tests are abstracted away, the process captured by these tests amounts to representing the key relation(s) in a valid and robust way. Based on my review of neurocognitive data and models, this ability may itself be rooted in the parameters determining the maintenance of arbitrary patterns of dynamic bindings linking the consecutive arguments of a relation with the roles played by them. These arbitrary, dynamic bindings potentially allow substantial flexibility of (relational) thinking. How valid and robust the representations of relations one can maintain are, thanks to valid and robust bindings, may determine the level of fluid intelligence one displays. Validation of this proposal requires new precise data and models, integrating future findings from cognitive neuroscience and psychology with computational models of working memory and reasoning. The proposal can potentially help to reconceptualize fluid intelligence as a more process-oriented construct (theory), as compared to the traditional correlational approaches considered to date. Although such a complex phenomenon as fluid intelligence definitely cannot be reduced to a single factor, the idea of relation representation may purify this very phenomenon and stimulate new lines of research.

Funding

This work was supported by the National Science Centre of Poland (grant #2019/33/B/HS6/00321). The funding source had no role in preparation of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data are associated with this work.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Alexander, William H., and Joshua W. Brown. 2011. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience 14: 1338–44.
2. Andrews, Glenda, and Graeme S. Halford. 2002. A cognitive complexity metric applied to cognitive development. Cognitive Psychology 45: 153–219.
3. Arden, Rosalind, and Mark J. Adams. 2016. A general intelligence factor in dogs. Intelligence 55: 79–85.
4. Arendasy, Martin, Andreas Hergovich, and Markus Sommer. 2008. Investigating the ‘g’ saturation of various stratum-two factors using automatic item generation. Intelligence 36: 574–83.
5. Arendasy, Martin E., and Markus Sommer. 2013. Reducing response elimination strategies enhances the construct validity of figural matrices. Intelligence 41: 234–43.
6. Bahramisharif, Ali, Ole Jensen, Joshua Jacobs, and John Lisman. 2018. Serial representation of items during working memory maintenance at letter-selective cortical sites. PLoS Biology 16: e2003805.
7. Barbey, Aron K. 2018. Network neuroscience theory of human intelligence. Trends in Cognitive Sciences 22: 8–20.
8. Bateman, Joel E., Kate A. Thompson, and Damian P. Birney. 2019. Validating the relation-monitoring task as a measure of relational integration and predictor of fluid intelligence. Memory & Cognition 47: 1457–68.
9. Bays, Paul M. 2015. Spikes not slots: Noise in neural populations limits working memory. Trends in Cognitive Sciences 19: 431–38.
10. Becker, Nicolas, Florian Schmitz, Anke M. Falk, Jasmin Feldbrugge, Daniel R. Recktenwald, Oliver Wilhelm, Franzis Preckel, and Frank M. Spinath. 2016. Preventing response elimination strategies improves the convergent validity of figural matrices. Journal of Intelligence 4: 2.
11. Bethell-Fox, Charles E., David F. Lohman, and Richard E. Snow. 1984. Adaptive reasoning: Componential and eye movement analysis of geometric analogy performance. Intelligence 8: 205–38.
12. Botvinick, Matthew M., Todd S. Braver, Deanna M. Barch, Cameron S. Carter, and Jonathan D. Cohen. 2001. Conflict monitoring and cognitive control. Psychological Review 108: 624–52.
13. Brady, Timothy F., Talia Konkle, and George A. Alvarez. 2011. A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision 11: 4.
14. Buzsaki, Gyorgy. 2006. Rhythms of the Brain. Oxford: Oxford University Press.
15. Carlstedt, Berit, Jan-Eric Gustafsson, and Eva Ullstadius. 2000. Item sequencing effects on the measurement of fluid intelligence. Intelligence 28: 145–60.
16. Carpenter, Patricia A., Marcel A. Just, and Peter Shell. 1990. What one intelligence test measures: A theoretical account of the processing in the Raven progressive matrices test. Psychological Review 97: 404–31.
17. Carroll, John B. 1993. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge: Cambridge University Press.
18. Cattell, Raymond B. 1943. The measurement of adult intelligence. Psychological Bulletin 40: 153–93.
19. Cattell, Raymond B. 1949. Culture Free Intelligence Test, Scale 1, Handbook. Champaign: IPAT.
20. Cattell, Raymond B. 1963. Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology 54: 1–22.
21. Chuderski, Adam. 2013. When are fluid intelligence and working memory isomorphic and when are they not? Intelligence 41: 244–62.
22. Chuderski, Adam. 2014. The relational integration task explains fluid reasoning above and beyond other working memory tasks. Memory & Cognition 42: 448–63.
23. Chuderski, Adam. 2015a. The broad factor of working memory is virtually isomorphic to fluid intelligence tested under time pressure. Personality and Individual Differences 85: 98–104.
24. Chuderski, Adam. 2015b. Why people fail on the fluid intelligence tests. Journal of Individual Differences 36: 138–49.
25. Chuderski, Adam. 2019. Even a single trivial binding of information is critical for fluid intelligence. Intelligence 77: 101396.
26. Chuderski, Adam, and Krzysztof Andrelczyk. 2015. From neural oscillations to complex cognition: Simulating the effect of the theta-to-gamma cycle length ratio on analogical reasoning. Cognitive Psychology 76: 78–102.
27. Chuderski, Adam, Krzysztof Andrelczyk, and Tomasz Smoleń. 2013. An oscillatory model of individual differences in working memory capacity and relational integration. Cognitive Systems Research 24: 87–95.
28. Chuderski, Adam, and Jan Jastrzębski. 2018. Much ado about Aha! Insight problem solving is strongly related to working memory capacity and reasoning ability. Journal of Experimental Psychology: General 147: 257–81.
29. Chuderski, Adam, and Tomasz Smoleń. 2016. An integrated model of utility-based evaluation and resolution of conflicts in the Stroop task. Psychological Review 123: 255–90.
30. Chuderski, Adam, Maciej Taraday, Edward Nęcka, and Tomasz Smoleń. 2012. Storage capacity explains fluid intelligence but executive control does not. Intelligence 40: 278–95.
31. Clevenger, Pamela E., and John E. Hummel. 2014. Working memory for relations among objects. Attention, Perception & Psychophysics 76: 1933–53.
32. Cohen, Michael X. 2011. It’s about time. Frontiers in Human Neuroscience 5: 2.
33. Colom, Roberto, Francisco J. Abad, M. Angeles Quiroga, Pei C. Shih, and Carmen Flores-Mendoza. 2008. Working memory and intelligence are highly related constructs but why? Intelligence 36: 584–606.
34. Conway, Andrew R., Nelson Cowan, Michael F. Bunting, David J. Therriault, and Scott R. Minkoff. 2002. A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence 30: 163–83.
  35. Cowan, Nelson. 2022. Working memory development: A 50-year assessment of research and underlying theories. Cognition 224: 105075. [Google Scholar] [CrossRef] [PubMed]
  36. Deary, Ian J. 1994. Sensory discrimination and intelligence: Postmortem or resurrection? American Journal of Psychology 107: 95–115. [Google Scholar] [CrossRef]
  37. Deary, Ian J., P. Joseph Bell, Andrew J. Bell, Mary L. Campbell, and Nicola D. Fazal. 2004. Sensory discrimination and intelligence: Testing Spearman’s other hypothesis. American Journal of Psychology 117: 1–18. [Google Scholar] [CrossRef]
  38. Deary, Ian J., Lars Penke, and Wendy Johnson. 2010. The neuroscience of human intelligence differences. Nature Reviews Neuroscience 11: 201–11. [Google Scholar] [CrossRef]
  39. Diamond, Adele. 2013. Executive functions. Annual Review of Psychology 64: 135–68. [Google Scholar] [CrossRef] [Green Version]
  40. Doebler, Phillip, and Barbara Scheffler. 2016. The relationship of choice reaction time variability and intelligence: A meta-analysis. Learning and Individual Differences 52: 157–66. [Google Scholar] [CrossRef]
  41. Doumas, Leonidas A. A., and John E. Hummel. 2005. Approaches to modeling human mental representations: What works, what doesn’t and why. In The Cambridge Handbook of Thinking and Reasoning. Edited by Keith J. Holyoak and Robert Morrison. Cambridge: Cambridge University Press, pp. 73–91. [Google Scholar]
  42. Doumas, Leonidas A. A., John E. Hummel, and Catherine M. Sandhofer. 2008. A theory of the discovery and predication of relational concepts. Psychological Review 115: 1–43. [Google Scholar] [CrossRef] [Green Version]
  43. Draheim, Christopher, Jason S. Tsukahara, Jessie D. Martin, Cody A. Mashburn, and Randall W. Engle. 2021. A toolbox approach to improving the measurement of attention control. Journal of Experimental Psychology: General 150: 242–75. [Google Scholar] [CrossRef]
  44. Duncan, John, Daphne Chylinski, Daniel J. Mitchell, and Apoorva Bhandari. 2017. Complexity and compositionality in fluid intelligence. Proceedings of the National Academy of Sciences 114: 5295–99. [Google Scholar] [CrossRef] [Green Version]
  45. Engle, Randall W., Stephen W. Tuholski, James E. Laughlin, and Andrew R. A. Conway. 1999. Working memory, short-term memory, and general fluid intelligence: A latent-variable approach. Journal of Experimental Psychology: General 128: 309–31. [Google Scholar] [CrossRef] [PubMed]
  46. Estrada, Eduardo, Francisco J. Román, Francisco J. Abad, and Roberto Colom. 2017. Separating power and speed components of standardized intelligence measures. Intelligence 61: 159–68. [Google Scholar] [CrossRef]
  47. Forbus, Kenneth D., Ronald W. Fergusson, and Dedre Gentner. 1994. Incremental structure mapping. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society. Edited by Ashwin Ram and Kurt Eislet. Hillsdale: Lawrence Erlbaum, pp. 313–18. [Google Scholar]
  48. Fougnie, Daryl, and George A. Alvarez. 2011. Object features fail independently in visual working memory: Evidence for a probabilistic feature-store model. Journal of Vision 11: 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Frischkorn, Gideon T., and Claudia C. von Bastian. 2021. In search of the executive cognitive processes proposed by Process-Overlap Theory. Journal of Intelligence 9: 43. [Google Scholar] [CrossRef] [PubMed]
  50. Galton, Francis. 1883. Inquiries into Human Faculty. London: Dent. [Google Scholar]
  51. Gągol, Adam, Mikołaj Magnuski, Bartłomiej Kroczek, Patrycja Kałamała, Michał Ociepka, Emiliano Santarnecchi, and Adam Chuderski. 2018. Delta-gamma coupling as a potential neurophysiological mechanism of fluid intelligence. Intelligence 66: 54–63. [Google Scholar] [CrossRef]
  52. Gentner, Dedre. 1983. Structure mapping: A theoretical framework for analogy. Cognitive Science 7: 155–70. [Google Scholar] [CrossRef]
  53. Gilbert, J. Allen. 1894. Researches on the mental and physical development of schoolchildren. Studies of Yale Psychological Laboratory 2: 40–100. [Google Scholar]
  54. Goodwin, Geoffrey P., and Philip N. Johnson-Laird. 2008. Transitive and pseudotransitive inferences. Cognition 108: 320–52. [Google Scholar] [CrossRef]
  55. Gustafsson, Jan-Eric. 1984. A unifying model for the structure of intellectual abilities. Intelligence 8: 179–203. [Google Scholar] [CrossRef]
  56. Haier, Richard J. 2016. The Neuroscience of Intelligence. New York: Cambridge University Press. [Google Scholar]
  57. Halford, Graeme S., Glenda Andrews, and William H. Wilson. 2015. Relational processing in reasoning: The role of working memory. In Reasoning as Memory. Edited by Aidan Feeney and Valerie A. Thompson. Hove: Psychology Press, pp. 34–52. [Google Scholar]
  58. Halford, Graeme S., William H. Wilson, and Steven Phillips. 1998. Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology. Behavioral and Brain Sciences 21: 803–64. [Google Scholar] [CrossRef]
  59. Halford, Graeme S., William H. Wilson, and Steven Phillips. 2010. Relational knowledge: The foundation of higher cognition. Trends in Cognitive Sciences 14: 497–505. [Google Scholar] [CrossRef] [PubMed]
  60. Hanslmayr, Simon, Nikolai Axmacher, and Cory S. Inman. 2019. Modulating human memory via entrainment of brain oscillations. Trends in Neurosciences 42: 485–99. [Google Scholar] [CrossRef]
  61. Harrison, Tyler L., Zach Shipstead, and Randall W. Engle. 2015. Why is working memory capacity related to matrix reasoning tasks? Memory & Cognition 43: 389–96. [Google Scholar]
  62. Hayes, Steven C., Dermot Barnes-Holmes, and Bryan Roche. 2001. Relational Frame Theory: A Post-Skinnerian Account of Human Language and Cognition. New York: Plenum Press. [Google Scholar]
  63. Hayes, Taylor R., Alexander D. Petrov, and Per B. Sederberg. 2015. Do we really become smarter when our fluid-intelligence scores improve? Intelligence 48: 1–14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Hedge, Craig, Georgina Powell, and Petroc Sumner. 2018. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods 50: 1166–86. [Google Scholar] [CrossRef] [PubMed]
  65. Hofstadter, Douglas R. 2001. Analogy as the Core of Cognition. In The Analogical Mind: Perspectives from Cognitive Science. Edited by Dedre Gentner, Keith J. Holyoak and Boicho N. Kokinov. Cambridge: The MIT Press/Bradford Book, pp. 499–538. [Google Scholar]
  66. Holroyd, Clay B., Nick Yeung, Michael G. H. Coles, and Jonathan D. Cohen. 2005. A mechanism for error detection in speeded response time tasks. Journal of Experimental Psychology: General 134: 163–91. [Google Scholar]
  67. Holyoak, Keith J. 2012. Analogy and relational reasoning. In The Oxford Handbook of Thinking and Reasoning. Edited by Keith J. Holyoak and Robert G. Morrison. New York: Oxford University Press, pp. 234–59. [Google Scholar]
  68. Horn, John L., and Raymond B. Cattell. 1966. Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology 57: 253–70. [Google Scholar] [CrossRef]
  69. Horn, David, and Marius Usher. 1991. Parallel activation of memories in an oscillatory neural network. Neural Computation 3: 31–43. [Google Scholar] [CrossRef]
  70. Hummel, John E., and Keith J. Holyoak. 1997. Distributed representations of structure: A theory of analogical access and mapping. Psychological Review 104: 427–66. [Google Scholar] [CrossRef]
  71. Hummel, John E., and Keith J. Holyoak. 2003. A symbolic-connectionist theory of relational inference and generalization. Psychological Review 110: 220–64. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Hunt, Earl B., Clifford Lunneborg, and Joe Lewis. 1975. What does it mean to be high verbal? Cognitive Psychology 7: 194–227. [Google Scholar] [CrossRef]
  73. Jastrzębski, Jan, Bartłomiej Kroczek, and Adam Chuderski. 2021. Galton and Spearman revisited: Can single general discrimination ability drive performance on diverse sensorimotor tasks and explain intelligence? Journal of Experimental Psychology: General 150: 1279–302. [Google Scholar] [CrossRef]
  74. Jastrzębski, Jan, Michał Ociepka, and Adam Chuderski. 2020. Fluid intelligence is equivalent to relation processing. Intelligence 82: 101489. [Google Scholar] [CrossRef]
  75. Jastrzębski, Jan, Michał Ociepka, and Adam Chuderski. 2022. Graph Mapping: A novel and simple test to validly assess fluid reasoning. Behavior Research Methods, 1–13. [Google Scholar] [CrossRef]
  76. Jensen, Arthur R. 2006. Clocking the Mind: Mental Chronometry and Individual Differences. Amsterdam: Elsevier. [Google Scholar]
  77. Jensen, Arthur R., and Ella Munro. 1979. Reaction time, movement time, and intelligence. Intelligence 3: 121–26. [Google Scholar] [CrossRef]
  78. Jensen, Ole, Yali Pan, Steven Frisson, and Lin Wang. 2021. An oscillatory pipelining mechanism supporting previewing during visual exploration and reading. Trends in Cognitive Sciences 25: 1033–44. [Google Scholar] [CrossRef] [PubMed]
  79. Johnson-Laird, Philip N. 2006. How We Reason. Oxford: Oxford University Press. [Google Scholar]
  80. Kan, Kes-Jan, Rogier A. Kievit, Conor Dolan, and Han van der Maas. 2011. On the interpretation of the CHC factor Gc. Intelligence 39: 292–302. [Google Scholar] [CrossRef]
  81. Keane, Mark T., Tim Ledgeway, and Stuart Duff. 1994. Constraints on analogical mapping: A comparison of three models. Cognitive Science 18: 387–438. [Google Scholar] [CrossRef]
  82. Knowlton, Barbara J., Robert G. Morrison, John E. Hummel, and Keith J. Holyoak. 2012. A neurocomputational system for relational reasoning. Trends in Cognitive Sciences 16: 373–81. [Google Scholar] [CrossRef]
  83. Koene, Randal, and Michael Hasselmo. 2007. First-in-first-out item replacement in a model of short-term memory based on persistent spiking. Cerebral Cortex 17: 1766–81. [Google Scholar] [CrossRef] [Green Version]
  84. Kovacs, Kristof, and Andrew R. A. Conway. 2016. Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry 27: 151–77. [Google Scholar] [CrossRef]
  85. Kvist, Ann V., and Jan-Eric Gustafsson. 2008. The relation between fluid intelligence and the general factor as a function of cultural background: A test of Cattell’s Investment theory. Intelligence 36: 422–36. [Google Scholar] [CrossRef] [Green Version]
  86. Kyllonen, Patrick C., and Raymond E. Christal. 1990. Reasoning ability is (little more than) working memory capacity? Intelligence 14: 389–433. [Google Scholar] [CrossRef]
  87. Lakin, Joni M., and James L. Gambrell. 2012. Distinguishing verbal, quantitative, and figural facets of fluid intelligence in young students. Intelligence 40: 560–70. [Google Scholar] [CrossRef]
  88. Levacher, Julie, Marco Koch, Johanna Hissbach, Frank M. Spinath, and Nicolas Becker. 2022. You can play the game without knowing the rules—But you’re better off knowing them: The influence of rule knowledge on figural matrices tests. European Journal of Psychological Assessment 38: 15–23. [Google Scholar] [CrossRef]
  89. Li, Shu-Chen, Malina Jordanova, and Ulman Lindenberger. 1998. From good senses to good sense: A link between tactile information processing and intelligence. Intelligence 26: 99–122. [Google Scholar] [CrossRef] [Green Version]
  90. Lisman, John E., and Marco A. P. Idiart. 1995. Storage of 7 ± 2 short-term memories in oscillatory subcycles. Science 267: 1512–14. [Google Scholar] [CrossRef]
  91. Little, Daniel R., Stephan Lewandowsky, and Stewart Craig. 2014. Working memory capacity and fluid abilities: The more difficult the item, the more more is better. Frontiers in Psychology 5: 239. [Google Scholar] [CrossRef] [PubMed]
  92. Loesche, Patrick, Jennifer Wiley, and Marcus Hasselhorn. 2015. How knowing the rules affects solving the Raven progressive matrices test. Intelligence 48: 58–75. [Google Scholar] [CrossRef]
  93. Lovett, Andrew, and Kenneth Forbus. 2017. Modeling visual problem solving as analogical reasoning. Psychological Review 124: 60–90. [Google Scholar] [CrossRef] [PubMed]
  94. Lovett, Marsha C. 2005. A strategy-based interpretation of Stroop. Cognitive Science 29: 493–524. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  95. Lozano, José H. 2015. Are impulsivity and intelligence truly related constructs? Evidence based on the fixed-links model. Personality and Individual Differences 85: 192–98. [Google Scholar] [CrossRef]
  96. Lozano, Jose H., and Javier Revuelta. 2020. Investigating operation-specific learning effects in the Raven’s Advanced Progressive Matrices: A linear logistic test modeling approach. Intelligence 82: 101468. [Google Scholar] [CrossRef]
  97. Lu, Ying, and Stephen G. Sireci. 2007. Validity issues in test speededness. Educational Measurement: Issues and Practice 26: 29–37. [Google Scholar] [CrossRef]
  98. Markman, Arthur B., and Dedre Gentner. 1993. Structural alignment during similarity comparisons. Cognitive Psychology 25: 431–67. [Google Scholar] [CrossRef] [Green Version]
  99. McGrew, Kevin S. 2009. CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37: 1–10. [Google Scholar] [CrossRef]
  100. McLoughlin, Shane, Ian Tyndall, and Antonina Pereira. 2020. Convergence of multiple fields on a relational reasoning approach to cognition. Intelligence 83: 101491. [Google Scholar] [CrossRef]
  101. Meyer, Christine S., Priska Hagmann-von Arx, Sakari Lemola, and Alexander Grob. 2010. Correspondence between the general ability to discriminate sensory stimuli and general intelligence. Journal of Individual Differences 31: 46–56. [Google Scholar] [CrossRef]
  102. Neisser, Ulric. 1967. Cognitive Psychology. Englewood Cliffs: Prentice-Hall. [Google Scholar]
  103. Neubauer, Aljoscha C. 1990. Speed of information processing in the Hick paradigm and response latencies in a psychometric intelligence test. Personality and Individual Differences 11: 147–52. [Google Scholar] [CrossRef]
  104. Oaksford, Mike, and Nick Chater. 2007. Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Oxford: Oxford University Press. [Google Scholar]
  105. Oberauer, Klaus. 1993. Die Koordination kognitiver Operationen. Eine Studie zum Zusammenhang von “working-memory” und Intelligenz. Zeitschrift für Psychologie 201: 57–84. [Google Scholar]
  106. Oberauer, Klaus. 2016. Parameters, not processes, explain general intelligence. Psychological Inquiry 27: 231–35. [Google Scholar] [CrossRef]
  107. Oberauer, Klaus, Stephan Lewandowsky, Edward Awh, Gordon D. A. Brown, Andrew Conway, Nelson Cowan, Christopher Donkin, Simon Farrell, Graham J. Hitch, Mark J. Hurlstone, and et al. 2018. Benchmarks for models of short-term and working memory. Psychological Bulletin 144: 885–958. [Google Scholar] [CrossRef] [PubMed]
  108. Oberauer, Klaus, and Hsuan-Yu Lin. 2017. An interference model of visual working memory. Psychological Review 124: 21–59. [Google Scholar] [CrossRef] [PubMed]
  109. Oberauer, Klaus, Ralf Schultze, Oliver Wilhelm, and Heinz-Martin Süß. 2005. Working memory and intelligence—Their correlation and their relation: Comment on Ackerman, Beier, and Boyle. Psychological Bulletin 131: 61–65. [Google Scholar] [CrossRef] [Green Version]
  110. Oberauer, Klaus, Heinz-Martin Süß, Oliver Wilhelm, and Werner W. Wittman. 2008. Which working memory functions predict intelligence? Intelligence 36: 641–52. [Google Scholar] [CrossRef] [Green Version]
  111. O’Keefe, John, and Michael L. Recce. 1993. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3: 317–30. [Google Scholar] [CrossRef] [PubMed]
  112. Qasim, Salman E., Itzhak Fried, and Joshua Jacobs. 2021. Phase precession in the human hippocampus and entorhinal cortex. Cell 184: 3242–55. [Google Scholar] [CrossRef]
  113. Pomper, Ulrich, and Ulrich Ansorge. 2021. Theta-rhythmic oscillation of working memory performance. Psychological Science 32: 1801–10. [Google Scholar] [CrossRef]
  114. Primi, Ricardo. 2002. Complexity of geometric inductive reasoning tasks: Contribution to the understanding of fluid intelligence. Intelligence 30: 41–70. [Google Scholar] [CrossRef]
  115. Raffone, Antonino, and Gezinus Wolters. 2001. A cortical mechanism for binding in visual memory. Journal of Cognitive Neuroscience 13: 766–85. [Google Scholar] [CrossRef]
  116. Rasmussen, Daniel, and Chris Eliasmith. 2014. A spiking neural model applied to the study of human performance and cognitive decline on Raven’s Advanced Progressive Matrices. Intelligence 42: 53–82. [Google Scholar] [CrossRef]
  117. Raven, John C. 1938. Progressive Matrices. London: Lewis. [Google Scholar]
  118. Raven, John C., John H. Court, and Jean Raven. 1983. Manual for Raven’s Progressive Matrices and Vocabulary Scales (Section 4: Advanced Progressive Matrices). London: H. K. Lewis. [Google Scholar]
  119. Ren, Xuezhu, Tengfei Wang, Sumin Sun, Mi Deng, and Karl Schweizer. 2018. Speeded testing in the assessment of intelligence gives rise to a speed factor. Intelligence 66: 64–71. [Google Scholar] [CrossRef]
  120. Rey-Mermet, Alodie, Miriam Gade, Alessandra S. Souza, Claudia C. von Bastian, and Klaus Oberauer. 2019. Is executive control related to working memory capacity and fluid intelligence? Journal of Experimental Psychology: General 148: 1335–72. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  121. Roelofs, Ardi. 2003. Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review 110: 88–125. [Google Scholar] [CrossRef] [Green Version]
  122. Sala, Giovanni, N. Deniz Aksayli, K. Semir Tatlidil, Tomoko Tatsumi, Yasuyuki Gondo, and Fernand Gobet. 2019. Near and far transfer in cognitive training: A second-order meta-analysis. Collabra: Psychology 5: 18. [Google Scholar] [CrossRef] [Green Version]
  123. Salthouse, Timothy. 1993. Relations between running memory and fluid intelligence. Intelligence 43: 1–7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Sauseng, Paul, Charline Peylo, Anna L. Biel, Elisabeth V. C. Friedrich, and Carola Romberg-Taylor. 2019. Does cross-frequency phase coupling of oscillatory brain activity contribute to a better understanding of visual working memory? British Journal of Psychology 110: 245–55. [Google Scholar] [CrossRef]
  125. Schmidt, James R. 2013. The Parallel Episodic Processing (PEP) model: Dissociating contingency and conflict adaptation in the item-specific proportion congruent paradigm. Acta Psychologica 142: 119–26. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  126. Schubert, Anna-Lena. 2019. A meta-analysis of the worst performance rule. Intelligence 73: 88–100. [Google Scholar] [CrossRef] [Green Version]
  127. Schubert, Anna-Lena, and Gideon T. Frischkorn. 2020. Neurocognitive psychometrics of intelligence: How measurement advancements unveiled the role of mental speed in intelligence differences. Current Directions in Psychological Science 29: 140–46. [Google Scholar] [CrossRef]
  128. Schulze, Doreen, Andre Beauducel, and Burkhard Brocke. 2005. Semantically meaningful and abstract figural reasoning in the context of fluid and crystallized intelligence. Intelligence 33: 143–59. [Google Scholar] [CrossRef]
  129. Schweizer, Karl. 2010. The relationship of attention and intelligence. In Handbook of Individual Differences in Cognition: Attention, Memory, and Executive Control. Edited by Aleksandra Gruszka, Gerald Matthews and Błażej Szymura. New York: Springer, pp. 247–62. [Google Scholar]
  130. Schweizer, Karl, Stephan J. Troche, and Thomas H. Rammsayer. 2018. Does processing speed exert an influence on the special relationship of fluid and general intelligence? Personality and Individual Differences 131: 57–60. [Google Scholar] [CrossRef]
  131. Shastri, Lokendra, and Venkat Ajjanagadde. 1993. From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences 16: 417–94. [Google Scholar] [CrossRef] [Green Version]
  132. Sheppard, Leah D., and Philip A. Vernon. 2008. Intelligence and speed of information-processing: A review of 50 years of research. Personality and Individual Differences 44: 535–51. [Google Scholar] [CrossRef]
  133. Shokri-Kojori, Ehsan, and Daniel C. Krawczyk. 2018. Signatures of multiple processes contributing to fluid reasoning performance. Intelligence 68: 87–99. [Google Scholar] [CrossRef]
  134. Siegel, Markus, Melissa R. Warden, and Earl K. Miller. 2009. Phase-dependent neuronal coding of objects in short-term memory. Proceedings of the National Academy of Sciences of the United States of America 106: 21341–46. [Google Scholar] [CrossRef] [Green Version]
  135. Sligte, Ilja G., Annelinde R. E. Vandenbroucke, H. Steven Scholte, and Victor A. F. Lamme. 2010. Detailed sensory memory, sloppy working memory. Frontiers in Psychology 1: 175. [Google Scholar] [CrossRef] [Green Version]
  136. Smoleń, Tomasz, and Adam Chuderski. 2015. The quadratic relationship between difficulty of intelligence test items and their correlations with working memory. Frontiers in Psychology 6: 1270. [Google Scholar] [CrossRef] [Green Version]
  137. Snow, Richard E., Patrick C. Kyllonen, and Brian Marshalek. 1984. The topography of ability and learning correlations. In Advances in the Psychology of Human Intelligence. Edited by Robert J. Sternberg. Hillsdale: Lawrence Erlbaum Associates, vol. 2, pp. 47–103. [Google Scholar]
  138. Spearman, Charles. 1927. The Abilities of Man. London: Macmillan. [Google Scholar]
  139. Stankov, Lazar. 1983. Attention and intelligence. Journal of Educational Psychology 75: 471–90. [Google Scholar] [CrossRef]
  140. Thissen, Alica, Marco Koch, Nicolas Becker, and Frank M. Spinath. 2018. Construct your own response: The cube construction task as a novel format for the assessment of spatial ability. European Journal of Psychological Assessment 34: 304–11. [Google Scholar] [CrossRef]
  141. Thomson, Godfrey H. 1919. The hierarchy of abilities. British Journal of Psychology 9: 337–44. [Google Scholar] [CrossRef]
  142. Thurstone, Louis L. 1938. Primary Mental Abilities. Chicago: University of Chicago Press. [Google Scholar]
  143. Troche, Stephan J., and Thomas H. Rammsayer. 2009. The influence of temporal resolution power and working memory capacity on psychometric intelligence. Intelligence 37: 489–96. [Google Scholar] [CrossRef]
  144. Troche, Stephan J., Felicitas L. Wagner, Annik E. Voelke, Claudia M. Roebers, and Thomas H. Rammsayer. 2014. Individual differences in working memory capacity explain the relationship between general discrimination ability and psychometric intelligence. Intelligence 44: 40–50. [Google Scholar] [CrossRef]
  145. Unsworth, Nash. 2019. Individual differences in long-term memory. Psychological Bulletin 145: 79–139. [Google Scholar] [CrossRef]
  146. Unsworth, Nash, Keisuke Fukuda, Edward Awh, and Edward K. Vogel. 2014. Working memory and fluid intelligence: Capacity, attention control, and secondary memory. Cognitive Psychology 71: 1–26. [Google Scholar] [CrossRef] [Green Version]
  147. Unsworth, Nash, Thomas S. Redick, Chad E. Lakey, and Diana L. Young. 2010. Lapses in sustained attention and their relation to executive and fluid abilities: An individual differences investigation. Intelligence 38: 111–22. [Google Scholar] [CrossRef]
  148. Usher, Marius, Jonathan D. Cohen, Henk Haarmann, and David Horn. 2001. Neural mechanism for the magical number 4: Competitive interactions and nonlinear oscillation. Behavioral and Brain Sciences 24: 151–52. [Google Scholar] [CrossRef]
  149. Jarosz, Andrew F., and Jennifer Wiley. 2012. Why does working memory capacity predict RAPM performance? A possible role of distraction. Intelligence 40: 427–38. [Google Scholar] [CrossRef]
  150. Jäger, Adolf O. 1982. Mehrmodale Klassifikation von Intelligenzleistungen: Experimentell kontrollierte Weiterentwicklung eines deskriptiven Intelligenzstrukturmodells. Diagnostica 28: 195–225. [Google Scholar]
  151. Wiley, Jennifer, Andrew F. Jarosz, Patrick J. Cushen, and Gregory J. H. Colflesh. 2011. New rule use drives the relation between working memory capacity and Raven’s Advanced Progressive Matrices. Journal of Experimental Psychology: Learning, Memory, & Cognition 37: 256–63. [Google Scholar]
  152. Wilhelm, Oliver. 2005. Measuring reasoning ability. In Handbook of Understanding and Measuring Intelligence. Edited by Oliver Wilhelm and Randall W. Engle. Thousand Oaks: Sage Publications, Inc., pp. 373–92. [Google Scholar]
  153. Wilhelm, Oliver, Andrea Hildebrandt, and Klaus Oberauer. 2013. What is working memory, and how can we measure it? Frontiers in Psychology 4: 433. [Google Scholar] [CrossRef] [Green Version]
  154. Williams, Ben A., and Stephen L. Pearlberg. 2006. Learning of three-term contingencies correlates with Raven scores, but not with measures of cognitive processing. Intelligence 34: 177–91. [Google Scholar] [CrossRef]
  155. Van Der Maas, Han L. J., Conor V. Dolan, Raoul P. P. P. Grasman, Jelte M. Wicherts, Hilde M. Huizenga, and Maartje E. J. Raijmakers. 2006. A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review 113: 842–61. [Google Scholar] [CrossRef] [PubMed]
  156. Verguts, Tom, and Wim Notebaert. 2008. Hebbian learning of cognitive control. Psychological Review 115: 518–25. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  157. Vernon, Philip E. 1964. The Structure of Human Abilities. London: Methuen. [Google Scholar]
  158. Vigneau, Francois, Andre F. Caissie, and Douglas A. Bors. 2006. Eye-movement analysis demonstrates strategic influences on intelligence. Intelligence 34: 261–72. [Google Scholar] [CrossRef]
  159. Vogel, Edward K., Geoffrey F. Woodman, and Steven J. Luck. 2001. Storage of features, conjunctions, and objects in visual working memory. Journal of Experimental Psychology: Human Perception & Performance 27: 92–114. [Google Scholar]
Figure 1. An example of a temporal pattern of bindings in LISA’s representation of the relation instance Gave (Tom, Ann, cat).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
