1. Introduction
Research in cognitive psychology has devoted considerable attention to understanding the nature and structure of executive functions (EF). Generally, EF has been defined as a set of elementary control processes (e.g., regulation of behavior, planning, self-monitoring, self-control, problem-solving, maintenance of attention) that guide complex behaviors across a variety of settings [
1]. Executive functions have been shown to predict school success [
2], school readiness [
3], physical health [
4], and intelligence [
5].
The process of characterizing the structure of EF has been a long-standing focus of researchers in numerous fields [
6,
7,
8]. Cognitive scientists have engaged in debates surrounding the generality of EF (the view that EF develops in a domain-general manner) versus its specificity (the view that EF develops according to context-specific demands) [
6], but over the past two decades, the prevailing work has centered on EF as a set of component skills or processes with predictive power for a variety of complex behaviors [
9]. Specifically, Miyake and colleagues [
9] identified a data-driven model that explained EF as composed of three components (i.e., updating, shifting, inhibition) that are both unified in their power to explain EF and diverse in their ability to operate individually. However, others have called this reductionist view of EF problematic, arguing that the components of EF cannot be conceptually or psychometrically condensed but instead must be considered within specific contexts [
10]. What has ensued is an intensified discussion of theoretical and conceptual issues related to EF, with calls for a renewed focus on theory development [
7] and a balanced approach between competing models of EF (i.e., domain-general vs. context-dependent conceptualizations) [
11].
This debate is arguably an important one, as understanding EF carries implications at both the theoretical and applied levels. Given the relevance of EF to a variety of outcome variables [
12,
13], much work has been devoted to developing intervention strategies designed to mitigate EF deficits [
14,
15]. Mixed findings from this intervention work [
15,
16] have also added to the debate about the nature and structure of EF, leading cognitive scientists to ponder whether mixed results are a function of the interventions themselves or a result of not targeting the correct construct. While much of the debate has centered on the conceptualization of EF, one could argue that this discussion has inevitably bled into the psychometric study of EF, since the measurement of a construct is intricately tied to how the construct is described. As such, one possibility to consider is whether the crux of the debate is less about the conceptualization of EF and more about the psychometric structure (operationalization) used to understand it.
As there is consensus in cognitive science that EF is significant for certain outcome variables [
1], it is critical that the field has confidence that the accepted models of EF constructs are accurate for both theoretical and applied purposes. As researchers apply work on EF in real-world settings (e.g., school environments) to inform treatment and intervention decisions for individuals, ensuring an accurate representation of the constructs enhances the efficacy of applied methods. Though the research on EF is extensive, minimal work has focused on evaluating the strength and validity of the psychometric model of EF. The current paper, comprising two studies, focuses on enhancing our understanding of the psychometric structure of EF for these purposes.
1.1. Three-Factor Model of Executive Function (Miyake et al. [9])
The most prominent model of EF used in the field of cognitive psychology is the three-factor model [
9]. This latent variable model proposes a theoretical account of the structure of EF, positing three latent factors, each measured by several manifest task variables: shifting (i.e., the ability to switch between different mental sets; cognitive flexibility), updating (i.e., monitoring information in working memory and replacing it with relevant new information), and inhibition (i.e., the ability to suppress distracting information while engaging in an ongoing task). These three components were considered key to the model, given their predictive role in other higher-order cognitive processes [
9]. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) revealed that the data fit the specified three-factor structure of executive functions [
9]. Further, the three factors (i.e., shifting, updating, and inhibition) were correlated, with coefficients ranging from small to moderate [
9]. These findings (i.e., a distinct three-factor structure whose factors were nonetheless correlated) led Miyake et al. [
9] to argue for the general idea of both unity and diversity of EFs.
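To make this measurement structure concrete, the following is a minimal sketch of how such a three-factor CFA could be specified in Python, assuming the semopy library and its lavaan-style model syntax; the task names follow the nine tasks of the original study, but the data frame here is filled with random placeholder values rather than actual participant scores.

```python
# A minimal sketch of a Miyake-style three-factor CFA, assuming the
# semopy SEM library; the data are random placeholders standing in
# for participants' scores on the nine tasks.
import numpy as np
import pandas as pd
from semopy import Model

TASKS = ["plus_minus", "number_letter", "local_global",   # shifting
         "keep_track", "tone_monitor", "letter_memory",   # updating
         "antisaccade", "stop_signal", "stroop"]          # inhibition

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((200, len(TASKS))), columns=TASKS)

# Each latent factor is indicated by three manifest tasks; the `~~`
# lines allow the three factors to correlate, capturing "unity".
MODEL_DESC = """
shifting   =~ plus_minus + number_letter + local_global
updating   =~ keep_track + tone_monitor + letter_memory
inhibition =~ antisaccade + stop_signal + stroop
shifting ~~ updating
shifting ~~ inhibition
updating ~~ inhibition
"""

model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # loadings, factor covariances, residual variances
```

In a specification like this, each task's residual variance is exactly the portion of its variance the latent factor fails to explain, which is the quantity at issue in the debate described below.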
Since Miyake and colleagues [
9] introduced their findings to the field, many cognitive scientists have adopted the model as the “gold standard” of EF (see [
7] for a more complete overview), leading subsequent research to use this model to understand and measure EF [
5,
17,
18], as well as to help explain the relationship between EF and other variables [
19,
20]. However, recent work has begun to explore alternative ideas as scientists ponder the best ways to understand the complexity of EF as a construct [
7].
1.2. Is the Three-Factor Model of EF Acceptable?
The Miyake and colleagues [
9] three-factor model takes a latent variable modeling approach, relying on manifest (directly measured, observable) variables to estimate latent (unobserved) constructs. Historically, cognitive ability research has predominantly used this latent variable modeling approach to explore the structure of EF. However, more recent work has called the effectiveness of this approach into question for several reasons.
First, both the original Miyake et al. [
9] study and others that have used the model (e.g., [
5,
19]) have yielded findings in which more than half of the variance in the manifest variables is unexplained by the latent factor. While the model showed a good fit, the variance explained across all EF manifest variables was, at best, modest [
5,
9,
19], indicating that the measurement model is not entirely satisfactory. These findings and those of others [
18,
21,
22] call into question the construct validity of the EF latent factors. Thus, it is important to systematically examine (via meta-analysis) whether studies using the three-factor model of EF consistently show poor measurement models and whether this warrants a reconsideration of the model moving forward.
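For readers less familiar with how such pooling works, below is a minimal random-effects sketch using the DerSimonian-Laird estimator, a standard choice in meta-analysis; the per-study effect sizes and sampling variances are invented placeholders, not values from the studies examined here.

```python
# A minimal DerSimonian-Laird random-effects meta-analysis sketch;
# `es` and `var` are invented placeholder effect sizes and variances.
import numpy as np

es  = np.array([0.90, 0.75, 0.85, 0.70])   # hypothetical per-study effects
var = np.array([0.02, 0.03, 0.025, 0.04])  # hypothetical sampling variances

w = 1.0 / var                              # fixed-effect (inverse-variance) weights
mean_fe = np.sum(w * es) / np.sum(w)
Q = np.sum(w * (es - mean_fe) ** 2)        # Cochran's Q heterogeneity statistic
k = len(es)

# Between-study variance (tau^2), truncated at zero.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1.0 / (var + tau2)                  # random-effects weights
mean_re = np.sum(w_re * es) / np.sum(w_re)
print(f"Q = {Q:.2f}, tau^2 = {tau2:.3f}, pooled effect = {mean_re:.2f}")
```

The Q statistic computed here is also a common basis for the heterogeneity assessment revisited in the limitations discussed later.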
Second, latent variable models present the disadvantage of relying on subjective interpretations of the latent factors, with the decision for how to define the latent factors residing with the researchers [
23,
24]. This is problematic given that measures of cognitive ability are often not process pure; thus, confining such measures to a latent factor labeled by the researcher can misrepresent the true nature of the constructs.
Finally, one of the most significant disadvantages of latent variable modeling lies in the principle of local independence, which states that, once the latent factors are accounted for, the manifest variables are independent of one another. In other words, latent variable models enforce the idea that manifest variables cannot correlate beyond what the latent factors explain [25]. This poses an inherent problem in EF research, as it assumes that performance on a task assigned to one EF component (e.g., working memory) cannot bleed into any other component (e.g., shifting), which is largely unrealistic given the complexity of the construct.
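Stated formally (using standard notation rather than anything from the original paper), local independence requires that the manifest variables $X_1, \dots, X_n$ be mutually independent once the latent variable $\eta$ is conditioned on:

$$P(X_1, \dots, X_n \mid \eta) \;=\; \prod_{i=1}^{n} P(X_i \mid \eta),$$

so any residual correlation between two tasks, over and above what the latent factors account for, violates the model.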
In short, while latent variable approaches have dominated much of our understanding of EF, the inherent problems with this method warrant discussion of its viability. The longstanding debate about the nature and structure of EF reflects the field’s recognition of the complexity of the construct itself. Arguably, this debate (and the psychometric approaches used to understand it) is related to an established division in the field of cognitive science: between researchers who study the mechanisms of cognition and those who study individual differences in cognition [
26]. Utilizing psychometric methods that can bridge this divide and allow cognitive science to address both traditions would be worthwhile. While latent variable modeling is well-developed statistically and is useful as a modeling technique [
26], its approach does not allow cognitive scientists to probe the complexity of EF components, which is necessary for understanding more context-dependent models of EF.
1.3. Network Analysis: An Alternative Approach
One alternative approach to increasing our understanding of EF is network analysis. Network models can surpass the limitations of latent variable models (e.g., the subjectivity of the latent factors, the principle of local independence) by detecting the underlying structure of cognitive abilities in an exploratory manner [
25]. Kan et al. [
25] noted that network models avoid the limitations of latent variable models for several reasons.
First, network modeling does not require researchers to specify latent factors, as there is no common cause for the manifest variables in a study. Therefore, the techniques used in network modeling eliminate the subjectivity found in CFA models of cognitive abilities. Second, network modeling is not constrained by the principle of local independence; therefore, tasks in a study are allowed to freely form connections with one another. Tasks that share similar processes will cluster more closely than tasks in the model that might not share many processes—an approach that is more compatible with modern views of cognitive abilities. Specifically, more contemporary ideas in the field recognize the complexity of the cognitive system and state that cognitive abilities frequently overlap during cognitive activities [
27]. Therefore, network modeling is an approach consistent with the idea that cognitive tasks are not process pure. Taken together, these considerations provide strong arguments for the use of network modeling over latent variable modeling, especially in situations where the tasks of choice are theoretically driven (e.g., complex span tasks).
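As an illustration of this exploratory approach, below is a minimal sketch of a regularized partial-correlation network, using scikit-learn's GraphicalLassoCV as a stand-in for the EBIC-glasso estimators more common in the psychometric literature; the data are again random placeholders.

```python
# A minimal psychometric-network sketch: estimate a sparse inverse
# covariance (graphical lasso) and convert it to partial correlations.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 9))   # placeholder: 200 participants x 9 EF tasks

gl = GraphicalLassoCV().fit(X)      # cross-validated sparsity level
P = gl.precision_                   # estimated inverse covariance matrix

# Edge weights are partial correlations: pcorr_ij = -P_ij / sqrt(P_ii * P_jj).
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)
print(np.round(pcorr, 2))           # zero off-diagonals = absent edges
```

Because a zero entry means two tasks are conditionally independent given all the others, clustering and cross-component connections emerge from the data rather than from researcher-specified factors, which is exactly the property argued for above.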
1.4. The Current Project
This project addresses a significant gap in the cognitive-psychological literature that has both theoretical and practical implications. A coherent model of EF that accounts for acceptable levels of variance in manifest variables of the construct informs basic research on cognitive processes as well as applied work in a variety of settings in which understanding cognitive abilities is important (e.g., the school setting). At the theoretical level, the availability of an EF model that better represents the observable cognitive processes will guide future work that seeks to explain the nature of EF. At the applied level, an enhanced model of EF has important implications for research that often focuses on intervention efficacy; that is, with a model of greater explanatory power, professionals can better target specific skills for intervention and likely develop more effective strategies to remediate these abilities. Across two studies, this paper will first test the psychometric strength and adequacy of the currently accepted model of EF proposed by Miyake et al. [
9] using meta-analysis. In the second study, a new model of EF utilizing network modeling will be described.
4. Discussion
Long-standing discussions in the field of cognitive science revolve around the generality versus specificity of EFs (for a review, see Doebel [
6]), which have important implications at both the theoretical and applied levels. This dichotomy pits the argument that executive functions develop as domain-general mechanisms (the generality perspective) against the argument that they operate differently according to specific task demands (the specificity perspective). Consistent evidence shows that EF is critical for many outcome variables (e.g., academic achievement, job performance); therefore, examining the strength of the psychometric model that has long driven applied work in this area (i.e., Miyake et al. [
9]) is important. The current project examined whether the popular three-factor model of EF proposed by Miyake and colleagues [
9] was psychometrically valid and whether it provided sufficient explanatory power for the structure of EF. Miyake and colleagues [
9] posited that EF components in their model (i.e., shifting, updating, and inhibition) were both unified (in that they all explained executive function) and diverse (each component operated individually). Utilizing two studies, the current work aimed to evaluate the psychometric structure of this model using two different types of analyses, as doing so can inform the field and ensure that our understanding of EF captures the complexity of its basic components.
Results from Study 1 showed that the EF components of shifting, updating, and inhibition inadequately explained the variance in the various cognitive tasks. On the one hand, the mean effect sizes for the three components (0.85 for shifting, 0.80 for updating, and 0.55 for inhibition) indicate that, across studies, the tasks used to measure shifting and updating were successful at predicting performance on these EF components, while the tasks for inhibition were less so. On the other hand, what is particularly troublesome is the wide range of variance explained across the three EF components (72% for shifting, 64% for updating, and 30% for inhibition). While one might argue that the shifting and updating components explained adequate amounts of variance, the inhibition factor did not, and even in the case of shifting and updating, 28–36% of the variance remains unaccounted for. In addition, the variability across the three components (ranging from 30% to 72%) is problematic.
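The percentages follow directly from the mean effect sizes because, in a standardized factor solution, the proportion of variance in a task explained by its factor is the squared loading:

$$R^2 = \lambda^2: \qquad 0.85^2 \approx 0.72, \qquad 0.80^2 = 0.64, \qquad 0.55^2 \approx 0.30.$$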
These findings are troubling for several reasons. First, considering the inhibition factor alone, results demonstrated that the model of inhibition yielded low explanatory power, corroborating other studies with similar results [
32]. Specifically, Rey-Mermet et al. [
32] tested various models of inhibition across older and younger adults using CFA, finding proportions of explained variance ranging from 2% to 36%, and concluding that what is being termed “inhibition” should be reconsidered. A vast amount of research has embraced this construct in predicting real-world outcomes like creativity [
19], delayed gratification [
33], and academic performance [
34]. However, if the factor of inhibition is showing repeated low explanatory power in predicting performance on tasks thought to be tapping into the construct across studies, the currently accepted and prominent model of EFs proposed by Miyake and colleagues [
9] should be revised to consider whether inhibition truly belongs in the model as an EF component. If it does not, alternative EF components should be considered. Further, empirical investigations are needed to establish what the inhibition tasks are actually assessing.
Second, while shifting and updating fared better in the findings of Study 1, some questions remain. As with inhibition, shifting and updating are predictive of real-world abilities [
5,
9,
19]. Given the current results, while more than half of the variance in tasks measuring these EF components (72% for shifting, 64% for updating) is explained by the shifting and updating latent factors, a notable amount of variance remains unexplained. This calls into question whether the psychometric nature of EF for these components is distinct or is a conglomeration of other abilities. For instance, it is likely that tasks designed to measure updating are also measuring components of working memory capacity, like attention, given that updating is a capability necessary for working memory to function optimally. Because of this, it is probable that updating tasks used to derive an updating EF factor also recruit broader working memory abilities.
Miyake and colleagues [
9] posited the general idea of unity and diversity in EF. That is, their model showed that although the EF factors and tasks are correlated, they are also separate factors that operate individually. However, the current meta-analysis showed that although there is much diversity (divergent validity) among the factors at the task level, there is minimal unity (convergent validity) among the tasks measuring the same EF factor. This raises significant questions about the structure of EF. While these questions have been explored at length at the conceptual level [
1], less has been done to examine them at the psychometric level. Arguably, the field’s understanding of the EF construct could be enhanced using network modeling, as this approach allows for recognition of the complexity of the cognitive system.
The network modeling approach adopted in the current project illustrates this point. Study 2 utilized network analysis to test Miyake et al.’s [
9] psychometric structure of EF. Findings revealed a varied set of results that open the door to further questions about what EF looks like conceptually. Network models revealed the lack of a network among EF abilities, with (a) neither divergence [
9,
21] nor convergence of EF tasks [
9,
18], and (b) connections within the EF components but no convergence across EF [
19]. Specifically, across the various models, the tasks of inhibition, shifting, and updating showed high divergent validity, in that tasks measuring the same construct were generally related to each other (less so in the case of inhibition); however, and perhaps more alarmingly, there was low convergent validity, such that tasks from different EF components were minimally correlated with one another. In other words, diversity was seen across the components of shifting, updating, and inhibition, but these components together lacked unity.
Taken together, the findings of Study 1 and Study 2 have important implications in a number of areas. First, the unity and diversity model put forth by Miyake and colleagues [
9], which is considered a gold-standard model in the field, is called into question. For quite some time, cognitive science has considered EF to be made up of three components (i.e., updating, shifting, inhibition) that are both unified and diverse, and this model has been used to demonstrate the importance of EF with regard to a number of outcome variables [
1]. However, as the current project shows, the latent variable model consistently fails to explain adequate variance in the manifest variables, leaving a significant percentage unexplained. Further, the subjective interpretations and the principle of local independence that characterize latent variable modeling add to the ineffectiveness of this approach for conceptualizing EF, as cognitive tasks are not process pure and the cognitive abilities tapped within those tasks will often be connected to one another.
Second, and related to the first implication, the lack of a coherent understanding of EF has significant consequences in the applied sector. Specifically, as cognitive scientists design interventions to help mitigate EF concerns demonstrated by individuals in a variety of settings (e.g., the school setting), it is critical that the targeted components are accurately specified. Intervention work is intricately tied to the theoretical models that guide it, and without a clear conceptualization of the EF construct, it becomes unclear whether mixed findings in intervention research are a result of the interventions themselves or a result of targeting components that are not actually a function of EF. It is critical that the field assess the validity of its constructs to increase the effectiveness of the applied work that follows.
Finally, the results of this project raise further questions surrounding the conceptualization and measurement of the EF construct. Conceptualization and measurement are intricately intertwined. While some might look at these results and question the validity of the EF construct as a whole, it is alternatively suggested that what is needed in the field of cognitive science is a focus on more accurate and updated psychometric measurement of EF. Rather than relying on measurement that restricts the understanding of construct complexity, more informed analyses, like network modeling, that allow for a more intricate, objective look at how the constructs hang together are warranted. In the field of cognitive science, the probable inability to design “process pure” measures highlights the complexity of EF components. While one measure might be designed to assess inhibition, other EF processes (e.g., attention, working memory) might also be assessed simultaneously. Because studies administer different mixtures of measures, tasks across studies assess abilities beyond the intended EFs, which ultimately lowers the explanatory power of the three-factor model of EFs at the psychometric level. It is this type of complexity that network modeling can capture, given its numerous statistical and theoretical strengths. Thus, we are not stating here that the unity and diversity model of EF is inherently wrong, as it serves an important purpose in neuropsychology. Rather, from a psychometric standpoint, we are suggesting that the use of network modeling can help bridge statistical models and theoretical models of EF and provide an alternative way of conceptualizing the nature of EF.
Though the current analysis evaluates the validity and predictive power of the three-factor model of EF, several limitations should be highlighted. First, a relatively small sample of studies was selected for the meta-analysis, which limits the generalizability of the current findings. However, according to the fail-safe N index, it would take 7 studies finding null results to reduce the shifting effect size to 0.50, 8 studies to reduce the inhibition effect size to 0.30, and 6 studies to reduce the updating effect size to 0.50 (a worked illustration of this computation appears at the end of this section). In other words, roughly half of the studies included here would have to show null results to change these findings, which suggests that the results reported here, though based on a relatively small sample of studies, are nonetheless meaningful.

Second, the effect sizes compiled here showed heterogeneity, so the current results should be considered with some caution. It is important to note, however, that heterogeneity may be driven by the issues discussed above (e.g., different tasks and a lack of process-pure tasks); indeed, heterogeneity was shown to be driven in part by the distinct factors and not by the different age groups. Taken together, these results suggest that the current mean effect sizes across the EF factors are unstable, and thus the conclusions derived here should be taken with caution. As noted above, however, this instability is likely due to measurement challenges.

Last, though network modeling holds important advantages over latent variable modeling, some limitations are worth highlighting. First, network models are successful only if the covariance between variables is large [25]. Second, if the data possess high measurement error, then the network structure can be misrepresented and misleading [25]. Finally, given the relative novelty of the network modeling technique in the field of cognitive abilities, there is no standard practice for implementing it on cognitive-behavioral data. Despite these shortcomings, network modeling shows important strengths with regard to uncovering the psychometric nature of EF, as shown in these two studies.
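As the worked illustration promised above, the sketch below assumes Orwin's fail-safe N, the variant that asks how many null-result studies would pull a pooled effect down to a criterion value; the per-factor study count of k = 10 is a hypothetical round number chosen because it reproduces the figures reported here, not a count taken from the meta-analysis itself.

```python
# Orwin's fail-safe N: with k studies and pooled effect es_mean, the
# number of null (zero-effect) studies needed to reduce the pooled
# effect to es_crit is N = k * (es_mean - es_crit) / es_crit.
def orwin_failsafe_n(k: int, es_mean: float, es_crit: float) -> float:
    return k * (es_mean - es_crit) / es_crit

# Hypothetical k = 10 studies per factor; effect sizes and criterion
# values match those discussed in the limitations above.
for factor, es, crit in [("shifting", 0.85, 0.50),
                         ("updating", 0.80, 0.50),
                         ("inhibition", 0.55, 0.30)]:
    print(f"{factor}: about {orwin_failsafe_n(10, es, crit):.0f} null studies")
```

Under these assumptions the sketch yields approximately 7 null studies for shifting, 6 for updating, and 8 for inhibition, matching the values reported above.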