Article

A Dynamic Framework for Modelling Set-Shifting Performances

by Marco D’Alessandro * and Luigi Lombardi
Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto, Italy
* Author to whom correspondence should be addressed.
Behav. Sci. 2019, 9(7), 79; https://doi.org/10.3390/bs9070079
Submission received: 14 May 2019 / Revised: 10 July 2019 / Accepted: 12 July 2019 / Published: 18 July 2019

Abstract

Higher-order cognitive functions can be seen as a class of cognitive processes which are crucial in situations requiring a flexible adjustment of behaviour in response to changing demands of the environment. The cognitive assessment of these functions often relies on tasks which admit a dynamic, or longitudinal, component requiring participants to flexibly adapt their behaviour during the unfolding of the task. An intriguing feature of such experimental protocols is that they allow the performance of an individual to change as the task unfolds. In this work, we propose a Latent Markov Model approach to capture some dynamic aspects of observed response patterns of both healthy and substance dependent individuals in a set-shifting task. In particular, data from a Wisconsin Card Sorting Test were analysed in order to represent performance trends in terms of latent cognitive states dynamics. The results highlighted how a dynamic modelling approach can considerably improve the amount of information a researcher, or a clinician, can obtain from the analysis of a set-shifting task.

1. Introduction

In recent years there has been increasing interest in modelling behavioural data from experimental tasks aimed at investigating higher-level cognitive functions [1,2,3,4,5,6]. Generally, higher-order cognitive functions can be seen as a class of cognitive processes which are crucial in situations requiring a flexible adjustment of behaviour in response to changing demands of the environment [7]. They also allow previous experiences and feedback-related information to be integrated in order to make optimal choices [8]. Deficits at this level of cognitive functioning can be observed in rather heterogeneous clinical populations [9,10,11], each characterized, ideally, by a different pattern of impaired psychological sub-processes (see for example [3]). The cognitive assessment of these functions often relies on tasks which admit a dynamic, or longitudinal, component requiring an individual to modify their behaviour during the unfolding of the task. For instance, consider a general set-shifting framework in which participants learn to pay attention and respond to relevant stimulus features, while ignoring irrelevant ones, as a function of experimental feedback. Here, negative feedback should allow participants to recognize a feature as irrelevant, modifying their responses accordingly. In this context, observed response patterns could consist of the occurrences of casual errors, feedback-related errors, and perseverations on shifting tasks, to name a few (e.g., [12]). The basic idea is that these response patterns reflect the presence (or the absence) of a cognitive impairment, either at a functional or neural level [13].
In this paper we propose a latent variable approach to model cognitive performance on a typical set-shifting task from a group-level perspective. The approach is applied to data from the Wisconsin Card Sorting Test (WCST; [12,14]), a renowned tool for measuring set-shifting and deficient inhibitory processes on the basis of environmental feedback in cognitive settings [15]. In general, the test consists of a target card and a set of stimulus cards with geometric figures that vary according to three perceptual features. The task requires participants to find the correct classification principle by trial and error using the examiner’s feedback. An intriguing, and often underestimated, aspect of such experimental protocols regards how individual performances change as the task unfolds, plausibly due to a “learning to learn” capacity [16] or to shifting cognitive strategies [14]. In our view, a formal analysis of this performance trend (see, for example, [16]) can provide a novel and interesting metric for the cognitive assessment of this kind of test outcome. Indeed, several dynamic models of decision-making [17], learning [18], risky behaviour [19] and categorization [20] have proven able to uncover characteristics of cognitive functioning which could not be detected with a standard (static) analytic approach based on collecting summary measures of individuals’ responses.
In order to take dynamic properties into account in our context, we adopted a Latent Markov Model (LMM; [21,22]) perspective to assess the dynamics of a latent state process underlying the observed behaviour. The basic assumption is that participants may evolve in their latent characteristics/states during the unfolding of the task. Thus, rather than simply analysing how the observed response configuration evolves during the unfolding of the task, our goal becomes modelling the entire evolution of the latent states underlying these responses. The idea that observed behaviour is the final result arising from two or more latent data-generating states is clearly not new (e.g., [23,24]). Here, the basic intuition is that human cognition can be influenced not only by external task demands, which are usually known and observable, but also by unknown latent mental processes and brain states that dynamically change with time [25]. In our context, observing a changing trend in participants’ responses (e.g., an increase in the occurrence of perseverative errors at a given phase of the task) should reflect the fact that a change in the latent process has occurred. The Latent Markov Model nicely captures this intuition by considering the observed responses as a measure of the underlying latent states, and by directly providing a unified account of the dynamics of the latent state process. The reader is referred to [6,26] for different model-based approaches to cognitive phenomena related to those analysed in the present work.
The manuscript is organized as follows. The next section provides the basic features of the LMM framework. In the third section, a model application is presented on a real data set collected using the WCST in the context of substance addiction. The last two sections present a discussion of the results and some conclusions.

2. Materials and Methods

2.1. The Formal Framework

Latent Markov models (LMM) were primarily developed for the analysis of longitudinal data, and were designed to deal with categorical response variables [21]. Generally, a LMM can be seen as a generalization of the latent class model [27] allowing each subject to move between latent classes over time. To do so, these models make use of time-specific latent variables which are assumed to be discrete. Below, we briefly outline the main properties of this modelling approach.
To begin with, consider the directed graph depicted in Figure 1 illustrating the logic behind a basic LMM which evolves across T discrete time steps (e.g., [21,22]).
In a LMM, it is assumed that a sequence of observed response variables, $Y_1, Y_2, \ldots, Y_T$, are conditionally independent given a corresponding sequence of latent variables, $S_1, S_2, \ldots, S_T$, called states. More formally:
$$P(Y_1, Y_2, \ldots, Y_T \mid S_1, S_2, \ldots, S_T) = P(Y_1 \mid S_1)\,P(Y_2 \mid S_2) \cdots P(Y_T \mid S_T).$$
The lack of directional connections (directed arrows) between observed variable nodes reflects the idea that only the latent state dynamics are responsible for the response pattern observed across the entire task. In other words, the evolution of the observed responses in time can be (phenomenologically) considered as the result of transition dynamics between latent states. In particular, the latent process follows a first-order Markov chain in that the latent variable $S_t$ at step $t$ only depends on the outcome of the previous step, $S_{t-1}$ (with $t = 2, \ldots, T$):
$$P(S_1, S_2, \ldots, S_T) = P(S_1) \prod_{t=2}^{T} P(S_t \mid S_{t-1}).$$
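To make this generative structure concrete, the following base R sketch (purely illustrative, with arbitrary parameter values rather than estimates from our data) draws a three-state latent chain across $T = 5$ steps and then samples a categorical response from the state-conditional distribution at each step.

```r
# Minimal sketch: generating data from a basic latent Markov model.
# All parameter values below are arbitrary and purely illustrative.
set.seed(1)

T_steps <- 5                                  # number of time steps (task phases)
states  <- 1:3                                # latent states
resp    <- c("C", "E", "PE")                  # response categories

pi1 <- c(0.6, 0.2, 0.2)                       # initial state distribution P(S_1)
P   <- matrix(c(0.8, 0.1, 0.1,                # transition matrix P(S_t | S_{t-1})
                0.3, 0.6, 0.1,
                0.2, 0.3, 0.5),
              nrow = 3, byrow = TRUE)
Phi <- matrix(c(0.90, 0.05, 0.05,             # conditional response probabilities;
                0.70, 0.20, 0.10,             # rows: states, columns: C, E, PE
                0.40, 0.35, 0.25),
              nrow = 3, byrow = TRUE)

S <- numeric(T_steps)
Y <- character(T_steps)
S[1] <- sample(states, 1, prob = pi1)
Y[1] <- sample(resp, 1, prob = Phi[S[1], ])
for (t in 2:T_steps) {
  S[t] <- sample(states, 1, prob = P[S[t - 1], ])   # latent first-order Markov step
  Y[t] <- sample(resp, 1, prob = Phi[S[t], ])       # response depends only on S_t
}
S; Y
```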
There are at least three main reasons why we consider this modelling approach promising for representing cognitive behaviours observed in set-shifting tasks: (1) a LMM provides a formal representation of the latent states that can be put in relation with a given observed behavioural outcome [28]; (2) it is a method to shape and analyse the unfolding of behaviour in time and its relation with the evolving dynamics of some aspects of cognition; (3) it has a clear probabilistic framework to examine how different intervening factors (e.g., observed covariates) affect the evolving response patterns. Understanding the advantages of such a general framework for modelling complex cognitive phenomena may be of great interest, as discrete latent states can be nicely associated with brain, cognitive, or abstract states that we assume might influence a given observed behaviour.

2.2. Model Application

In this section, we present the proposed modelling approach to analyse participants’ performances in the WCST and show how the LMM framework can account for differences between dynamic patterns in distinct groups. To this purpose, we apply the model to the analysis of an already published dataset (see [6,9]) that, in our context, represents an ideal case of set-shifting task study.

2.3. Participants

In our study, we analysed the performance of 38 substance dependent individuals (SDI) and 44 healthy individuals on the Wisconsin Card Sorting Test (ibidem). Control participants had no history of mental retardation, substance abuse, or any systemic central nervous system disease. Regarding the SDI, the Structured Clinical Interview for DSM-IV [29] was used to determine a diagnosis of substance dependence. All participants in the study were adults (>18 years old) and gave their informed consent for inclusion; the study was approved by the appropriate human subjects committee at the University of Iowa (see [9] for details).

2.4. Task Procedure

In the common version of the WCST, participants are presented with a target card and a set of four stimulus cards. All the cards consist of geometric figures that vary in terms of three features, namely, color (red, green, blue, yellow), shape (triangle, star, cross, circle) and number of objects (1, 2, 3, or 4). Figure 2 illustrates an example of a typical WCST trial. For each trial, a participant is asked to sort the target card with one of the four stimulus cards according to one of the three sorting rules. Each participant’s response is followed by feedback (either positive or negative) telling the individual whether the sorting is right or wrong. After a fixed number of consecutive correct responses, the experimenter changes the sorting rule without any warning to the participant. For each trial, the observed response can be classified as either a correct response, a perseverative error, or a non-perseverative error (see [12] for details). The error-related information is particularly meaningful as it seems to predict executive function deficits and frontal lobe dysfunctions (e.g., [30]). Moreover, accounting for sub-types of error may help in discriminating the cognitive processes that disrupt set-shifting performance in clinical populations (e.g., [31]).

2.5. Data Modelling

In order to model the performance trends, we relied on the following data transformation procedure. First, we codified the observed sequence of participants’ responses according to a neuropsychological criterion proposed by Flashman, Horner, & Freides [32]. In particular, we focused on three categories of responses: correct responses (C), non-perseverative errors (E), and perseverative errors (PE). As a further step, for each participant, we considered the entire response pattern as partitioned into a limited number of blocks, also defined as windows of trials. Our main purpose was to model the dynamics of participants’ response patterns across these trial windows, rather than single trials. To this aim, in our application we considered for each participant five distinct windows (see Supplementary Material for details), which are thought to partition the entire task into (virtual) phases. More precisely, let $Z^{(j)}$ be the vector of responses for individual $j$, with elements taking values in $\{C, E, PE\}$. The response vector is partitioned as follows:
$$Z^{(j)} = \Big( \big(z_1^{(j)}, \ldots, z_{n_j}^{(j)}\big), \big(z_{n_j+1}^{(j)}, \ldots, z_{2 n_j}^{(j)}\big), \ldots, \big(z_{4 n_j+1}^{(j)}, \ldots, z_{5 n_j}^{(j)}\big) \Big)$$
where the element $z_t^{(j)}$ reflects individual $j$'s codified response at trial $t$. The subscript $n_j$ indicates the length of the task phase for individual $j$, and is calculated in order to obtain equally-sized trial windows. It is important to notice that participants can vary in the number of observations within windows. The fact that participants are not homogeneous in the number of trials which constitute each phase is not a matter of concern for our modelling aim, since the task phases are considered to reflect the percentage of progress in the task. At this point, the resulting data structure was organized according to a longitudinal design where a specific block, $Y_t$, consisted of all the observed responses aggregated across all participants for a specific task phase. As an example, consider the data vector for the time occasion $t = 1$, that is, for the first block of the longitudinal design. It consists of the aggregated responses of all participants' first task phases, and can be formally represented as:
$$Y_1 = \Big( \big(z_1^{(1)}, \ldots, z_{n_1}^{(1)}\big), \big(z_1^{(2)}, \ldots, z_{n_2}^{(2)}\big), \ldots, \big(z_1^{(J)}, \ldots, z_{n_J}^{(J)}\big) \Big).$$
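As an illustration of this preprocessing step, the sketch below splits a single (simulated) coded response sequence into five equally sized windows; the handling of sequences whose length is not a multiple of five is an assumption made here for illustration, since these details are given in the Supplementary Material.

```r
# Hypothetical sketch of the windowing step for one participant.
# z is a coded response sequence; how remainders are handled is our assumption.
set.seed(1)
z <- sample(c("C", "E", "PE"), size = 64, replace = TRUE,
            prob = c(0.7, 0.15, 0.15))        # fake coded responses for illustration

n_windows <- 5
n_j <- length(z) %/% n_windows                # equal window length (floor)
z   <- z[seq_len(n_windows * n_j)]            # drop trailing trials (assumption)

windows <- split(z, rep(seq_len(n_windows), each = n_j))
sapply(windows, function(w) table(factor(w, levels = c("C", "E", "PE"))))
```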
Regarding the characterization of the latent process, we adopted a model selection criterion to choose the number of latent states $S$. Since our dependent variable is a categorical response variable with three levels, the possible choices reduce to a two-state model and a three-state model. In order to select the best model we relied on both the BIC (Bayesian Information Criterion; [33]) and the AIC (Akaike Information Criterion; [34]). Note that, for both criteria, smaller values indicate better model performance. Both the two-state and the three-state models are preferable to a baseline one-state model, which does not account for latent process dynamics (Table 1).
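For reference, both criteria are computed in the standard way from the maximized log-likelihood $\hat{\ell}$, the number of free parameters $p$, and the sample size $n$:

$$\mathrm{AIC} = -2\hat{\ell} + 2p, \qquad \mathrm{BIC} = -2\hat{\ell} + p \log n.$$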
However, since the results are very similar for the two candidate models, we adopted a further qualitative model selection criterion. In particular, we compared the estimates for the two models to determine which one provided the most useful and realistic substantive description of the data. We concluded that the three-state model accounted for a more sensible and complete description of set-shifting performances (see [35] for a similar approach). The reader is referred to the Supplementary Material for a more detailed comparison of the models’ estimates. Thus, we required the model to be based on three distinct latent components, which were expected to have a direct psychological interpretation (see the Results section). Moreover, in order to account for group differences in the latent process we also used a binary time-fixed covariate $X$, codifying the membership of each participant to either the Control group ($X = 0$) or the Substance Dependent group ($X = 1$). In this way, we could control for possible differences between the two sub-populations. Therefore, any differences in set-shifting performance trends between the two groups were completely captured by differences in the latent state dynamics (see Appendix A for details).
In what follows, we describe the model parameters and the main probabilistic relations in the system:
(i) 
the conditional response probabilities
$$\phi_{y|s} = P(Y_t = y \mid S_t = s),$$
where $y \in \{C, E, PE\}$ and $s = 1, 2, 3$. This parameter set characterizes the measurement model, which concerns the conditional distribution of the possible responses given the latent process. It is assumed that the measurement model is conditionally independent of the covariate. Here we are not interested in explaining heterogeneity in the response model between the two groups, since in our view only the dynamics of the latent process are responsible for differences in performance trend between groups;
(ii) 
the initial probabilities
$$\pi_{s|0} = P(S_1 = s \mid X = 0), \qquad \pi_{s|1} = P(S_1 = s \mid X = 1),$$
where $s = 1, 2, 3$. This parameter set characterizes the distribution of the initial state across the (latent) states. In particular, $\pi_{s|0}$ and $\pi_{s|1}$ refer to the initial probability vectors of the states for the control group and for the substance dependent group, respectively;
(iii) 
the transition probabilities
$$\pi^{(0)}_{s_t | s_{t-1}} = P(S_t = s_t \mid S_{t-1} = s_{t-1}, X = 0), \qquad \pi^{(1)}_{s_t | s_{t-1}} = P(S_t = s_t \mid S_{t-1} = s_{t-1}, X = 1),$$
where $t = 2, \ldots, 5$ and $s_t, s_{t-1} = 1, 2, 3$. This parameter set characterizes the conditional probabilities of transitions between latent states across the task phases. In particular, $\pi^{(0)}_{s_t|s_{t-1}}$ and $\pi^{(1)}_{s_t|s_{t-1}}$ refer to the transition probabilities for the control group and the substance dependent group, respectively. Here we assume that a specific covariate value entails the characterization of a sub-population with its own initial and transition probabilities of the latent process. In this way, accounting for differences in performance trend relies on explaining heterogeneity in the latent state process between the two groups.
According to this framework, the identification of the probabilistic relationships between latent states and observed responses, as well as those between the latent states themselves, conveys all the information needed to characterize the dynamics of the observed response patterns.
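To show how these three parameter sets combine, the following sketch evaluates the likelihood of a single phase-level response sequence with a standard forward recursion; for simplicity it treats each phase as a single categorical outcome rather than a block of aggregated responses, and the parameter values are arbitrary illustrative numbers rather than estimates.

```r
# Sketch: forward recursion for the likelihood of one phase-level response
# sequence (a deliberate simplification of the block structure described above).
forward_loglik <- function(y, pi1, P, Phi, levels = c("C", "E", "PE")) {
  y_idx <- match(y, levels)
  alpha <- pi1 * Phi[, y_idx[1]]                # alpha_1(s) = pi_s * phi_{y_1|s}
  for (t in 2:length(y_idx)) {
    alpha <- as.vector(alpha %*% P) * Phi[, y_idx[t]]
  }
  log(sum(alpha))                               # log P(Y_1, ..., Y_T)
}

# Arbitrary illustrative parameters (rows of Phi: states; columns: C, E, PE).
pi1 <- c(0.6, 0.2, 0.2)
P   <- matrix(c(0.8, 0.1, 0.1, 0.3, 0.6, 0.1, 0.2, 0.3, 0.5), 3, byrow = TRUE)
Phi <- matrix(c(0.90, 0.05, 0.05, 0.70, 0.20, 0.10, 0.40, 0.35, 0.25), 3, byrow = TRUE)

forward_loglik(c("C", "C", "E", "PE", "C"), pi1, P, Phi)
```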

3. Results

The proposed model was fitted using the LMest package [36] developed within the R framework [37]. LMest relies on an efficient log-likelihood maximization procedure (i.e., the Expectation-Maximization algorithm) for parameter estimation. Moreover, a model selection criterion was used to evaluate whether the model with the group covariate $X$ was preferable to the simpler model without the grouping variable. In particular, we adopted both the BIC (Bayesian Information Criterion; [33]) and the AIC (Akaike Information Criterion; [34]) to measure overall model performance. As expected, the model with the group covariate turned out to be the most appropriate model (see Table 2), thus confirming that the performance patterns clearly differed between the two groups.
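For completeness, the following is a hypothetical sketch of what such a fitting call might look like. The function name and argument layout (`est_lm_cov_latent` with `S`, `X1`, `X2`, and `k`), as well as the integer response coding and covariate dimensions, follow our reading of the LMest interface documented in [36]; they are assumptions that should be checked against the package documentation, and the data objects are random placeholders rather than the study data.

```r
# Hypothetical fitting sketch with the LMest package [36]. Function name,
# argument layout, and response coding are assumptions based on the package
# documentation and may differ across versions.
library(LMest)
set.seed(2)

n  <- 82                                         # 44 controls + 38 SDI
TT <- 5                                          # number of task phases
S  <- matrix(sample(0:2, n * TT, replace = TRUE),
             nrow = n, ncol = TT)                # placeholder responses: 0 = C, 1 = E, 2 = PE
X  <- matrix(rep(c(0, 1), c(44, 38)), ncol = 1)  # group covariate: 0 = control, 1 = SDI

fit <- est_lm_cov_latent(S  = S,
                         X1 = X,                          # covariate on initial probabilities
                         X2 = array(X, c(n, TT - 1, 1)),  # covariate on transition probabilities
                         k  = 3)                          # three latent states
```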

3.1. Conditional Response Probabilities

The estimated conditional response probabilities $\hat{\phi}_{y|s}$ are presented in Table 3. These probabilities allowed us to characterize the latent states. The first state ($s = 1$) showed the highest probability of responding correctly, indicating that participants minimized errors within a task phase. By contrast, the second state ($s = 2$) showed an increased probability of the error component; in particular, the probabilities that a non-perseverative error or a perseverative error occurred were approximately the same. This indicated the adoption of a non-efficient strategy, although the probability of responding correctly was still relatively high. Finally, the third state ($s = 3$) showed a different pattern in which the probability of producing a correct response was lower than the probability of producing an error. The error pattern also entailed a higher perseverative component compared to the second state.
These probability distributions represent cognitive response strategies as a function of the latent component or state. In particular, in our context, State 1 may be easily understood as an Optimal Strategy whereas State 2 seems to characterize a type of Sub-Optimal Strategy. Finally, State 3 indicates a Perseverative Non-Optimal Strategy. Therefore, this latent states characterization may be adopted to describe the average ability to operate shifting cognitive strategies.

3.2. Initial Probabilities

Table 4 reports the model's initial probability configurations. These initial probabilities indicated that the two groups performed the early phase of the test very differently. In particular, the control group showed a higher overall probability of starting the initial test phase in State 1. By contrast, the SDI group showed a higher probability of adopting a strategy admitting an error component at the initial phase of the task. This interesting result could reflect the finding that substance dependent individuals usually show an inefficient initial conceptualization of the task [12].

3.3. Transition Probabilities

All the available information on the dynamics of the latent process is conveyed by the transition probability matrices (see Figure 3). These matrices represent, at a given task phase $t$, the probability of moving from a current state $s$ to a different state $s^*$ or of remaining in the same state $s$.
The transition matrix for the control group showed a clear pattern. First, the diagonal values revealed that the probability of reiterating a certain strategy decreased as the strategy became less optimal, down to a zero probability of remaining in State 3, which clearly represents the non-optimal response strategy. Moreover, the very low probability values in the second and third columns indicate that it was nearly impossible to adopt a response strategy affected by the error component, since the overall probabilities of moving to State 2 or State 3 approached zero. It is also worth noting that the probabilities approaching 1 in the first column indicate a general tendency of the system to move to State 1, suggesting that these participants tended to switch to the optimal strategy if they were not already in that state, and to maintain that strategy for the rest of the task. Clearly, this pattern reflected the tendency to quickly minimize both the perseverative and the non-perseverative components of the error as the task unfolded across time.
The transition matrix for the SDI group showed rather different dynamics. Importantly, the system exhibited a general tendency to move to State 2, the sub-optimal strategy. In particular, it is worth emphasizing that a probability approaching 1 on the main diagonal can be understood as the presence of an absorbing state. This means that, once in State 2, the system tended to reiterate the same latent state, and that SDI participants systematically reiterated the sub-optimal strategy and never moved to the optimal strategy during the task. Further, once in State 3, there was a relatively high probability that an individual remained stuck in that state, indicating the tendency to reiterate the non-optimal strategy and to show a perseverative component of the error. On the one hand, this pattern could reflect the presence of mental rigidity, as it was nearly impossible for substance dependent individuals to switch to the optimal strategy. On the other hand, the tendency to reiterate a sub-optimal strategy by keeping the error component fixed across the task could also be seen as a probabilistic account of the failure-to-maintain-set phenomenon [38]. This is in accordance with findings reporting this peculiar behaviour in substance dependent individuals [16].

3.4. Marginal Latent States Distributions

In order to better understand our model results, we analysed the marginal distribution of the latent states. For each task phase, we derived a probability distribution over the three states for each group. To do so, we relied on basic rules for Markov chains. Let $\pi_t$ be the distribution of the latent states at a certain time step $t$, or task phase, and let the transition matrix $\pi_{s_t|s_{t-1}}$ be denoted by $P$, for notational convenience. For each time step $t+1, t+2, \ldots, T = 5$ we want to compute the quantities $\pi_{t+1}, \pi_{t+2}, \ldots, \pi_T$. The purpose is to move the distribution $\pi_t$ forward one unit of time, starting from $\pi_1$, which is the initial probability vector. It can be shown that $\pi_{t+n} = \pi_{t+(n-1)} P$ [39]. Regarding the control group, Figure 4 shows that the optimal strategy was maintained for the entire duration of the test, and the probability of adopting a strategy admitting an error component decreased quickly. This indicates that participants in the control group tended to learn immediately how to minimize the error component. Figure 5 shows the marginal distributions of the states for the SDI group. The plot shows that the probability of adopting an optimal strategy decreased faster than the probability of adopting a non-optimal perseverative strategy. The sub-optimal strategy with both error components showed the highest probability of being maintained for the rest of the task, suggesting that substance dependent individuals never minimized the error component during the test.
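The propagation described above can be reproduced in a few lines of base R. In the sketch below, the initial vector is the control-group estimate from Table 4, but the transition matrix is an illustrative placeholder (the estimated matrices are only displayed in Figure 3), so the output merely demonstrates the mechanics.

```r
# Sketch: propagating the latent state distribution across task phases via
# pi_{t+1} = pi_t %*% P. The initial vector is the control-group estimate from
# Table 4; the transition matrix P is an illustrative placeholder.
pi1 <- c(0.57, 0.14, 0.29)
P   <- matrix(c(0.95, 0.03, 0.02,
                0.85, 0.10, 0.05,
                0.90, 0.10, 0.00),
              nrow = 3, byrow = TRUE)           # rows: S_{t-1}, columns: S_t

T_steps  <- 5
marginal <- matrix(NA, nrow = T_steps, ncol = 3,
                   dimnames = list(paste0("phase", 1:T_steps),
                                   paste0("state", 1:3)))
marginal[1, ] <- pi1
for (t in 2:T_steps) marginal[t, ] <- marginal[t - 1, ] %*% P
round(marginal, 3)
```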

4. Discussion of Results

The results clearly show that our model is able to capture differences in performance trend between the Control and SDI groups in terms of differences in their latent process transition dynamics. The characterization of the conditional response probabilities makes it possible to reinterpret the latent states as cognitive strategies adopted in a given phase of the task. The results also show that a three-state model is a reasonable choice if we want to differentiate dynamics in strategy shifting between groups. In fact, it is unrealistic to think that individuals can rely on only two (latent) cognitive strategies to accomplish the task, as would be the case with a two-state model. The three states can be clearly interpreted as error-related strategies with gradually increasing error components. However, one might argue that our state process characterization does not account for three distinct latent components, due to similarities in the response probability patterns of some of the states (such as States 1 and 2). In our view, an inspection of the marginal distributions of the latent states in Figure 4 and Figure 5 clarifies that our model actually accounts for three non-overlapping latent components, as reflected by the different marginal state probability trajectories for the two groups.
It is also worth emphasizing that these results can increase the amount of information a researcher can obtain from the assessment of set-shifting performances. Generally, the analysis of data from the WCST reduces to the computation of summary statistics of the scoring measures, which in turn may provide the input for standard statistical analyses, as well as for classification procedures based on cut-off thresholds [15]. Mean scoring measures across individuals provide a simple way to account for group-level differences in performance (Table 5).
In our case study the groups differ in the number of perseverative ($t(80) = 5.62$, $p < 0.001$) and non-perseverative ($t(80) = 6.48$, $p < 0.001$) errors. However, mean differences cannot account for hypotheses about the underlying causes. From our modelling perspective, differences in mean scoring measures can be explained by the heterogeneity in the latent process affecting the way in which individuals within each group respond at a given phase of the task. Thus, a fundamental piece of additional information provided by our model is a characterization of the data-generating process.
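For comparison, the standard static analysis reduces to a pooled-variance two-sample t-test on the error counts (df $= 44 + 38 - 2 = 80$). Since the per-participant counts are not reported here, the sketch below uses placeholder vectors generated from the group means and standard errors in Table 5, so its numerical output is illustrative only.

```r
# Sketch of the standard (static) group comparison on perseverative error
# counts via a pooled-variance two-sample t-test (df = 80, as in the text).
# The vectors below are placeholders, not the original per-participant data.
set.seed(4)
pe_control <- rnorm(44, mean = 4.65,  sd = 0.42 * sqrt(44))   # SE * sqrt(n) = SD
pe_sdi     <- rnorm(38, mean = 14.28, sd = 1.52 * sqrt(38))
t.test(pe_sdi, pe_control, var.equal = TRUE)                  # t statistic with 80 df
```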

5. General Discussion

The modelling approach proposed in this work was able to map the evolution of response patterns in a set-shifting task onto the evolution of a latent state process underlying the observed behaviour. The model provided a parsimonious description of the dynamic processes underlying the data, since we were able to represent the performance trend by using a latent variable with just three categories, representing different cognitive strategies evolving in time. Moreover, the estimated parameters capturing these dynamic aspects could readily be put in relation with psychological constructs of potential clinical relevance. However, a crucial issue concerns the interpretation of these parameters. Although accounting for a data-generating process can convey interesting additional information for the analysis of behavioural outcomes, parameter interpretation is not trivial. Marginal latent state distributions offered a straightforward way to examine dynamic aspects of error-related behaviour. For instance, the marginal distributions showed that the control group (resp. the SDI group) settles into State 1 (resp. State 2) across task phases, which is approximately the same information conveyed by the summary measures of the number of errors for each group. From this perspective, the marginal distributions provided no additional information for the analysis of participants’ performances. Conversely, the transition probability matrices provided a more exhaustive source of information, at the cost of an increased difficulty in interpreting the results (e.g., differences in performance trends between groups must rely on comparisons of row-wise, column-wise, and main diagonal values). Nevertheless, transition probabilities offer the advantage that the parameter estimates can be used for simulation and forecasting purposes. In particular, the transition matrices can be seen as cognitive system profiles, and one might be interested in generating data in order to test sensible hypotheses. For example, given the two estimated profiles, one for each group, a sensible question could be: which system reaches the optimal strategy first, on average, given the assumption that both systems start the task in State 3, the perseverative non-optimal strategy? This kind of investigation could be hard, or even impossible, within standard analytical frameworks based on simple summary statistics of the scoring measures.
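As an illustration of this kind of forecasting exercise, the sketch below simulates both cognitive system profiles starting from State 3 and estimates the mean number of phases needed to first reach State 1; the two transition matrices are illustrative placeholders standing in for the estimated group profiles shown in Figure 3.

```r
# Sketch: expected number of steps to first reach the optimal strategy (State 1)
# when both systems start in State 3. The two transition matrices are
# illustrative placeholders, not the estimated group profiles.
P_control <- matrix(c(0.95, 0.03, 0.02,
                      0.85, 0.10, 0.05,
                      0.90, 0.10, 0.00), nrow = 3, byrow = TRUE)
P_sdi     <- matrix(c(0.30, 0.60, 0.10,
                      0.02, 0.88, 0.10,
                      0.05, 0.45, 0.50), nrow = 3, byrow = TRUE)

steps_to_state1 <- function(P, start = 3, max_steps = 100) {
  s <- start
  for (n in seq_len(max_steps)) {
    s <- sample(1:3, 1, prob = P[s, ])
    if (s == 1) return(n)
  }
  NA                                            # never reached within max_steps
}

set.seed(3)
mean(replicate(5000, steps_to_state1(P_control)), na.rm = TRUE)
mean(replicate(5000, steps_to_state1(P_sdi)),     na.rm = TRUE)
```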
In conclusion, our LMM provides, at least in this first preliminary work, an interesting tool for analysing data with a dynamic component. It also illustrates an efficient way to manage differences between groups by accounting for the heterogeneity in the latent process characteristics between them. However, further work is needed in order to solidly establish connections between parameter estimates and more subtle cognitive constructs.

Supplementary Materials

The following are available online at https://www.mdpi.com/2076-328X/9/7/79/s1.

Author Contributions

Conceptualization, M.D. and L.L.; methodology, M.D.; software, M.D.; validation, M.D.; formal analysis, M.D. and L.L.; data curation, M.D.; writing–original draft preparation, M.D.; writing–review and editing, M.D. and L.L.; visualization, M.D.; supervision, L.L.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LMM   Latent Markov Model
WCST  Wisconsin Card Sorting Test

Appendix A

In order to allow the covariate to condition the characterization of the latent process we adopt a logit parameterization as follows:
$$\log \frac{P(S_1 = s \mid X = x)}{P(S_1 = 1 \mid X = x)} = \log \frac{\pi_{s|x}}{\pi_{1|x}} = x\,\Theta$$
where $s = 2, \ldots, k$, for the initial probabilities, and
$$\log \frac{P(S_t = s \mid S_{t-1} = s^*, X = x)}{P(S_t = s^* \mid S_{t-1} = s^*, X = x)} = \log \frac{\pi_{s \mid s^*, x}}{\pi_{s^* \mid s^*, x}} = x\,\Gamma$$
where $t = 2, \ldots, T = 5$, $s, s^* = 1, 2, 3$ and $s \neq s^*$, for the transition probabilities. In both parameterizations, $x$ is a proper design matrix, whilst $\Theta$ and $\Gamma$ are vectors of regression coefficients.
In this way the covariate entails the characterization of a sub-population with its own initial and transition probabilities of the latent process, whereas the conditional distribution of the response variable given the latent process does not depend on the specific sub-population.
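A minimal sketch of how such a multinomial-logit parameterization maps regression coefficients back onto probabilities (here for the initial probabilities, with State 1 as reference category and arbitrary coefficient values) is the following:

```r
# Sketch: recovering initial probabilities from the multinomial-logit
# parameterization with State 1 as reference category. Coefficient values are
# arbitrary and purely illustrative.
logit_to_probs <- function(x, Theta) {
  # Theta: one column of coefficients per non-reference state (here s = 2, 3);
  # x: covariate (design) vector for one individual, e.g. c(1, group).
  eta <- c(0, as.vector(x %*% Theta))           # linear predictors, 0 for reference
  exp(eta) / sum(exp(eta))                      # softmax over the three states
}

Theta <- matrix(c(-1.2, -0.5,                   # intercepts for states 2 and 3
                   0.8,  1.0),                  # group effects for states 2 and 3
                nrow = 2, byrow = TRUE)
logit_to_probs(c(1, 0), Theta)                  # control group (X = 0)
logit_to_probs(c(1, 1), Theta)                  # SDI group (X = 1)
```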

References

  1. Dehaene, S.; Changeux, J.P. The Wisconsin card sorting test: Theoretical analysis and modeling in a neuronal network. Cereb. Cortex 1992, 1, 62–79. [Google Scholar] [CrossRef] [PubMed]
  2. Busemeyer, J.R.; Stout, J.C. A contribution of cognitive decision models to clinical assessment: Decomposing performance on the Bechara gambling task. Psychol. Assess. 2002, 14, 253–262. [Google Scholar] [CrossRef] [PubMed]
  3. Yechiam, E.; Goodnight, J.; Bates, J.E.; Busemeyer, J.R.; Dodge, K.A.; Pettit, G.S.; Newman, J.P. A formal cognitive model of the go/no-go discrimination task: Evaluation and implications. Psychol. Assess. 2006, 18, 239–249. [Google Scholar] [CrossRef] [PubMed]
  4. Hull, R.; Martin, R.C.; Beier, M.E.; Lane, D.; Hamilton, A.C. Executive function in older adults: A structural equation modeling approach. Neuropsychology 2008, 22, 508–522. [Google Scholar] [CrossRef] [PubMed]
  5. Bartolucci, F.; Solis-Trapala, I.L. Multidimensional Latent Markov Models in a developmental study of inhibitory control and attentional flexibility in early childhood. Psychometrika 2010, 75, 725–743. [Google Scholar] [CrossRef]
  6. Bishara, A.J.; Kruschke, J.K.; Stout, J.C.; Bechara, A.; McCabe, D.P.; Busemeyer, J.R. Sequential learning models for the wisconsin card sort task: Assessing processes in substance dependent individuals. J. Math. Psychol. 2010, 54, 5–13. [Google Scholar] [CrossRef]
  7. Zelazo, P.D.; Muller, U.; Frye, D.; Marcovitch, S. The development of executive function: Cognitive complexity and control-revised. Monogr. Soc. Res. Child Dev. 2003, 68, 93–119. [Google Scholar] [CrossRef]
  8. Busemeyer, J.R.; Stout, J.C.; Finn, P. Using computational models to help explain decision making processes of substance abusers. In Cognitive and Affective Neuroscience of Psychopathology; Barch, D., Ed.; Oxford University Press: New York, NY, USA, 2003. [Google Scholar]
  9. Bechara, A.; Damasio, H. Decision-making and addiction (part I): Impaired activation of somatic states in substance dependent individuals when pondering decisions with negative future consequences. Neuropsychologia 2002, 40, 1675–1689. [Google Scholar] [CrossRef]
  10. Zakzanis, K.K. The subcortical dementia of Huntington’s disease. J. Clin. Exp. Neuropsychol. 1998, 40, 565–578. [Google Scholar] [CrossRef]
  11. Braff, D.L.; Heaton, R.K.; Kuck, J.; Cullum, M. The generalized pattern of neuropsychological deficits in outpatients with chronic schizophrenia with heterogeneous Wisconsin Card Sorting Test results. Arch. Gen. Psychiatry 1991, 48, 891–898. [Google Scholar] [CrossRef]
  12. Heaton, R.K.; Chelune, G.J.; Talley, J.L.; Kay, G.G.; Curtiss, G. Wisconsin Card Sorting Test Manual: Revised and Expanded; Psychological Assessment Resources Inc.: Odessa, FL, USA, 1993. [Google Scholar]
  13. Buchsbaum, B.R.; Greer, S.; Chang, W.L.; Berman, K.F. Meta-analysis of neuroimaging studies of the Wisconsin card-sorting task and component processes. Hum. Brain Mapp. 2005, 25, 35–45. [Google Scholar] [CrossRef]
  14. Berg, E.A. A simple objective technique for measuring flexibility in thinking. J. Gen. Psychol. 1948, 39, 15–22. [Google Scholar] [CrossRef]
  15. Demakis, G.J. A meta-analytic review of the sensitivity of the Wisconsin Card Sorting Test to frontal and lateralized frontal brain damage. Neuropsychology 2003, 17, 255–264. [Google Scholar] [CrossRef]
  16. Tarter, R.E. An analysis of cognitive deficits in chronic alcoholics. J. Nerv. Ment. Dis. 1973, 157, 138–147. [Google Scholar] [CrossRef]
  17. Dai, J.; Pleskac, T.J.; Pachur, T. Dynamic cognitive models of intertemporal choice. Cogn. Psychol. 2018, 104, 29–56. [Google Scholar] [CrossRef]
  18. Gershman, S.J. A Unifying Probabilistic View of Associative Learning. PLoS Comput. Biol. 2015, 11, e1004567. [Google Scholar] [CrossRef]
  19. Wallsten, T.S.; Pleskac, T.J.; Lejuez, C.W. Modeling Behavior in a Clinically Diagnostic Sequential Risk-Taking Task. Psychol. Rev. 2005, 112, 862–880. [Google Scholar] [CrossRef]
  20. Kruschke, J. Models of Categorization. In The Cambridge Handbook of Computational Psychology; Sun, R., Ed.; Cambridge University Press: New York, NY, USA, 2008. [Google Scholar]
  21. Wiggins, L.M. Panel Analysis: Latent Probability Models for Attitude and Behavior Processes; Elsevier: Amsterdam, The Netherlands, 1973. [Google Scholar]
  22. Bartolucci, F.; Farcomeni, A.; Pennoni, F. Latent Markov Models for Longitudinal Data; Chapman and Hall/CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  23. Smallwood, J.; Schooler, J.W. The science of mind wandering: Empirically navigating the stream of consciousness. Annu. Rev. Psychol. 2014, 66, 487–518. [Google Scholar] [CrossRef]
  24. Hawkins, G.E.; Mittner, M.; Forstmann, B.U.; Heathcote, A. On the efficiency of neurally-informed cognitive models to identify latent cognitive states. J. Math. Psychol. 2017, 76, 142–155. [Google Scholar] [CrossRef]
  25. Taghia, J.; Cai, W.; Ryali, S.; Kochalka, J.; Nicholas, J.; Chen, T.; Menon, V. Uncovering hidden brain state dynamics that regulate performance and decision-making during cognition. Nat. Commun. 2018, 9, 2505. [Google Scholar] [CrossRef]
  26. Speekenbrink, M.; Lagnado, D.A.; Wilkinson, L.; Jahanshahi, M.; Shanks, D.R. Models of probabilistic category learning in Parkinson’s disease: Strategy use and the effects of L-dopa. J. Math. Psychol. 2010, 54, 123–136. [Google Scholar]
  27. Pennoni, F. Issues on the Estimation of Latent Variable and Latent Class Models: With Applications in the Social Sciences; Scholars’ Press: Saarbrücken, Germany, 2014. [Google Scholar]
  28. Visser, I. Seven things to remember about hidden Markov models: A tutorial on Markovian models for time series. J. Math. Psychol. 2011, 55, 403–415. [Google Scholar] [CrossRef]
  29. First, M.B.; Spitzer, R.L.; Gibbon, M.; Williams, J.B.W. Structured Clinical Interview for DSM-IV Axis I Disorders, Research Version, Non-Patient Edition (SCID-I:NP); Biometrics Research: New York, NY, USA, 1997. [Google Scholar]
  30. Nagahama, Y.; Okina, T.; Suzuki, N.; Nabatame, H.; Matsuda, M. The cerebral correlates of different types of perseveration in the Wisconsin Card Sorting Test. J. Neurol. Neurosurg. Psychiatry 2005, 76, 169–175. [Google Scholar] [CrossRef]
  31. Miller, H.L.; Ragozzino, M.E.; Cook, E.H.; Sweeney, J.A.; Mosconi, M.W. Cognitive set shifting deficits and their relationship to repetitive behaviors in autism spectrum disorder. J. Autism Dev. Disord. 2015, 45, 805–815. [Google Scholar] [CrossRef]
  32. Flashman, L.A.; Horner, M.D.; Freides, D. Note on scoring perseveration on the Wisconsin card sorting test. Clin. Neuropsychol. 1991, 5, 190–194. [Google Scholar] [CrossRef]
  33. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
  34. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, AC-19, 716–723. [Google Scholar] [CrossRef]
  35. de Haan-Rietdijk, S.; Kuppens, P.; Bergeman, C.S.; Sheeber, L.B.; Allen, N.B.; Hamaker, E.L. On the use of mixed Markov models for intensive longitudinal data. Multivar. Behav. Res. 2017, 52, 747–767. [Google Scholar] [CrossRef]
  36. Bartolucci, F.; Pandolfi, S.; Pennoni, F. LMest: An R Package for Latent Markov Models for Longitudinal Categorical Data. J. Stat. Softw. 2017, 81, 1–38. [Google Scholar] [CrossRef]
  37. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018. [Google Scholar]
  38. Figueroa, I.J.; Youmans, R.J. Failure to Maintain Set: A Measure of Distractibility or Cognitive Flexibility? Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2013, 57, 828–832. [Google Scholar] [CrossRef]
  39. Dobrow, R.P. Introduction to Stochastic Processes with R; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
Figure 1. Conditional (in)dependencies structure of a basic LMM. Shaded nodes represent observed variables. White nodes represent unobserved (latent) variables.
Figure 2. Example of a typical trial in the Wisconsin card sorting test. Arrows represent possible choices. In this example, the current sorting principle is color. Solid arrows, which sort the target card with stimulus cards (A) and (B), represent wrong matches. The dotted arrow, which sorts the target card with stimulus card (C), represents a right match.
Figure 3. Transition probability matrices (left) and relative graphical model representations (right) for the control group (top) and the SDI group (bottom).
Figure 4. Marginal distribution of the latent states for each task phase, for the Control group.
Figure 5. Marginal distribution of the latent states for each task phase, for the SDI group.
Table 1. Latent States selection.

Model         BIC     AIC
1-state       8792    8781
two-state     8561    8490
three-state   8608    8493
Table 2. Model Selection criteria.

Model       BIC     AIC
Basic       8858    8691
Covariate   8608    8493
Table 3. Estimated conditional probabilities of responses given the latent state ($\hat{\phi}_{y|s}$).

y      s = 1    s = 2    s = 3
C      0.93     0.80     0.44
E      0.02     0.10     0.38
PE     0.05     0.10     0.18
Table 4. Estimated initial probabilities for each group.

                     s = 1    s = 2    s = 3
$\hat{\pi}_{s|0}$    0.57     0.14     0.29
$\hat{\pi}_{s|1}$    0.38     0.21     0.41
Table 5. Mean scoring measures (SE in parentheses).

            C               E               PE
Control     66.34 (0.64)    5.25 (0.29)     4.65 (0.42)
SDI         72.34 (2.17)    10.57 (0.96)    14.28 (1.52)
