Article

Evaluating the Construct Validity of Self-Assessment Tools for Students’ Adaptive Expertise

1
Department of Dentistry, Radboud University Medical Center, 6525 EX Nijmegen, The Netherlands
2
School of Education, HAN University of Applied Sciences, 6525 EN Nijmegen, The Netherlands
3
Utrecht Center for Research and Development of Health Professions Education, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
4
Faculty of Educational Sciences, Open University, 6419 AT Heerlen, The Netherlands
*
Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(2), 324; https://doi.org/10.3390/educsci16020324
Submission received: 30 October 2025 / Revised: 12 January 2026 / Accepted: 22 January 2026 / Published: 17 February 2026

Abstract

The construct validity of self-assessment tools designed to measure adaptive expertise, which is crucial for today’s complex work environments, is evaluated in this study. Although students are still novices and do not yet possess fully developed adaptive expertise, its fostering and assessment should begin during higher education, when future professionals build the foundations of their expertise. Three instruments originally developed for working professionals, the Adaptive Expertise Inventory, the Self-Adapt, and the Adaptability Scale, were examined for their applicability to higher education students. Confirmatory factor analysis revealed strong construct and ecological validity for Self-Adapt and the Adaptability Scale, consistent with previous research. The Adaptive Expertise Inventory showed less stability, with inconsistent factor loadings, potentially related to its prompt or conceptual framework. Exploratory factor analysis indicated no shared factor structures across instruments, suggesting limited conceptual clarity. A flexible approach is recommended to match instruments with program needs, particularly for high-stakes decisions such as advancement. Combining generic, domain-specific, and qualitative tools at the program level may yield deeper insights. These tools should be evaluated using adapted quality criteria to ensure valid and reliable student assessments.

1. Introduction

Adaptive expertise is an increasingly prominent concept, driven by the fact that today’s workplace environments are marked by uncertainty, complexity, and ambiguity (Krawczyńska-Zaucha, 2020). This dynamic landscape necessitates the preparation of future professionals to navigate constant change and evolving situations. All students need to develop as adaptive experts in their careers, specifically by enhancing their ability to manage uncertain and complex work situations from the start (van Tartwijk et al., 2023). Adaptive expertise encompasses not only efficiency in familiar and routine situations, also known as routine expertise, but also the capacity to employ flexible problem-solving strategies and devise innovative solutions in response to changing or unforeseen circumstances (Hatano & Inagaki, 1986; Ward et al., 2018). Adaptive expertise builds upon routine expertise, and adaptive performance can be seen as the visible expression of adaptive expertise, which is triggered by (contextual) change in tasks or environments (Pelgrim et al., 2022).
There is an increasing focus on how to prepare students to manage the unknown and to overcome wicked problems (Bowman et al., 2024; Groenier et al., 2025; Veltman et al., 2021). Students, as future professionals, are currently novices and have not yet developed expertise. Moreover, they have had limited opportunities to build adaptive expertise and adaptive performance. Therefore, fostering their (self)-reflection and awareness of the components of adaptive expertise and adaptive performance is crucial for their further development. Self-evaluation tools, such as self-evaluation questionnaires, can be particularly valuable in this regard because they can effectively serve a formative function: they help students map their current level of adaptive expertise, assist in setting goals for improving this expertise, and facilitate discussions with their supervisors about these goals. In this way, self-evaluation questionnaires can function as interventions that support the professional growth of students in their work environments (Yan & Carless, 2021). Despite some limitations, these tools are widely recommended for providing an initial understanding of a particular area of interest (Allen & van der Velden, 2005; Gilmore & Feldon, 2010), and in this case, they can offer insights into students’ perceptions regarding adaptive expertise and adaptive performance.
In the literature, several instruments to measure adaptive expertise or adaptive performance have been described. A recent review of measurement instruments for adaptive expertise and adaptive performance identified 19 evaluation instruments, including 12 self-evaluation instruments (Hissink et al., 2025). Two of the self-evaluation tools were specifically designed for students in higher education: in biomedical engineering (Fisher & Peterson, 2001) and biomedical engineering design (Ferguson et al., 2018). However, these instruments have limited evidence of validity and reliability indicators (Hissink et al., 2025). The remaining self-evaluation instruments have been studied and validated with working professionals but not with future professionals (students) as a target group. Therefore, little is known about the quality of these instruments when used with this population. It is unclear whether the underlying constructs to self-evaluate potential aspects of adaptive expertise and adaptive performance are the same or comparable for students as they are for working professionals. Gaining insight into the construct validity of these instruments is therefore essential to determine to what extent the self-evaluation instruments for measuring adaptive expertise and adaptive performance are appropriate and useful for students. This is particularly important in developing instruments applicable across the full continuum of professional development, from students in higher education to experienced professionals. If these instruments prove suitable for all target groups, students could use them to monitor their own development in these areas. This could help both students and their educators foster awareness of, and reflection on, adaptive expertise and adaptive performance.

2. Research Question

The aim of this study is to evaluate the quality of self-evaluation instruments designed to measure adaptive expertise and adaptive performance when used with higher education students instead of working professionals. This leads to the following research question:
What is the construct validity of the self-evaluation measurement instruments to measure adaptive expertise and adaptive performance of higher education students?

3. Method

3.1. Study Design

The study is evaluative in nature and examines the construct validity of measures of adaptive expertise and adaptive performance among Dutch higher education students, including students from Universities of Applied Sciences and Research Universities. This evaluation is denoted as T1 (time 1). A subset of students participated in a second measurement (T2) to verify the initial findings. Measuring students’ adaptive expertise and/or performance was part of Adapt at Work (2024), a design-based research project on the impact of work-based learning interventions on adaptive expertise development of higher education students (Adapt at Work, 2024; Nieuwenhuis & Fluit, 2019).

3.2. Instrumentation

The selection of measurement instruments in this study is based on an extensive overview of suitable measurement instruments to measure adaptive expertise and adaptive performance (Hissink et al., 2025). In their study, 19 instruments were identified. To narrow the selection, the following inclusion criteria were developed for this study:
  • The instrument is a self-evaluation tool;
  • All items of the instrument are accessible, either within the article, online, or through another publication;
  • The instrument is broadly applicable in different professional domains and work contexts, and not limited to one or a few domains;
  • The instrument has moderate or strong evidence of validity, reliability, and fairness in testing (according to the Hissink et al. overview);
  • The instrument is available in English.
The 19 instruments were independently reviewed by EH (main researcher, educational specialist, dental education and healthcare education) and LN (supervisor, emeritus professor, professional and vocational education). Fourteen of the 19 instruments did not meet the inclusion criteria: 7 were not self-evaluation instruments, and of the remaining 12 self-evaluation instruments, 1 contained items that are not accessible, 1 was not available in English, 1 was not broadly applicable, and 4 did not have moderate or strong evidence of validity, reliability, and/or fairness in testing. This left five instruments that matched the criteria. Three of these five can be used in a wide range of workplace settings and are easily available in both English and Dutch. These three instruments were therefore selected: the Adaptive Expertise Inventory, the Self-Adapt, and the Adaptability Scale.
All three instruments are well grounded in the international literature. A Dutch version of the Self-Adapt and the Adaptability Scale had been previously analysed and described (Oprins et al., 2018; van Dam & Meulders, 2020) and a Dutch version of the Adaptive Expertise Inventory was received from the authors of this instrument (Bohle-Carbonell et al., 2016). Each instrument is briefly outlined below, including the factors of the instrument and a sample question.

3.2.1. The Adaptive Expertise Inventory (Bohle-Carbonell et al., 2016)

The Adaptive Expertise Inventory measures self-reported adaptive expertise in a wide range of workplace settings (mostly professional, scientific, technical, and educational areas). The instrument is centred on adaptive knowledge and expertise within the work domain. It discerns (1) domain-specific and (2) innovative adaptivity, which are empirically supported as two factors of adaptive expertise (Bohle-Carbonell et al., 2016).
According to Bohle-Carbonell et al., adaptive expertise is built on routine expertise, as both forms of expertise contain the ability to perform standard tasks in the domain without errors. The difference only becomes apparent when confronted with a non-standard situation: individuals with adaptive expertise possess a more extensive and integrated knowledge base than do individuals with routine expertise (Hatano & Inagaki, 1986). This helps them to determine when not to rely on their automatic processes; when this happens, they can “slow down” and make conscious efforts to deal with the problem. It follows that they abandon skill- and rule-based decision-making and spend time building a mental model of the situation, in which they draw analogies between standard and novel situations (Barnett & Koslowski, 2002; Chi, 2011). According to Bohle-Carbonell et al., adaptive expertise is a developmental process propelled by problem-solving skills (Tynjälä et al., 1997). Consequently, characteristics of traditional expertise research (e.g., repeated high performance, standardized tasks) play a lesser role compared to the need for non-standard but realistic tasks, which elicit the problem-solving skills of individuals with adaptive expertise. Table 1 shows the factors of the Adaptive Expertise Inventory, including sample items.
The internal consistency of the subscales is acceptable across populations: on the domain skills subscale, α = 0.79 for both professionals and graduates; on the innovative skills subscale, α = 0.78 for professionals and 0.74 for graduates (Bohle-Carbonell et al., 2016).
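These α values follow the standard Cronbach’s alpha formula, α = k/(k − 1) · (1 − Σσ²_item/σ²_total). The paper does not report its analysis software, so the following is only an illustrative Python sketch of how such internal-consistency estimates are computed from an (n_respondents × n_items) score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Perfectly redundant items yield α = 1, while uncorrelated items push α toward 0; values in the 0.74–0.79 range, as reported for this inventory, are conventionally read as acceptable.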

3.2.2. The Self-Adapt (Oprins et al., 2018)

The Self-Adapt is the second part of the Dutch Adaptability Dimensions And Performance Test (D-ADAPT). The D-ADAPT is used in the military and civil domains, and is twofold: respondents are asked to describe job requirements (the Work-Adapt scale) and to evaluate their own capabilities to deal with these requirements (the Self-Adapt scale), using the same items with a different prompt. The instrument measures six dimensions of adaptability: (1) solving difficult problems; (2) handling crisis situations; (3) culturally demanding situations; (4) physically demanding circumstances; (5) handling work stress; and (6) interpersonal interactions. These six dimensions were confirmed through confirmatory factor analysis, both for the Work-Adapt scale and the Self-Adapt scale (Oprins et al., 2018). In this study, only the Self-Adapt is used.
Oprins et al. (2018) view adaptability as a competency of individuals, partly determined by innate personal qualities and partly by acquired skills. Adaptability is considered a competency that can, to a certain extent, be improved by training and experience (Pulakos et al., 2000). Insight into this competency can contribute to the selection and training of employees for a specific job. Adaptability is seen as a multidimensional competency (based on the dimensions of Pulakos et al., 2002) and as a situation-specific competency (J. M. Campbell & McCord, 1999; J. P. Campbell et al., 1993; Hesketh & Neal, 1999; Pulakos et al., 2000). Table 2 shows the factors of the Self-Adapt, including sample items.
The internal consistency of the Self-Adapt dimensions varies across subscales, with Cronbach’s α values ranging from 0.68 to 0.95, indicating generally acceptable to excellent estimated reliability across the different adaptability dimensions (Oprins et al., 2018).

3.2.3. The Adaptability Scale (van Dam & Meulders, 2020)

The Adaptability Scale is used to assess self-reported employee adaptability in workplace settings. The instrument is based on organizational development and focuses on adaptive traits separated into cognitive, behavioural, and affective adaptability. The Adaptability Scale assesses employee adaptability as an individual difference construct. The individual difference construct approach views employee adaptability as a set of underlying characteristics that allow individuals to be effective under changing task conditions (Jundt et al., 2015). This approach assumes that there is a certain stability and generalizability in how individuals relate to changing, uncertain, and novel situations (Baard et al., 2013), following Ployhart and Bliese’s (2006) conceptualization of employee adaptability as individuals’ tendency to be flexible, open-minded, resilient, and ready to actively change or fit into novel, changing, or ambiguous work environments. This conceptualization captures three key elements of adaptability: (1) adaptability is an individual characteristic; (2) adaptation can have both proactive and reactive components; and (3) adaptability should be considered as a state-like capacity. These three key elements show up as stable factors in a confirmatory factor analysis (van Dam & Meulders, 2020). Table 3 shows the factors of the Adaptability Scale, including sample items.
The internal consistency of the Adaptability Scale is good, with McDonald’s ω values around 0.85 across samples (van Dam & Meulders, 2020).
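Unlike Cronbach’s α, McDonald’s ω is computed from standardized factor loadings: ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A brief sketch with illustrative loadings (not the authors’ data):

```python
import numpy as np

def mcdonalds_omega(loadings: np.ndarray) -> float:
    """McDonald's omega for a single factor, given standardized loadings;
    item uniquenesses are taken as 1 - lambda^2."""
    common = loadings.sum() ** 2          # variance shared via the factor
    unique = (1.0 - loadings ** 2).sum()  # residual item variance
    return common / (common + unique)
```

For example, six items each loading 0.7 give ω ≈ 0.85, the magnitude reported for the Adaptability Scale.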

3.3. Participants

Dutch higher education students from five Dutch Research Universities and six Dutch Universities of Applied Sciences participated in the study. The research was conducted in accordance with ethical guidelines, and all students provided informed consent. Data were pseudonymized prior to analysis and securely stored in the Research Drive. Ethical approval was obtained from the Ethical Review Board of the HAN University of Applied Sciences (approval number 312.12/21, Nijmegen, The Netherlands).
The universities were selected because of their attention to adaptive expertise development. They all presented an operational case, consisting of a purportedly innovative work-based learning module or intervention designed to enhance the development of students’ adaptive expertise through open-ended tasks in a work-based learning environment. The majority of the students were full-time students (with some being part-time students) and were between 18 and 26 years old. About two-thirds of the students were pursuing a bachelor’s degree, while one-third were pursuing a master’s program at either one of the Research Universities or one of the Universities of Applied Sciences.
All students who participated in the modules were administered a questionnaire. At the beginning of the module (T1), nearly the entire population of students completed the questionnaire (506 students). At the end of the module (T2), the level of effort to enhance student response varied across universities. As a result, 179 students participated at both T1 and T2, accounting for 35% of the initial participants.

3.4. Data Collection

In the second semester of 2022 or the first semester of 2023, both at the beginning and upon completion of the learning module, students were administered a questionnaire that included all items from the three measurement instruments: the Adaptive Expertise Inventory, the Self-Adapt, and the Adaptability Scale. The Dutch versions of the instruments were used at nine universities, while the English versions were used at two universities due to the presence of international student groups. In most universities, students completed the questionnaire as a classroom activity. The timing of the post-test administration depended on the length of the modules, which ranged from 6 to 24 weeks. Due to COVID regulations, the post-test was administered only online (see Table 4 for the response count).
The data that support the findings of this study are digitally stored on the secured and protected R-station of the Department of Responsive Vocational and Professional Education of HAN University of Applied Sciences.

3.5. Data Analysis

The consistency of the instruments was tested using confirmatory factor analysis (CFA) and exploratory factor analysis (EFA). The main analysis was conducted using the data from T1. An additional examination was conducted on the data from T2, acknowledging that the response rate was only 35%. In the analysis, the data from T1 and T2 were strictly separated, with the analysis of the T2 data serving to validate the results from the T1 data and for descriptive purposes.
The first analysis consisted of applying a CFA to the scores of all students in T1 and T2 for each separate instrument. In the second analysis, data from all three instruments were combined for a joint EFA to explore their commonalities.
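The paper does not state which software was used for these analyses. As a hedged illustration, fitting a factor model with a prespecified number of factors (the per-instrument step) can be done with scikit-learn’s FactorAnalysis and varimax rotation, applying the |0.4| salience cut-off used in the Results section; the function names here are our own:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def fit_factor_model(scores: np.ndarray, n_factors: int) -> np.ndarray:
    """Fit a factor model with a fixed number of factors; return the
    varimax-rotated loading matrix of shape (n_items, n_factors)."""
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax",
                        random_state=0)
    fa.fit(scores)
    return fa.components_.T  # rows: items, columns: factors

def salient(loadings: np.ndarray, cutoff: float = 0.4) -> np.ndarray:
    """Boolean mask of loadings exceeding the |cutoff| threshold."""
    return np.abs(loadings) >= cutoff
```

A clean factor structure then shows each item salient on exactly one factor, which is the clustering pattern the Results section inspects.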

4. Results

4.1. Confirmation of the Identified Factors of the Three Instruments

A CFA was conducted based on the number of factors reported by the authors of the three instruments (though not necessarily adhering strictly to their recommendations), as follows:
  • The Adaptive Expertise Inventory: 2 factors;
  • The Self-Adapt: 6 factors;
  • The Adaptability Scale: 3 factors.
For each instrument, the presentation of the results of the CFAs consists of four parts. Each CFA began with an exploratory analysis to determine the number of factors (Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13).
  • Eigenvalues: First, Table 5, Table 7, Table 9, and Table 11 present the eigenvalues. According to the Kaiser criterion, factors should have eigenvalues of 1.0 or higher. The last factor meeting this criterion is highlighted in bold in each table, indicating the number of factors to be used in the factor analysis.
  • Explained variance: In the tables with the eigenvalues, the explained variance for T1 is also reported, as it provides an indication of the contribution of each factor to the total variance, as well as the relative strength of the factors. A cumulative explained variance of 60% is considered the threshold criterion.
  • Scree plots: Next, the scree plots are displayed (Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8). A scree plot visualizes the eigenvalues and helps identify the number of factors for the factor analysis.
  • Factor loadings: Finally, the tables with the factor loadings are presented (Table 6, Table 8, Table 10, Table 12, and Table 13). Loadings below −0.4 or above 0.4 are highlighted in bold. If the CFA demonstrates that the factors are distinct from one another, there should be a clear clustering of items, with each factor forming a separate group.
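The eigenvalue and explained-variance steps in the list above can be sketched as follows (a numpy-only illustration, not the authors’ code):

```python
import numpy as np

def kaiser_screen(scores: np.ndarray):
    """Return sorted eigenvalues of the item correlation matrix, the number
    of factors with eigenvalue >= 1.0 (Kaiser criterion), and the cumulative
    explained variance in percent."""
    corr = np.corrcoef(scores, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    n_factors = int((eigvals >= 1.0).sum())
    # Eigenvalues of a correlation matrix sum to the number of items,
    # so dividing by that count gives the proportion of variance explained.
    cum_var = np.cumsum(eigvals) / eigvals.size * 100.0
    return eigvals, n_factors, cum_var
```

Plotting `eigvals` against their rank gives the scree plot; the retained factors should jointly reach the 60% cumulative-variance threshold described above.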

4.2. The Adaptive Expertise Inventory of Bohle-Carbonell et al. (2016) (Table 5, Table 6, Figure 1 and Figure 2)

At T1, the Kaiser criterion indicates two factors, while at T2, three factors are indicated. The explained variance (T1) is over 53% for two factors, and 61% for three factors. The scree plots clearly indicate a knee at two factors. Based on the eigenvalues at T1 and on substantive reasoning, a two-factor analysis was selected, as the analyses by Bohle-Carbonell et al. also identified two factors (Bohle-Carbonell et al., 2016).
The results of the CFA for the Adaptive Expertise Inventory differ from those reported by Bohle-Carbonell et al. (2016). A clear separation of DS and IS in two distinct factors was anticipated. However, at T1, several items expected to load onto factor 2 instead load onto factor 1. At T2, only four items load onto factor 1, with more items loading onto factor 2.

4.3. The Self-Adapt of Oprins et al. (2018) (Table 7, Table 8, Figure 3 and Figure 4)

At both T1 and T2, the Kaiser criterion and the scree plot criterion indicate six factors for the Self-Adapt. The explained variance (T1) is almost 60%. Therefore, CFA was conducted with six factors.
The results of the CFA for the Self-Adapt indicate that the factor structure reported by Oprins et al. (2018) closely matches the findings of the current factor analysis. At T1, all items are assigned to the same factors with one exception (item STS1). At T2, the vast majority of items remain assigned to the same factors, though five items exhibit lower factor loadings. Overall, the Self-Adapt scale demonstrates a stable factor structure.

4.4. The Adaptability Scale of van Dam and Meulders (2020) (Table 9, Table 10, Figure 5 and Figure 6)

Both the Kaiser criterion and the scree plot criterion indicate three factors for the Adaptability Scale. The explained variance (T1) is more than 60%. Therefore, CFA was executed with three factors.
The results of the CFA for the Adaptability Scale indicate that the factor structure reported by van Dam and Meulders (2020) generally aligns with the findings of the current factor analysis. At T1, all items load onto the reported factors; however, not all items have high enough loadings to be assigned definitively to a single factor (items A1 and C1 both show small double loadings on two factors, although these do not meet the cut-off of >0.4 or <−0.4; additionally, the difference between the two loadings is more than 0.2). At T2, these two items load onto a different factor (Behavioural). Overall, the Adaptability Scale demonstrates a stable factor structure.

4.5. Convergence in the Factor Structure Among the Three Instruments (Table 11, Table 12, Table 13, Figure 7 and Figure 8)

An EFA combining the three instruments was conducted to investigate whether the instruments share commonalities in their factor structures. This was done by including all items of the three instruments in a single factor analysis, once with T1 data and once with T2 data.
At T1, the Kaiser criterion indicates eleven factors, while at T2, twelve factors are suggested. The explained variance (T1) is over 60% for eleven factors. The scree plot criterion indicates eleven factors at both T1 and T2. Since the total number of factors for the three instruments, based on the authors’ articles, amounts to eleven, an eleven-factor analysis was chosen.
The results of the EFA indicate a total number of eleven factors. However, the factor structure slightly differs from the (combined) factor structure of the three separate instruments. A detailed analysis of the results first shows that the three instruments do not merge in the overall EFA. The items of the three instruments load on different factors and do not show commonalities. Most of the items adhere to the same factors as in the individual CFA analysis.
Regarding the Adaptive Expertise Inventory, we found that at T1, the overall EFA results for the Adaptive Expertise Inventory suggested a one-factor structure for this instrument, which also turned out to be a possibility in the individual CFA. At T2, a three-factor structure was suggested. Within this structure, the DS factor remained relatively intact, but the IS factor did not, as the remaining items are distributed across two factors.
At T1, four factors of the Self-Adapt (CRS, CUS, IPS, and PDC) remain stable, consistent with the individual CFA analyses. The items load onto the same factors and exhibit similar factor loadings. However, two factors (CPS and STS) in the overall EFA differ from the CFA. The CPS factor loses two items, and the STS factor splits into two smaller factors. When a 10-factor structure is applied in a CFA, the STS items merge back into one single factor without affecting the main results of the other factors. At T2, five factors of the Self-Adapt (CPS, CRS, CUS, STS, and PDC) remain stable, similar to the individual CFA analyses. The items adhere to the same factors and show approximately the same factor loadings. However, one factor (IPS) in the overall EFA differs from the CFA; this factor loses one item to another factor, and one item does not load onto any factor. Overall, a six-factor structure appears to be the best fit for the Self-Adapt, although some items either do not load onto any factor or load onto a different factor than expected.
For the Adaptability Scale, we found at T1 that the factor structure was stable and aligned with the individual analyses, with the affective factor becoming even stronger, as item A1 reassigns to this factor in the overall EFA. At T2, a three-factor structure is also suggested. However, two items from the cognitive factor load onto the behavioural factor, and only one item remains loading onto the cognitive factor.

5. Discussion

5.1. Theoretical Considerations

Adaptive expertise is crucial in today’s complex work environment. Although students are still novices and do not yet possess ample expertise, developing adaptive expertise and the resulting adaptive performance is essential for the execution of their future profession. In higher education, considerable attention is therefore already being given to its development (Mylopoulos et al., 2022). Intrapersonal, interpersonal, and organizational interactions can contribute to the development of adaptive expertise, with the practice of reflection playing a crucial role in this process (Kua et al., 2022). Self-evaluation tools are valuable for this as self-reflection enhances students’ learning achievements (Yan & Carless, 2021). To assess the adaptive expertise of students, various instruments are available, most of which are self-evaluation instruments. Though it is important to have instruments that can be applied across the entire professional trajectory, from students in higher education to practicing professionals, most instruments of sufficient quality have never been tested with students. Therefore, the aim of this study was to evaluate the construct validity of each of the self-evaluation measurement instruments—the Adaptive Expertise Inventory, the Self-Adapt, and the Adaptability Scale—with the target group of students. Part of this involved investigating the convergence in the factor structure, which had not been previously examined in either a working population or a student population.
We first evaluated the consistency of the individual instruments to determine the stability of the factors indicated in the original instruments, using CFA. The results of the Self-Adapt and the Adaptability Scale align closely with previous findings (Oprins et al., 2018; van Dam & Meulders, 2020) and we found roughly the same factor structure as reported before. These instruments demonstrate good construct validity. Additionally, since the instruments have been used in ‘real-life’ settings outside controlled conditions, they also exhibit good ecological validity. This makes the instruments suitable for both the student population and working professionals. Bohle-Carbonell et al.’s Adaptive Expertise Inventory showed less stability than reported before (Bohle-Carbonell et al., 2016). At T1, several items expected to load onto factor 2 instead load onto factor 1. At T2, only four items load onto factor 1, with more items loading onto factor 2. We can therefore conclude that, although two factors are found, the distribution of the items at both T1 and T2 is not consistent with previous results. One possible explanation might be conceptual: the distinction between the two factors (Domain Skills and Innovativeness) may not be clear enough, or the questionnaire items might be too similar. Another explanation could be related to the prompt of the questionnaire. The prompt “In the past year…” addresses a different context than the prompts of the other questionnaires, namely, “How effective do you consider yourself at performing this behaviour in your work?” (the Self-Adapt) and “To what extent do you agree with the following statements?” (the Adaptability Scale). The Adaptive Expertise Inventory might not explicitly address non-standard situations sufficiently, although conceptually, according to Bohle-Carbonell et al., this is the key element of (developing) adaptive expertise. 
The prompt, however, does not explicitly indicate that respondents need to reflect on non-standard situations. It might also be possible that students struggle more with the prompt “In the past year…” because they have less (work) experience and therefore have fewer experiences to draw upon that relate to adaptive expertise. Finally, Bohle-Carbonell et al. initially proposed three factors (Metacognitive Skills in addition to Domain Skills and Innovativeness). However, their factor analysis revealed only the latter two factors, Domain Skills and Innovativeness. This, combined with our findings, might suggest that the instrument is based on a construct that is less precisely defined.
Administering all three instruments also provided an opportunity to examine the stability of the factor structure of the instruments together, and to determine whether they measure the same concept. The main conclusion of the combined EFA analysis is that the three instruments do not show any commonalities. On the one hand, this is a remarkable finding because all instruments purport to measure adaptive expertise or adaptive performance (the latter being the visible expression of adaptive expertise). If they were measuring the same concept, an overall analysis would show commonalities, with fewer factors in the factor structures. On the other hand, as Pelgrim et al. (2022) and Cupido et al. (2022) already argued, a lack of conceptual clarity exists regarding the precise or even comparable definitions of adaptive expertise and adaptive performance. Contextual changes in tasks or environments, as well as factors like uncertainty, complexity, and ambiguity, trigger adaptive expertise and adaptive performance. However, the specific behaviours that define these concepts vary among authors and depend on the conceptualizations and operationalizations they use to develop their instruments. This is also evident in the names of the factors of the three instruments: the specific behaviours associated with adaptive expertise or adaptive performance, according to the authors, depend on the (theoretical) construct and the way in which adaptive expertise or adaptive performance is conceptualized and operationalized. This reinforces the conclusion presented by Hissink et al. (2025) in their article: the origins of the instruments exhibit considerable diversity, stemming from various academic disciplines (particularly HRM and education), each characterized by distinct conceptualizations, elaborations, and observable behaviours. The instruments stem from different conceptual islands, and we did not find any bridging factors in this archipelago.

5.2. Practical Considerations

It is crucial for users of the instruments to be thoroughly informed about the different conceptualizations and operationalizations. We therefore propose an approach that can yield different outcomes for different faculties and programs. Validity, alongside reliability and fairness in testing, is one of the principal quality criteria for assessment instruments (American Educational Research Association et al., 2014). All three instruments claim to measure adaptive expertise or adaptive performance; however, the factor analyses indicate that the factors of the individual tests largely remain distinct. Because different authors conceptualize and operationalize the constructs differently, a user of any of the instruments should consider what exactly needs to be (self-)measured. Users should also carefully apply the principles of constructive alignment to determine which instrument best fits a faculty or educational program: a good education system aligns teaching methods and assessments with the learning activities outlined in the objectives (Loughlin et al., 2020). This is especially relevant when important (summative) decisions are based on one of the instruments, for instance passing a learning module or advancing to the next level within a program.

5.3. Recommendations for Future Research

Finally, at the program level it is worthwhile to consider instruments other than generic self-evaluation tools, as these can provide valuable insights and more nuanced evaluations. Measurement at this level does not need to span domains; the focus can instead be on the development of adaptive expertise and adaptive performance within a single domain or even a specific profession. In addition to a generic instrument, program-specific tools, which are often more qualitative in nature, can be employed, such as design scenarios (Walker et al., 2006) or mixed-method evaluations (Yoon et al., 2019; Suh et al., 2023). However, these instruments should be evaluated against adapted quality criteria, and further research is needed both on such criteria and on how to combine several instruments appropriately to make valid and reliable statements about student progress.

6. Conclusions

Building on the discussion above, the following conclusions highlight the main theoretical, methodological, and practical implications of our findings.
This study contributes to the field of adaptive expertise by demonstrating that commonly used self-evaluation instruments (the Adaptive Expertise Inventory, the Self-Adapt, and the Adaptability Scale) show distinct factor structures and limited overlap. Theoretically, this underscores the need for greater conceptual clarity in defining adaptive expertise and adaptive performance, as the operationalizations of these constructs vary across instruments and disciplines. Methodologically, this study provides evidence on the construct validity of these instruments in a student population, extending their applicability beyond working professionals. Practically, educators and program designers are encouraged to carefully consider which instrument aligns with the learning objectives and context of their programs, ensuring constructive alignment between teaching, learning activities, and assessment. Finally, future research could focus on developing domain-specific or mixed-method assessments that capture more nuanced aspects of adaptive expertise and performance, as well as investigating how these instruments can best support longitudinal development in educational and professional trajectories.

Author Contributions

Conceptualization: E.H., L.N.; Methodology: E.H., T.D.L., M.v.d.S., L.N.; Formal analysis: T.D.L.; Investigation: M.P.; Writing—original draft preparation: E.H.; Writing—review and editing: M.v.d.S., L.N.; Supervision: M.v.d.S., L.N.; Funding acquisition: L.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Dutch Nationaal Regieorgaan Onderwijsonderzoek, grant number 40.5.19945.601.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethical Review Board of the HAN University of Applied Sciences (protocol code ECO 312.12/21 and date of approval 23 December 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available in R-station of the Department of Responsive Vocational and Professional Education of HAN University of Applied Sciences (https://doi.org/10.17026/SS/PVQ66Y; accessed on 11 January 2026).

Acknowledgments

This study is part of the Adapt at Work project: a 5-year research project (July 2019–July 2024) initiated in the Netherlands, involving a consortium of 6 universities of applied sciences and 5 research universities. The project centers on the development of adaptive expertise in work-based learning contexts within higher education.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Adapt at Work. (2024). Available online: https://adaptatwork.nl/en/ (accessed on 11 January 2026).
  2. Allen, J. P., & van der Velden, R. K. W. (2005). The role of self-assessment in measuring skills. In REFLEX working paper series No. 2. ROA external reports. ROA.
  3. American Educational Research Association, American Psychological Association & National Council on Measurement in Education (Eds.). (2014). Standards for educational and psychological testing. American Educational Research Association.
  4. Baard, S. K., Rench, T. A., & Kozlowski, S. W. J. (2013). Performance adaptation: A theoretical integration and review. Journal of Management, 40, 48–99.
  5. Barnett, S. M., & Koslowski, B. (2002). Adaptive expertise: Effects of type of experience and the level of theoretical understanding it generates. Thinking & Reasoning, 8(4), 237–267.
  6. Bohle-Carbonell, K., Könings, K. D., Segers, M., & van Merriënboer, J. J. G. (2016). Measuring adaptive expertise: Development and validation of an instrument. European Journal of Work and Organizational Psychology, 25(2), 167–180.
  7. Bowman, S., Salter, J., Stephenson, C., & Humble, D. (2024). Metamodern sensibilities: Toward a pedagogical framework for a wicked world. Teaching in Higher Education, 29(5), 1361–1380.
  8. Campbell, J. M., & McCord, D. M. (1999). Measuring social competence with the Wechsler picture arrangement and comprehension subtests. Assessment, 6(3), 215–223.
  9. Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt, & W. Borman (Eds.), Personnel selection in organizations (pp. 35–70). Jossey-Bass.
  10. Chi, M. (2011). Theoretical perspectives, methodological approaches, and trends in the study of expertise. In Y. Li, & G. Kaiser (Eds.), Expertise in mathematics instruction: An international perspective (pp. 17–39). Springer.
  11. Cupido, N., Ross, S., Lawrence, K., Bethune, C., Fowler, N., Hess, B., van der Goes, T., & Schultz, K. (2022). Making sense of adaptive expertise for frontline clinical educators: A scoping review of definitions and strategies. Advances in Health Sciences Education, 27(5), 1213–1243.
  12. Ferguson, J. H., Lehmann, J., Zastavker, Y. V., Chang, S., Higginson, R. P., & Talgar, C. P. (2018, June 23–July 27). Adaptive expertise: The development of a measurement instrument. 2018 ASEE Annual Conference & Exposition, Salt Lake City, UT, USA.
  13. Fisher, F. T., & Peterson, P. L. (2001, June 24–27). A tool to measure adaptive expertise in biomedical engineering students. ASEE Annual Conference Proceedings (pp. 1249–1263), Albuquerque, NM, USA.
  14. Gilmore, J., & Feldon, D. F. (2010, April 30–May 4). Measuring graduate students’ teaching and research skills through self-report: Descriptive findings and validity. Annual Meeting of the American Educational Research Association, Denver, CO, USA.
  15. Groenier, M., Khaled, A., Kamphorst, B. A., de Jong, T., & Verdonschot, S. G. M. (2025). Becoming an adaptive expert through work-based learning: A realist review. Higher Education Research & Development, 44(1), 1–16.
  16. Hatano, G., & Inagaki, K. (1986). Two courses of expertise. In H. Stevenson, H. Azuma, & K. Hakuta (Eds.), Child development and education in Japan. W. H. Freeman and Co.
  17. Hesketh, B., & Neal, A. (1999). Technology and performance. In D. R. Ilgen, & D. P. Pulakos (Eds.), The changing nature of performance: Implications for staffing, motivation and development (pp. 21–55). Jossey-Bass.
  18. Hissink, E., Pelgrim, E. A. M., Nieuwenhuis, A. F. M., Bus, L., Kuijer-Siebelink, W., & Van Der Schaaf, M. (2025). Measuring adaptive expertise in (becoming) healthcare professionals—A review of measurement instruments and their quality. Advances in Health Sciences Education, 1–27.
  19. Jundt, D. K., Shoss, M. K., & Huang, J. L. (2015). Individual adaptive performance in organizations: A review. Journal of Organizational Behavior, 36, S53–S71.
  20. Krawczyńska-Zaucha, T. (2020). What is the philosophy of education needed in the XXI century? Culture Society Education, 18(2), 467–478.
  21. Kua, J., Teo, W., & Lim, W. S. (2022). Learning experiences of adaptive experts: A reflexive thematic analysis. Advances in Health Sciences Education, 27(5), 1345–1359.
  22. Loughlin, C., Lygo-Baker, S., & Lindberg-Sand, Å. (2020). Reclaiming constructive alignment. European Journal of Higher Education, 11(2), 119–136.
  23. Mylopoulos, M., Dolmans, D. H. J. M., & Woods, N. N. (2022). The imperative for (and opportunities of) research on adaptive expertise in health professions education. Advances in Health Sciences Education, 27(5), 1207–1212.
  24. Nieuwenhuis, A. F. M., & Fluit, C. R. M. G. (2019). Flexible higher education: Lifelong professional competence development in work-based settings [Essay]. Netherlands Organization for Research (NRO).
  25. Oprins, E. A. P. B., van den Bosch, K., & Venrooij, W. (2018). Measuring adaptability demands of jobs and the adaptability of military and civilians. Military Psychology, 30(6), 576–589.
  26. Pelgrim, E., Hissink, E., Bus, L., van der Schaaf, M., Nieuwenhuis, L., van Tartwijk, J., & Kuijer-Siebelink, W. (2022). Professionals’ adaptive expertise and adaptive performance in educational and workplace settings: An overview of reviews. Advances in Health Sciences Education, 27(5), 1245–1263.
  27. Ployhart, R. E., & Bliese, P. D. (2006). Individual adaptability (I-ADAPT) theory: Conceptualizing the antecedents, consequences, and measurement of individual differences in adaptability. In C. S. Burke, L. G. Pierce, & E. Salas (Eds.), Understanding adaptability: A prerequisite for effective performance within complex environments (pp. 3–39). Elsevier.
  28. Pulakos, E. D., Arad, S., Donovan, M. A., & Plamondon, K. E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85(4), 612–624.
  29. Pulakos, E. D., Schmitt, N., Dorsey, D. W., Arad, S., Borman, W. C., & Hedge, J. W. (2002). Predicting adaptive performance: Further tests of a model of adaptability. Human Performance, 15(4), 299–323.
  30. Suh, J. K., Hand, B., Dursun, J. E., Lammert, C., & Fulmer, G. (2023). Characterizing adaptive teaching expertise: Teacher profiles based on epistemic orientation and knowledge of epistemic tools. Science Education, 107(4), 884–911.
  31. Tynjälä, P., Nuutinen, A., Eteläpelto, A., Kirjonen, J., & Remes, P. (1997). The acquisition of professional expertise—A challenge for educational research. Scandinavian Journal of Educational Research, 41, 475–494.
  32. van Dam, K., & Meulders, M. (2020). The adaptability scale: Development, internal consistency, and initial validity evidence. European Journal of Psychological Assessment, 37(2), 123–134.
  33. van Tartwijk, J., van Dijk, E. E., Geertsema, J., Kluijtmans, M., & van der Schaaf, M. (2023). Teacher expertise and how it develops during teachers’ professional lives. In R. J. Tierney, F. Rizvi, & K. Erkican (Eds.), International encyclopedia of education (5th ed., pp. 170–179). Elsevier.
  34. Veltman, M. E., van Keulen, H., & Voogt, J. M. (2021). Using problems with wicked tendencies as vehicles for learning in higher professional education: Towards coherent curriculum design. The Curriculum Journal, 32(4), 710–728.
  35. Walker, J. M. T., King, P. H., & Brophy, S. P. (2006). Design scenarios as an assessment of adaptive expertise. International Journal of Engineering Science, 22(3), 645–651.
  36. Ward, P., Schraagen, J. M., Gore, J., & Roth, E. M. (2018). An introduction to the handbook, communities of practice, and definitions of expertise. In P. Ward, J. M. Schraagen, J. Gore, & E. M. Roth (Eds.), The Oxford handbook of expertise. Oxford University Press.
  37. Yan, Z., & Carless, D. (2021). Self-assessment is about more than self: The enabling role of feedback literacy. Assessment & Evaluation in Higher Education, 47(7), 1116–1128.
  38. Yoon, S. A., Evans, C., Miller, K., Anderson, E., & Koehler, J. (2019). Validating a model for assessing science teacher’s adaptive expertise with computer-supported complex systems curricula and its relationship to student learning outcomes. Journal of Science Teacher Education, 30(8), 890–905.
Figure 1. Scree plot of the Adaptive Expertise Inventory—T1.
Figure 2. Scree plot of the Adaptive Expertise Inventory—T2.
Figure 3. Scree plot of the Self-Adapt—T1.
Figure 4. Scree plot of Self-Adapt—T2.
Figure 5. Scree plot of the Adaptability Scale—T1.
Figure 6. Scree plot of the Adaptability Scale—T2.
Figure 7. Scree plot of the three instruments combined—T1.
Figure 8. Scree plot of the three instruments combined—T2.
Table 1. Factors and sample questions of the Adaptive Expertise Inventory.
Prompt: In the past year…
Factor | Abbreviation | Sample item
Domain Skills | DS1 (α = 0.79) | I was able to develop and integrate new knowledge with what I learned in the past.
Innovative Skills | IS1 (α = 0.78/0.74) | I applied my knowledge in new and unfamiliar situations in areas related to my discipline with a degree of success.
Table 2. Factors and sample questions of the Self-Adapt.
Prompt: How effective do you consider yourself at performing this behaviour in your work?
Factor | Abbreviation | Sample item
Complex problem situations | CPS (α = 0.81) | Analysing an unfamiliar problem.
Crisis situations | CRS (α = 0.94) | Understanding the situation in order to make a proper decision.
Cultural situations | CUS (α = 0.93) | Being open to how people from a different cultural background behave.
Interpersonal situations | IPS (α = 0.68) | Observing the behaviour of other people in order to get to know them.
Stress situations | STS (α = 0.89) | Recognizing signs of stress in complicated situations.
Physically demanding circumstances | PDC (α = 0.95) | Recognizing when physical circumstances make your job harder to carry out.
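The α values reported for each factor are Cronbach’s alpha coefficients of internal consistency. As a minimal illustration of how such a coefficient is computed, the sketch below uses hypothetical 5-point Likert responses of six students to three items of one factor (illustrative data, not the study’s):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: six students, three items (scores 1-5).
scores = [[4, 4, 5],
          [3, 3, 4],
          [5, 4, 5],
          [2, 3, 2],
          [4, 5, 4],
          [3, 2, 3]]
print(round(cronbach_alpha(scores), 2))  # here roughly 0.88: high consistency
```

Alpha rises when items of a factor co-vary strongly relative to their individual variances, which is why each factor’s items are expected to load together.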
Table 3. Factors and sample questions of the Adaptability Scale.
Prompt: To what extent do you agree with the following statements?
Factor | Abbreviation/item number | Sample item
Cognitive | Cog 1 | I am confident that I can handle every challenge.
Behavioural | Beh 1 | I can handle new and unknown situations well.
Affective | Aff 1 | If I have to change my plans, I stay relaxed.
Table 4. Response count.
Nr. | RU (*)/UAS (**) | Domain | Response T1 | Response T2
1 | UAS | Economy | 25 | 3
2 | UAS | Health | 25 | 6
3 | UAS | Social studies | 103 | 16
4 | UAS | Technology | 5 | 2
5 | UAS | Technology | 15 | 10
6 | RU | Science | 130 | 65
7 | RU | Health | 65 | 29
8 | UAS | Interdisciplinary | 5 | 0
9 | RU | Health/Technology | 41 | 19
10 | RU | Education | 48 | 4
11 | RU | Agriculture | 44 | 25
Total | | | 506 | 179
(*) RU = Research University; (**) UAS = University of Applied Sciences.
Table 5. Eigenvalues (T1 and T2) and explained variance (T1) of the factors of the Adaptive Expertise Inventory.
Factor | Eigenvalue T1 | % of variance T1 | % cumulative variance T1 | Eigenvalue T2
1 | 4.656 | 42.328 | 42.328 | 4.280
2 | 1.186 | 10.782 | 53.110 | 1.324
3 | 0.939 | 8.537 | 61.647 | 1.102
4 | 0.783 | 7.114 | 68.761 | 0.812
5 | 0.713 | 6.480 | 75.241 | 0.723
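The percentages in Table 5 follow directly from the eigenvalues: with standardized items, each of the 11 inventory items contributes one unit of variance, so a factor’s explained variance is its eigenvalue divided by 11. A minimal sketch (values reproduce Table 5 up to rounding):

```python
# Explained variance of a factor = eigenvalue / number of items (x 100),
# since each standardized item contributes one unit of variance.
# Eigenvalues are the T1 values of Table 5 (11-item inventory).
eigenvalues_t1 = [4.656, 1.186, 0.939, 0.783, 0.713]
n_items = 11

pct = [round(ev / n_items * 100, 3) for ev in eigenvalues_t1]
cumulative = [round(sum(pct[: i + 1]), 3) for i in range(len(pct))]
print(pct[0], cumulative[1])  # about 42.33 and 53.11, cf. Table 5
```

Small deviations in the third decimal arise because the published percentages were computed from unrounded eigenvalues.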
Table 6. Factor loadings of the Adaptive Expertise Inventory—T1 and T2.
Item | T1 F1 | T1 F2 | T2 F1 | T2 F2
DS1 | 0.551 | 0.223 | 0.112 | 0.58
DS2 | 0.453 | 0.18 | 0.423 | 0.24
DS3 | 0.681 | 0.059 | 0.544 | 0.176
DS4 | 0.813 | −0.092 | 0.997 | −0.236
DS5 | 0.803 | −0.206 | 0.568 | 0.18
DS6 | 0.578 | 0.151 | 0.088 | 0.573
IS1 | 0.463 | 0.298 | 0.159 | 0.642
IS2 | 0.423 | 0.301 | −0.061 | 0.64
IS3 | −0.005 | 0.246 | 0.217 | 0.038
IS4 | −0.006 | 0.705 | −0.035 | 0.578
IS5 | 0.161 | 0.58 | −0.025 | 0.689
Table 7. Eigenvalues (T1 and T2) and explained variance (T1) of the factors of the Self-Adapt.
Factor | Eigenvalue T1 | % of variance T1 | % cumulative variance T1 | Eigenvalue T2
1 | 8.317 | 26.829 | 26.829 | 10.046
2 | 3.312 | 10.684 | 37.513 | 3.173
3 | 2.275 | 7.338 | 44.851 | 2.828
4 | 1.911 | 6.164 | 51.014 | 2.198
5 | 1.348 | 4.349 | 55.363 | 1.854
6 | 1.300 | 4.193 | 59.556 | 1.243
7 | 0.975 | 3.145 | 62.701 | 0.993
8 | 0.856 | 2.763 | 65.464 | 0.961
Table 8. Factor loadings of the Self-Adapt—T1 and T2.
Item | T1 F1 | T1 F2 | T1 F3 | T1 F4 | T1 F5 | T1 F6 | T2 F1 | T2 F2 | T2 F3 | T2 F4 | T2 F5 | T2 F6
CPS 1 | −0.04 | −0.05 | 0.10 | 0.00 | −0.03 | 0.65 | −0.07 | −0.08 | 0.06 | 0.09 | 0.34 | 0.16
CPS 2 | 0.00 | 0.00 | −0.04 | 0.02 | −0.02 | 0.70 | −0.07 | 0.09 | −0.03 | −0.07 | 0.47 | −0.01
CPS 3 | 0.00 | 0.06 | 0.07 | 0.10 | −0.12 | 0.45 | 0.04 | −0.09 | −0.06 | −0.04 | 0.58 | 0.11
CPS 4 | 0.04 | 0.02 | 0.04 | −0.02 | −0.02 | 0.60 | 0.12 | 0.05 | 0.10 | 0.06 | 0.79 | −0.09
CPS 5 | 0.07 | 0.12 | 0.10 | −0.01 | 0.07 | 0.44 | 0.04 | 0.02 | −0.03 | 0.04 | 0.42 | −0.07
CRS 1 | 0.06 | 0.00 | 0.60 | −0.01 | 0.02 | 0.12 | 0.07 | 0.20 | −0.22 | 0.13 | 0.24 | 0.07
CRS 2 | 0.05 | 0.08 | 0.62 | 0.09 | −0.02 | 0.04 | 0.04 | 0.10 | −0.64 | 0.02 | 0.06 | 0.11
CRS 3 | 0.05 | −0.05 | 0.70 | −0.02 | 0.07 | 0.12 | 0.02 | −0.02 | −0.83 | 0.05 | 0.03 | −0.02
CRS 4 | 0.00 | −0.02 | 0.80 | −0.07 | 0.01 | 0.01 | 0.12 | 0.03 | −0.84 | −0.01 | −0.04 | −0.05
CRS 5 | 0.00 | 0.05 | 0.59 | 0.09 | −0.11 | 0.00 | −0.06 | −0.06 | −0.96 | 0.01 | −0.02 | −0.03
CUS 1 | 0.01 | 0.71 | −0.02 | 0.08 | 0.03 | 0.00 | 0.04 | 0.85 | 0.04 | −0.05 | 0.02 | 0.02
CUS 2 | −0.01 | 0.77 | −0.06 | −0.04 | 0.03 | 0.05 | 0.03 | 0.86 | 0.01 | 0.02 | −0.04 | −0.03
CUS 3 | 0.00 | 0.82 | −0.03 | −0.10 | −0.14 | 0.06 | 0.08 | 0.84 | −0.03 | 0.03 | −0.01 | −0.02
CUS 4 | 0.01 | 0.78 | 0.09 | 0.00 | 0.04 | −0.06 | −0.05 | 0.86 | −0.07 | 0.09 | 0.00 | 0.01
CUS 5 | 0.03 | 0.68 | 0.02 | 0.12 | 0.08 | 0.00 | 0.00 | 0.84 | −0.11 | −0.02 | 0.06 | −0.07
IPS 1 | 0.00 | 0.09 | 0.20 | 0.44 | −0.11 | 0.03 | 0.10 | 0.33 | −0.01 | 0.02 | 0.03 | 0.30
IPS 2 | −0.08 | 0.10 | 0.24 | 0.53 | −0.14 | −0.09 | −0.01 | 0.23 | 0.01 | 0.15 | 0.06 | 0.34
IPS 3 | 0.09 | 0.08 | 0.10 | 0.49 | −0.09 | 0.06 | 0.01 | 0.37 | 0.08 | 0.07 | 0.04 | 0.25
IPS 4 | 0.03 | −0.01 | −0.11 | 0.68 | −0.07 | 0.00 | 0.16 | −0.03 | 0.11 | 0.13 | 0.05 | 0.59
IPS 5 | 0.05 | 0.02 | −0.03 | 0.71 | 0.18 | 0.10 | 0.00 | 0.03 | −0.09 | −0.04 | 0.00 | 0.80
STS 1 | 0.01 | −0.01 | 0.10 | 0.17 | −0.38 | 0.14 | 0.12 | 0.08 | −0.12 | 0.55 | −0.03 | 0.16
STS 2 | 0.04 | −0.04 | −0.04 | 0.11 | −0.67 | 0.05 | −0.01 | 0.04 | −0.06 | 0.76 | −0.07 | 0.07
STS 3 | 0.08 | 0.00 | 0.00 | 0.05 | −0.64 | −0.03 | −0.01 | 0.08 | −0.05 | 0.81 | −0.07 | 0.03
STS 4 | 0.02 | −0.02 | 0.00 | −0.07 | −0.75 | 0.06 | 0.03 | −0.03 | 0.11 | 0.93 | 0.10 | −0.11
STS 5 | 0.07 | 0.04 | 0.03 | −0.06 | −0.69 | 0.00 | 0.01 | −0.07 | −0.04 | 0.85 | 0.05 | −0.04
PDC 1 | 0.48 | 0.00 | −0.01 | 0.01 | −0.22 | 0.05 | 0.83 | 0.02 | 0.01 | 0.01 | 0.07 | 0.08
PDC 2 | 0.64 | 0.04 | 0.05 | 0.06 | −0.06 | −0.08 | 0.88 | −0.03 | −0.03 | 0.02 | −0.03 | 0.03
PDC 3 | 0.73 | −0.03 | 0.09 | −0.02 | 0.00 | −0.07 | 0.92 | −0.03 | 0.02 | −0.02 | 0.01 | 0.03
PDC 4 | 0.73 | 0.06 | 0.10 | −0.07 | −0.09 | −0.05 | 0.89 | 0.01 | −0.04 | 0.07 | −0.04 | −0.04
PDC 5 | 0.78 | −0.06 | −0.07 | 0.01 | 0.01 | 0.11 | 0.90 | −0.03 | −0.05 | 0.01 | 0.03 | 0.00
PDC 6 | 0.72 | 0.04 | −0.04 | 0.04 | 0.09 | 0.08 | 0.89 | 0.10 | 0.00 | −0.03 | −0.02 | −0.06
Table 9. Eigenvalues (T1 and T2) and explained variance (T1) of the factors of the Adaptability Scale.
Factor | Eigenvalue T1 | % of variance T1 | % cumulative variance T1 | Eigenvalue T2
1 | 4.181 | 41.811 | 41.811 | 4.446
2 | 1.499 | 14.995 | 56.805 | 1.569
3 | 1.047 | 10.470 | 67.275 | 0.949
4 | 0.638 | 6.385 | 73.660 | 0.642
5 | 0.617 | 6.173 | 79.833 | 0.561
Table 10. Factor loadings of the Adaptability Scale—T1 and T2.
Item | T1 F1 | T1 F2 | T1 F3 | T2 F1 | T2 F2 | T2 F3
Cognitive 1 | 0.377 | 0.086 | 0.306 | 0.681 | 0.096 | 0.111
Cognitive 2 | 0.067 | −0.006 | 0.716 | −0.046 | −0.16 | 0.598
Cognitive 3 | −0.052 | −0.103 | 0.748 | 0.167 | 0.12 | 0.709
Behavioural 1 | 0.618 | −0.072 | 0.133 | 0.487 | −0.057 | 0.38
Behavioural 2 | 0.827 | 0.012 | 0.02 | 0.621 | −0.079 | 0.155
Behavioural 3 | 0.732 | −0.031 | −0.054 | 0.884 | −0.045 | −0.117
Affective 1 | 0.399 | −0.347 | −0.069 | 0.429 | −0.315 | −0.022
Affective 2 | 0.091 | −0.703 | 0.036 | 0.004 | −0.874 | −0.027
Affective 3 | −0.032 | −0.75 | 0.025 | 0.028 | −0.746 | −0.025
Affective 4 | −0.029 | −0.838 | 0.05 | 0.017 | −0.67 | 0.152
Table 11. Eigenvalues (T1 and T2) and explained variance (T1) of measures of the three instruments combined.
Factor | Eigenvalue T1 | % of variance T1 | % cumulative variance T1 | Eigenvalue T2
1 | 11.019 | 21.190 | 21.190 | 11.629
2 | 4.283 | 8.237 | 29.427 | 5.197
3 | 3.277 | 6.302 | 35.729 | 3.439
4 | 2.960 | 5.693 | 41.422 | 3.030
5 | 2.245 | 4.318 | 45.740 | 2.750
6 | 1.919 | 3.689 | 49.429 | 2.187
7 | 1.428 | 2.746 | 52.175 | 1.842
8 | 1.370 | 2.635 | 54.810 | 1.664
9 | 1.312 | 2.524 | 57.334 | 1.449
10 | 1.082 | 2.082 | 59.415 | 1.336
11 | 1.008 | 1.938 | 61.353 | 1.222
12 | 0.993 | 1.910 | 63.264 | 1.110
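A common heuristic for reading such eigenvalue tables is the Kaiser criterion: retain factors whose eigenvalue exceeds 1 (the scree plots in Figures 7 and 8 serve a similar purpose). A minimal sketch applying it to the T1 eigenvalues of Table 11 (only the heuristic is illustrated; it is not the full retention procedure of this study):

```python
# Kaiser criterion: retain factors with eigenvalue > 1.
# Eigenvalues are the combined T1 values of Table 11 (first twelve factors).
eigenvalues_t1 = [11.019, 4.283, 3.277, 2.960, 2.245, 1.919,
                  1.428, 1.370, 1.312, 1.082, 1.008, 0.993]
retained = [ev for ev in eigenvalues_t1 if ev > 1]
print(len(retained))  # 11 factors pass the criterion at T1
```

That eleven factors pass the criterion for 52 items illustrates how little variance the combined solution concentrates in shared factors.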
Table 12. Factor loadings of the three instruments combined—T1.
Left: CFA per instrument (= Table 6, Table 8 and Table 10), Factors 1–6; right: exploratory factor analysis of the three instruments combined, Factors 1–11.
Item | CFA F1 | CFA F2 | CFA F3 | CFA F4 | CFA F5 | CFA F6 | EFA F1 | EFA F2 | EFA F3 | EFA F4 | EFA F5 | EFA F6 | EFA F7 | EFA F8 | EFA F9 | EFA F10 | EFA F11
DS1 | 0.551 | 0.223 | – | – | – | – | −0.01 | −0.08 | 0.00 | 0.63 | 0.01 | 0.01 | 0.03 | 0.05 | −0.01 | 0.10 | 0.01
DS2 | 0.453 | 0.18 | – | – | – | – | 0.10 | −0.08 | −0.01 | 0.50 | 0.05 | −0.05 | −0.10 | −0.09 | 0.07 | −0.02 | 0.05
DS3 | 0.681 | 0.059 | – | – | – | – | −0.11 | 0.04 | 0.02 | 0.66 | 0.13 | −0.04 | 0.01 | −0.04 | 0.06 | 0.08 | 0.00
DS4 | 0.813 | −0.092 | – | – | – | – | −0.10 | 0.02 | −0.03 | 0.74 | 0.01 | −0.02 | 0.07 | 0.03 | −0.03 | 0.09 | −0.05
DS5 | 0.803 | −0.206 | – | – | – | – | −0.08 | −0.01 | 0.01 | 0.61 | −0.11 | 0.02 | 0.14 | 0.00 | −0.02 | 0.21 | −0.08
DS6 | 0.578 | 0.151 | – | – | – | – | 0.07 | −0.05 | −0.06 | 0.60 | −0.04 | −0.01 | 0.05 | −0.02 | −0.06 | 0.05 | 0.02
IS1 | 0.463 | 0.298 | – | – | – | – | 0.20 | 0.03 | 0.00 | 0.60 | −0.03 | −0.01 | 0.10 | −0.02 | 0.03 | −0.11 | 0.05
IS2 | 0.423 | 0.301 | – | – | – | – | 0.20 | 0.07 | −0.06 | 0.52 | −0.12 | −0.04 | −0.01 | 0.16 | 0.02 | −0.03 | 0.08
IS3 | −0.005 | 0.246 | – | – | – | – | −0.17 | −0.01 | −0.05 | 0.12 | 0.19 | 0.02 | −0.10 | −0.03 | 0.12 | −0.05 | 0.15
IS4 | −0.006 | 0.705 | – | – | – | – | 0.13 | −0.09 | 0.10 | 0.29 | 0.10 | −0.04 | −0.05 | 0.00 | 0.00 | 0.03 | 0.31
IS5 | 0.161 | 0.58 | – | – | – | – | 0.09 | 0.10 | −0.04 | 0.42 | 0.10 | −0.02 | −0.09 | 0.08 | 0.08 | −0.11 | 0.27
CPS 1 | −0.04 | −0.05 | 0.10 | 0.00 | −0.03 | 0.65 | 0.56 | −0.05 | 0.02 | 0.08 | 0.19 | 0.03 | 0.02 | 0.05 | 0.05 | −0.02 | 0.03
CPS 2 | 0.00 | 0.00 | −0.04 | 0.02 | −0.02 | 0.70 | 0.54 | −0.03 | −0.03 | 0.12 | 0.08 | −0.02 | 0.01 | 0.04 | 0.06 | 0.08 | 0.00
CPS 3 | 0.00 | 0.06 | 0.07 | 0.10 | −0.12 | 0.45 | 0.38 | −0.11 | −0.10 | 0.05 | 0.13 | −0.01 | 0.09 | −0.05 | 0.04 | −0.05 | 0.12
CPS 4 | 0.04 | 0.02 | 0.04 | −0.02 | −0.02 | 0.60 | 0.52 | 0.00 | −0.06 | −0.02 | 0.11 | −0.06 | −0.01 | −0.05 | 0.06 | 0.05 | 0.07
CPS 5 | 0.07 | 0.12 | 0.10 | −0.01 | 0.07 | 0.44 | 0.36 | 0.06 | −0.15 | −0.01 | 0.15 | −0.08 | −0.03 | 0.03 | 0.04 | 0.07 | 0.09
CRS 1 | 0.06 | 0.00 | 0.60 | −0.01 | 0.02 | 0.12 | 0.11 | −0.05 | −0.01 | −0.01 | 0.59 | −0.07 | 0.01 | 0.05 | −0.05 | 0.04 | 0.05
CRS 2 | 0.05 | 0.08 | 0.62 | 0.09 | −0.02 | 0.04 | 0.10 | −0.06 | −0.09 | −0.02 | 0.58 | −0.09 | 0.13 | −0.01 | −0.03 | −0.01 | 0.01
CRS 3 | 0.05 | −0.05 | 0.70 | −0.02 | 0.07 | 0.12 | 0.09 | 0.04 | 0.04 | 0.07 | 0.69 | −0.06 | 0.00 | 0.00 | −0.01 | 0.04 | 0.04
CRS 4 | 0.00 | −0.02 | 0.80 | −0.07 | 0.01 | 0.01 | 0.07 | 0.03 | 0.02 | −0.12 | 0.76 | −0.02 | −0.01 | 0.06 | 0.05 | 0.01 | 0.00
CRS 5 | 0.00 | 0.05 | 0.59 | 0.09 | −0.11 | 0.00 | 0.08 | −0.14 | −0.06 | −0.01 | 0.59 | −0.03 | 0.14 | 0.10 | 0.01 | −0.02 | −0.14
CUS 1 | 0.01 | 0.71 | −0.02 | 0.08 | 0.03 | 0.00 | −0.04 | 0.10 | −0.72 | −0.02 | −0.03 | −0.01 | 0.05 | 0.00 | 0.11 | 0.16 | −0.01
CUS 2 | −0.01 | 0.77 | −0.06 | −0.04 | 0.03 | 0.05 | 0.04 | 0.02 | −0.77 | −0.01 | −0.05 | 0.01 | −0.04 | 0.08 | 0.00 | 0.06 | −0.10
CUS 3 | 0.00 | 0.82 | −0.03 | −0.10 | −0.14 | 0.06 | 0.07 | −0.04 | −0.83 | −0.01 | −0.05 | −0.02 | −0.09 | −0.04 | 0.09 | −0.06 | 0.04
CUS 4 | 0.01 | 0.78 | 0.09 | 0.00 | 0.04 | −0.06 | −0.06 | −0.01 | −0.78 | 0.02 | 0.10 | −0.01 | 0.03 | −0.01 | −0.08 | −0.05 | 0.00
CUS 5 | 0.03 | 0.68 | 0.02 | 0.12 | 0.08 | 0.00 | 0.03 | −0.07 | −0.69 | 0.03 | 0.00 | −0.03 | 0.17 | −0.03 | −0.20 | −0.03 | 0.06
IPS 1 | 0.00 | 0.09 | 0.20 | 0.44 | −0.11 | 0.03 | 0.03 | 0.07 | −0.12 | 0.07 | 0.16 | −0.03 | 0.38 | −0.08 | 0.26 | −0.07 | 0.05
IPS 2 | −0.08 | 0.10 | 0.24 | 0.53 | −0.14 | −0.09 | −0.06 | 0.04 | −0.12 | 0.10 | 0.19 | 0.04 | 0.44 | −0.10 | 0.27 | −0.07 | 0.09
IPS 3 | 0.09 | 0.08 | 0.10 | 0.49 | −0.09 | 0.06 | 0.05 | −0.07 | −0.09 | −0.02 | 0.11 | −0.11 | 0.46 | −0.08 | 0.10 | 0.07 | 0.11
IPS 4 | 0.03 | −0.01 | −0.11 | 0.68 | −0.07 | 0.00 | −0.03 | −0.13 | 0.00 | 0.07 | −0.05 | −0.01 | 0.68 | 0.06 | 0.00 | −0.06 | 0.02
IPS 5 | 0.05 | 0.02 | −0.03 | 0.71 | 0.18 | 0.10 | 0.06 | 0.12 | −0.05 | 0.09 | 0.05 | −0.03 | 0.65 | 0.00 | −0.01 | 0.02 | 0.00
STS 1 | 0.01 | −0.01 | 0.10 | 0.17 | −0.38 | 0.14 | 0.12 | −0.01 | 0.00 | −0.02 | 0.08 | −0.06 | 0.11 | 0.09 | 0.58 | 0.08 | −0.09
STS 2 | 0.04 | −0.04 | −0.04 | 0.11 | −0.67 | 0.05 | 0.04 | −0.26 | 0.03 | 0.02 | −0.10 | −0.10 | 0.07 | 0.01 | 0.60 | −0.05 | 0.02
STS 3 | 0.08 | 0.00 | 0.00 | 0.05 | −0.64 | −0.03 | 0.02 | −0.25 | 0.00 | −0.06 | −0.07 | −0.14 | 0.02 | −0.01 | 0.55 | −0.03 | 0.00
STS 4 | 0.02 | −0.02 | 0.00 | −0.07 | −0.75 | 0.06 | 0.08 | −0.72 | 0.03 | −0.01 | 0.00 | −0.05 | −0.02 | −0.03 | 0.15 | −0.01 | 0.03
STS 5 | 0.07 | 0.04 | 0.03 | −0.06 | −0.69 | 0.00 | −0.05 | −0.75 | −0.02 | 0.05 | 0.08 | −0.08 | −0.01 | −0.03 | 0.07 | 0.02 | −0.01
PDC 1 | 0.48 | 0.00 | −0.01 | 0.01 | −0.22 | 0.05 | 0.00 | −0.14 | 0.00 | 0.06 | 0.01 | −0.51 | 0.01 | 0.05 | 0.14 | 0.04 | −0.11
PDC 2 | 0.64 | 0.04 | 0.05 | 0.06 | −0.06 | −0.08 | −0.09 | −0.07 | −0.02 | −0.08 | 0.06 | −0.65 | 0.09 | −0.08 | 0.00 | 0.11 | 0.02
PDC 3 | 0.73 | −0.03 | 0.09 | −0.02 | 0.00 | −0.07 | −0.03 | 0.07 | 0.02 | −0.05 | 0.04 | −0.75 | −0.04 | −0.05 | 0.05 | −0.02 | 0.03
PDC 4 | 0.73 | 0.06 | 0.10 | −0.07 | −0.09 | −0.05 | −0.01 | −0.09 | −0.07 | 0.05 | 0.08 | −0.75 | −0.05 | 0.07 | −0.01 | −0.11 | −0.10
PDC 5 | 0.78 | −0.06 | −0.07 | 0.01 | 0.01 | 0.11 | 0.05 | 0.02 | 0.05 | 0.06 | −0.07 | −0.78 | −0.01 | 0.01 | 0.00 | −0.04 | 0.11
PDC 6 | 0.72 | 0.04 | −0.04 | 0.04 | 0.09 | 0.08 | 0.04 | 0.04 | −0.04 | 0.04 | −0.03 | −0.70 | 0.04 | 0.06 | −0.05 | 0.01 | 0.02
Cog 1 | 0.377 | 0.086 | 0.306 | – | – | – | 0.12 | −0.06 | −0.03 | 0.05 | 0.16 | −0.09 | −0.13 | −0.01 | −0.05 | 0.24 | 0.33
Cog 2 | 0.067 | −0.006 | 0.716 | – | – | – | −0.02 | −0.03 | −0.16 | 0.18 | 0.02 | 0.01 | −0.05 | 0.02 | −0.01 | 0.60 | 0.15
Cog 3 | −0.052 | −0.103 | 0.748 | – | – | – | 0.17 | 0.05 | −0.03 | 0.26 | 0.02 | 0.03 | −0.07 | 0.11 | 0.07 | 0.48 | 0.03
Beh 1 | 0.618 | −0.072 | 0.133 | – | – | – | 0.00 | 0.00 | 0.00 | 0.06 | 0.01 | −0.02 | 0.08 | 0.17 | 0.07 | 0.14 | 0.58
Beh 2 | 0.827 | 0.012 | 0.02 | – | – | – | 0.06 | −0.03 | −0.05 | −0.07 | −0.05 | −0.05 | 0.15 | 0.18 | −0.06 | 0.14 | 0.62
Beh 3 | 0.732 | −0.031 | −0.054 | – | – | – | 0.13 | −0.07 | −0.01 | −0.01 | −0.02 | −0.13 | 0.08 | 0.15 | −0.06 | −0.02 | 0.57
Aff 1 | 0.399 | −0.347 | −0.069 | – | – | – | −0.04 | −0.18 | −0.02 | −0.01 | 0.03 | 0.07 | 0.11 | 0.46 | −0.06 | 0.00 | 0.26
Aff 2 | 0.091 | −0.703 | 0.036 | – | – | – | 0.05 | 0.02 | 0.01 | −0.02 | 0.00 | 0.00 | 0.04 | 0.75 | 0.06 | 0.04 | 0.03
Aff 3 | −0.032 | −0.75 | 0.025 | – | – | – | 0.02 | 0.05 | 0.01 | −0.08 | 0.05 | 0.00 | −0.05 | 0.72 | 0.02 | 0.03 | 0.00
Aff 4 | −0.029 | −0.838 | 0.05 | – | – | – | −0.10 | 0.05 | −0.02 | 0.11 | 0.05 | −0.07 | −0.08 | 0.84 | 0.02 | −0.04 | 0.00
Table 13. Factor loadings of the three instruments combined—T2.
Left: CFA per instrument (= Table 6, Table 8 and Table 10), Factors 1–6; right: exploratory factor analysis of the three instruments combined, Factors 1–11.
Item | CFA F1 | CFA F2 | CFA F3 | CFA F4 | CFA F5 | CFA F6 | EFA F1 | EFA F2 | EFA F3 | EFA F4 | EFA F5 | EFA F6 | EFA F7 | EFA F8 | EFA F9 | EFA F10 | EFA F11
DS1 | 0.112 | 0.58 | – | – | – | – | −0.10 | 0.09 | 0.07 | −0.09 | −0.15 | 0.25 | 0.10 | 0.04 | 0.00 | −0.01 | −0.46
DS2 | 0.423 | 0.24 | – | – | – | – | 0.10 | 0.13 | 0.09 | 0.00 | −0.10 | 0.61 | −0.05 | −0.11 | −0.05 | −0.09 | −0.05
DS3 | 0.544 | 0.176 | – | – | – | – | 0.00 | −0.14 | −0.04 | −0.04 | −0.24 | 0.56 | 0.17 | −0.04 | −0.12 | −0.06 | −0.14
DS4 | 0.997 | −0.236 | – | – | – | – | 0.11 | −0.15 | −0.01 | 0.02 | 0.01 | 0.75 | 0.01 | 0.00 | −0.02 | 0.11 | 0.01
DS5 | 0.568 | 0.18 | – | – | – | – | 0.05 | 0.04 | 0.03 | 0.05 | 0.10 | 0.62 | −0.05 | 0.21 | 0.17 | 0.17 | −0.19
DS6 | 0.088 | 0.573 | – | – | – | – | −0.14 | 0.11 | 0.04 | 0.06 | −0.03 | 0.30 | −0.03 | −0.02 | 0.12 | 0.02 | −0.48
IS1 | 0.159 | 0.642 | – | – | – | – | 0.02 | 0.00 | −0.03 | 0.05 | −0.05 | 0.29 | 0.04 | −0.02 | −0.20 | 0.08 | −0.50
IS2 | −0.061 | 0.64 | – | – | – | – | −0.11 | 0.24 | 0.09 | 0.02 | −0.13 | 0.18 | −0.01 | −0.12 | −0.04 | 0.16 | −0.32
IS3 | 0.217 | 0.038 | – | – | – | – | 0.09 | 0.13 | −0.07 | 0.06 | −0.02 | 0.22 | 0.04 | 0.05 | −0.07 | −0.13 | 0.05
IS4 | −0.035 | 0.578 | – | – | – | – | 0.05 | 0.23 | −0.06 | 0.06 | 0.08 | 0.06 | 0.05 | 0.11 | −0.61 | 0.02 | −0.29
IS5 | −0.025 | 0.689 | – | – | – | – | 0.00 | 0.02 | −0.03 | 0.06 | −0.02 | 0.04 | 0.00 | 0.14 | −0.23 | 0.00 | −0.66
CPS 1 | −0.07 | −0.08 | 0.06 | 0.09 | 0.34 | 0.16 | −0.10 | 0.02 | −0.07 | −0.09 | −0.10 | 0.01 | −0.04 | 0.16 | −0.05 | 0.36 | −0.06
CPS 2 | −0.07 | 0.09 | −0.03 | −0.07 | 0.47 | −0.01 | −0.03 | 0.03 | 0.07 | 0.04 | 0.07 | 0.04 | 0.00 | −0.01 | 0.07 | 0.47 | −0.05
CPS 3 | 0.04 | −0.09 | −0.06 | −0.04 | 0.58 | 0.11 | 0.05 | −0.06 | −0.09 | 0.09 | 0.04 | 0.01 | 0.12 | 0.16 | 0.09 | 0.59 | −0.03
CPS 4 | 0.12 | 0.05 | 0.10 | 0.06 | 0.79 | −0.09 | 0.15 | 0.01 | 0.05 | −0.08 | −0.07 | 0.02 | −0.01 | −0.05 | −0.08 | 0.73 | 0.14
CPS 5 | 0.04 | 0.02 | −0.03 | 0.04 | 0.42 | −0.07 | 0.04 | 0.06 | −0.02 | 0.06 | −0.11 | −0.04 | 0.05 | −0.12 | −0.11 | 0.29 | 0.03
CRS 1 | 0.07 | 0.20 | −0.22 | 0.13 | 0.24 | 0.07 | 0.10 | −0.11 | 0.25 | 0.23 | −0.12 | −0.10 | 0.20 | 0.06 | −0.12 | 0.20 | −0.17
CRS 2 | 0.04 | 0.10 | −0.64 | 0.02 | 0.06 | 0.11 | 0.02 | 0.03 | 0.11 | 0.68 | −0.04 | −0.01 | 0.03 | 0.12 | 0.00 | 0.06 | −0.01
CRS 3 | 0.02 | −0.02 | −0.83 | 0.05 | 0.03 | −0.02 | 0.02 | 0.02 | −0.03 | 0.86 | −0.08 | 0.00 | 0.04 | −0.02 | 0.12 | 0.04 | −0.02
CRS 4 | 0.12 | 0.03 | −0.84 | −0.01 | −0.04 | −0.05 | 0.14 | 0.05 | 0.07 | 0.84 | 0.03 | −0.02 | −0.10 | −0.05 | −0.09 | −0.06 | 0.00
CRS 5 | −0.06 | −0.06 | −0.96 | 0.01 | −0.02 | −0.03 | −0.02 | −0.04 | 0.02 | 0.91 | 0.02 | 0.02 | 0.02 | −0.04 | −0.05 | −0.03 | −0.02
CUS 1 | 0.04 | 0.85 | 0.04 | −0.05 | 0.02 | 0.02 | 0.05 | −0.02 | 0.82 | −0.01 | 0.02 | 0.01 | 0.05 | 0.03 | 0.12 | −0.03 | −0.08
CUS 2 | 0.03 | 0.86 | 0.01 | 0.02 | −0.04 | −0.03 | 0.03 | 0.03 | 0.83 | 0.04 | −0.03 | 0.02 | 0.00 | −0.02 | 0.12 | −0.08 | −0.08
CUS 3 | 0.08 | 0.84 | −0.03 | 0.03 | −0.01 | −0.02 | 0.09 | 0.01 | 0.83 | 0.05 | −0.03 | 0.01 | −0.02 | 0.00 | −0.04 | −0.02 | 0.05
CUS 4 | −0.05 | 0.86 | −0.07 | 0.09 | 0.00 | 0.01 | −0.04 | 0.03 | 0.85 | 0.08 | −0.06 | 0.08 | 0.00 | 0.02 | −0.02 | 0.03 | 0.06
CUS 5 | 0.00 | 0.84 | −0.11 | −0.02 | 0.06 | −0.07 | 0.02 | −0.11 | 0.89 | 0.11 | 0.01 | −0.07 | 0.01 | −0.05 | −0.08 | 0.05 | 0.05
IPS 1 | 0.10 | 0.33 | −0.01 | 0.02 | 0.03 | 0.30 | 0.03 | 0.03 | 0.32 | 0.00 | −0.07 | 0.13 | 0.07 | 0.35 | −0.02 | 0.04 | 0.24
IPS 2 | −0.01 | 0.23 | 0.01 | 0.15 | 0.06 | 0.34 | −0.04 | 0.16 | 0.21 | 0.03 | −0.15 | −0.03 | −0.04 | 0.39 | 0.01 | 0.08 | 0.04
IPS 3 | 0.01 | 0.37 | 0.08 | 0.07 | 0.04 | 0.25 | 0.05 | 0.16 | 0.42 | −0.10 | −0.02 | −0.07 | 0.01 | 0.20 | −0.13 | 0.04 | −0.04
IPS 4 | 0.16 | −0.03 | 0.11 | 0.13 | 0.05 | 0.59 | 0.19 | −0.13 | 0.01 | −0.11 | −0.17 | −0.09 | 0.07 | 0.60 | −0.19 | −0.01 | −0.15
IPS 5 | 0.00 | 0.03 | −0.09 | −0.04 | 0.00 | 0.80 | 0.02 | 0.00 | 0.04 | 0.05 | −0.04 | 0.03 | −0.05 | 0.77 | 0.04 | 0.04 | −0.02
STS 1 | 0.12 | 0.08 | −0.12 | 0.55 | −0.03 | 0.16 | 0.14 | 0.00 | 0.07 | 0.13 | −0.58 | 0.05 | −0.02 | 0.15 | 0.11 | −0.02 | −0.04
STS 2 | −0.01 | 0.04 | −0.06 | 0.76 | −0.07 | 0.07 | 0.02 | 0.01 | 0.03 | 0.07 | −0.78 | 0.10 | −0.02 | 0.12 | 0.14 | −0.03 | 0.12
STS 3 | −0.01 | 0.08 | −0.05 | 0.81 | −0.07 | 0.03 | 0.05 | −0.02 | 0.06 | 0.08 | −0.81 | −0.09 | 0.04 | 0.05 | 0.11 | −0.05 | −0.08
STS 4 | 0.03 | −0.03 | 0.11 | 0.93 | 0.10 | −0.11 | 0.06 | 0.07 | −0.01 | −0.09 | −0.86 | 0.02 | −0.05 | −0.06 | −0.04 | 0.11 | 0.05
STS 5 | 0.01 | −0.07 | −0.04 | 0.85 | 0.05 | −0.04 | 0.04 | −0.03 | 0.01 | 0.02 | −0.75 | 0.02 | −0.04 | −0.03 | −0.28 | 0.05 | −0.07
PDC 1 | 0.83 | 0.02 | 0.01 | 0.01 | 0.07 | 0.08 | 0.81 | −0.01 | 0.05 | −0.01 | −0.04 | −0.02 | −0.02 | 0.04 | −0.03 | 0.07 | −0.04
PDC 2 | 0.88 | −0.03 | −0.03 | 0.02 | −0.03 | 0.03 | 0.87 | 0.02 | −0.02 | 0.04 | −0.04 | 0.05 | −0.03 | 0.00 | 0.07 | −0.02 | −0.03
PDC 3 | 0.92 | −0.03 | 0.02 | −0.02 | 0.01 | 0.03 | 0.88 | 0.04 | −0.03 | 0.01 | −0.01 | 0.07 | −0.03 | 0.05 | 0.03 | 0.02 | 0.08
PDC 4 | 0.89 | 0.01 | −0.04 | 0.07 | −0.04 | −0.04 | 0.88 | 0.00 | 0.03 | 0.05 | −0.06 | 0.03 | 0.00 | −0.04 | 0.00 | 0.00 | −0.02
PDC 5 | 0.90 | −0.03 | −0.05 | 0.01 | 0.03 | 0.00 | 0.88 | −0.03 | −0.01 | 0.08 | −0.03 | 0.04 | −0.05 | 0.02 | −0.02 | 0.03 | 0.05
PDC 6 | 0.89 | 0.10 | 0.00 | −0.03 | −0.02 | −0.06 | 0.89 | −0.02 | 0.10 | 0.00 | 0.03 | 0.00 | 0.03 | −0.06 | −0.03 | −0.02 | 0.01
Cog 1 | 0.681 | 0.096 | 0.111 | – | – | – | 0.01 | 0.73 | −0.02 | 0.05 | −0.06 | −0.17 | 0.00 | −0.02 | −0.07 | 0.04 | −0.03
Cog 2 | −0.046 | −0.16 | 0.598 | – | – | – | 0.18 | 0.14 | 0.02 | −0.11 | 0.02 | −0.07 | 0.21 | 0.08 | 0.18 | 0.06 | −0.43
Cog 3 | 0.167 | 0.12 | 0.709 | – | – | – | 0.16 | 0.43 | 0.06 | −0.11 | −0.06 | −0.08 | 0.00 | −0.01 | 0.16 | 0.05 | −0.33
Beh 1 | 0.487 | −0.057 | 0.38 | – | – | – | 0.09 | 0.59 | 0.03 | −0.10 | −0.02 | 0.02 | 0.18 | −0.04 | 0.10 | 0.00 | −0.22
Beh 2 | 0.621 | −0.079 | 0.155 | – | – | – | 0.06 | 0.58 | 0.03 | 0.06 | −0.02 | −0.01 | 0.20 | 0.02 | −0.08 | −0.01 | −0.10
Beh 3 | 0.884 | −0.045 | −0.117 | – | – | – | −0.03 | 0.74 | 0.07 | 0.01 | 0.02 | 0.20 | 0.16 | 0.05 | −0.12 | 0.02 | 0.22
Aff 1 | 0.429 | −0.315 | −0.022 | – | – | – | 0.06 | 0.26 | 0.05 | −0.09 | 0.03 | 0.06 | 0.39 | 0.05 | −0.15 | 0.01 | −0.04
Aff 2 | 0.004 | −0.874 | −0.027 | – | – | – | −0.02 | −0.04 | 0.11 | −0.09 | 0.04 | 0.01 | 0.88 | −0.04 | 0.00 | −0.01 | 0.06
Aff 3 | 0.028 | −0.746 | −0.025 | – | – | – | −0.03 | −0.02 | −0.04 | 0.09 | 0.03 | 0.00 | 0.79 | −0.03 | −0.01 | −0.01 | 0.03
Aff 4 | 0.017 | −0.67 | 0.152 | – | – | – | −0.04 | 0.10 | −0.05 | 0.03 | −0.03 | −0.04 | 0.71 | 0.02 | 0.08 | 0.06 | −0.02