Article

Assessing the Selection of Digital Learning Materials: A Facet of Pre-Service Teachers’ Digital Competence

1 Institute for Didactics of Mathematics, University of Cologne, 50931 Cologne, Germany
2 Department of Mathematics Education, University College of Education Upper Austria, 4020 Linz, Austria
3 Department of Psychology, Faculty of Human Sciences, University of Cologne, 50931 Cologne, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(11), 6024; https://doi.org/10.3390/app15116024
Submission received: 18 March 2025 / Revised: 23 May 2025 / Accepted: 26 May 2025 / Published: 27 May 2025

Abstract

Given the increasing digitalization of education and the variety of available digital learning materials (dLMs) of differing quality, (pre-service) teachers must develop the ability to select appropriate dLMs. Objective, reliable, and valid assessment instruments are necessary to evaluate the effectiveness of that development. This study conceptualized and designed an economical four-item instrument for assessing “selecting dLMs” based on accepted frameworks and competence models. The scientific quality of the instrument was evaluated in Study 1 (n = 164) with four dLMs and empirically investigated in a subsequent Study 2 (n = 395) with pre-service mathematics teachers from two universities. The empirical results indicate that the instrument could objectively and reliably gauge different levels of “selecting dLMs”. Furthermore, the results are consistent with the widely accepted notion that the competence of “selecting dLMs” depends on (content) knowledge; however, that relation was not strong. In addition, the results for objectively assessing “selecting dLMs” paralleled the results of self-assessed TPACK in terms of the academic progression of participants. The proposed approach allows for variations and integration of diverse dLMs, and it has the potential to be adapted in other subject areas and contexts.

1. Introduction

The development of technology and technology-enhanced learning materials has transformed teaching and learning during recent decades [1,2], and technology has become ubiquitous in education [3,4,5]. However, the term “technology” in education is ambiguous [6,7]. We understand technology as software applications that support educators’ activities, such as organizing their work, representing learning content, and facilitating self-regulated learning or collaboration [8,9]. The same activities are also supported by digital learning materials (dLMs) [10], which are the focus of this manuscript. In particular, we address the facet of “selecting dLMs” and its assessment in pre-service teachers as part of their digital competence. This facet is essential for pre-service teachers, as they will later, in their teaching practice, need to assess dLMs that they may not have encountered during their teacher training.
In the conceptualization of the assessment presented in this manuscript, umbrella terms such as educational technology and digital resources typically refer to both digital technology and dLMs [7,11]. However, for the instrument developed here, only the selection of dLMs, referred to as “selecting dLMs”, is of relevance.
In general, teachers play an essential role in successfully integrating dLMs into their lesson planning. They must be competent in “selecting dLMs” [1,8,12] to provide learners with appropriate learning opportunities [13]. This aspect of digital competence is also crucial for teachers as they are confronted with the rapid development of dLMs and the vast number of dLMs of varying quality [14,15]. Developing this facet of digital competence during initial teacher training is therefore essential. Reliable and valid assessment instruments are needed to evaluate the effectiveness of that development [12]. The TPACK framework by Mishra and Koehler [16] describes the knowledge that pre-service teachers need to select digital resources in general and dLMs in particular. The framework is also widely used in assessments of pre-service teachers [11,17,18].
Two concerns arise regarding assessing the facet of “selecting dLMs” and the effort needed for its assessment. On the one hand, existing assessment instruments for teachers’ digital competence require participants to, for example, evaluate learning apps [14], interpret video vignettes [19,20], use static learning material [21,22], or evaluate developed lesson plans incorporating digital resources [23,24,25], but none of these directly addresses the selection of dLMs. On the other hand, most instruments, especially those based on the frequently used TPACK framework [11,18,26], are self-report assessments. They are criticized for lacking validity and subject specificity [27,28,29] and may suffer from social desirability bias [30,31].
Because of the importance of “selecting dLMs” and the lack of open- and closed-text objective assessment instruments [11,18], we propose assessing “selecting dLMs” using four closed and open-text items that require respondents to reason for or against selecting a given dLM for a specific learner age, learners’ special needs, and explicit learning content. The items and scoring proposed in this manuscript for assessing “selecting dLMs” were applied to diverse dLMs and hold the potential to be adapted to other teaching subjects.
This study’s aim is threefold: (1) to elucidate the genesis of items for objectively assessing the digital competence facet of “selecting dLMs” in pre-service teachers using existing theoretical teacher competence models; (2) to empirically validate the designed four-item instrument (Study 1); and (3) to compare, in a larger follow-up study (Study 2), the measurement results of this new instrument with those of an established TPACK self-report instrument by Schmid et al. [32].

2. Theoretical Background

2.1. Teacher Competence in Teacher Education

The professional competence of pre-service teachers is described in the teacher education model by Kaiser and König [33] as an outcome of teacher education that is both dispositional and situation-specific [34,35,36]. Teacher education enables pre-service teachers to master professional tasks such as lesson planning, including “selecting dLMs” [12]. Appropriately “selecting dLMs” for learning content and learner age contributes to student learning outcomes (cf. [13,37,38]). Therefore, it is crucial to have objective, reliable, and valid instruments to assess the effectiveness of developing the facet of “selecting dLMs”. The assessment results of such instruments provide teacher educators with insight into the effectiveness of their teaching while also offering pre-service teachers an adequate evaluation of their performance [33]. To develop “selecting dLMs”, pre-service teachers must understand the (pedagogical and content-specific) opportunities and risks associated with integrating dLMs into their teaching. The prominent framework that outlines this knowledge is Mishra and Koehler’s TPACK framework [16,39]. The framework is used globally and across teaching subjects in teacher education for development and assessment [11,18].

2.2. TPACK Framework

The TPACK framework by Mishra and Koehler [16] extends Shulman’s [40] seminal pedagogical content knowledge framework by adding technological knowledge and the resulting intersections. These intersections are known as technological content knowledge (TCK), technological pedagogical knowledge (TPK), and technological pedagogical and content knowledge (TPACK). Figure 1 illustrates the framework in the form of a Venn diagram [39].
Technological knowledge is conceptualized as developmental knowledge evolving according to technological changes. It encompasses teachers’ knowledge, enabling them to accomplish various tasks using technology. TCK refers to the reciprocal relationship between content and technology. Teachers must learn how technology can alter, enhance, or hinder the learning of (subject-specific) content and how integrating technology can change the subject matter. Although not explicitly mentioned in the framework, TCK is one component when “selecting dLMs” for specific learning content. TPK refers to technology integration in teaching and learning processes and the knowledge of how various technologies impact such processes. In the definition of the TPK component, the authors also explicitly address the selection of digital resources in general as follows: “...the ability to choose a tool based on its fitness, strategies for using the tool’s affordances, and knowledge of pedagogical strategies...” (p. 1028, [16]).
TPACK, the key component that brings everything together, encompasses teachers’ knowledge of the challenges and changes in teaching when using technology. This includes the knowledge of what makes content-related concepts easier or more challenging to learn and ways to overcome learning difficulties by using technology [16]. Although “selecting dLMs” is not explicitly addressed in the description of the TPACK component, one can deduce that this activity is included, as the selection process requires knowledge of the intersections of all components. Therefore, the facet of digital competence “selecting dLMs” is positioned within the TPACK component (see Figure 1).
Content knowledge is described as knowledge of the learning content, the curriculum knowledge regarding the learning content, and the special needs of learners [40]. The literature suggests that subject-specific content knowledge is essential when selecting dLMs [11,33,41].
Although the TPACK model’s naming and terms are described as knowledge, the research community understands that they are more than knowledge and entail attitudes and skills, which therefore describe competences [11,33,34,42,43]. Nevertheless, we follow the naming convention established by Koehler et al. [39] and refer to the framework and the intersection of all three components (i.e., content, pedagogical, and technological knowledge) as TPACK.
For developing “selecting dLMs” and designing items for its assessment, the TPACK framework and the descriptions of the CK, TCK, TPK, and TPACK components are essential. (1) Reasoning when “selecting dLMs” requires an understanding of the learning content with regard to the learner’s age and the special needs of learners. (2) Justification when selecting dLMs can involve subject-specific TCK, subject-unspecific TPK, and TPACK reasoning. (3) Furthermore, reasoning when selecting dLMs can also be categorized as TCK, TPK, and TPACK reasoning. (4) There is not necessarily a single decision for selecting dLMs across different teaching situations and contexts. Thus, the reasoning behind the selection, rather than the decision itself, is essential when assessing “selecting dLMs”.
The following sections provide examples of TCK and TPK reasoning when “selecting dLMs”. TCK-x and TPK-x numerically denote the various TCK and TPK reasons, as summarized in Table 1.
In addition, the provided examples of TCK-x, TPK-x, and TPACK reasoning are reinforced by references to local studies, the teacher digital competence framework of the European Commission, DigCompEdu [44], the TPACK framework itself [16], and Grossman’s core teaching practices [45], which reflect a more Anglo-American perspective.

2.2.1. TCK-Reasoning

Our examples of TCK-x reasoning focus on mathematics education while offering limited examples related to other subjects. In mathematics education, dLMs can enable different representations [16] that are otherwise not possible using analog material (TCK-1), offer real-world representations of content (TCK-2), and allow for dynamic representations (TCK-3) [16,46,47,48,49]. Using dLMs, multiple representations can be dynamically linked in new ways [50]. All of these are “representation” reasons for selecting dLMs.
Another reason for integrating dLMs into mathematics education is their potential to decrease extraneous cognitive load (TCK-4) or to modify extraneous cognitive load (TCK-5) [9,16,51] by outsourcing repetitive tasks. This can help save lesson time and allow students to focus more on higher levels of mathematical thinking, leading to improved learning outcomes [6,37,41,48,51]. However, some dLMs can also increase the extraneous cognitive load if learners are unfamiliar with them or dLMs are inadequate for the specific learning content (TCK-6).
Other teaching subjects can entail additional TCK-x reasoning specific to them, but not applicable to mathematics. Examples include the ability to simulate phenomena invisible to the eye in biology or chemistry [20,52], connecting physical phenomena with their representations in physics [53], or understanding content through audio cues in language education [27].

2.2.2. TPK-Reasoning

TPK refers to the integration of technology in teaching and learning processes and is subject-unspecific, as is TPK reasoning. Learner engagement and motivation may be why a teacher selects dLMs (TPK-1) [27,45,46,54,55,56,57]. In addition, learner motivation is linked to improved student learning outcomes [6,37,41,48,51]. Moreover, dLMs can potentially support students’ self-regulated learning processes (TPK-2) [41,44,54,55,57,58,59]. In addition, dLMs can enhance discovery learning (TPK-3) [14,41,44,45] or be customized for learners, thus promoting differentiation and inclusion (TPK-4) [3,14,37,41,44,45,46], which could be reasons for their selection.
Also, applying dLMs in assessments may positively impact teachers’ efficiency (TPK-5) and can be a consideration in their selection [1,9,26,45,55]. On the other hand, dLMs may distract learners from the learning objective (TPK-6), which could be a reason for not selecting them [9,38,45,57].
Considering the multifaceted reasoning involved in “selecting dLMs” and the significance of this facet of digital competence for pre-service teachers, single- and multiple-choice items appear methodologically inadequate for assessing it. The evaluation of teaching approximations [11,45], which necessitate “selecting dLMs” but do not allow for assessing the underlying reasoning, is similarly insufficient. In addition, their instrumentation is also time-consuming. Evaluating (fictitious) lesson plans requires “selecting dLMs” and can uncover the rationale, but it is also time-intensive for evaluators [11].
We argue that open-text items for assessing “selecting dLMs” can capture the reasoning, are more time-efficient for evaluators, and are better suited than the frequently used TPACK self-assessments [11,18].
A literature review focusing on the selection of digital resources, and particularly “selecting dLMs”, further supports this approach; it identified a methodological and empirical research gap regarding objective, reliable, and valid open-text items for its assessment [11]. Consequently, we focused on the following research objectives and questions:
Research objective one: Design of open-text items for assessing “selecting dLMs”.
RQ 1.1: Are the designed items and scoring for assessing “selecting dLMs” objective, reliable, and valid?
In addition, we are interested in how the measurement results of the present instrument for “selecting dLMs” compare to the results of the frequently used TPACK self-assessments [32]. We also want to examine whether the open-text items are sensitive to educational characteristics [60,61] and what the relationship is between the instrument’s items (see Table 2). Based on this objective, we pose the following:
Research objective two: Empirical investigation of pre-service mathematics teachers using the new instrument to assess “selecting dLMs”.
RQ 2.1: Can the designed instrument for “selecting dLMs” assess different levels regarding the number of semesters of study, and are the results distinguishable from TPACK self-report results?
RQ 2.2: What is the relationship between the designed items?
RQ 2.3: What specific reasoning is considered when “selecting dLMs”?

3. Materials and Methods

This section outlines the design of items for assessing “selecting dLMs” and their scoring (research objective one). Additionally, the design of the two studies and their respective participant samples are presented, along with a brief description of the four dLMs (dLM1–dLM4) used in the studies.

3.1. Research Objective One: Design of Open-Text Items for Assessing “Selecting dLMs”

The items were constructed using a multi-methods approach. At the process level, we conducted a systematic literature review of existing TPACK methods for assessing the competence facet of selecting digital resources [11].
In addition, we engaged in individual and group discussions with mathematics teacher educators and in-service teachers to elucidate which digital resources they use in their teaching and their reasons for selecting them. Furthermore, we conducted small pilot studies testing different open-text items and contexts [62,63], supplemented by a qualitative interview study in which pre- and in-service teachers reasoned for or against using digital resources for a specific learning content and learner group [9]. This iterative process provided insights into the reasoning used when “selecting dLMs” and resulted in the items shown in Table 2.
At the conceptual level, the designed items shown in Table 2 are based on the competence model in teacher education by Blömeke et al. [34]. They distinguish between the reasoning for “selecting dLMs” as a cognitive disposition, the essential content knowledge needed for decision-making [41], and the situational factors influencing reasoning in such decisions [34]. The specifications for and the need to assess the essential knowledge are derived from the “content knowledge” definition within the TPACK framework [16,40], as outlined in items 1–3 in Table 2. The union of perception and interpretation of situational and contextual factors in decision-making is identified as TCK, TPK, and TPACK reasoning, found in item 4 in Table 2. The resulting four items present a very economical instrument, assessing “selecting dLMs” with only two open and two closed items.

3.2. Research Objective One: Scoring of the Designed Open-Text Items for Assessing “Selecting dLMs”

The understanding of the learning content, the suitability for a learner’s age, and the special needs of learners (see Figure 2, items 1–3) were evaluated together, as different descriptions of the learning content can be appropriate depending on the learner age and special needs selected in items 1 and 2. A four-point scale ranging from zero to three was used for the scoring. We derived the scoring scale partly inductively from the responses and deductively from the German and Austrian curricula. The latter outline the requirements for learners with special needs and the appropriateness of learning content for a specific learner age.
Score of zero: no description, or an incorrect description of the learning content.
Score of one: a correct description of the learning content that is not appropriate for the learner age or special educational needs specified in items 1 and 2.
Score of two: a generic description of the learning content that is appropriate for the learner age and the special educational needs specified in items 1 and 2.
Score of three: a detailed description of the learning content that is appropriate for the learner age and the special educational needs specified in items 1 and 2.
The reasoning for “selecting dLMs” (see Figure 2, item 4) was scored from zero to four on a five-point scale. It was categorized as TCK-x, TPK-x, or TPACK reasoning (see Table 1). The scoring scale was derived inductively from the pilot [62,63] and interview studies [9].
Score of zero: If no reasoning was given.
Score of one: If a generic reason was given.
Score of two: If one TCK-x or one TPK-x reason was given.
Score of three: If two TCK-x or two TPK-x reasons were given.
Score of four: If a TCK-x and a TPK-x reason, therefore TPACK, was given.
It is worth noting that, except for TPK-5 (i.e., justifying the selection of dLMs by the increase in the teacher’s efficiency), the learner’s age and special needs were considered in the scoring of the reasoning (see Figure 2, item 4). Reasoning such as increasing or decreasing the extraneous cognitive load using a dLM depends on the specified learner age and special needs. The dashed line in Figure 2 between items 1–3 and item 4 indicates this relationship.
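To make the rubric for item 4 concrete, the following is a minimal R sketch (not the authors’ scoring code) that maps a set of already coded reasons, labeled with the TCK-x and TPK-x categories from Table 1, to the zero-to-four score described above; the function name and the label format are illustrative assumptions.

```r
# Hypothetical helper: derive the item-4 score from coded reasons (e.g., "TCK-1", "TPK-2",
# "generic"); labels and function name are illustrative, not taken from the study materials.
score_item4 <- function(codes) {
  if (length(codes) == 0) return(0)          # score 0: no reasoning given
  n_tck <- sum(grepl("^TCK", codes))         # number of TCK-x reasons
  n_tpk <- sum(grepl("^TPK", codes))         # number of TPK-x reasons
  if (n_tck >= 1 && n_tpk >= 1) return(4)    # score 4: TCK-x and TPK-x together (TPACK reasoning)
  if (n_tck >= 2 || n_tpk >= 2) return(3)    # score 3: two reasons of the same type
  if (n_tck == 1 || n_tpk == 1) return(2)    # score 2: one TCK-x or one TPK-x reason
  1                                          # score 1: only generic reasoning
}

score_item4(character(0))        # 0
score_item4(c("generic"))        # 1
score_item4(c("TCK-1", "TPK-2")) # 4
```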

3.3. Description of the dLMs Used in the Studies for Evaluating the Designed Instrument

To provide context for evaluating the items for assessing “selecting dLMs”, we chose multiple dLMs (dLM1–dLM4) within GeoGebra due to its wide distribution. It boasts 100 million users in 190 countries and offers open-source access to more than one million open, accessible dLMs [64]. For dLM1 [65], we detail the intended learning content and provide possible reasoning in Appendix A.1. Additionally, we outline why we specifically decided on this dLM to address the research objectives. The same process was followed when deciding on the other dLMs (dLM2–dLM4).
dLM1 [65] and dLM2 [66] address the content of the subject Geometry, specifically radius (dLM1) and symmetry (dLM2). Both dLMs are designed to support the understanding of a mathematical concept [10], whereas dLM3 [67] and dLM4 [68] are intended for practicing [1,9,10]. More specifically, dLM3 [67] is designed for practicing the representation of frequencies in bar charts, and dLM4 [68] for practicing the calculation of the arithmetic mean. In both dLMs, similar tasks are presented multiple times, and automatic feedback is provided [1]. Table A1 and Table A2 in the Appendix A provide examples of responses and the scoring for all four dLMs.

3.4. Participants and Design of Study 1 and Study 2

We conducted two studies to address the research objectives and questions. For research objective one (Design of open-text items for assessing “selecting dLMs”), we applied the designed items to four different dLMs (dLM1–dLM4). In contrast, for research objective two (Empirical investigation of pre-service mathematics teachers using the new instrument to assess “selecting dLMs”), we used only dLM1 but a larger participant sample. Table 3 provides an overview of the two studies, including their sample sizes and references to the applied dLMs. Convenience sampling was used in both studies. The items and dLMs were distributed in an online test to teacher educators of pre-service mathematics teachers from universities in Linz, Austria, and Cologne, Germany. We selected these universities because they share a common language and have similar curricula. Within the online tests, participants were informed about the purpose of the study, and their participation was voluntary.

3.4.1. Design of Study 1 for Research Objective One: Design of Open-Text Items for Assessing “Selecting dLMs”

Study 1 focused on establishing the scientific quality of the items we designed to assess “selecting dLMs” (RQ 1.1). The designed items (1–4) were presented to participants four times, each time in the context of a different dLM (dLM1–dLM4). Cronbach’s α was calculated using the four scores for “selecting dLM1” to “selecting dLM4” to establish the reliability of the designed items. Interrater reliability was calculated to assess the objectivity of the scoring of the items. Furthermore, the validity of the designed four-item instrument for evaluating the facet of “selecting dLMs” was discussed.
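As an illustration of these reliability checks, the sketch below shows how Cronbach’s α and Cohen’s κ can be computed with the psych package named in Section 3.4; the data are synthetic and the column names are assumptions, so the printed values are meaningless and serve only to show the calls.

```r
# Illustrative sketch with synthetic data (not the study data); psych is the package
# the authors name for their analyses.
library(psych)

set.seed(1)
scores <- data.frame(                 # one "selecting dLMs" score per dLM and participant
  dLM1 = sample(0:4, 30, replace = TRUE),
  dLM2 = sample(0:4, 30, replace = TRUE),
  dLM3 = sample(0:4, 30, replace = TRUE),
  dLM4 = sample(0:4, 30, replace = TRUE)
)
psych::alpha(scores)                  # Cronbach's alpha across the four repeated measurements

ratings <- cbind(                     # two independent coders' scores for the same responses
  coder1 = sample(0:4, 30, replace = TRUE),
  coder2 = sample(0:4, 30, replace = TRUE)
)
psych::cohen.kappa(ratings)           # interrater agreement (Cohen's kappa)
```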

3.4.2. Design of Study 2 for Research Objective Two: Empirical Investigation of Pre-Service Mathematics Teachers Using the New Instrument to Assess “Selecting dLMs”

In Study 2, we recruited a new participant sample. We applied the new instrument validated in Study 1, along with a TPACK instrument developed by Schmid et al. [32]. This self-assessment instrument was selected because it includes statements that address “selecting dLMs”, for example, “I can select technologies to use in my classroom that enhance what I teach, how I teach, and what students learn” (p. 4, [32]).
For reasons of test-time economy, we included only one dLM in this larger study. The dLM (dLM1) for exploring the properties of a circle was selected due to its alignment with the curriculum and its suitability for the participant pool of both universities (pre-service mathematics teachers for primary, special, lower, and upper secondary education).
Parametric tests (ANOVA) and post-hoc tests with Bonferroni correction were employed where appropriate. Otherwise, non-parametric tests (Kruskal–Wallis) and post-hoc tests with Dunn–Bonferroni correction were used to address research question 2.1.
Spearman’s rank correlation and descriptive analysis were applied to address research questions 2.2 and 2.3. R version 4.4.2 (31 October 2024) and the package psych, version 2.4.12, were used for all statistical analyses. A significance level of 5% was applied in all statistical tests, and the appropriate effect size statistics were determined [69].
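The following minimal R sketch illustrates, on synthetic data, the kinds of calls these analyses involve; the variable names are assumptions, and base R pairwise Wilcoxon tests with Bonferroni correction stand in here for the Dunn–Bonferroni post hoc named above.

```r
# Illustrative analysis pipeline on synthetic data (not the study data or the authors' script).
set.seed(2)
dat <- data.frame(
  semester_group = factor(sample(c("1-2", "3-4", "5-6", "7+"), 200, replace = TRUE)),
  score_dLMs     = sample(0:7, 200, replace = TRUE),  # external "selecting dLMs" score (assumed range)
  tpack_self     = runif(200, 1, 5)                   # self-reported TPACK scale mean (assumed range)
)

# Parametric route: one-factor ANOVA with Bonferroni-corrected pairwise comparisons
summary(aov(score_dLMs ~ semester_group, data = dat))
pairwise.t.test(dat$score_dLMs, dat$semester_group, p.adjust.method = "bonferroni")

# Non-parametric route: Kruskal-Wallis test; Bonferroni-adjusted pairwise Wilcoxon tests
# serve as a simple stand-in for the Dunn-Bonferroni post hoc
kruskal.test(tpack_self ~ semester_group, data = dat)
pairwise.wilcox.test(dat$tpack_self, dat$semester_group, p.adjust.method = "bonferroni")

# Spearman's rank correlation, e.g., between content knowledge (items 1-3) and reasoning (item 4)
ck    <- sample(0:3, 200, replace = TRUE)
item4 <- sample(0:4, 200, replace = TRUE)
cor.test(ck, item4, method = "spearman")
```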

4. Results

4.1. Results for Research Objective One: Design of Open-Text Items for Assessing “Selecting dLMs”

To evaluate the designed instrument’s scientific quality, we used the criteria for psychometric instruments: objectivity, reliability, and validity [70]. Data were based on a sample of n = 164 pre-service mathematics teachers in Study 1 who worked on dLM1–dLM4.
To evaluate the objectivity of the coding and scoring of the items, two coders independently categorized the participants’ responses. The open-text responses to item 3 were evaluated and scored on a scale from zero to three based on the selections made in items 1 and 2 (see Figure 2). The open-text item 4, representing TCK, TPK, and TPACK reasoning, was coded according to the coding model in Table 1 and scored on a scale from zero to four (see Figure 2). Differences in the coding and scoring were resolved by mutual agreement, and the agreed codes and scores were used. During this process, the examples of coding and scoring (Table A1 and Table A2) and the coding model (see Figure 2 and Table 1) served as guidelines.
To assess the reliability of the items, we computed Cronbach’s α, as items 1–4 were used four times, each time in the context of a different dLM (dLM1–dLM4). This constituted repeated measurements of “selecting dLMs”. Using the scores established for items 1–3 across the four dLMs (dLM1–dLM4) yielded a value of α = 0.83, and for item 4, a value of α = 0.91. These coefficients indicate good to excellent reliability of the instrument’s items.
Content validity is provided since the designed instrument is based on scientifically tested and widely accepted frameworks and models, namely the TPACK framework by Mishra and Koehler [16] and the competence model by Blömeke et al. [34]. In addition, localized studies and Grossman’s core teaching practices [45] support the coding model (see Figure 2 and Table 1), reflecting a more practice-oriented and Anglo-American perspective. In contrast, DigCompEdu [44] was also employed to establish the coding model (see Figure 2 and Table 1) and describes a research-based, theoretical, and European framework for digital competence, further strengthening the approach’s validity.
Moreover, content validity was enhanced by engaging peers, teacher educators, and in-service teachers in developing the items. Additionally, content validity is supported by the conducted pilot [62,63] and interview studies, which examined the reasoning behind the selection of digital resources in preparation for the items’ genesis and scoring.

4.2. Results for Research Objective Two: Empirical Investigation of Pre-Service Mathematics Teachers Using the New Instrument to Assess “Selecting dLMs”

To further examine the items (1–4) and the insights they provide regarding their generalizability, we applied them in an online test with a larger sample of pre-service mathematics teachers in Study 2 (n = 379).
Following the coding and scoring that we established and validated using the dataset from Study 1, we employed two trained coders who independently categorized the participants’ responses. The inter-coder agreement was calculated separately for coding items 1–3 (see Figure 2), resulting in Cohen’s κ = 0.86, and item 4 (see Figure 2), κ = 0.99, placing them in a near-perfect range [71].

4.2.1. RQ 2.1: Can the Designed Instrument for “Selecting dLMs” Assess Different Levels Regarding the Number of Semesters of Study, and Are the Results Distinguishable from TPACK Self-Report Results?

To address this research question, we grouped participants from Study 2 (n = 379) in increments of two semesters because the number of semesters in the development stages varies between the two universities. Table 4 shows the descriptive results of the pre-service teachers’ scores for “selecting dLMs” (external assessment) by number of semesters of study and the self-reported TPACK using the TPACK instrument by Schmid et al. [32]. For the external assessment, pre-service teachers achieved the highest results for “selecting dLMs” in semester 7 and above (M = 2.65, SD = 1.51).
To determine the items’ ability to differentiate competence levels of “selecting dLMs”, we examined the dependence of the scores on the number of semesters using a one-factor ANOVA (see Table 4). The ANOVA revealed statistically significant differences between semester groups (F(3, 375) = 8.51, p < 0.001, ηp² = 0.063, n = 379). Based on these results, we conclude that the designed items enable us to differentiate between different competence levels of “selecting dLMs”.
To analyze whether the results using the new instrument for “selecting dLMs” for the participants in Study 2 are distinguishable from the results of the TPACK self-report instrument by Schmid et al. [32], we also examined the latter with respect to the number of semesters of study using the Kruskal–Wallis test (Table 4). The results revealed that the self-assessed TPACK is also related to the number of semesters (χ² = 11.99, p = 0.007). The descriptive results again revealed that participants in semesters 7 and above exhibited the highest self-reported TPACK (M = 3.55, SD = 0.76). In addition, we calculated Cronbach’s α for the TPACK self-report scale, which was 0.97. This indicated excellent reliability and surpassed the 0.87 reported in the original publication by Schmid et al. [32].
We calculated effect sizes for each assessment to compare the results of self-reported TPACK [32] and the external assessment of “selecting dLMs” via the new instrument presented here. The effect size for the “selecting dLMs” performance assessed using the new instrument was f = 0.26. The self-reported TPACK showed an effect size of f = 0.23.
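The text does not state how Cohen’s f was obtained; assuming the common conversion from partial eta squared, the reported values are consistent, as the short check below shows.

```r
# Assumed conversion from partial eta squared to Cohen's f: f = sqrt(eta_p2 / (1 - eta_p2))
eta_p2 <- 0.063
f <- sqrt(eta_p2 / (1 - eta_p2))
round(f, 2)   # 0.26, matching the effect size reported for "selecting dLMs"
```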
Due to the smaller subgroups of pre-service teachers by study stages (primary, special education, lower, and upper secondary education), we did not investigate the stages of study of participants as done in other studies [60,61]. For the same reasons, we did not use the universities from the two countries, Austria and Germany, as a factor, nor did we perform two-factor ANOVAs.

4.2.2. RQ 2.2: What Is the Relationship Between the Designed Items?

We used Spearman’s rank correlation to determine whether the scores for content knowledge (items 1–3) and TCK-x, TPK-x, and TPACK reasoning (item 4) support the notion in the literature that higher content knowledge parallels better reasoning [11,33,41].
The results reveal a positive but small correlation between pre-service teachers’ content knowledge (items 1–3) and their TCK-x, TPK-x, and TPACK reasoning (item 4) (rs = 0.17, p = 0.001).

4.2.3. RQ 2.3: What Specific Reasoning Is Considered When “Selecting dLMs”?

To address this research question, we examined the frequency of different reasoning (Table 1). Figure 3 illustrates the frequency of the reasoning (item 4) for dLM1 [65] in Study 2. The abbreviations TCK-x and TPK-x denote specific reasoning (Table 1). The surface areas represent the frequency of each type of reasoning. “No” (33.0%) and “generic reasoning” (17.7%) together accounted for slightly more than half of the total reasoning (50.7%). TCK reasoning (24.3%), including individual TCK-[1–6] reasoning, was used more frequently than TPK reasoning (18.7%), which included individual TPK-[1–6] reasoning. Responses with either two TCK or two TPK, along with TPACK reasoning that integrates both TCK and TPK reasoning, were the least frequent in the sample (6.3%).
Comparing the reasoning used for dLM1 [65] in Study 1 (n = 164) and Study 2 (n = 379) descriptively reveals that the same top six reason codes are used, though in a different order. In both studies, the proportions of “no” and “generic reasoning” together exceeded fifty percent (Study 1: 61.0%; Study 2: 50.7%). This is followed by TCK-1 (different representations), TCK-4 (decreasing extraneous cognitive load), TPK-2 (self-regulated learning), TPK-4 (differentiation, inclusion), and TPK-1 (motivation) reasoning, which together represent 28.7% in Study 1 and 37.7% in Study 2. Other reasoning accounts for the remainder. Participants in Studies 1 and 2 differed in structure regarding the number of semesters and the study stage.
Furthermore, we examined the reasoning of participants by academic progression. The bar chart in Figure 4 illustrates the summarization of TCK-x and TPK-x reasoning by semester groups of participants in Study 2 (see Table 4).
The bar chart in Figure 4 shows a lower proportion of “no reasoning” and “generic reasoning”, accompanied by a higher proportion of single and multiple TCK-x and TPK-x reasoning, with increasing academic progression of participants. The proportion of TPACK reasoning remains consistently low. The proportion of single TPK-x reasoning among participants in semesters 1–2 is greater than that of TCK-x reasoning. However, this difference reverses with academic progression in the sample, and the proportion of TCK-x reasoning in semesters 7 and higher is greater than that of TPK-x reasoning.

5. Discussion

As part of digital competence, we identified “selecting dLMs” as a crucial facet that has not yet been addressed empirically in larger studies or methodologically using open-text items [11]. Therefore, we designed and evaluated items suitable for objectively assessing “selecting dLMs” and discuss the four-item instrument and the empirical results obtained with it in the following sections (research objectives one and two).

5.1. Discussion Research Objective One: Design of Open-Text Items for Assessing “Selecting dLMs”

Regarding research question 1.1, the proposed approach is characterized by a suitable process, since the items’ operationalization and coding procedures are documented in an intersubjectively comprehensible manner. In addition, standardized online tests were used to capture the data consistently.
The results of the inter-rater agreement indicated the objectivity of the coding and scoring. The reliability of the four-item instrument, as measured by Cronbach’s α, was good (Study 1). A critical question is whether using Cronbach’s α to assess the reliability of the instrument is justified, given that the same items were used, albeit in different contexts (dLM1–dLM4). Alternatives include developing different items, retesting at different points in time, or using the existing items in a wider range of contexts, as demonstrated in one of the pilot and interview studies we conducted [9,63]. However, these approaches carry other risks: the reasoning for “selecting dLMs” may differ from the reasoning for selecting digital technology, and test participants may acquire additional knowledge between testing points. The variation of the dLMs used in this study presents sufficiently different contexts using the same four items to assess “selecting dLMs”; however, the high Cronbach’s α values may be explained by the use of the same items [72]. Participants in Studies 1 and 2 applied similar reasoning for dLM1 (research question 2.3), which further evidences reliability despite the differing structure of the participant pool in both studies.
The validity of the four-item instrument is theoretically supported by the frameworks used in its conceptualization, namely the TPACK framework [16] and the competence model by Blömeke et al. [34]. In addition, the European DigCompEdu [44] and Grossman’s Anglo-American teacher practices [45] are utilized to establish the TCK, TPK, and TPACK reasoning model (see Figure 2 and Table 1). Moreover, involving peers, teacher educators, and in-service teachers in the genesis of the items and the smaller quantitative pilot [62,63] and the qualitative interview study [9] further substantiated the approach and its validity.
Regarding whether the four-item instrument assesses competence, particularly the competence facet of “selecting dLMs”, we concur with Tabach and Trgalová [43] that the instrument assesses something along the continuum of dispositional knowledge and competence, as articulated in the model by Blömeke et al. [34]. Items 1–3 assess dispositional knowledge, and item 4 evaluates the interpretation and perception of the dLM regarding its suitability for the learning content, learner age, and the special needs of learners, and thus competence.
But does the four-item instrument capture TPACK? The literature and the conceptualization support the notion that the instrument captures TCK, TPK, and thus TPACK reasoning. For obvious reasons, reasoning using just one TCK-x and one TPK-x reason, hence TPACK reasoning, does not encompass TPACK in its entirety. Even if we were to expand the scoring established in Figure 2 so that all TCK-x and TPK-x reasons (Table 1) were required to obtain the maximum score, one could still argue that it covers only the context of a single dLM and thus remains limited. However, similar limitations apply to other assessments, such as evaluating (fictitious) lesson plans, which also have a restricted scope. The study by Janssen et al. [73] cites three to ten selection opportunities for a lesson plan and four to eighteen justifications. The latter equates to an average of two justifications per selection, the same maximum number of reasons considered in our scoring (see Figure 2). Given the economics of pre-service teacher assessments, a decision must be made regarding the assessment type and the time required. We suggest that our four-item approach offers a more time-efficient method, even if it may not be as comprehensive as other, more time-intensive approaches, such as the evaluation of lesson plans.
To summarize, concerning research question 1.1 and the overall research objective one, it can be concluded that (1) the four-item instrument satisfies the necessary criteria for objectivity, reliability, and validity in evaluating “selecting dLMs”. It can also be concluded (2) that the instrument provides an economical way to assess a competence facet, namely “selecting dLMs”, and TPACK reasoning. Hence, we position “selecting dLMs” within the TPACK component (see Figure 1).

5.2. Discussion Research Objective Two: Empirical Investigation of Pre-Service Mathematics Teachers Using the New Instrument to Assess “Selecting dLMs”

Regarding research question 2.1, the results show that the instrument can distinguish between levels of teacher preparation based on academic progression. Pre-service teachers with fewer semesters of study score lowest and are less able to describe the learning content a dLM intends to deliver, place the learning content in the curriculum, and justify their selection of a dLM; that is, they exhibit a lower level of “selecting dLMs”. These results are consistent with the literature [9,60]. However, the ANOVA showed a small effect size (f = 0.26) regarding academic progression. This may be due to other factors being at play, such as the university curriculum and coursework [14,74], participants’ previous experiences [9,75], and their motivation and attitude toward technology [42,76]. In addition, other studies assessing digital competence using the TPACK framework, such as those evaluating lesson plans, also do not consistently find or report different levels of TPACK [19,77,78]. Furthermore, comparing the effect size of objectively assessing “selecting dLMs” with the effect size of self-reported TPACK [32], with respect to academic progression, showed differences. However, the differences were minor (f = 0.26 versus f = 0.23), indicating that the objectively evaluated “selecting dLMs” performance parallels the self-reported TPACK [32] performance regarding academic progression. This aligns with the goals of educational institutions to equip pre-service teachers with both competence and confidence in their abilities [12,33].
Further regarding research question 2.1, the objective results for pre-service teachers at both universities indicate room for improvement in “selecting dLMs”. Even the median scores of the subpopulations with the highest performance are less than half of the maximum. The low results for pre-service teachers at the beginning of their teacher education could be attributed to their entry dispositions [33]. Nevertheless, the overall low results for pre-service teachers in higher semesters could stem from the learning opportunities and outcomes of teacher education regarding “selecting dLMs.” In addition, some of the low results could also be explained by participants’ beliefs about teaching with technology [76], or their voluntary participation in the online tests [21,79].
Regarding research question 2.2, the results show that the statistical dependence of content knowledge (items 1–3) and sound reasoning (item 4), as revealed by the Spearman correlation, is consistent with the widely accepted notion that competence depends on (content) knowledge [11,36,41]. However, given the small effect size (rs = 0.17), this relationship may not be as strong as the literature assumes concerning “selecting dLMs” and the four-item instrument.
For research question 2.3, the type of reasoning when “selecting dLMs” for dLM1 revealed that only a few participants (6.3% in Study 2, see Figure 3) argued at the TPACK level or used multiple TCK or TPK reasons. Again, this could be caused by factors such as the university curriculum and coursework [14,74] or voluntary participation [21,79]. The latter is supported by the high percentage of the category of “no reasoning” (33% in Study 2, see Figure 3). However, it is noteworthy that there is a downward trend in the proportions of “no reasoning” and “generic reasoning”, paralleled by a rise in TCK-x and TPK-x reasoning, with the academic progression of participants (see Figure 4). Participants in semesters 7 and higher use a higher proportion of TCK-x than TPK-x reasoning, compared to pre-service teachers in semesters 1–2. This may be an outcome of their university education, which prepared them to be more cognizant of the implications of dLMs for their teaching subject, mathematics. The fact that TPACK reasoning, which requires the evaluation of dLMs from both a learning content (TCK) and a pedagogical (TPK) perspective, remained relatively flat (see Figure 3 and Figure 4) aligns with the findings by Gonzalez and González-Ruiz [80], who stated that TPACK does not develop spontaneously. Especially for pre-service teachers, it requires reinforcement and training to evaluate dLMs from both the learning content (TCK) and the pedagogical (TPK) perspective, and thus the TPACK perspective, as well as to consider situational and infrastructure factors.
However, one must be careful with these interpretations of the results, due to the composition of the sample and the reasoning possible with the specific dLM1 (see Appendix A.1). Also, the wording of the items could have been the cause for the low proportion of TPACK reasoning, as no particular type of reasoning, i.e., pedagogical (TPK) or learning content (TCK), or a minimum number of reasons, was specified.
Additionally, it is worth noting that none of the participants cited the unavailability of the digital technology required for using dLMs in classrooms as part of their reasoning, as has been reported in other studies. This could be because the wording of the items implied that the necessary infrastructure was available, or because of the participants’ understanding of the level of available infrastructure in education in the local context.
To summarize, regarding research questions 2.1–2.3 and the overall research objective, we can conclude the following for the new four-item instrument for assessing “selecting dLMs”. (1) The instrument enables the differentiation of different levels of “selecting dLMs”. (2) The relationship between content knowledge (items 1–3) and sound reasoning (item 4) is not as strong as suggested by the literature. (3) The analysis of the reasoning (item 4) enables the identification of areas in which educational institutions are effective and ineffective in their teaching regarding the facet of “selecting dLMs”.

5.3. Implications for Teacher Education and the Development of “Selecting dLMs”

Regarding the empirical results, we emphasized several findings in the previous section that educational institutions could take into consideration when developing “selecting dLMs” in teacher education. Additionally, we now provide further recommendations concerning training needs and the adaptability of the instrument to other subjects, as well as methodological recommendations for developing “selecting dLMs”.
Concerning the training needs for developing “selecting dLMs”, one requirement is to present dLMs as options in lesson planning in teacher education. If pre-service teachers are unaware of dLMs and their associated opportunities and risks, they might use inferior dLMs or not use dLMs in their future teaching [9]. Similarly, pre-service teachers should examine dLMs as part of their training to determine, for example, whether and why a dLM increases or decreases extraneous cognitive load and what additional scaffolding and support (subpopulations of) learners may need when using dLMs [3]. Furthermore, the application of dLMs in different teaching phases [9,10] should be examined, along with a comparison of digital and non-digital learning materials [80]. Here, comparisons of lecture times, scaffolding, and learner motivation in teaching approximations [45] are just some examples of possible learning opportunities. The automatic grading and feedback of dLMs require a different type of diagnostic competence in pre-service teachers. They need to diagnose and develop learning activities based on the automatic grading information from dLMs, while taking into account the automated feedback from dLMs [1]. In addition, an understanding of the TPACK framework in the context of their respective curricula will enable pre-service teachers to identify content areas where dLMs can support them in their future teaching. Pre-service teachers also need to learn how to decide whether they will enact dLMs as teachers themselves or provide learners with the opportunity to use them. Given the weak relationship between content knowledge and reasoning, the development of “selecting dLMs” may be more suited to a lecture focusing on technological pedagogical knowledge, complemented with the necessary technological content knowledge. Such a recommendation certainly depends on the specifics of an educational institution [33] and the identified gaps regarding the development of “selecting dLMs” and other technological advances. Ideally, dLMs and the development of “selecting dLMs” would be interwoven in multiple lectures [12,33].
Equally important in teacher education is adapting instruments to local requirements and other teaching subjects. The presented four-item instrument and scoring system allow for this, as the scoring of content knowledge (items 1–3) is evaluated within the context of the local curriculum and teaching subject. Furthermore, the scoring of reasoning (item 4) regarding “selecting dLMs” is structured along TCK, TPK, and TPACK. The TPK coding can be directly transferred to other teaching subjects, whereas the TCK coding applies to mathematics learning content and may require adaptations [20,27,52,53,80]. Also crucial for the longevity of an assessment instrument in teacher education is the ability to adjust the instrument and create variations so it can be reused without requiring retesting. A straightforward modification is to use different dLMs. When doing so, one should use dLMs that are complex enough to allow for various reasoning [9,10] (see Appendix A.1). Another option is, instead of inquiring about learner age and special educational needs, to provide these and other contextual factors along with a dLM, together with the items and the requirement to argue for or against selecting the dLM under those constraints. In addition, specifying the need to reason from a pedagogical and/or learning content perspective and requiring a minimum number of reasons are other possible adaptations.
Methodologically, when assessing “selecting dLMs”, it should be noted that “selecting dLMs” is only one facet of the digital competence pre-service teachers require. Other facets, such as decision-making in (digital) classroom settings [81,82], the creation of digital learning materials [41], the variation of problem tasks [83], problem-solving [84] with technology, and information literacy [85], are also essential and require attention in teacher education. Therefore, given these other vital facets of digital competence, our economical approach using just four items is well suited to being integrated with the assessments of these different facets in summative evaluations of the digital competence of pre-service teachers.

5.4. Limitations

Despite the theoretical support and empirical validation, our approach has several limitations. First, the empirical data are confined to mathematics pre-service teachers from two universities in Germany and Austria. The composition of participants in both studies varied and included smaller subsets, which limited the statistical analysis. This limitation particularly affected the analysis of the influence of cultural or local factors. Second, self-selected sampling was employed, which may have introduced volunteer bias [21]. Although we assessed reasoning rather than the decision to use dLMs, we believe biases stemming from social desirability [30,31] played a minor role, and inattentive responses [79] were filtered out. Nonetheless, we cannot entirely dismiss these effects or volunteer bias. Third, due to the available sample population of pre-service mathematics teachers, the dLMs primarily addressed simple mathematical concepts that are applicable to pre-service teachers in primary and special education.

6. Conclusions

In summary, notwithstanding these limitations, our economical approach of assessing the digital competence facet of “selecting dLMs” using only four items (two open and two closed) is more focused and efficient than the evaluation of lesson plans [23,24,25] and more objective than TPACK self-reports. In addition, the instrument enables the objective assessment of TPACK reasoning, as the reasoning is evaluated rather than the decision to use or not use a dLM.
Furthermore, the results obtained from the instrument enable diagnostics, and thus learning opportunities in teacher education, beyond those possible with self-reported TPACK. In addition, our approach can be adapted to different teaching subjects, educational stages, and local contexts. We have already validated the items with pre-service teachers at two universities in different countries. This validation supports their suitability for comparing the effectiveness of differing teacher training systems [12], especially university-based teacher training programs for differing levels of mathematics teaching.
This research on assessing “selecting dLMs” should be extended beyond the current investigation of GeoGebra dLMs for primary school mathematical content, for example to dLMs built with Desmos, Wolfram Alpha, Sagemath, or Geometer’s Sketchpad, to name only a few. For the generalizability of the model (see Figure 2 and Table 1), dLMs for other teaching subjects should also be examined. Additionally, research on practices [86,87] involving the use of dLMs in the classroom and various pedagogical contexts is another critical research focus to enhance pre-service teacher training.

Author Contributions

P.G. conceptualized, conducted the formal analysis, visualized data, prepared the original draft, and wrote the manuscript. P.G. and E.L. curated data. E.L., K.K. and B.R. reviewed and edited it. B.R. also provided supervision and secured funding. All authors have read and agreed to the published version of the manuscript.

Funding

The DEAL project financed the Open Access publication of the manuscript.

Institutional Review Board Statement

Ethical review and approval were waived for this study. In Germany, as stated by the German Research Association (DFG), the present study did not require the approval of an ethics committee because the research did not pose any threats or risks to the respondents. The study was not associated with high physical or emotional stress, and the respondents were informed about the study’s objectives in advance. At the beginning, participants were told that the data of this study would be used for research purposes only and that participation was voluntary in all cases. The General Data Protection Regulation (GDPR) was followed.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Description of dLM1 Used in Studies 1 and 2

For choosing dLMs for the two studies, we evaluated two aspects: (1) whether the dLMs are suitable for primary and secondary school students and also accessible to those with special educational needs, and thus appropriate for the participants of both studies; and (2) whether iconic and enactive non-digital alternative materials exist to explore the learning content, thus allowing for diverse reasoning regarding the selection.
Following this approach, we chose dLM1 [10,65], as illustrated in Figure A1. In this dLM, learners should discover the concept of a circle as a shape composed of all the points in a plane that are a certain distance (radius) from a certain point (the center). The dLM consists of an introductory text, a task, and various crosses representing a tree (dark-colored cross), a child named Maxi (dark-colored cross), and several other children (light-colored crosses). Learners should move the light-colored crosses to be the same distance from the tree as Maxi and discover that they form a circle (see Figure A1a). A circle appears once the learners push the “solution” button (see Figure A1b), reinforcing the concept of the circle and its defining property.
Figure A1. (a) Shows the starting point of the dLM, and (b) shows the dLM after pressing “solution”. For this publication, the dLM was translated from German and adapted for printing by the authors.
Possible arguments for or against selecting this dLM, using the TCK-x and TPK-x numbering of Table 1, are that learners can self-regulate (TPK-2), discover (TPK-3) the learning content, and check whether their solution is correct. The shortcomings of the dLM are that learners can place all crosses (children) on the same point and that not all crosses (children) need to be moved before pressing the solution button. The latter corresponds to increasing extraneous cognitive load (TCK-6). The knowledge of these shortcomings is a reason for not selecting the dLM. However, in contrast, one can argue that the dLM reduces the extraneous cognitive load, as the abstract concept of a circle and its defining property is made available in a virtual, enactive way (TCK-1). In addition, manually constructing and drawing a circle is outsourced to the dLM, reducing the extraneous cognitive load (TCK-4). Learners manipulate the crosses using their fingers or a mouse. Thus, using the dLM gives learners dynamic control of the presented objects, equivalent to TCK reasoning (TCK-3). The outlined reasoning, the learning content, and its place according to the curriculum are specific to this dLM1.

Appendix A.2. Sample Responses for dLM1-dLM4 for Items 1–3

Table A1. Scoring examples for dLM1–dLM4 for items 1–3 (learner age and special needs, and description of the learning content of a dLM). Translated from German into English by the authors. Columns refer to dLM1 (circle and property of the radius) [65], dLM2 (symmetry and axis of symmetry) [66], dLM3 (generate bar charts) [67], and dLM4 (arithmetic mean) [68].
Score 0 (no, or wrong description): dLM1 “geometry, shapes, figures”; dLM2 “spatial thinking”; dLM3 “visualization, sizes and quantities”; dLM4 “probability”.
Score 1 (description inappropriate for learner age or needs): dLM1 7–8th grade, no special educational needs, “circles and the properties of circles (radius...)”; dLM2 7–8th grade, no special educational needs, “symmetry”; dLM3 1–2nd grade, learning disabilities, “bar charts”; dLM4 7–8th grade, hearing impairment, “arithmetic mean”.
Score 2 (generic description appropriate for learner age or needs): dLM1 “the content is useful for introducing circles and their radius”; dLM2 “axis mirroring”; dLM3 “bar charts”; dLM4 “arithmetic mean”.
Score 3 (detailed description appropriate for learner age or needs): dLM1 “To introduce the circle. Pupils should be made aware that every point on the circle is exactly the same distance from the center.”; dLM2 “the material presents that the dimensions of the mirrored object remain the same size when mirrored at a straight line.”; dLM3 “It is about absolute frequencies and the creation of bar charts.”; dLM4 “The dLM evaluates students’ understanding of how to calculate the arithmetic mean, ... helping students to grasp the underlying principles of the calculation.”
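For readers who wish to script the scoring of items 1–3, the following minimal sketch shows how a rater’s judgments could be mapped to the 0–3 scale of Table A1. This is our own simplified operationalization of the rubric; the function name and boolean flags are hypothetical, and scoring in the studies was performed by human raters.

```python
def score_content_description(correct: bool, appropriate: bool, detailed: bool) -> int:
    """Sketch of the 0-3 rubric of Table A1 (simplified operationalization).

    correct     -- the description names the learning content of the dLM at all
    appropriate -- the description fits the selected learner age / special needs
    detailed    -- the description goes beyond a generic keyword (e.g., names the
                   defining property of the circle for dLM1)
    """
    if not correct:
        return 0  # no, or wrong description
    if not appropriate:
        return 1  # description inappropriate for learner age or needs
    return 3 if detailed else 2  # generic (2) vs. detailed (3) appropriate description


# Example (dLM1): a generic but appropriate answer such as "circles and their radius"
assert score_content_description(True, True, False) == 2
```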

Appendix A.3. Sample Responses for dLM1-dLM4 for Item 4

Table A2. Scoring examples for dLM1–dLM4 for item 4 (TCK, TPK, and TPACK reasoning). Translated from German into English by the authors. Columns refer to dLM1 (circle and property of the radius) [65], dLM2 (symmetry and axis of symmetry) [66], dLM3 (generate bar charts) [67], and dLM4 (arithmetic mean) [68].
Score 0 (no, or wrong reasoning): dLM1 “I don’t see much point in the dLM”; dLM2 “There are better visual examples”; dLM3 “Would rather do it analogue”; dLM4 “I don’t know”.
Score 1 (generic reasoning): dLM1 “a good concept that combines math with technology”; dLM2 “good representation of the principle of symmetry”; dLM3 “simple and nice”; dLM4 “a good task for calculating”.
Score 2 (1 × TCK or 1 × TPK reasoning): dLM1 “I wouldn’t use the learning environment in a setting where students need support. It requires a lot of cognitive skills to comprehend the task and be able to visualize it...” (TCK-6); dLM2 “It is fun and motivating for the children to watch how the butterfly can move its wings.” (TPK-1); dLM3 “It is a good activity to check whether students understand the representation of the bar chart without having to draw a chart themselves (saves time).” (TCK-4); dLM4 “...learners can all work self-regulated, there are solution hints...” (TPK-2).
Score 3 (2 × TCK or 2 × TPK reasoning): dLM1 “Learners self-regulate how it is possible to solve the task and thus learn an important property of the circle (radius) in a playful way” (TPK-1, TPK-3); dLM2 “Students learn about symmetries through play, students can learn about the properties of symmetries through experimentation, which would be more difficult without digital media” (TPK-1, TPK-3); dLM3 “...motivational, context accessible to all learners...” (TPK-1, TPK-4); dLM4 “...as everyone can work on the tasks at their own pace and it can be a motivating factor for the children to work digitally and see results immediately...” (TPK-1, TPK-2).
Score 4 (TCK and TPK, thus TPACK reasoning): dLM1 “I would use this learning environment because it is enactive and visual learning that actively engages students in the learning process. Through the concrete task of positioning x in a circle around the tree, the children experience geometric concepts such as radius, center, and circle shape. This not only promotes an understanding of abstract mathematical concepts, [...] the ability to recognize connections” (TCK-2, TPK-1); dLM2 “I wouldn’t use the learning environment... For example, the task is too abstract for learners or offers too few differentiated approaches to understand the core of axial symmetry. If there’s no way to adapt the task to different learning levels, some students might be overwhelmed or under-challenged.” (TCK-6, TPK-4); dLM3 “...without requiring learners to do a lot of drawing. Learners can easily experiment and check their solutions independently. Doing this on paper would waste lesson time and verification of results is time-consuming for teachers...” (TCK-4, TPK-3); dLM4 “It assesses students’ understanding of calculating the arithmetic mean. Students are forced to rethink their learned knowledge of calculation and can thus better reflect on the arithmetic mean calculation. However, I view this learning environment more as a test to determine the extent to which students have internalized the subject matter they have learned.” (TCK-4, TPK-2).
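Analogously, the 0–4 rubric of Table A2 can be expressed as a small scoring function once the TCK-x/TPK-x codes of a response have been assigned. The sketch below is a simplified operationalization for illustration (function and parameter names are hypothetical); in the studies, coding and scoring were carried out by human raters.

```python
def score_reasoning(codes: set[str], generic_only: bool = False) -> int:
    """Sketch of the 0-4 rubric of Table A2, based on the TCK-x/TPK-x codes
    assigned to an open-text response (simplified operationalization)."""
    tck = {c for c in codes if c.startswith("TCK")}
    tpk = {c for c in codes if c.startswith("TPK")}
    if not codes:
        return 1 if generic_only else 0   # generic reasoning (1) vs. no/wrong reasoning (0)
    if tck and tpk:
        return 4                          # TCK and TPK combined -> TPACK reasoning
    if len(tck) >= 2 or len(tpk) >= 2:
        return 3                          # two codes of the same component
    return 2                              # a single TCK-x or TPK-x code


# Example: the dLM1 response coded (TCK-2, TPK-1) in Table A2 scores 4.
assert score_reasoning({"TCK-2", "TPK-1"}) == 4
```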

References

  1. Drijvers, P.; Sinclair, N. The Role of Digital Technologies in Mathematics Education: Purposes and Perspectives. ZDM Math. Educ. 2023, 56, 239–248. [Google Scholar] [CrossRef]
  2. Roblyer, M.D.; Hughes, J.E. Integrating Educational Technology into Teaching: Transforming Learning Across Disciplines, 8th ed.; Pearson Education, Inc.: New York, NY, USA, 2019; ISBN 978-0-13-474641-8. [Google Scholar]
  3. Weinhandl, R.; Houghton, T.; Lindenbauer, E.; Mayerhofer, M.; Lavicza, Z.; Hohenwarter, M. Integrating Technologies Into Teaching and Learning Mathematics at the Beginning of Secondary Education in Austria. Eurasia J. Math. Sci. Tech. Ed. 2021, 17, 1–15. [Google Scholar] [CrossRef] [PubMed]
  4. Engelbrecht, J.; Llinares, S.; Borba, M.C. Transformation of the Mathematics Classroom with the Internet. ZDM Math. Educ. 2020, 52, 825–841. [Google Scholar] [CrossRef]
  5. OECD. The Future of Education and Skills: Education 2030; OECD: Paris, France, 2018. [Google Scholar]
  6. Weigand, H.-G.; Trgalova, J.; Tabach, M. Mathematics Teaching, Learning, and Assessment in the Digital Age. ZDM Math. Educ. 2024, 56, 525–541. [Google Scholar] [CrossRef]
  7. Heine, S.; König, J.; Krepf, M. Digital Resources as an Aspect of Teacher Professional Digital Competence: One Term, Different Definitions—A Systematic Review. Educ. Inf. Technol. 2022, 28, 3711–3738. [Google Scholar] [CrossRef]
  8. Clark-Wilson, A.; Robutti, O.; Thomas, M. Teaching with Digital Technology. ZDM Math. Educ. 2020, 52, 1223–1242. [Google Scholar] [CrossRef]
  9. Gonscherowski, P.; Rott, B. How Do Pre-/In-Service Mathematics Teachers Reason for or against the Use of Digital Technology in Teaching? Mathematics 2022, 10, 2345. [Google Scholar] [CrossRef]
  10. Lindenbauer, E.; Infanger, E.-M.; Lavicza, Z. Enhancing Mathematics Education through Collaborative Digital Material Design: Lessons from a National Project. Eur. J. Sci. Math. Educ. 2024, 12, 276–296. [Google Scholar] [CrossRef]
  11. Gonscherowski, P.; Rott, B. Selecting Digital Technology: A Review of TPACK Instruments. In Proceedings of the 46th Conference of the International Group for the Psychology of Mathematics Education, Haifa, Israel, 16–21 July 2023; Ayalon, M., Koichu, B., Leikin, R., Rubel, L., Tabach, M., Eds.; PME: Haifa, Israel, 2023; Volume 2, pp. 378–386. [Google Scholar]
  12. König, J.; Heine, S.; Kramer, C.; Weyers, J.; Becker-Mrotzek, M.; Großschedl, J.; Hanisch, C.; Hanke, P.; Hennemann, T.; Jost, J.; et al. Teacher Education Effectiveness as an Emerging Research Paradigm: A Synthesis of Reviews of Empirical Studies Published over Three Decades (1993–2023). J. Curric. Stud. 2023, 56, 371–391. [Google Scholar] [CrossRef]
  13. Schmidt, W.H.; Xin, T.; Guo, S.; Wang, X. Achieving Excellence and Equality in Mathematics: Two Degrees of Freedom? J. Curric. Stud. 2022, 54, 772–791. [Google Scholar] [CrossRef]
  14. Handal, B.; Campbell, C.; Cavanagh, M.; Petocz, P. Characterising the Perceived Value of Mathematics Educational Apps in Preservice Teachers. Math. Educ. Res. J. 2016, 28, 199–221. [Google Scholar] [CrossRef]
  15. Valtonen, T.; Leppänen, U.; Hyypiä, M.; Sointu, E.; Smits, A.; Tondeur, J. Fresh Perspectives on TPACK: Pre-Service Teachers’ Own Appraisal of Their Challenging and Confident TPACK Areas. Educ. Inf. Technol. 2020, 25, 2823–2842. [Google Scholar] [CrossRef]
  16. Mishra, P.; Koehler, M.J. Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teach. Coll. Rec. 2005, 108, 1017–1054. [Google Scholar] [CrossRef]
  17. Koehler, M.J.; Shin, T.S.; Mishra, P. How Do We Measure TPACK? Let Me Count the Ways. In A Research Handbook on Frameworks and Approaches; Ronau, R.N., Rakes, C.R., Niess, M.L., Eds.; IGI Global: Hershey, PA, USA, 2011; pp. 16–31. ISBN 978-1-60960-750-0. [Google Scholar]
  18. Schmid, M.; Brianza, E.; Mok, S.Y.; Petko, D. Running in Circles: A Systematic Review of Reviews on Technological Pedagogical Content Knowledge (TPACK). Comput. Educ. 2024, 214, 105024. [Google Scholar] [CrossRef]
  19. Karakaya Cirit, D.; Canpolat, E. A Study on the Technological Pedagogical Contextual Knowledge of Science Teacher Candidates across Different Years of Study. Educ. Inf. Technol. 2019, 24, 2283–2309. [Google Scholar] [CrossRef]
  20. von Kotzebue, L. Beliefs, Self-Reported or Performance-Assessed TPACK: What Can Predict the Quality of Technology-Enhanced Biology Lesson Plans? J. Sci. Educ. Technol. 2022, 31, 570–582. [Google Scholar] [CrossRef]
  21. Fabian, A.; Fütterer, T.; Backfisch, I.; Lunowa, E.; Paravicini, W.; Hübner, N.; Lachner, A. Unraveling TPACK: Investigating the Inherent Structure of TPACK from a Subject-Specific Angle Using Test-Based Instruments. Comput. Educ. 2024, 217, 105040. [Google Scholar] [CrossRef]
  22. Lachner, A.; Fabian, A.; Franke, U.; Preiß, J.; Jacob, L.; Führer, C.; Küchler, U.; Paravicini, W.; Randler, C.; Thomas, P. Fostering Pre-Service Teachers’ Technological Pedagogical Content Knowledge (TPACK): A Quasi-Experimental Field Study. Comput. Educ. 2021, 174, 104304. [Google Scholar] [CrossRef]
  23. Jin, Y.; Harp, C. Examining Preservice Teachers’ TPACK, Attitudes, Self-Efficacy, and Perceptions of Teamwork in a Stand-Alone Educational Technology Course Using Flipped Classroom or Flipped Team-Based Learning Pedagogies. J. Digit. Learn. Teach. Educ. 2020, 36, 166–184. [Google Scholar] [CrossRef]
  24. Mouza, C.; Nandakumar, R.; Yilmaz Ozden, S.; Karchmer-Klein, R. A Longitudinal Examination of Preservice Teachers’ Technological Pedagogical Content Knowledge in the Context of Undergraduate Teacher Education. Action Teach. Educ. 2017, 39, 153–171. [Google Scholar] [CrossRef]
  25. Pekkan, Z.T.; Ünal, G. Technology Use: Analysis of Lesson Plans on Fractions in an Online Laboratory School. In Proceedings of the 45th Conference of the International Group for the Psychology of Mathematics Education, Alicante, Spain, 18–23 July 2022; Fernández, C., Llinares, S., Gutiérrez, A., Planas, N., Eds.; PME: Alicante, Spain, 2022; Volume 4, p. 410, ISBN 978-84-1302-178-2. [Google Scholar]
  26. McCulloch, A.; Leatham, K.; Bailey, N.; Cayton, C.; Fye, K.; Lovett, J. Theoretically Framing the Pedagogy of Learning to Teach Mathematics with Technology. Contemp. Issues Technol. Teach. Educ. (CITE J.) 2021, 21, 325–359. [Google Scholar]
  27. Tseng, S.-S.; Yeh, H.-C. Fostering EFL Teachers’ CALL Competencies through Project-Based Learning. Educ. Technol. Soc. 2019, 22, 94–105. [Google Scholar]
  28. Revuelta-Domínguez, F.-I.; Guerra-Antequera, J.; González-Pérez, A.; Pedrera-Rodríguez, M.-I.; González-Fernández, A. Digital Teaching Competence: A Systematic Review. Sustainability 2022, 14, 6428. [Google Scholar] [CrossRef]
  29. Yeh, Y.; Hsu, Y.; Wu, H.; Hwang, F.; Lin, T. Developing and Validating Technological Pedagogical Content Knowledge—Practical TPACK through the Delphi Survey Technique. Br. J. Educ. Tech. 2014, 45, 707–722. [Google Scholar] [CrossRef]
  30. Grimm, P. Social Desirability Bias. In Wiley International Encyclopedia of Marketing; Sheth, J., Malhotra, N., Eds.; Wiley: Hoboken, NJ, USA, 2010; ISBN 978-1-4051-6178-7. [Google Scholar]
  31. Safrudiannur. Measuring Teachers’ Beliefs Quantitatively: Criticizing the Use of Likert Scale and Offering a New Approach; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2020. [Google Scholar]
  32. Schmid, M.; Brianza, E.; Petko, D. Self-Reported Technological Pedagogical Content Knowledge (TPACK) of Pre-Service Teachers in Relation to Digital Technology Use in Lesson Plans. Comput. Hum. Behav. 2021, 115, 106586. [Google Scholar] [CrossRef]
  33. Kaiser, G.; König, J. Competence Measurement in (Mathematics) Teacher Education and Beyond: Implications for Policy. High. Educ. Policy 2019, 32, 597–615. [Google Scholar] [CrossRef]
  34. Blömeke, S.; Gustafsson, J.-E.; Shavelson, R.J. Beyond Dichotomies: Competence Viewed as a Continuum. Z. Psychol. 2015, 223, 3–13. [Google Scholar] [CrossRef]
  35. Deng, Z. Powerful Knowledge, Educational Potential and Knowledge-Rich Curriculum: Pushing the Boundaries. J. Curric. Stud. 2022, 54, 599–617. [Google Scholar] [CrossRef]
  36. Yang, X.; Deng, J.; Sun, X.; Kaiser, G. The Relationship between Opportunities to Learn in Teacher Education and Chinese Preservice Teachers’ Professional Competence. J. Curric. Stud. 2024, 1–19. [Google Scholar] [CrossRef]
  37. Schleicher, A. PISA 2022 Insights and Interpretations; OECD: Paris, France, 2023. [Google Scholar]
  38. Hattie, J. Visible Learning, the Sequel: A Synthesis of over 2,100 Meta-Analyses Relating to Achievement, 1st ed.; Routledge: New York, NY, USA, 2023; ISBN 978-1-0719-1701-5. [Google Scholar]
  39. Koehler, M.J.; Mishra, P.; Cain, W. What Is Technological Pedagogical Content Knowledge (TPACK)? J. Educ. 2013, 193, 13–19. [Google Scholar] [CrossRef]
  40. Shulman, L.S. Those Who Understand: Knowledge Growth in Teaching. Educ. Res. 1986, 15, 4–14. [Google Scholar] [CrossRef]
  41. Reinhold, F.; Leuders, T.; Loibl, K.; Nückles, M.; Beege, M.; Boelmann, J.M. Learning Mechanisms Explaining Learning with Digital Tools in Educational Settings: A Cognitive Process Framework. Educ. Psychol. Rev. 2024, 36, 14. [Google Scholar] [CrossRef]
  42. Mishra, P.; Warr, M. Contextualizing TPACK within Systems and Cultures of Practice. Comput. Hum. Behav. 2021, 117, 106673. [Google Scholar] [CrossRef]
  43. Tabach, M.; Trgalová, J. Teaching Mathematics in the Digital Era: Standards and Beyond. In STEM Teachers and Teaching in the Digital Era; Ben-David Kolikant, Y., Martinovic, D., Milner-Bolotin, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 221–242. ISBN 978-3-030-29395-6. [Google Scholar]
  44. Redecker, C.; Punie, Y. Digital Competence of Educators; Publications Office of the European Union: Seville, Spain, 2017. [Google Scholar] [CrossRef]
  45. Grossman, P.L. Teaching Core Practices in Teacher Education; Core Practices in Education Series; Harvard Education Press: Cambridge, MA, USA, 2018; ISBN 978-1-68253-187-7. [Google Scholar]
  46. Anderson, S.; Griffith, R.; Crawford, L. TPACK in Special Education: Preservice Teacher Decision Making While Integrating iPads into Instruction. Contemp. Issues Technol. Teach. Educ. (CITE J.) 2017, 17, 97–127. [Google Scholar]
  47. Bonafini, F.C.; Lee, Y. Investigating Prospective Teachers’ TPACK and Their Use of Mathematical Action Technologies as They Create Screencast Video Lessons on iPads. TechTrends Link. Res. Pract. Improv. Learn. 2021, 65, 303–319. [Google Scholar] [CrossRef]
  48. Hillmayr, D.; Ziernwald, L.; Reinhold, F.; Hofer, S.I.; Reiss, K.M. The Potential of Digital Tools to Enhance Mathematics and Science Learning in Secondary Schools: A Context-Specific Meta-Analysis. Comput. Educ. 2020, 153, 103897. [Google Scholar] [CrossRef]
  49. Morgan, C.; Kynigos, C. Digital Artefacts as Representations: Forging Connections between a Constructionist and a Social Semiotic Perspective. Educ. Stud. Math. 2014, 85, 357–379. [Google Scholar] [CrossRef]
  50. Moreno-Armella, L.; Hegedus, S.J.; Kaput, J.J. From Static to Dynamic Mathematics: Historical and Representational Perspectives. Educ. Stud. Math. 2008, 68, 99–111. [Google Scholar] [CrossRef]
  51. Sweller, J.; van Merriënboer, J.J.G.; Paas, F. Cognitive Architecture and Instructional Design: 20 Years Later. Educ. Psychol. Rev. 2019, 31, 261–292. [Google Scholar] [CrossRef]
  52. Pusparini, F.; Riandi, R.; Sriyati, S. Developing Technological Pedagogical Content Knowledge (TPACK) in Animal Physiology. J. Phys. Conf. Ser. 2017, 895, 012052. [Google Scholar] [CrossRef]
  53. Solvang, L.; Haglund, J. How Can GeoGebra Support Physics Education in Upper-Secondary School—A Review. Phys. Educ. 2021, 56, 055011. [Google Scholar] [CrossRef]
  54. Turan, Z.; Karabey, S.C. The Use of Immersive Technologies in Distance Education: A Systematic Review. Educ. Inf. Technol. 2023, 28, 16041–16064. [Google Scholar] [CrossRef]
  55. Rüth, M.; Breuer, J.; Zimmermann, D.; Kaspar, K. The Effects of Different Feedback Types on Learning with Mobile Quiz Apps. Front. Psychol. 2021, 12, 665144. [Google Scholar] [CrossRef]
  56. Drijvers, P. Digital Technology in Mathematics Education: Why It Works (Or Doesn’t). In Selected Regular Lectures from the 12th International Congress on Mathematical Education; Cho, S.J., Ed.; Springer International Publishing: Cham, Switzerland, 2015; pp. 135–151. ISBN 978-3-319-17186-9. [Google Scholar]
  57. Gerhard, K.; Jäger-Biela, D.J.; König, J. Opportunities to Learn, Technological Pedagogical Knowledge, and Personal Factors of Pre-Service Teachers: Understanding the Link between Teacher Education Program Characteristics and Student Teacher Learning Outcomes in Times of Digitalization. Z. Erzieh. 2023, 26, 653–676. [Google Scholar] [CrossRef]
  58. Drijvers, P.; Ball, L.; Barzel, B.; Heid, M.K.; Cao, Y.; Maschietto, M. Uses of Technology in Lower Secondary Mathematics Education: A Concise Topical Survey; Kaiser, G., Ed.; ICME-13 Topical Surveys; Springer International Publishing: Cham, Switzerland, 2016; ISBN 978-3-319-33665-7. [Google Scholar]
  59. Molenaar, I.; Boxtel, C.; Sleegers, P. Metacognitive Scaffolding in an Innovative Learning Arrangement. Instr. Sci. 2011, 39, 785–803. [Google Scholar] [CrossRef]
  60. Rott, B. Inductive and Deductive Justification of Knowledge: Epistemological Beliefs and Critical Thinking at the Beginning of Studying Mathematics. Educ. Stud. Math. 2021, 106, 117–132. [Google Scholar] [CrossRef]
  61. Guillén-Gámez, F.D.; Mayorga-Fernández, M.J.; Bravo-Agapito, J.; Escribano-Ortiz, D. Analysis of Teachers’ Pedagogical Digital Competence: Identification of Factors Predicting Their Acquisition. Tech. Knowl. Learn. 2021, 26, 481–498. [Google Scholar] [CrossRef]
  62. Gonscherowski, P.; Rott, B. Measuring Digital Competencies of Pre-Service Teachers-a Pilot Study. In Proceedings of the 44th Conference of the International Group for the Psychology of Mathematics Education, Khon Kaen, Thailand, 19–22 July 2021; Volume 1, p. 143. [Google Scholar]
  63. Gonscherowski, P.; Rott, B. Instrument to Assess the Knowledge and the Skills of Mathematics Educators’ Regarding Digital Technology. In Proceedings of the Beiträge zum Mathematikunterricht 2022; WTM: Frankfurt, Germany, 2022; Volume 3, p. 1424. [Google Scholar]
  64. GeoGebra Team. Classroom Resources. Available online: https://www.geogebra.org/materials (accessed on 23 May 2023).
  65. FLINK. Maxi und der Baum—GeoGebra. Available online: https://www.geogebra.org/m/a4pppe7a (accessed on 10 July 2021).
  66. Schüngel, M. Bewege Den Schmetterling—GeoGebra. Available online: https://www.geogebra.org/m/zrj2zcam (accessed on 28 February 2022).
  67. FLINK. Lieblingssport—GeoGebra. Available online: https://www.geogebra.org/m/v4xuvmhf (accessed on 28 February 2023).
  68. FLINK. Welche Zahl Fehlt?—GeoGebra. Available online: https://www.geogebra.org/m/qqv3kxt6 (accessed on 28 February 2023).
  69. Cohen, J. Quantitative Methods in Psychology: A Power Primer. Psychol. Bull. 1992, 112, 155–159. [Google Scholar] [CrossRef] [PubMed]
  70. Cicchetti, D.V. Guidelines, Criteria, and Rules of Thumb for Evaluating Normed and Standardized Assessment Instruments in Psychology. Psychol. Assess. 1994, 6, 284–290. [Google Scholar] [CrossRef]
  71. Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977, 33, 159–174. [Google Scholar] [CrossRef]
  72. Taber, K.S. The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education. Res. Sci. Educ. 2018, 48, 1273–1296. [Google Scholar] [CrossRef]
  73. Janssen, N.; Knoef, M.; Lazonder, A.W. Technological and Pedagogical Support for Pre-Service Teachers’ Lesson Planning. Technol. Pedagog. Educ. 2019, 28, 115–128. [Google Scholar] [CrossRef]
  74. König, J.; Heine, S.; Jäger-Biela, D.; Rothland, M. ICT Integration in Teachers’ Lesson Plans: A Scoping Review of Empirical Studies. Eur. J. Teach. Educ. 2022, 47, 821–849. [Google Scholar] [CrossRef]
  75. Brianza, E.; Schmid, M.; Tondeur, J.; Petko, D. Uncovering Relations between Self-Reported TPACK and Objective Measures: Accounting for Experience. In Proceedings of Society for Information Technology & Teacher Education International Conference; Cohen, R.J., Ed.; Association for the Advancement of Computing in Education (AACE): Orlando, FL, USA, 2025; pp. 3114–3123. [Google Scholar]
  76. Thurm, D.; Barzel, B. Teaching Mathematics with Technology: A Multidimensional Analysis of Teacher Beliefs. Educ. Stud. Math. 2022, 109, 41–63. [Google Scholar] [CrossRef]
  77. Chen, W.; Tan, J.S.H.; Pi, Z. The Spiral Model of Collaborative Knowledge Improvement: An Exploratory Study of a Networked Collaborative Classroom. Int. J. Comput. Support. Collab. Learn. 2021, 16, 7–35. [Google Scholar] [CrossRef]
  78. Purwaningsih, E.; Nurhadi, D.; Masjkur, K. TPACK Development of Prospective Physics Teachers to Ease the Achievement of Learning Objectives: A Case Study at the State University of Malang, Indonesia. J. Phys. Conf. Ser. 2019, 1185, 012042. [Google Scholar] [CrossRef]
  79. Curran, P.G. Methods for the Detection of Carelessly Invalid Responses in Survey Data. J. Exp. Soc. Psychol. 2016, 66, 4–19. [Google Scholar] [CrossRef]
  80. Gonzalez, M.J.; González-Ruiz, I. Behavioural Intention and Pre-Service Mathematics Teachers’ Technological Pedagogical Content Knowledge. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 601–620. [Google Scholar] [CrossRef]
  81. Knievel, I.; Lindmeier, A.M.; Heinze, A. Beyond Knowledge: Measuring Primary Teachers’ Subject-Specific Competences in and for Teaching Mathematics with Items Based on Video Vignettes. Int. J. Sci. Math. Educ. 2015, 13, 309–329. [Google Scholar] [CrossRef]
  82. Weyers, J.; Kramer, C.; Kaspar, K.; König, J. Measuring Pre-Service Teachers’ Decision-Making in Classroom Management: A Video-Based Assessment Approach. Teach. Teach. Educ. 2024, 138, 104426. [Google Scholar] [CrossRef]
  83. Baumanns, L.; Pohl, M. Leveraging ChatGPT for Problem Posing: An Exploratory Study of Pre-Service Teachers’ Professional Use of AI. In Proceedings of the Mathematics Education in the Digital Age 4 (MEDA4), Bari, Italy, 3–6 September 2024. [Google Scholar]
  84. Cai, J.; Rott, B. On Understanding Mathematical Problem-Posing Processes. ZDM Math. Educ. 2024, 56, 61–71. [Google Scholar] [CrossRef]
  85. Trixa, J.; Kaspar, K. Information Literacy in the Digital Age: Information Sources, Evaluation Strategies, and Perceived Teaching Competences of Pre-Service Teachers. Front. Psychol. 2024, 15, 1336436. [Google Scholar] [CrossRef] [PubMed]
  86. Sen, M.; Demirdögen, B. Seeking Traces of Filters and Amplifiers as Pre-Service Teachers Perform Their Pedagogical Content Knowledge. Sci. Educ. Int. 2023, 34, 58–68. [Google Scholar] [CrossRef]
  87. Brungs, C.L.; Buchholtz, N.; Streit, H.; Theile, Y.; Rott, B. Empirical Reconstruction of Mathematics Teaching Practices in Problem-Solving Lessons: A Multi-Method Case Study. Front. Educ. 2025, 10, 1555763. [Google Scholar] [CrossRef]
Figure 1. TPACK framework adapted from Koehler et al. [39] and the facet of “selecting dLMs”.
Figure 2. A graphical representation of the structure and the scoring of the instrument.
Figure 3. Frequency of no, generic, single/multiple TCK-x, TPK-x, and TPACK reasoning used to justify the selection of dLM1 [65] in Study 2.
Figure 4. Proportions of no, generic, single TCK-x, TPK-x, and multiple TCK-x/TPK-x and TPACK reasoning used to justify the selection of dLM1 [65] in Study 2.
Table 1. Summary of TCK-x, TPK-x, and TPACK reasoning.
TCK-x:
- Different (TCK-1), real-world (TCK-2), or dynamic representation (TCK-3): reasoning that the selected dLM enables new, real-world, or dynamic ways of presenting the learning content that would not be possible with traditional material.
- Decrease in (TCK-4), modification of (TCK-5), or increase in (TCK-6) extraneous cognitive load: reasoning that the dLM supports learning (decreasing extraneous cognitive load), changes learning (modifying extraneous cognitive load), or hinders learning (increasing extraneous cognitive load).
TPK-x:
- Motivation (TPK-1): reasoning that dLMs increase learners’ motivation or engagement.
- Self-regulated learning (TPK-2): reasoning that dLMs support self-regulated learning.
- Try out, explore, discover (TPK-3): reasoning that dLMs support exploration of the learning content or discovery learning.
- Differentiation and inclusion (TPK-4): reasoning that dLMs support inclusion and differentiation.
- Teacher efficiency (TPK-5): reasoning that dLMs save lecture time or increase teacher efficiency by automating assessment or feedback.
- Distraction of learners (TPK-6): reasoning that dLMs distract learners from the intended learning objective.
TPACK:
- Combination of TCK-x and TPK-x: reasoning that includes both TCK-x and TPK-x reasoning.
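For documentation or scripted coding, the scheme of Table 1 can be represented as a simple lookup structure. This is an illustrative sketch with abridged descriptions; the variable name is ours and not part of the published instrument.

```python
# The coding scheme of Table 1 as a lookup structure (descriptions abridged).
REASONING_CODES = {
    "TCK-1": "different representation of the learning content",
    "TCK-2": "real-world representation of the learning content",
    "TCK-3": "dynamic representation of the learning content",
    "TCK-4": "decrease in extraneous cognitive load",
    "TCK-5": "modification of extraneous cognitive load",
    "TCK-6": "increase in extraneous cognitive load",
    "TPK-1": "motivation or engagement of learners",
    "TPK-2": "self-regulated learning",
    "TPK-3": "trying out, exploring, discovering",
    "TPK-4": "differentiation and inclusion",
    "TPK-5": "teacher efficiency (time savings, automated assessment or feedback)",
    "TPK-6": "distraction of learners",
}
```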
Table 2. Items designed for assessing “selecting dLMs”.
Item 1 (content knowledge; single-choice selection of a grade level 1): “For which learner age do you think the presented digital learning material is suitable?”
Item 2 (content knowledge; single-choice selection of (no) special need 2): “In your opinion, is the presented digital learning material suitable for learners with special educational needs? If so, select one or none.”
Item 3 (content knowledge; open-text item): “Describe the learning content for which you think the presented digital learning material is intended.”
Item 4 (TCK-x, TPK-x, and/or TPACK reasoning; open-text item): “For the specified grade level, special educational needs, and your description of the learning content of the presented dLM, justify why or why not you would select the presented digital learning material.”
1 To capture the learner’s age, we provided a list of grade levels in increments of two grade levels aligned with the local curriculum. The following options were given: 1–2, 3–4, 5–6, 7–8, 9–10, 11–13. 2 To capture the special needs of learners, we provided a list of special needs designations aligned with the local terminology in Austria and Germany. The following options were given: not suitable for learners with special needs, social and emotional needs, mental development needs, hearing and communication needs, motoric needs, learning needs, speech needs.
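The structure of the four items and their response options, taken from Table 2 and its footnotes, can be summarized as a small data structure. The sketch below is illustrative only and is not an export of the survey software used in the studies.

```python
# Sketch of the instrument structure described in Table 2 (wording abridged;
# response options taken from the table footnotes).
GRADE_BANDS = ["1-2", "3-4", "5-6", "7-8", "9-10", "11-13"]
SPECIAL_NEEDS = [
    "not suitable for learners with special needs",
    "social and emotional needs", "mental development needs",
    "hearing and communication needs", "motoric needs",
    "learning needs", "speech needs",
]
ITEMS = [
    {"no": 1, "type": "single-choice", "options": GRADE_BANDS},     # learner age
    {"no": 2, "type": "single-choice", "options": SPECIAL_NEEDS},   # special needs
    {"no": 3, "type": "open-text"},   # description of the learning content
    {"no": 4, "type": "open-text"},   # justification (TCK-x/TPK-x/TPACK reasoning)
]
```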
Table 3. Initial and adjusted sample sizes for both studies.
Study 1 (RQ 1.1): sample size 164; sample size per university (Austria/Germany) 1: 61/103; dLMs used in the study: dLM1–dLM4; mean processing time of the task: 9.84 min.
Study 2 (RQ 2.x): sample size 395; sample size per university (Austria/Germany) 1: 55/324; dLM used in the study: dLM1; mean processing time of the task: 4.29 min 2.
1 In Study 1, we included all responses to evaluate the robustness of the coding and scoring approach and to establish a lower time limit for carefully processing the items (2.50 min). In Study 2, we excluded all responses that did not meet the established minimum processing time. 2 The lower mean processing time per dLM in Study 1 (dLM1–dLM4), compared to Study 2 (dLM1 only), may be attributed to the task’s repetitiveness and the participants’ increasing familiarity with the items.
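The exclusion rule described in note 1 amounts to a simple threshold filter on processing time. The following minimal sketch illustrates this step; the field and function names are hypothetical and not taken from the original analysis scripts.

```python
# Sketch of the exclusion rule in note 1: responses processed faster than the
# minimum of 2.50 minutes are dropped before analysis (field names are hypothetical).
MIN_PROCESSING_MINUTES = 2.50

def filter_careless(responses):
    """Keep only responses whose processing time meets the minimum threshold."""
    return [r for r in responses if r["processing_minutes"] >= MIN_PROCESSING_MINUTES]
```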
Table 4. Descriptive statistics and ANOVA results for the external assessment of “selecting dLMs” by the number of semesters of study, and Kruskal–Wallis test results for self-reported TPACK.
External 1: Sem. 1–2 (n = 57), M = 1.37 a, SD = 1.43; Sem. 3–4 (n = 149), M = 2.12 b, SD = 1.60; Sem. 5–6 (n = 71), M = 2.27 b, SD = 1.55; Sem. ≥ 7 (n = 102), M = 2.65 b, SD = 1.51; F(3, 375) = 8.51, ηp2 = 0.06.
Self-report 2: Sem. 1–2, M = 3.12 c, SD = 0.86; Sem. 3–4, M = 3.27 c, SD = 0.73; Sem. 5–6, M = 3.44, SD = 0.72; Sem. ≥ 7, M = 3.55 d, SD = 0.76; χ2(3) = 11.99.
1 Items 1–4 for assessing “selecting dLMs” proposed in this manuscript, scale 0–7. 2 Schmid et al. [32] with a Likert scale (1 = strongly disagree to 5 = strongly agree). a,b Means without a common superscript differ; post hoc tests with Bonferroni correction (p < 0.011). c,d Means without a common superscript differ; post hoc tests with Dunn–Bonferroni (p < 0.007).
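The group comparisons reported in Table 4 correspond to a one-way ANOVA for the external assessment and a Kruskal–Wallis test for the self-reported TPACK ratings across the four semester groups, followed by Bonferroni-corrected post hoc comparisons (not shown here). The following sketch, using SciPy and NumPy, illustrates this type of analysis under those assumptions; it is not the code used to produce the reported results.

```python
import numpy as np
from scipy import stats

def compare_groups(external_by_group, self_report_by_group):
    """Sketch of the Table 4 analyses: one-way ANOVA (external assessment) and
    Kruskal-Wallis test (self-reported TPACK) across the four semester groups."""
    f_val, p_anova = stats.f_oneway(*external_by_group)   # e.g., F(3, 375)
    h_val, p_kw = stats.kruskal(*self_report_by_group)    # e.g., chi-square with df = 3

    # Partial eta squared for a one-way ANOVA equals SS_between / SS_total.
    grand = np.concatenate(external_by_group)
    ss_between = sum(len(g) * (np.mean(g) - grand.mean()) ** 2 for g in external_by_group)
    ss_total = ((grand - grand.mean()) ** 2).sum()
    eta_p2 = ss_between / ss_total

    return f_val, p_anova, eta_p2, h_val, p_kw
```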
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
