Implementing Digital Competencies in University Science Education Seminars Following the DiKoLAN Framework

Abstract: Prospective teachers must acquire subject-specific digital competencies to design contemporary lessons and to promote digital competencies among students themselves. The DiKoLAN framework (Digital Competencies for Teaching in Science Education) describes basic digital competencies for the teaching profession in the natural sciences precisely for this purpose. In this article, we describe the development, implementation, and evaluation of a university course based on DiKoLAN which promotes the digital competencies of science teachers. As an example, the learning module Data Processing in Science Education is presented, and its effectiveness is investigated. For this purpose, we used a questionnaire developed by the Working Group Digital Core Competencies to measure self-efficacy, which can also be used in the future to promote digital competencies among pre-service teachers. The course evaluation showed a positive increase in the students' self-efficacy expectations. Overall, the paper thus contributes to teacher education by offering the course as a best-practice example: a blueprint for designing new courses and for implementing a test instrument for a valid evaluation.


Introduction
More and more schools are equipped with a continuously improving digital infrastructure including school-wide wireless network access, school cloud storage, interactive whiteboards, video projectors, and devices such as computers, laptops, or tablet computers. This opens up many new opportunities but at the same time requires teachers to be trained in new or adapted competencies to fruitfully utilise these digital tools. These competencies are described in various frameworks such as UNESCO's ICT Competency Framework for Teachers [1], the ISTE Standards for Educators [2], or the European Framework for the Digital Competence of Educators (DigCompEdu) [3], all of which focus on slightly different aspects of the competence needed by teachers to make maximum use of the digital environment. In addition to those generic, non-subject-specific frameworks, the DiKoLAN framework (Digital Competencies for Teaching in Science Education) focuses on digital competence for teaching the natural sciences [4,5].
Despite belonging to the generation of so-called 'digital natives', today's young teachers need explicit instruction on how to productively use digital technology in schools [6,7]. Most researchers agree that digital technology needs to be integrated into teacher education curricula, and numerous strategies have been proposed in the literature to facilitate this effort [8]. To address the specific needs of science teachers, the DiKoLAN framework (Figure 1) provides a comprehensive guideline on the topics to be addressed [5]. This guideline has been used to design, teach, and evaluate a course for students in teacher education in the three natural sciences at the University of Konstanz. The aim of this research paper is to provide an overview of the current research on the DiKoLAN framework, as well as to present the design and the evaluation of a pre-service teacher training course tailored to foster the digital competencies described in DiKoLAN. Additionally, the investigation of the effectiveness of the individual learning modules offers a blueprint for future research on the effectiveness of university teacher training on the subject-specific use of ICT in science education.

Research following the DiKoLAN Framework
The DiKoLAN framework was first presented in 2020 by the Working Group Digital Core Competencies [4]. It was developed for Germany and Austria and later introduced in Switzerland [9]. The framework builds on initiatives to promote digitisation in schools and the digital competencies of prospective teachers, as well as on DigCompEdu [3], the TPACK framework [10,11], and the DPaCK model [12,13].
The curricular integration of essential digital competencies into the first phase of teacher education requires specific preliminary considerations. To be able to integrate ICT-related elements of future-proof education into the teaching practices of all faculty involved in teacher training at universities, basic digital competencies need to be structured in advance [14].
Based on core elements of the natural sciences, the authors of DiKoLAN propose seven central competency areas [15]: Documentation, Presentation, Communication/Collaboration, Information Search and Evaluation, Data Acquisition, Data Processing, and Simulation and Modelling (Figure 1). These seven central competency areas are framed by Technical Core Competencies and the Legal Framework. The unique feature of DiKoLAN is that the DPaCK-related competencies are described in great detail and take into account subject-specific, subject-didactic (e.g., [16,17]), and pedagogical perspectives from all three natural sciences (biology, chemistry, and physics).
The framework thus coordinates and structures university curricula [14,15], as has been demonstrated, e.g., for the competency area Presentation [18], using the example of low-cost photometry with a smartphone [19], or by means of a project on scientific work [20,21]. Such coordination makes cooperation between different universities, as suggested by Zimmermann et al. [22], possible without any significant difficulties.

[Figure: Structure of the seminar. Phase I: successive treatment of the competency areas; Phase II: design and coaching; Phase III: realisation; framed by a pre-test and a post-test. For each area of competency, the course covers teaching and learning theory and subject-didactic principles, methodological notes on the use in teaching situations, subject-specific references, an overview, comparison, and evaluation of available tools, and exercises on how to use the tools and how to integrate them into lessons.]

Introductory Module
In the first module, background information is given on the use of ICT in the science classroom, the current situation regarding digital media in schools is examined [23], and initial frameworks such as SAMR [24], ICAP [25], TPACK [10,11,26], and DPaCK [13] are presented and critically questioned. Moreover, the approach to the integration of digital media in the classroom is illuminated, and the didactic functions of digital media in science are explained [27].

Workshop Phase: Overview of Modules on Areas of Competencies
In the module on the competency area of Documentation (DOC), the data storage processes (from documentation to versioning to archiving) are scrutinised, the documentation of experiments with a digital worksheet is introduced [28], and the documentation of experiments by the students themselves using videos, called EXPlainistry [29], is presented. Since previous surveys suggest that advanced students already have basic knowledge in the field of documentation [30], the focus in this module is less on the technical aspects and more on the subject-specific context, questions of methods and digitality, and, above all, the integration of documentation techniques into teaching.
The module on the second competency area, Presentation (PRE), includes a discussion of the hardware available at schools for presentation and possible scenarios in which digital media are used for presentation. Theoretical principles of multimedia are presented, especially multimodality (which, despite its proven effectiveness, is surprisingly rarely mentioned in physics teacher journals [31]) and multicodality [32,33], as well as cognitive load theory [34]. Recommendations for action on text and image design [33] are presented. Since certain prior knowledge can also be assumed in this competency area [30], the focus is on presentation forms specific to the natural sciences and on methodological aspects.
The third module on Communication and Collaboration (COM) revolves around planning collaborative learning settings [35]. Tools for the collaborative editing of texts, mind maps, pin boards, wikis, online whiteboards, and learning management systems are presented and tried out. Finally, different accompanying communication channels between students and the teacher are discussed.
The Information Search and Evaluation (ISE) module focuses on the five steps of digital research using the IPS-I model [36]. Various scientific and science didactic databases are presented, and examples of different types of literature are examined. Since it can be assumed that advanced students have a basic background in this area of competency [30], the focus in this module is on methodological issues and integration into lesson planning.
In the module for Data Acquisition (DAQ), the possibilities of data acquisition are discussed, especially using a smartphone (e.g., [19-21,37-40]). Various options such as video analysis or taking measurements using an app are tried out. Experimentation in the Remote Lab is also introduced [41]. Furthermore, the necessary steps of teaching with digital data acquisition and the possibilities and challenges of teaching in this manner are discussed.
The penultimate module, Data Processing (DAP), presents different coding options for characters and numbers as well as typical problems that arise when importing data, which the students test using an iPad. The differences between pixel and vector graphics are discussed. The focus is on the structure of the formats, e.g., XML and MP4.
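To illustrate the kind of import problem addressed here, the following minimal Python sketch (not part of the course materials; the file contents and column names are made up) parses a German-locale csv export, in which a semicolon separates the columns and numbers use a decimal comma:

```python
import csv
import io

# A made-up German-locale CSV export: semicolon delimiter, decimal comma.
# This is a typical stumbling block when importing measurement data.
raw = "t in s;T in °C\n0,0;85,4\n60,0;79,1\n120,0;73,6\n"

def read_german_csv(text):
    """Parse a semicolon-delimited CSV with decimal commas into floats."""
    reader = csv.reader(io.StringIO(text), delimiter=";")
    header = next(reader)
    rows = [[float(cell.replace(",", ".")) for cell in row] for row in reader]
    return header, rows

header, rows = read_german_csv(raw)
print(header)   # ['t in s', 'T in °C']
print(rows[0])  # [0.0, 85.4]
```

A spreadsheet application handles this via its locale settings; when importing programmatically, the decimal comma must be converted explicitly, as above.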
In the last module, digital tools for Simulation and Modelling (SIM) are presented along with the competence expectations listed in DiKoLAN and tested in the exercises. Tools are discussed for which empirical findings are available [42-46] or which have already been successfully integrated into other DiKoLAN-oriented teaching concepts [47,48]. The tool types presented are spreadsheet programs, modelling systems, computer simulations, stop-motion programs [49], and programs for digital modelling and animation. In addition, Augmented Reality (AR) is discussed as a technique for representing models [50-52].

Free-Work Phase: Designing a Lesson Plan
In the free work phase, teams of two students design a lesson on a scenario of their own choosing. In doing so, they are asked to consider what the benefit of using digital media in the learning unit would be for the students and what skills the teaching staff need. During the process, the students write a seminar paper in which they present the scenario and the associated planning and also explain their approach and why they considered the planning to be didactically appropriate. Throughout the 4 weeks before the exam, the supervisors are available for individual coaching, which is used by students to varying degrees. All materials needed for the lesson are to be created and turned in, even if the lesson is not completely implemented.

Presenting the Lesson Plan in a Mock Trial
Finally, the students present their plans at a block meeting. Each participant in the seminar plenum is asked to try out the digital elements of the teaching scenario for themselves as completely as possible. The supervisors base their evaluation on a set of guiding questions; both the presentations and the written assignments, which have to be handed in before the first presentation, are considered in the evaluation.

Design of the Individual Modules (Using the Example of Data Processing)
For each workshop, the areas to be covered in the module are selected based on the competency expectations defined in the DiKoLAN framework. When deriving the learning objectives of a module from the orientation framework, three categories were distinguished: main learning objectives, secondary learning objectives, and non-addressed competency expectations (see Figure 3 for an example). Using the area of data processing as an example, the majority of competencies on the level of Name and Describe are covered in a lecture. For instance, relevant software is introduced, and data types common in the context of teaching the natural sciences are shown. Additionally, typical scenarios for the application of digital data processing appropriate to the school curriculum are shown. To accompany this part, in-lecture and at-home activities are designed to allow for the timely application of the topics learned. This includes working through an example from data processing: exporting data from digital data acquisition applications and importing the data into spreadsheet software, where the data are manipulated by performing various analyses.
To get a first impression of the students' previous experience, the students are asked in the introductory phase to identify which data processing software they have used before and which data manipulations they already know. In the next step, relevant software is introduced, and data types common in the context of teaching the natural sciences are shown. For this purpose, the export and import of data are presented in the first input phase using the example of csv files in the MeasureAPP app [53]. In the following phase, common issues related to tablets and data storage locations are addressed. In this context, the difference between csv and Excel files is highlighted. Examples are used to introduce the integer and float number formats. In particular, the coding of characters and numbers is discussed in this context. At the end of the first input phase, the visualisation of data using Excel [54] is demonstrated.

[Figure 3: Competency expectations for Data Processing (DAP) with main and secondary learning objectives marked, including, e.g., DAP.T.N2 (naming scenarios for the use of data processing in subject-relevant teaching-learning situations), DAP.M.N1-N3 (naming necessary prior knowledge of the learners, methodological aspects such as time, form of organisation, and equipment and material requirements, and points to be observed when processing personal data), DAP.C.N1-N2 (naming quasi-established procedures and subject-specific scientific scenarios of data processing, e.g., extraction of curve maxima, colorimetry, measurement uncertainties, and concentration calculations), and DAP.S.N1 (naming data types, encodings, and associated data or file formats).]

Using the integrated microphone of the iPad, the students record an audio oscilloscope of a sung vowel sound in individual work during the practice phase using the phyphox app [55]. They then export the measurement data as a csv file and import it into Excel to display the data graphically.
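One analysis students might perform on such a recording is estimating the fundamental frequency of the sung vowel. The following hypothetical Python sketch stands in for that step: synthetic sine samples replace the exported csv data, and the 220 Hz tone and 48 kHz sampling rate are assumed values, not course data. It estimates the frequency from the rising zero crossings of the signal:

```python
import math

# Synthetic stand-in for an exported phyphox audio recording:
# a 220 Hz sine sampled at 48 kHz for one second (assumed values).
rate = 48000
freq = 220.0
samples = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate)]

def estimate_frequency(samples, rate):
    """Estimate the fundamental frequency from rising zero crossings."""
    crossings = [
        n for n in range(1, len(samples))
        if samples[n - 1] < 0 <= samples[n]
    ]
    if len(crossings) < 2:
        return None
    # Average spacing between rising zero crossings is one period.
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1) / rate
    return 1.0 / period

print(round(estimate_frequency(samples, rate), 1))
```

A real vowel recording contains overtones, so in class the students read the period off the plotted waveform instead; the zero-crossing count is just one simple programmatic equivalent.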
In the second input phase, ways of calculating new data in Excel and using spreadsheets to analyse data are demonstrated, including the aspects of measurement uncertainties, statistics, and regression. The instruction is concluded with an introduction to the differentiation of formats for images into vector and pixel graphics and to the structure of video formats as containers.
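A minimal Python sketch of the kind of spreadsheet analysis demonstrated here: a least-squares line fit that also reports the standard error of the slope, comparable to a spreadsheet's LINEST output. The measurement series is made up for illustration:

```python
import math

def linear_fit(xs, ys):
    """Least-squares line fit returning slope, intercept, and the
    standard error of the slope (as a spreadsheet's LINEST would)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    s2 = sum(r ** 2 for r in residuals) / (n - 2)  # residual variance
    slope_err = math.sqrt(s2 / sxx)
    return slope, intercept, slope_err

# Hypothetical measurement series: distance (m) vs. time (s).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.1, 3.9, 6.2, 7.9]
slope, intercept, err = linear_fit(xs, ys)
print(f"slope = {slope:.2f} ± {err:.2f}")
```

In the seminar itself these quantities are computed with spreadsheet formulas; the sketch only makes the underlying statistics explicit.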
In a final step, the challenges students have encountered so far during the acquisition and processing of measurement data are discussed, and possible solutions are shown.
As a follow-up task, the students record a series of measurements of a cooling teacup, from which they determine the mean decay constant using a spreadsheet program of their choice.
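Under Newton's law of cooling, T(t) = T_env + ΔT0·exp(−kt), the decay constant k can be recovered with a log-linear fit of ln(T − T_env) against t. The following Python sketch illustrates this; the room temperature, initial excess temperature, and k are assumed values standing in for the students' own data:

```python
import math

# Hypothetical cooling data: tea at 85 °C in a 20 °C room, k = 0.002 / s.
t_env, k_true = 20.0, 0.002
times = [0, 300, 600, 900, 1200, 1500]  # s
temps = [t_env + 65.0 * math.exp(-k_true * t) for t in times]

def decay_constant(times, temps, t_env):
    """Fit ln(T - T_env) = ln(dT0) - k*t and return k (in 1/s)."""
    xs = times
    ys = [math.log(T - t_env) for T in temps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # the fitted slope is -k

print(round(decay_constant(times, temps, t_env), 4))  # → 0.002
```

In the task itself, the students perform the equivalent steps (computing ln(T − T_env) in a helper column and fitting a line) in the spreadsheet program of their choice.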
With these initial practical experiences and theoretical foundations from the areas of Name and Describe, the students then set about working out teaching scenarios in the further course of the seminar to consolidate and extend the skills they have acquired in each module.

Evaluation
To investigate the effectiveness of the newly designed teaching-learning modules, the change in the participants' self-efficacy expectations is used as a measure of effectiveness and is measured with an online test provided by the Working Group Digital Core Competencies [5]. The question to be answered is thus: Is it possible to measure a significant increase in students' self-efficacy expectations in relation to the competencies covered in the course? Due to the structure of the seminar, a large effect on students' self-efficacy expectations is assumed for the main learning objectives, a medium effect for the secondary learning objectives, and no effect for the areas not addressed.
The measurement of self-efficacy expectation was chosen for two reasons. First, it is precisely self-efficacy expectation that is influenced by experiences during studies and thus ultimately also has an effect on motivational orientation towards the later use of ICT and digital media in one's own teaching [30]. Second, the subject-specific self-efficacy expectation can be assessed much more economically than a specific competency itself [31]. Accordingly, most of the digital competence questionnaires published so far measure self-efficacy expectations, e.g., [5,56-62].
The individual items are based on the competence expectations contained in DiKoLAN and are designed as Likert items. The participants indicate on an eight-point scale their agreement with a statement that describes their ability in the corresponding competence expectation, e.g.:
• "I can name several computer-aided measurement systems developed for school use (e.g., for ECG, pH, temperature, current, voltage or motion measurements),"
• "I can describe several systems of wireless mobile sensors for digital data acquisition with mobile devices such as smartphones and tablets, including the necessary procedure with reference to current hardware and software," or
• "I can perform measurement acquisition using a system of wireless mobile sensors for digital measurement acquisition with mobile devices such as smartphones and tablets."
The items of the questionnaire can each be directly assigned to a single competence expectation. The naming of the items in the data set created in the survey follows the nomenclature in the tables with competence expectations listed in DiKoLAN (Figure 4).
Many competency expectations cover several individual aspects or are described using several examples. In such cases, several items were created, which, taken together, cover the competence expectation as a whole.
The questionnaire was implemented as an online survey with LimeSurvey [63] and made available to the participants of the course as a pre-test in the week before the synchronous seminar session via individual e-mail invitation. Seven days later, the students received the same questionnaire again as a post-test.

[Figure 4: Nomenclature of the competence expectations listed in DiKoLAN [4,5]. Adapted with permission from Ref. [4]. © 2020 Joachim Herz Stiftung.]
It was hypothesised that the participants would have a higher self-efficacy expectation in the competency areas addressed in the respective modules after the intervention than before. It is also assumed that large effects can be measured for the main learning objectives, whereas at least medium effects can be measured for the secondary learning objectives, the acquisition of which can only be attributed to the brief learning time in the seminar.

Sample
The participants included N = 16 pre-service German Gymnasium teachers for science subjects who participated in the newly designed seminar on promoting digital core competencies for teaching in science education according to the DiKoLAN framework. The course is designed for Master's students in the 1st or 2nd semester but is also open to Bachelor's students in the 5th or 6th semester. More than three quarters of the students participated in the voluntary pre- and post-test surveys. However, three participants did not complete both surveys. Hence, their data were removed, resulting in a final total of n = 13 participants (5 male, 8 female, aged M = 23.5 (SD = 2.9) years). These 13 participants indicated that they studied the following science subjects (multiple answers possible; usually, students must study two subjects): 10 Biology (76.9%), 6 Chemistry (46.2%), 1 Physics (7.7%), and 1 Mathematics (7.7%). They were attending the following semesters at the time of the study: 5th BEd (1; 7.7%), 6th BEd (1; 7.7%), 1st MEd (6; 46.2%), 2nd MEd (4; 30.8%), or 3rd MEd (1; 7.7%).

Statistical Analysis
The responses were analysed using the R statistical software [64]. Means and standard deviations were computed for each item in the pre-tests and post-tests. Wilcoxon signed-rank tests were conducted for each pair of pre-test and post-test items to test for an increase in item means.
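The analysis was run in R; purely as an illustration, the following Python sketch reproduces the core computation: a Wilcoxon signed-rank test in its normal approximation (omitting the tie correction for the variance) together with the effect size r = z/√n used below. The paired scores are invented, not study data:

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank test (normal approximation, no tie correction)
    for paired data: returns W+, an approximate z, and r = z / sqrt(n)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero diffs
    n = len(diffs)
    # Rank the absolute differences, assigning mid-ranks to ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        mid = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for idx in ordered[i:j + 1]:
            ranks[idx] = mid
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    return w_plus, z, z / math.sqrt(n)

# Invented eight-point Likert scores for n = 13 paired observations.
pre = [3, 4, 2, 5, 4, 3, 2, 4, 3, 5, 4, 3, 2]
post = [5, 6, 4, 6, 5, 5, 3, 6, 4, 7, 5, 5, 4]
w, z, r = wilcoxon_signed_rank(pre, post)
print(f"W+ = {w}, z = {z:.2f}, r = {r:.2f}")
```

In R, the same test is available as `wilcox.test(post, pre, paired = TRUE)`; the sketch only makes the rank-sum and effect-size arithmetic visible.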
The results of the descriptive and inferential statistics are listed in tables in Appendices A-G. As an example, the results for the competency area Data Processing (DAP) are also presented here. Table 1 shows the results for the main learning objectives, and the results for the secondary learning objectives are listed in Table 2 (for an overview, the main and secondary learning objectives are marked in the respective table of competence expectations, Figure 3). If several items of the questionnaire can be assigned to a competence expectation listed in DiKoLAN, a mean effect size averaged over the associated Wilcoxon signed-rank tests (in italics) is given in addition to the effect sizes of the individual Wilcoxon signed-rank tests. For example, the competency expectation DAP.S.N2 ("Name digital tools [...]") is assessed with seven items, DAP.S.N2a-g, which reflect the individual examples mentioned in DiKoLAN (e.g., "Filtering", "Calculation of new variables", ...). The results show an increase in self-efficacy expectations in all of the competency expectations addressed as main learning objectives in the module. All of the tested hypotheses can be accepted.

Data Processing (DAP)
According to Cohen, effect sizes determined as the correlation coefficient r can be roughly interpreted as follows: 0.10 → small effect, 0.30 → medium effect, and 0.50 → large effect [65] (p. 532). However, it must be taken into account that the interpretation of effect sizes should always depend on the context [65]. Since the learning goals addressed in the intervention and the tested self-efficacy expectations were both derived from the competency expectations defined in DiKoLAN and thus correlate very highly, larger overall effects are to be expected than in other studies. Therefore, we raise the thresholds for the classification of the observed effects into small, medium, and large effects for the following evaluations as follows: 0.20 → small effect, 0.40 → medium effect, and 0.60 → large effect. The effect sizes of the intervention in this area are always 0.62 or higher if the mean effect size is considered for sub-competencies broken down into several items. Hence, the hypothesised growth in self-efficacy can be observed with large effects of the intervention.
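The raised classification rule can be stated compactly; this small Python snippet simply encodes the thresholds adopted above:

```python
def classify_effect(r, thresholds=(0.20, 0.40, 0.60)):
    """Classify an effect size r using the raised thresholds adopted in
    the text: 0.20 small, 0.40 medium, 0.60 large."""
    small, medium, large = thresholds
    if r >= large:
        return "large"
    if r >= medium:
        return "medium"
    if r >= small:
        return "small"
    return "negligible"

print(classify_effect(0.62))  # → large
```

Cohen's original thresholds would be recovered with `thresholds=(0.10, 0.30, 0.50)`.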
The results of the Wilcoxon signed-rank tests for the secondary learning goals show significant increases in self-efficacy for most of the hypotheses tested. Where single hypotheses must be rejected, only partial aspects of a competence expectation were addressed, as can be expected for a secondary learning objective. The averaged effect sizes mostly show medium effects of the intervention on self-efficacy expectations in these areas, as hypothesised.
For comparison, the mean values of the self-efficacy expectations in sub-competencies not explicitly addressed in the course are listed and examined for differences in mean values (Table 3). As expected, no significant differences are observed between the two test times.

Table 3. Results of Wilcoxon signed-rank tests for competencies in the area of Data Processing (DAP) NOT explicitly addressed in the respective module and thus NOT hypothesised to change during the intervention (for comparison). n = 13. S: Special Tools, C: Content-specific Context, M: Methods/Digitality, T: Teaching, N: Name, D: Describe, A: Use/Apply.

For a better overview, the averaged effect sizes are plotted in Table 4.

Table 4. Overview of (average) effect sizes of the effects of the intervention on the competence expectations in the area of Data Processing (DAP). n = 13. S: Special Tools, C: Content-specific Context, M: Methods/Digitality, T: Teaching, N: Name, D: Describe, A: Use/Apply. Note: main learning objectives (bold magenta), secondary learning goals (italic cyan), and non-addressed competencies (yellow). * The average effect size is given for competencies assessed with more than one item. • Not tested.

Documentation (DOC)
Due to the students' previous experience, which is expected to be well developed (the comparatively high item means in the pre-test support this assumption), the focus in this module is less on the technical aspects and more on the areas of Teaching, Methods/Digitality, and Content-specific Context (Figure A1). For the main learning objectives, large effects of the intervention are observed, in line with the expectations (Table A1). As expected, mostly medium (average) effects were measured for the secondary learning objectives (Table A2). The measured effects also show, for example, that within the sub-competency DOC.S.N1, the focus was specifically on version management and the possibilities of using corresponding tools, which is why a particularly large effect is measurable for item DOC.S.N1c ("I can name technical options for version management and file archiving (e.g., file naming with sequential numbering, date-based file names, Windows file version history, Apple Time Machine, Subversion, Git, etc.).") but not for DOC.S.N1a ("I can name technical possibilities for digital documentation of e.g., protocols, experiments, data or analysis processes (e.g., using a word processor, a spreadsheet, OneNote, Etherpad).") or DOC.S.N1b ("I can name technical options for permanent data storage and corresponding software offers/archives (e.g., network storage, archiving servers, cloud storage)."). As expected, there were no significant differences in the pre-test and post-test results for the sub-competencies that were not addressed (Table A3).

Presentation (PRE)
In the competency area of Presentation, the item mean values in the pre-test are, as expected, quite high in some cases; the students rate their own competencies in this area highly. Hence, the main learning objectives are in the areas of Teaching, Methods/Digitality, and Content-specific Context (Figure A2). The intervention achieved large (averaged) effects on the self-efficacy expectations for all main learning objectives (Table A5). Even if not all facets of a sub-competency can always be recorded (PRE.C.N1, PRE.C.D1), a clear increase can still be observed on average. As expected, mostly medium effects are achieved for the secondary learning goals (Table A6). The sub-competencies that were not addressed show no differences except for one (Table A7). Only the item PRE.S.A1c ("I can set up and use at least one tool/system to represent processes on different time scales.") shows a clear increase in self-efficacy expectations.

Communication and Collaboration (COM)
In the module on the competency area of Communication/Collaboration, three central topics are placed in the foreground: firstly, the use of digital technologies for joint work on documents (by students as well as among colleagues) and the associated requirements; secondly, the instruction of students to communicate with each other; and thirdly, the exemplary integration into lesson planning. While mainly technical issues and tools are discussed and tested as the main learning objectives, methodological-didactic issues can only be considered on the basis of individual examples. Accordingly, the main learning objectives concentrate on the area of special tools (Figure A3).
The results show no significant improvement in self-efficacy expectations in the learning areas of the main learning objectives (Table A9). For the secondary learning goals, the picture is mixed (Table A10). Although there is a significant effect of the intervention on the assessment of the ability to integrate communication and collaboration into lesson planning (COM.T.A1), it is precisely in the case of the very complex learning objectives (COM.M.N1 and COM.M.D1) that no (or only smaller) effects can be observed in individual sub-aspects. In the competence expectations that were not addressed, no significant differences between the test times can be measured (Table A11).
Overall, it should be noted that the participants already assess their abilities as comparatively high in the pre-test.

Information Search and Evaluation (ISE)
The focus of the module Information Search and Evaluation is clearly on methodology and lesson planning (Figure A4). The analyses show large effects of the intervention in almost all sub-competencies addressed as main learning objectives (Table A13). As expected, medium effects were observed for the secondary learning objectives (Table A14). In areas that were not addressed, no differences were found between pre-test and post-test (Table A15).

Data Acquisition (DAQ)
In the Data Acquisition module, a variety of possibilities for the acquisition of measurement data, especially in distance learning, are presented, discussed, and tried out as examples (Figure A5). Accordingly, the contents of the main learning objectives, which all lie in the technical area, can only be briefly touched upon. In individual sub-aspects of the sub-competencies, pronounced effects can be seen, but the average effect sizes are in the range of medium effects (Table A17). Medium effects of the intervention on self-efficacy expectations can also be observed for the secondary learning goals (Table A18). As expected, in the sub-competencies that were not addressed, no differences are registered between the two test times (Table A19).

Simulation and Modelling (SIM)

Figure A7 shows the competency expectations addressed in the module Simulation and Modelling and distinguishes between main and secondary learning objectives. In the main learning objectives, the intervention results in an increase in self-efficacy expectations with large effect sizes (Table A25). For the secondary learning goals, the intervention had medium to large effects, exceeding expectations (Table A26). For the competence expectations that were not addressed, no significant differences can be determined between the test times (Table A27).

Discussion
This section first discusses the effects observed across all modules and the general classification into main and secondary learning objectives. Then, the individual modules are discussed, and implications for improving the teaching-learning modules as well as for designing and developing similar teaching-learning units to promote digital competences are given. Overall, the results are largely in line with the expectations. In five of the seven central competency areas (DOC, PRE, ISE, DAP, and SIM), the expected increase in the students' self-efficacy expectation was observed in all main learning objectives with large effects (r of 0.60 to 0.91). However, it should be noted that, in some cases, not all aspects of a main learning objective can be addressed, so the effect sizes for individual items may well be lower (r of 0.26 to 0.91), even if the averaged effect over all items depicting the competence expectation can nevertheless be considered a large effect.
Only in the competency area Communication/Collaboration (COM) does the intervention not lead to a significant increase in self-efficacy expectations in the main learning objectives. It should be noted that the item mean values are already extremely high in the pre-test, which means that the students consider their own abilities in this area to be very high even before the intervention. A similar picture emerges for the secondary learning goals, even though an effect of the intervention can certainly be recognised. Therefore, the competency area Communication/Collaboration (COM) will not be considered in the following observations, and this module will be discussed again afterwards.

Effectiveness of the Intervention in the Secondary Learning Objectives
For the secondary learning objectives, the expected picture also emerges for five of the seven central competency areas (DOC, PRE, DAQ, DAP, SIM). For learning objectives assessed with only a single item, the observed effect sizes lie in the medium range, as expected (r of 0.40 to 0.67). In the module Information Search and Evaluation (ISE), contrary to the hypothesis, no significant increase in self-efficacy expectations was observed for the learning objective ISE.C.N2 ("Name several literature databases or search engines [ . . . ]"), although this was clearly part of the course content. However, the students already indicated a comparatively high level of prior knowledge in the pre-test.
In the case of secondary learning objectives that are classified as such because only individual selected examples are deepened within the sub-competency, the expected effect sizes vary accordingly when comparing the items assigned to the learning objective with each other. This applies, for example, to DOC.S.N1 ("Name technical approaches [ . . . ]") in the competency area of Documentation. In the associated module, less emphasis was placed on word processing (DOC.S.N1a) and permanent data storage (DOC.S.N1b); instead, the possibilities of digital version management (DOC.S.N1c) were discussed in depth, so a significant increase can only be recorded for the third item (DOC.S.N1c). The selection of this sub-aspect was based on the assumption that the students would have less prior knowledge of digital version management than of the other sub-aspects. The pre-test item means (SD) support this assumption (DOC.S.N1a: 5.46 (1.90), b: 5.69 (1.89), c: 4.00 (2.35)).

Differences between the Test Times in Sub-Areas which Were Not Addressed
Differences between the test times belonging to a module (pre-test and post-test) can only be found for one item (PRE.S.A1c: "I can initialise and use at least one tool/system to represent processes on different time scales."). The results from the pre-test (M = 4.92, SD = 1.71) and post-test (M = 6.00, SD = 1.91) indicate that the intervention resulted in an improvement in self-efficacy expectation, V = 51.5, p = 0.014, r = 0.71. This is understandable, since the creation of stop-motion videos was specifically practised here, but not all of the presentation forms expected in this sub-competency were covered in the module.

Figure 5 shows boxplots of the observed (averaged) effect sizes r for the main learning objectives and secondary learning goals for each competency area. Except for the competency areas of Communication/Collaboration and Data Processing, there are clear separations between the effect sizes of the main learning objectives and the secondary learning goals, which supports the division into main and secondary learning objectives.

Discussion of the Individual Teaching-Learning Modules
In the following section, the results of the individual learning modules are examined in more detail separately.

Data Processing (DAP)
Of the 26 sub-competencies in the DAP competency area, 13 were selected as main and 9 as secondary learning objectives. Less prior experience was assumed in the areas of Content-specific context and Special tools, which is why more attention was paid to these areas in the design of the unit. Large effects (r = 0.62 . . . 0.86) were found between the pre- and post-test for all main learning objectives, as well as medium to large effects for the secondary learning objectives (d = 0.50 . . . 0.63), except for the test items DAP.S.A1b ("I can apply procedures for calculating new quantities in data processing."), DAP.S.A1d ("I can apply procedures for statistical analysis in data processing."), and DAP.S.A1e ("I can apply image/audio and video analysis procedures in data processing."). This reflects the structure of the session, in which the application level played only a minor role; the same holds for the secondary learning objective test item DAP.T.D1a ("I can describe the didactic prerequisites of using digital data processing in the classroom."). Looking at the averaged effect sizes in the module (Table 4) confirms that the areas given greater focus produced stronger effects. Consequently, the focus on the Content-specific context and the Special tools has proven suitable and can be maintained for further courses. This evaluation considered the pre- and post-tests accompanying the synchronous session; a significant change in self-assessment in the area of application is, however, expected for the lesson design phase. It can therefore be said that the module promotes the competency area DAP very well and that it serves as a basis for further modules for promoting digital competences among prospective teachers at other locations.

Documentation (DOC)
Of the 13 sub-competencies of the competency area DOC, 8 were selected as main learning objectives and 4 as secondary learning goals. Particular attention was paid to the levels of Name and Describe. For all main learning objectives, a large effect on the growth of the students' self-efficacy expectations (r = 0.65 . . . 0.88) can be determined with the measuring instrument. As discussed before, the secondary learning objectives in the area of DOC were chosen with regard to the students' previous experience: less emphasis was placed on word processing (DOC.S.N1a) and permanent data storage (DOC.S.N1b), and instead the possibilities of digital version management (DOC.S.N1c) were discussed in depth, so a significant increase can only be recorded for the third item (DOC.S.N1c). Nevertheless, besides single items with a large effect (DOC.S.N1c), medium effects were found across all competencies of the secondary learning objectives (r = 0.53 . . . 0.67). As expected, no significant increases in students' self-efficacy ratings were detected in the domains that were not addressed. Focusing on the individual results shows that high effect sizes were obtained especially in the Teaching (T) category, reflecting the structure of the session. It could therefore be shown that the intervention has a great effect on the students' self-efficacy expectations in the areas of the main learning objectives, which is why this session needs only minor adjustments for further implementations and can be used as a model example for courses at other universities. To prepare students a little better for the session on communication and collaboration (see below), the area of specific technology (DOC.S.N1) could be elaborated further. Thus, the module fully covers the competency areas taken from the framework.

Presentation (PRE)
Of the 17 sub-competencies comprising the competency area PRE, only 8 were declared main and 4 secondary learning objectives, due to the limited time available and the assumed prior experience. Particular emphasis was placed on the Name and Describe competency levels and, as described before, mainly on the areas of Teaching, Methods/Digitality, and Content-specific context (Table A5). Of the 36 test items used to assess the addressed sub-competencies, no significant effect on the students' self-concept was found in 7 cases. In the area of the main learning objectives, these were one item at the naming level and two items at the describing level (see Table A6), each a sub-item of a supercategory (PRE.C.N1/D1). Nevertheless, averaging all effect sizes of these supercategories still yields a large effect (r = 0.61 . . . 0.90) for both. The same applies to the effect sizes of the superordinate sub-competencies (PRE.S.N1/D1) of the four non-significant items from the secondary learning objectives (r = 0.40 . . . 0.54). Thus, based on the evaluation results, an area-wide increase in self-efficacy expectations can be determined for the addressed competency domains. The individual results, which show comparatively high effect sizes in all areas of the category Methods/Digitality (TPK), reflect, on the one hand, the module structure, since this session focused strongly on discussion among the students about the possible effects of classroom use. On the other hand, students estimated their prior experience with principles and criteria for designing digital presentation media (PRE.M.N1/D1) to be comparatively low.
Thus, the focus on individual items in the competencies has proven successful, and the unit on presentation can be used as a successful example for the area-wide integration of the promotion of digital competencies in a master's seminar for student teachers.

Communication and Collaboration (COM)
For this module, due to time considerations, 4 of the 29 competency expectations were selected as main learning objectives and 11 as secondary learning objectives. Thus, only about half of the competencies could be covered. In order to give a better overview of the entire competency area and to better link the areas of teaching, methods, context, and tools, it would certainly be advisable to extend this module to two sessions in future implementations. Nevertheless, for a first session, the focus on using digital technologies for joint work on documents (by students as well as among colleagues) and the associated requirements, on guiding students to communicate with each other, and ultimately on the exemplary integration into lesson planning is considered correct. That no major effect on the students' self-assessment could be achieved in the area of the main learning objectives suggests a Dunning-Kruger effect [66,67]: the students overestimated their previous experience. During the course, the students first had to learn that, although they experience themselves as very competent in everyday digital communication, guiding digital collaboration between pupils goes far beyond these everyday skills and that completely different tools can be used for the corresponding learning activities. Due to this overestimation of their previous experience, mainly technical issues and tools were discussed and tested, whereas methodological-didactic issues could only be considered on the basis of individual examples. If, as described above, some technical tools and tricks are already presented in the Documentation module, more time remains for methodology and teaching at this point in the course.
The significant effect of the intervention on the assessment of being able to integrate communication and collaboration into lesson planning (COM.T.A1b) particularly shows that this module was able to achieve the goal of strengthening the students' ability to use digital media in the classroom. With the changes described, this unit thus also serves as an adequate starting point for the development of similar modules elsewhere.

Information Search and Evaluation (ISE)
The focus of the module Information Search and Evaluation is clearly on methodology and lesson planning (Table A13). Of the 32 sub-competencies of the competency area ISE, 21 were selected as main learning goals and 7 as secondary learning goals. As suspected, the students already rated their self-efficacy expectations in the areas of Content-specific context and Special tools comparatively high at the naming level (M pre = 5.23 (1.92) . . . 6.46 (1.61)), which is why these areas were assigned only subordinate urgency in the design of the unit. Large effects (r = 0.60 . . . 0.91) were found between the pre- and post-test for all main learning objectives, as well as medium effects for the secondary learning objectives (r = 0.42 . . . 0.60). As discussed before, the students already indicated a comparatively high level of prior knowledge in the pre-test. As in the previous competency areas, the module structure is also recognisable here in the individual results: particularly high effects are visible in the area of Methods/Digitality, which played a major role in the course. Thus, the intervention was found to have a large effect on students' self-efficacy expectations in the areas of the main learning objectives, which is why this session requires only minor adjustments for further implementations and can be used as a model for courses at other universities.

Data Acquisition (DAQ)
For this session, only 3 of the 16 competencies were chosen as main learning objectives, and another two as secondary learning objectives. As suspected, students' self-efficacy expectations were low in the area of Special technology, particularly at the Apply level compared to other competency areas, which is why this area was emphasised. The guided application of the tools for data acquisition requires dedicated time in this module, which is necessary because the students arrive with little previous experience. Judging by the results, the guidance on data collection can be considered successful. In order to integrate further competencies into this module, the practical phases could be outsourced to a self-study unit so that the synchronous main session can focus even more on the areas of methodology and teaching. Likewise, an expansion to two sessions would be useful so that students can continue to receive guidance. This session is a good example of integrating the competencies from the area of Special tools and can be used as a blueprint for such implementations.

Simulation and Modelling (SIM)
The finding of a significant effect of the module on the self-concept of the students in 22 of 25 sub-competencies suggests that the students have received a comprehensive overview of the basic competency area of Simulation and Modelling with the module according to the addressed competence expectations. The strong average effect of the module on the students' self-efficacy confirms that a targeted promotion of digital competencies from DiKoLAN in university teaching-learning arrangements can in principle be successful. Looking at the individual results, comparatively high effect sizes were obtained in the category Special Technology. This is probably due to the weak assessment of prior knowledge by the students compared to the other three categories (Tables A26 and A28). Thus, the effectiveness measurement procedure identified a thematic area with great potential for development in this teaching-learning arrangement. The identified knowledge gap among the students can be explained, since prior knowledge of "special technology" cannot be expected from any of the previous stages of the teacher training program in Konstanz, in comparison to its subject-specific, pedagogical, and subject-didactic overlapping fields. Thus, the intervention was found to have a large effect on students' self-efficacy expectations in the domains of the main learning objectives, which is why this session requires only minor adjustments for further implementations and can be used as a model for courses at other universities.

Final Discussion of the Course Design
It has been helpful to dedicate a separate week to each competency area, allowing us to cover large areas of the DiKoLAN competency framework in one term and achieving a significant gain in all areas. In addition, it became apparent that some areas (for example, the sessions on Documentation (DOC) and Communication/Collaboration (COM)) offer the opportunity to link content across multiple sessions, which can be exploited in future courses. The accompanying tasks generate additional support needs but also allow for a deepening of the topics addressed in the sessions, for which there would otherwise have been no time. The design of teaching units in particular provides students with initial teaching concepts in which digital media are integrated into lessons.

Final Discussion of the Methodology of Evaluation
The detailed monitoring of all the modules through separate pre- and post-tests allowed for a very precise observation of the effect of each module on the students' self-efficacy expectations in the different areas. Since a high response rate was achieved despite the voluntary nature of the pre- and post-tests, the additional time required of the students is not considered too high, while the benefit generated for the further development and confirmation of the course structure is immense. With the help of the test instrument used, we were able to confirm the effectiveness of existing structures and diagnose areas in need of further development.

Conclusions
With the help of the test instrument provided by the Working Group Digital Core Competencies [5], it was possible to show that the newly designed course aimed at promoting students' digital competencies can specifically promote students' self-efficacy expectations. Accordingly, pre-service teachers feel more self-efficacious after the seminar in large parts of the digital core competencies listed in the DiKoLAN framework. Thus, initial teaching-learning arrangements have been developed and implemented for all seven competency areas relevant to the science teaching profession. A repetition and adaptation of such teaching concepts in the university context can therefore be a proven way to address current challenges in the use of digital tools in schools. The piloting of the self-efficacy assessment instrument using the developed module as an example shows that it can be used to optimise such teaching concepts: for example, the content of a teaching-learning module could be adapted to the students' prior knowledge, and thus made even more effective, by means of an anticipated learning-level survey in the pre-test. At the same time, the strengths and weaknesses of already-tested modules (as in the presented course) can be revealed so that the modules can be improved and re-tested. Furthermore, this work presents a course that, due to the effectiveness demonstrated here, can be used as a best-practice example for the development and design of new courses. Anyone interested in using and expanding on the material is invited to contact the corresponding author to obtain access to it.

Institutional Review Board Statement: All participants were students at a German university. They took part voluntarily and signed an informed consent form. Pseudonymization of participants was guaranteed during the study. Due to all these measures in the implementation of the study, an audit by an ethics committee was waived.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the ongoing study.

Acknowledgments:
The authors would especially like to thank our colleagues in the Digital Core Competencies Working Group (Sebastian Becker, Till Bruckermann, Alexander Finger, Lena von Kotzebue, Erik Kremser, Monique Meier, and Christoph Thyssen) for their support of this research project, the provision of the test instrument, and for the lively exchange on the research design and the evaluation of the results. We also thank Lukas Müller, who helped us as a student assistant during the seminar. Lastly, we thank the Joachim Herz Foundation, which supports all DiKoLAN projects.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Documentation (DOC)
DOC.C.N1 Name options for professional digital documentation/versioning and data archiving (e.g., gene databases, spectral databases, data sheets) while taking citation rules into account.
DOC.C.N2 Name methods of digital data documentation in research scenarios (e.g., image documentation: gel documentation, voxel files from MRI scans).
Possibilities of systems for permanent data filing/storage and corresponding software offerings/archives (e.g., network storage, archiving servers, cloud storage).
DOC.C.D1 Describe options for proper digital documentation/versioning and data archiving (e.g., gene databases, spectral databases, data sheets), taking citation rules into account.
DOC.S.D1 Describe the technical approaches to documentation listed under DOC.S.N1 with regard to existing functions, technical framework conditions, technical requirements, and technical advantages and disadvantages (e.g., automated back-ups).
DOC.S.D2 Describe the need to perform back-ups as part of digital data management and the procedure for performing a back-up, including restoring (recovering) the data.

Communication and Collaboration (COM)
COM.S.A1 Use collaborative software for text and data processing.
COM.S.A4 Use systems for data management.
COM.S.A5 Create and revise (synchronously and asynchronously) collaborative text and data files.

Information Search and Evaluation (ISE)
ISE.M.N2 List advantages, disadvantages, and limitations of using digital sources in teaching-learning scenarios.
ISE.C.N3 Name at least two quality criteria for evaluating digital sources from a discipline perspective, e.g., recency, necessary scope/style/design, necessary data volume/resolution.
ISE.C.D5 Describe at least two of the quality criteria listed in ISE.C.N3, e.g., scope, data volume/resolution, professionalism/scientificity, validity, reliability, and review procedures.

Data Processing (DAP)
DAP.C.N1 Name quasi-established procedures of digital data processing in the subject area.
DAP.C.N2 Name subject-specific scientific scenarios with associated methods of subject-specific data processing, e.g., determination and extraction of curve maxima (e.g., sound levels, acceleration measurements).

Simulation and Modelling (SIM)
SIM.T.N1 Name scenarios for appropriate use of digital simulations and modelling (e.g., spreadsheets, GeoGebra for use in teaching) as well as software and strategies for use in a specific teaching-learning scenario, e.g., as a way of gaining knowledge; for lack of other affordable, accessible, and safe methods; as a subject-specific working method; as a temporally optimised form of data acquisition; as an interactive method.
SIM.C.N1 Name several science scenarios in which simulation or modelling is used to gain knowledge (e.g., temperature fields, magnetic fields, climate models).
SIM.C.N2 Name at least two methods of digital simulation or modelling in research scenarios (e.g., Lotka-Volterra population dynamics).
SIM.C.N3 Name several data sources from which data applicable to modelling can be drawn/referenced (e.g., weather data, populations, measurements from professional sciences).
SIM.C.N4 Name insights gained from simulations (e.g., material stress, crash testing, weather forecasting, global warming).

Main learning objectives (bold magenta), secondary learning goals (italic cyan), and non-addressed competencies (yellow). * The average effect size is given for competencies assessed with more than one item. • Not tested.