From Teachers' Perspective: Can an Online Digital Competence Certification System Be Successfully Implemented in Schools?

Abstract: This study aims to assess the implementation effectiveness of an online platform for digital competence (DC) certification in schools. The testing platform was a prototype of a DC certification system developed and piloted during 2019 in primary and secondary schools in six European countries, involving more than 800 teachers and 6000 students. The study provided positive evidence that the acquisition, evaluation, and certification of DC can be effectively integrated within formal curricula in primary and secondary schools. In addition, it was confirmed that information quality is a significant predictor of the impact on the platform's end-users. In contrast, service quality is not a significant predictor of a successful implementation when the cloud-based platform offers an intuitive user interface and proper online help, i.e., massive open online courses (MOOCs). Furthermore, the developed instrument can help schools implement and assess platforms for DC certification and help policymakers pursue and monitor the implementation of such platforms in schools.


Introduction
The European Commission recognized the relevance of digital competence (DC) among citizens by releasing many publications (e.g., DigCompEdu, DigCompOrg, DigCompConsumers) aiming to bring a digital transformation to society while focusing on five areas of DC [1]: (1) information and data literacy, (2) communication and collaboration, (3) digital content creation, (4) safety, and (5) problem-solving. In a broader sense, DC "involves the confident, critical and responsible use of, and engagement with, digital technologies for learning, at work, and for participation in society" [2].
DC acquisition and assessment should start early, in primary school [3-5], and should be integrated into the formal educational curriculum [6,7]. This approach would enable schools to identify a lack of specific DCs and introduce a plan for their development. More studies can be found on the assessment of DC in higher education [8] than in primary and secondary education [9]. In general, validated assessments are essential in primary and secondary education because they help students improve their learning, develop necessary abilities, and measure their progress [10]. A three-year longitudinal study [11] further contributes to this matter by questioning whether designed DC teaching can accelerate the natural course of DC development.
Therefore, it is necessary to study the effectiveness of the implementation of DC development systems that could enable young students to acquire the desired set of DCs. In this respect, new cloud-based teaching methodologies and services that enable DC acquisition in primary and secondary schools in Europe (hereinafter referred to as the CRISS platform) were developed. The CRISS platform was piloted from 2017 until 2019 as a part of a Horizon 2020 project. The platform aims to deliver user-driven and adaptive technological solutions that allow the guided acquisition, evaluation, and certification of DC in primary and secondary education. It was used by both students and teachers.
This study aims to assess the implementation effectiveness of the CRISS platform as an online DC certification system in primary and secondary schools following the well-known DeLone and McLean [12,13] Information System Success Model (D&M Model). In that sense, the following research questions (RQs) are prompted:
• RQ1: What factors impact teachers' perspectives on the successful implementation of an online DC certification system in primary and secondary schools?
• RQ2: What are the relationships between those factors?

Literature Review
This literature review followed the recommendations from [14] to thoroughly investigate the current state of DC in education concerning its measurement and assessment supported by ICT. All included studies satisfied at least one of the following criteria:
• Indicate a lack of DC in children;
• Describe systems for assessment and certification of DC;
• Describe the acquisition of DC through the curriculum.
First, it is interesting to note that various studies dealt with teachers' competences [4,15-20] and the ways they adopted technology in their classes [21,22]. One possible reason is that teachers are considered responsible for students' DC assessment and acquisition at all levels of education [23]. In contrast, numerous studies discussed the lack of and need for developing DC among university students [8,23,24]. A smaller number of studies included primary and secondary students [25,26], as confirmed by recent research [11,27]. Several authors [3-5,9] suggested that the acquisition and assessment of DC should start early in primary schools as a part of the formal curriculum.
Our literature review revealed only a few examples of tools for competence development or assessment (e.g., [28,29]). The identified tools are implemented as a part of formal education, employment training, or life-long learning for citizens. To the best of our knowledge, none of those tools was assessed for its exploitation and successful implementation. However, their further development is expected in the future [30]. From a theoretical point of view, the findings of [31] are particularly substantial: the authors analyzed 32 different frameworks for 21st-century competences and indicated that competences must be operationally defined and embedded within and across core subjects to facilitate competence implementation in schools. However, in terms of interdisciplinarity, "... intentions and practice seemed still far apart" [31] (p. 299).
It can be concluded that there is a lack of assessment instruments and tools for competence-based education [32]. According to [33], such instruments are needed to optimize students' learning and inform them about their progress. Moreover, the authors of [34] recognized that it is necessary to develop assessment criteria for each competence so that students' progress can be tracked at an individual or group level. Based on the described literature review, we conclude the following:

1. A modest number of studies deal with DC acquisition and certification in primary and secondary schools, and they mostly only indicate its importance without providing an implementation solution.
2. None of the studies tried to assess the implementation of systems for DC acquisition.
3. There seems to be a need to start DC education and assessment from the earliest age and to integrate it into the formal curriculum.

Research Aims
Following the conclusions from the previous section, this study further investigates the field of tools for DC assessment by assessing the effectiveness of an online system for DC assessment and certification for students in primary and secondary schools from teachers' perspectives. Since recent findings [23,35,36] suggest that teachers should be responsible for incorporating DC assessment and acquisition into schools, we focused our work on their perceptions of DC acquisition and assessment in classes. Teachers need to understand how ICT supports the learning-teaching environment to advance education [37]. Furthermore, teachers are the ones who will have to adapt their teaching practices and materials or apply certain technology according to new competency-based curricula. Therefore, within the context of this study, it is crucial to analyze the teachers' perceptions of the system that supports their work and contributes to the development of students' DC. The following research objectives (ROs) are defined to answer the research questions from the first section:
• RO1. To propose a DC certification system success model;
• RO2. To develop and validate a survey instrument that can empirically test and theorize the model;
• RO3. To examine the relationships among the variables and their relative impact on DC certification system success.

Research Context
The appropriate research context was set up to answer the main research questions and test the hypotheses proposed in the previous sections. The CRISS platform was provided within the Horizon 2020 project to selected primary and secondary schools in six European countries: Croatia, Greece, Italy, Romania, Spain, and Sweden. In each country, a project partner monitored the whole process of platform use by teachers and students for several months. For ease of use, the interface of the CRISS platform, as well as its scenarios, tasks, and activities, was translated into the target languages. The latter were also adapted to fit the country-specific context and different educational levels (primary or secondary).
The CRISS platform is a cloud-based platform consisting of teaching methodologies and services that enable the acquisition of DC in schools. It is based on the validated CRISS DC Framework [38], aligned with the European DigComp. It also follows the "integration pedagogy" concept introduced by [39] as a valid approach for developing competence assessment that focuses on learning (mastering) DC. In the CRISS DC Framework, DC is divided into 12 sub-competences grouped into five areas. To attain an individual sub-competence, each student should produce certain evidence according to a defined set of performance criteria. Moreover, each performance criterion consists of indicators that provide the measurements or conditions required to analyze the evidence regarding that criterion and competence attainment.
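To make this hierarchy concrete, the following sketch represents it (area, sub-competence, performance criteria, indicators) as a nested R list. All names and descriptions in it are hypothetical illustrations, not actual CRISS framework content.

```r
# Illustrative sketch of the CRISS DC Framework hierarchy as a nested R list.
# The area name comes from DigComp; all other entries are invented examples.
sub_competence <- list(
  name = "Searching and filtering information",
  performance_criteria = list(
    list(
      description = "Selects relevant sources for a given task",
      # Indicators: measurements or conditions used to analyze the evidence
      indicators = c("At least two independent sources are cited",
                     "Sources are evaluated for reliability")
    )
  )
)

framework_area <- list(
  area = "Information and data literacy",
  sub_competences = list(sub_competence)
)

str(framework_area, max.level = 4)  # inspect the nested structure
```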
Teachers can plan students' learning within the CRISS platform by choosing activities and tasks from the repository and applying them according to their teaching practice. They also evaluate students' activities and tasks performed within the platform and provide them with feedback. Each successfully evaluated activity brings students closer to attaining a certain sub-competence. An overview of a CRISS DC certification process integrated into the CRISS platform is presented in Figure A1 of Appendix A.
DC can be evaluated through human or technological interventions on the CRISS platform. Human interventions are performed by teachers and students using various adaptable tools (e.g., checklists, rubrics, scales) implemented within the platform. These tools are designed to be easily used by teachers and for students' self- and/or peer evaluation. Furthermore, the CRISS platform automatically performs a technological intervention by tracking students' activities and collecting relevant information from them.

Research Model and Methodology
To assess and identify the most relevant factors for the effective implementation of the CRISS platform, we used the D&M IS Success Model [12,13]. It is one of the most cited models [40] and serves as a reference point for many other models that attempt to capture information system (IS) success or effectiveness (e.g., [41-43]).
The D&M IS Success Model and the IS theory can be applied here because the CRISS platform is an information system. The platform utilizes processes and procedures for DC acquisition, evaluation, and certification as defined in the validated CRISS DC Framework [38]. Moreover, it involves people (teachers and students), equipment, and other related software for online learning and collaboration.
The authors of the D&M Model identified six components or constructs of IS success: System Quality, Information Quality, Service Quality, System Use, User Satisfaction, and Net Impacts. Each of them is briefly described below by paraphrasing the original authors [12,13]. System Quality measures the desirable technical characteristics of an IS. Since this dimension captures the system itself, it is oriented toward technical specifications such as data processing capabilities, response time, ease of use, system reliability, and sophistication. Information Quality includes the desirable characteristics of system output in the form of information, such as its relevance, understandability, accuracy, completeness, usability, and importance. Service Quality measures the quality of support that system users receive from the IS department and IT support personnel. System Use indicates the degree and manner in which staff and customers utilize the capabilities of an IS, while User Satisfaction measures users' satisfaction with reports, platforms, and support services. Finally, the extent to which the IS contributes to the success of individuals, groups, and other stakeholders is represented as Net Impacts. It measures the system's outcomes and is inevitably compared to the system's purpose. For this reason, the Net Impacts construct "will be the most contextual dependent and varied of the six D&M Model success dimensions" [13] (p. 59).
The authors of the D&M Model suggest that constructs and related measures should be selected systematically, considering contextual contingencies (such as the organization's size or structure, technology, and the individual characteristics of the system) to develop a comprehensive measurement model and instrument for a particular context. Therefore, to detect factors that influence the successful implementation of a DC system (RQ1) and analyze their relationships (RQ2), we proposed the research model with hypotheses as indicated in the D&M Model [13] (see Figure 1):

Hypothesis 1 (H1).
The quality of the CRISS platform positively affects its use.

Hypothesis 2 (H2).
The quality of the CRISS platform positively affects user satisfaction with it.

Hypothesis 3 (H3).
The information quality produced by the CRISS platform positively affects its use.

Hypothesis 4 (H4).
The information quality produced by the CRISS platform positively affects user satisfaction with it.

This study adopted a quantitative methodology. Data were collected via a survey administered to primary and secondary school teachers to achieve the previously proposed research aims. The research methodology followed typical procedures [44] for measurement instrument development, data collection, and analysis and was conducted as follows: (1) measurement instrument development; (2) sample and procedure; (3) measurement model assessment; and (4) structural model testing. The research context was described above; the following subsections elaborate on all phases of the research methodology.

Measurement Instrument Development
The complete process of measurement instrument development is shown in Figure 2. As noted in the previous sections, an analysis of the successful implementation of DC systems such as the CRISS platform has not yet been recorded. Therefore, we designed the instrument from the ground up using the constructs from the D&M Model. In doing so, we relied on the literature review and focus groups of experts, as suggested by [45-47]. Experts were engaged based on their expertise in e-learning, pedagogy, teaching methodology, and assessment activities. After establishing content validity and preliminary construct validity, the survey instrument was translated into the six target languages and implemented in LimeSurvey, a free, open-source tool for creating online surveys. The final instrument ready for the field test consisted of 48 items: 7 in System Quality, 7 in Information Quality, 6 in Service Quality, 10 in System Use, 5 in User Satisfaction, and 13 in Net Impacts. The reliability and validity of the final instrument were tested with a sample of 298 teachers.

Sample and Procedure
The sample was drawn from 145 schools according to the guidelines established within the project, which aimed to ensure extensive participation and an equally high completion rate of activities on the platform. To be included in the research, teachers at the selected primary and secondary schools had to actively participate in the project and use the platform for more than three months. In total, 1102 teachers were informed about the study's purpose and sent a link to the online survey.
The data were collected at the end of the 2018/2019 school year, between May and September 2019, four months after the CRISS platform was introduced to the selected schools. The assessment of the platform was performed by 400 teachers who participated voluntarily. The obtained data were carefully screened: outliers and incomplete responses were removed, leaving 298 complete answers for further processing in R [48]. The overall response rate was 36.3% before data exclusions and 27.04% after. Both rates align with findings showing that average response rates in online surveys range between 20% and 47% [49].
The demographic characteristics of the sample are shown in Table 1. A sample size of 298 teachers yields a subject-to-item ratio of about 6:1. Therefore, it was decided to apply PLS-SEM, which has proven very robust for smaller samples [51], through SmartPLS software version 3 (SmartPLS GmbH, Boenningstedt, Germany) [50] in further work.
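As a quick check, the reported rates and the subject-to-item ratio follow directly from the figures above:

```r
# Reproducing the reported rates from the sample figures in the text.
contacted <- 1102; responded <- 400; retained <- 298; items <- 48

round(100 * responded / contacted, 1)  # 36.3  -> response rate before exclusions
round(100 * retained  / contacted, 2)  # 27.04 -> response rate after exclusions
round(retained / items, 1)             # 6.2   -> subject-to-item ratio (~6:1)
```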

Results
The evaluation of the measurement and structural models was performed using variance-based SEM, focusing on the relationships among latent variables and their indicators. First, the measurement model was examined for reliability and validity. Afterward, the fit of the structural model was tested, and the significance of the path coefficients was determined.

Measurement Model Assessment
To be retained and to ensure good convergent validity, an indicator had to load over 0.6 onto its posited construct [45]. Following this rule of thumb, eight indicators (SQ6, SQ7, SVQ5, SVQ6, SU1, SU2, SU3, and SU4) were removed across three constructs. One exception was made for the indicator SVQ4, which was retained for theoretical reasons and did not substantially affect internal consistency reliability. Items that did not converge on the predicted construct or cross-loaded on two constructs were also removed from the reflective measurement model: SQ5, IQ2, NI1, and NI2.
The assessment of the reflective measurement model is shown in Table 2. Indicator loadings were higher than the aforementioned 0.60, the criterion for good measurement of the latent variables. All Cronbach's alpha (CA) and composite reliability (CR) values were higher than 0.70 [45,51], which showed satisfactory internal consistency reliability. The average variance extracted (AVE) values were also greater than 0.50, indicating good convergent validity [51]. After removing the indicators whose cross-loadings were higher than their outer loadings on the prospective constructs, the HTMT statistics indicated that discriminant validity was established (see Table 3). This was additionally examined by conducting the bootstrapping procedure: no HTMT confidence intervals (97.5%) contained the value of one, which would have indicated a lack of discriminant validity [51].

After omitting the mentioned indicators, the analysis showed that the construct measures had adequate convergent and discriminant validity and good reliability. The final instrument with the 36 remaining indicators is presented in Table A1 of Appendix B.
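For transparency, the reliability and validity statistics used above can be computed from the standardized loadings and item correlations alone. The following base R sketch implements the conventional formulas; it is an illustration with made-up example loadings, not the authors' actual analysis script.

```r
# CR and AVE from a vector of standardized outer loadings of one construct;
# the error variance of a standardized indicator is 1 - lambda^2.
composite_reliability <- function(lambda)
  sum(lambda)^2 / (sum(lambda)^2 + sum(1 - lambda^2))  # threshold > 0.70
average_variance_extracted <- function(lambda)
  mean(lambda^2)                                       # threshold > 0.50

composite_reliability(c(0.72, 0.81, 0.78))        # example loadings -> ~0.81
average_variance_extracted(c(0.72, 0.81, 0.78))   # -> ~0.59

# HTMT for two constructs: mean heterotrait-heteromethod correlation over
# the geometric mean of the two mean monotrait-heteromethod correlations.
# 'R' is the item correlation matrix; 'i', 'j' index the constructs' items.
htmt <- function(R, i, j) {
  hetero <- mean(R[i, j])
  mono_i <- mean(R[i, i][lower.tri(R[i, i])])
  mono_j <- mean(R[j, j][lower.tri(R[j, j])])
  hetero / sqrt(mono_i * mono_j)
}
```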

Structural Model Testing
Structural model testing started with examining the set of predictor constructs for the variance inflation factor (VIF). All values were below the cut-off value of 5, which means no collinearity issues were found [51]. Next, we calculated the coefficient of determination (R²) for the endogenous variables and their predictive relevance (Q²). The results are shown in Table 4. The evaluation of goodness-of-fit indices for the structural model was performed in R since the SmartPLS software provides less detailed data. All constructs showed satisfactory fit since all indices were in the desired range (see Table 5) [52]. These results suggest that the research model can confirm and explain the teachers' perception of the CRISS platform.
Note. Goodness-of-fit index (GFI) > 0.9, adjusted goodness-of-fit index (AGFI) > 0.9, root mean square error of approximation (RMSEA) < 0.06, comparative fit index (CFI) > 0.9, chi-square (df, degrees of freedom) = the smaller, the better.
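The text does not name the R package used for these fit indices; a minimal sketch with lavaan, one assumed possibility, would look like this (teacher_data is hypothetical, and the remaining constructs would be defined analogously):

```r
# Sketch of the goodness-of-fit evaluation in R with the lavaan package
# (assumed; the text does not name the package). 'teacher_data' is a
# hypothetical data frame of item responses.
library(lavaan)

model_syntax <- '
  SystemQuality      =~ SQ1 + SQ2 + SQ3 + SQ4
  InformationQuality =~ IQ1 + IQ3 + IQ4 + IQ5 + IQ6 + IQ7
  # ... remaining constructs defined analogously ...
'

fit <- cfa(model_syntax, data = teacher_data)
fitMeasures(fit, c("chisq", "df", "gfi", "agfi", "rmsea", "cfi"))
# Desired ranges (Table 5): GFI > 0.9, AGFI > 0.9, RMSEA < 0.06, CFI > 0.9
```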
As shown in Table 6, the path coefficient estimates for the hypothesized relationships ranged from 0.10 to 0.70. They were all significant at the 1% significance level except in two cases [51]. The path between Service Quality and User Satisfaction was significant at the 5% significance level, while the path between Service Quality and System Use was nonsignificant; the corresponding hypothesis (H5) was therefore rejected. The measured f² values of the relationships between the constructs ranged from 0.03 to 0.10, from 0.15 to 0.21, or were equal to 0.90, indicating low, medium, and large effect sizes, respectively [53].

Figure 3 shows the revised research model, i.e., the relationships between the constructs based on the supported hypotheses.
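For reference, the f² effect size for a path is the relative change in the endogenous construct's R² when the predictor is removed from the model; a short R version, with hypothetical input values:

```r
# Cohen's f-squared for a structural path: relative change in R^2 of the
# endogenous construct when the predictor is excluded from the model.
f_squared <- function(r2_included, r2_excluded)
  (r2_included - r2_excluded) / (1 - r2_included)

f_squared(0.60, 0.55)  # hypothetical R^2 values -> 0.125
# Conventional bands [53]: ~0.02 small, ~0.15 medium, ~0.35 large.
```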

Importance-Performance Map Analysis (IPMA)
The importance-performance map analysis (IPMA) was applied as additional support to the standard PLS-SEM results and to bring more clarity to the impact of the exogenous constructs on the target construct in this model, Net Impacts [51]. In practical terms, we detected the areas that proved highly important but relatively underperforming on the CRISS platform; an indirect result of the IPMA is thus a priority list of potential improvements to the platform. In statistical terms, we obtained the total effects (importance) and average latent variable scores (performance) of the Net Impacts predecessors [51]. Since all indicators were measured with the same 5-point Likert scale, no corrections were needed during the analysis conducted in SmartPLS software version 3 [50].

The summary of the calculated IPMA data for Net Impacts is shown in Table 7. User Satisfaction had the strongest total effect on the target Net Impacts, followed by Information Quality, System Use, System Quality, and Service Quality. The lowest performance value for Net Impacts was exhibited by System Use, followed by System Quality, User Satisfaction, Information Quality, and Service Quality.

The values of the predecessor constructs were plotted in an importance-performance map divided into four quadrants (see Figure 4). The cut-off values for the quadrants and their interpretation were determined according to [54]: the cut-off value of 0.35 for the x-axis was the mean of the importance scores, and the value of 50 was used for performance (y-axis) as the midpoint of the 0-100 range. Information Quality and User Satisfaction fell into the first quadrant, which corresponds to "keep up the good work", meaning they both have high importance and performance levels. No constructs fell into the second quadrant, which would contain the constructs respondents considered essential and on which improvement should concentrate. On the borderline between the second and third quadrants, System Use should be targeted first by managerial actions. The second area to improve would be System Quality, which belongs to the third quadrant and is characterized as "low priority". However, since the second quadrant contained no construct, excess resources should be allocated to the third quadrant [55]. The fourth quadrant indicates "possible overkill" and included Service Quality: respondents mainly agreed that they had received proper support from the persons responsible for the CRISS platform but did not consider it very important for Net Impacts.
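As a sketch of the underlying computation: importance is the total effect on Net Impacts, and performance is the average latent variable score rescaled from the 1-5 Likert range to 0-100. The construct values below are illustrative placeholders chosen only to mirror the orderings reported above, not the study's actual Table 7 results.

```r
# IPMA sketch in base R: rescale 1-5 Likert latent variable scores to 0-100
# and plot importance (total effect on Net Impacts) against performance.
rescale_0_100 <- function(lv_scores) mean((lv_scores - 1) / (5 - 1) * 100)
rescale_0_100(c(3.2, 3.1, 2.9))  # e.g., mean score 3.07 -> ~51.7

# Illustrative placeholder values (NOT the study's Table 7 results).
ipma <- data.frame(
  construct   = c("SystemQuality", "InformationQuality", "ServiceQuality",
                  "SystemUse", "UserSatisfaction"),
  importance  = c(0.15, 0.45, 0.05, 0.30, 0.70),
  performance = c(55, 70, 75, 50, 68)
)

plot(ipma$importance, ipma$performance, pch = 19,
     xlim = c(0, 0.8), ylim = c(0, 100),
     xlab = "Importance (total effect)", ylab = "Performance (0-100)")
abline(v = mean(ipma$importance), h = 50, lty = 2)  # quadrant cut-offs
text(ipma$importance, ipma$performance, ipma$construct, pos = 3, cex = 0.7)
```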

Discussion
This study tackled the effectiveness of an online platform for the acquisition, evaluation, and certification of DC in schools, concentrating on teachers' attitudes. It provides a follow-up to previous work by scholars [3-7] who indicated the need to integrate DC acquisition and evaluation into the formal education curriculum and thus start the process very early in schools. This study also substantially contributes to the field since it explores the possibilities for implementing such systems in schools. DC assessment in higher education has been studied more [8] than that in primary and secondary education [9], and to the best of our knowledge, none of the previous studies examined the effectiveness of tools for DC assessment in a formal curriculum, e.g., [28,29,56].
This study's findings confirmed that the overall D&M Model can measure the successful implementation of online DC certification systems in primary and secondary schools, which answers the first research question (RQ1). Our research model showed that Information Quality, besides System Quality, is a significant predictor of the actual use of the CRISS platform, leading to greater satisfaction with the platform. This is in line with findings that also confirmed a strong connection between Information Quality and Net Impacts [13]. Overall, the model showed valid psychometric properties and acceptable goodness-of-fit. We can conclude that the research model can effectively measure the success of the CRISS platform.
For the second research question (RQ2), this study identified and analyzed the relationships between the constructs of successful implementation of the CRISS platform. Notably, User Satisfaction has the largest effect on perceived Net Impacts, and Information Quality has stronger effects on System Use and User Satisfaction than the quality of the platform itself. The IPMA also confirmed this impact, identifying Information Quality and User Satisfaction as the constructs with the highest importance for the CRISS platform's overall effects (Net Impacts).
This study did not reveal a significant relationship between Service Quality and the actual usage of the CRISS platform (System Use) and detected only a weak effect of service quality on User Satisfaction. There are two plausible reasons for this. The first is the massive open online course (MOOC) that was created, instead of an instruction manual, to help teachers and students use the platform. The second is that, during the project, helpdesk officers often called schools and communicated with teachers to identify problems, motivate them to use the platform, and boost their self-confidence. Moreover, one of the project's aims was to have a sustainable platform that teachers could use without requiring conventional IT service support. Furthermore, the IPMA indicated that online help (such as MOOCs), an intuitive interface, and the platform's ease of use could be adequate means of support for teachers.

Limitations
The current research focused solely on the teachers' perspective of using the CRISS platform and the overall concept of DC acquisition, evaluation, and certification. However, the measurement instrument was designed in a general way, enabling its application to any system that deals with DC evaluation or certification; with slight modifications, it can be adapted to different target audiences. Although teachers are the primary users of the platform, students are also affected by the proposed changes and actively use the CRISS platform. In that respect, the authors of this study have already started adjusting the CRISS success instrument for students to assess their perception of the newly introduced CRISS concept.
In addition, since the number of teachers per country did not reach an optimal number to carry out the model analysis at the country level, the responses could be culturally conditioned. Therefore, this study should be further developed and carried out in each country separately to account for socio-cultural aspects.

Conclusions
Conceptually, this study extends the utilization of the D&M Model to the setting of DC evaluation and certification from the teachers' point of view. It considerably contributes to the field of education by showing that it is possible to effectively implement DC evaluation and certification within the compulsory curriculum in schools, thus starting DC education from the earliest age. Additionally, it reveals that the quality of service support is not vital for the successful implementation of such a platform, as long as the platform is easy to use and supported by online instructions (e.g., MOOCs).
To the best of our knowledge, the CRISS platform is the first endeavor to deliver a complete, cloud-based solution for the acquisition, evaluation, and certification of DC in Europe through a formal school curriculum. Considering the study [31], which summarizes theoretical recommendations from a dozen frameworks dealing with 21st-century competences, this study presents one step forward.
Although the measurement instrument was applied to the CRISS platform, it can be generalized and applied to other similar platforms. Hence, schools can use the measurement instrument developed in this study to assess the need for improving their systems for DC certification or the elements that impact the effectiveness of such systems.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to GDPR.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A

Figure A1. An overview of the CRISS DC certification process.

Appendix B

Table A1. Final CRISS success instrument.

System Quality

SQ1: The CRISS platform is easy to use.
SQ2: The CRISS platform is available whenever I need to use it.
SQ3: The CRISS platform runs fast.
SQ4: The CRISS platform has all the features necessary to accomplish my tasks (e.g., create and share content, work with others, import data from other tools).

Information Quality
IQ1: The information I find on the CRISS platform is useful to perform my activities.
IQ3: I can easily find the information I need on the CRISS platform.
IQ4: The information I find on the CRISS platform is complete.
IQ5: The information I find on the CRISS platform is accurate.
IQ6: The presentation of the assessment results is easy to understand.
IQ7: The information about the students' progress is clear.

Service Quality
SVQ1: Helpdesk is available to help me use the CRISS platform.
SVQ2: Helpdesk responds promptly when I have a problem with the CRISS platform.
SVQ3: Helpdesk provides a useful response when I have a problem with the CRISS platform.
SVQ4: Other forms of online help are available (e.g., chat, social networks) for using the CRISS platform.

System Use
SU5: I use the CRISS platform to collaborate with my colleagues (e.g., creation of CAS, assessments).
SU6: I use the CRISS platform to communicate/give feedback with/to my students.
SU7: I use the CRISS platform to tag my content (e.g., CAS).
SU8: I use the CRISS platform to track the progress and achievements of my students.
SU9: I use the CRISS platform to provide additional CAS or activities to students based on their assessment results.
SU10: I use the CRISS platform to integrate content or activities from external tools (e.g., YouTube, Facebook, Flickr, Google Drive).

User Satisfaction
US1: I feel comfortable using the CRISS platform.
US2: I find the CRISS platform useful for additional assessment of my students.
US3: I think it is worthwhile to use the CRISS platform.
US4: I feel confident using the CRISS platform.
US5: I am satisfied with the CRISS platform possibilities.

Net Impacts
NI3: The CRISS platform helps me to improve the engagement of my students.
NI4: The CRISS platform enables me to provide clear evaluation criteria to my students.
NI5: I am able to provide better feedback to my students through the CRISS platform.
NI6: I am able to provide timely feedback to my students.
NI7: The CRISS platform extends my capacity for assessment.
NI8: The CRISS platform saves me time by supporting my teaching activities (planning process, guiding students, assigning tasks, monitoring students' activities, etc.).
NI9: The CRISS platform allows me to track the progress of my students much better than I could do without the CRISS platform.
NI10: I am able to detect underperforming students more quickly than I would without the CRISS platform.
NI11: The CRISS platform helps me to make more suitable decisions to enable students' progress.
NI12: The CRISS platform enables me to propose tasks that allow students to be creative in solving them (ingenious, original).
NI13: The CRISS platform enables me to track my students' reasoning when solving the tasks.