Article

Key Quality Factors in Digital Competence Assessment: A Validation Study from Teachers’ Perspective

by Lourdes Guàrdia, Marcelo Maina *, Federica Mancini * and Montserrat Martinez Melo
Faculty of Psychology and Education Sciences, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2450; https://doi.org/10.3390/app13042450
Submission received: 28 November 2022 / Revised: 16 December 2022 / Accepted: 3 February 2023 / Published: 14 February 2023
(This article belongs to the Special Issue Information and Communication Technology (ICT) in Education)


Featured Application

The study provides a conceptual and methodological approach for the evaluation of Digital Competence in the school context. Above all, it highlights four factors that, according to teaching staff, describe the quality of the proposed model when applied to daily teaching practices.

Abstract

The progressive shift from a transmissive to a competence-based approach emphasises the need to design new models and tools for competence assessment. In order to address this need, this study, firstly, introduces the assessment model for digital competence in primary and secondary schools developed within the H2020 CRISS project. The Competence Assessment Model (CAM) (1) embraces the integration pedagogy approach in which competences are embedded within the curriculum and (2) provides competence assessment scenarios (CAS). Secondly, the study presents the results of the evaluation of its quality from the teachers’ perspective through an instrument designed on the basis of nine competence assessment quality criteria. The CAM was tested in a large-scale pilot study conducted in 535 primary and secondary schools across Europe. The outcomes show that the CAM mainly fulfils two of the nine quality criteria: meaningfulness and authenticity. However, the exploratory factor analysis also reveals that all these quality criteria can be summarised into four key quality factors that more accurately describe how the CAM is experienced in school teaching. Of these, the most relevant is (1) efficiency in time, student monitoring, support and evaluation. The three secondary factors identified are (2) fairness and cognitive complexity, (3) meaningfulness and authenticity and (4) reproducibility and transparency. The study provides a conceptual and methodological approach for the evaluation of Digital Competence in the school context. Above all, it highlights four factors that, according to teaching staff, describe the quality of the proposed model when applied to daily teaching practices.

1. Introduction

The mastery of competences has become increasingly crucial in today’s complex and global societies. In this section, we first describe the resulting shift towards competence-based education and assessment, focusing on their cognitive and social processes and some of their challenges: the design of effective competence-based assessment (CBA) approaches and the identification and application of criteria for determining their quality. Next, we present the CAM designed to assess digital competence (DC) in primary and secondary schools and the large-scale pilot study aimed at determining its effectiveness. Finally, we present the objective of our research, which involves assessing the quality of the model after its practical application in a real educational setting.

1.1. The Shift toward Competence-Based Assessment

Education is progressively shifting from a content-focused to a competence-based approach through which high-level skills and complex competences [1] are developed as prerequisites for thriving in today’s hyper-connected global societies. According to the Organisation for Economic Co-operation and Development Learning Framework 2030, the mastery of competences and the mobilisation of knowledge, skills, attitudes and values is considered crucial to cope with rapid technological advancements, navigate through uncertainty across a wide variety of contexts and meet complex demands [2].
Special emphasis is also placed on transversal competences, which are outlined as critically important for succeeding at school, in higher education and in the world of work [3]. Among them, the recommendations launched in 2018 by the European Commission [4], which update the previous release, particularly emphasise the increasing relevance of digital competence for personal fulfilment and participation in society at large. The improvement of DC, in fact, goes hand in hand with the development of a broader range of other key competences [5], for example, personal development and civic competences. As a result, educational policies have made evident efforts to introduce DC into the school curriculum and to foster new pedagogical approaches impacting pupils’ learning [6].
Against the progressive transition towards competence-based education, the exploration of new assessment methods and tools able to capture the processes inherent in competence development has become an increasingly central issue. According to Motschnig et al. [7], the goal of building competences and making “inferences about individual knowledge, skills and attitudes using information collected through tests, observation, interviews, projects or portfolios usually in regards to predefined criteria” [8] (p. 1) demands a paradigm shift towards a learner-centred approach associated with active learning methods and assessment tasks providing authentic problem contexts, meaningful to students [9,10,11].
Authentic assessment contexts, in particular, are key for competence assessment as they enable the development and reporting of more sophisticated educational achievements [12]. Growing emphasis is also placed on the opportunity for authentic assessment tasks to grasp the student’s thinking processes level, that is to say, the way students think, make decisions and provide a rationale for their judgments when performing a task [13,14,15]. According to Wesselink et al. [16], each student should be confronted with several authentic situations, directly aligned with a particular CBA and including problems and challenges that reflect the complexity of real life. Such alignment is crucial for the quality of assessment and, consequently, for the quality of teaching and learning.
CBA also calls for multiple forms of assessment and assessment instruments to collect data on different aspects of competence [14,17]. The collection of evidence of learning across many assessment methods, in fact, allows the development of a broad picture of student learning and its evolution over time [18]. In parallel, the assessment of competence also requires establishing clear learning outcomes [19] and informing students of their progress towards them [20]. Their relevance in planning and assessing learning also stresses the need to translate them into more specific statements, defined by Pepper [21] as sub-competences.
Designing an effective competence assessment model is currently a challenge, especially in the school environment where the lack of clear implementation models, normative beliefs about grading, absence of common definitions and the tendency to reproduce inherited practices affecting change [22,23] represent key barriers for CBA.
The call for assessment methods that adequately determine the acquisition of competence also leads to a shift in the way their quality is evaluated. Criteria such as validity and reliability are, in fact, no longer sufficient for assessing competences. According to Baartman et al. [13], although still relevant, they need to be operationalised in a way that includes the qualitative assessment practices envisaged in competence assessment (e.g., the systematic use of observation and situations that may not be exactly the same for all the students). In competence assessment, for example, the criterion of reliability in the sense of objectivity and standardisation needs to be replaced with the concepts of reproducibility and comparability, which refer to situations that are reproducible between different assessors and comparable between different students, without requiring the assessment to be identical. Moreover, the criterion of validity is often no longer conceptually clear when applied to competence assessments, partly because of the multiple definitions provided by scholars and partly because of the use of several assessment instruments measuring diverse aspects of a given competence.
In addition, the traditional quality criteria need to be complemented with new ones more suitable for capturing the complexity of CBA systems and the close alignment between assessment, teaching and learning [8,13,19,24,25]. Baartman et al. [13], synthesising the work of many other authors, suggest the criteria of authenticity, cognitive complexity, meaningfulness, fairness, transparency, educational consequences, directness, reproducibility of decisions, comparability and costs and efficiency for the new quality framework. Criteria of equity [8,24,25] and inclusiveness [19] are also increasingly gaining relevance, pointing to the need for learners to demonstrate what they know without being unfairly disadvantaged by individual characteristics.
Judgements about the value and relative merits of new forms of assessment, therefore, will depend on the criteria used to evaluate them, and this requires a clear definition of each criterion and its further operationalisation into an instrument [13]. For this purpose, quality criteria might be further broken down into indicators, which increases their usefulness and transparency and clarifies what high-quality assessment looks like and how it functions in practice. According to Baartman [26], this is crucial, because assessment quality in competence-based education is not only determined by the ‘correct’ design of the assessment but rather by the actual use of the assessments in practice.
Quality criteria, however, are theoretical constructs established to assess the quality of a CBA and, most of the time, are endorsed through a validation process involving relevant target groups. The quality framework proposed by Baartman et al., for example, is the result of an extensive literature review and a validation process that involved, first, experts and, at a later stage, pre-vocational and vocational teachers [27]. Likewise, Gerritsen-van Leeuwenkamp et al. [28] identify a six-factor structure for capturing students’ expectations and perceptions of assessment quality by selecting 98 assessment quality criteria from the literature and then asking students to validate the corresponding items through a large-scale pilot.
Therefore, considering the theoretical nature of the quality criteria, when a CBA is implemented in a real educational setting to assess a specific competence, they may fail to optimally describe its quality. In fact, despite the increased body of theoretical knowledge, empirical studies and, thus, practical knowledge on how these constructs should be designed when applied to a real-world context are still scarce. As such, it is very important to question the meaningfulness of the constructs once operationalised into an instrument and employed to analyse the quality of a given competence assessment in a specific context.

1.2. CRISS: Acquisition, Assessment and Certification of the Digital Competence in Primary and Secondary Schools

The CRISS project (co-funded by the Horizon 2020 Research and Innovation Programme of the European Union, ID: 732489, 2017–2019; http://www.crissh2020.eu/ accessed on 1 October 2020) presents a model for the development, assessment and certification of students’ DC in European primary and secondary education. It involved the creation of a DC operational concept and the development of a Competence Assessment Model (CAM) integrated into a dedicated ePortfolio platform. The DC operational concept was elaborated on the basis of the European Digital Competence Framework for Citizens and the analysis of seven DC frameworks applied in the school context in Europe [29]. It is composed of five areas (digital citizenship, communication and collaboration, search and manage information, content creation, and problem-solving) and 12 sub-competences, each subsequently divided into performance criteria and indicators.
The CAM [30], in turn, provides a means for the explicit and measurable tracking of students’ DC performance. It encompasses Roegiers’ [11] integration pedagogy approach, in which competences are embedded within the curriculum, and proposes competence assessment scenarios (CAS) as complex activities relevant to the development and mastery of competences, along with a set of assessment rules. CAS adopt advanced instructional approaches where the learner or learners are situated at the centre of the learning process to solve problems, develop projects or search for solutions in realistic contexts and meaningful situations. They are composed of activities and tasks enabling the assessment of one or more performance criteria and can be combined in different sets according to the area or sub-competence to be assessed. As competences develop over time, learners are required to perform the set of CAS during one or more academic terms, and teachers can sort the scenarios according to their complexity.
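To make this structure concrete, the sketch below models the hierarchy just described: sub-competences (grouped into the five areas) broken down into performance criteria, and CAS composed of tasks, each enabling the assessment of one or more criteria. It is a minimal illustration; all class and field names are our own assumptions, not the CRISS platform’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceCriterion:
    """A measurable statement within a sub-competence, with its indicators."""
    code: str                                   # illustrative identifier, e.g., "2.1.a"
    description: str
    indicators: list[str] = field(default_factory=list)

@dataclass
class SubCompetence:
    """One of the 12 sub-competences grouped under the five DC areas."""
    area: str                                   # e.g., "communication and collaboration"
    name: str
    criteria: list[PerformanceCriterion] = field(default_factory=list)

@dataclass
class Task:
    """A task within a CAS; it enables the assessment of one or more criteria."""
    description: str
    assessed_criteria: list[PerformanceCriterion] = field(default_factory=list)

@dataclass
class CompetenceAssessmentScenario:
    """A complex, authentic activity; scenarios can be combined into sets
    per area or sub-competence and sorted by complexity."""
    title: str
    complexity: int
    tasks: list[Task] = field(default_factory=list)

    def covered_criteria(self) -> set[str]:
        """Codes of the performance criteria assessed anywhere in this scenario."""
        return {c.code for t in self.tasks for c in t.assessed_criteria}
```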
The CAM integrates key literature recommendations and entails a student-centred approach and the use of active learning pedagogies [7,31], a variety of methods and instruments that support collecting multiple measures for triangulation and inference [8,14,30,32], authentic situations [9,10,11], conditions where students present evidence showcasing their thinking and reasoning [13,14,15], and clear and meaningful learning outcomes [19,33,34]. The use of the rule of 2/3 [35] provides the students with three opportunities to verify their competence development and achievements, ensuring assessment validity in the development of competence over time. Moreover, the combination in the same CAS of different assessment methodologies (self-assessment, peer evaluation, group assessment, teacher assessment) and instruments (questionnaire, rubrics, observation grid, etc.) contributes to tracking different aspects of the progressive development of DC. In addition, the performance of CAS enables students to deal with complex and authentic activities, encouraging active discussions, exchanges, problem-solving, knowledge creation and the mobilisation of multidisciplinary knowledge.
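The rule of 2/3, as specified later in Section 4.1 (mastery of a performance criterion is recognised when it is demonstrated on two out of the three assessment occasions), can be illustrated with a minimal sketch, assuming a simple pass/fail outcome per occasion:

```python
def rule_of_two_thirds(occasions: list[bool]) -> bool:
    """Recognise mastery of a performance criterion when it is demonstrated
    on at least two of the three assessment occasions (events) offered to
    the student; `occasions` holds the pass/fail outcome of each event."""
    assert len(occasions) == 3, "the rule provides exactly three opportunities"
    return sum(occasions) >= 2

# e.g., a student who succeeds on the first and third occasions
# has the performance criterion recognised as fulfilled:
print(rule_of_two_thirds([True, False, True]))  # True
```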
The CAS implementation was supported by an ePortfolio platform developed under the CRISS project, where students present evidence of their achievements and teachers assess them based on the pre-established performance criteria. The ePortfolio also integrates a set of features supporting the educational activities towards the development and assessment of DC. Among them are a CAS creation tool to design activities and tasks and relate them to assessment indicators, an ICT planning tool helping teachers to set up a plan and assign the corresponding tasks, and a dashboard showing detailed results on the students’ progress towards achieving the competence.
Student progress is also displayed in the ICT dynamic profile as a picture of the students’ daily learning activity and level of achievement in regard to the DC operational concept. This feature, moreover, enables the teachers and students to follow up on their progression and to reflect on the strengths, weaknesses and opportunities to improve their performance.
The ePortfolio, in addition to providing the option to create, upload and manage different types of evidence, also integrates an advanced search engine enabling users to browse and filter contents by year, subject, typology, format, etc. A set of tools for communication, including a messenger, a private request channel and a notification system alerting teachers and students to all messages, questions, evaluations and recommendations received, also support users’ interactions and timely feedback.
The setup of a large-scale pilot involved primary and secondary school teachers across Europe who co-designed CAS for testing [36], implemented them in the classroom and then evaluated their experience.

1.3. Aims of the Study and Research Questions

The pilot explored the CAM implementation through the experience of teachers and students, focusing on the model’s capacity to provide students with opportunities for competence development and teachers with adequate support to perform a reliable assessment. Starting from the outcomes of this pilot, this study aims at measuring the quality of the CAM in primary and secondary education by answering the following research questions:
  • RQ1. How does the Competence Assessment Model of Digital Competence comply with the most relevant Quality Criteria for Competence-based Assessment?
  • RQ2. Which are the key quality factors of the Competence Assessment Model applied to digital competence from the teachers’ perspective?

2. Materials and Methods

2.1. Design of the Instrument and Participants

The assessment of the CAM was based on the 10 quality criteria (QC) for the assessment of competences elaborated by Baartman et al. [13] from an analysis of the work of many different authors. The criterion Directness, however, was excluded, as the outcomes of a focus group of international experts conducted by the authors revealed its similarity to Authenticity and Cognitive Complexity. The remaining nine QC (see Table 1) were used to develop an operational tool aimed at validating the quality of the CAM. With this in mind, the QC were translated into indicators to assess the extent to which the model meets them. These indicators were subsequently validated by a sample of teachers who were experts in the topics of our research. The improvements suggested by the experts were then discussed and integrated into the initial proposal.
A questionnaire based on the final indicators was then created in Google Forms and distributed to all teachers participating in the pilots. The final version comprised 120 items, 25 of which related to QC indicators (see Table 2). Teachers evaluated each indicator via a self-report questionnaire using a 5-point Likert scale from 1 = strongly disagree to 5 = strongly agree.
A total of 2529 teachers participated in the CRISS project and implemented a set of 10 CAS with their students over about 6 months. The estimated workload for performing the entire set was approximately 83 hours. Around 65% of them were secondary school teachers. All teachers were invited to answer the questionnaire, and a total sample of 420 teachers took part in the evaluation process. This corresponds to a global sampling error of ±4.46% under the assumption of SRS (simple random sampling) on finite universes, in the case of maximum uncertainty (p = q = 50) and a confidence level of 95.5%. Demographic information related to the teachers’ sample is reported in Table 3.
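The reported sampling error can be reproduced with the standard margin-of-error formula for simple random sampling on a finite universe, with z ≈ 2 for a 95.5% confidence level and p = q = 0.5 for maximum uncertainty; a quick sketch:

```python
import math

def srs_margin_of_error(n: int, N: int, z: float = 2.0, p: float = 0.5) -> float:
    """Margin of error (in percentage points) under SRS on a finite universe,
    applying the finite population correction sqrt((N - n) / (N - 1))."""
    fpc = math.sqrt((N - n) / (N - 1))
    return 100 * z * math.sqrt(p * (1 - p) / n) * fpc

# 420 respondents out of the 2529 teachers participating in the pilot:
print(round(srs_margin_of_error(n=420, N=2529), 2))  # 4.46
```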

2.2. Data Analysis

The data analysis started with basic univariate descriptive statistics and continued with multivariate data analysis and bivariate contrast tests.
To answer the first research question, univariate descriptive statistics, including the mean (M), standard deviation (SD) and quartiles, were computed. Secondly, the Kolmogorov–Smirnov (KS) test was used to check whether the data were normally distributed. Since it was significant, the non-parametric Friedman and Wilcoxon tests were used to test the differences between items and criteria (QC).
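A minimal sketch of this testing sequence using SciPy rather than SPSS (the scores below are randomly generated stand-ins for the real Likert data):

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 Likert scores for three criteria, one column per criterion:
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(420, 3)).astype(float)

# Kolmogorov-Smirnov test of the standardised scores against a normal
# distribution; a significant result motivates non-parametric tests.
z = (scores[:, 0] - scores[:, 0].mean()) / scores[:, 0].std(ddof=1)
print(stats.kstest(z, "norm"))

# Friedman test for overall differences between the related criteria...
print(stats.friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2]))

# ...followed by pairwise Wilcoxon signed-rank tests.
print(stats.wilcoxon(scores[:, 0], scores[:, 1]))
```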
To address the second research question, several factor analyses were conducted: first an EFA (Exploratory Factor Analysis) and then a CFA (Confirmatory Factor Analysis) using Principal Component Analysis (PCA). A better-quality factor model was identified, as explained in the results in Section 3.2. The internal consistency of the new factors was also evaluated with Cronbach’s α coefficient.
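Cronbach’s α has a simple closed form that can be checked outside SPSS. A sketch, assuming a plain (respondents × items) score matrix for the items loading on one factor:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Randomly generated stand-in for the items loading on one extracted factor:
rng = np.random.default_rng(1)
factor_items = rng.integers(1, 6, size=(420, 6)).astype(float)
print(round(cronbach_alpha(factor_items), 3))
```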
All data analysis for this article was conducted using SPSS version 24. An alpha level of 0.05 was used for all statistical tests.

3. Results

3.1. The Quality Criteria of the Competence Assessment Model

The CRISS CAM achieves a positive assessment in all the QC, as on a 1–5 scale all the means are greater than 3. To facilitate reading the results, three sets of QC can be distinguished (Figure 1): set 1 includes meaningfulness (M = 3.69; SD = 0.872) and authenticity (M = 3.64; SD = 0.713), as well as comparability (M = 3.51; SD = 0.823), the QC that achieved the best results. Set 2 comprises QC with means around 3.3: educational consequences (M = 3.34; SD = 0.850), reproducibility of decisions (M = 3.32; SD = 0.736) and fairness (M = 3.32; SD = 0.677). Finally, set 3 clusters cognitive complexity (M = 3.29; SD = 0.773), transparency (M = 3.22; SD = 0.848) and cost efficiency (M = 3.11; SD = 0.899). The three QC included in set 1 are to be considered the strengths of the CAM. Table 2 provides the descriptive statistics for each indicator of the aforementioned QC.
The correlation analysis between the different items included in the nine QC revealed that some of them correlated more strongly with the indicators of other QC than with those of their own set. Thus, considering the overall results, the scores of the indicators belonging to the same QC showed a low correlation. Teachers evaluating authenticity considered that the components of the DC (knowledge, skills and attitudes) are integrated into the scenarios (M = 3.82; SD = 0.748). However, they reported that activities are not fully related to situations familiar to the students (M = 3.46; SD = 0.947). In relation to educational consequences, teachers considered that the CAM provides activities that help students to stay engaged in the learning process (M = 3.69; SD = 0.850). However, the result of this indicator contrasts with the scores on the platform’s capability to extend capacity for assessment (M = 3.33; SD = 1.028), improve student engagement (M = 3.24; SD = 1.067), and be used for other assessments (M = 3.04; SD = 1.178). Reproducibility of decisions scores highly for the application of different assessment instruments (M = 3.62; SD = 0.795) and the assessment on different occasions (M = 3.59; SD = 0.807). However, the platform’s capability to support more suitable decisions to boost students’ progress (M = 3.04; SD = 1.001) and to track that progress (M = 3.02; SD = 1.005) obtained lower scores.
The score achieved on fairness is mainly linked to the high rating of the activities’ coherence with what is being assessed (M = 3.54; SD = 0.832), how they allow students to advance according to their capability (M = 3.47; SD = 0.885), and the assessment instruments ensuring unbiased results (M = 3.45; SD = 0.822). However, teachers state that the platform is not more efficient than other platforms in identifying underperforming students (M = 2.83; SD = 1.021). This polarisation in the rating of the indicators is also evident in the case of cognitive complexity: while teachers consider that the feedback helps students understand the way they solve the activities (M = 3.61; SD = 0.828), they also give a low score to the platform’s capacity to track students’ reasoning when solving the tasks (M = 2.94; SD = 0.973). Transparency is one of the criteria that scores quite similarly in all its indicators: teachers considered that the information about the students’ progress is clear (M = 3.24; SD = 1.062), that the model provides clear evaluation criteria to students (M = 3.22; SD = 1.029), that the presentation of the assessment results facilitates their understanding (M = 3.17; SD = 1.029) and that the system allows teachers to provide better feedback (M = 3.16; SD = 1.028).
Finally, cost efficiency can be viewed from two angles: on the one hand, teachers consider that the CAM enhances teaching effectiveness (M = 3.42; SD = 0.960); on the other hand, they consider that the investment in time and resources is not fully justified by the improvement in competence assessment accuracy (M = 3.11; SD = 1.077). Furthermore, they reported that the platform does not save time in supporting teaching activities (planning, guiding students, assigning tasks, monitoring student activities, etc.) (M = 2.73; SD = 1.124).
These findings led us to reflect on the structure of the QC selected from the literature and, hence, to explore alternative factors that might better explain the teachers’ experience with the competence assessment model after its implementation in the school context.

3.2. Key Quality Factors of the Competence Assessment Model

3.2.1. Factor Analysis Procedure

After analysing the QC, we conducted a factor analysis with the aim of further understanding teachers’ experience with the model. With this goal in mind, first, we compared the scores achieved by each QC to check the similarities and differences between them. The Friedman test confirmed that these criteria are significantly different (χ2(8) = 273.449; p < 0.001). Their comparison through the Wilcoxon test also highlights these differences, indicating that teachers value them differently. However, some of the criteria also present similarities in how teachers experienced them. As shown in Table 4, the CAM is similar in terms of authenticity and meaningfulness. Cognitive complexity, fairness, transparency, educational consequences and reproducibility of decisions achieve similar scores. The same applies to fairness, except for the fact that teachers consider it different from educational consequences. Transparency is valued quite similarly to cognitive complexity and fairness but differs from the other criteria. The same happens with the reproducibility of decisions, but it can be considered quite similar to educational consequences. Thus, the educational consequences of the CAM are significantly different from all the rest except for cognitive complexity, reproducibility of decisions and comparability. Teachers consider comparability quite similar to educational consequences but significantly different from the other two criteria. Finally, cost efficiency is not similar to any other criterion.
Before conducting the factor analysis, we also checked its suitability by evaluating relevant statistical indicators through the KMO measure and Bartlett’s test. The results revealed correlations between indicators high enough to proceed with the analysis (see Table 5). Bartlett’s test and the KMO measure also indicated that the data set is adequate for factor analysis (χ2(300) = 6382.569, p < 0.001) [37,38].
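A sketch of the same suitability checks using the Python factor_analyzer package instead of SPSS (the score matrix is a randomly generated stand-in for the real 420 × 25 indicator data):

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Randomly generated stand-in for the 420 x 25 matrix of indicator scores:
rng = np.random.default_rng(2)
items = pd.DataFrame(rng.integers(1, 6, size=(420, 25)).astype(float))

# Bartlett's test of sphericity: a significant chi-square rejects the
# hypothesis that the correlation matrix is an identity matrix.
chi2, p_value = calculate_bartlett_sphericity(items)
print(f"Bartlett: chi2 = {chi2:.3f}, p = {p_value:.4f}")

# Kaiser-Meyer-Olkin measure of sampling adequacy; values closer to 1
# indicate that the data are suitable for factor analysis.
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}")
```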
The factor analysis, performed as the final step of this procedure, made it possible to synthesise the QC and establish hierarchies among them.

3.2.2. Factor Analysis Results

The factor analysis revealed the existence of four factors (see Table 6) which explain 65.77% of the total variance. The factors that emerged from the analysis are (KQF1) efficiency in time, student tracking, support and assessment (73.3% of the model); (KQF2) fairness and cognitive complexity (13.3%); (KQF3) meaningfulness and authenticity (7.6%) and (KQF4) reproducibility and transparency (6%).
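A sketch of such a four-factor extraction with the factor_analyzer package; method="principal" approximates the PCA-based extraction described in Section 2.2, while the varimax rotation is our assumption, as the rotation used is not stated:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Randomly generated stand-in for the 420 x 25 matrix of indicator scores:
rng = np.random.default_rng(3)
items = pd.DataFrame(rng.integers(1, 6, size=(420, 25)).astype(float))

# Extract four factors and inspect the variance each one explains.
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(items)

loadings = fa.loadings_  # 25 x 4 matrix mapping indicators to factors
variance, proportion, cumulative = fa.get_factor_variance()
print("proportion of variance per factor:", np.round(proportion, 3))
print("cumulative variance explained:", round(float(cumulative[-1]), 3))
```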
From the teachers’ perspective, the most important quality factor (KQF1) is a combination of efficiency in time, student tracking and support, and assessment (Table 7). This first KQF is also very consistent, as it appears with the same composition in many CFA runs. It consists of a combination of items included in the costs and efficiency (CE), educational consequences (E), reproducibility of decisions (R), transparency (T) and fairness (F) QC. Specifically, it emerges that teachers consider a series of elements to be part of a single quality factor. These elements enable more accurate tracking of students’ progress than other methods, greater student engagement and the detection of underperforming cases. Similarly, the model increases teachers’ capacity for assessment, including additional assessments, while providing better feedback and greater accuracy. Additionally, the CAM allows teachers to provide clear evaluation criteria to students and enhances teaching effectiveness. All in all, these elements focus on efficient procedures and their positive educational impact.
The other three KQF are consistent enough to be taken into account. KQF2 (13.28% of the final model) is mainly a combination of two criteria: fairness and cognitive complexity. It focuses on the way assessment instruments support consistent scoring among all the students, ensure unbiased results and implement activities coherent with what is being assessed. It also includes the capacity of the CAM to track students’ reasoning, to enable them to advance according to their level and to increase awareness through the feedback received. In sum, this factor addresses assessment consistency and reliability as well as the scaffolding of learning.
KQF3 (7.54% of the final model), named ‘meaningfulness and authenticity’, concerns the engagement of students in situations that are relevant and familiar through the CAS and the integration of the components of the DC. As such, this factor is linked to the concept of meaningful learning.
Finally, KQF4, named ‘reproducibility and transparency’ (5.91% of the total model), although it explains less variance than the other factors, has to be considered independent, reflecting a specific identity. It highlights how the model provides clear information about the students’ progress and the assessment results, as well as reliable outcomes based on multiple assessment occasions and instruments.
For a better understanding of the KQF, it is also useful to observe the distribution of the original QC into factors (see Figure 2). Some of them are completely included in one factor while others are divided into two or three.
The QC integrated into one unique factor (type 1) are meaningfulness (QC4) and authenticity (QC1), which are integrated into KQF3 (meaningfulness and authenticity). Comparability (QC8) and cognitive complexity (QC2) are completely integrated into KQF2 (fairness and cognitive complexity). Costs and efficiency (QC9) is completely integrated into the largest factor, KQF1 (efficiency in time, student tracking and support, and assessment). As such, these QC prove not to be constructs on their own but rather parts of a greater and stronger concept.
The rest of the original criteria are divided into different factors, which provides insight into their consistency in an applied CAM. Firstly, educational consequences (QC6) contributes mainly to KQF1 (efficiency in time, student tracking and support, and assessment) in terms of helping to improve student engagement, extending teachers’ capacity for assessment and being useful for additional student assessment. In turn, the indicator referring to the activities that help students to stay engaged in the learning process is linked to KQF3 (meaningfulness and authenticity).
Fairness (QC3) originally addresses unbiased procedures towards certain (groups of) learners. However, this QC applied to a real educational context is divided into two KQF: a) fairness and cognitive complexity (KQF2) which refers to the use of assessment instruments that ensure unbiased results and activities that are coherent with what is being assessed, enabling students to advance according to their capability, and b) efficiency in time, student tracking and support, and assessment (KQF1) which addresses the teachers’ ability to detect underperforming students more quickly.
Finally, transparency (QC5) and reproducibility of decisions (QC7) are divided into KQF1 and KQF4. Transparency (QC5), defined as the requirement for a CAM to be clear and understandable to all participants, reflects two different ideas that link it to different factors. The ability to provide better feedback to students and clear evaluation criteria is related to efficiency in time, student tracking and support, and assessment (KQF1). On the other hand, transparency, conceived as clarity of information on students’ progress and the presentation of the assessment results, proves to be linked to the main concept of KQF4: reproducibility and transparency.
A similar scenario is observed with the reproducibility of decisions (QC7). This criterion addresses the fact that (high-stakes) decisions made about students should be based on multiple assessments, carried out by multiple assessors and on multiple occasions. In a real educational context, teachers understand this criterion as divided into two different factors: efficiency in time, student tracking and support, and assessment (KQF1) and reproducibility and transparency (KQF4). The support provided by the platform for suitable decisions regarding students’ progress and their tracking is linked to the main factor, efficiency in time, student tracking and support, and assessment (KQF1). Conversely, the clarity and transparency of the information on the students’ achievements, determined by the combination of various assessment occasions and instruments, is related to reproducibility and transparency (KQF4).

4. Discussion

4.1. The Competence Assessment Model for Primary and Secondary Education’s Compliance with the Quality Criteria

Designing and implementing a robust assessment model capable of capturing competence development is a challenge. In fact, the main features that CBA should envisage must be translated into concrete implementation strategies applicable to a real context. As such, they require, first, a framework for breaking competences down into sub-competences and indicators and, secondly, concrete tools and methods addressing active learning approaches and authentic problem contexts [9,10,11], multiple forms of assessment and assessment instruments [14,17] and the transparency of the learning outcomes and assessment criteria [19].
With the first research question, we intended to verify whether the CAM of DC for primary and secondary schools fulfils the quality criteria selected from the literature by analysing the results obtained from the questionnaire distributed to a large sample of teachers. The outcomes (see Figure 1) show that the CAM primarily supports a meaningful and authentic assessment. Secondly, it also achieves positive outcomes in terms of comparability followed by educational consequences, reproducibility of decisions, fairness and cognitive complexity, while transparency and cost efficiency have room for improvement.
The application of the CRISS CAM, therefore, offers in the first place a meaningful learning experience (QC4. Meaningfulness). This means that the assessment tasks included in the CAS propose situations relevant to the students and get them to deal with real problems. The CAS, in fact, involve the students in the design and elaboration of products significant to their context (e.g., original routes to promote a region, touristic maps for exchange students or electronic newspapers for local stakeholders) and propose real problems to be solved (e.g., troubleshooting during the use of school technology, managing legal aspects linked to unethical online behaviour). Moreover, the option for CAS to be adapted or even created from scratch on the basis of specific guidelines also enables the content to be customised, thus enhancing its educational significance. Meaningfulness is also improved by relevant feedback provided by teachers and peers at key points of the learning process and by the sharing of the assessment criteria with the students.
The meaningfulness of the CAM is related to its authenticity (QC1. Authenticity), which also obtains positive results. From the teachers’ perspective, the activities included in the CAS are authentic as they entail tasks familiar to students and reflect real situations. Real environments and social situations also enable the mobilisation of knowledge, skills and attitudes which can be assessed in an integrated way [9,10,11]. In addition, the performance criteria provided in the rubrics were designed to enable the assessment of all the components of the competence to be integrated.
Comparability (QC8), which also reaches a satisfactory level, is a quality criterion that traditional assessments have paid much attention to and is still crucial in CBA. According to teachers, it is ensured by the assessment instruments which enable consistent scoring for all learners and, hence, the comparability of the conditions under which the assessment is carried out. To this end, the implementation of the CAM also provided the teachers with concrete indications (Teaching Notes) on how to use the assessment instruments (checklists, rubrics, scales, etc.) embedded within the scenarios to assess the students’ evidence (learning diaries, presentations, digital maps, proposals, algorithm implementation, etc.). Furthermore, the rubrics integrated within the ePortfolio aimed at ensuring the application of the same criteria for all learners in a classroom.
The outcomes of this first set of QC (Figure 1) highlight that the assessment scenarios proposed are meaningful and authentic. Meaningfulness is also enhanced by significant feedback and relevant performance criteria. Furthermore, the assessment demonstrates robustness and consistency by providing tasks, criteria and working conditions coherent with the aim of developing and assessing DC.
Positive scores, albeit with room for improvement, are achieved by the QC included in the second set (Figure 1). The effects of the CAM on students’ competence-based learning and education, in general, are quite positive (QC 6—Educational Consequences). From the teachers’ perspective, the assessment activities proposed in the scenarios and the features integrated into the CRISS platform help students to stay engaged in the learning process and contribute to extending the teachers’ capacity to assess students.
Reproducibility of decisions (QC7), as with comparability, is commonly used in traditional assessment. It relates to whether the decisions made are accurate and constant over time. The CAM complies quite satisfactorily with the goal of drawing reliable conclusions about a learner’s competences by providing multiple assessments, carried out by multiple assessors (teacher assessment, student self-assessment, peer assessment, group assessment), on multiple occasions. The application of the rule of 2/3 [35] gives the student three opportunities (events) to practise each performance criterion, recognising its fulfilment when mastery is successful on two out of three occasions [30]. Additionally, different assessment instruments are provided in each CAS to assess the evidence presented by the students and thus capture different aspects of a given performance criterion [14,17]. This result indicates that although the CAM relies on a more subjective form of assessment than standardised tests, teachers consider it fairly reliable when brought into practice.
Fairness (QC3), which also reaches an intermediate score, indicates that the activities included in the CAS roughly allow students to advance according to their capability and that the assessment tasks are sufficiently varied to cover the entire domain of competence. In fact, the CAS, as well as the assessment criteria, could be adapted to the learners’ educational level. Furthermore, teachers were able to adapt the content of the CAS to the students’ context by changing topics, locations, language, etc. Likewise, the external resources, tools and devices needed to perform the task could be customised according to personal preferences and needs.
The findings also highlight that the CAM partially supports both teachers and students in gaining insight into the thinking processes applied when performing a task (QC2. Cognitive complexity). The student’s thinking processes level is also captured quite satisfactorily through the evidence required from students (e.g., technical problems diaries, group discussion recordings) [14,15].
Finally, the findings of the analysis show that transparency (QC5. Transparency) and cost efficiency (QC9. Costs and efficiency) obtained the lowest scores. Despite the information on the students’ progress towards DC available in the platform’s dashboard and its graphical representation in a digital badge automatically updated after each assessment, the level of transparency (QC5), identified as the clarity and understandability of the scoring criteria, could be improved. This finding might be due to the structure of the scoring system implemented in the platform and the weights assigned to each performance criterion, which might be complex to assimilate in a short time and to clarify to students.
Finally, cost-efficiency (QC9) obtains the lowest rating from the teachers, meaning that the investments of time and resources to implement the CAM are not fully justified by the improvement of competence assessment accuracy. Likewise, the management of the teaching activities through the platform (planning, assigning tasks, monitoring the performance, etc.) does not offer great advantages for time optimisation and the effective administration of the tasks. It can be assumed that the low score obtained by this QC may have been influenced by the learning curve required to integrate a new CBA model and a new digital platform into the teachers’ instructional practices.
These results, therefore, indicate the CAM’s compliance with the quality criteria selected from the literature and embedded within a framework aimed at establishing the quality of competence assessment models and programmes. Nevertheless, the analysis of the data shows that these constructs could be reformulated in order to provide educational stakeholders with a better idea of the quality of this assessment model.

4.2. The Key Quality Factors of the Competence Assessment Model

The factor analysis, conducted after observing the contradictory scores between indicators belonging to the same QC, led to the identification of a reduced number of factors explaining the quality of the CAM for DC from the perspective of primary and secondary school teachers. These results enable us to answer the second research question addressed in this study and to come up with a meaningful description of the strengths of the model when implemented in teaching and assessment practices.
Following the outcomes of the analysis, we see that, among the KQF, ‘Efficiency in time, student tracking, support and assessment’ (KQF1) (see Figure 2) proves to be the most relevant and consistent. Costs and efficiency (QC9) is fully integrated into it, indicating that teachers perceive the CAM as a model that supports their regular teaching practices. The planning, assignment of tasks, student follow-up and monitoring of their progress throughout the performance of the different scenarios are effectively enhanced by the CAM, and this enables teachers to save time. Compliance with this goal is fostered by the platform through different features, among which are a dashboard showcasing the assessment outcomes and student progress towards each sub-competence, a robust notification system informing the teachers about students’ activities and the multiple assessment instruments embedded within the CAS.
This KQF also includes indicators from educational consequences (QC6), fairness (QC3), transparency (QC5) and reproducibility of decisions (QC7) (see Figure 2). This combination highlights that the CAM enables student tracking during their learning process and the rapid identification of underperforming cases. The students’ needs, therefore, are detected faster through the CAM which, in turn, also supports teachers in making suitable decisions on their progress and providing them with timely feedback. The messaging channels included in the platform also contribute to enhancing the communication and the exchange of feedback and thus extending the assessment capacity. The positive impact on student learning also makes it suitable to be applied for the assessment of additional students or for the assessment of different competences. All in all, these results suggest that the quality of the competence assessment model according to teachers relies on the efficiency in running classroom assessments including the student follow-up and provision of tailored meaningful feedback.
The KQF2 named ‘Fairness and cognitive complexity’ includes indicators from cognitive complexity (QC2), fairness (QC3), and comparability (QC8). More specifically, this quality factor emerging from the analysis points to the capacity of the model to reflect the presence and level of students’ thinking process while performing the tasks. This means that understanding the reasoning applied by the students is made possible through CAS that include problem-solving tasks, technical problem diaries, recordings of group discussions and other strategies that encourage reflection and reporting on the steps taken to achieve a result. Various tools such as observation grids adaptable by teaching staff have been made available on the platform within the teaching notes section to support the observation and annotation of processes and strategies applied by students.
At the same time, the activities and tools included in the CAS provide students with valuable and timely feedback that enables them to become aware of the strategies and reasoning used to accomplish a task. The ability to capture the higher-order skills deployed during the performance of a task is combined with indicators of fairness which is linked to the opportunity for students to advance according to their capability, ensuring unbiased results for certain groups of learners. This aspect of the model is provided by an assessment system that relies on indicators adapted to different educational levels and adaptable by the teachers. At the same time, CAS are adapted to the context that learners are familiar with and to the characteristics of the students. Comparability, which is also included in this factor, is somewhat related to the concept of reliability [13], emphasising the consistency of the assessment that in the CAM is fostered by multiple assessment occasions and instruments. In sum, this second KQF brings together under the same construct the visibility of the student’s thinking process, the comparability of its observations at different moments of the scenario and the coherence between activities and what is being assessed (QC3). The CAM, therefore, promotes the observation of complex cognitive processes and the assessment is focused on these multiple observations in a context that is adapted to the targeted students, thus providing results that are unbiased.
KQF3 named ‘Meaningfulness and authenticity’ integrates meaningfulness (QC4) and authenticity (QC1). These two criteria are closely related and prove the CAM keeps students engaged in the learning process [13,14,15] through meaningful and authentic scenarios. Real problems and a meaningful social context also enable the mobilisation of knowledge, skills and attitudes of a competence [9,10,11], hence facilitating its assessment.
Finally, KQF4, named ‘Reproducibility and transparency’, is a combination of transparency (QC5) and reproducibility of decisions (QC7). The indicators from the QC5 included in this factor refer to the clarity of information about the students’ progress and the presentation of the assessment results, while those from the QC7 focus on the reliability of the information about the students’ achievements. This factor, therefore, brings together the view of the CAM as a model which conveys clear information that, in turn, is also reliable for the multiple assessment occasions and the application of different instruments.
Considering the results obtained from this exploratory factor analysis, we can hence observe that the assessment model for digital competence, once applied to the school context, leads to the identification of new emerging factors describing its quality. Defining the strengths of the model and making explicit the factors in which its quality lies could serve to guide practitioners in its application to the school context. However, it is important to emphasise that the teaching intervention and the tools to be used by teaching staff, while drawing on the evidence of the study and the recommendations provided, must be calibrated according to the specific context of application in which the competency assessment is embedded.

5. Conclusions

This article presents the results of a study conducted with teachers to evaluate the quality of a CAM for DC after its large-scale test in primary and secondary schools. The assessment of the CAM was carried out through an instrument designed on the basis of nine QC for CBA.
The results of the application of this instrument show that the quality of the model achieves positive results according to the aforementioned QC. However, the factor analysis reveals four KQF as new constructs that describe the quality of the model for DC assessment more accurately. The most relevant one is a combination of time efficiency, student monitoring and support, and assessment. Specifically, it indicates that the CAM supports regular teaching practices such as planning, assigning tasks, monitoring students’ progress, providing timely feedback, detecting underperforming cases and optimising assessment performance. The other three KQF, resulting from combinations of fairness and cognitive complexity, meaningfulness and authenticity, and reproducibility and transparency, albeit less consistent, must also be considered independent because of their specific characteristics.
These new factors that emerged from the results of the practical implementation of the model in a real educational setting, lead us to reflect on how the quality criteria identified in the literature succeed in describing the quality of a competence assessment model applied to a real context. In parallel, the identification of these factors aims to be helpful for institutions and teachers willing to implement the CAM in their teaching practices and design CAS for the assessment of their students’ DC.
The findings of this study also present some limitations which suggest future research in this field. First, the present study only addresses the teachers’ perspective, so it would be interesting to also analyse how students experience the quality of the CAM for DC. Secondly, the results obtained might be influenced by different variables such as educational level, personal profile, country of origin or level of digital competence. The analysis of their relevance would help shed light on the outcomes of the factor analysis. Likewise, the technological performance of the platform could also have had an impact on the perception of the quality of the model, so it would be crucial to study this variable separately.

Author Contributions

L.G.: conceptualization, methodology, validation, supervision, project administration. M.M.: conceptualization, methodology, investigation, supervision. F.M.: conceptualization, methodology, writing (original draft preparation), investigation, visualization. M.M.M.: conceptualization, methodology, formal analysis, data curation, investigation, review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

CRISS Project was co-funded by the Horizon 2020 Research and Innovation Programme of the European Union, with Grant Agreement number 732489. This research has been published with the support of the Secretariat of Universities and Research of the Department of Business and Knowledge of the Generalitat de Catalunya.

Institutional Review Board Statement

The study was conducted in compliance with the Declaration of Helsinki, the H2020 Ethical Standards and the Spanish Organic Law 3/2018 on Data Protection and Digital Rights. This compliance was ensured by the Ethics Committee and the Data Protection Officer of the CRISS H2020 project, as well as the Data Protection Officer of the Fundació per a Universitat Oberta de Catalunya.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sale, D. Assessing Twenty-First Century Competencies. In Creative Teachers; Springer: Singapore, 2020; pp. 263–289.
  2. Organisation for Economic Co-operation and Development. The Future of Education and Skills. Education 2030; OECD: Paris, France, 2018.
  3. Economou, A. ATS2020-Assessment of Transversal Skills. Reflections and Policy Recommendations on Transversal Skills Development and Assessment; ATS2020 (Assessment of Transversal Skills) Consortium, 2018. Available online: http://www.ats2020.eu/images/documents/ATS2020-Reflections.pdf (accessed on 8 September 2022).
  4. European Commission. Proposal for a Council Recommendation on Key Competences for Lifelong Learning. 2018. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018SC0014 (accessed on 12 June 2022).
  5. Organisation for Economic Co-operation and Development. Skills for Social Progress. The Power of Social and Emotional Skills; OECD Publishing: Paris, France, 2015.
  6. Napal Fraile, M.; Peñalva-Vélez, A.; Mendióroz Lacambra, A. Development of Digital Competence in Secondary Education Teachers' Training. Educ. Sci. 2018, 8, 104.
  7. Motschnig, R.; Sedlmair, M.; Schröder, S.; Möller, T. A Team-Approach to Putting Learner-Centered Principles to Practice in a Large Course on Human-Computer Interaction. In Proceedings of the 2016 IEEE Frontiers in Education Conference (FIE), Erie, PA, USA, 12–15 October 2016.
  8. Pepper, D. KeyCoNet 2013 Literature Review: Assessment for Key Competences; KeyCoNet, 2013. Available online: http://keyconet.eun.org/c/document_library/get_file?uuid=b1475317-108c-4cf5-a650-dae772a7d943&groupId=11028 (accessed on 21 April 2022).
  9. Hidi, S.; Harackiewicz, J.M. Motivating the Academically Unmotivated: A Critical Issue for the 21st Century. Rev. Educ. Res. 2000, 70, 151–179.
  10. Jisc. The Future of Assessment: Five Principles, Five Targets for 2025; Jisc: Bristol, UK, 2020.
  11. Roegiers, X. Pedagogy of Integration: Education and Training Systems at the Heart of Our Societies; De Boeck: Brussels, Belgium, 2010.
  12. Crisp, G.; Guàrdia, L.; Hillier, M. Using E-Assessment to Enhance Student Learning and Evidence Learning Outcomes. Int. J. Educ. Technol. High. Educ. 2016, 13, 18.
  13. Baartman, L.K.J.; Bastiaens, T.J.; Kirschner, P.A.; van der Vleuten, C.P.M. The Wheel of Competency Assessment: Presenting Quality Criteria for Competency Assessment Programs. Stud. Educ. Eval. 2006, 32, 153–170.
  14. Lai, E.R.; Viering, M. Assessing 21st Century Skills: Integrating Research Findings. Paper presented at the Annual Meeting of the National Council on Measurement in Education, Vancouver, BC, Canada, 2012.
  15. Sargent, C. Teacher Guide. Assessment of Key Competences in School Education; European Schoolnet: Brussels, Belgium, 2014. Available online: https://goo.gl/62mBHr (accessed on 15 December 2021).
  16. Wesselink, R.; Biemans, H.; Gulikers, J.; Mulder, M. Models and Principles for Designing Competence-Based Curricula, Teaching, Learning and Assessment. In Technical and Vocational Education and Training: Issues, Concerns and Prospects; Springer International Publishing: Cham, Switzerland, 2016; pp. 533–553.
  17. Treffinger, D.J.; Young, G.C.; Selby, E.C.; Shepardson, C. Assessing Creativity: A Guide for Educators; National Research Center on the Gifted and Talented: Storrs, CT, USA, 2002.
  18. Fjørtoft, H. Multimodal Digital Classroom Assessments. Comput. Educ. 2020, 152, 103892.
  19. Siarova, H.; Sternadel, D.; Mašidlauskaitė, R. Assessment Practices for 21st-Century Learning: Review of Evidence; Publications Office of the European Union: Luxembourg, 2017.
  20. European Commission. Developing Key Competences at School in Europe: Challenges and Opportunities for Policy; Eurydice: Garges-lès-Gonesse, France, 2012.
  21. Pepper, D. Assessing Key Competences across the Curriculum—And Europe. Eur. J. Educ. 2011, 46, 335–353.
  22. Pane, J.F.; Steiner, E.D.; Baird, M.D.; Hamilton, L.S.; Pane, J.D. Informing Progress: Insights on Personalized Learning Implementation and Effects; RAND Corporation: Santa Monica, CA, USA, 2017. Available online: https://www.rand.org/pubs/research_reports/RR2042.html (accessed on 15 July 2022).
  23. Scheopner Torres, A.; Brett, J.; Cox, J.; Greller, S. Competency Education Implementation: Examining the Influence of Contextual Forces in Three New Hampshire Secondary Schools. AERA Open 2018, 4, 233285841878288.
  24. Kirova, A.; Hennig, K. Culturally Responsive Assessment Practices: Examples from an Intercultural Multilingual Early Learning Program for Newcomer Children. Power Educ. 2013, 5, 106–119.
  25. Montenegro, E.; Jankowski, N. Equity and Assessment: Moving towards Culturally Responsive Assessment; National Institute for Learning Outcomes Assessment: Champaign, IL, USA, 2017.
  26. Baartman, L.K.J. "Assessing the Assessment": Development and Use of Quality Criteria for Competence Assessment Programmes; 2008. Available online: https://edepot.wur.nl/117554 (accessed on 29 May 2022).
  27. Baartman, L.K.J.; Bastiaens, T.J.; Kirschner, P.A.; Van der Vleuten, C.P.M. Teachers' Opinions on Quality Criteria for Competency Assessment Programs. Teach. Teach. Educ. 2007, 23, 857–867.
  28. Gerritsen-van Leeuwenkamp, K.J.; Joosten-ten Brinke, D.; Kester, L. Developing Questionnaires to Measure Students' Expectations and Perceptions of Assessment Quality. Cogent Educ. 2018, 5, 1464425.
  29. Guitert, M.; Romeu, T.; Baztán, P. The Digital Competence Framework for Primary and Secondary Schools in Europe. Eur. J. Educ. 2020, 56, 133–149.
  30. Guàrdia, L.; Maina, M.; Baztán, P. Digital Competence Assessment Framework for Primary and Secondary Schools in Europe: The CRISS Project. In Proceedings of the EDEN Conference, Barcelona, Spain, 24–26 October 2018.
  31. Peyser, A.; Gerard, J.F.; Roegiers, X. Implementing a Pedagogy of Integration: Some Thoughts Based on a Textbook Elaboration Experience in Vietnam. Plan. Chang. 2006, 37, 37–55.
  32. Gulikers, J.T.M.; Bastiaens, T.J.; Kirschner, P.A.; Kester, L. Authenticity Is in the Eye of the Beholder: Student and Teacher Perceptions of Assessment Authenticity. J. Vocat. Educ. Train. 2008, 60, 401–412.
  33. Bae, S.; Kokka, K. Student Engagement in Assessments: What Students and Teachers Find Engaging; Stanford Center for Opportunity Policy in Education: Stanford, CA, USA, 2016.
  34. Prøitz, T.S. Learning Outcomes as a Key Concept in Policy Documents throughout Policy Changes. Scand. J. Educ. Res. 2014, 59, 275–296.
  35. De Ketele, J.M. L'évaluation des acquis scolaires: Quoi? Pourquoi? Pour quoi? [Assessing school learning: What? Why? What for?]. Revue Tunisienne des Sciences de l'Éducation 1996, 23, 17–36.
  36. Maina, M.F.; Santos-Hermosa, G.; Mancini, F.; Guàrdia Ortiz, L. Open Educational Practices (OEP) in the Design of Digital Competence Assessment. Distance Educ. 2020, 41, 261–278.
  37. Harman, H.H. Modern Factor Analysis; University of Chicago Press: Chicago, IL, USA, 1976.
  38. Kaiser, H.F. A Second Generation Little Jiffy. Psychometrika 1970, 35, 401–415.
Figure 1. Scores achieved on every criterion of the CRISS CAM (mean on a 1–5 scale; n = 420 teachers).
Figure 2. Four Factors of QC of CAMs. * Cronbach's alpha (α) not calculable. ¹ Not divided criteria. ² Divided criteria.
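Since Figure 2 reports a Cronbach's alpha per factor, a minimal sketch of that computation may help readers replicate the reliability figures. The five-respondent score matrix below is a made-up placeholder, not study data; note that alpha is undefined for a factor measured by a single item, which is why it is flagged as not calculable above.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    if k < 2:
        raise ValueError("alpha is undefined for single-item scales")
    item_variances = x.var(axis=0, ddof=1)      # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 responses from five teachers on a three-item factor.
demo = np.array([[4, 5, 4], [3, 3, 2], [5, 4, 5], [2, 2, 3], [4, 4, 4]])
print(round(cronbach_alpha(demo), 3))
```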
Table 1. QC: definitions.

QC1. Authenticity (A): The degree of resemblance of a CAM to the criterion situation. The type of tasks should be as realistic as possible (including the school environment and the social context), and knowledge, skills and attitudes have to be assessed in an integrated way.
QC2. Cognitive complexity (CC): The assessment tasks should reflect the presence and level of higher cognitive skills. This requires an analysis of the thinking processes (cognitive complexity) used when solving the CAS activities and tasks.
QC3. Fairness (F): A CAM should not show bias toward certain groups of learners. Assessment tasks should be adjusted to the educational level of the learners and reflect the knowledge, skills and attitudes of the competence.
QC4. Meaningfulness (M): A CAM should have significant value for both teachers and learners, providing a worthwhile educational experience and guidance in learning processes. For example, learners need meaningful feedback and assessment criteria to guide their learning process.
QC5. Transparency (T): A CAM should be clear and understandable to all stakeholders. Learners should know the scoring criteria, who the assessors are, and what the purpose of the assessment is.
QC6. Educational consequences (E): The intended, unintended, positive and negative effects of a CAM on learning and instruction, and how teachers and learners view the goals of education and adjust their learning and teaching activities accordingly. A CAM must have a positive effect on student learning.
QC7. Reproducibility of decisions (R): The decisions made on the basis of the results of a CAM should be accurate and constant across situations and assessors. Decisions about students should be based on multiple assessments, carried out by multiple assessors on multiple occasions.
QC8. Comparability (CB): The assessment should be conducted in a consistent and responsible way, under the same conditions for all learners. Tasks, scoring criteria and circumstances should occur consistently, using the same criteria for all learners.
QC9. Costs and efficiency (CE): The time and resources needed to develop and carry out the CAM, compared with the benefits. Additional investments in time and resources are justified by positive effects, such as improvements in learning and teaching.
Table 2. QC results of the CAM of digital competencies (N = number of respondents; M = mean on a 1–5 scale; SD = standard deviation).

QC4_MEANINGFULNESS (M): N = 405, M = 3.69, SD = 0.872
  m_The activities provide situations relevant to the students: N = 405, M = 3.69, SD = 0.872
QC1_AUTHENTICITY (A): N = 398, M = 3.64, SD = 0.713
  a2_The components of the digital competence (knowledge, skills and attitudes) are integrated into the scenarios: N = 406, M = 3.82, SD = 0.748
  a1_The activities are related to situations familiar to the students: N = 405, M = 3.46, SD = 0.947
QC8_COMPARABILITY (CB): N = 395, M = 3.51, SD = 0.823
  cb_The assessment instruments support consistent scoring among all students: N = 395, M = 3.51, SD = 0.823
QC6_EDUCATIONAL CONSEQUENCES (E): N = 384, M = 3.34, SD = 0.850
  e1_Working with activities helps students to stay engaged in the learning process: N = 401, M = 3.69, SD = 0.850
  e3_The CRISS platform extends my capacity for assessment: N = 410, M = 3.33, SD = 0.103
  e4_The CRISS platform helps me to improve the engagement of my students: N = 406, M = 3.24, SD = 0.107
  e2_I find the CRISS platform useful for additional assessment of my students: N = 411, M = 3.04, SD = 0.118
QC7_REPRODUCIBILITY OF DECISIONS (R): N = 375, M = 3.32, SD = 0.736
  r1_Applying different assessment instruments provides trustful information about the students' achievements: N = 399, M = 3.62, SD = 0.795
  r2_Assessing on various occasions provides trustful information about the students' achievements: N = 398, M = 3.59, SD = 0.807
  r4_The CRISS platform helps me to make more suitable decisions to enable students' progress: N = 394, M = 3.04, SD = 0.100
  r3_The CRISS platform allows me to track the progress of my students much better than I could do without the CRISS platform: N = 403, M = 3.02, SD = 0.101
QC3_FAIRNESS (F): N = 367, M = 3.32, SD = 0.677
  f3_The activities are coherent with what is being assessed: N = 403, M = 3.54, SD = 0.832
  f1_The activities allow students to advance according to their capability: N = 399, M = 3.47, SD = 0.885
  f2_The assessment instruments ensure unbiased results: N = 394, M = 3.45, SD = 0.822
  f4_I am able to detect underperforming students more quickly than I would without the CRISS platform: N = 389, M = 2.83, SD = 0.102
QC2_COGNITIVE COMPLEXITY (CC): N = 377, M = 3.29, SD = 0.773
  cc1_The feedback helps students understand the way they solve the activities: N = 390, M = 3.61, SD = 0.828
  cc2_The CRISS platform enables me to track my students' process when performing the tasks: N = 388, M = 2.94, SD = 0.973
QC5_TRANSPARENCY (T): N = 378, M = 3.22, SD = 0.848
  t1_The information about the students' progress is clear: N = 395, M = 3.24, SD = 0.106
  t2_The CRISS platform enables me to provide clear evaluation criteria to my students: N = 406, M = 3.22, SD = 0.103
  t3_The presentation of the assessment results is easy to understand: N = 399, M = 3.17, SD = 0.103
  t4_I am able to provide better feedback to my students through the CRISS platform: N = 403, M = 3.16, SD = 0.103
QC9_COST EFFICIENCY (CE): N = 377, M = 3.11, SD = 0.899
  ce3_The CRISS platform enhances my teaching effectiveness: N = 394, M = 3.42, SD = 0.960
  ce1_The investments in time and resources are justified by the improvement in competence assessment accuracy: N = 399, M = 3.11, SD = 0.108
  ce2_The CRISS platform allows me to better optimise my time because it helps me to carry out my activities: N = 404, M = 2.73, SD = 0.112
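The criterion-level rows in Table 2 aggregate their indicator items. As an illustrative sketch only (the six response rows below are placeholders, not the study's 420-teacher dataset), the per-criterion N, M and SD could be obtained with pandas roughly as follows:

```python
import pandas as pd

# Placeholder 1-5 responses for the two QC2 (cognitive complexity) items.
items = pd.DataFrame({
    "cc1": [4, 3, 4, 2, 5, 3],
    "cc2": [3, 2, 3, 3, 4, 2],
})

per_teacher = items.mean(axis=1)  # one criterion score per respondent
print({"N": int(per_teacher.count()),
       "M": round(per_teacher.mean(), 2),
       "SD": round(per_teacher.std(ddof=1), 3)})
```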
Table 3. Teacher sample characteristics (n = 420; percentages in parentheses).

Demographic profile
  Country: Spain 151 (36.0%); Italy 96 (22.9%); Croatia 65 (15.5%); Greece 60 (14.3%); Romania 26 (6.2%); Sweden 22 (5.2%)
  Gender: Female 281 (66.9%); Male 139 (33.1%)
  Age: 25–29 25 (6.0%); 30–39 119 (28.3%); 40–49 162 (38.6%); 50–59 107 (25.5%); over 60 7 (1.7%)
Academic profile
  School level: Primary 92 (21.9%); Secondary 328 (78.1%)
  Students total: mean 101.19, SD 97.300
  Students CRISS: mean 30.76, SD 60.456
Table 4. Significant differences between every QC: Wilcoxon signed-rank test for paired samples. Each pair is listed once; "n.s." = not significant.

QC1 Authenticity (A) vs.: CC, Z = −8.008, p = 0.000; F, Z = −8.135, p = 0.000; M, n.s.; T, Z = −8.855, p = 0.000; E, Z = −6.632, p = 0.000; R, Z = −7.662, p = 0.000; CB, Z = −3.542, p = 0.000; CE, Z = −10.273, p = 0.000
QC2 Cognitive complexity (CC) vs.: F, n.s.; M, Z = −6.247, p = 0.000; T, n.s.; E, n.s.; R, n.s.; CB, Z = −3.151, p = 0.002; CE, Z = −3.462, p = 0.001
QC3 Fairness (F) vs.: M, Z = −9.192, p = 0.000; T, n.s.; E, Z = −1.996, p = 0.046; R, n.s.; CB, Z = −4.759, p = 0.000; CE, Z = −4.681, p = 0.000
QC4 Meaningfulness (M) vs.: T, Z = −9.562, p = 0.000; E, Z = −7.642, p = 0.000; R, Z = −8.099, p = 0.000; CB, Z = −4.799, p = 0.000; CE, Z = −11.016, p = 0.000
QC5 Transparency (T) vs.: E, Z = −4.018, p = 0.000; R, Z = −3.382, p = 0.000; CB, Z = −4.690, p = 0.000; CE, Z = −3.522, p = 0.000
QC6 Educational consequences (E) vs.: R, n.s.; CB, n.s.; CE, Z = −8.162, p = 0.000
QC7 Reproducibility of decisions (R) vs.: CB, Z = −3.266, p = 0.001; CE, Z = −6.302, p = 0.000
QC8 Comparability (CB) vs.: CE, Z = −6.191, p = 0.000
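For readers who want to reproduce this kind of pairwise comparison, the following sketch runs Wilcoxon signed-rank tests over every pair of QC columns with scipy. The data frame here is random placeholder data with three invented QC columns, not the study's teacher scores; note also that scipy reports the W statistic, whereas SPSS-style output (as in Table 4) reports a normal-approximation Z.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import wilcoxon

# Placeholder teacher-level QC means on a 1-5 scale (the study had n = 420).
rng = np.random.default_rng(42)
qc = pd.DataFrame(rng.uniform(1, 5, size=(50, 3)), columns=["A", "CC", "M"])

rows = []
for left, right in combinations(qc.columns, 2):
    paired = qc[[left, right]].dropna()      # the test needs complete pairs
    stat, p = wilcoxon(paired[left], paired[right])
    rows.append({"pair": f"{left} vs {right}", "W": stat, "p": round(p, 4),
                 "significant": p < 0.05})

print(pd.DataFrame(rows))
```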
Table 5. KMO and Bartlett's test of sphericity.

Determinant: 0.0000
KMO: 0.951
Bartlett's test: Chi² = 6382.569; df = 300; Sig. = 0.000
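Both statistics are available off the shelf in the factor_analyzer package, so a sketch of the check is short. The item matrix below is random placeholder data (420 respondents by 25 items, matching Bartlett's df = 25 × 24 / 2 = 300); on the real questionnaire data the study reports KMO = 0.951 and Chi² = 6382.569.

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

# Placeholder matrix standing in for the 25-item questionnaire.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(420, 25)).astype(float))

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}; "
      f"overall KMO = {kmo_total:.3f}")
```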
Table 6. KQF identified and related variance (extraction sums of squared loadings).

KQF1. Efficiency in time, student tracking and support, and assessment: total = 12.046; % of variance = 48.183; cumulative % = 48.183; final model % = 73.257
KQF2. Fairness and cognitive complexity: total = 2.184; % of variance = 8.737; cumulative % = 56.920; final model % = 13.284
KQF3. Meaningfulness and authenticity: total = 1.241; % of variance = 4.964; cumulative % = 61.884; final model % = 7.547
KQF4. Reproducibility and transparency: total = 0.972; % of variance = 3.889; cumulative % = 65.773; final model % = 5.912
TOTAL: total = 16.443; cumulative % = 65.773; final model % = 100.000
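The "% of variance" column follows from dividing each factor's sum of squared loadings by the number of items analysed. Assuming the 25-item instrument implied by Bartlett's df = 300 in Table 5, the figures check out, and the "final model %" column is evidently each factor's share of the total variance explained:

\[
\%\,\mathrm{Var}_j = \frac{\lambda_j}{p} \times 100,
\qquad \frac{12.046}{25} \times 100 \approx 48.183\%,
\qquad \frac{16.443}{25} \times 100 \approx 65.773\%;
\]
\[
\mathrm{Final\ model}\,\%_j = \frac{\%\,\mathrm{Var}_j}{65.773} \times 100,
\qquad \frac{48.183}{65.773} \times 100 \approx 73.257\%.
\]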
Table 7. Rotated component matrix. Each item is listed with its original criterion and its loading on the component (F1–F4) where it loads.

KQF1. Efficiency in time, student tracking and support, and assessment (F1)
  ce2_The CRISS platform allows me to better optimise my time because it helps me to carry out my activities (QC9. Costs and efficiency): 0.805
  r4_The CRISS platform helps me to make more suitable decisions to enable students' progress (QC7. Reproducibility of decisions): 0.774
  r3_The CRISS platform allows me to track the progress of my students much better than I could do without the CRISS platform (QC7. Reproducibility of decisions): 0.763
  e4_The CRISS platform helps me to improve the engagement of my students (QC6. Educational consequences): 0.744
  f4_I am able to detect underperforming students more quickly than I would without the CRISS platform (QC3. Fairness): 0.742
  e3_The CRISS platform extends my capacity for assessment (QC6. Educational consequences): 0.730
  e2_I find the CRISS platform useful for additional assessment of my students (QC6. Educational consequences): 0.716
  t4_I am able to provide better feedback to my students through the CRISS platform (QC5. Transparency): 0.703
  ce1_The investments in time and resources are justified by the improvement in competence assessment accuracy (QC9. Costs and efficiency): 0.669
  t2_The CRISS platform enables me to provide clear evaluation criteria to my students (QC5. Transparency): 0.655
  ce3_The CRISS platform enhances my teaching effectiveness (QC9. Costs and efficiency): 0.527
KQF2. Fairness and cognitive complexity (F2)
  cb_The assessment instruments support consistent scoring among all students (QC8. Comparability): 0.796
  f2_The assessment instruments ensure unbiased results (QC3. Fairness): 0.780
  f3_The activities are coherent with what is being assessed (QC3. Fairness): 0.645
  cc2_The CRISS platform enables me to track my students' reasoning when solving tasks (QC2. Cognitive complexity): 0.602
  f1_The activities allow students to advance according to their capability (QC3. Fairness): 0.542
  cc1_The feedback helps students understand the way they solve activities (QC2. Cognitive complexity): 0.475
KQF3. Meaningfulness and authenticity (F3)
  m_The activities provide situations relevant to the students (QC4. Meaningfulness): 0.769
  a1_The activities are related to situations familiar to the students (QC1. Authenticity): 0.760
  a2_The components of digital competence (knowledge, skills and attitudes) are integrated in the scenario (QC1. Authenticity): 0.615
  e1_Working with activities helps students to stay engaged in the learning process (QC6. Educational consequences): 0.465
KQF4. Reproducibility and transparency (F4)
  t1_The information about the students' progress is clear (QC5. Transparency): 0.693
  t3_The presentation of the assessment results is easy to understand (QC5. Transparency): 0.641
  r2_Assessing on various occasions provides trustful information about the students' achievements (QC7. Reproducibility of decisions): 0.553
  r1_Applying different assessment instruments provides trustful information about the students' achievements (QC7. Reproducibility of decisions): 0.506
Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization. Note for the whole factor process: rotation converged in 8 iterations.
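A principal-component extraction with varimax rotation of this kind can be reproduced with factor_analyzer; the sketch below uses random placeholder data in place of the study's 25-item questionnaire, so its loadings are illustrative only. Suppressing loadings below 0.40 mimics how published rotated matrices such as Table 7 are usually displayed.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder data standing in for the 25 questionnaire items (420 teachers).
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 6, size=(420, 25)).astype(float),
                     columns=[f"item_{i:02d}" for i in range(1, 26)])

# Principal-component extraction, varimax rotation, four factors (as in Table 7).
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["F1", "F2", "F3", "F4"])
# Blank out small loadings so the matrix reads like a published rotated table.
print(loadings.where(loadings.abs() >= 0.40).round(3))
```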