Review

Digital Competence Assessment Methods in Higher Education: A Systematic Literature Review

School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
* Author to whom correspondence should be addressed.
Educ. Sci. 2021, 11(8), 402; https://doi.org/10.3390/educsci11080402
Submission received: 31 May 2021 / Revised: 25 June 2021 / Accepted: 30 June 2021 / Published: 4 August 2021
(This article belongs to the Section Higher Education)

Abstract

The rapid increase in recent years in the number of digital competency frameworks, models, and strategies has strengthened the argument for evaluating and assessing digital competence. To support the process of digital competence assessment, it is consequently necessary to understand the different approaches and methods. This paper presents a systematic literature review and an analysis of existing proposals and conceptions of digital competence assessment processes and methods in higher education, with the aim of better understanding the field of research. The review follows three objectives: (i) describe the characteristics of digital competence assessment processes and methods in higher education; (ii) provide an overview of current trends; and, finally, (iii) identify challenges and issues in digital competence assessment in higher education, with a focus on the reliability and validity of the proposed methods. The findings, viewed in light of the COVID-19 pandemic, indicate that digital competence assessment in higher education requires more attention, with a specific focus on instrument validity and reliability. Furthermore, it will be of great importance to further investigate the use of assessment tools to support systematic digital competence assessment processes. The analysis includes possible opportunities and ideas for future lines of work in digital competence evaluation in higher education.

1. Introduction

Digital technologies have brought changes to, and thus challenges to, our everyday life, in which the use of technology is inevitable. The concept of digital competence has therefore become a central part of our lives: this set of skills has quickly risen to become a key area of competence in policy-related documents and is a clear focus of European policies. It can be assumed that digital competence has become a focus of policy because of its future-oriented nature: it comprises the skills expected from a workforce that will need to operate effectively in a knowledge-intensive society. Thus, when talking about supporting the development of such skills, it is also important to find ways to evaluate digital competence.
The situation today is similar to that described by Bawden [1], who noted as early as 2001 that the literature on digital skills is not consistent in the terms used or the underlying concepts. When talking about digital competence, we use a variety of similar concepts: digital skills, digital literacy, media literacy, information literacy, transversal skills, new media literacy, e-skills, e-competences, and, in some cases, digital intelligence. Often, these concepts are discussed as 21st-century skills. Furthermore, while the concept of digital competence was once considered mainly to comprise skills related to computer use, today it refers to a wider set of knowledge, skills, and attitudes that are largely shaped by the labor market. As we move into more comprehensive work situations that include open learning spaces, informal learning and working situations, and a greater number of interactive technologies, digital competence is taking on new dimensions, becoming more context specific as well as more conceptually intricate. Following on from this, different digital competency models and frameworks have been developed for different target groups. Most notably, in Europe, these include the DigComp Framework for Citizens [2], first published in 2013 and last updated in 2017, while in the United States, they include the ISTE [3] digital competency standards for teachers and the NETS model for students’ digital skills. Additionally, the UNESCO ICT Competency Framework for Teachers [4] aims to help countries develop comprehensive policies and standards for teachers, while the DigCompEdu framework [5], published in late 2017, aims to create standards for evaluating the digital competence of educators in Europe. The rapid increase in the number of different digital competency frameworks, models, and strategies has profoundly shifted the focus from the measurement aspects and operational interpretations of digital competence towards aspects related to definitions, indicators, and indexes [6].
In addition, among the various conceptions of digital competence and the frameworks that set out initial scales for measuring it, one of the most complex topics is how to assess such skills, given that digital competence is considered at the policy level to comprise essential future skills in knowledge-intensive societies. The evaluation of digital skills has proven to be a real challenge, and existing systems have failed to initiate effective and systematic processes [7]. These arguments frame both the lack of and the need for systematic digital competency assessment methods and, furthermore, tools that can be implemented with the particular field of expertise in mind. Several attempts have been made to assess and measure digital competence, but an overview of the different possibilities with their strengths and weaknesses is lacking. At the same time, such an overview is needed to guide the various policy-level initiatives and research groups aiming to assess such competence, which struggle with similar questions: which instruments are less time-consuming, provide enough evidence, are based on real-life situations and tasks, and, at the same time, are valid and reliable?
In light of the statements above and the small number of previous literature reviews focusing on the assessment of digital competence in higher education that include students and academic staff in the sample, the following gives a brief overview of the various definitions of digital competence and equivalent terms. The large number of different attitudes, values, and approaches to digital competence provides us with a plethora of similar concepts [1]. DigComp 2.1, the Digital Competence Framework for Citizens [2], which defines eight proficiency levels with examples of use, covers the following competence areas: information and data literacy; communication and collaboration; problem solving; content creation; and safety.
Ala-Mutka [8] presented what might be considered the first holistic visual representation of digital literacies in the context of 21st-century skills in the knowledge society. Both 21st-century skills and digital competence are seen as important future skills in a knowledge-intensive society, although the relationship between the two concepts remains unclear. Van Laar et al. [9] have proposed the term 21st-century digital skills, which are critical for both people and organizations in keeping up with developments and innovating products and processes. These skills go beyond technical proficiency and have a greater impact on a person’s ability to function in a technologically rich society than mere knowledge of specific software. Although technologies are the foundation of innovation, technology does not create knowledge-based innovation; people do, which makes human capital within the workforce decisive. This means that future workplaces demand a workforce with 21st-century skills: collaboration, communication, digital literacy, citizenship, problem solving, critical thinking, creativity, and productivity [10]. In the current fast-changing knowledge economy, 21st-century digital skills drive organizations’ competitiveness and innovation capacity [9].
The ongoing COVID-19 pandemic has highlighted the importance of digital competence in implementing technology-enhanced learning practices at all levels of education. Zhao, Pinto Llorente, and Sánchez Gómez [11] stated in their literature review that perceptions of digital competence vary, which is also evident in self-assessed levels of digital competence.
This paper provides an overview of the field of digital competence assessment methods, instruments, and tools in higher education through a systematic literature review of digital competency assessment practices and conceptions, focusing on the instruments used for digital competence assessment. Bearing this in mind, the review pursues three main goals: (i) describe the characteristics of digital competence assessment processes and methods in higher education; (ii) provide an overview of current trends; and (iii) identify the challenges and issues in digital competence assessment in higher education, focusing on the reliability and validity of the proposed methods. The review concludes with possible opportunities and ideas for future lines of work in digital competence evaluation, mainly in higher education.

2. Systematic Review Methodology

The current literature review follows the principles of a systematic literature review as presented by Siddiq, Hatlevik, Olsen, Throndsen, and Scherer [12], which in turn applied the predefined procedures proposed by Gough, Oliver, and Thomas [13] and Petticrew and Roberts [14]. The review also rests on the principles of the Cochrane Community, which defines a systematic literature review as the collection of empirical evidence that fits predetermined eligibility criteria in order to answer a specific research question [15]. A systematic review minimizes the possibility of bias, as conclusions and decisions are drawn from a large and diverse body of research findings. With the aim of analyzing evidence on the evaluation of digital competences in higher education, the review includes the formulation of research questions, a search for and assessment of the literature based on eligibility criteria, a description of the analysis, and an assessment of the literature’s suitability.

2.1. Search Procedures

The review focuses on higher education with the aim of understanding and describing digital competence assessment processes and their implications. A variety of notions is used to describe the topic of digital competence. To cover the field clearly, a set of synonyms and alternative phrases was used in the literature search, resulting in the following terms (a sketch of how these terms could be combined into a database query follows the list):
  • Digital competence: Digital competency, ICT literacy, digital literacy, ICT skills, digital skills, computer skills, technology literacies, digital competencies, 21st century skills;
  • Measurement: Assessment, evaluation, testing, measuring, questionnaire.
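As a purely illustrative sketch, the snippet below shows one way these two synonym sets could be combined into a single Boolean query string; the exact field tags and syntax differ for each of the databases listed in the next subsection, so the quoting and operators here are assumptions rather than the queries actually submitted.

```python
# Sketch: build a Boolean search string from the two synonym sets used in the review.
# Each database (IEEE Xplore, Science Direct, Springer Link, ACM DL, Web of Science)
# has its own field tags and operator syntax; this output is only a generic example.

competence_terms = [
    "digital competence", "digital competency", "ICT literacy", "digital literacy",
    "ICT skills", "digital skills", "computer skills", "technology literacies",
    "digital competencies", "21st century skills",
]
measurement_terms = ["assessment", "evaluation", "testing", "measuring", "questionnaire"]

def or_block(terms):
    """Join terms into a parenthesized OR block with exact-phrase quoting."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{or_block(competence_terms)} AND {or_block(measurement_terms)}"
print(query)
```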
The research followed two main lines. The first covers descriptions of digital competence instrument and tool development processes that include a sample in the validation process. The second line of research focuses specifically on the assessment of digital competence. Both lines were limited to higher education settings, thus giving an overview of the methods, instruments, and tools implemented for digital competence assessment in adult learning.
The literature search and analysis were carried out from March 2018 to January 2019. The database search was restricted to the years 2000–2018. The data were queried from five academic databases: IEEE Xplore, Science Direct, Springer Link, ACM Digital Library, and Web of Science. These databases were selected because of the large volume of computer science and ICT research they publish. In addition, Google Scholar was used to find grey literature and to gain an overall view of the topic of digital competency assessment in higher education. Beyond the database search, professional social networks such as Academia.edu and ResearchGate were scanned for relevant articles. In the final step, field experts were consulted and suggested additional studies carried out in the field.

2.2. Eligibility Criteria and Screening Process

Initial screening of the literature was done by reading the titles and abstracts. During this process, all studies set in the context of higher education that presented digital competence assessment as the core topic were included in the next round of analysis; this constituted the initial eligibility criterion. Before the next stage of the screening process, the final eligibility criteria were set out.
The criteria include the following:
  • Research focused on digital competence assessment;
  • Assessment of ICT literacy, or an equivalently defined concept, in higher education;
  • The articles describe the development process of a digital competence (or equal concept) assessment tool and include a sample in the validation process;
  • The research method includes digital competence self-assessment or self-reporting;
  • Research which focuses on a variety of interventions in technology-enhanced learning and includes digital competence assessment as a preliminary stage of the research.
In the elimination process the following criteria were used:
  • Insufficient or no reporting on the sample, digital competence assessment instrument or educational setting;
  • The characteristics of the tool or instrument used were not given.
The full screening process of the literature is presented in Figure 1. The initial search produced 6399 studies, whose titles and abstracts were screened, and relevant studies were selected based on the initial eligibility criteria. After the database query, 316 papers were left for further review. Each of the 316 studies was analyzed against the eligibility criteria by reading the full text. All literature that focused on digital competency assessment in secondary or lower education was excluded. Altogether, 274 publications were excluded based on the context of the research, the methodologies used, and the research focus. The focus was on studies that included a sample for validation in the research.
Furthermore, as the same keywords were used in all five academic databases, some of the results were duplicates. Finally, as the databases do not allow keywords to be searched in titles only, some retrieved papers had low relevance for the literature review in terms of the eligibility criteria. Consequently, the literature screening process yielded 40 suitable studies, which make up the literature used in the systematic review. Based on the research objectives, a content analysis was carried out to provide an overview of the characteristics of digital competence evaluation in higher education and to identify trends and challenges.
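For illustration only, a minimal sketch of this screening bookkeeping is shown below, assuming the search results were exported with DOI, title, and abstract fields; the column names and the keyword heuristic are invented, and in the review itself inclusion decisions were made by reading titles, abstracts, and full texts.

```python
import pandas as pd

# Hypothetical export of database search results (6399 records in the actual review).
records = pd.DataFrame([
    {"doi": "10.1234/a", "title": "Digital competence assessment in higher education", "abstract": "..."},
    {"doi": "10.1234/a", "title": "Digital competence assessment in higher education", "abstract": "..."},  # duplicate
    {"doi": "10.1234/b", "title": "ICT literacy of secondary school pupils", "abstract": "..."},
])

# Remove duplicates retrieved from multiple databases.
deduplicated = records.drop_duplicates(subset="doi")

# First-round heuristic: keep records whose title or abstract mentions higher education
# and a digital competence term; full-text eligibility checks still follow manually.
text = (deduplicated["title"] + " " + deduplicated["abstract"]).str.lower()
mask = text.str.contains("higher education|university") & text.str.contains("digital competen|ict literacy")
screened = deduplicated[mask]

print(f"retrieved={len(records)}, after deduplication={len(deduplicated)}, after screening={len(screened)}")
```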

3. Results

The systematic review includes 40 publications gathered from the sources presented and visualized in the methodology chapter. The research covers both technological tools and instruments developed for the purpose of digital competence assessment. This includes fully developed platforms [16,17], digital competence scales [18,19], and tools developed within international projects [20].

3.1. RQ1—Characteristics of Digital Competence Assessment Processes and Methods in Higher Education

To give a clear overview of the characteristics of digital competence assessment processes and methods in higher education, we first mapped the geographical distribution of the analyzed research. We also examined other characteristics such as the year of publication, the number of respondents, the research methods, the task design, and the competency frameworks or models used.
Table 1 gives an overview of the characteristics of the research included in the literature review. The final set of publications included studies from 25 different countries, of which four include a sample from two countries [9,20,21,22]. The most represented country was Spain, with nine instances, which reflects national-level policies stating the importance of supporting the development of digital competence in higher education. A similar conclusion can be drawn from the articles based on samples from the United States and Portugal. The remainder of the studies focus mainly on field-specific digital competence, which again shows a growing trend towards describing and evaluating digital competence in a specific context.
We also looked into the year of publication and the number of respondents to understand the scope of the research. The included studies were published between 2007 [23] and early 2019 [54], and sample sizes varied between 35 [24] and 35,000 subjects [22], covering academic staff in higher education as well as student teachers [54] and students from other fields [51]. In one case the sample size was not reported [45], but the study was included because it used a specially developed tool that was also used in another study [17]. The majority of the research was published in 2018 (n = 13).
One of the objectives of the study was to understand which research methods were used in the included studies and what underpinned the task and item design of the digital competence instruments.
The majority (n = 35) of the studies included in the review used a quantitative methodology in which either a self-assessment instrument or knowledge-based testing was implemented. The remaining studies employed either a qualitative methodology (n = 3) or a mixed-methods design (n = 2) in the digital competence test validation and evaluation process. Sixteen of the analyzed studies used a multiple-choice test as the instrument for digital competency assessment and are categorized as MC (Table 1). In most cases, a simple computerized questionnaire was used (n = 14), but in two cases a dedicated digital competence assessment tool was developed [17,45]; further investigation showed that these were not open-access tools. Fourteen of the studies included both multiple-choice and interactive tasks (MC-INT) in the competency evaluation instrument, meaning that participants had to interact with a variety of tasks to provide a holistic view of their level of digital competence. Four of these cases included the development of a technological platform as part of the research. The literature revealed that, in task and item design, it is important to include multiple scales in item development in order to assess a variety of skills.
Several studies used factor analysis as a basis for item development to establish structural validity. The researchers noted that, although the number of participants was sufficient for factor analysis, it is not appropriate to use the same dataset for the confirmatory analysis, and a replication of the study was therefore advised [40]. This was mentioned in multiple studies, which suggests that, although most studies include a representative sample, they do not include a separate test group for assessing instrument reliability. One of the papers implemented item response theory (IRT) to measure respondents’ ability to deal with the given information, interpret it, and act on it; a methodology for analyzing the validity of the items produced was also introduced. The results indicated that, in most cases, the items fall into three levels of difficulty (easy, medium, difficult), with most items at the medium level. In the item construction stage, the items were based on a conceptual model with two main dimensions: cognitive and critical, and creative. Based on the framework and conceptual model used, a matrix was produced to support the instrument design.
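To make the IRT step more concrete, the following hypothetical sketch approximates Rasch-style item difficulties from the proportion of correct responses and bins them into the easy, medium, and difficult levels mentioned above; it is not the instrument or analysis from the reviewed study, and the band thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary response matrix: rows = respondents, columns = test items
# (1 = correct / demonstrated, 0 = not). Real IRT software estimates ability and
# difficulty jointly; this only illustrates the item-difficulty side.
responses = rng.integers(0, 2, size=(200, 12))

p_correct = responses.mean(axis=0).clip(0.01, 0.99)   # proportion correct per item
difficulty = np.log((1 - p_correct) / p_correct)      # crude Rasch-style logit difficulty

def band(b):
    """Bin a logit difficulty into three levels (thresholds are illustrative)."""
    if b < -0.5:
        return "easy"
    if b > 0.5:
        return "difficult"
    return "medium"

for item, (p, b) in enumerate(zip(p_correct, difficulty), start=1):
    print(f"item {item:2d}: p={p:.2f}, difficulty={b:+.2f} ({band(b)})")
```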
Finally, we analyzed the included research based on the digital competence frameworks or alternative basis documents used. The majority of the studies (n = 30) used a localized competency model as the basis of the competence instrument. The localized models were based on national curricula or on international frameworks such as DigComp 2.0 and 2.1 by the European Commission [2] or the UNESCO ICT Competency Framework for Teachers [4]. In four of the studies, DigComp was used as the framework for developing the evaluation instrument [29,41]; in none of these instances did the research include the development of a digital competence tool. Two of the studies were based on the ISTE standards for teachers [16,25]. The use of alternative frameworks and models further shows that there is a lack of common understanding in describing digital competence, which often influences the instrument design and the digital competence assessment results. The analyzed research states that there is an opportunity to build wider-scale research on digital competences by basing instrument design on a common framework. One of the options presented is DigCompEdu [5], which could be used in understanding educators’ digital competence. At the same time, we can argue that localized competence models better describe the implementation context.

3.2. RQ2; RQ3—Overview of Current Trends and Challenges in Digital Competence Assessment in Higher Education

To address the second research objective, we looked into the development of digital competency assessment tools and tests. The analysis of the research revealed [16,20] that, until now, very little has been done to critically evaluate ICT literacy skills in higher education. It was therefore considered important to introduce and develop an innovative way to measure the level of such skills that would be internet-delivered and computerized. In one of the studies, the researchers set out to develop a tool to measure students’ abilities to research, organize, and communicate information while using technology [23].
Given the need for scalable digital competence assessment methods, we focused on analyzing how the studies introduced validity arguments and what the main reliability evidence provided was. Several of the studies stressed that, in addition to the validity of the framework used in digital competence assessment, it is also important to validate the assessment instrument itself. The majority of the research used self-reporting or self-assessment tools as the main instrument for data collection; it was stated that the results give a good overview of the perceived level of digital competence in higher education. One example of digital competence assessment research carried out over the past fifteen years included a sample of 4048 university students and used the Educational Testing Service (ETS) online tool [23]. The ICT literacy test included seven proficiency categories reflecting problem-solving and critical-thinking aspects: define, access, manage, integrate, evaluate, create, and communicate. The tool is designed for students to undertake information-handling tasks in simulated software, where tasks are separated into five- and fifteen-minute tasks and proficiency levels are scaled accordingly. The test was divided into four sections: the first covered demographics and academic performance, the second was a self-assessment test, the third was based on the simulated tasks, and in the final section participants filled out a form based on their experiences, the results of which were used in the validation process. The study used a fairly common approach to validating the ICT literacy assessment, administering the assessment and other measures to a sample drawn from the population of university students. Convergent validity was supported when the assessment scores correlated with other measures related to ICT literacy, and discriminant validity when the scores did not correlate with unrelated measures [23].
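The convergent/discriminant logic described above can be illustrated with a minimal correlation check; the sketch below uses simulated data and invented variable names, not the ETS dataset.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300  # hypothetical number of students

ict_score = rng.normal(50, 10, n)                  # scores from the ICT literacy assessment
related = ict_score * 0.6 + rng.normal(0, 8, n)    # measure expected to relate (e.g., self-reported ICT use)
unrelated = rng.normal(0, 1, n)                    # measure expected not to relate

r_convergent = np.corrcoef(ict_score, related)[0, 1]
r_discriminant = np.corrcoef(ict_score, unrelated)[0, 1]

print(f"convergent r = {r_convergent:.2f} (should be substantial)")
print(f"discriminant r = {r_discriminant:.2f} (should be close to zero)")
```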
Additionally, the literature revealed that, as digital technology develops quickly and by nature exists on many levels, it is a subject of constant debate. This is highlighted by the fact that, in most cases, it is difficult to assess knowledge of the use of specific technological tools or platforms, as these are almost always determined by the context and the users’ preferences [31]. In many cases, the authors recommend using a maximum of five proficiency levels, as more complex scales confused the participants and diverted attention from the task at hand, namely assessing one’s own digital competence [6].
Based on the reviewed studies, we identified that one of the biggest challenges is the reliability of the tools used and of the evidence collected, owing to the use of self-evaluation tests in which the interpretation of the statements is left to the individual and bias may appear. In most cases, this means that the research is not repeatable, as the results for the same sample are subjective in nature [25,29,54]. In the context of this review, it also highlights that the evaluation and development of digital competence are, in the majority of cases, set in a formal educational setting and among small groups rather than at the organizational level. This can be related to the fact that most analyzed proposals aimed to gain a wider understanding of digital competence in the area of research rather than to develop and validate a digital competency assessment instrument.
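One commonly reported piece of reliability evidence for such self-assessment scales is internal consistency, for example Cronbach’s alpha. The sketch below computes it for a hypothetical Likert-item matrix; it illustrates the general technique rather than an analysis performed in any of the reviewed studies.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert self-assessment responses (rows = respondents, columns = items).
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(250, 1))                                # shared "true" level per respondent
responses = np.clip(np.round(latent + rng.normal(0, 0.7, size=(250, 8))), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```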

4. Discussion

Digital competence assessment processes, instruments, and tools have been the focus of international research since the start of the rapid development of technologies. The analyzed research revealed that self-assessment instruments are most commonly used for digital competence assessment. This can be related to the fact that the analyzed proposals aim to gain an understanding of digital competence in their area of research rather than to develop and validate a digital competency assessment instrument for universal use. This is also evident in the multitude of frameworks used when designing the assessment instruments. The European Commission Joint Research Centre (JRC) has focused its efforts on mapping and understanding the digital competence assessment of citizens and educators [2,5]. Future research should focus on using various digital competency models, frameworks, and strategies to implement similar competence evaluation instruments and to identify common characteristics. It is also important to analyze the suitability of digital competence assessment instruments with respect to field-specific concepts. Understanding the concept of digital competence is a challenge and, thus, digital competence assessment should not be addressed with a single type of evaluation instrument. To ensure the validity of the instrument and the value of the results, it is important to consider other assessment approaches and to implement multi-level competence assessment.
The authors of the reviewed studies bring to light that, although there is a significant number of theoretical approaches to digital competency assessment, including their conceptual bases, a more flexible and adaptable theoretical approach is needed in order to compare data and independently assess digital competence, whether for educational settings or for personal development.
Digital competence is central to several policy documents and national strategies as a key competence and future skill in a knowledge-intensive economy, supporting the digital transformation of society. Due to the COVID-19 pandemic, digital competence has become even more important for understanding the use of digital technologies in educational settings [55]. Additionally, we need to understand educators’ digital competence in order to support their professional development and, consequently, the quality of education as it migrates towards more online teaching and learning.
The research revealed that the connection between emerging technologies and educators’ barriers to integrating technology is evident and requires deeper analysis. The analyzed research showed that, in multiple cases, the focus was on the development of digital competence rather than on its evaluation, with competency assessment included in the process only as a by-product. It is important to include digital competence assessment as the starting point of digital competence development when designing a holistic and systematic approach.
As the focus of digital competency assessment has been on general education, it is vital to provide adaptable solutions for digital competency assessment in higher education based on the results of current studies. This was similarly stated by Zhao, Pinto Llorente, and Sánchez Gómez [11], who suggest that there is little knowledge of how digital competence is embedded in teaching and learning in higher education. This underlines the importance of understanding the specifics and scope of where digital competence assessment is implemented, as well as the digital competence needs. In order to provide high-quality professional development in higher education, we must consider field-specific digital competence and how the development of such competence can be supported across curricula when implementing competence assessment and development in teaching strategies and teacher training. Furthermore, supporting the assessment of field-specific digital competence requires an understanding of such competence and its related constructs. The failure to develop effective and scalable methods for digital competence assessment demands the adoption of a methodological framework that would support a more universal methodological approach to digital competence assessment. One suitable framework is evidence-centered assessment design, which has previously been used for evaluating more traditional learning outcomes but nevertheless has the potential to be included in the evaluation of digital competence. The evidence-centered framework allows designers both to clearly articulate design goals and decisions and to analyze and process large amounts of data, which is imperative in digital competency assessment. Furthermore, the evidence-centered framework draws attention to how the data are collected and whether the analysis process is coherent with the purposes the assessment is intended to address [56].
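As a purely illustrative sketch of how evidence-centered design structures an assessment, the fragment below encodes the three core ECD models (student, evidence, and task) as simple data classes and checks that the task can actually elicit the evidence the claim requires; the claim, observables, and task are invented examples, not drawn from the reviewed studies.

```python
from dataclasses import dataclass

@dataclass
class StudentModel:
    claim: str                 # what we want to be able to say about the learner

@dataclass
class EvidenceModel:
    observables: list[str]     # behaviors/work products that count as evidence
    scoring_rule: str          # how observables are turned into a score

@dataclass
class TaskModel:
    prompt: str                # the situation presented to the learner
    produces: list[str]        # observables this task can elicit

# Invented example: assessing one DigComp-style competence area.
student = StudentModel(claim="Can evaluate the reliability of online information sources")
evidence = EvidenceModel(
    observables=["sources cross-checked", "credibility criteria named"],
    scoring_rule="1 point per criterion applied correctly, max 4",
)
task = TaskModel(
    prompt="Given three web pages on the same topic, rank them by credibility and justify the ranking",
    produces=["sources cross-checked", "credibility criteria named"],
)

# A design check in the spirit of ECD: the task must elicit the evidence the claim needs.
assert set(evidence.observables) <= set(task.produces)
```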
Furthermore, one of the principal issues in digital competence assessment is the scale of the research completed so far. The necessity of digital competency assessment is mostly recognized in formal educational settings, which consequently gives only a partial overview of the situation. The studies also raise the size of the samples used and the reliability of the sampling methods, which in most cases involve either a random selection of participants or selection based on geographical distribution. In order to draw more ground-breaking conclusions, future research must take a more comprehensive approach and make research-based decisions already in the research design process.

5. Conclusions

It is important to state the complexity of the topic, as the field of competency assessment includes multiple layers. The reviewed research highlights one of the biggest issues in digital competence assessment: the predominance of quantitative studies using self-assessment tools. Furthermore, the analyzed research reveals a lack of qualitative research to accompany the results in analyzing the reliability and validity of the instruments and of the digital competence assessment process. As most of the research applies self-assessment instruments for digital competence assessment, it would be useful to look into alternative assessment approaches, including summative, formative, and diagnostic assessment. Self-assessment is often one-dimensional, meaning there is little possibility of understanding why and how students and educators in higher education approach digital competence self-assessment. Thus, it is important to embed authentic assessment approaches, such as portfolios, reflective journals, and observations, in order to understand educators’ perceptions of digital competence and to provide evidence of their technology-enhanced learning practices. Additionally, future research in the field of digital competency assessment should be based upon developing validated and adaptable guidelines for competency assessment processes. The future of digital competency assessment research should include a participatory design process in which the different focus groups are included from the outset in the development and validation of the digital competency model and the design of the assessment instrument.
The current literature review gives an overview of the constant development in the field of digital competence assessment. The analyzed proposals describe the state of digital competence assessment we face today and highlight the importance of systematic and repeatable processes in the context of digital competence assessment in higher education.

Author Contributions

Conceptualization, L.H.S. and K.T.; methodology, L.H.S. and M.L.; analysis and results, L.H.S., K.T. and M.L.; writing—original draft preparation, L.H.S.; writing—review and editing, L.H.S.; visualization, L.H.S.; supervision, K.T. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the length of the study. The analyzed papers used in this study are published on open-access platforms and accessible online.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bawden, D. Information and digital literacies: A review of concepts. J. Doc. 2001, 57, 218–259. [Google Scholar] [CrossRef] [Green Version]
  2. Carretero, S.; Vuorikari, R.; Punie, Y. DigComp 2.1: The Digital Competence Framework for Citizens. With eight proficiency levels and examples of use. Publ. Off. Eur. Union 2017. [Google Scholar] [CrossRef]
  3. International Society for Technology in Education. ISTE Standards. 2007. Available online: https://iste.org/iste-standards (accessed on 13 March 2018).
  4. UNESCO. UNESCO ICT Competency Framework for Teachers; UNESCO: Paris, France, 2011. [Google Scholar]
  5. Redecker, C.; Punie, Y. Digital Competence of Educators. DigCompEdu. 2017. [Google Scholar] [CrossRef]
  6. Van Deursen, A.; van Dijk, J. Measuring Digital Skills: Performance Tests of Operational, Formal, Informal and Strategic Internet Skills among the Dutch Population. ICA Conference. 2008. Available online: https://www.utwente.nl/en/bms/vandijk/news/measuring_digital_skills/MDS.pdf (accessed on 13 March 2018).
  7. Cukurova, M.; Mavrikis, M.; Luckin, R. Evidence-Centered Design and Its Application to Collaborative Problem Solving in Practice-based Learning Environments About Analytics for Learning (A4L); SRI International: London, UK, 2017. [Google Scholar]
  8. Ala-Mutka, K. Institute for Prospective Technological Studies. 2011. Mapping Digital Competence: Towards a Conceptual Understanding. Available online: https://pdfs.semanticscholar.org/6282/f40a4146985cfef2f44f2c8d45fdb59c7e9c.pdf%0Ahttp://ftp.jrc.es/EURdoc/JRC67075_TN.pdf%5Cnftp://ftp.jrc.es/pub/EURdoc/EURdoc/JRC67075_TN.pdf (accessed on 13 March 2021).
  9. Van Laar, E.; Van Deursen, A.J.A.M.; Van Dijk, J.A.G.M.; De Haan, J. The relation between 21st-century skills and digital skills: A systematic literature review. Comput. Hum. Behav. 2017, 72, 577–588. [Google Scholar] [CrossRef]
  10. Voogt, J.; Roblin, N.P. A comparative analysis of international frameworks for 21st century competences: Implications for national curriculum policies. J. Curric. Stud. 2012, 44, 299–321. [Google Scholar] [CrossRef] [Green Version]
  11. Zhao, Y.; Llorente, A.M.P.; Gómez, M.C.S. Digital competence in higher education research: A systematic literature review. Comput. Educ. 2021, 168, 104212. [Google Scholar] [CrossRef]
  12. Siddiq, F.; Hatlevik, O.E.; Olsen, R.V.; Throndsen, I.; Scherer, R. Taking a future perspective by learning from the past—A systematic review of assessment instruments that aim to measure primary and secondary school students’ ICT literacy. Educ. Res. Rev. 2016, 19, 58–84. [Google Scholar] [CrossRef] [Green Version]
  13. Gough, D.; Oliver, S.; Thomas, J. An Introduction to Systematic Reviews; SAGE Publications Ltd.: Newbury Park, CA, USA, 2012. [Google Scholar]
  14. Petticrew, M.; Roberts, H. Systematic Reviews in the Social Sciences: A Practical Guide; Wiley-Blackwell: Hoboken, NJ, USA, 2005. [Google Scholar]
  15. Chandler, J.; Higgins, J.P.; Deeks, J.J.; Davenport, C.; Clarke, M.J. Cochrane Handbook for Systematic Reviews of Interventions Handbook for Systematic Reviews of Interventions; Cochrane: London, UK, 2017. [Google Scholar]
  16. Põldoja, H.; Väljataga, T.; Laanpere, M.; Tammets, K. Web-based self- and peer-assessment of teachers’ digital competencies. World Wide Web 2012, 17, 255–269. [Google Scholar] [CrossRef] [Green Version]
  17. Vázquez-Cano, E.; Meneses, E.; García-Garzón, E. Differences in basic digital competences between male and female university students of Social Sciences in Spain. Int. J. Educ. Technol. High. Educ. 2017, 14. [Google Scholar] [CrossRef] [Green Version]
  18. Pinto, M.; Pascual, R.F.; Puertas, S. Undergraduates’ information literacy competency: A pilot study of assessment tools based on a latent trait model. Libr. Inf. Sci. Res. 2016, 38, 180–189. [Google Scholar] [CrossRef]
  19. Soomro, K.A.; Kale, U.; Curtis, R.; Akcaoglu, M.; Bernstein, M. Development of an instrument to measure Faculty’s information and communication technology access (FICTA). Educ. Inf. Technol. 2018, 23, 253–269. [Google Scholar] [CrossRef]
  20. Blayone, T.J.B.; Mykhailenko, O.; Kavtaradze, M.; Kokhan, M.; Vanoostveen, R.; Barber, W. Profiling the digital readiness of higher education students for transformative online learning in the post-soviet nations of Georgia and Ukraine. Int. J. Educ. Technol. High. Educ. 2018, 15, 37. [Google Scholar] [CrossRef] [Green Version]
  21. Kiss, G.; Gastelú, C.A.T. Comparison of the ICT Literacy Level of the Mexican and Hungarian Students in the Higher Education. Procedia Soc. Behav. Sci. 2015, 176, 824–833. [Google Scholar] [CrossRef] [Green Version]
  22. Pérez-Escoda, A.; Rodríguez-Conde, M.J. Digital literacy and digital competences in the educational evaluation. In Proceedings of the Third International Conference on Technological Ecosystems for Enhancing Multiculturality—TEEM’15, Porto, Portugal, 7–9 October 2015; pp. 355–360. [Google Scholar]
  23. Katz, I.R.; Macklin, A.S. Information and communication technology (ICT) literacy: Integration and assessment in higher education. J. Syst. Cybern. Inf. 2007, 5, 50–55. [Google Scholar]
  24. Basque, J.; Ruelland, D.; Lavoie, M.-C. A Digital Tool for Self-Assessing Information Literacy Skills. In Proceedings of the World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Quebec City, QC, Canada, 15 October 2007; pp. 6997–7003. Available online: http://biblio.teluq.uquebec.ca/portals/950/docs/pdf/digital.pdf%0Ahttp://www.editlib.org/p/26893 (accessed on 13 March 2021).
  25. Guo, R.X.; Dobson, T.; Petrina, S. Digital Natives, Digital Immigrants: An Analysis of Age and Ict Competency in Teacher Education. J. Educ. Comput. Res. 2008, 38, 235–254. [Google Scholar] [CrossRef]
  26. Peeraer, J.; Van Petegem, P. ICT in teacher education in an emerging developing country: Vietnam’s baseline situation at the start of ‘The Year of ICT’. Comput. Educ. 2011, 56, 974–982. [Google Scholar] [CrossRef]
  27. Soh, T.M.T.; Osman, K.; Arsad, N.M. M-21CSI: A Validated 21st Century Skills Instrument for Secondary Science Students. Asian Soc. Sci. 2012, 8, 38. [Google Scholar] [CrossRef] [Green Version]
  28. Wong, S.L.; Aziz, S.A.; Yunus AS, M.; Sidek, Z.; Bakar, K.A.; Meseran, H.; Atan, H. Gender Differences in ICT Competencies among Academicians at University Putra Malaysia. Malays. Online J. Instr. Technol. MOJIT 2005, 2, 62–69. Available online: http://myais.fsktm.um.edu.my/1648/ (accessed on 13 March 2021).
  29. Evangelinos, G.; Holley, D. Developing a Digital Competence Self-Assessment Toolkit for Nursing Students. Eur. Distance E Learn. Netw. EDEN 2014, 1, 206–212. Available online: http://www.eden-online.org/publications/proceedings.html (accessed on 13 March 2018).
  30. Costa, F.A.; Viana, J.; Cruz, E.; Pereira, C. Digital literacy of adults education needs for the full exercise of citizenship. In Proceedings of the 2015 International Symposium on Computers in Education, SIIE 2015, Setubal, Portugal, 25–27 November 2015; IEEE: Piscataway, NJ, USA, 2016; pp. 92–96. [Google Scholar] [CrossRef]
  31. Maderick, J.A.; Zhang, S.; Hartley, K.; Marchand, G. Preservice Teachers and Self-Assessing Digital Competence. J. Educ. Comput. Res. 2016, 54, 326–351. [Google Scholar] [CrossRef]
  32. Tolic, M.; Pejakovic, S. Self-assessment of digital competences of Higher Education professors. In Proceedings of the 5th International Scientific Symposium Economy of Eastern Croatia—Vision and Growth, Osijek, Croatia, 2–4 June 2016; pp. 570–578. [Google Scholar]
  33. Heerwegh, D.; De Wit, K.; Verhoeven, J.C. Exploring the Self-Reported ICT Skill Levels of Undergraduate Science Students. J. Inf. Technol. Educ. Res. 2016, 15, 019–047. [Google Scholar] [CrossRef] [Green Version]
  34. Cazco, G.H.O.; González, M.C.; Abad, F.M.; Mercado-Varela, M.A. Digital competence of the university faculty. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2–4 November 2016; pp. 147–154. [Google Scholar] [CrossRef]
  35. Almerich, G.; Orellana, N.; Suárez-Rodríguez, J.; Díaz-García, I. Teachers’ information and communication technology competences: A structural approach. Comput. Educ. 2016, 100, 110–125. [Google Scholar] [CrossRef]
  36. Turiman, P.; Osman, K.; Wook, T.S.M.T. Digital age literacy proficiency among science preparatory course students. In Proceedings of the 2017 6th International Conference on Electrical Engineering and Informatics (ICEEI), Langkawi, Malaysia, 25–27 November 2017; pp. 1–7. [Google Scholar]
  37. Corona, A.G.-F.; Martínez-Abad, F.; Rodríguez-Conde, M.-J. Evaluation of Digital Competence in Teacher Training. TEEM 2017, 2017, 1–5. [Google Scholar] [CrossRef]
  38. Casillas, S.; Cabezas, M.; Ibarra, M.S.; Rodríguez, G. Evaluation of digital competence from a gender perspective. In Proceedings of the 5th International Conference on Mobile Software Engineering and Systems, ACM, Cádiz, Spain, 18–20 October 2017; p. 25. [Google Scholar]
  39. Deshpande, N.; Deshpande, A.; Robin, E. Evaluation of ICT skills of Dental Educators in the Dental Schools of India—A questionnaire study. J. Contemp. Med Educ. 2016, 4, 149. [Google Scholar] [CrossRef]
  40. Türel, Y.K.; Özdemir, T.Y.; Varol, F. Teachers’ ICT Skills Scale (TICTS): Reliability and Validity. Cukurova Univ. Fac. Educ. J. 2017, 46, 503–516. [Google Scholar] [CrossRef]
  41. Instefjord, E.J.; Munthe, E. Educating digitally competent teachers: A study of integration of professional digital competence in teacher education. Teach. Teach. Educ. 2017, 67, 37–45. [Google Scholar] [CrossRef]
  42. Guzmán-Simón, F.; Jiménez, E.G.; López-Cobo, I. Undergraduate students’ perspectives on digital competence and academic literacy in a Spanish University. Comput. Hum. Behav. 2017, 74, 196–204. [Google Scholar] [CrossRef] [Green Version]
  43. Alam, K.; Erdiaw-Kwasie, M.O.; Shahiduzzaman; Ryan, B. Assessing regional digital competence: Digital futures and strategic planning implications. J. Rural Stud. 2018, 60, 60–69. [Google Scholar] [CrossRef] [Green Version]
  44. Belda-Medina, J. ICTs and Project-Based Learning (PBL) in EFL: Pre-service Teachers’ Attitudes and Digital Skills. Int. J. Appl. Linguist. Engl. Lit. 2021, 10, 63–70. [Google Scholar] [CrossRef]
  45. Moreno-Fernández, O.; Moreno-Crespo, P.; Hunt-Gómez, C. University Students in Southwestern Spain Digital Competences. SHS Web Conf. 2018, 48, 01012. [Google Scholar] [CrossRef]
  46. Lopes, P.; Costa, P.; Araujo, L.; Ávila, P. Measuring media and information literacy skills: Construction of a test. Communications 2018, 43, 508–534. [Google Scholar] [CrossRef]
  47. Napal Fraile, M.; Peñalva-Vélez, A.; Mendióroz Lacambra, A. Development of Digital Competence in Secondary Education Teachers’ Training. Educ. Sci. 2018, 8, 104. [Google Scholar] [CrossRef] [Green Version]
  48. Bartol, T.; Dolničar, D.; Podgornik, B.B.; Rodič, B.; Zoranović, T. A Comparative Study of Information Literacy Skill Performance of Students in Agricultural Sciences. J. Acad. Libr. 2018, 44, 374–382. [Google Scholar] [CrossRef]
  49. Güneş, E.; Bahçivan, E. A mixed research-based model for pre-service science teachers’ digital literacy: Responses to “which beliefs” and “how and why they interact” questions. Comput. Educ. 2018, 118, 96–106. [Google Scholar] [CrossRef]
  50. Tondeur, J.; Aesaert, K.; Prestridge, S.; Consuegra, E. A multilevel analysis of what matters in the training of pre-service teacher’s ICT competencies. Comput. Educ. 2018, 122, 32–42. [Google Scholar] [CrossRef]
  51. Saxena, P.; Gupta, S.K.; Mehrotra, D.; Kamthan, S.; Sabir, H.; Katiyar, P.; Prasad, S.S. Assessment of digital literacy and use of smart phones among Central Indian dental students. J. Oral Biol. Craniofacial Res. 2018, 8, 40–43. [Google Scholar] [CrossRef] [Green Version]
  52. Techataweewan, W.; Prasertsin, U. Development of digital literacy indicators for Thai undergraduate students using mixed method research. Kasetsart J. Soc. Sci. 2018, 39, 215–221. [Google Scholar] [CrossRef]
  53. Miranda, P.; Isaias, P.; Pifano, S. Digital Literacy in Higher Education: A Survey on Students’ Self-Assessment; Springer Nature; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2018; pp. 71–87. [Google Scholar]
  54. Serafín, Č.; Depešová, J. Understanding Digital Competences of Teachers in Czech Republic. Eur. J. Sci. Theol. 2019, 15, 125–132. [Google Scholar]
  55. König, J.; Jäger-Biela, D.J.; Glutsch, N. Adapting to online teaching during COVID-19 school closure: Teacher education and teacher competence effects among early career teachers in Germany. Eur. J. Teach. Educ. 2020, 43, 608–622. [Google Scholar] [CrossRef]
  56. Mislevy, R.J.; Almond, R.; Lukas, J.F. A brief introduction to evidence-centered design. ETS Res. Rep. Ser. 2003, 2003. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the literature screening process.
Table 1. Summary descriptions of studies included in the systematic review.
Author | Year | Country | Year (Data Collection) | Sample Size | Methodology | Competency Model | Assessment Method | Tool | Task/Item Design | Test Time
Katz & Macklin [23] | 2007 | USA | 2005 | 4048 | Quantitative | L | TEST | ETS | AUTH | 30 min; 2 × 60 min
Basque et al. [24] | 2007 | Canada | - | 35 | Quantitative | L | TEST | infoCompétences+ | MC-INT | NR
Guo, Dobson & Petrina [25] | 2008 | Canada | 2001–2004 | 2000 | Quantitative | F (ISTE) | Survey | Survey tool | MC | NR
Peeraer & Van Petegem [26] | 2011 | Vietnam | 2008–2009 | 783 | Quantitative | L | Questionnaire | - | MC | NR
Soh, Osman & Arsad [27] | 2012 | Malaysia | - | 760 | Qualitative | L (M-21CSI) | Interview + validity by factor analysis | M-21CSI | INT | NR
Wong & Cheung [28] | 2012 | China | 2011 | 640 | Quantitative | - | Summative assessment | - | INT | NR
Evangelinos & Holley [29] | 2014 | UK | - | 102 | Quantitative | F (DigComp) | Test (iterative Delphi-type survey) | Survey Monkey | MC | NR
van Laar et al. [9] | 2014 | UK, Netherlands | 2013–2014 | 630 | Quantitative | F (Internet Skills) | Questionnaire | - | MC-INT | NR
Põldoja et al. [16] | 2014 | Estonia | - | 50 | Mixed | F (ISTE) | TEST | DigiMina | MC-INT | NR
Costa et al. [30] | 2015 | Portugal | 2015 | 106 | Mixed | L | Survey/Interview | - | MC-INT | NR
Maderick et al. [31] | 2015 | USA | 2013 | 174 | Quantitative | L | Survey | - | MC | NR
Pérez-Escoda & Rodríguez-Conde [22] | 2015 | USA, Spain | - | 35,000 | Quantitative | F (The framework of 21st century skills) | TEST | ATCS21S | AUTH | 30 min + 10 min + 10 min
Kiss & Torres Gastelú [21] | 2015 | Mexico, Hungary | - | 720 | Quantitative | L | Questionnaire | - | MC | NR
Pinto, Fernández-Pascual & Puertas [18] | 2016 | Spain | 2013 | 195 | Quantitative | L | Questionnaire | IL-HUMASS | MC-INT | -
Tolic & Pejakovic [32] | 2016 | Croatia | 2016 | 1800 | Quantitative | L | Questionnaire | - | MC | NR
Heerwegh et al. [33] | 2016 | Belgium | - | 297 | Quantitative | L | Questionnaire | Survey tool | MC | NR
Cazco et al. [34] | 2016 | Ecuador | 2016 | 178 | Quantitative | L | Questionnaire | - | MC-INT | NR
Almerich et al. [35] | 2016 | Spain | - | 1095 | Quantitative | L | Questionnaire | - | MC | NR
Turiman, Osman & Wook [36] | 2017 | Malaysia | - | 240 | Quantitative | L (M-21CSI) | Survey | M-21CSI | MC-INT | NR
Corona et al. [37] | 2017 | Spain | - | 316 | Quantitative | F (DigComp) | TEST | Google Forms | MC-INT | NR
Casillas et al. [38] | 2017 | Spain | - | 580 | Quantitative | L | Questionnaire | - | MC-INT | -
Vázquez-Cano et al. [17] | 2017 | Spain | 2014–2016 | 923 | Quantitative | L | TEST | COBADI | AUTH | NR
Deshpande et al. [39] | 2017 | India | 2016 | 320 | Quantitative | L | Questionnaire | - | MC | NR
Türel et al. [40] | 2017 | Turkey | 2012–2013 | 304 | Quantitative | L | Questionnaire | - | MC | NR
Instefjord & Munthe [41] | 2017 | Norway | 2014 | 1381 | Quantitative | F (DigComp) | Questionnaire | - | MC | NR
Guzmán-Simón et al. [42] | 2017 | Spain | 2012–2014 | 786 | Quantitative | L | Survey | SurveyMonkey | MC | NR
Alam et al. [43] | 2018 | Australia | 2017 | 95 | Qualitative | L | Focus-group interviews (SWOT analysis) | - | INT | 50 min
Belda-Medina [44] | 2021 | Spain | - | 188 | Quantitative | L | TEST | - | MC-INT | NR
Moreno-Fernández et al. [45] | 2018 | Spain | - | - | Quantitative | L | TEST | COBADI | MC | NR
Soomro et al. [19] | 2018 | Pakistan | - | 322 | Quantitative | L | Survey | FICTA scale | MC-INT | NR
Lopes et al. [46] | 2018 | Portugal | 2011–2012 | 500 | Quantitative | L | TEST (item response theory) | Media and Information Literacy Test | AUTH | NR
Napal Fraile et al. [47] | 2018 | Spain | 2015–2018 | 44 | Qualitative | F (DigComp) | TEST (S-A) | Physical test | INT | 45 min
Bartol et al. [48] | 2018 | Serbia | - | 310 | Quantitative | L | ILT (information literacy) test | ILT | AUTH | 45 min
Güneş & Bahçivan [49] | 2018 | Turkey | - | 979 | Quantitative | L | Questionnaire | - | MC-INT | NR
Tondeur et al. [50] | 2018 | Belgium | - | 931 | Quantitative | L | Questionnaire | - | MC | NR
Saxena et al. [51] | 2018 | India | 2016 | 260 | Quantitative | L | Survey | - | MC-INT | NR
Techataweewan & Prasertsin [52] | 2018 | Thailand | 2015 | 1183 | Quantitative | L | Questionnaire | - | MC-INT | NR
Blayone et al. [20] | 2018 | Georgia, Ukraine | 2017 | 279 | Quantitative | F (GRCU Digital Competency Framework) | TEST | DCP (Digital Competency Profiler) | AUTH | NR
Miranda, Isaias & Pifano [53] | 2018 | Portugal | - | 177 | Quantitative | L | Questionnaire | - | MC-INT | NR
Serafín & Depešová [54] | 2019 | Slovak Republic | 2016 | 351 | Quantitative | L | Questionnaire | - | MC | NR
Note. NR indicates that the test time was not reported. In the Competency Model category, ‘F’ indicates that an international framework was used in the digital competence assessment process, while ‘L’ indicates that the research was based on a localized framework, model, curriculum, or strategy. In the Task/Item Design category, INT indicates tasks with interactive content, MC indicates multiple-choice tasks/items, MC-INT indicates that both multiple-choice and interactive content were used in the digital competence evaluation, and AUTH indicates that the evaluation was carried out in a completely authentic digital platform. In the Tool category, the technological solutions used are written out in full; these include both existing testing platforms and newly developed tools for digital competence assessment.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
