Digital Competences of Students of Library Studies: Comparison of Research Results for 2018–2020

This study focuses on the analysis of changes in the digital competence profile of students of Information and Library Studies at Masaryk University in Czechia. As a research tool, we used the DigComp self-assessment questionnaire, which students were asked to fill in after completing the course. Our research shows that students are insufficiently prepared for work as highly qualified information specialists. At the same time, we found that their competence profile remained very stable between 2018 and 2020. This finding indicates that students do not readily respond to new societal changes at the level of individual competences. The research results are based on data collected from 152 students during three runs of a compulsory course at the university. Information and Library Studies students have long perceived their competences to be strongest in the domains of information and data literacy and communication and collaboration. Programming is the weakest of the competences, followed by solving technical problems and engaging in active citizenship through digital technologies. These findings can be used to innovate the curriculum to meet the demands placed on digitally competent information workers.


Introduction
The work of a librarian or information scientist is increasingly associated with digital technologies. A future librarian can therefore be expected to be able to work with modern technologies and use them to change individual processes involved in the practice of an information specialist. Beverley et al. [1] presented 10 areas of work of an information specialist in the context of healthcare. The authors claim that an information specialist should be a project leader and manager; an expert in finding information and working with resources, providing assistance in working with documents (and gaining access to them); should be able to critically evaluate information; work with data sources; have the ability to synthesize diverse data; create usable and comprehensible reports and disseminate their conclusions within the institution.
Such a delimitation of the role of the information specialist is certainly possible, but it raises two significant issues. The first is that Beverley et al. [1] did not describe the role of the information specialist as a profession, but in terms of a certain general requirement for any scientist that, in addition to the characteristics mentioned above, may be supplemented with other attributes pertaining to a particular professional domain. In this sense, the notion of scientific literacy (or, according to ALA, information literacy) completely replicates the role of the information specialist [2,3]. The second issue is that the study referred to above is anchored in only one specific discourse, namely the medical discourse, which does not easily allow for an extension of the role of the information specialist. At the same time, the approach represented by Beverley et al. does not explicitly reflect technology as a catalyst for changes and shifts in the field.
Miller [4] has also adhered to the concept of the information specialist as a kind of "meta-scientist", emphasizing that it is information specialists who can support scientific work in libraries. She argues that they should take an active part in education and build partnerships between information specialists and specialised scientists. In this respect, the author approaches the library as a service in an academic institution, which is developed through information specialists.
Engerer and Sabir [5] have distinguished three different discourses: information specialists, librarians facilitating research, and iHumanists. Those belonging to the last category are perceived as full-fledged members of the professional community as autonomous scientists. The authors draw attention to the fact that while the first two discourses are traditional, they do not reflect the real needs of the current state of the world. iHumanists or information humanists have knowledge in three basic domains: the first concerns the ability to comprehensively analyse complex interconnected systems, the second area focuses on a certain methodological proficiency and the third focuses on technological skills.
In their study, Engerer and Sabir [5] placed a strong emphasis on what we call digital competences in our study. This is evident at the analytical level, at which a sound knowledge of technologies and the ability to place them in a broader social and scientific framework is essential for iHumanists, and also in the third dimension, which focuses on what we might call "engineering" skills; i.e., specific knowledge, procedures and the ability to solve a specific problem with the use of certain technologies.
As indicated above and confirmed by other studies [6,7], librarianship, or the work of information specialists embedded in it, requires strongly developed digital competences, such as a broad ability (and willingness) to use technology to solve a variety of problems ranging from civic needs and links to eGovernment to the support of science.
The aim of our study was to analyse and reflect on the digital competences of students of the Information and Library Studies (LIS) Bachelor's degree programme at the Faculty of Arts at Masaryk University in Brno (Czechia). In our research, we applied the digital competences model (DigComp) defined by the European Union for citizens [8]. There are several reasons why we used this model. In general, DigComp is currently probably the most recognised and highest-quality model of digital competences that is widely applicable and usable. In the Czech context, it is also used, for example, by the Ministry of Labour and Social Affairs and other state institutions.
The second motivation for using this framework was the very role of libraries, which we understand as community-centred and socially engaged [9][10][11] and which is also often associated with digital services. In order for a librarian to fulfil their social role, it is necessary to be a citizen in the full sense of the word but also be digitally competent, which is the direction that we pursued in our research.

Definition of the Concept of Digital Competence
Janssen et al. [12] pointed out the lexical issue concerning the use of the notion of digital literacy by many authors, while in the Scandinavian environment [13,14], the notion of competence with an emphasis on a broader educational concept is preferred. Digital competences do not exist on their own, and it is problematic to evaluate them separately, because they form an interconnected whole with a broader personal and educational background. Janssen et al. [12] presented a model that uses thematic blocks: (1) competence as a tool of everyday life; (2) the ability to communicate and cooperate through ICT; (3) the ability to work with information; (4) the ethical and legal dimension; (5) a certain sociological understanding of digital competences and (6) the ability to learn and develop through ICT.
Although we applied the DigComp model [8] in our analysis, Janssen's position is crucial for us. On the one hand, it determined our research methods, but above all, it also frames the overall design of the curriculum in the field of digital competences for our students.
Although some authors argue that digital competences are tied to a profession [15,16] or only to work on a computer [17], our view is that to consider competences as something that serves the labour market can be short-sighted. In our library context, we see as more important their integration either into the role of iHumanist [5] or into the level of the civic competence profile.
The DigComp framework [8] distinguishes 21 competences, which are divided into five basic domains. As a whole, these competences are aimed at enabling fully-fledged digital citizenship. However, citizenship is not understood individually. It is clear from the structure and scaling of the competences that the framework moves towards a social dimension: competences are something that helps both the individual and their surroundings; at the two highest levels, there is even the expectation of changing the nature of work as such and the approach to certain problems in society at large or in the field of work activity. In this respect, we can note that the DigComp framework shows a rare alignment with the model presented by Engerer and Sabir [5] and represents a good springboard for its further development and possibly also its evaluation.
The DigComp framework distinguishes the following five dimensions of competence (individual competences are listed in the results tables): information and data literacy, communication and collaboration, digital content creation, safety and problem solving. It is clear from looking at this list that this is indeed a relatively comprehensive approach to what every European citizen (and from our perspective, also every librarian, information specialist or iHumanist) should be able to do (and in what context).
The relationship between the concepts of information literacy and digital literacy is not clearly defined. Numerous studies and competence frameworks, such as UNESCO's MIL [18], perceive information literacy as the superior concept, which includes digital competences [19] or other technical skills [20]. On the contrary, DigComp [8], which we follow, and other studies [21,22] tend towards the opposite view, namely treating digital competence as the general concept of which information literacy is an integral part. In this research, we lean towards this second (DigComp) variant.

Self-Assessment as a Method of Evaluating Digital Competences
Research into students' digital competences is now commonplace [23][24][25]. There are several ways to approach it; one can encounter test-based or practice-oriented tasks as well as self-assessment [26][27][28]. Digital competences can be thought of as transferable competences [12]. Self-evaluation is therefore a crucial measurement tool, as it reveals how people perceive their capabilities in specific life situations. In some cultures, research of this kind can lead to an overestimation of competences [29,30], but this is not a general problem; in our case, overestimation of strengths does not seem to occur, and rather the opposite is found [31]. As Aesaert et al. [32] point out, as ability increases, so does underestimation of oneself. Our study population of university students belongs to this group prone to underestimation. Measurement of digital competence according to the DigComp framework can be found relatively frequently in the current research discourse [33][34][35].
The advantages of self-assessment of transferable competences include the fact that self-assessment leads to self-reflection and possible further development [24,25]. The degree of self-esteem also determines how a person works with competences in everyday life and to what extent they are able and willing to rely on them to solve specific problems. Although self-esteem can be affected by many factors, it is something that each person subconsciously calls upon whenever they think about their ability to solve a problem. Our research focuses on university students in the field of librarianship and information studies [5].

Methods
Student self-evaluation is a well-established concept in the pedagogical literature and has mostly been associated with positive effects on the educational process and the personal development of the student [36][37][38][39][40]. At the same time, we agree with Stallings and Tascoine [39], who argue that self-assessment is a key tool for personal development and reflection, even though the way students rate themselves is not unproblematic. In our research, for example, we identified a significant degree of systematic self-underrating among the students, which, in our opinion, has a cultural tradition. Students who were able to complete tasks requiring at least level 5 without major problems oscillated around level 3 in their self-assessment, which clearly results in a discrepancy.
In our research, we used a self-evaluation questionnaire in which students assigned themselves to individual levels of digital competences as described by the DigComp framework [8]. We did not use a simple numerical scale; in each case, we provided a specific description of the level in order to reduce the impact of individual students' interpretations of the levels. The DigComp model defines 21 competences with 8 levels of difficulty (from foundation to highly specialised). In our research, we added level 0, which indicates that the student does not have the given competence at all. This level was introduced in response to the fact that DigComp level 1 already assumes some (albeit limited) degree of competence.
Given that the research took place in the Czech environment (i.e., with Czech-speaking students), it was necessary to localise the DigComp framework, including the description of individual items. A translation of the framework can be found in a monograph on digital competences [41]. In Czech, there is an official translation of only specific competences, published by the Ministry of Labour and Social Affairs, which has been produced with the decisive participation of the author of the present research. Nevertheless, we perceive the problem of localisation as significant, because linguistic shifts in meaning can affect the understanding of certain concepts.
The research we conducted was designed in such a way that students completed a standardised course (with the same study materials, test questions, assignments, etc.), but the lectures within the course varied. The lectures were given by the same teacher, so the influence of this parameter should be limited. The course is compulsory for Bachelor's degree students of Information and Library Studies.
At the end of the semester, after submitting all the necessary assignments and passing the tests, students filled in a self-evaluation questionnaire. This was a mandatory part of the course, but its content was not reflected in the evaluation and its primary function was to help students to self-evaluate or analyse the areas in which they could further develop and improve. The questionnaire was completed in the university information system. Students were informed that anonymised results were to be published as part of the research.
The self-evaluation questionnaire was filled in by students in the university's internal information system. Response time was not limited. Students always saw the level number together with its description. Individual competences in the questionnaire were ordered according to DigComp 2.1, and individual levels were ordered from 0 to 8.
The research allowed us to compare results from the Autumn 2018, Autumn 2019 and Spring 2020 semesters. The last course (Spring 2020) was specific in that, although it had the same content, it was included in the new Bachelor's degree accreditation, which had an impact both on the semester of completion (it was moved from the third semester to the second) and on the context: students in this semester lacked the preceding course focused on algorithmisation, which subsequently affected their scores in programming. The code of the course also changed. In presenting the results, however, we follow only the scheme indicating the courses by semesters.
The dimension of self-evaluation is essential for us in this research-it is not an exact score that is important for the performance of job tasks, but rather the personal belief that the student has mastered the activity or competence at a certain level. Our position is that self-evaluation provides a certain indication with regard to the possibility of a student's involvement in specific areas of work, whether the position of a librarian, an information specialist, iHumanist or any other role is concerned.
A specific limit to the representativeness of the data may be that the course in which the data were collected is mandatory for students. On the other hand, self-evaluation was not part of the course assessment, so students had no reason to intentionally skew the results in their favour. We do not have a tool for determining the external validity of the data. However, from the results of the knowledge test at the end of the course and the continuous tasks, it can be estimated that the students' absolute level of competences is around level 5, one point higher in selected areas (especially competences related to information literacy and online communication and cooperation) or, exceptionally, one point lower (programming). Thus, it can be estimated that the self-assessment results mimic the students' competency profile well but are not "calibrated". Students know what they can or cannot do, but they cannot determine an adequate level of competence according to the descriptions of the DigComp framework [5].
A further limit of the research was the short data collection period of three years (this is not long-term research) and a small sample, given the limited number of students in the study programme (see Table 1). Our research is thus a probe into a specific student population in a particular curriculum, rather than generalisable research of a quantitative nature. Our research sample consisted of students of the Information and Library Studies Bachelor's programme at the Faculty of Arts at Masaryk University in Brno. In the Autumn 2018 semester, there were 41 students (of whom 65% were women), in Autumn 2019 there were 47 students (72% women) and in Spring 2020 there were 64 students (75% women). In all cases, the sample included students of both full-time and combined forms of study. In total, we analysed data collected from 152 students over the course of three years. We use normalised data; therefore, the different numbers of respondents in individual years are not problematic.
Our research sought to answer two research questions:
1. Is there any difference between the competence profile of students of teacher training, as described by Napal Fraile et al. [42], and the results of students of information and library studies?
2. Are the digital competences of students of information and library studies gradually increasing in line with the development of the information society?

Results
We used a scale of competences from 0 to 8 (i.e., a nine-point scale) in our research. The calculation of the average value assumes a linear character of the competence scale; in other words, the individual frequencies listed in the tables are multiplied by the level and subsequently normalised. As we stated in the methodological part of our study, the results seem to be affected by a relatively large statistical error related to the students' limited ability to evaluate themselves. However, what the results demonstrate quite clearly is the distribution of knowledge among individual competences and their domains.
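The aggregation described above (frequency multiplied by level, then normalised by the number of respondents) can be sketched as follows. The response counts used here are hypothetical illustrations, not the study's actual data.

```python
def mean_level(freq_by_level):
    """Frequency-weighted mean of self-assessed levels.

    freq_by_level: list of 9 counts, where the index is the DigComp
    level (0-8; level 0 was added by the authors for 'no competence').
    """
    total = sum(freq_by_level)
    weighted = sum(level * count for level, count in enumerate(freq_by_level))
    return weighted / total

# Hypothetical response counts for one competence (levels 0..8):
freq = [2, 3, 6, 10, 9, 7, 3, 1, 0]
print(round(mean_level(freq), 2))  # average self-assessed level
```

Per-domain averages and the deviations reported in the tables (e.g., "0.51-0.71 points above the others") would then follow from comparing such means across competences.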
The data are presented in summary in the figures below. Figures 1-3 provide a view of the distribution of competences according to student responses by course. We have used box plots to illustrate the competency distribution in the sample as a whole. Between Fall 2019 and Spring 2020, we can see a significant shift in the subjective reflection of digital competency levels: there is a decrease in the number of responses corresponding to complete beginners (levels 0 and 1) and an increase in the representation of advanced users (levels 5 and 6). The data do not provide an answer to whether this is a broader trend or whether (concerning the Fall 2018 data) this is just a drop in competency for students in the Fall 2019 semester (see Figure 4).
The distribution of competences in the student sample is normal, as indicated by the values χ2 (Fall 2018) = 1.13, χ2 (Fall 2019) = 4.69 and χ2 (Spring 2020) = 1.25.

Table 2 shows quite clearly that students rated themselves highest in information and data literacy, with a score exceeding the others by 0.51-0.71 points. The results for the ability to cooperate and communicate were also above average (0.22-0.31). The result for competences in the field of safety is interesting in that it corresponds well with the average value (variance of −0.14 to −0.17). Digital content creation and problem solving were evaluated very similarly, and students felt weaker than average in these domains. Table 2 also shows the highest- and lowest-rated of the evaluated competences for individual years.
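A normality check of this kind can be sketched as a chi-square goodness-of-fit test of the observed level frequencies against a normal distribution fitted to them. The binning and fitting choices below are our assumptions (the paper does not specify its procedure), and the counts are hypothetical.

```python
import math


def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))


def chi_square_vs_normal(counts):
    """Chi-square statistic comparing observed level counts (levels 0..8)
    with counts expected under a normal distribution fitted to the data."""
    n = sum(counts)
    levels = list(range(len(counts)))
    mu = sum(l * c for l, c in zip(levels, counts)) / n
    var = sum(c * (l - mu) ** 2 for l, c in zip(levels, counts)) / n
    sigma = math.sqrt(var)
    chi2 = 0.0
    for l, observed in zip(levels, counts):
        # probability mass of the bin [l - 0.5, l + 0.5) under the fitted normal
        p = normal_cdf(l + 0.5, mu, sigma) - normal_cdf(l - 0.5, mu, sigma)
        expected = n * p
        if expected > 0:
            chi2 += (observed - expected) ** 2 / expected
    return chi2


counts = [2, 3, 6, 10, 9, 7, 3, 1, 0]  # hypothetical level frequencies
print(chi_square_vs_normal(counts))
```

A small statistic (well below the critical value for the relevant degrees of freedom) is consistent with normality, which matches the low χ2 values reported above.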
In the case of content creation, this decline can be attributed to programming, which in the long run appears to be the weakest competence with the highest deviation from the average in negative values (0.69-2.05 below the average of the whole). The category of problem solving is the least homogeneous and clear-cut domain within the entire DigComp framework, which can impact the way students perceive and understand it.
The spring semester was affected by the partial need to switch to online teaching due to the COVID-19 pandemic, which may have had a negative effect on some competences. However, we cannot sufficiently analyse this effect from the data we collected. The decline in students' self-assessment in the Autumn 2019 semester is unclear, and we do not have a robust enough model to explain it. A poorly explained assignment or misinterpretation of the questionnaire seems unlikely, because the same model (textual description of the questionnaire, study materials, ongoing tasks, questionnaire format, instructions, etc.) was used as in the other two examined intervals. Above all, there is a significant drop in self-assessment at levels 5-7 compared to previous years, which could be related to a socio-psychological anomaly in self-assessment within this particular group of students, possibly indicating lower self-confidence. Because this finding emerged with a considerable time lag, it could not be used for further investigation. We see the group perception of the competence profile among the students themselves as a probable cause, which (due to the design of the whole course) probably has a strong influence on the result.

Table 3 indicates that students have difficulties with technically oriented competences, even in areas where this is contrary to expectation, such as copyright or identifying digital competence gaps, which are topics accentuated by the LIS (Library and Information Science) curriculum (not only in this course). Conversely, the highest scores point to three competences related to information and data literacy and also to elements directed towards online cooperation and communication in various forms. As we have already pointed out, when interpreting the results provided above, we need to take into consideration the systematic error associated with self-evaluation, specifically a systematic underrating of oneself.
Nevertheless, the results show relatively good consistency in most competences when monitoring deviations, which means that it is not possible to consider them a measurement error or to question their reliability, as the last two tables quite clearly demonstrate. Figure 5 demonstrates that the weakest digital competence in terms of self-perception is programming. This fact can be explained by the research being carried out at the Faculty of Arts. At the same time, we must emphasise that the field of librarianship and information science is computerised and that its digital transformation is unquestionable [5]. The development of programming skills must therefore become part of the curriculum, and students should, in addition to formal and informal support, also receive psychological intervention in this area. Students perceive the competence associated with programming as significantly more demanding than the other competences, which is certainly not the goal of the DigComp framework.

Analysis and Discussion
Fraile et al. [42] focused in their empirical study on the self-assessment of students of teacher training in the field of natural and technical sciences in the area of 21 competences defined by the DigComp framework. The data showed an extremely low level of perceived competences, which is probably related to the inability of students to assess themselves appropriately, but also to the fact that, in Spain, there are limited opportunities for training students in this area. The authors distinguished three levels of competence (low, medium, high). We would like to at least briefly compare our results with the results of this research.
Fraile et al. [42] also reported the worst results for the competence related to programming (over 75% of respondents rated it as low), followed by the integration of services and content and licenses and copyright. The research also showed a strong position of information and data literacy, which was, however, associated with a very small number of respondents who achieved a high level and with a very large variance. The remaining four competence areas were perceived relatively evenly (the strongest was the area of safety, which also had the highest number of excellent respondents), which is also not apparent from our research. The homogeneity of results for individual competences decreased with the semesters.
These results seem to clearly indicate that students of LIS cannot be easily compared with Spanish science teachers. Librarians are, naturally, stronger in the domain of information and data literacy, but also in the field of communication and collaboration tools. Here, we agree with Napal Fraile et al. [42] that a closer examination might reveal that greater differences occur during out-of-school learning than within a formal curriculum. As part of our course, we performed measurements before and after the semester in 2020, and the difference in average competence corresponds to an increase of 0.85 points. We can also perceive this result as favourable for possible further educational activities: it seems that targeted courses for the development of digital competences can improve these competences relatively quickly at the level of university studies, although we cannot claim that the result depends only on such a course, disregarding the overall impact of the curriculum and society at large.
If we consider possible changes at the curriculum level, we would argue for changes that to a large extent coincide with the recommendations of the above research focused on future teachers. We see it as necessary to focus on supporting hard technical skills: developing programming competences, integrating services and content, and solving problems through technology. If students improved in these three areas, there would be a significant shift in the entire competence profile, and subsequent shifts in other areas could be expected as well.
Self-directed learning [43] seems to be inaccessible to a large proportion of students, although it is strongly emphasised by the LIS curriculum. It is necessary to look for ways to strengthen it in this respect, both at the level of personal development and specific learning competences.
However, in relation to professional identity, we consider the development of competences included in the domain of information and data literacy to be essential. The fact that only 8-15% of students indicated the two highest levels in their self-assessment is definitely not optimal. This may suggest that students lack sufficient professional qualities, or that their professional self-confidence, as well as their ability to profile themselves as experts, is extremely low in this area.
As far as the structure of competences in the DigComp framework is concerned, there is a certain turning point in the proficiency levels-moving beyond level 4 signifies the ability not only to work independently but also to help others. Competence, therefore, includes the area of capacity as well as the dimension of obligation. The question is whether the students feel ready to accept the commitment element of competence. On the one hand, we consider competence as a personal skill but, on the other hand, as a commitment to be honoured.
At first glance, it is clear that, in comparison with the "ordinary citizen", the ability of students of Library and Information Studies to work with information can be expected to be substantially higher than in the general population. Moreover, previous research [43] has shown that these students consider working with information to be probably the most important competence, and they often reduce the other competences to a kind of "pendant". In our perspective, the goal of using these types of frameworks is not to create an "average user", but to analyse the areas in which respondents have the greatest problems and, on the other hand, the areas in which they excel.
Our results, together with the research presented by Napal Fraile et al. [42], raise another important question: the design of the DigComp model itself. It turns out that the model does not lead, at least for information specialists and teachers, to a consistent level of competence, and the defined levels, according to which we would explicitly expect scores of around 4–6, are not observed. The result is a widely accepted model [44][45][46][47] which, however, does not fulfil one of its key functions: enabling the adequate self-assessment of individuals who want to use it to learn something about their digital competences. Its general and universal character, probably intended to ensure good sustainability, has made it a very problematic tool in terms of producing scores that can be relied upon without knowledge of a broader research context.
Another topic that needs to be discussed is the presence of programming as a competence directed towards creation. Does an active citizen really have to be able to create new objects using code? Would it not be more appropriate to aim at the ability to apply simple algorithmisation to certain tasks, saving time by using a computer for routine work? Here, our research converges with the results of Napal Fraile et al. [42], although when we consider the competences of iHumanists [5], it is beyond any doubt that they would need a more thorough and more deeply technically oriented foundation.
The answer to the second research question is more difficult: we can see that the competence profile of students, with regard to their average competence, has been relatively stable, which may imply that during the three years of our research, no deep structural changes in the information society [48,49] occurred. In other words, the changes that took place over those three years were not found to have any significant effect on the competence profile of students.
At the same time, we did not observe any systematic increase in the level of competences: students reached the highest level in the Autumn 2018 semester, the first year of measurement. On the one hand, the information society is declared to be developing rapidly; on the other hand, students adapt to this kind of change only very slowly. A visible shift could probably be identified over longer time scales, however.
A question that our quantitative research cannot answer is how students approach self-assessment as such. It is possible that unlocking new possibilities and the ability to work with an increasing number of tools may lead to the Socratic attitude of "I know that I know nothing", because deeper skills and knowledge reveal a wider horizon of possibilities.
We suggest that the teaching of programming and algorithmic thinking should become not only a separate course (e.g., "Introduction to Programming"), which would undoubtedly be helpful, but above all a theme running across the curriculum. Students will develop a lasting programming competence if they learn to solve problems in their field-specific topics through programming. Programming must therefore not be confined to a single course but must appear in various forms throughout the curriculum. Hauck [45] argues that such an approach leads to active study and can be very effective. We believe that this approach could also help with other problematic competences.

Conclusions
The results presented in this study have not been published before, apart from the results for Autumn 2018. They represent a systematic probe, repeated over time, into the structure of digital competences according to the DigComp framework [8]. The ethical aspects of the research are unproblematic: we ensured a high degree of anonymity of the data, and the students were acquainted with the research design in advance.
Let us summarise the main research findings of our study:
• Self-evaluation is an important and interesting tool for digital competence research, for students as well as for curriculum developers and educators, as it allows observers to respond to the level of competences perceived by the students.
• There seems to be relatively clear agreement as to which areas are problematic for non-technical students (programming, integration of services and content, technical problem solving, copyright). These are supplemented by specific, often probably locally conditioned, weak competences, such as engagement in active citizenship through digital technologies in Czechia. On the other hand, we identified a strong emphasis on the area of information and data literacy and, naturally, field-specific strengths.
• Both the students in the research of Napal Fraile et al. [42] and the students in our research had difficulty with appropriate self-evaluation. Napal Fraile et al. [42] therefore applied a reduced three-level model; in our case, the level descriptors turned out to be a problematic point, most probably also due to a certain degree of vagueness.
• Our research revealed a relatively high degree of competence stability, in the sense that over the (fewer than) three years during which it was conducted, there were no significant changes in the competence domains in which students achieved above-average or below-average scores. A high degree of stability was also found in the competences that showed extreme values.
• In our sample, the variance in students' self-evaluation also gradually increased: students are becoming an increasingly less homogeneous group in terms of their digital competences, although education seems to at least partially reduce this variance.
• A research challenge to be addressed by further studies is undoubtedly a more detailed examination of the process of student self-evaluation and the search for certain clusters of students with similar competence characteristics, which could have significant educational implications.
Funding: Technology Agency of the Czech Republic, ETA 2 programme; project "Platform for knowledge transfer: information literacy for high school students in an open collaborative virtual learning environment" (TL02000040).

Institutional Review Board Statement:
Students were informed at the first face-to-face meeting about the research and were also continuously informed about how the data would be processed.
The course had a website with teaching materials where research information was provided as well.
Students who did not participate in the research were not penalised in any way during their studies. No specific informed consent forms were required for this type of research. We worked only with aggregated data, so that no individual student could be identified from the published data. The researcher was also the course teacher and therefore had access to all the students' learning outcomes based on their enrolment in the course. The data were not passed on to any third party.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The data were obtained manually from the university's closed information system.

Conflicts of Interest:
The authors declare no conflict of interest.