1. Introduction
The accelerating digital transformation of society has not only changed the competences required for effective teaching at all levels of education, but has also become a central element in achieving long-term sustainability goals, especially in the context of inclusive and equitable quality education [1,2,3]. In this context, digital competence has emerged as a key qualification for teachers that goes beyond mere mastery of technology to encompass pedagogical innovation, ethical awareness and the critical use of digital tools [4]. Frameworks such as the European DigCompEdu model emphasise that digital competence encompasses professional engagement, the sourcing and creation of digital resources, the harmonisation of digital teaching and learning, the use of technology for assessment, the empowerment of learners and the promotion of students’ own digital skills, thus contributing to more sustainable, inclusive and resilient education systems [2,4].
The digital transformation of higher education is not just a technological trend, but a crucial part of the sustainability vision emphasised in the global Sustainable Development Goals (SDG 4: Quality Education) and the European strategies for digital education [5]. The development of digital competences among higher education teachers contributes to more inclusive, accessible and adaptable learning environments that are resilient to disruptions such as pandemics or rapidly changing labour market demands [2,3,6]. In medical education, the aim is not only to improve pedagogical practices, but also to strengthen the digital readiness of future healthcare professionals, thereby supporting sustainable and high-quality healthcare in the long term [7,8].
In higher education, and particularly in the medical sciences, digital competences are critical to preparing students for increasingly technology-intensive clinical environments [8]. Research emphasises that effective digital teaching not only increases pedagogical flexibility and inclusivity, but also promotes deeper learning, critical thinking and student engagement [9]. However, studies consistently show that while many teachers have basic technical skills, there are still significant gaps in the pedagogical integration of digital technologies [10,11]. With the increasing integration of artificial intelligence (AI) tools such as ChatGPT into education, there is also a new demand for AI skills alongside broader digital competence [12,13].
In medical education, the situation is particularly complex. Medical teachers must not only master content knowledge and clinical expertise, but also integrate innovative digital methods such as telemedicine, simulation-based learning and AI-supported decision support systems into their pedagogical practice [8,14]. Despite the recognised importance of digital competence for medical teachers, existing evidence points to significant differences in competence levels, with many teachers expressing a need for structured training and institutional support [15,16]. Furthermore, it is becoming increasingly clear that digital competence is not static, but evolves with technological advances, including the ethical challenges and opportunities presented by AI, big data and immersive technologies [9,15].
Given these challenges and opportunities, it is critical to systematically assess the current state of digital competence of medical teachers to identify gaps and develop targeted strategies for professional development. While several international studies have examined digital competence in healthcare and education separately, there is little data that focuses specifically on the competences of medical teachers in a European context.
Aim of the Study
To address this gap, the present study was designed as a pilot cross-sectional study, aiming to explore the feasibility and relevance of assessing digital competences specifically among medical teachers. Conducted in Slovenia, the pilot focused exclusively on self-perceived competences to inform the design of a larger-scale research effort. The aim of the study was to investigate the strengths and weaknesses in the most important areas of digital competence and to identify priority areas for faculty development initiatives. The guiding research question was “What is the current level of self-perceived digital competence among medical teachers, and which areas require the most support for future professional development initiatives?”
3. Results
The mean age of the participants was 50.56 years (SD = 9.656), with a range of 35 to 73 years. The distribution of academic status among the participants was as follows: 54.2% assistant professors, 37.5% associate professors, and 8.3% full professors. Most university teachers had a positive attitude towards digital technologies in teaching: 58.4% stated that they used them either confidently or skilfully. The majority used digital tools frequently or always (60.4%), while only a minority said they rarely used them (10.4%). In addition, interest in learning about digital technologies was high, with 62.5% stating that they were actively or strongly motivated to improve their digital teaching methods.
Table 1 presents the medians, interquartile ranges (IQR), observed minimum and maximum values, and 95% bootstrap confidence intervals for the DCS-UT total score and each subscale in the pilot sample (n = 48).
Results show that median scores were highest for Digital literacy (Mdn = 57.0, 95% CI [48.0, 64.0]) and lowest for Technology integration (Mdn = 20.0, 95% CI [19.0, 24.0]). The total DCS-UT scale had a median of 113 (95% CI [100.0, 139.0]). Interquartile ranges (IQRs) indicated moderate variability, from 8.0 for Digital skills to 52.0 for the total scale. Observed scores ranged from 61 to 175, which is close to the full theoretical range (35–175), indicating reasonable dispersion and no serious floor or ceiling effects.
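As a methodological aside, the percentile bootstrap is one common way to obtain confidence intervals for medians such as those reported in Table 1. The minimal Python sketch below illustrates the idea on simulated scores within the observed range; the variable names, seed and resampling settings are illustrative assumptions, not the exact procedure used in this study.

```python
# Minimal sketch of a percentile bootstrap CI for a median (illustrative only;
# the demonstration data are simulated, not the study dataset).
import numpy as np

rng = np.random.default_rng(seed=2024)  # arbitrary seed for reproducibility

def bootstrap_median_ci(scores, n_boot=5000, alpha=0.05):
    """Return the sample median and a (1 - alpha) percentile bootstrap CI."""
    scores = np.asarray(scores, dtype=float)
    boot_medians = np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(scores, size=scores.size, replace=True)
        boot_medians[i] = np.median(resample)
    lower, upper = np.percentile(boot_medians, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.median(scores), (lower, upper)

# Hypothetical example: 48 total scores within the observed range 61-175.
demo_scores = rng.integers(61, 176, size=48)
median, ci = bootstrap_median_ci(demo_scores)
print(f"Median = {median:.1f}, 95% bootstrap CI [{ci[0]:.1f}, {ci[1]:.1f}]")
```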
To investigate differences in self-perceived digital competence, participants were asked whether they had previously participated in a structured training or workshop aimed at improving the implementation of e-learning (excluding the general use of platforms such as Zoom). A Mann–Whitney U-test showed that participants with previous training had significantly higher scores on the digital literacy (U = 179.00, Z = −2.17, p = 0.030, r = 0.369) and technology integration (U = 175.00, Z = −2.27, p = 0.023, r = 0.383) subscales. However, no statistically significant differences were found for the digital skills subscale (U = 203.00, Z = −1.69, p = 0.091), digital interaction (U = 225.50, Z = −1.21, p = 0.226) or the DCS-UT total score (U = 204.00, Z = −1.66, p = 0.098).
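The effect size r reported alongside these Mann–Whitney tests is commonly computed as |Z|/√N. The short sketch below shows one way to obtain U, an approximate Z and r with standard Python tooling; the two groups are made-up illustrations rather than the study data.

```python
# Sketch: Mann-Whitney U test with effect size r = |Z| / sqrt(N).
# The two groups below are made-up illustrations, not the study data.
import numpy as np
from scipy import stats

trained = np.array([62, 70, 55, 68, 74, 59, 66])        # e.g. digital literacy scores
untrained = np.array([48, 52, 57, 45, 60, 50, 43, 49])

u_stat, p_value = stats.mannwhitneyu(trained, untrained, alternative="two-sided")

# Normal approximation (ignoring tie corrections) to recover a Z value from U.
n1, n2 = len(trained), len(untrained)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u_stat - mu_u) / sigma_u
r = abs(z) / np.sqrt(n1 + n2)

print(f"U = {u_stat:.2f}, Z = {z:.2f}, p = {p_value:.3f}, r = {r:.3f}")
```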
To further investigate how self-perceived integration, frequency of use and interest in digital technologies relate to the level of self-perceived digital competence, the participants’ responses to three single-item questions were compared against scores on the four subscales and the DCS-UT total using the Kruskal–Wallis H-test. The DCS-UT comprises four subscales with the following score ranges: Digital Literacy (17–85), Digital Skills (6–30), Digital Interaction (6–30) and Technology Integration (6–30) (Table 2).
For the self-reported level of integration of digital technologies into teaching, statistically significant differences were observed in all subscales: digital literacy, H(4) = 13.45, p = 0.009; digital skills, H(4) = 16.52, p = 0.002; digital interaction, H(4) = 16.20, p = 0.003; and technology integration, H(4) = 10.26, p = 0.036. The DCS-UT total score also showed a significant difference between the groups, H(4) = 9.61, p = 0.048. The effect sizes for these Kruskal–Wallis tests ranged from η2 = 0.22 to 0.36, indicating small to moderate effects by conventional standards. Post hoc comparisons with the Dwass–Steel–Critchlow–Fligner test further clarified these differences. Participants who reported ‘skilful integration’ of digital technologies scored significantly higher on digital literacy than ‘frustrated learners’ (W = 4.68, p = 0.008). In the digital skills subscale, a significant difference was found between ‘skilful integrators’ and ‘frustrated learners’ (W = 5.24, p = 0.002). In digital interaction, the ‘skilful integrators’ scored significantly higher than the ‘gain confidence’ group (W = 4.24, p = 0.022), and similar differences were observed in technology integration (W = 3.90, p = 0.046). On the DCS-UT total scale, the ‘skilful integrators’ also differed significantly from the ‘gain confidence’ group (W = 4.26, p = 0.021).
Significant group differences were also found across the levels of reported frequency of use of digital technologies in teaching for all measures (p < 0.05), except for technology integration (H(4) = 9.164, p = 0.057). For this grouping variable, the Kruskal–Wallis tests yielded η2 values between 0.20 and 0.40, indicating small to moderate effect sizes by conventional standards. Post hoc comparisons showed that participants who reported always using digital technologies scored significantly higher than less frequent users. In terms of digital literacy, significant differences were found between the ‘always’ and ‘rarely’ (W = 4.17, p = 0.026) and ‘occasionally’ (W = 4.93, p = 0.004) groups. In terms of digital skills, the ‘always’ users performed significantly better than the ‘rarely’ (W = 4.42, p = 0.015), ‘occasionally’ (W = 4.62, p = 0.010) and ‘frequently’ users (W = 4.29, p = 0.020). Significant differences in digital interaction were found between ‘always’ and ‘occasionally’ (W = 4.29, p = 0.020) and ‘always’ and ‘rarely’ (W = 3.99, p = 0.038). A significant difference was found on the DCS-UT scale between ‘always’ and ‘occasionally’ (W = 4.82, p = 0.006).
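In principle, these Kruskal–Wallis and post hoc results can be reproduced with open-source tooling. The sketch below combines scipy’s kruskal test with one common eta-squared estimator, η2 = (H − k + 1)/(n − k), and the Dwass–Steel–Critchlow–Fligner all-pairs test from the scikit-posthocs package; the data frame, group labels and simulated scores are hypothetical placeholders, and the eta-squared formula is a standard estimator rather than a confirmed detail of the authors’ analysis.

```python
# Sketch: Kruskal-Wallis H-test, eta-squared effect size and DSCF post hoc test.
# The data frame below uses simulated scores and simplified group labels.
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # provides posthoc_dscf (Dwass-Steel-Critchlow-Fligner)

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "usage": rng.choice(["rarely", "occasionally", "frequently", "always"], size=48),
    "dcs_ut": rng.integers(61, 176, size=48),
})

groups = [g["dcs_ut"].to_numpy() for _, g in df.groupby("usage")]
h_stat, p_value = stats.kruskal(*groups)

# One common eta-squared estimator for Kruskal-Wallis: (H - k + 1) / (n - k).
k, n = len(groups), len(df)
eta_squared = (h_stat - k + 1) / (n - k)

# DSCF all-pairs comparisons; the returned p-values are already adjusted.
pairwise_p = sp.posthoc_dscf(df, val_col="dcs_ut", group_col="usage")

print(f"H({k - 1}) = {h_stat:.2f}, p = {p_value:.3f}, eta-squared = {eta_squared:.2f}")
print(pairwise_p.round(3))
```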
The differences in self-perceived digital competence were also statistically significant between the levels of interest in learning and use of digital technologies for teaching: digital literacy, H(3) = 18.59, p < 0.001; digital skills, H(3) = 16.47, p < 0.001; digital interaction, H(3) = 14.18, p = 0.003; technology integration, H(3) = 13.43, p = 0.004; and DCS-UT score, H(3) = 14.15, p = 0.003. The Kruskal–Wallis analyses for this set of comparisons yielded η2 values between 0.29 and 0.40, reflecting moderate effect sizes. A post hoc analysis revealed that participants who were ‘highly interested and motivated’ performed significantly better on digital literacy than those who were ‘slightly interested’ (W = 3.83, p = 0.034), ‘moderately interested’ (W = 4.09, p = 0.020) and ‘fairly interested’ (W = 4.12, p = 0.019). A significant difference was found between the ‘highly interested’ and ‘moderately interested’ groups on the digital skills subscale (W = 4.83, p = 0.004). On the DCS-UT scale, the ‘highly interested’ group scored significantly higher than the ‘fairly interested’ group (W = 5.28, p = 0.001). Further analyses were conducted to better understand the observed differences in self-perceived digital competence across academic status groups (Table 3).
For the total DCS-UT scale, the median score for assistant professors was 139.00 (IQR = 56.0), compared to 99.00 (IQR = 26.0) for associate professors and 98.50 (IQR = 12.0) for full professors. A Kruskal–Wallis H-test confirmed a statistically significant difference between the three groups, H(2) = 11.94, p = 0.003. Significant differences were also found in all four subscales (p < 0.05). The effect sizes for these Kruskal–Wallis tests ranged from η2 = 0.20 to 0.28, indicating small to moderate effects by conventional standards. A post hoc test revealed that assistant professors performed significantly better than associate professors in all five domains: digital literacy (W = 4.04, p = 0.012), digital skills (W = 4.62, p = 0.003), digital interaction (W = 3.90, p = 0.016), technology integration (W = 3.98, p = 0.013) and the DCS-UT (W = 4.19, p = 0.009). Significant differences were also found between assistant professors and full professors in digital literacy (W = 3.72, p = 0.023), digital interaction (W = 4.14, p = 0.010) and DCS-UT (W = 3.46, p = 0.038). No significant differences were found between associate and full professors in any domain (p > 0.05).
The Mann–Whitney U-test showed no significant difference in self-perceived digital competence scores between male (n = 29) and female (n = 19) participants (U = 204.500, p = 0.134).
Table 4 presents the correlation coefficients between age, years of teaching experience, and years of e-learning experience with the four subscales as well as with the total DCS-UT score.
The Spearman correlation analysis showed that age was negatively associated with all dimensions of digital competence, with older teachers consistently reporting lower scores (p < 0.001). A similar pattern emerged for years of teaching experience, which was associated with lower self-perceived digital competence scores (all p < 0.001). On the other hand, experience with e-learning was positively related to the integration of technology into teaching (p = 0.05), which could suggest that hands-on engagement with digital tools may help teachers to incorporate them more effectively into their work.
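For completeness, Spearman rank correlations of the kind summarised in Table 4 correspond to scipy’s spearmanr function; the minimal sketch below demonstrates the call on fabricated example values and is not a re-analysis of the study data.

```python
# Sketch: Spearman rank correlation between age and the DCS-UT total score.
# Values are fabricated solely to demonstrate the function call.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
age = rng.integers(35, 74, size=48)
dcs_ut_total = 175 - age + rng.normal(0, 15, size=48)  # artificial inverse trend

rho, p_value = stats.spearmanr(age, dcs_ut_total)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```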
4. Discussion
This pilot study provides important preliminary insights into the current state of self-perceived digital competence of medical teachers in Slovenia.
Despite the generally positive attitude of the participants and the relatively frequent use of digital tools in teaching, the study revealed key areas where skills are still underdeveloped. In particular, lower self-assessment in the creative use of digital technologies shows a gap in skills related to digital content creation, including ethical issues such as licensing. Boté-Vericad et al. [10] pointed out that while many teachers can operate digital platforms, their ability to design, adapt and evaluate digital learning materials in a pedagogically meaningful and legally compliant way is often limited. The analysis also revealed that teachers who regularly used digital technologies in their teaching achieved higher scores in almost all competence dimensions. Those who reported always using digital tools scored significantly higher in the areas of self-perceived digital competence, interaction and technical skills. These differences indicate that regular and deliberate use of technology in everyday classroom practice was positively associated with higher confidence and more advanced pedagogical application [18]. These observations are consistent with the idea that digital literacy is not static but develops through habitual engagement and critical reflection [2].
A further pattern emerged in relation to teachers’ interest in digital technologies. Participants with a strong motivation to learn and use digital methods scored higher on the competence scale than their peers. High levels of personal engagement were positively associated with better developed digital skills, supporting the interpretation that motivation is positively related to teachers’ professional development [19]. Similar correlations have been found in previous studies, suggesting that teachers with a proactive and self-directed approach are more likely to explore and apply innovative digital strategies [20]. Participants who considered themselves capable of skilfully integrating digital tools into the classroom also scored significantly higher in all areas. This internal coherence within the self-report instrument supports the view that professional self-perception can serve as a meaningful indicator of digital readiness. The ability to plan, implement and reflect on the use of digital tools appears to be associated with more advanced pedagogical digital practices [8]. Comparable European data confirm this association. Ersoy et al. [21] reported that faculty members with higher personal engagement and frequent use of digital tools achieved significantly higher scores across digital competence domains, underscoring the cross-national relevance of motivation as a key driver of digital readiness.
Teachers with more extensive experience in e-learning environments tended to score higher in the area of technology integration, indicating that hands-on use of digital tools was positively associated with higher self-perceived applied competence. Rather than acquiring digital skills in abstract or isolated contexts, participants seemed to benefit most from practical application that allowed experimentation and customisation in real classroom situations. This is in line with the findings of Ramírez-Montoya et al. [22], who showed that participation in targeted online training programmes such as Massive Open Online Courses (MOOCs) can not only promote general digital competencies, but also increase teachers’ willingness to create and use open digital resources in teaching. Similarly, the DigCompEdu framework [4] emphasises that digital competence develops through practice, reflection and context-specific application.
Differences in self-perceived digital competence by academic status were evident, with assistant professors outperforming their senior colleagues. Similar patterns were reported by Hautz et al. [23], who found that institutional barriers often hinder the systematic integration of digital competencies into curricula and faculty development. This is consistent with the findings of previous studies showing that younger or less experienced teachers tend to have greater adaptability and familiarity with digital tools [12,24], a trend likely driven by generational shifts in professional development, changes in educational technology use, and more frequent engagement with technology in early academic roles [6]. As the digital transformation of healthcare and medical education accelerates, these results emphasise the need for vertically integrated faculty development strategies that engage all academic ranks in continuing education [7]. Similarly, a recent international consensus on digital health education highlights that sustained competence growth depends on coordinated institutional and national strategies, a conclusion echoed by our findings on the pivotal role of structured faculty development [25]. In addition, a positive correlation was found between participation in structured digital courses and higher levels of competence, particularly in digital literacy and technology integration. Although the effects were not the same across all subdomains, this supports previous evidence that targeted, practice-orientated training is positively associated with higher self-perceived digital skills [26] and enhances teachers’ ability to effectively navigate and use digital tools in educational contexts [2,22].
This study found no significant gender-specific differences in self-perceived digital competences. This is in contrast to previous findings, such as those of O’Doherty et al. [24], where female teachers reported lower levels of confidence and competence in certain digital areas, particularly in the use of mobile technology. The absence of such gender differences in our sample may reflect a gradual shift towards greater gender parity in digital use in the Slovenian academic context. European-level evidence also suggests that gender gaps in educators’ digital competence are narrowing, though not uniformly across contexts, supporting our observation of reduced gender effects in Slovenia [27]. It is plausible that systemic efforts, such as institutional digital literacy strategies and professional development programmes, have contributed to narrowing the previously observed differences [28]. Recent studies support this trend and suggest that the gender digital divide decreases in higher education environments where equal access to technology and structured education is promoted [29,30]. In addition, EU-level initiatives such as the Digital Education Action Plan 2021–2027 [31] have emphasised the promotion of inclusive digital education with the aim of reducing inequalities in digital skills acquisition across gender and socio-demographic boundaries [1]. In contexts with mature institutional and policy frameworks that support digital capacity building, gender may no longer be the most important factor in digital literacy.
The observed negative correlation between age and self-perceived digital competence, particularly in areas such as digital literacy, technology integration and interaction, mirrors similar patterns found in other studies. For example, O’Doherty et al. [24] reported that senior medical teachers scored significantly lower in the creative and mobile subdomains of digital competence, highlighting the need for targeted interventions that recognise age-related differences in digital readiness and support teachers in updating their pedagogical approaches. This pattern resonates with a recent European mapping study showing that older health professionals across EU Member States systematically display lower digital skills, reinforcing the need for age-sensitive faculty development at a European scale [27].
The results of this pilot study have several practical implications for teacher development and curriculum design. First, institutions need to recognise that digital competence is not static but dynamic and is shaped by rapid technological development, including artificial intelligence, telemedicine and immersive learning tools. Therefore, teacher development programmes should go beyond the general teaching of digital skills and include pedagogically relevant, domain-specific applications [3]. Second, the differences in competence according to academic status suggest that training should be differentiated and personalised. Senior faculty members and those in higher positions may benefit from mentoring models or peer learning programmes that promote intergenerational knowledge exchange [23].
Strengthening individual and collective digital competences enables sustainable medical education that goes beyond the mere acquisition of technical skills. Higher levels of digital literacy support flexible and inclusive forms of teaching that maintain continuity of learning during disruptions such as pandemics and provide equitable access for diverse student groups. At the institutional level, digitally competent educators co-create curricula and assessment systems that are resilient, evidence-based and aligned with sustainability principles. These priorities are consistent with the European DigCompEdu framework and the Digital Education Action Plan 2021–2027, which explicitly promote equity, resilience and systemic readiness, core objectives of Sustainable Development Goal 4 (SDG 4): Quality Education [4,31].
4.1. Practical Implications
In line with the principles of sustainable higher education, the results show that long-term pedagogical change depends on coherent and well-supported institutional structures. Sustainable digital teaching capacity requires more than individual motivation. It is strengthened by consistent access to educational technologists, protected time for continuous professional development and formal recognition of achievements in digital pedagogy. Embedding teacher education in a broader institutional and policy framework such as DigCompEdu [2,4] ensures alignment with European policies and contributes to the resilience of education systems in the face of technological and societal change.
Teacher development programmes should be structured in such a way that they offer modular, needs-based and work-integrated learning opportunities. These should address areas such as digital assessment, the ethical and legal use of digital content and the integration of artificial intelligence into medical curricula [32]. For senior faculty members, long-term mentoring and peer learning can help to balance generational differences and strengthen institutional knowledge. Students should be involved as active contributors through peer-teaching programmes and participatory curriculum design, drawing on their experiences as digital natives [23]. In addition, digital competence training should also include legal and ethical knowledge, particularly in relation to the licensing and use of online content, where many teachers reported uncertainty [33]. These recommendations are intended to be integrated flexibly into institutional development plans so that universities can adapt them to their specific contexts and ensure lasting growth of digital competence.
Progress in digital competence should be regularly monitored using validated frameworks such as DigCompEdu [2,4]. This allows institutions to measure development, recognise skills gaps and adapt training to achieve sustainable improvements. A cyclical process of planning, implementation and review helps to ensure that digital skills development is effective and environmentally, socially and educationally sustainable [34,35].
4.2. Study Limitations
This study has several limitations that need to be considered. Although non-parametric tests with a conventional significance threshold (p < 0.05) were used, the large number of comparisons means that the possibility of occasional false-positive findings cannot be fully excluded. We addressed this by reporting effect sizes and using the Dwass–Steel–Critchlow–Fligner procedure, which provides p-values already adjusted for multiple pairwise comparisons, but the results should still be interpreted with appropriate caution. The use of self-assessments introduces potential biases, including social desirability and self-assessment inaccuracies, especially in areas such as digital creativity and inclusion, which may be over- or underestimated depending on the respondent’s confidence and interpretation of the scale [36]. Because all variables were derived from the same self-report survey, the risk of common-method bias cannot be completely ruled out. In addition, the use of convenience sampling from two Slovenian medical faculties may not capture the full diversity of experiences, institutional contexts and levels of digital support available to medical educators nationally or internationally. Furthermore, although both Slovenian medical faculties were included, the response rate was modest, resulting in some very small subgroups by academic rank; for these subgroups, dispersion estimates (IQRs) are less stable and should be interpreted with caution. Despite these limitations, the pilot study provides important preliminary findings and serves as a valuable basis for the design of a more extensive and representative investigation.
36]. Because all variables were derived from the same self-report survey, the risk of common-method bias cannot be completely ruled out. In addition, the use of convenience sampling from two Slovenian medical faculties may not capture the full diversity of experiences, institutional context and levels of digital support available to medical educators nationally or internationally. Furthermore, although both Slovenian medical faculties were included, the response rate was modest, resulting in some very small subgroups by academic rank, for these subgroups, dispersion estimates (IQR) are less stable and should be interpreted with caution. Despite these limitations, the pilot study provides important preliminary findings and serves as a valuable basis for the design of a more extensive and representative investigation.