Article

Aligning the Operationalization of Digital Competences with Perceived AI Literacy: The Case of HE Students in IT Engineering and Teacher Education

Veljko Aleksić 1, Milinko Mandić 2 and Mirjana Ivanović 3
1 Faculty of Technical Sciences Čačak, University of Kragujevac, Svetog Save 65, 32000 Čačak, Serbia
2 Faculty of Education in Sombor, University of Novi Sad, Podgorička 4, 25000 Sombor, Serbia
3 Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(12), 1582; https://doi.org/10.3390/educsci15121582
Submission received: 26 October 2025 / Revised: 18 November 2025 / Accepted: 21 November 2025 / Published: 24 November 2025

Abstract

The paper presents research and preliminary findings aimed at improving curricula so that digital competencies are aligned with the required Artificial Intelligence (AI) literacy. The research was conducted at the Faculty of Technical Sciences in Čačak, University of Kragujevac (Serbia). The participants were future computer science teachers and IT engineering students. The research tool for self-evaluation of AI literacy was a questionnaire based on the Serbian version of the AILS (Artificial Intelligence Literacy Scale), while digital competencies, based on the DigComp framework, were determined by objective testing. The research took into account the socioeconomic status of the students, their demographic characteristics, and their English language proficiency. Preliminary results indicated the presence of significant relationships between certain digital competencies (such as programming, digital signal processing, and creative thinking) and all four constructs of AI literacy. The research findings highlight the impact of AI literacy on data analysis performance and problem solving.

1. Introduction

Artificial intelligence (AI) has become an indispensable part of everyday life, permeating nearly every sphere of human activity. AI technologies are redefining the way we learn, work, and live.
AI is now a globally present technology, with applications in almost all scientific disciplines. According to a study by H. Wang et al. (2023), AI is used to generate hypotheses, design experiments, and analyze large data sets in fields ranging from biology to physics. Ding et al. (2025) and Bajpai et al. (2025) emphasize the exponential growth of artificial intelligence not only in computer science but also in other fields such as biology, medicine, finance, production, social science, and engineering. Ding et al. (2025) also point to the wide geographical spread of research in this area. AI is transforming the entire development process in the IT industry in areas such as automated code generation, intelligent debugging, predictive maintenance of software systems, and decision-making in development teams (Alenezi & Akour, 2025). AI-based tools, especially those that automatically generate programming code and use large language models, significantly improve the efficiency and quality of software development while simultaneously changing the way developers do their jobs and function within the industry (Becker et al., 2025). The use of artificial intelligence also raises numerous ethical questions, including the risk of bias, lack of transparency, and accountability gaps (Bird et al., 2020). It is therefore particularly important that students at all levels of education are familiar with artificial intelligence, with an emphasis on the possibilities of its application and its potential drawbacks. This is especially true for students who will work in the IT field after graduation.
The European Commission has developed DigComp—Digital Competence Framework for Citizens—as a tool for assessing and developing the digital skills of citizens across the EU. DigComp was first published in 2013 by the Joint Research Centre (JRC) of the European Commission, in response to the need for a common language in the field of digital skills (Oberländer et al., 2020). DigCompEdu, a framework grounded in DigComp that delineates the digital skills and competencies expected of educators across all educational stages, has been proposed as a suitable basis for extending digital competence frameworks to integrate AI-specific competencies (Tenberga & Daniela, 2024). In light of the importance of DigComp and AI, as well as the established link between these frameworks (Tenberga & Daniela, 2024), this research aimed to assess students' knowledge across both domains and to explore the potential statistically significant relationship between them.
This paper presents a study conducted on a population of higher education students of IT engineering and the informatics teacher education program, with the following main objectives: (1) determining, through self-evaluation, the level of AI literacy in four areas (awareness, usage, evaluation, ethics), taking into account the sociodemographic characteristics of the students; (2) examining the performance of respondents in solving practical tasks that measured the levels of certain key digital competencies (DigComp framework); and (3) examining the correlations between certain digital competencies and self-evaluated AI knowledge and skills.

2. Related Work

2.1. AI Literacy

AI literacy, in its basic definition, represents the ability to understand, use, and critically evaluate AI systems and AI outputs (Lintner, 2024). Lintner lists four aspects essential for AI literacy: knowing, applying, evaluating, and understanding ethical principles in the context of using AI. Similarly, Ng et al. (2021), in a comprehensive review, highlight four aspects (i.e., knowledge and understanding, use, evaluation, and ethical issues) for developing AI literacy. OECD (2025, p. 6) defines AI literacy as “the technical knowledge, durable skills, and future-ready attitudes required to thrive in a world influenced by AI. It enables learners to engage, create with, manage, and design AI, while critically evaluating its benefits, risks, and ethical implications”. Chiu (2025), when defining AI literacy, particularly emphasizes the clear distinction between the terms AI literacy and AI competence: according to the author, AI literacy refers to understanding (“what does this AI do”), while AI competence represents a higher-level skill referring to the improvement of AI systems. Long and Magerko (2020) define AI literacy as a set of competencies that enable people to critically examine AI, communicate and collaborate with AI, and use AI as a tool, whether online, at home, or at work.

2.2. Objective Assessment of AI Literacy Among Higher Education Students

Hornberger et al. (2025) present a cross-national study comparing AI knowledge and literacy among students in Germany, the UK, and the US while considering sociodemographic background variables. The results show differences in AI knowledge levels across countries, varying attitudes toward AI, and a relationship between prior learning experiences and test scores. The survey showed that most students have a basic level of AI literacy, as well as generally positive attitudes toward and interest in AI. The research presented by Wen et al. (2025) investigates how collaboration with AI (co-creation) affects AI literacy in 401 Chinese students. The results show that trust in AI, readiness for technology, and personalization of AI tools significantly contribute to the development of AI literacy.
In the contemporary literature, many papers deal with the development of tests for the objective assessment of AI literacy. Jin et al. (2024) presented the Generative AI Literacy Assessment Test (GLAT), a 20-item multiple-choice test for assessing generative AI literacy. Wu et al. (2025) introduced an AI Literacy Evaluation System based on the KSAVE (knowledge, skill, attitude, value, and ethics) model and the UNESCO AI competency framework. The system includes objective indicators of AI attitude, AI knowledge, AI capability, and AI ethics and was applied to a large sample of students (1651 undergraduate students at Wuhan University). Other authors, such as Biagini et al. (2024), have also developed instruments to assess AI literacy; their study presents the AI Competency Objective Scale (AICOS), developed for the objective measurement of AI literacy.

2.3. Self-Assessment of AI Literacy Among Higher Education Students

Recent papers also attach considerable importance to students' self-evaluation of their AI literacy. Although the existing literature does not always state explicitly that student self-assessment is as useful as objective assessment of AI literacy, empirical studies (Shen & Cui, 2024; Bećirović et al., 2025) employ self-assessment instruments and find significant predictive relationships between self-assessed AI literacy and outcomes such as self-efficacy and output quality. In the study by Carolus et al. (2023), the MAILS scale was developed and validated for measuring self-reported AI literacy. The instrument includes important psychological competencies in addition to AI literacy and is used as a questionnaire, without tasks. Mansoor et al. (2024) present a cross-border survey on AI literacy among 1800 university students from four Asian and African countries. The questions were based on self-assessments of knowledge, attitudes, and readiness to use AI in learning and future careers. Nationality and academic degree were recognized as significant factors, followed by scientific specialization. Bewersdorff et al. (2024) conducted a survey of undergraduate and graduate students from the United Kingdom, the United States, and Germany. The survey included (self-)assessments of their self-efficacy, attitudes toward AI, interest in AI, and applications of AI. The response format was a five-point Likert scale, while AI literacy was measured using an objective test. The research shows that positive attitudes toward AI contribute to higher levels of self-evaluation and interest in AI. A study by Laupichler et al. (2024) conducted on medical students examined their self-assessment of AI knowledge and attitudes towards its future use in healthcare. The results show the need for a stronger introduction of technical aspects into AI literacy courses, as well as correlations between AI literacy and attitudes towards AI.

2.4. Research on AI Literacy Among Pre-Service Teachers and IT/Computer Science Engineers

Given the particular importance of AI literacy among future teachers and IT engineering/computer science students, the contemporary literature contains a substantial number of studies that focus exclusively on examining AI literacy within this specific student population. Ayanwale et al. (2024) analyze AI literacy among pre-service teachers through the aspects of knowledge, usage, ethics, and problem solving. The results indicate positive attitudes and ethical awareness, but insufficient practical knowledge, especially in creating AI solutions and problem solving using AI applications. Runge et al. (2025) address the readiness of future teachers to embrace AI in teaching and highlight the importance of courses for developing AI-related competencies. Their findings show that participation in AI courses is positively associated with AI-related technological pedagogical content knowledge, which in turn fosters a positive attitude towards using AI in future teaching. Weichert et al. (2025) present a study of the attitudes of post-secondary computer science students towards AI ethics and policy. The results show that students generally have positive attitudes towards AI but are also aware of the potential dangers associated with the use of these technologies, especially regarding job loss. Siddharth et al. (2025) introduce an interdisciplinary course called “The World of AI”, aimed at first-year engineering students with no prior experience in AI. The course integrates the technical fundamentals and ethical aspects of AI without complex mathematical formulas. The results show a significant improvement in students' understanding of the ecological and social dimensions of AI, as well as in their perception of AI as a transformative influence in everyday life. Ebert and Kramarczuk (2025) present a study of undergraduate students in a computing major aimed at establishing how students think about AI. The authors point to a clear gap between students' practical AI competencies and their understanding of AI principles, and consequently propose a methodological balance: a combination of surveys (for self-assessment and confidence) and objective tests (for measuring actual competence).

2.5. Digital Competence Framework for Citizens (DigComp) and Its Relationship with AI Literacy

The Digital Competence Framework for Citizens (DigComp) defines digital competence through five main areas (information and data literacy, communication and collaboration, digital content creation, safety, and problem solving) and a total of 21 specific skills. The framework also includes eight levels of achievement and provides examples of knowledge, skills, and attitudes, as well as illustrations of their application in educational and professional settings (European Commission, 2022). Abubakari et al. (2025) examined the reliability, validity, and applicability of the DigComp framework in assessing digital competencies. Their results show different levels of digital competence, with students rating themselves worse in the domains of digital content creation and problem solving, while digital safety and communication and collaboration received better results. Rioseco-Pais et al. (2024) applied a dedicated DIGCOMP-PED instrument, based on DigComp 2.1 and adapted to the Chilean higher education context. The research indicated a correlation between years of experience in using technologies and knowledge of specific DigComp frameworks. López-Meneses et al. (2020) assessed the digital competencies of students at two Spanish and one Italian university using three of the five areas of the DigComp 2.1 model (information and data literacy, communication and collaboration, and digital content creation). The research showed that students have a higher average level of competence in the areas of information and data literacy and communication and collaboration, while in the area of digital content creation they are at a lower average level; in particular, they have less developed skills in creating and distributing multimedia content using different tools. AI literacy is often viewed as an advanced form of digital literacy (Lintner, 2024), and Long and Magerko (2020) consider digital literacy a prerequisite for AI literacy. Van Audenhove et al. (2024) analyze data literacy within the 2022 DigComp 2.2 framework, which is more focused on artificial intelligence. Their paper confirms that the DigComp framework does not ignore AI but rather integrates it: AI is considered together with data and algorithms, as algorithms imply decision-making and can include AI. Hong (2025) develops and validates a “competency-based ladder development pathway” for improving AI literacy in higher vocational school students, relying on a combination of the UNESCO AI competency framework, EU DigComp 2.2, and OECD AI literacy guidelines. Zhang and Tian (2025) investigate the competencies, based on the adopted DigComp framework, that students need to develop to use generative artificial intelligence. Their findings indicate that universities emphasize digital literacy, security, and critical thinking, while the areas of communication and collaboration are often neglected.

3. Methodology

The research problem is to examine how HE students' (future IT engineers and teachers) operationalized digital competences are related to their self-perceived levels of AI literacy. The research was conducted between July and September 2025 at the Faculty of Technical Sciences in Čačak, University of Kragujevac (Serbia). A total of N = 93 students (aged 19 to 28) participated. The sample was defined to evenly represent students from various socioeconomic environments and with various levels of academic success. Given the gender imbalance in our sample, statistical power to detect gender differences was limited. A post hoc power analysis indicated that with α = 0.05 and the observed sample sizes, our study had approximately 50–60% power to detect medium effect sizes (d = 0.5). Consequently, non-significant gender comparisons should be interpreted with caution, as they may reflect insufficient statistical power rather than a true absence of differences.
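For transparency, the reported post hoc power figure can be reproduced with standard tools. Below is a minimal sketch in Python using statsmodels, assuming the gender split reported in Section 4 (63 male and 25 female students); it illustrates the calculation rather than reproducing the authors' original computation.

```python
# Post hoc power of an independent samples t-test for a medium effect.
# Group sizes (63 vs. 25) are taken from the Results section; this is an
# illustrative sketch, not the study's original analysis script.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(
    effect_size=0.5,  # medium effect size (Cohen's d)
    nobs1=63,         # larger group (male students)
    ratio=25 / 63,    # nobs2 = nobs1 * ratio (female students)
    alpha=0.05,
)
print(f"Power to detect d = 0.5: {power:.2f}")  # approx. 0.55, i.e., 50-60%
```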
The first part of the survey was a questionnaire that students completed on faculty premises to collect sociodemographic data and assess their self-perceived level of AI literacy, based on the Serbian translation of the AILS (Artificial Intelligence Literacy Scale) (B. Wang et al., 2022). This was followed by the performance grading of a series of short, representative practical activities and tasks covering ten selected digital competences and skills. Since the EU DigComp 2.2 framework is complex, it was neither possible nor rational to operationalize practical tasks for all five areas in a relatively short timeframe; the ten selected competences and skills could be measured and evaluated reliably and efficiently. We used the following selection matrix:
- From area 1 (information and data literacy), we selected 1.2 Evaluating data, information, and digital content (referred to in our research as DSP) and 1.3 Managing data, information, and digital content (referred to as computer architecture);
- From area 2 (communication and collaboration), we selected 2.2 Sharing information and content through digital technologies (referred to as teamwork/communication) and 2.4 Collaborating through digital technologies (referred to as project management);
- From area 3 (digital content creation), we selected 3.1 Developing digital content (referred to as digital design, 3D modeling, and computer animation), 3.2 Integrating and re-elaborating digital content (referred to as software development), and 3.4 Programming;
- From area 5 (problem solving), we selected 5.3 Creatively using digital technologies (referred to as creative thinking).
The competencies were selected based on their relevance to our student group and its future professional practice, where skills in software development, programming, and creative problem-solving are core professional requirements. The selection was also significantly influenced by the feasibility of assessing student actions in simulated real-life problems that could be reliably evaluated through practical performance tasks within the study's limited timeframe. We prioritized the competencies most frequently engaged in academic coursework at our institution, ensuring that students had the prior experience, knowledge, and skills necessary for performing the practical tasks and for valid assessment, while keeping in mind the theoretical alignment with our research aim of exploring relationships with AI literacy by focusing on competencies involving computational thinking, content creation, and information management.
Following the theoretical and empirical aspects of our research, students were examined by combining descriptive-analytical and quantitative observation methods, through which we established the property distributions and analyzed the relationships between variables. Data analysis was performed using IBM SPSS Statistics v22. We used the following methods: descriptive statistics (frequency, percentage, arithmetic mean, standard deviation, minimum, maximum, skewness, kurtosis), the independent samples t-test, Cronbach's alpha internal consistency coefficient, the Shapiro–Wilk test, the Mann–Whitney U test, and correlation analysis.
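The choice between parametric and non-parametric tests applied throughout the Results section follows a simple decision rule. The sketch below expresses it with SciPy on synthetic placeholder scores (the study data are not reproduced; group sizes mirror the sample only for illustration).

```python
# Normality check followed by the appropriate two-group comparison,
# mirroring the test-selection logic described above. The scores are
# synthetic placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(5.2, 1.5, 63)  # e.g., male students' scores
group_b = rng.normal(5.4, 1.5, 25)  # e.g., female students' scores

# Shapiro-Wilk test of normality (pooled here for brevity; per-group
# or per-variable checks, as in the paper, work the same way).
w, p_norm = stats.shapiro(np.concatenate([group_a, group_b]))

if p_norm >= 0.05:
    t, p = stats.ttest_ind(group_a, group_b)  # independent samples t-test
    print(f"t = {t:.3f}, p = {p:.3f}")
else:
    u, p = stats.mannwhitneyu(group_a, group_b)  # Mann-Whitney U test
    print(f"U = {u:.1f}, p = {p:.3f}")
```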

4. Results and Discussion

The valid sample of N = 93 students consisted of NM = 63 (73.1%) males and NF = 25 (26.9%) females. The average participant age was 20.3 years (SD = 1.18). In total, NU = 52 (55.9%) of the students lived in urban areas, while NR = 41 (44.1%) lived in rural environments. As linguistic competence is fundamental to the use of digital technologies in both personal and professional life, we asked students to self-assess their English proficiency level. To our surprise, the distribution was relatively even, even though respondents had studied English through formal education for a minimum of 14 years: NL5 = 25 (26.9%) reported advanced, NL4 = 32 (34.4%) upper intermediate, NL3 = 32 (34.4%) intermediate, and NL2 = 4 (4.3%) only elementary level. To get a more comprehensive picture of the students, we also asked about their economic status (i.e., financial wealth), as this factor could influence their access to digital technologies. Given the low economic parameters of Serbian society, our expectations were confirmed: only NF5 = 3 (3.2%) students reported excellent, NF4 = 27 (29.0%) good, NF3 = 53 (57.0%) fair, and NF2 = 10 (10.8%) poor financial status. An independent samples t-test was performed to examine whether students' type of living environment was associated with a significant difference in their self-estimated English proficiency level or financial wealth. The test revealed no statistically significant differences between the two groups, with t (91) = 1.29; p = 0.201 and t (91) = −0.566; p = 0.574, respectively.
Most of the students graduated from vocational secondary education (e.g., IT, computer science, electrotechnics) (NSE2 = 52; 55.9%), followed by gymnasium (NSE1 = 18; 19.4%). There was a statistically significant gender difference in secondary education GPA, t (72.5) = −3.30; p = 0.001: female students (M = 3.84; SD = 0.374) achieved significantly higher grades than males (M = 3.49; SD = 0.635), in accordance with similar research (Qazi et al., 2021). However, the average tertiary education GPA showed no statistically significant difference between male (M = 7.88; SD = 0.806) and female students (M = 7.85; SD = 0.769) (t (91) = 0.197; p = 0.844).
The inter-item correlation matrix for each of the four AI literacy constructs, based on the 12 self-reported seven-point Likert items, shows statistically significant correlations that support the scale's validity. The average inter-item correlation for each AI literacy construct was as follows: Awareness (0.569), Usage (0.561), Evaluation (0.375), and Ethics (0.463). The internal consistencies of the self-reported scores are shown in Table 1.
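To clarify how the reliability figures in Table 1 are typically obtained, the sketch below computes Cronbach's alpha and the average inter-item correlation for a hypothetical three-item, seven-point Likert construct; the generated responses are placeholders, not the study data.

```python
# Cronbach's alpha and average inter-item correlation for one construct.
# The simulated Likert responses below are illustrative placeholders.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def avg_inter_item_r(items: np.ndarray) -> float:
    r = np.corrcoef(items, rowvar=False)
    upper = np.triu_indices_from(r, k=1)  # unique item pairs
    return r[upper].mean()

rng = np.random.default_rng(1)
latent = rng.normal(5, 1.2, size=(93, 1))  # latent construct level
items = np.clip(np.round(latent + rng.normal(0, 0.8, (93, 3))), 1, 7)

print(f"alpha = {cronbach_alpha(items):.3f}")
print(f"average inter-item r = {avg_inter_item_r(items):.3f}")
```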
Since the sample size was relatively small (N = 93), a Shapiro–Wilk test was performed to assess the normality of the perceived AI literacy score distributions; it showed a significant departure from normality (W = 0.932; p < 0.001). Based on this outcome, a non-parametric Mann–Whitney U test was used to examine gender differences. The results indicate no significant gender difference for the Awareness (z = −0.479; p = 0.632), Usage (z = −0.903; p = 0.367), Evaluation (z = −0.662; p = 0.508), and Ethics (z = −1.047; p = 0.295) AI literacy constructs, in accordance with the results of Xia et al. (2022). Given the gender imbalance in our sample, these findings should be interpreted cautiously, as the study may have been underpowered to detect existing gender differences. Likewise, there was no significant difference between type of living environment and the Awareness (z = −0.758; p = 0.448), Usage (z = −1.538; p = 0.124), and Evaluation (z = −1.76; p = 0.078) AI literacy constructs, though the marginally non-significant p-value for Evaluation suggests a potential trend worth investigating in larger samples. However, students living in rural environments perceived higher levels of the Ethics AI literacy construct than students living in urban environments (z = −2.34; p = 0.02).
A Pearson correlation coefficient was computed to assess the relationship between English proficiency level and perceived AI literacy scores. There were positive correlations between English proficiency and Awareness, r (93) = 0.313, p = 0.002, and Ethics AI literacy, r (93) = 0.208, p = 0.045; increases in English proficiency were correlated with increases in the perceived levels of these two AI literacy constructs, as shown in Figure 1. There were no significant correlations between perceived AI literacy levels and students' financial wealth or tertiary education GPA.
The results of practical activity performance evaluation are shown in Table 2.
As the Shapiro–Wilk test showed significant (p < 0.05) departures from normality in the distributions of digital competence and skill scores, a non-parametric Mann–Whitney U test was again used for gender comparisons. The results indicate no significant gender difference for the Digital signal processing (z = −1.235; p = 0.217), Computer architecture (z = −1.442; p = 0.149), Programming (z = −1.670; p = 0.095), Software development (z = −1.554; p = 0.120), 3D modeling (z = −1.577; p = 0.115), Computer animation (z = −0.607; p = 0.544), Teamwork/Communication (z = −1.936; p = 0.053), Creative thinking (z = −1.632; p = 0.103), Digital design (z = −1.478; p = 0.139), and Project management (z = −1.960; p = 0.050) digital competences and skills. Given the limited female sample size, these findings should not be interpreted as evidence of gender similarity, but rather as indicating that any existing differences were not detectable with the current sample composition.
There were significant correlations between students' English proficiency level and the Digital signal processing, r (93) = 0.236, p = 0.023, Programming, r (93) = 0.289, p = 0.005, Software development, r (93) = 0.235, p = 0.023, and Digital design, r (93) = 0.220, p = 0.034, competences and skills. As digital technologies and environments predominantly use English, these findings were expected: English is essential for understanding, evaluating, and operationalizing digital competences and skills, which is consistent with the findings of Audrin and Audrin (2022).
A Pearson correlation coefficient was computed to assess the relationship between perceived AI literacy and digital competences and skills performance scores, as presented in Table 3.
We detected weak to moderate positive correlations between Awareness AI literacy and Digital signal processing, r (93) = 0.352, p < 0.001, Computer architecture, r (93) = 0.297, p = 0.004, Programming, r (93) = 0.320, p = 0.002, Creative thinking, r (93) = 0.358, p < 0.001, and Digital design, r (93) = 0.330, p = 0.001. Weak to moderate positive correlations were also detected between Usage AI literacy and Digital signal processing, r (93) = 0.290, p = 0.005, Programming, r (93) = 0.282, p = 0.006, and Creative thinking, r (93) = 0.437, p < 0.001. Evaluation AI literacy correlated weakly and positively with Digital signal processing, r (93) = 0.342, p < 0.001, Computer architecture, r (93) = 0.299, p = 0.004, Programming, r (93) = 0.239, p = 0.021, Creative thinking, r (93) = 0.270, p = 0.009, and Digital design, r (93) = 0.212, p = 0.041. Finally, the Ethics AI literacy construct correlated weakly to moderately and positively with Digital signal processing, r (93) = 0.295, p = 0.004, Computer architecture, r (93) = 0.294, p = 0.004, Programming, r (93) = 0.297, p = 0.007, Creative thinking, r (93) = 0.429, p < 0.001, and Digital design, r (93) = 0.269, p = 0.009. Taken together, these findings show that the Digital signal processing, Programming, and Creative thinking digital competences and skills positively correlate with all four AI literacy constructs, supporting the validity of the proposed model, as presented in Figure 2.
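A Table 3-style correlation matrix with significance stars can be assembled programmatically. The sketch below uses pandas and SciPy on synthetic placeholder columns; the column names merely echo two construct and two competence labels from this study.

```python
# Build a correlation table with significance stars (** p < 0.01, * p < 0.05),
# in the style of Table 3. All data here are synthetic placeholders.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def corr_with_stars(df, literacy_cols, skill_cols):
    out = pd.DataFrame(index=skill_cols, columns=literacy_cols)
    for skill in skill_cols:
        for lit in literacy_cols:
            r, p = pearsonr(df[skill], df[lit])
            star = "**" if p < 0.01 else "*" if p < 0.05 else ""
            out.loc[skill, lit] = f"{r:.2f}{star}"
    return out

rng = np.random.default_rng(2)
demo = pd.DataFrame(rng.normal(size=(93, 4)),
                    columns=["Awareness", "Usage",
                             "Programming", "Creative thinking"])
print(corr_with_stars(demo, ["Awareness", "Usage"],
                      ["Programming", "Creative thinking"]))
```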
To examine whether the identified relationships between digital competencies and AI literacy were independent of potential confounding factors, multiple regression analyses were conducted for each AI literacy construct. The control variables were English proficiency, gender, living environment, and secondary education GPA. For the Awareness AI literacy construct, the control variables explained 13.6% of variance (R2 = 0.136, F (4, 88) = 3.47, p = 0.011); adding digital competencies did not significantly increase the explained variance (ΔR2 = 0.163, F (10, 78) = 1.81, p = 0.072). For the Usage AI literacy construct, the control variables explained 7.3% of variance (R2 = 0.073, F (4, 88) = 1.74, p = 0.15); adding digital competencies significantly increased the explained variance by 26.8% (ΔR2 = 0.268, F (10, 78) = 3.17, p = 0.002), with the full model explaining 34.1% of variance. After controlling for confounds, Creative thinking (β = 0.216, p = 0.001) remained a significant independent predictor. For the Evaluation AI literacy construct, the control variables explained 8.0% of variance (R2 = 0.080, F (4, 88) = 2.92, p = 0.116); adding digital competencies did not significantly increase the explained variance (ΔR2 = 0.110, F (10, 78) = 1.06, p = 0.401). For the Ethics AI literacy construct, the control variables explained 14.9% of variance (R2 = 0.149, F (4, 88) = 3.84, p = 0.006); adding digital competencies significantly increased the explained variance by 17.4% (ΔR2 = 0.174, F (10, 78) = 1.99, p = 0.045), with the full model explaining 32.2% of variance. After controlling for confounds, Creative thinking (β = 0.193, p = 0.002) remained a significant independent predictor here as well. These findings indicate that the relationships between digital competencies and AI literacy constructs were not merely artifacts of sociodemographic factors but represent genuine independent associations.
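The hierarchical regression logic applied here (controls entered in step 1, digital competencies added in step 2, with an F-test on the R2 change) can be expressed compactly. The sketch below uses statsmodels on synthetic placeholder variables and includes a single competence predictor for brevity.

```python
# Two-step hierarchical OLS with an F-test on the R-squared increment,
# mirroring the analysis described above. Variables are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 93
df = pd.DataFrame(rng.normal(size=(n, 5)),
                  columns=["english", "gender", "environment",
                           "gpa", "creative_thinking"])
df["usage"] = 0.3 * df["creative_thinking"] + rng.normal(size=n)  # outcome

# Step 1: control variables only.
controls = sm.add_constant(df[["english", "gender", "environment", "gpa"]])
step1 = sm.OLS(df["usage"], controls).fit()

# Step 2: controls plus the competence block.
full = sm.add_constant(df[["english", "gender", "environment",
                           "gpa", "creative_thinking"]])
step2 = sm.OLS(df["usage"], full).fit()

# F-test for the variance explained by adding the competence block.
f_change, p_change, _ = step2.compare_f_test(step1)
print(f"delta R2 = {step2.rsquared - step1.rsquared:.3f}, "
      f"F change = {f_change:.2f}, p = {p_change:.4f}")
```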
The identified competences are critical for the problem-solving and systematic thinking skills at the core of problem-solving theory (Simon & Newell, 1971). Dörner and Funke (2017) defined problem-solving skills as an individual's capability to respond to and manage problems effectively. These skills are essential for productive and quality living and working in contemporary post-digital society, as the digital competences and skills embedded in the AI literacy construct require knowledge of digital data processing, handling data structures, information evaluation, and effective creative thinking. Our results identified positive correlations between the aforementioned competences and skills and perceived AI literacy scores, supporting this theoretical presumption. Our findings are aligned with similar research (Brandl et al., 2025; Knoth et al., 2024) demonstrating that proficient AI literacy enhances data analysis performance and complex problem solving.
The positive correlations detected between the Ethics AI literacy construct and several largely technical competencies (Digital signal processing, Computer architecture, Programming, Creative thinking, and Digital design) warrant particular attention, as the relationship between these technical skills and ethical awareness is not intuitive at first glance. Students with a higher level of programming skill may have greater exposure to ethical dilemmas in the course of technical implementation; for instance, programming involves decisions about data handling, information transparency, and algorithmic fairness. Students who understand well how algorithms process data and how software systems are structured are better equipped to recognize the ethical implications of AI in decision-making processes. The level of creative thinking skills showed the strongest correlation with the Ethics AI literacy construct, suggesting that ethical reasoning about AI requires the imaginative capacity to envision potential consequences, consider multiple perspectives, and identify unintended societal impacts. However, it is important to note that the identified correlations are relatively weak, indicating that technical competency alone does not guarantee ethical literacy. This relationship most likely operates bidirectionally: better digital/technical understanding enables better recognition of ethical issues, while ethical awareness simultaneously motivates students to develop digital/technical skills more thoughtfully.
The integration of AI literacy in HE courses is closely related to the development of several key digital competences, particularly those related to problem solving, creative and analytical thinking, and ethical reasoning in applying AI technologies. Innovation in study subjects, programs, and curricula is crucial for providing IT students with the ability and competence to quickly adapt and make informed decisions when solving new, previously unknown problems, which are growing in number in contemporary fast-paced technology working environments. This prioritization should involve all educational levels, as the rapid advancement of AI technology changes labor market expectations and demands on a daily basis.

4.1. Pedagogical Implications

Our research determined that students with stronger digital competences in digital signal processing, programming, and creative thinking also reported higher levels of AI literacy. These results can serve as a useful reference for planning the learning and training of HE IT students, as well as for promoting existing or new services supported by AI-based tools and technologies. Based on observations during the research implementation and the results stated above, the following practical didactic recommendations for the successful use of this approach were formulated: (1) when using AI in teaching, it is necessary to enable students (in addition to created authentic situations) to participate independently, explore, and interact with other students; (2) teachers must provide clear and appropriate guidelines for work and enable students to access key information; and (3) given that the potential for successful use of AI is very broad, it is necessary to assess and take into account students' prior knowledge and digital competences when designing activities.
The recommendations presented here align with findings in the relevant literature. Garzón et al. (2025) summarize that learning is an active process involving collaboration with others and that AI should support this by providing interactive environments that foster autonomous and experiential learning. According to Luckin et al. (2016), AI in education has the potential to enable collaborative learning and, at the same time, alleviate the pressure on teachers by relieving them of having to manage this task alone. The authors additionally highlight that AI provides intelligent support for collaborative learning and identify this as one of the three core categories of AI applications in education. Walter (2024) emphasizes the importance of comprehensive teacher training and the accessibility of AI resources. The author further notes that instructional scaffolding plays a crucial role in fostering students' critical thinking skills within an AI-driven learning environment and that the teacher's task is to provide clear tasks and instructions. Hornberger et al. (2023) argue that differences in students' prior knowledge should be considered when designing AI-related courses. Similarly, Hsu and Chen (2024) conclude that educational institutions should first assess students' existing knowledge and subsequently adapt instructional approaches accordingly. Likewise, Pusey et al. (2024) emphasize the importance of evaluating students' prior knowledge in the domain of AI.

4.2. Limitations

The research was conducted with certain limitations. Although the sample was adequate and the psychometric instrument was reliable and valid, no conclusions about a causal relationship between student digital competences and AI literacy could be drawn due to the correlational nature of the research. The direction of the identified relationships should therefore be clarified through longitudinal research that would add a dynamic dimension. Another limitation of this study was the gender imbalance, which is representative of IT education enrolment patterns in Serbia. This limited the statistical power for gender comparisons, i.e., our analyses were underpowered to detect small to medium effect sizes, increasing the risk of false negatives. Future research should oversample female students or collaborate across institutions to achieve more balanced representation.

5. Conclusions

The paper presents a study on the extent to which students (future IT engineers and teachers) possess digital competences and how these are related to their self-evaluated AI literacy. The research confirmed a significant relationship between operationalized digital competences and self-perceived AI literacy. In particular, digital signal processing, programming, and creative thinking emerged as competences consistently correlated with all four constructs of AI literacy. As fundamental elements of problem-solving and systematic thinking, these competences prove essential for effective functioning in today’s post-digital society shaped by AI-based technologies and tools. Our findings align with previous studies highlighting the link between proficient AI literacy and improved performance in data analysis and complex problem-solving. The results suggest that strengthening specific digital competences can enhance students’ understanding and application of AI technologies. Practical didactic recommendations include designing authentic learning situations that foster active student engagement, providing clear and structured guidance by teachers, and tailoring activities to students’ prior knowledge and digital competences. The research also indicates that there were significant correlations between students’ English proficiency level and some components of AI literacy and digital competencies. These findings should also be taken into account when improving existing curricula.
Future work will focus on several main directions. One is to investigate AI literacy using objective tests (such as the AILT) and to determine the correlation between the results obtained and students' digital competencies. It is also necessary to conduct longitudinal research that would add a dynamic dimension. Another important direction of future work is to include other competencies of the DigComp framework in the research.

Author Contributions

Conceptualization, V.A., M.M. and M.I.; Methodology, V.A., M.M. and M.I.; Validation, V.A., M.M. and M.I.; Formal analysis, M.I.; Investigation, V.A. and M.M.; Resources, V.A. and M.M.; Writing – original draft, V.A., M.M. and M.I.; Writing – review & editing, V.A., M.M. and M.I.; Visualization, V.A.; Supervision, M.M. and M.I.; Funding acquisition, V.A., M.M. and M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

According to Serbian practice and national context, research involving university students via anonymous questionnaires and standard objective tests, without clinical intervention or sensitive personal data, is considered minimal risk and does not require formal ethical committee approval.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article; further inquiries can be directed to the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abubakari, M. S., Zakaria, G. A. N., & Musa, J. (2025). Validating the DigComp framework among university students across different educational systems. Discover Education, 4(1), 200.
  2. Alenezi, M., & Akour, M. (2025). AI-driven innovations in software engineering: A review of current practices and future directions. Applied Sciences, 15(3), 1344.
  3. Audrin, C., & Audrin, B. (2022). Key factors in digital literacy in learning and education: A systematic literature review using text mining. Education and Information Technologies, 27(6), 7395–7419.
  4. Ayanwale, M. A., Adelana, O. P., Molefi, R. R., Adeeko, O., & Ishola, A. M. (2024). Examining artificial intelligence literacy among pre-service teachers for future classrooms. Computers and Education Open, 6, 100179.
  5. Bajpai, A., Yadav, S., & Nagwani, N. K. (2025). An extensive bibliometric analysis of artificial intelligence techniques from 2013 to 2023. The Journal of Supercomputing, 81(4), 540.
  6. Becker, J., Rush, N., Barnes, E., & Rein, D. (2025). Measuring the impact of early-2025 AI on experienced open-source developer productivity. arXiv, arXiv:2507.09089.
  7. Bećirović, S., Polz, E., & Tinkel, I. (2025). Exploring students’ AI literacy and its effects on their AI output quality, self-efficacy, and academic performance. Smart Learning Environments, 12(1), 29.
  8. Bewersdorff, A., Hornberger, M., Nerdel, C., & Schiff, D. S. (2024). AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students’ AI self-efficacy. Computers & Education: Artificial Intelligence, 8, 100340.
  9. Biagini, G., Cuomo, S., & Ranieri, M. (2024). Developing and validating a multidimensional AI literacy questionnaire: Operationalizing AI literacy for higher education. Available online: https://ceur-ws.org/Vol-3605/1.pdf (accessed on 25 October 2025).
  10. Bird, E., Fox-Skelly, J., Jenner, N., Larbey, R., Weitkamp, E., & Winfield, A. (2020). The ethics of artificial intelligence: Issues and initiatives. European Parliamentary Research Service, Directorate-General for Parliamentary Research Services.
  11. Brandl, L., Richters, C., Kolb, N., & Stadler, M. (2025, March 3). Can generative artificial intelligence ever be a true collaborator? Rethinking the nature of collaborative problem-solving. 2nd Workshop on Generative AI for Learning Analytics (GenAI-LA) (pp. 80–87), Dublin, Ireland. Available online: https://ceur-ws.org/Vol-3994/short6.pdf (accessed on 25 October 2025).
  12. Carolus, A., Koch, M. J., Straka, S., Latoschik, M. E., & Wienrich, C. (2023). MAILS—Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Computers in Human Behavior: Artificial Humans, 1(2), 100014.
  13. Chiu, T. K. F. (2025). AI literacy and competency: Definitions, frameworks, and implications. Interactive Learning Environments, 33(5), 3225–3229.
  14. Ding, L., Lawson, C., & Shapira, P. (2025). Rise of generative artificial intelligence in science. Scientometrics, 130, 5093–5114.
  15. Dörner, D., & Funke, J. (2017). Complex problem solving: What it is and what it is not. Frontiers in Psychology, 8(8), 1153.
  16. Ebert, J., & Kramarczuk, K. (2025, February 26–March 1). Leveraging undergraduate perspectives to redefine AI literacy. 56th ACM Technical Symposium on Computer Science Education (SIGCSE 2025) (pp. 290–296), Pittsburgh, PA, USA.
  17. European Commission. (2022). DigComp 2.2, the digital competence framework for citizens—With new examples of knowledge, skills and attitudes. Publications Office of the European Union.
  18. Garzón, J., Patiño, E., & Marulanda, C. (2025). Systematic review of artificial intelligence in education: Trends, benefits, and challenges. Multimodal Technologies and Interaction, 9(8), 84.
  19. Hong, L. (2025). Development and validation of a competency-based ladder pathway for AI literacy enhancement among higher vocational students. Scientific Reports, 15(1), 29898.
  20. Hornberger, M., Bewersdorff, A., & Nerdel, C. (2023). What do university students know about artificial intelligence? Development and validation of an AI literacy test. Computers and Education: Artificial Intelligence, 5, 100165.
  21. Hornberger, M., Bewersdorff, A., Schiff, D. S., & Nerdel, C. (2025). A multinational assessment of AI literacy among university students in Germany, the UK, and the US. Computers in Human Behavior: Artificial Humans, 4, 100132.
  22. Hsu, T. C., & Chen, M. S. (2024). Effects of students using different learning approaches for learning computational thinking and AI applications. Education and Information Technologies, 30, 7549–7757.
  23. Jin, Y., Martinez-Maldonado, R., Gašević, D., & Yan, L. (2024). GLAT: The generative AI literacy assessment test. arXiv, arXiv:2411.00283.
  24. Knoth, N., Decker, M., Laupichler, M. C., Pinski, M., Buchholtz, N., Bata, K., & Schultz, B. (2024). Developing a holistic AI literacy assessment matrix—Bridging generic, domain-specific, and ethical competencies. Computers and Education Open, 6, 100177.
  25. Laupichler, M. C., Aster, A., Meyerheim, M., Raupach, T., & Mergen, M. (2024). Medical students’ AI literacy and attitudes towards AI: A cross-sectional two-center study using pre-validated assessment instruments. BMC Medical Education, 24(1), 401.
  26. Lintner, T. (2024). A systematic review of AI literacy scales. NPJ Science of Learning, 9(1), 50.
  27. Long, D., & Magerko, B. (2020, April 25–30). What is AI literacy? Competencies and design considerations. 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16), Honolulu, HI, USA.
  28. López-Meneses, E., Sirignano, F. M., Vázquez-Cano, E., & Ramírez-Hurtado, J. M. (2020). University students’ digital competence in three areas of the DigComp 2.1 model: A comparative study at three European universities. Australasian Journal of Educational Technology, 36(3), 69–88.
  29. Luckin, R., Holmes, W., Griffiths, M., & Pearson, L. (2016). Intelligence unleashed: An argument for AI in education. Open Ideas Series. Pearson. Available online: https://edu.google.com/pdfs/Intelligence-Unleashed-Publication.pdf (accessed on 15 November 2025).
  30. Mansoor, H. M. H., Bawazir, A., Alsabri, M. A., Alharbi, A., & Okela, A. H. (2024). Artificial intelligence literacy among university students—A comparative transnational survey. Frontiers in Communication, 9, 1478476.
  31. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. (2021). Conceptualizing AI literacy: An exploratory review. Computers & Education: Artificial Intelligence, 2, 100041.
  32. Oberländer, M., Beinicke, A., & Bipp, T. (2020). Digital competencies: A review of the literature and applications in the workplace. Computers & Education, 146, 103752.
  33. OECD. (2025). Empowering learners for the age of AI: An AI literacy framework for primary and secondary education. Available online: https://ailiteracyframework.org/ (accessed on 20 October 2025).
  34. Pusey, P., Gupta, M., Mittal, S., & Abdelsalam, M. (2024). An analysis of prerequisites for artificial intelligence/machine learning-assisted malware analysis learning modules. Journal of the Colloquium for Information Systems Security Education, 11(1), 5.
  35. Qazi, A., Hasan, N., Abayomi-Alli, O., Hardaker, G., Scherer, R., Sarker, Y., Kumar Paul, S., & Maitama, J. Z. (2021). Gender differences in information and communication technology use & skills: A systematic review and meta-analysis. Education and Information Technologies, 27(3), 4225–4258.
  36. Rioseco-Pais, M., Silva-Quiroz, J., & Vargas-Vitoria, R. (2024). Digital competences and years of access to technologies among Chilean university students: An analysis based on the DigComp framework. Sustainability, 16(22), 9876.
  37. Runge, I., Hebibi, F., & Lazarides, R. (2025). Acceptance of pre-service teachers towards artificial intelligence (AI): The role of AI-related teacher training courses and AI-TPACK within the technology acceptance model. Education Sciences, 15(2), 167.
  38. Shen, Y., & Cui, W. (2024). Perceived support and AI literacy: The mediating role of psychological needs satisfaction. Frontiers in Psychology, 15, 1415248.
  39. Siddharth, S., Prince, B., Harsh, A., & Ramachandran, S. (2025). ‘The World of AI’: A novel approach to AI literacy for first-year engineering students. In A. I. Cristea, E. Walker, Y. Lu, O. C. Santos, & S. Isotani (Eds.), Artificial intelligence in education. Posters and late breaking results, workshops and tutorials, industry and innovation tracks, practitioners, doctoral consortium, blue sky, and WideAIED (pp. 250–257). Springer.
  40. Simon, H. A., & Newell, A. (1971). Human problem solving: The state of the theory in 1970. American Psychologist, 26(2), 145–159.
  41. Tenberga, A., & Daniela, L. (2024). Artificial intelligence literacy competencies for teachers through self-assessment tools. Sustainability, 16(23), 10386.
  42. Van Audenhove, L., Vermeire, L., Van den Broeck, W., & Demeulenaere, A. (2024). Data literacy in the new EU DigComp 2.2 framework: How DigComp defines competences on artificial intelligence, internet of things and data. Information and Learning Sciences, 125(5–6), 406–436.
  43. Walter, Y. (2024). Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15.
  44. Wang, B., Rau, P. L. P., & Yuan, T. (2022). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324–1337.
  45. Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., Chandak, P., Liu, S., Van Katwyk, P., Deac, A., Anandkumar, A., Bergen, K., Gomes, C. P., Ho, S., Kohli, P., Lasenby, J., Leskovec, J., Liu, T.-Y., Manrai, A., & Marks, D. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620(7972), 47–60.
  46. Weichert, J., Kim, D., Zhu, Q., Kim, J., & Eldardiry, H. (2025). Assessing computer science student attitudes towards AI ethics and policy. AI and Ethics, 5, 5985–6006.
  47. Wen, H., Lin, X., Liu, R., & Su, C. (2025). Enhancing college students’ AI literacy through human-AI co-creation: A quantitative study. Interactive Learning Environments, 1–19.
  48. Wu, D., Sun, X., Liang, S., Qiu, C., & Wei, Z. (2025). Construction of AI literacy evaluation system for college students and an empirical study at Wuhan University. Frontiers of Digital Education, 2, 6.
  49. Xia, Q., Chiu, T. K. F., Lee, M., Sanusi, I. T., Dai, Y., & Chai, C. S. (2022). A self-determination theory (SDT) design approach for inclusive and diverse artificial intelligence (AI) education. Computers & Education, 189, 104582.
  50. Zhang, Y., & Tian, Z. (2025). Digital competence in student learning with generative artificial intelligence: Policy implications from world-class universities. Journal of University Teaching and Learning Practice, 22(2), 1–22.
Figure 1. Significant correlations between English proficiency level and perceived AI literacy scores.
Figure 2. Significant correlations between the levels of Digital signal processing, Programming, and Creative thinking and AI literacy scores.
Table 1. AI literacy scores.

AI Literacy Construct | Items | α | M (Male) | SD (Male) | M (Female) | SD (Female)
Awareness | 3 | 0.769 | 5.16 | 1.54 | 5.32 | 1.64
Usage | 3 | 0.788 | 5.29 | 1.48 | 5.53 | 1.50
Evaluation | 3 | 0.631 | 5.04 | 1.52 | 5.23 | 1.54
Ethics | 3 | 0.715 | 5.29 | 1.48 | 5.59 | 1.48
Table 2. Scores in digital competences and skills.

Digital Competence and Skill | M (Male) | SD (Male) | M (Female) | SD (Female)
Digital signal processing | 5.01 | 2.46 | 5.84 | 3.26
Computer architecture | 5.00 | 2.32 | 5.92 | 2.48
Programming | 6.51 | 2.40 | 7.36 | 2.20
Software development | 3.37 | 2.50 | 2.48 | 2.35
3D modeling | 3.79 | 2.35 | 4.72 | 3.19
Computer animation | 3.65 | 2.51 | 4.24 | 3.39
Teamwork/communication | 2.87 | 2.40 | 4.24 | 3.09
Creative thinking | 6.09 | 2.49 | 7.12 | 2.31
Digital design | 4.99 | 2.64 | 6.00 | 3.19
Project management | 5.40 | 2.47 | 6.48 | 2.96
Table 3. Correlations between AI literacy and digital competences and skills.

Digital Competence and Skill | Awareness | Usage | Evaluation | Ethics
Digital signal processing | 0.35 ** | 0.29 ** | 0.34 ** | 0.29 **
Computer architecture | 0.30 ** | 0.17 | 0.30 ** | 0.30 **
Programming | 0.32 ** | 0.28 ** | 0.24 * | 0.28 **
Software development | 0.10 | −0.13 | 0.04 | 0.10
3D modeling | 0.08 | −0.09 | 0.09 | 0.16
Computer animation | 0.11 | −0.10 | 0.09 | 0.08
Teamwork/communication | −0.02 | −0.13 | 0.06 | 0.05
Creative thinking | 0.36 ** | 0.44 ** | 0.27 ** | 0.43 **
Digital design | 0.33 ** | 0.20 | 0.21 * | 0.27 **
Project management | 0.13 | 0.05 | 0.13 | 0.14
* p < 0.05, ** p < 0.01.
