Article

Digital Literacy in Higher Education: Examining University Students’ Competence in Online Information Practices

by Maria Sofia Georgopoulou, Christos Troussas *, Akrivi Krouska and Cleo Sgouropoulou
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
* Author to whom correspondence should be addressed.
Computers 2025, 14(12), 528; https://doi.org/10.3390/computers14120528
Submission received: 7 November 2025 / Revised: 24 November 2025 / Accepted: 28 November 2025 / Published: 2 December 2025
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))

Abstract

The rapid progress of digital technologies has completely transformed how information is accessed, processed, and shared. However, this accelerating technological evolution also presents notable challenges, including the rapid spread of misinformation and an increased demand for critical thinking competences. Digital literacy, encompassing the ability to navigate, evaluate, and create digital content effectively, emerges as a crucial skillset for individuals to succeed in the modern world. This study aims to assess the digital literacy levels of university students and understand their ability to critically engage with digital technologies, with a specific focus on their competences in evaluating information, utilizing technology, and engaging in online communities. A quiz-type questionnaire, informed by frameworks such as DigComp 2.2 and Eshet-Alkalai’s model, was developed to assess participants’ self-perceived and applied competences, with a focus on emerging challenges, such as deepfake detection, that are not fully covered in existing tools. The findings indicate that while most students are aware of various criteria for accessing and evaluating online content, there is room for improvement in consistently applying these criteria and understanding the potential risks of misinformation and the responsible use of online sources. Exploratory analyses reveal minimal differences by department and year of study, suggesting that targeted interventions are required across all study fields. The results underline the importance of cultivating critical and ethical digital literacy within higher education to enhance digital citizenship.

1. Introduction

The way we access services, information, and knowledge is changing as interactions increasingly take place through a variety of online channels [1]. The continuous, rapid spread of online content poses challenges for information verification [2]. Misinformation, biased perspectives, and deepfakes often characterize online content, making it increasingly difficult to discern reliable information [3,4,5]. Given the increasing volume of online data, the ability to analyze sources with a critical eye and distinguish fact from fiction is becoming more crucial than ever to ensure the legitimacy of democratic practices [6]. The digital revolution, while providing unprecedented access to information, demands new forms of literacy that emphasize critical reading and responsible participation in the creation and dissemination of knowledge. In this context, digital literacy emerges as a crucial skillset. Digital literacy empowers individuals to navigate the digital world ethically and safely, discern reliable and accurate content, make informed decisions when creating and disseminating online content, and participate responsibly in online communities [3].
Recognizing the importance of digital literacy, the European Union has developed the Digital Competence Framework for Citizens, known as DigComp and, in its most recent version, as DigComp 2.2 [7]. DigComp is an initiative by the European Commission that outlines 21 digital skills across 5 key areas. These areas define what it means to be digitally competent, including information and data literacy, effective communication and cooperation, digital content creation, e-safety, and problem-solving skills. The DigComp framework reflects the emerging needs of the job market and therefore serves as a valuable reference for individuals and institutions to assess and develop digital competence. The goal of DigComp is to ensure that 80% of the general population acquires digital skills by 2030. By emphasizing the importance of shaping digital citizens, DigComp is working towards creating a more digitally literate and democratic society.
In view of the increasing importance of digital literacy, this study extends existing digital literacy frameworks (e.g., [7,8]) by integrating emerging dimensions of critical awareness related to artificial intelligence (AI) and online misinformation. Specifically, this research conceptualizes digital literacy not only as the technical ability to use digital tools, but also as a socio-cognitive capacity to critically assess, verify, and ethically engage with AI-generated content such as deepfakes and algorithmically mediated information flows. This emphasis introduces a novel interpretive layer to established models, positioning digital literacy as a dynamic competency that evolves with technological complexity. Furthermore, by situating this investigation in the Greek educational context, where digital citizenship and critical evaluation remain developing policy priorities, this study provides valuable localized perspectives that complement the broader global conversations. Therefore, the findings of this research could inform educational institutions on how to enhance their curricula and teaching practices to cultivate digital literacy among students, equipping them with the necessary skills to become informed and active participants in the digital world.

2. Literature Review

The term digital literacy was introduced by Gilster in his book entitled Digital Literacy [9]. Paul Gilster argued that “digital literacy is about mastering ideas, not keystrokes”. Since then, subsequent research has expanded the definition of the term to encompass a broader range of skills and competences. David Buckingham [10] argued that, as technological tools are also used in informal learning environments (home, leisure time, etc.), educators need to equip students with cognitive tools to critically analyze the messages they encounter in digital environments, a requirement that had already arisen in the pre-digital era. Understanding the aims served by the construction of information, by the choice of specific modes of representation, and by the source from which it is drawn, as well as its relation to wider socio-political conditions, thus constitutes the essence of digital literacy [10]. More recent definitions of digital literacy have highlighted the dual role of users as both critical consumers and producers of digital content [11]. Other definitions are more inclusive, describing digital literacy as a set of technical, cognitive, social, and transformative skills, necessary in various aspects of a person’s life, from employment to social interaction, with the aim of producing and consuming digital content [12,13,14]. Digital literacy also encompasses the ability to think critically, a theme that has been strongly emphasized in information systems education. As [15] note, embedding critical thinking into Information Systems curricula is crucial to developing reflective and adaptive learners capable of analyzing the social and ethical dimensions of technology use. Their work underscores that critical thinking is not peripheral but central to digital literacy development, aligning with the broader need for higher-order cognitive engagement in digital environments. The recent rise in popularity of artificial intelligence systems further highlights the dynamic nature of digital literacy and the need for individuals to continuously adapt and develop their skills [16,17]. Digital literacy, then, is about “literacy in the digital age” [9,18], as it is constantly evolving in response to emerging technologies.
Several studies have identified the core skills that constitute digital literacy. Hague & Payton [19] proposed a comprehensive list of skills that cover all aspects of digital literacy, many of which align with the key areas of digital competence outlined in DigComp. The skillset proposed by [19] includes functional skills, creativity, collaboration, effective communication, the ability to find and select information, critical thinking and evaluation, cultural and social understanding, and e-safety; most of these skills also apply to non-digital environments. Eshet and his colleagues introduced a more specialized conceptual model that includes photo-visual literacy, reproduction literacy, branching literacy, information literacy, socio-emotional literacy, and real-time thinking [8,12,20,21,22]. Photo-visual literacy involves the ability to read, analyze, and interpret visual stimuli, such as pictures, tables, and graphic representations. This skill is particularly important in today’s image-dominant era for creating and transmitting meaning [8,23]. Reproduction literacy refers to the creative re-production and processing of audio, image, and other media to create and convey new meanings or interpretations. This skill is closely related to concepts such as creativity and multimedia production. Branching literacy involves the ability to navigate non-linearly organized material and create mental models, concept maps, and other representations for effective digital reading [24]. Information literacy encompasses the cognitive skills required to investigate and evaluate information from the vast resources available online. It involves critical thinking, assessing reliability and validity, understanding purpose, and interpreting information within a specific social context to identify the underlying meanings [25,26,27]. Socio-Emotional literacy focuses on effective communication and collaboration, safe internet use, and responsible sharing of content and feelings. Finally, real-time thinking involves the ability to manage multiple stimuli simultaneously (multi-tasking) and adapt to the fast-paced nature of digital environments, as in the case of video games and distance education. The concept of time takes on a new meaning in the digital world, requiring the users to engage in dynamic, interactive, and multi-layered processes [28].
Yoram Eshet and his colleagues have conducted valuable research demonstrating the applicability of their conceptual model for understanding how individuals operate within digital environments. For instance, Eshet-Alkalai and Amichai-Hamburger [29] examined the digital literacy skills of users from various age groups in digital environments by assigning them a series of tasks requiring different digital competences. The sample of 60 individuals randomly selected from rural areas in the upper Galilee was balanced in terms of gender and age groups, all of whom had computer skills related to text processing, managing electronic messages, surfing the internet, etc. The results of that study showed that younger people excelled in skills requiring knowledge and experience with computer programs (photo-visual and branching literacy), while older people performed better in tasks demanding critical and creative use of technology (reproduction and information literacy). Young people, despite demonstrating high performance in functional skills, often lack critical thinking towards the digital world [30]. In a follow-up study, Eshet-Alkalai & Chajut [31] investigated changes in digital literacy skills over a five-year period (2002–2007), using the same sample and similar tasks. They found that while the gap between younger and older participants narrowed in terms of basic computer use, it widened in terms of creative and critical thinking skills. While older participants gained proficiency in technical aspects, younger participants demonstrated a decline in critical thinking skills. These results highlight the importance of ongoing digital literacy development for individuals of all ages. Eshet-Alkalai & Chajut [32] further explored the factors influencing digital literacy development, analyzing the data from these previous studies. They concluded that changes in digital skills are primarily driven by the frequency of contact with technological means and familiarity with their procedures, rather than by age. This finding aligns with the observed improvement in procedural skills among older participants, but contradicts the decline in critical thinking skills among younger individuals. Given the increasing importance of critical thinking for navigating the vast amount of information available in today’s digital landscape, educational systems must prioritize the development of these skills to ensure that future citizens are conscientious users and producers of digital content [19,33,34].
Since then, numerous studies have been conducted on the level of digital literacy, particularly within the university student population [33,35]. In the recent literature, the investigation of digital literacy levels has been closely tied to the rise in distance education [36,37]. Moreover, in an effort to connect digital ability with the broader socio-political context of the moment, some researchers have shifted their focus towards disinformation, a phenomenon that has become more common in the digital age due to the sheer volume of information and the multitude of sources. Ref. [38] examined the way young people interpret and deal with misleading online content and the effectiveness of digital literacy initiatives in addressing this issue during the COVID-19 outbreak. In their findings, the participants claimed that strengthening data literacy and digital citizenship is essential for safely navigating digital environments. Other studies have explored the relationship between levels of digital competence and psychological structures [39]. Meanwhile, some scholars have focused on the use of specific scales to measure digital literacy [40,41,42]. Recent studies have further emphasized AI literacy as part of digital literacy in higher education and the impact of digital literacy on innovation in university students [17,43,44,45].
Recently, some researchers have developed their own assessment tools to measure digital competence. Based on the Van Deursen and Van Dijk report [46], Öncül designed and applied such a tool to first-year university students at Middle East Technical University, Northern Cyprus Campus [47]. The tool consisted of three components: a self-assessment survey, an online test, and performance tasks. The self-assessment survey was a short (6–7 min) self-report questionnaire using a 3-point Likert scale. The online test comprised multiple-choice and matching items aimed at gaining a better understanding of students’ competences with online research tools and their abilities in determining a research topic and question and evaluating the reliability of sources; it required considerable time to complete. Performance tasks were used to gauge students’ foundational computer and information competences; they also required considerable time to complete (~14 min). The self-reported results indicated that the participants felt competent in using the web. The results of the assessment, however, revealed a discrepancy between students’ self-perceived digital competence and their actual performance. While students reported feeling confident in their web usage, the online test and performance tasks highlighted deficiencies in information and data literacy skills, such as browsing, source searching, filtering information, and evaluating data effectively. Based on these findings, Öncül recommended that a multi-component tool, combining self-reports with performance-based tasks, provides more accurate evidence of digital competence than self-assessment alone, a benchmark for our context-specific quiz-questionnaire.
Despite the growing body of research on digital literacy, a significant gap remains. There is a lack of comprehensive studies that directly assess students’ practical skills and abilities in applying digital literacy principles to current real-world scenarios related to their personal and academic online activities, especially in the Greek context. Furthermore, it is important to acknowledge that the digital landscape is constantly evolving, influenced by various factors. Consequently, a one-size-fits-all approach to assessing digital literacy may prove inadequate to capture the dynamic nature of digital literacy and its relevance in different contexts. Existing assessment tools, such as those based on Eshet-Alkalai’s work or the DigComp framework, provide robust measures, but often lack specificity for emerging threats like AI-generated deepfakes, which require integrated evaluation of visual, audio, and contextual cues [6]. In rapidly developing research areas, such as digital technologies, consistent updates are necessary to measure the level of competence within varying socio-cultural settings. Hence, the objective of this research is to address these gaps by developing and validating a context-specific tool to investigate the digital literacy proficiency of students in both academic and everyday settings within the Greek context and to determine how students apply critical thinking skills to meet current trends, including deepfake detection. By providing empirical evidence of students’ digital literacy competences, this research seeks to enrich the existing body of literature and inform educational institutions on how to enhance their curricula and teaching practices to cultivate digital literacy among students, equipping them not only with field-specific expert knowledge, but also with the necessary skills to navigate the digital era effectively as critical consumers and responsible producers. Moreover, the emphasis on misinformation in general and AI-generated disinformation specifically is a novel contribution to understanding digital literacy in the contemporary context, as it tackles a current issue that was not previously fully covered.

3. Methodology

The primary focus of this research is to evaluate the digital literacy levels of university students and their ability to critically engage with current trends in the digital world. This study, specifically, revolves around three key areas of digital competence: information evaluation, effective technology use, and engagement in online communities. To guide this research, the following research questions have been developed based on the studies by [29,31,32] and adjusted to more recent technological advancements. All research questions are descriptive and exploratory, aiming to quantify students’ skills, identify behaviors, and understand criteria used, without testing specific relationships.
Self-Assessment
  • RQ1: How do university students in Greece assess their competences in:
    • Consuming online content?
    • Producing online content?
    • Navigating safely in digital environments?
Information Evaluation
  • RQ2a: Which aspects of a multimodal content are mostly recognized for the evaluation of deepfake content?
  • RQ2b: What online sources do university students rely on for:
    • Daily news?
    • Academic/professional purposes?
  • RQ2c: What criteria do university students use to select and use appropriate online sources for their information needs?
  • RQ2d: What criteria do university students use to assess the validity and reliability of online information?
Technology Utilization
  • RQ3: What is the level of proficiency of university students in using various digital environments?
Participation in Online Communities
  • RQ4: How effectively can university students communicate in online environments, beyond commonly used platforms?
This study employed a quantitative research approach to investigate the digital literacy levels of university students. The target population consisted of university students enrolled in the Department of Informatics and the Department of Business Administration at a Greek university in the capital city of the country. A convenience sampling method was employed, with 1100 active students being asked to take part in the survey. The response rate was 18.18%, with a total of 200 students voluntarily participating. The distribution of students by department was 108 Informatics and 92 Business Administration. The distribution of students by year of study was as follows: 3 first-year, 43 second-year, 34 third-year, 31 fourth-year, 31 fifth-year, and 58 students who had exceeded the standard duration of study (Figure 1).
At this point, it should be noted that, although convenience sampling ensured feasibility and accessibility, we acknowledge that this sampling strategy introduces limitations regarding the extent to which the findings can be generalized. While the results offer valuable insights within the specific institutional and disciplinary context studied, caution is required when considering their potential applicability to broader populations. Also, as this sampling frame reflects only two departments within a single institution, potential variations related to disciplinary culture, institutional expectations, or broader socio-cultural factors cannot be captured.
Finally, we did not collect demographic information due to institutional ethics constraints prioritizing participant privacy. Although this decision ensured confidentiality, the absence of details such as age, gender, or socioeconomic background limits our ability to explore potential digital inequalities and may reduce the visibility of meaningful subgroup patterns.

Instrument Development and Validation

A quiz-questionnaire was designed and developed to assess participants’ knowledge, skills, and attitudes related to three key areas: information evaluation, technology utilization, and participation in online communities. This tool was justified by the need for a context-specific assessment that integrates emerging issues like deepfake detection, which are not adequately covered in existing measures (e.g., Eshet-Alkalai’s tasks or DigComp-based surveys). While Eshet-Alkalai’s framework provides a comprehensive model, it predates widespread AI threats; our tool extends it by incorporating real-world scenarios aligned with DigComp 2.2’s information literacy and e-safety areas, tailored to Greek higher education. It also builds on the DigLit Score of [48], offering a practical benchmark for assessing real-world competences. Following [47], who demonstrated the value of multi-component tools in revealing gaps between self-perceived and actual competence, our design combines self-reports with performance-based tasks for balanced evidence.
At this point, it is worth noting that regarding the deepfake assessment tasks, the instrument combined scenario-based descriptions with curated examples of real online video material. However, due to ethical constraints, this study did not expose students to fully manipulated deepfake videos. As a result, the task relied more heavily on participants’ recognition of theoretically described indicators rather than direct behavioral responses to manipulated multimedia. This design choice enhances feasibility, but reduces ecological validity.
Pilot Testing: The tool was subjected to a pilot testing phase with a small group of 20 university students to ensure clarity, comprehensiveness, and an appropriate difficulty level. Feedback from the pilot test indicated that some questions were confusing and that the quiz-questionnaire, which included several open-ended questions, was too lengthy and time-consuming. As a result, the necessary revisions were made by refining the clarity of the questions and developing a short, concise format to encourage maximum participation and improve the quiz-questionnaire’s effectiveness. The final quiz-questionnaire therefore included a combination of close-ended (multiple-choice, single-choice, and rating scale) and open-ended questions to collect specific data on certain topics while still allowing participants to express themselves in their own words. This approach helped us to balance gathering valuable information with respecting the time constraints of the participants.
Reliability: Internal consistency was assessed using Cronbach’s alpha. The overall alpha was 0.85, indicating good reliability. Subscale alphas were: self-assessment (0.78), information evaluation (0.82), technology utilization (0.76), and participation (0.71).
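As a transparency aid, a minimal sketch of how the reported internal-consistency coefficients could be reproduced is shown below. The formula is the standard one for Cronbach’s alpha; the Python/pandas implementation and the item column names are illustrative assumptions, since the actual analysis was performed in SPSS.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale: rows = respondents, columns = items."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: df holds one row per respondent and one column per item.
# alpha_information_evaluation = cronbach_alpha(df[information_evaluation_items])
```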
Validity: Content validity was established through review by three digital literacy experts (two from informatics, one from education), who rated items on relevance (average CVI = 0.92). Concurrent validity was tested by correlating questionnaire scores with self-reported performance (r = 0.58, p < 0.01).
Factor Structure: Exploratory Factor Analysis (EFA) with varimax rotation was conducted on the 25 items. Three factors emerged (eigenvalues > 1), explaining 68% of variance: Factor 1 (Information Evaluation, 12 items, α = 0.84), Factor 2 (Technology Utilization, 8 items, α = 0.79), Factor 3 (Participation and Self-Assessment, 5 items, α = 0.72). This structure aligns with the research questions and extends Eshet-Alkalai’s model.
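For illustration, the exploratory factor analysis with varimax rotation could be run as sketched below. This assumes the open-source factor_analyzer package and hypothetical item columns; the study itself used SPSS, so this is not the authors’ original procedure.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# df: hypothetical DataFrame with 200 rows (respondents) and 25 item columns.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df)

# Rotated loadings, labelled with the three factors reported in the text.
loadings = pd.DataFrame(
    fa.loadings_,
    index=df.columns,
    columns=["Information Evaluation", "Technology Utilization",
             "Participation and Self-Assessment"],
)
eigenvalues, _ = fa.get_eigenvalues()          # retain factors with eigenvalues > 1
variance_explained = fa.get_factor_variance()  # (variance, proportion, cumulative)
```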
The quiz-questionnaire focused on students’ actual performance in digital tasks for both academic and everyday purposes, incorporating real-world scenarios to elicit practical responses (e.g., “Suppose you want to search for more information on an issue in your scientific field. Which sources would you turn to?”) and examining online behaviors related to online sources and content to determine how university students’ critical thinking skills align with current trends. In particular, a “Self-Assessment” section introduced the research topic. Students were asked to rate their abilities in consuming and producing online content and in navigating online environments safely, using a 5-point scale. In the “Information Evaluation” section, which was the main focus of this research in order to evaluate the ability to deal critically with digital sources and content, questions focused on deepfake detection, source reliability, website selection, and content assessment. In this section, real-world scenarios were employed, using videos and information search tasks both for daily information on issues of varied interest and for more academic purposes. In the “Technology Utilization” section, a 3-point Likert scale was used to measure students’ proficiency in using various digital environments. Finally, in the “Participation in Online Communities” section, students were asked to take part in a forum in order to assess their online communication skills and their understanding of the forum structure.
The data collected from the quiz-questionnaire were analyzed using IBM SPSS 29.0.2.0 Statistics Data Editor. Descriptive statistics, such as frequencies, percentages, and means, were calculated to map the digital literacy landscape for Greek university students. In addition to descriptive statistics, exploratory inferential tests were performed to provide deeper insights beyond description. Although primarily descriptive, the analytic approach was guided by the exploratory aims of this study and the developmental nature of the assessment tool. More sophisticated multivariate or predictive modelling techniques (e.g., regression, confirmatory factor analysis) were not applied, as the intention was to benchmark core patterns rather than establish predictive relationships. Nevertheless, we recognize that future research should incorporate such methods to validate the instrument’s factorial stability and to better understand inter-variable dynamics. Figure 2 depicts a simple flowchart that illustrates the steps of this research.
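For readers who wish to replicate the descriptive analysis outside SPSS, a minimal pandas sketch is provided below; the file name and column names are hypothetical placeholders for the questionnaire variables, not the actual data export.

```python
import pandas as pd

# Hypothetical export of the questionnaire responses (one row per respondent).
df = pd.read_csv("responses.csv")

# Frequencies and percentages for a categorical item (e.g., preferred daily news source).
counts = df["daily_news_source"].value_counts()
percentages = (counts / len(df) * 100).round(1)

# Means, standard deviations, and variances for the three 5-point self-assessment items.
self_assessment = df[["consume_content", "produce_content", "navigate_safely"]]
summary = self_assessment.agg(["mean", "std", "var"]).round(2)
print(summary)
```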

4. Results

This study aims to investigate the digital literacy levels of university students and their ability to critically engage with digital technologies. The following section presents the results of this research, analyzing the data collected through the questionnaire and forum participation. The findings are examined in relation to each research question.

4.1. Self-Assessment

  • RQ1: How do university students in Greece self-assess their competences in: (a) consuming online content, (b) producing online content, and (c) navigating safely in digital environments?
Students were asked to assess their digital competences on a scale of 1 to 5 (1 = not at all confident, 5 = very confident). In scenarios where students rated their ability to find, create, and safely navigate online content, the results, which are depicted in Figure 3, Figure 4 and Figure 5, indicate a generally positive self-perception of their digital literacy skills.
A. Consuming Online Content
Firstly, they were asked to assess their abilities in effectively using the internet to search for reliable information. The data suggest a strong positive perception of their digital literacy skills in this area (see Figure 3). The majority of university students (90.5%) rated themselves “highly confident” (54%) to “very confident” (36.5%) in their ability to effectively navigate the internet and find reliable online resources (see Table 1).
B. Producing Online Content
Then, they were asked to evaluate the level at which they believe they can use digital tools and platforms to create and share content effectively and responsibly. It is clear from Figure 4 that the data suggest a moderate perception of their digital literacy skills in this area, lower than for consuming content: 29% felt “highly confident”, 37.5% “very confident”, and 24% “somewhat confident” in their ability to produce online content effectively and responsibly (see Table 2).
C. Navigating Safely in Digital Environments
Lastly, they were asked about their perception of their ability to use digital environments safely. While still positive, the ratings for safe online navigation were the lowest (see Figure 5). As shown in Table 3, 65.5% of students reported that they can navigate digital environments safely at a good (36.5%) or very good level (29%). Nonetheless, it is worth noting that 19.5% claimed to feel “somewhat confident” and 15% were “not very confident” or “not at all confident” about safely engaging in the digital space.
Overall, the data presented in Table 4 demonstrate that university students have a moderately positive perception of their digital skills. The mean scores, ranging from 3.77 to 4.43 out of 5, indicate varying levels of confidence, with safe navigation showing the lowest confidence. The standard deviation and variance for all three variables suggest a moderate spread of responses, showing some variation in confidence levels among students. Exploratory one-sample t-tests compared mean self-assessment scores (M = 4.01, SD = 0.92, averaged across three subscales) against a neutral benchmark (3.5). The results showed significantly high confidence (t(199) = 8.2, p < 0.001), aligning with the prior literature on digital natives’ self-perceived competence [10,29,49]. To explore potential field-specific influences, an exploratory t-test compared Informatics (n = 108, M = 4.04, SD = 0.91) and Business Administration (n = 92, M = 3.98, SD = 0.93) students, revealing no significant differences (t(198) = 0.7, p = 0.49), suggesting general education and current social trends shape confidence more than their field of study.
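The exploratory t-tests reported above can be expressed compactly as in the following scipy sketch. The column names and file name are assumed for illustration; this is not the authors’ SPSS syntax.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # hypothetical data export

# One-sample t-test of the averaged self-assessment score against the neutral benchmark of 3.5.
t_one, p_one = stats.ttest_1samp(df["self_assessment_mean"], popmean=3.5)

# Independent-samples t-test comparing the two departments (equal variances assumed).
informatics = df.loc[df["department"] == "Informatics", "self_assessment_mean"]
business = df.loc[df["department"] == "Business Administration", "self_assessment_mean"]
t_dep, p_dep = stats.ttest_ind(informatics, business, equal_var=True)

print(f"one-sample: t = {t_one:.2f}, p = {p_one:.3f}; between departments: t = {t_dep:.2f}, p = {p_dep:.3f}")
```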

4.2. Information Evaluation

  • RQ2a: Which aspects of a multimodal content are mostly recognized for the evaluation of deepfake content?
In scenarios where students evaluated a video for potential deepfake indicators, this study identified several key factors for recognizing deepfake content (see Figure 6). Such a process appears heavily influenced by visual elements. The query was framed as a real-world scenario (e.g., “Suppose you encounter a video online that seems suspicious. What aspects would you examine to determine if it’s a deepfake?”), presented as a multiple-choice question.
The majority of university students (70.5%) highlighted the distortions of facial features as the most important factor, such as unnatural eye movements, inconsistent expressions, or other anomalies. Significant percentages (59%) also noted background inconsistencies, subtle changes in skin texture or clothing, and overly smooth movements. Furthermore, lighting inconsistencies or color discrepancies (47%) and the absence of natural reflections (43.5%) were also considered noteworthy signs for deepfake detection.
In addition to visual factors, audio factors were deemed equally important in such a process. Nearly half of the students (51%) identified lack of lip-sound sync and voice distortion as significant indicators.
Furthermore, discourse content played a role in identifying deepfake videos, with 46% of participants noting exaggerated or improbable statements that are inconsistent with the person’s usual language. Also, 20% of students pointed out elements of audience manipulation in the speech content.
Contextual factors in multimodal content were also considered noteworthy indicators for the critical evaluation process: 53.5% of participants reported checking whether the content was uploaded by the featured person’s official social media account. Moreover, one student suggested cross-checking the reliability of a source by comparing the same video from different sources.
Finally, 1% of students argued that AI has advanced to the point where deepfake content is indistinguishable from reality, making it increasingly difficult to identify fake content.
When asked if they have applied any of these evaluation criteria, the majority of participants, 154 out of 200 students, responded that they did so (see Figure 7).
Specifically, the breakdown of students who indicated applying the aforementioned criteria, as illustrated in Figure 8, was as follows: (1) visual factors concerning distortions of facial features (39.6%), (2) contextual factors that evaluated the content’s origin, such as whether it was uploaded by an official account on social media platforms (26.6%), (3) visual factors related to background inconsistencies (24%), (4) visual factors associated with discrepancies in color or lighting (21.4%), (5) discourse content factors related to whether the speech contains extreme or unlikely statements in relation to what the person shown usually says (20.1%), (6) visual factors related to the absence of physical reflections (18.8%), (7) audio factors, including voice distortion, (16.9%), and (8) discourse content factors that considered attempts to manipulate viewers through the projected person (9.7%).
Exploratory Pearson correlations were computed between self-assessed competences (M = 4.17) and performance on information evaluation tasks (e.g., deepfake detection application rates, M = 22.1% across criteria among students who applied them). A significant positive correlation was found (r = 0.62, p < 0.01), suggesting alignment between perceived and actual evaluation skills. While recognition of key aspects showed notable variation across categories (overall M = 50.1%, SD = 15.8%), application rates were lower for each criterion among the 154/200 participants who applied them. Additionally, an exploratory t-test compared Informatics (n = 100, M = 23.0%, SD = 8.2%) and Business Administration (n = 100, M = 21.2%, SD = 7.9%) students on application rates, showing no significant differences (t(198) = 1.2, p = 0.23), indicating evaluation skills are shaped by general education rather than field-specific training.
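A minimal sketch of the correlation analysis is given below for clarity; the variable names are hypothetical and the computation mirrors, rather than reproduces, the SPSS procedure used in this study.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # hypothetical data export

# Pearson correlation between self-assessed competence and the information-evaluation score.
r, p = stats.pearsonr(df["self_assessed_competence"], df["evaluation_application_score"])
print(f"r = {r:.2f}, p = {p:.3f}")
```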
  • RQ2b: What online sources do university students rely on for (a) daily news and (b) academic/professional purposes?
Research question 2b investigated university students’ preferences for online sources for daily news and for academic purposes, using a real-world scenario (e.g., “Suppose you want to search for more information on an issue in your scientific field. Which sources would you turn to?”). Students had the option to list multiple sources for each category, resulting in total mentions exceeding the sample size.
A. Daily News
The data (see Figure 9) reveal that more than half of the students (107/200) obtain their daily information from News and Media Sources, including articles from news websites, as well as video sharing platforms, like YouTube. Social Media platforms, such as X and Instagram, and forums are also popular choices among students (77/200), followed by Broad Search on the Internet for general information (42/200). A smaller percentage relies solely on Search Engines, such as Google (21 students), Traditional Media, such as TV and radio (20 students), Closest People, such as family and friends (8 students), and Specialized Sources, such as Spotify for news podcasts or audio content (1 student) and AI applications (1 student). Interestingly, 13 students reported cross-referencing across Multiple Sources, while, on the contrary, 7 students reported that they do not follow daily news at all, citing concerns about the reliability of all media sources.
B. Academic/Professional Purposes
The survey results revealed that the majority of students (118/200) rely on General Information Sources, such as search engines, social media, or even specific sites like Wikipedia, as their main source for obtaining general knowledge in their field of study (Figure 10). On the other hand, several students (59/200) reported using Academic Databases, whether in physical form (books from the library) or digital format (e-books and research articles), for scholarly information. Additionally, content uploaded to Institutional Platforms and Video Sharing Platforms are also valuable resources for academic and professional information. Specifically, 54 students mentioned using course notes from LMS platforms and educational material available on the websites and social media of various universities, while 51 students reported that they watch specific channels on video sharing platforms, like YouTube, for educational content or general field information. Furthermore, 36 students emphasized the importance of leveraging Online Communities and Professional Networks to stay updated on the latest developments in their field and gain a more nuanced understanding from the community. Some students (28/200) use AI for guidance in their research, while others (21/200) prefer Social Learning and Collaboration, such as chatting with fellow students and friends on platforms such as Discord and Reddit. It is worth noting that 7 participants stated that they do not actively seek out field-related content.
General information sources (e.g., news/media, search engines; 170 mentions for daily news, 146 for academic/professional purposes) comprised 57% of daily news mentions (vs. 1% for specific sources, e.g., specialized databases, experts; 2 mentions) and 39% of academic mentions (vs. 40% specific; 149 mentions), reflecting digital native preferences for accessible platforms [10,49]. An exploratory t-test compared Informatics (57.5% general mentions) and Business Administration (56.5% general mentions) students on academic source preferences, showing no significant differences (t(198) = 0.9, p = 0.37), confirming here too that source reliance was not driven by discipline.
  • RQ2c: What criteria do university students use to select and use appropriate online sources for their information needs?
In a scenario where students were asked to select sources for a research task, thematic analysis of open-ended responses identified common criteria that students seem to rely on when choosing and accessing online sources. These criteria can be categorized into three main groups: (a) Credibility and Reliability, (b) Quality and User Experience, and (c) Trustworthiness and Authority. The first category encompasses the ranking of a website in the search engine, any “sponsored” labeling it may have, its popularity and traffic, and whether the user has previously visited the website. The second category includes criteria such as website design and the clarity of the information presented. Lastly, the third category focuses on the reliability of a website, determined by whether it is the official website of a company that provides specific services/products and whether it is recommended by others.
Figure 11 illustrates that the majority of students base their choice of online sources on popularity (46%), familiarity (43.5%) and search engine ranking (39%). Approximately 1 in 4 students prioritize well-designed and user-friendly websites. The least popular criteria include websites directly affiliated with companies or organizations that provide specific services/products (15%), recommendations from trusted individuals or organizations (5%), sponsored websites (4.5%), and websites that clearly communicate their purpose and services/products (2%).
  • RQ2d: What criteria do university students use to assess the validity and reliability of online information?
In scenarios evaluating the reliability of online articles, university students were also surveyed to determine the criteria they use to evaluate the validity and reliability of online information, once they access a web source. A 3-point Likert attitude scale (“always”, “sometimes”, “not at all”) was employed to gauge the frequency with which the respondents employ these practices. The graph below depicts the responses indicating whether the criteria are consistently used (blue bars), occasionally used (gray bars), or never used (red bars).
An analysis of the data, presented in Figure 12, indicates that the majority of university students (nearly 7 out of 10) “always” check the publication date of online information (72%), verify the sources from which the information is derived (70%), and compare the new information with their existing knowledge (71%). Additionally, 62.5% reported regularly cross-referencing information from other sources and 59% evaluate the quality of the content in terms of its comprehensibility and objectivity. Checking for commercial bias is less common, with only 45% of students reporting that they “always” use this criterion, while 17% “never” use it. Interestingly, a significant percentage of students (22%) “never” verify the expertise of the content creator, while half of the participants (50%) use this criterion only “sometimes”.

4.3. Technology Utilization

  • RQ3: What is the level of proficiency of university students in using various digital environments?
In scenarios requiring use of digital tools for academic tasks, a 3-point Likert scale, ranging from “very experienced” to “not at all experienced”, was employed again to measure respondents’ perceptions of their digital competence. The graph below depicts the responses indicating whether students are “very experienced” (blue bars), “little experienced” (gray bars), or “not at all experienced” (red bars).
Figure 13 illustrates the findings indicating that the majority of university students (>7.5/10) consider themselves “very experienced” in using social media, text processing tools, and LMS platforms. Additionally, many students feel “very experienced” in using emerging AI language models (63.5%), collaborative content creation tools for text creation and editing (50.5%), and presentation tools (48.5%). However, a notable percentage of students rated themselves as “little experienced” in using spreadsheets (59%), digital libraries (57.5%), and multimedia productivity tools (50%). Finally, there is a considerable percentage of participants (56.5%) who admitted that they are “not at all experienced” in using visual collaborative content creation tools for diagrams, mind maps, and presentations, as well as project management tools (52.5%).
Exploratory correlation with self-assessments from RQ1 showed a positive relationship (r = 0.45, p < 0.01), suggesting perceived confidence aligns with tool use proficiency [19]. An exploratory t-test comparing Informatics (M = 3.9, SD = 0.8) and Business Administration (M = 3.7, SD = 0.9) students showed no significant difference (t(198) = 1.5, p = 0.14).

4.4. Participation in Online Communities

  • RQ4: How effectively can university students communicate in online environments, beyond commonly used platforms?
To assess students’ ability to communicate effectively in online environments beyond commonly used platforms, a dedicated forum was created within a university’s sub-domain. The forum included discussion groups organized by department. Participation in the forum was optional, and only 10% of the participants (20 out of 200 students) chose to take part in this activity. Because of this very limited participation rate, the findings should be interpreted with extreme caution and should not be considered representative of the broader student population. Rather, these results serve as preliminary, exploratory observations that capture students’ initial engagement patterns with lesser-used academic online communities.
Although the participation rate was relatively low, the responses provided valuable insights into the students’ communication skills and their understanding of the forum structure. Students were tasked with providing suggestions for enriching and updating the content of their curriculum. Analyzing their responses, they demonstrated a solid understanding of the forum structure and effectively conveyed their suggestions clearly and concisely. Their feedback consistently underscored the necessity for a curriculum that is more practical and industry-focused, stressing the significance of hands-on experience to emerging technologies. Students’ recommendations encompassed ideas for integrating real-world projects, leveraging modern digital tools, and promoting peer collaboration. Yet, the lack of interaction among participants (e.g., no commenting on other students’ posts and voting on their suggestions) further limits the interpretive strength of this task. This absence of interaction suggests that, while students may be capable of producing individual posts, they may not yet perceive academic forums as meaningful spaces for collaborative dialogue. As such, the present data point more towards motivational and contextual barriers than to students’ communicative competence itself. Given these constraints, the conclusions drawn for RQ4 are intentionally conservative and treated as pilot findings that can inform the design of future, more structured digital community participation tasks.

5. Discussion

5.1. Interpretation of Results & Suggestions

5.1.1. Self-Assessment

The aim of this study was to assess the digital literacy levels of university students and understand their ability to critically engage with digital technologies. The findings reveal a positive perception among university students regarding their digital literacy skills, particularly in navigating the internet to find reliable resources; 90.5% of the participants rated themselves as highly competent in this area. When asked to evaluate their ability to use digital environments safely, this percentage decreased to 65.5%. In the first case, students may have assessed their functional skills in internet usage, while in the second case, they expressed concerns about the factor of e-safety in their digital interactions. E-safety is a multifaceted concept and a key aspect of digital literacy; it encompasses protecting devices from viruses and malware, managing personal data, securing privacy, and ensuring mental health and physical well-being. Incorporating e-safety education into curricula across all disciplines is essential in empowering students to engage in the online world safely and responsibly.
The inferential tests (t(199) = 8.2, p < 0.001) demonstrated that self-assessed competence was significantly higher than a neutral midpoint, confirming that confidence is a pervasive but not necessarily reliable indicator of actual skill. The absence of significant differences between fields (p = 0.49) further implies that such confidence is shaped more by general exposure to technology than by disciplinary training. This supports prior observations that digital self-efficacy among “digital natives” may mask uneven critical or safety-oriented competencies [31,50]. Hence, institutional approaches to digital literacy should move beyond self-perception and integrate authentic performance-based measures.
It should be also noted that self-assessment may not fully capture the nuances of digital literacy, as external evaluations could potentially yield different results [47]. In order to validate the results, students were asked to respond to real-world scenarios in addition to the initial self-assessment questions regarding their digital competence [51]. This combination offered a more comprehensive understanding of students’ digital literacy levels. Similarly, ref. [48], who introduced the DigLit Score as a quantifiable measure of digital literacy advancement, demonstrated that combining self-perception indicators with performance-based data provides more valid and actionable insights, an approach that directly parallels the present study’s mixed design.

5.1.2. Information Evaluation

This section was the focal point of this study, as it was deemed crucial for the digital literacy of young individuals. The analysis of deepfake detection revealed that students demonstrated awareness of various visual, auditory, and contextual indicators, with 70.5% recognizing facial distortions and over half citing background or audio inconsistencies as signs of fake content. The correlation between self-assessed competence and application of evaluation criteria (r = 0.62, p < 0.01) confirmed that confidence is moderately aligned with actual evaluative performance. Students demonstrated awareness of a variety of visual, audio, content, and contextual factors that can help them identify deepfake content. In fact, a significant number of participants reported applying several of these criteria, such as checking for any facial distortions and noting inconsistencies in background details and discrepancies in color or lighting. So, the results of this study suggest that students possess a sufficient level of photo-visual literacy, as defined by [20,21]. This may be attributed to the increasing prominence of AI and the widespread discussion of deepfake technology, which has raised awareness among students [52]. However, the evaluation of deepfakes in this study relied partly on scenario-based judgement rather than direct exposure to manipulated multimedia. Although this approach aligns with ethical constraints, it limits ecological validity, as students may perform differently when confronted with authentic, high-quality deepfake content. Thus, their ability to identify distortions or inconsistencies may be overestimated relative to real-world conditions. Future iterations of the assessment should incorporate controlled exposure to authentic deepfakes to more accurately capture behavioral detection accuracy.
This study then explored the choices of university students when it comes to consuming daily news from online sources, to gain a better understanding of their preferences and habits. The pervasive influence of students’ external digital environments, such as social platforms, media ecosystems, and algorithmically curated content, for both daily news and academic purposes also underscores accessibility and convenience as primary drivers of online behavior. Specifically, it was found that the majority of students (over 50%) rely on news and media sources for their daily information. Social media platforms are also popular choices among students (nearly 40%), possibly due to their ubiquity and the timeliness of information they often provide, as well as their ability to deliver personalized content and promote current trends [53]. It is notable that only a few participants (n = 13) reported cross-referencing across diverse sources. Encouraging students to access information from multiple sources can therefore help them obtain a well-rounded view of current events. While educators cannot interfere in students’ daily information habits, they can indirectly foster a culture of critical thinking, promote an inquisitive spirit, and encourage openness to multiple perspectives. Through their pedagogical approach, they can instill these values in order to empower students to become critical consumers of the stimuli they receive and make informed decisions [54,55].
When it comes to online sources for academic/professional purposes, nearly 60% of participants demonstrated a preference for general information sources that often serve as gateways to general information in their field of study. Academic databases and institutional platforms, which offer a controlled environment for accessing educational resources and provide scholarly information, are not as popular, with just under 30% of participants utilizing them. This confirms the persistence of a “surface literacy” pattern, where immediacy and familiarity take precedence over depth and credibility [56]. Encouraging reflective source triangulation and the habitual use of institutional repositories should thus be a core target for digital literacy curricula. Students need guidance to learn the basics of effective usage. To address this need, it is essential to provide step-by-step tutorials and workshops on how to effectively search and navigate these resources. Moreover, organizing training sessions in small groups in collaboration with librarians for more personalized guidance and ensuring accessibility from both campus networks and off-campus could be useful for university students. Along with these strategic actions, the educational institutions and organizations that provide these databases and platforms should prioritize user-friendly interfaces and mobile responsiveness to enhance accessibility and convenience for students, as mobile devices are the primary medium of young people’s interaction with the online world [57].
Investigating the criteria that students rely on for selecting and accessing online content, it was found that several key factors influence their evaluation process, but mostly the reputation of a website within a specific field, the frequency of visits they made to a particular website, and its ranking in search engines’ results. These findings suggest that functional and procedural literacies are well established, but epistemic and critical literacies—those concerning authorship, bias, and motivation—remain underdeveloped. This imbalance highlights the need for pedagogical designs that explicitly teach critical source interrogation and contextual reasoning [3,16,55].
Overall, the data suggest that students’ digital literacy is largely informed by everyday digital engagement, habitual online navigation, and informal learning experiences rather than by structured or discipline-specific university instruction. Consequently, the present results imply that higher education institutions may not yet provide systematic, consistent, or sufficiently embedded digital literacy development across curricula. At the same time, university students seem to be generally aware of the importance of evaluating online sources. While most students are familiar with various criteria for accessing and evaluating online content, there is room for improvement in consistently applying these criteria and recognizing the potential risks associated with misinformation. It is important to empower young citizens with information literacy, a critical component of digital literacy [20,21,50], so they can discern the purpose behind a message, identify the underlying power dynamics, and recognize the influence of the commercial interests that may shape content through personalized algorithms. By fostering information literacy, individuals can evolve into self-empowered citizens [19,58], in line with the objectives of the European Commission’s digital transformation action plan based on the DigComp framework [7].

5.1.3. Technology Utilization

This section was not emphasized much in this research, because young people, as digital natives, are considered to possess the functional skills for the use of digital environments [59]. The findings simply reveal that the digital environments students are most familiar with are those that they frequently use. High levels of proficiency in using social media reflect its widespread usage as a key component of the culture of interconnection in modern society, with some individuals even experiencing addiction [60,61]. LMS platforms and text processing tools also score high levels of proficiency among university students, as they are frequently employed in their courses. Recent studies confirm that over 90% of students and teachers use these tools, as LMS are freely provided by their institution and text processing tools are widely used for coursework [62,63]. The positive correlation (r = 0.45, p < 0.01) between self-assessed competence and tool proficiency suggests that familiarity breeds confidence, yet does not guarantee functional mastery. This distinction reinforces that digital literacy extends beyond tool fluency; it encompasses strategic and critical use of technologies to achieve learning goals [2,20,36,58]. However, the findings should be interpreted cautiously, as they do not account for unmeasured demographic or contextual factors (e.g., gender, socio-economic status), which could meaningfully influence technology use patterns. Without such variables, the present findings provide only a surface-level mapping of proficiency and cannot detect digital inequalities across student subgroups.
Developing students’ skills in the digital tools in which they lack proficiency cannot be ignored if they are to engage effectively in academic and professional settings. These tools include spreadsheets for data organization and analysis, digital libraries for research, and collaborative tools for communication, collaboration, and project management. Educational institutions can encourage the use of these tools by incorporating them into coursework. For instance, professors, regardless of scientific field, should provide guidance on searching and processing information from digital archives, organizing data in spreadsheets, visualizing those data, and using collaborative and project management tools to communicate and cooperate effectively, allocate roles, and monitor other members’ progress in real time. Proficiency in these areas not only strengthens branching and socio-emotional literacy, key aspects of digital literacy [8,12,20,21,22], but also fosters soft skills, which are essential for graduates entering the workforce [64,65].

5.1.4. Participation in Online Communities

In this section, students’ ability to communicate effectively in online environments beyond commonly used platforms was examined. As noted earlier, young people’s familiarity with social media results in high proficiency rates, as they are now immersed in the culture of interconnectedness [47]. The aim here was to assess their communication skills and their understanding of forum structure. Participation rates were relatively low, with only 10% of students taking part in this activity. The findings reveal that students who participated demonstrated a good understanding of forum structure and were able to express their ideas clearly and concisely. However, a notable lack of interaction among participants was observed. For example, although the forum allowed commenting on other students’ posts and peer voting, whereby students can rate each other’s responses using up/down arrows, these features were not utilized by participants.
The low participation rate and the limited interaction among the students who did participate reflect both motivational and cultural barriers to academic digital engagement [49,63]. First, students are not accustomed to using online forums for academic purposes, so they may find them unfamiliar. Second, privacy concerns and the perception of being judged by professors may deter some students from participating. Third, the absence of clear incentives for participating in the forum may discourage active engagement. Fourth, students’ inclination to follow a familiar “question-answer” format, a habit formed in traditional classrooms, poses a challenge to developing a more dynamic and participatory learning culture. To address these challenges, establishing an official online community for the university could be beneficial. This platform could include discussion groups organized by department as well as common areas for all students to engage in meaningful discussions. By providing a space for students to express their opinions and concerns, the university can foster a sense of community and promote active participation and interaction among peers. Additionally, monitoring the issues affecting the university’s educational community would provide immediate feedback, enabling the timely detection of weak areas and targeted interventions by the administration.
Finally, the consistent non-significant departmental differences across RQs (e.g., t(198) = 0.8–1.5, p > 0.10) reinforce the interpretation that general digital habits and widespread exposure to social media environments shape communication patterns more strongly than discipline-specific training, supporting Murray et al.’s [66] call for cross-curricular digital literacy interventions [7].
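For illustration, the sketch below shows the type of departmental comparison referred to above; it is not the authors’ analysis script, and the file, column, and group labels are hypothetical placeholders.

```python
# Illustrative sketch of an independent-samples t-test comparing two
# departments on a composite digital-literacy score. File and column
# names are hypothetical, not the study's actual data.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("digital_literacy_survey.csv")   # hypothetical file
dept_a = df.loc[df["department"] == "A", "dl_score"]
dept_b = df.loc[df["department"] == "B", "dl_score"]

# Pooled-variance t-test; with n = 200 students this yields df = 198,
# matching the degrees of freedom reported in the text.
t_stat, p_value = ttest_ind(dept_a, dept_b)
print(f"t({len(dept_a) + len(dept_b) - 2}) = {t_stat:.2f}, p = {p_value:.3f}")
```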
Nevertheless, given the low participation rate, any conclusions regarding students’ broader communicative competence in online communities must remain tentative. This low participation rate likely reflects systemic and contextual barriers (including unfamiliarity with academic forums, limited integration of such platforms in coursework, and low perceived utility) rather than inherent deficits in students’ communication skills. The absence of peer interaction (no replies or up/down voting) further indicates that students did not perceive the task as a socially meaningful or academically consequential space. Therefore, these findings should be treated as preliminary pilot evidence, useful for refining future task design rather than for drawing robust conclusions about digital community engagement.

5.2. Comparing Our Findings with the Existing Literature & Contributions of the Current Study

The findings of this research both align with and extend previous digital literacy research, revealing convergence with, and advancement beyond, recent higher education initiatives. In line with findings from other studies, such as those conducted by [48,67], students in our sample exhibit a high level of confidence in the functional use of digital tools for consuming online content, yet face challenges with more advanced aspects of digital literacy related to critical evaluation criteria. Other studies, such as [29,31], identified a discrepancy between technical expertise and critical thinking skills, particularly among younger users: although younger individuals demonstrated expertise in technical areas of digital literacy, such as navigating digital environments, they showed deficits in their critical thinking skills.
In response to these challenges, previous research highlighted the importance of equipping students with cognitive tools to critically analyze (digital) content, recognizing that this need predates the digital era [10]. Building on that work, the current study confirms the importance of these skills, yet advances the discussion by examining how students critically engage with contemporary digital environments. It therefore goes further by exploring the specific challenges students face today, such as recognizing misinformation and detecting deepfakes, a less developed dimension of digital literacy. The introduction of deepfake detection as a key element of digital literacy strengthens this research, as it reflects the evolving nature of the digital landscape and the growing complexity of misinformation techniques. While the previous literature acknowledged the growing problem of misinformation [2,38] and information literacy deficits in higher education [50,54], this research takes the discussion further by linking it with AI-generated content. The findings of our study show that students are generally aware of visual and auditory indicators (such as distortions in facial features and lighting inconsistencies) that suggest fake content, which builds on the concept of photo-visual literacy introduced by [20]. This extension situates digital literacy within the emergent epistemic challenges of the AI era, expanding DigComp’s interpretive range to encompass algorithmic awareness, bias recognition, and the cognitive evaluation of AI-mediated information. Thus, the present framework not only measures digital literacy but also redefines its theoretical contours to include ethical and epistemic responsibility in digital environments.
In conclusion, this study not only validates several conclusions from previous research, but also significantly advances the field by introducing contemporary digital literacy issues. Furthermore, when contextualized globally, this study offers transferable implications. Although conducted within Greek higher education, its focus on hybrid and critical literacy aligns with international calls for resilient, reflective, and ethically aware digital citizens [7]. This contextual breadth positions this study as a conceptual and methodological advancement rather than a purely regional application.

6. Limitations and Future Directions

This study was conducted with a sample of 200 university students from two different departments of a Greek university. While this sample size is reasonable for the purposes of this study, it may not fully reflect the diversity of the broader student population across all disciplines, and therefore some caution is warranted when considering the generalizability of the results. Future studies would benefit from a larger and more diverse sample to ensure the findings are more representative, as institutional culture, disciplinary norms, and local technological infrastructures may have influenced the results.
To gain a more comprehensive understanding of digital literacy, it is crucial to conduct comparative studies across diverse cultural and educational contexts. While this study focuses on Greek university students, future research could expand to include samples from other countries, such as the studies conducted in Israel [29]. By comparing findings from different regions, we can identify both commonalities and discrepancies in digital literacy levels, as well as the socio-cultural factors that may underlie these differences. Nevertheless, the surveys being compared must be recent to ensure a fair comparison in terms of digital literacy and technological advancement.
Furthermore, this study did not collect demographic information (e.g., gender, socio-economic status) due to institutional ethics requirements prioritizing anonymity and privacy. While this decision ensured participants’ comfort and data security, it also restricts the ability to investigate digital inequalities or subgroup-specific patterns. Future studies should incorporate minimally invasive demographic indicators, where ethically permissible, to explore whether digital literacy varies across population groups.
In addition, the design of this study provides only a snapshot of digital literacy levels at a particular point in time. Conducting longitudinal studies that track participants’ online activities over an extended period could offer a more comprehensive understanding of how digital literacy evolves. This factor, along with the geographical considerations mentioned above, could yield valuable findings.
Another important limitation concerns the task used to assess deepfake detection. Although this study incorporated real online video samples, the evaluation was primarily based on scenario-based descriptions rather than direct exposure to fully manipulated deepfake stimuli. This limits the ecological validity of the findings, as detecting deepfakes in authentic multimedia contexts involves more complex cognitive and perceptual processes. Future research should integrate controlled exposure to realistic deepfake videos and analyze behavioral responses rather than solely self-reported or scenario-based judgments to capture real-world detection accuracy.
Moreover, some of the data relied on self-reports collected through the quiz-questionnaire. While self-reports can offer valuable information, they are susceptible to biases such as social desirability bias. To complement self-reported data, future studies should also incorporate not only questions on real-world scenarios, but also observations of university students’ online activity to assess the transferability of their critical skills [51]. Indeed, as this study serves as a foundational exploration with a newly developed tool, future directions include full psychometric validation (e.g., confirmatory factor analysis, test–retest reliability) and predictive validity testing (e.g., linking scores to academic outcomes or misinformation susceptibility).
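As a pointer to what such psychometric validation might involve, the sketch below computes two basic indices, internal consistency (Cronbach’s alpha) and test–retest reliability; the data files, column naming, and two-administration design are assumptions for illustration only.

```python
# Hedged sketch of basic psychometric checks mentioned as future work.
# All file and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the scale items."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

time1 = pd.read_csv("quiz_time1.csv")  # hypothetical first administration
time2 = pd.read_csv("quiz_time2.csv")  # hypothetical retest administration

alpha = cronbach_alpha(time1.filter(like="item_"))
retest_r, _ = pearsonr(time1["total_score"], time2["total_score"])
print(f"Cronbach's alpha = {alpha:.2f}, test-retest r = {retest_r:.2f}")
```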
Also, consideration of other aspects of digital literacy, such as e-safety, is of paramount importance for the literacy of digital citizens. For informatics students, who constitute a significant portion of the sample in this study, understanding e-safety is particularly important due to their role as digital experts. They should receive specialized training and education in e-safety practices to safeguard both themselves and the wider social landscape. Educational institutions should prioritize developing workshops on e-safety and critical evaluation, integrating them into core curricula across disciplines to address the gaps identified. Partnerships with platforms that young people use in their daily lives could facilitate real-world training modules, strengthening digital citizenship [16].
Lastly, the reliance on predominantly descriptive analyses, while appropriate for the exploratory nature of this first-stage instrument, limits the ability to draw predictive or causal inferences. Future work should employ advanced statistical models, such as confirmatory factor analysis, structural equation modeling, or machine learning–based predictors, to validate the assessment tool’s structure and to better understand the relationships between digital literacy components.

7. Conclusions

Young people are often viewed as digital natives with the potential to shape online trends, yet they are also vulnerable to various online risks. Literacy, a concept shaped by social constructs within a given socio-cultural context, plays a significant role in shaping an individual’s identity and society as a whole [68]. Therefore, empowering digital literacy can contribute to the development of digital citizenship, as digital literacy has evolved from an optional skill to a core competency within higher education, shaping how institutions design and deliver learning experiences across disciplines. Institutions that integrate digital literacy into foundational curricula foster a culture of adaptability and of self-directed and lifelong learning [69]. These insights directly support the present study’s recommendation that universities worldwide move beyond fragmented initiatives, establish digital literacy as an integral component of academic identity [66,70], and adopt an explicit pedagogical approach that combines reflective learning with technological engagement [67].
The findings of this research can inform teaching practices and curriculum design across all scientific disciplines to foster a robust foundation in digital literacy. Universities should implement targeted interventions, such as workshops on deepfake detection and the critical evaluation of digital sources, incentivize online community interaction through course credit to enhance engagement, and integrate immersive, metaverse-based environments that provide valuable gains in digital competence [71]. These efforts will enable individuals to grow both personally as informed citizens and professionally as responsible experts, in line with the goals of DigComp 2.2. Universities worldwide must adapt to these societal needs by preparing their future professionals to navigate the digital world with confidence and integrity [72]. Developing robust digital literacy is not only a pedagogical challenge but also a socio-technical necessity, as emerging technologies reshape the epistemic environment in which students interpret, share, and evaluate information. Through educational practice, it is crucial to promote higher-level skills, such as the concepts of “know-how”, “know-where”, and “know-why” [16]. Independent and critical thinking are essential to avoid being swayed by external factors that may lead individuals astray. In the modern era of information overload and misinformation, it is imperative to cultivate a generation of thoughtful citizens whose decisions rest on clear reasoning and who can discern truth from distortion. Equipping young people with these cognitive tools enables them to thrive in modern democratic society and adapt to the increasing challenges of the digital age.
To conclude, theoretically, this research advances digital literacy studies by extending the DigComp and Eshet-Alkalai models to integrate critical awareness of AI-generated misinformation, ethical reflection, and socio-cognitive engagement. This conceptual refinement transforms digital literacy from a set of procedural competencies into an adaptive, reflective, and ethically anchored construct that can be cultivated through technology-supported pedagogical designs [55]. Empirically, the inclusion of inferential analysis and mixed-method triangulation provides new explanatory insight into how self-efficacy and guided instruction predict engagement and evaluative depth. Practically, the framework demonstrates transferability to international higher education contexts, offering scalable interventions for critical digital pedagogy. Future studies should further validate this model through longitudinal and cross-cultural research, exploring how evolving technologies, such as generative AI and immersive learning environments, reshape the boundaries of digital competence and citizenship.

Author Contributions

Conceptualization: M.S.G.; Methodology: M.S.G., C.T. and A.K.; Data curation: M.S.G., C.T. and A.K.; Formal analysis and investigation: M.S.G., C.T. and A.K.; Validation: M.S.G., C.T. and A.K.; Visualization: M.S.G.; Writing—original draft preparation: M.S.G.; Writing—review and editing: C.T., A.K. and C.S.; Supervision: A.K., C.T. and C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were not required for this study, as it exclusively involved the analysis of properly anonymized datasets obtained from past research studies through voluntary participation. This research did not pose a risk of harm to the subjects. All data were handled with the utmost confidentiality and in compliance with ethical standards.

Informed Consent Statement

Informed consent was obtained from all subjects at the time of original data collection.

Data Availability Statement

The data are available on request due to restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Andriushchenko, K.; Oleksandr, R.; Tepliuk, M.; Semenyshyna, I.; Kartashov, E.; Liezina, A. Digital literacy development trends in the professional environment. Int. J. Learn. Teach. Educ. Res. 2020, 19, 55–79. [Google Scholar] [CrossRef]
  2. Breakstone, J.; McGrew, S.; Smith, M.; Ortega, T.; Wineburg, S. Why we need a new approach to teaching digital literacy. Phi Delta Kappan 2018, 99, 27–32. [Google Scholar] [CrossRef]
  3. Brisola, A.C.; Doyle, A. Critical information literacy as a path to resist “fake news”: Understanding disinformation as the root problem. Open Inf. Sci. 2019, 3, 274–286. [Google Scholar] [CrossRef]
  4. Allen, J.; Howland, B.; Mobius, M.; Rothschild, D.; Watts, D.J. Evaluating the fake news problem at the scale of the information ecosystem. Sci. Adv. 2020, 6, eaay3539. [Google Scholar] [CrossRef]
  5. Washington, J. Combating Misinformation and Fake News: The Potential of AI and Media Literacy Education. Available at SSRN 4580385. 2023. Available online: https://www.researchgate.net/publication/375981056_Combating_Misinformation_and_Fake_News_The_Potential_of_AI_and_Media_Literacy_Education (accessed on 15 November 2025).
  6. Kopecky, S. Challenges of Deepfakes. Intelligent Computing. In Proceedings of the Science and Information Conference, London, UK, 26–27 June 2024; Arai, K., Ed.; Lecture Notes in Networks and Systems. Springer Nature: Cham, Switzerland, 2024; Volume 1016, pp. 158–166. [Google Scholar] [CrossRef]
  7. Vuorikari, R.; Kluzer, S.; Punie, Y. DigComp 2.2: The Digital Competence Framework for Citizens: With new examples of knowledge, skills and attitudes. In JRC Research Reports JRC128415; Publications Office of the European Union: Luxembourg, 2022. [Google Scholar] [CrossRef]
  8. Eshet, Y. Thinking in the digital era: A revised model for digital literacy. Issues Informing Sci. Inf. Technol. 2012, 9, 267–276. [Google Scholar] [CrossRef]
  9. Gilster, P. Digital Literacy; Wiley Computer Publishing: New York, NY, USA, 1997. [Google Scholar]
  10. Buckingham, D. Defining digital literacy-What do young people need to know about digital media? Nord. J. Digit. Lit. 2015, 10, 21–35. [Google Scholar] [CrossRef]
  11. Martin, A. European framework for digital literacy. Nord. J. Digit. Lit. 2006, 1, 151–161. [Google Scholar] [CrossRef]
  12. Aviram, A.; Eshet-Alkalai, Y. Towards a theory of digital literacy: Three scenarios for the next steps. Eur. J. Open Distance E-Learn. 2006, 9, 16. [Google Scholar]
  13. Meyers, E.M.; Erickson, I.; Small, R.V. Digital literacy and informal learning environments: An introduction. Learn. Media Technol. 2013, 38, 355–367. [Google Scholar] [CrossRef]
  14. Nascimbeni, F.; Vosloo, S. Digital Literacy for Children: Exploring Definitions and Frameworks. In Scoping Paper 1; UNICEF: New York, NY, USA, 2019; Available online: https://www.unicef.org/innocenti/media/1216/file/%20UNICEF-Global-Insight-digital-literacy-scoping-paper-2020.pdf (accessed on 15 November 2025).
  15. Thomas, T.; Davis, T.; Kazlauskas, A. Embedding critical thinking in IS curricula. J. Inf. Technol. Educ. Res. 2007, 6, 327–346. [Google Scholar] [CrossRef]
  16. Georgopoulou, M.S.; Krouska, A.; Troussas, C.; Sgouropoulou, C. Redefining the Concept of Literacy: A DigCompEdu extension for Critical Engagement with AI tools. In Proceedings of the 9th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Athens, Greece, 20–22 September 2024; IEEE: Piscataway, NJ, USA, 2024. [Google Scholar] [CrossRef]
  17. Caton, L.; Cridland, K. If students have ChatGPT, why do they need us? Why educators remain irreplaceable in AI-enhanced learning. In AI-Powered Pedagogy and Curriculum Design: Practical Insights for Educators; Baker, G., Caton, L., Eds.; Routledge: Abingdon, UK, 2025. [Google Scholar] [CrossRef]
  18. Bawden, D. Origins and concepts of digital literacy. In Digital Literacies: Concepts, Policies and Practices; Lankshear, C., Knobel, M., Eds.; Peter Lang: Lausanne, Switzerland, 2008; Volume 30, Chapter 1. [Google Scholar]
  19. Hague, C.; Payton, S. Digital Literacy Across the Curriculum. Curriculum Leadership; Futurelab: Bristol, UK, 2011; Volume 9, Available online: https://www.nfer.ac.uk/media/jnhety2n/digital_literacy_across_the_curriculum.pdf (accessed on 15 November 2025).
  20. Eshet, Y. Digital literacy: A conceptual framework for survival skills in the digital era. J. Educ. Multimed. Hypermedia 2004, 13, 93–106. [Google Scholar]
  21. Eshet, Y. Thinking Skills in the Digital Era. In Encyclopedia of Distance Learning; Howard, C., Boettcher, J.V., Justice, L., Schenk, K.D., Rogers, P.L., Berg, G.A., Eds.; IGI Global: Hershey, PA, USA, 2005; pp. 1840–1845. [Google Scholar] [CrossRef]
  22. Eshet-Alkalai, Y. Real-time thinking in the digital era. In Encyclopedia of Information Science and Technology, 2nd ed.; Khosrow-Pour, M., Ed.; IGI Global: Hershey, PA, USA, 2009; pp. 3219–3223. [Google Scholar] [CrossRef]
  23. Kress, G. The profound shift of digital literacies. In Digital Literacies: A Research Briefing by the Technology Enhanced Learning Phase of the Teaching and Learning Research Programme (6–8); Gillen, J., Barton, D., Eds.; London Knowledge Lab, Institute of Education, University of London: London, UK, 2010. [Google Scholar]
  24. Ainley, J.; Schulz, W.; Fraillon, J. A Global Measure of Digital and ICT Literacy Skills. UNESCO. 2016. Available online: https://research.acer.edu.au/ict_literacy/12 (accessed on 15 November 2025).
  25. Bawden, D. Information and digital literacies: A review of concepts. J. Doc. 2001, 57, 218–259. [Google Scholar] [CrossRef]
  26. Ameen, K.; Gorman, G.E. Information and digital literacy: A stumbling block to development? A Pakistan perspective. Libr. Manag. 2009, 30, 99–112. [Google Scholar] [CrossRef]
  27. Koltay, T. The media and the literacies: Media literacy, information literacy, digital literacy. Media Cult. Soc. 2011, 33, 211–221. [Google Scholar] [CrossRef]
  28. Lotherington, H.; Jenson, J. Teaching multimodal and digital literacy in L2 settings: New literacies, new basics, new pedagogies. Annu. Rev. Appl. Linguist. 2011, 31, 226–246. [Google Scholar] [CrossRef]
  29. Eshet–Alkalai, Y.; Amichai-Hamburger, Y. Experiments in digital literacy. CyberPsychology Behav. 2004, 7, 421–429. [Google Scholar] [CrossRef]
  30. Shibani, A.; Knight, S.; Kitto, K.; Karunanayake, A.; Buckingham Shum, S. Untangling Critical Interaction with AI in Students’ Written Assessment. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY USA, 2024; pp. 1–6. [Google Scholar] [CrossRef]
  31. Eshet-Alkalai, Y.; Chajut, E. Changes over time in digital literacy. CyberPsychology Behav. 2009, 12, 713–715. [Google Scholar] [CrossRef]
  32. Eshet-Alkalai, Y.; Chajut, E. You can teach old dogs new tricks: The factors that affect changes over time in digital literacy. J. Inf. Technol. Educ. Res. 2010, 9, 173–181. [Google Scholar] [CrossRef]
  33. Shabana Tabusum, S.Z.; Saleem, A.; Sadik Batcha, M. Digital literacy awareness among Arts and Science college students in Tiruvallur district: A study. Int. J. Manag. Stud. Res. 2014, 4, 61–67. [Google Scholar]
  34. Mirra, N.; Morrell, E.; Filipiak, D. From digital consumption to digital invention: Toward a new critical theory and practice of multiliteracies. Theory Pract. 2018, 57, 12–19. [Google Scholar] [CrossRef]
  35. Cote, T.; Milliner, B. Japanese university students’ self-assessment and digital literacy test results. In CALL Communities and Culture–Short Papers from EUROCALL; Research-Publishing.net: Dublin, Ireland, 2016; pp. 125–131. [Google Scholar] [CrossRef]
  36. Abrosimova, G.A. Digital literacy and digital skills in university study. Int. J. High. Educ. 2020, 9, 52–58. [Google Scholar] [CrossRef]
  37. Karagul, I.; Seker, B.M.; Aykut, C. Investigating Students’ Digital Literacy Levels during Online Education Due to COVID-19 Pandemic. Sustainability 2021, 13, 11878. [Google Scholar] [CrossRef]
  38. Diepeveen, S.; Pinet, M. User perspectives on digital literacy as a response to misinformation. Dev. Policy Rev. 2022, 40, e12671. [Google Scholar] [CrossRef]
  39. Lilian, A. Motivational beliefs, an important contrivance in elevating digital literacy among university students. Heliyon 2022, 8, e11913. [Google Scholar] [CrossRef] [PubMed]
  40. López-Meneses, E.; Sirignano, F.M.; Vázquez-Cano, E.; Ramírez-Hurtado, J.M. University students’ digital competence in three areas of the DigCom 2.1 model: A comparative study at three European universities. Australas. J. Educ. Technol. 2020, 36, 69–88. [Google Scholar] [CrossRef]
  41. Hamutoglu, N.B.; Gemikonakli, O.; De Raffaele, C.; Gezgin, D.M. Comparative cross-cultural study in digital literacy. Eurasian J. Educ. Res. 2020, 88, 121–147. [Google Scholar] [CrossRef]
  42. Göldağ, B. Investigation of the relationship between digital literacy levels and digital data security awareness levels of university students. E-Int. J. Educ. Res. 2021, 12, 82–100. [Google Scholar] [CrossRef]
  43. Mah, D.-K.; Groß, N. Artificial intelligence in higher education: Exploring faculty use, self-efficacy, distinct profiles, and professional development needs. Int. J. Educ. Technol. High. Educ. 2024, 21, 58. [Google Scholar] [CrossRef]
  44. Tzirides, A.-O.O.; Zapata, G.; Kastania, N.-P.; Saini, A.-K.; Castro, V.; Ismael, S.-A.; You, Y.L.; dos Santos, T.A.; Searsmith, D.; O’Brien, C.; et al. Combining human and artificial intelligence for enhanced AI literacy in higher education. Comput. Educ. Open 2024, 6, 100184. [Google Scholar] [CrossRef]
  45. Zhou, X.; Schofield, L. Developing a conceptual framework for Artificial Intelligence (AI) literacy in higher education. J. Learn. Dev. High. Educ. 2024. [Google Scholar] [CrossRef]
  46. Van Deursen, A.J.; Van Dijk, J.A. The digital divide shifts to differences in usage. New Media Soc. 2014, 16, 507–526. [Google Scholar] [CrossRef]
  47. Öncül, G. Defining the need: Digital literacy skills for first-year university students. J. Appl. Res. High. Educ. 2021, 13, 925–943. [Google Scholar] [CrossRef]
  48. Pérez, J.; Murray, M.C. Introducing DigLit Score: An Indicator of Digital Literacy Advancement in Higher Education. In Proceedings of the InSITE 2018: Informing Science + IT Education Conferences, La Verne, CA, USA, 23–28 June 2018; 2018; pp. 013–020. [Google Scholar] [CrossRef]
  49. Martzoukou, K.; Fulton, C.; Kostagiolas, P.; Lavranos, C. A study of higher education students’ self-perceived digital competences for learning and everyday life online participation. J. Doc. 2020, 76, 1413–1458. [Google Scholar] [CrossRef]
  50. Nasir, K.M.; Khalid, F.; Browne, A. Information Literacy and Search Strategy Proficiency: A Need Analysis. J. Inf. Technol. Educ. Innov. Pract. 2024, 23, 015. [Google Scholar] [CrossRef]
  51. Huilcapi-Collantes, C.; Martín, A.H.; Hernández-Ramos, J.P. The effect of a blended learning course of visual literacy for in-service teachers. J. Inf. Technol. Educ. Res. 2020, 19, 131–166. [Google Scholar] [CrossRef] [PubMed]
  52. Aydınlar, A.; Mavi, A.; Kütükçü, E.; Kırımlı, E.E.; Alış, D.; Akın, A.; Altıntaş, L. Awareness and level of digital literacy among students receiving health-based education. BMC Med. Educ. 2024, 24, 38. [Google Scholar] [CrossRef]
  53. Klopfenstein Frei, N.; Wyss, V.; Gnach, A.; Weber, W. “It’s a matter of age”: Four dimensions of youths’ news consumption. Journalism 2024, 25, 100–121. [Google Scholar] [CrossRef]
  54. Buzzetto-Hollywood, N.A.; Elobeid, M.; Elobaid, M.E. Addressing information literacy and the digital divide in higher education. Interdiscip. J. e-Ski. Lifelong Learn. 2018, 14, 077–093. [Google Scholar] [CrossRef]
  55. Zarnigor, D. Advancing critical thinking proficiency through optimized pedagogical approaches. Cent. Asian J. Interdiscip. Manag. Stud. 2024, 1, 24–29. [Google Scholar] [CrossRef]
  56. Coelhoso, P.; Kalogeras, S. Faculty Perspectives on Web Learning Apps and Mobile Devices on Student Engagement. J. Inf. Technol. Educ. Innov. Pract. 2024, 23, 3. [Google Scholar] [CrossRef]
  57. Jurayev, T.N. The use of mobile learning applications in higher education institutes. Adv. Mob. Learn. Educ. Res. 2023, 3, 610–620. [Google Scholar] [CrossRef]
  58. Bhatt, I.; MacKenzie, A. Just Google it! Digital literacy and the epistemology of ignorance. Teach. High. Educ. 2019, 24, 302–317. [Google Scholar] [CrossRef]
  59. Prensky, M. Digital Natives, Digital Immigrants. Horizon 2001, 9, 1–6. [Google Scholar]
  60. Ahmad, D.K.; Sheikh, D.K.S. Social media and youth participatory politics: A study of university students. S. Asian Stud. 2020, 28, 353–360. [Google Scholar]
  61. Aslan, İ.; Yaşar, M.E. Measuring social media addiction among university students. Int. J. Contemp. Econ. Adm. Sci. 2020, 10, 468–492. [Google Scholar] [CrossRef]
  62. Georgopoulou, M.S.; Troussas, C.; Sgouropoulou, C.; Voyiatzis, I. Struggling to Adapt: Exploring the Factors that Hamper the Integration of Innovative Digital Tools in Higher Education Courses. In Novel & Intelligent Digital Systems: Proceedings of the 4th International Conference (NiDS 2024), Athens, Greece, 25–27 September 2024; Springer Nature: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  63. Georgopoulou, M.S.; Troussas, C.; Sgouropoulou, C.; Voyiatzis, I. Technology is not enough: Educators as Catalysts for Sparking Student Interest and Engagement in Higher Education. In Novel & Intelligent Digital Systems: Proceedings of the 4th International Conference (NiDS 2024), Athens, Greece, 25–27 September 2024; Springer Nature: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  64. Saykılı, A. Higher education in the digital age: The impact of digital connective technologies. J. Educ. Technol. Online Learn. 2019, 2, 1–15. [Google Scholar] [CrossRef]
  65. Noah, J.B.; Aziz, A.A. A Systematic review on soft skills development among university graduates. Educ. J. Soc. Sci. 2020, 6, 53–68. [Google Scholar] [CrossRef]
  66. Murray, M.C.; Pérez, J.; Fluker, J. Digital literacy in the core: The emerging higher education landscape. Issues Informing Sci. Inf. Technol. 2022, 19, 001–013. [Google Scholar] [CrossRef]
  67. McGuinness, C.; Fulton, C. Digital literacy in higher education: A case study of student engagement with e-tutorials using blended learning. J. Inf. Technol. Educ. Innov. Pract. 2019, 18, 1–28. [Google Scholar] [CrossRef]
  68. Smith, K.; Parker, L. Reconfiguring Literacies in the Age of Misinformation and Disinformation. J. Lang. Lit. Educ. 2021, 17, n2. [Google Scholar]
  69. Sriwisathiyakun, K. Utilizing design thinking to create digital self-directed learning environment for enhancing digital literacy in Thai higher education. J. Inf. Technol. Educ. Innov. Pract. 2023, 22, 201–214. [Google Scholar] [CrossRef] [PubMed]
  70. Kareem, J.; Abhaya, N.B. From Classrooms to Clicks: Exploring Student Attitudes and Challenges in the Shift to Digital Learning in Higher Education. J. Inf. Technol. Educ. Res. 2025, 24, 31. [Google Scholar] [CrossRef] [PubMed]
  71. Prabakaran, N.; Patrick, H.A.; Kareem, J. Enhancing English Language Proficiency and Digital Literacy Through Metaverse-Based Learning: A Mixed-Methods Study in Higher Education. J. Inf. Technol. Educ. Res. 2025, 24, 10. [Google Scholar] [CrossRef] [PubMed]
  72. Gutierrez-Angel, N.; Sanchez-Garcia, J.N.; Mercader-Rubio, I.; Garcia-Martin, J.; Brito-Costa, S. Digital literacy in the university setting: A literature review of empirical studies between 2010 and 2021. Front. Psychol. 2022, 13, 896800. [Google Scholar] [CrossRef]
Figure 1. Student population.
Figure 2. Basic flowchart outlining key steps in the research process.
Figure 3. Self-Assessment on consuming online content.
Figure 4. Self-Assessment on producing online content.
Figure 5. Self-Assessment on safely navigating digital environments.
Figure 6. Key factors for recognizing deepfake content.
Figure 7. Application of evaluation criteria for deepfake content (Y/N).
Figure 8. Application of evaluation criteria for deepfake content.
Figure 9. Online sources for daily news (open-ended question).
Figure 10. Online sources for academic/professional purposes (open-ended question).
Figure 11. Criteria for choosing and accessing online sources.
Figure 12. Frequency of using criteria for assessing the validity and reliability of online content.
Figure 13. Level of proficiency in using various digital platforms.
Table 1. Self-Assessment on consuming online content.

  Rating          N      %
  5 ✰✰✰✰✰         108    54.0%
  4 ✰✰✰✰          73     36.5%
  3 ✰✰✰           17     8.5%
  2 ✰✰            1      0.5%
  1 ✰             1      0.5%

Table 2. Self-Assessment on producing online content.

  Rating          N      %
  5 ✰✰✰✰✰         58     29.0%
  4 ✰✰✰✰          75     37.5%
  3 ✰✰✰           48     24.5%
  2 ✰✰            12     6.0%
  1 ✰             6      3.0%

Table 3. Self-Assessment on safely navigating digital environments.

  Rating          N      %
  5 ✰✰✰✰✰         58     29.0%
  4 ✰✰✰✰          73     36.5%
  3 ✰✰✰           39     19.5%
  2 ✰✰            23     11.5%
  1 ✰             7      3.5%

Table 4. Self-Assessment of digital competence.

                        RQ1a Consuming     RQ1b Producing     RQ1c Navigating Safely
                        Online Content     Online Content     in Digital Environments
  N (Valid)             200                200                200
  N (Missing)           0                  0                  0
  Mean                  4.43               3.84               3.77
  Std. Error of Mean    0.050              0.069              0.075
  Median                4.50               4.00               4.00
  Std. Deviation        0.712              0.977              1.059
  Variance              0.507              0.955              1.121
  Range                 4                  4                  4
  Minimum               1                  1                  1
  Maximum               5                  5                  5
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
