Article

The Impact of Digital Safety Competence on Cognitive Competence, AI Self-Efficacy, and Character

Hong Kong Polytechnic University, Hong Kong
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(10), 5440; https://doi.org/10.3390/app15105440
Submission received: 13 April 2025 / Revised: 5 May 2025 / Accepted: 5 May 2025 / Published: 13 May 2025

Abstract

Although there are studies on digital competence in higher education, very few examine digital safety competence. This study aims to explore the impact of digital safety competence on students’ higher-order thinking and AI-related outcomes. Using a cross-sectional design, 159 university students completed an online questionnaire measuring digital safety competence, cognitive competence, AI self-efficacy, AI ethics, and moral competence. Results showed that digital safety competence was positively and significantly related to cognitive competence, AI self-efficacy, AI ethics, and moral competence (p < 0.05). Our study extends the literature by highlighting the role of digital safety competence. Educators and university policy makers may consider incorporating digital competence, especially in the area of safety, into their teaching and learning strategies.

1. Introduction

Nowadays, technology has been embedded in all aspects of our lives. With the advancement of digital technology, digital competence has become indispensable in our lives [1]. Digitalization not only transforms our lives but also revolutionizes education contexts. With the advancement of digital technology, teaching and learning have become increasingly technology-oriented [2]. Recognizing the importance of digital technology, universities are increasingly integrating these elements into classrooms to meet the needs of students from diverse backgrounds and enhance academic achievement. This trend has been further accelerated by the rise in artificial intelligence (AI) technology, which has revolutionized the educational landscape.

1.1. Digital Competence

Literacy is defined as the ability to read and write (p. 2, [3]). Digital literacy (DL) is defined as the attitudes, knowledge, and skills that enable individuals to use digital tools effectively in a digitally oriented world [4]. Going beyond DL, digital competence (DC) is a multi-dimensional concept [5,6]. It refers not only to basic technical skills but also to attitudes and mindsets [7,8]. DC plays a pivotal role in empowering individuals to navigate a technology-oriented world [9]. It helps them critically evaluate AI tools and make ethical decisions, thereby cultivating a respectful and inclusive online learning environment. Additionally, it fosters positive development through constructive interactions and the development of emotional skills [10]. It is considered an essential skill in 21st-century society [11,12] and should be regarded as a lifelong learning skill [13,14].
With the widespread integration of digital technology, the European Union introduced the Digital Competence Framework (DigComp) to foster the development of digital skills [6]. Initially published in 2013, the framework outlined five key competence areas: information search and data management, communication and collaboration, digital content creation, safety and security, and problem-solving [15,16]. Over time, the framework has evolved. In 2016, a conceptual reference model, DigComp 2.0, was introduced, and one year later, eight proficiency levels were incorporated into DigComp 2.1. In 2022, DigComp 2.2 expanded the framework by incorporating more than 250 new examples of knowledge, skills, and attitudes.
Among the five competence areas, digital safety received less attention compared to other domains [17,18]. This is further supported by a recent meta-analysis [18]. In the DigComp framework, digital safety and physical safety are two distinct yet connected aspects of safety in digital environments. Digital safety involves protecting devices, content, data, and privacy, including understanding dangers and threats in digital environments, digital security, and managing privacy settings to protect digital assets and information. This also includes understanding online threats, such as phishing, malware, scams, and knowledge of how to protect oneself and others from them. On the other hand, physical safety in the use of digital technologies is about protecting one’s physical health and well-being. This involves awareness of health risks related to the use of digital technologies, including physical ailments such as eye strain, repetitive strain injuries, and psychological effects, such as cyberbullying and online harassment.

1.2. Digital Safety Competence (DSC)

Digital safety competence (DSC) refers to protecting personal data and digital content, safeguarding privacy and personal identity, adopting safety measures, and ensuring sensible use of digital tools [8]. It is defined as the ability to use technological tools, evaluate and choose appropriate tools for data management, protect personal identity, and remain aware of the potential risks associated with digital technology use [6,18,19]. In the rapidly evolving digital landscape, DSC has emerged as a critical skill set, especially for youths who are increasingly immersed in technology. As digital natives, young people are exposed to a myriad of online environments that broaden their multi-level context [20]. This exposure advances navigational competencies, including the ability to protect personal data and recognize security risks in networked systems [21]. To prepare youth to become leaders in their professions within the emerging AI society, educators should adopt a holistic approach [22].

1.3. University’s Role

One area of research that is receiving increasing attention is the ethical use of digital tools in our daily lives. AI tools have the potential to enhance learning and facilitate decision-making by providing novel perspectives [23]. However, as technology advances and AI applications become more sophisticated, they also raise ethical concerns. For instance, the information generated by GenAI tools can sometimes be biased or inaccurate, a phenomenon known as “hallucination”, which poses ethical risks and challenges [24,25]. In addition, the growing autonomy of AI tools increases the risk of academic integrity issues, such as plagiarism, copyright infringement, and unethical use of AI tools in higher education [24,25,26]. Students are generally receptive to using GenAI for their assignments [27]. For instance, they are more likely to engage in academic dishonesty during online assessments compared to traditional methods [28]. Furthermore, Smerdon [29] noted that around two-thirds of students expressed that they would use GenAI if allowed. Similarly, another study found that 75% of students acknowledged they would still use GenAI even if they considered it a form of cheating [30]. Yusuf and Yusuf [31] also found that around 40% of university students admitted to using GenAI for plagiarism, with 20% planning to use it in the future. In addition, Playfoot et al. [32] found that students exhibiting high levels of apathy were likely to use AI tools to engage in academic dishonesty when doing assignments. A study by Chan [33] explored student perceptions of academic misconduct and posited that students often have an ambiguous understanding of AI-related plagiarism. This highlights the need for increased awareness regarding the responsible use of AI technology.
Universities aim to nurture students so that they are fully prepared to meet societal needs and tackle future challenges. Digital competence has become essential for enhancing employability and achieving career success [2,10]. Our future leaders should focus not only on personal gain but also on civic responsibility [34,35]. Universities play a crucial role in empowering the next generation to use digital tools effectively while also fostering ethical awareness and safe practices in technology consumption [36]. Researchers have argued that this autonomy must be accompanied by a commitment to ethical practices and responsibilities [37,38]. By developing digital skills, students are equipped to thrive in society and navigate the digital world competently, responsibly, and safely [39]. Indeed, “gaining access to technology does not necessarily give students the competence and ethical awareness needed to behave responsibly and correctly online” (p. 729, [40]). Digital safety competence is not merely a technical skill; it is a developmental imperative that intersects with students’ higher-order thinking in an AI-driven society.
Industry and universities alike have argued for fostering an ethical mindset when teaching AI [41,42]. Tertiary education provides an ideal arena for equipping students with the knowledge and skills to handle a complex and ever-changing workplace while also becoming responsible lifelong digital users [43]. By embedding digital safety principles into digital learning, educators can prepare youth to harness AI’s potential while mitigating its associated risks. To date, research on the effects of digital competence, especially in the area of safety, on higher-order thinking outcomes such as cognitive competence, task-specific self-efficacy, and moral decision-making remains scarce.
Specifically, critical thinking, confidence (e.g., self-efficacy), and character (e.g., ethics) are three important components that empower youth to navigate the rapidly evolving digital landscape. AI applications, such as Generative Artificial Intelligence (GenAI) and Intelligent Tutoring Systems (ITSs), not only streamline student learning experiences but also provide diverse educational resources by personalizing content and approaches [23]. Unlike traditional classrooms, AI tools can create complex and personalized self-paced learning environments [44,45]. This propels deeper cognitive learning processes, such as critical thinking, moral judgment, and information evaluation [44,46], requiring learners to utilize higher-order thinking skills such as critical thinking [47].

1.4. Cognitive Competence

Cognitive competence refers to the mental processes through which individuals utilize various cognitive skills to understand their environment [48,49]. It involves identifying strategies and knowledge that guide behavior during problem-solving, enabling individuals to address challenges in novel ways through meaningful reconstruction of knowledge [48,50]. It encompasses critical thinking, creativity, problem-solving, self-regulation, and metacognition [50,51,52]. Cognitive competence is an important skill for academic performance and for meeting future workplace needs [53,54]. Researchers have argued for the need to promote digital competence as a means of promoting cognitive competence [55]. Empirical findings show that the use of digital tools, such as GenAI applications and ITSs, is associated with higher-order thinking through the provision of timely feedback and personalized assistance [44]. However, recent studies show that GenAI tools create a “critical thinking paradox” in higher education. While 72% of students reported that ChatGPT and Copilot improved their assignment efficiency, a longitudinal analysis revealed a 41% decline in source verification skills [56].
With the advancement of technology, the likelihood of collaboration with GenAI-based chatbots for complex tasks has increased, making active interaction and collaboration with others inevitable [34]. To date, few studies have investigated the impacts of DC on specific cognitive functioning, such as critical (thinking) competence, especially in higher education. The present study attempted to address this research gap. In line with past research, we hypothesized that digital safety competence is positively related to cognitive competence (Hypothesis 1).

1.5. AI Self-Efficacy

Self-efficacy refers to one’s beliefs about his/her ability to complete a specific task [57]. AI self-efficacy is defined as individuals’ confidence in using AI tools, their understanding of these tools’ capabilities, and their willingness to engage in learning about them [58,59]. Studies show that AI self-efficacy is a significant predictor of students’ positive learning experiences, academic achievement, and learning satisfaction [60,61]. For example, Bewersdorff et al. [59] found that students with high AI literacy perceived AI positively, expressed interest in AI, and reported a higher level of AI self-efficacy. Similarly, based on structural equation modeling, Chen et al. [62] observed that AI self-efficacy is negatively related to AI-related anxiety, which in turn leads to increased actual AI usage among university students.
With regard to digital safety, task-specific self-efficacy is tightly related to digital competence. For example, Hernández-Martín et al. [8] found that adolescents with better digital competence were more likely to regulate their social media usage, as they were aware of personal privacy and the data implications of social media. Similarly, a study by Cabezas-González et al. [63] suggested that students who spend more time on digital devices and social media reported greater digital competence in safety; this relationship was more salient among boys. Research has shown that fear of data breaches and algorithmic discrimination hinders exploratory learning, especially among marginalized groups [64]. On the contrary, individuals who learn about encryption, privacy settings, and ethical hacking are more willing to experiment with AI tools, perceiving their risks as manageable rather than prohibitive. University students who completed AI courses exhibited greater confidence in troubleshooting algorithms and engaging in ethical discussions [65]. The reciprocal relationship between safety literacy and confidence is evident in AI-powered learning platforms. Adaptive systems like Duolingo enhance self-efficacy through personalized feedback, but their effectiveness depends on users’ trust in data security [66]. University students are future leaders who need essential digital safety skills for their professional careers. Educators and policymakers have acknowledged the impact of digital technology, specifically with the arrival of AI, which is characterized by greater autonomy [67]. Assessing the impact of digital competence in the area of safety can shape our beliefs and perceptions on the use of AI, thereby enhancing employability and career success [68]. The present study attempts to test the relationship between DSC and AI self-efficacy in university contexts.
We hypothesized that digital safety competence is positively associated with AI self-efficacy (Hypothesis 2).

1.6. AI Ethics and Moral Competence

In addition, building on Gammoh et al.’s [69] call for more research on incorporating ethical considerations into our understanding of student learning, we also examine how DSC relates to cognitive competence, AI self-efficacy, and ethical elements (i.e., AI ethics and moral competence). By uncovering the interrelationships among these variables, the present study aims to foster a responsible and ethical approach to technology use in higher education. We hypothesized that digital safety competence is positively linked to ethical outcomes (i.e., moral competence and AI ethics) (Hypothesis 3).
The objective of the present study was to test the associations among digital safety competence, cognitive competence, AI self-efficacy, and ethical outcomes (moral competence and AI ethics). To test our hypotheses, we proposed the model illustrated in Figure 1.

2. Materials and Methods

2.1. Research Design and Sample

Using a cross-sectional design, students were invited to complete an online structured survey via the Qualtrics platform in the second semester of the 2024–2025 academic year. Convenience and snowball sampling were employed to recruit participants. A total of 169 students participated in the study; 10 were excluded due to incomplete responses or failure to pass the attention check, leaving 159 valid questionnaires.
The sample included 62 males (36.7%), 103 females (60.9%), and four students who did not disclose their gender. Most participants were aged 21–23 years (48.5%). In total, 63 students were in Year 4 (37.3%); 49.6% were from Business and Hotel Management (n = 66), 28.6% from Health and Social Sciences (n = 38), 10.2% from STEM and Applied Infrastructure (n = 15), and 9.8% from Creative Arts, Design, and Humanities (n = 13). Participants provided consent via an online platform prior to data collection and were informed about the study’s aim. Each participant was assigned a unique identifier to ensure the confidentiality and anonymity of the data. This study was approved by the IRB (reference no. HSEARS20231227002).

2.2. Measurement

The European Commission’s DigComp 2.0 framework defines digital competence as “the confident, critical and responsible use of, and engagement with, digital technologies for learning, at work and for participation in society” (p. 10, [70]). To measure ethical and safe digital practices, we adopted the scale developed by Skov [71], which is based on the DigComp framework. Digital safety competence encompasses an individual’s ability to manage digital practices, personal data safety, and online privacy within digital environments. This includes managing identity, protecting personal data, knowing digital risks (e.g., identity theft, cyberbullying), and behaving ethically online. We employed this framework for two reasons: its widespread adoption in digital competence research, as evidenced by a recent review study [72], and its validated application among Chinese students [5,73], who share similar backgrounds with our study participants. Furthermore, the scale has been used and validated among high school [74] and university students [75]. A total of 14 items (e.g., “I carefully considered where and how digital content is saved and stored”) were selected for the current study. Participants responded on a 7-point Likert scale from 1 = “To a very small degree” to 7 = “To a very large degree”. The results of a confirmatory factor analysis supported the unidimensional factor structure of this scale (χ² = 110.241, df = 73, RMSEA = 0.057 (90% CI: 0.03–0.08), CFI = 0.913, SRMR = 0.064). The internal consistency was good (0.81).
Cognitive competence and moral competence were measured using the short version of the Chinese Positive Youth Development Scale (CPYDS-S) [76]. We used three items for moral competence (e.g., “I keep my promise”) and four items for cognitive competence (e.g., “I know how to see things from different angles”) on a 6-point Likert scale (0 = Strongly disagree to 5 = Strongly agree). A confirmatory factor analysis supported the two-factor structure of the scale (χ² = 16.687, df = 13, RMSEA = 0.042 (90% CI: 0.000–0.094), CFI = 0.973, SRMR = 0.042). In the present study, the internal consistency for cognitive competence was acceptable (0.73) but only marginally satisfactory for moral competence (0.50).
AI self-efficacy was measured with four items based on Long and Magerko’s work [77]. The reliability and validity of this scale have been supported [78]. Responses were given on a 5-point Likert scale; an example item is “I have a clear understanding of different GenAI tools”. A composite score was used in the present study.
We measured AI ethics with a single item assessing awareness of AI ethics on a 6-point Likert scale (“I am aware of the potential issues, such as accuracy, ethics, and academic integrity, associated with using GenAI tools.”). In addition, demographic questions, such as gender, age, and program of study, were asked.

2.3. Data Analysis

We used structural equation modeling (SEM) to test the relationships between digital safety competence, AI ethics, AI self-efficacy, moral competence, and cognitive competence (Figure 1). SEM allows the interrelationships among the variables to be estimated simultaneously, including both latent variables and observed indicators [17,79]. The hypothesized model was tested using Mplus 8.11.
With an effect size of 0.15, a significance level of 0.05, a desired statistical power of 0.80, and one predictor and four DVs, the minimum sample size for the SEM was 87 [80]. Our sample size is therefore sufficient. To examine model fit, several fit indices were adopted: Comparative Fit Index (CFI) values greater than 0.90, together with Root-Mean-Square Error of Approximation (RMSEA) values below 0.06 and Standardized Root Mean Squared Residual (SRMR) values below 0.08, indicate a good fit [81,82].
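For reference, RMSEA can be computed from the model chi-square, its degrees of freedom, and the sample size. A minimal sketch using one common formula (software packages differ slightly, e.g., in using N versus N − 1 in the denominator, so reported values may not match exactly; the inputs below are hypothetical):

```python
import math

def rmsea(chi2, df, n):
    """Root-Mean-Square Error of Approximation, N - 1 denominator variant.

    Clamped at zero when the model chi-square is below its degrees of freedom.
    """
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

# Hypothetical inputs: chi2 = 20, df = 10, n = 101
print(rmsea(20, 10, 101))
```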

3. Results

Descriptive statistics and correlation analyses of all variables are presented in Table 1. The distributions of all variables are within acceptable ranges, with skewness values ranging from 0.13 to 0.91 and kurtosis values ranging from −0.02 to 1.63 [83].
Table 1 shows the relationships among all variables. Digital safety competence is positively associated with AI ethics (r = 0.44, p < 0.01), AI self-efficacy (r = 0.29, p < 0.05), cognitive competence (r = 0.29, p < 0.01), and moral competence (r = 0.17, p < 0.05). AI ethics is positively related to AI self-efficacy (r = 0.24, p < 0.01) and cognitive competence (r = 0.16, p < 0.05) but not moral competence (p > 0.05). Finally, moral competence is positively correlated with cognitive competence (r = 0.29, p < 0.01) but not significantly correlated with the AI-related variables (p > 0.05).
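The coefficients above are Pearson product-moment correlations; for reference, a minimal stdlib computation (equivalent to `statistics.correlation` in Python 3.10+, shown with hypothetical data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfect positive linear relationship gives r = 1.0 (hypothetical data)
print(pearson_r([1, 2, 3], [2, 4, 6]))
```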
Structural equation modeling (SEM) was used to test the relationships between DSC, higher-order thinking skills (cognitive competence, moral competence), and AI-related outcomes (AI self-efficacy, AI ethics). The model shows an acceptable fit (χ² = 192.721, df = 134, RMSEA = 0.058 (90% CI: 0.039–0.076), CFI = 0.902, SRMR = 0.084). As predicted, digital safety competence demonstrates a positive impact on cognitive competence (β = 0.58, p < 0.01), moral competence (β = 0.98, p < 0.01), AI self-efficacy (β = 0.29, p < 0.01), and AI ethics (β = 0.30, p < 0.01), supporting all hypotheses. All paths are displayed in Figure 2.
We also examined potential nonlinear effects for all outcome variables. The results of multiple regression analyses show that the nonlinear effects of DSC are significant for cognitive competence (unstandardized beta = 0.12; standardized beta = 1.30, p = 0.04) and moral competence (unstandardized beta = 0.17; standardized beta = 1.69, p = 0.02) but not for the two AI-related variables (p > 0.05). To further investigate whether a minimum level of DSC is required before its effects are manifested, we used linear mixed models. Fit indices (BIC and AIC) indicated that the nonlinear model with a quadratic term demonstrated a better fit than the linear model (see Table 2). Based on the estimates of the linear and quadratic coefficients for DSC on cognitive competence and moral competence, we computed the turning points and found that the effects of DSC reach their minimum at 3.14 and 3.70, respectively.
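The turning point of a quadratic effect follows from setting the derivative of b1·x + b2·x² to zero, giving x* = −b1 / (2·b2). A minimal sketch (the coefficients below are hypothetical illustrations, not the study’s estimates):

```python
def turning_point(b_linear, b_quadratic):
    """x-value at which the quadratic effect b1*x + b2*x**2 changes direction.

    Derivative b1 + 2*b2*x = 0  =>  x* = -b1 / (2 * b2).
    A positive b2 makes x* a minimum (U-shape); a negative b2, a maximum.
    """
    return -b_linear / (2 * b_quadratic)

# Hypothetical U-shaped effect turning at x = 1.0
print(turning_point(-1.0, 0.5))
```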

4. Discussion

The purpose of the present study was to investigate the impact of DSC on students’ higher-order thinking and AI-related outcomes. In general, our findings indicated that DSC has a positive impact on students’ outcomes. Consistent with past studies, DSC was significantly related to cognitive competence, AI self-efficacy, moral competence, and AI ethics.
In line with past research, we hypothesized that DSC is positively related to cognitive competence (Hypothesis 1). The results of the present study support this hypothesis, suggesting that students with higher DSC are more proficient in engaging in higher-order thinking, such as synthesizing, logical reasoning, and self-evaluation. The benefits of DC on cognitive abilities have been supported in high school and tertiary education [54,84]. Cognitive competence is a multi-dimensional concept that encompasses logical reasoning, self-evaluation, questioning, synthesizing, and inference [51,85]. It involves understanding problems, applying purposeful thinking, analyzing situations, and evaluating alternative solutions or strategies [86]. Our findings suggest that digital safety competence encourages students to actively search for suitable digital tools, monitor their learning progress, and evaluate their performance. The results of the present study add to the existing work by showing the impacts of DSC on cognitive abilities. The literature consistently demonstrates DC as a pivotal factor in developing cognitive skills. For example, Getenet et al. [87] found a positive relationship between DL and cognitive engagement, while Karakus et al. [86] illustrated that students with high DC tend to show cognitive flexibility and higher critical thinking skills. Similarly, Pagani et al. [88] identified significant correlations between technology-oriented learning and cognitive engagement, thereby contributing to students’ academic success. Wang et al. (p. 12, [58]) noted that “the reflective and constructive processes of knowledge acquisition through human–computer interaction” foster autonomous and meaningful learning experiences, which in turn promote further learning engagement. Clearly, the present study extends the literature by highlighting the role of DSC in cognitive competence in tertiary education.
In support of Hypothesis 2, the present study demonstrated that DSC has a positive and significant impact on AI self-efficacy. The importance of self-efficacy is well supported in the literature. Students with low self-efficacy exhibited diminished learning motivation and less effective development of computational skills compared to their high self-efficacy counterparts [74,89]. As hypothesized, DSC significantly predicted AI self-efficacy. This is consistent with findings by Wang et al. [58], who reported that digital self-efficacy was related to learning satisfaction, continuance intention, and perceived usefulness when using GenAI. Similarly, Getenet et al. [87] observed that digital self-efficacy is positively correlated with emotional engagement, which, in turn, promotes student satisfaction [90] and academic achievement [91]. In line with these findings, Yilmaz [92] adopted an experimental design and found that students in the intervention group exhibited significantly higher computational programming self-efficacy and computational thinking skills, such as critical thinking, problem-solving, and creativity, when compared to a control group. Taken together, these findings are consistent with our results, highlighting the role of digital safety competence in developing AI self-efficacy.
Lastly, the positive relationships between DSC and ethical accountability in personal and AI contexts support our Hypothesis 3. These results align with past research, revealing the impacts of DSC on ethical considerations in both real and virtual environments. For instance, Yang et al. [93] found that DSC is positively linked to AI ethics awareness and moral sensitivity among nursing students. In addition, they suggested that those with AI-related education exhibited heightened ethical consciousness regarding AI. Kwak et al. [94] reported comparable results, indicating that DSC shapes individuals’ ethical perspectives and attitudes toward AI technologies. Taken together, the above findings underscore the role of fostering digital safety competence in promoting ethical awareness and encouraging the responsible use of digital technologies in the evolving educational landscape.
While GenAI enriches students’ learning experiences, the ethical considerations associated with its integration into education should not be overlooked. A study by Yusuf et al. [31], which surveyed students and teachers from 76 countries, revealed a troubling paradox: although students expressed a commitment to avoiding plagiarism, many still intended to engage in such behavior in the future, and some did not even view the use of GenAI as cheating. This result is consistent with findings from Gruenhagen et al. [27], whose sample of Australian university students largely felt that the use of AI chatbots did not constitute a violation of academic integrity. Overreliance on GenAI tools might hinder social skills and interpersonal interaction, but most importantly, it undermines the traditional ethical values of education [95]. With the increasing integration of AI technologies into academic contexts, there is a pressing need to develop ethical awareness and responsible use of AI to uphold academic integrity [31,93]. Our study provides empirical evidence in this area of digital literacy research. Importantly, improving digital competence in the area of safety appears to enhance students’ ethical and moral considerations, underscoring the importance of equipping students with the skills needed to mitigate the ethical risks and threats associated with digital technologies. Educators can design educational programs to develop digital competence, particularly to promote cognitive thinking and informed ethical decision-making. By integrating digital safety competence into the curriculum, institutions not only cultivate higher-order thinking but also foster a deeper awareness of ethics and academic integrity in the digital age.
Despite the significance of this study, several limitations should be acknowledged. First, although self-reported instruments are generally acceptable for assessing digital competence [96], future research should consider using other approaches, such as objective or qualitative methods, to measure students’ actual competence levels, as self-estimation may sometimes be inaccurate [97]. Second, this study adopts a cross-sectional design, but a longitudinal approach would be beneficial in examining the effects of DSC over time. Third, this study’s sample comprises students from the social sciences, business, fashion, and applied infrastructure programs, and therefore, caution should be exercised when generalizing findings to other disciplines, particularly engineering sciences. The academic and industry-related contexts in engineering may differ substantially from those in the current sample, potentially influencing the relationships among the variables under study. Future research should explore whether the current findings extend to students from other disciplines, thereby capturing a better understanding of the cross-disciplinary applicability of the results. In addition, future studies should consider other variables to enrich the literature on AI in education. For example, Shahzad et al. [98] found that trust moderates the relationship between GenAI usage, self-efficacy, and learning performance. Also, we did not assess how safety knowledge, such as health-related risks, threats, and environmental protection, influence physical and psychological well-being. Lastly, the present findings are based on data from a single university, so future research is warranted to replicate and extend the current results in other educational contexts.

5. Conclusions

Universities must not only devise ethical guidelines for the use of AI tools but also educate students to apply these tools effectively and responsibly across all academic endeavors. Academic autonomy should strike a balance between leveraging the benefits of AI and maintaining ethical principles in human–computer interaction learning environments [99]. The present study extends the literature by testing the direct effect of DSC on learning outcomes, such as cognitive skills, AI ethics and moral competence, and AI self-efficacy. This study highlights the importance of training students to use digital tools properly through the development of digital safety competence, thereby better equipping them to uphold academic integrity, avoid fraudulent practices, and identify misinformation.
One of the primary outcomes of the present study is the potential for incorporating digital safety training into higher education curricula. Traditionally, universities have focused on establishing guidelines for ethical AI use. Our findings reveal that embedding DSC training within academic programs offers significant benefits: such integration would equip students to navigate digital learning environments ethically and responsibly. Our study offers a roadmap for creating interdisciplinary modules that address technical competence, ethical judgment, and cognitive development.
From a practical perspective, educators and policy makers are encouraged to design courses that integrate digital safety standards with hands-on training in AI tools. Institutions should also consider cultivating a campus-wide culture that fosters ethical digital practices among faculty and students. This holistic approach not only leverages the potential of AI technologies but also mitigates the risks associated with their misuse.

Author Contributions

Conceptualization, C.M.S.M. and I.Y.H.F.; Methodology, I.Y.H.F.; Formal analysis, C.M.S.M.; Data curation, C.M.S.M. and X.Z.; Writing—original draft, C.M.S.M. and I.Y.H.F.; Writing—review & editing, D.T.L.S.; Visualization, X.H.; Funding acquisition, D.T.L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University Grants Committee, Hong Kong and the Hong Kong Polytechnic University.

Institutional Review Board Statement

This study was approved by the IRB (reference no. HSEARS20231227002).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy restrictions. The dataset is derived from a university-wide survey containing participants’ personal information, and ethical considerations prevent its public release to protect the privacy of respondents.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ramos, A.; Casado, E.; Sevillano, U. Evaluating digital literacy levels among students in two private universities in Panama. South Fla. J. Dev. 2023, 4, 1869–1886. [Google Scholar] [CrossRef]
  2. UNESCO. AI and Education: Guidance for Policy-Makers; UNESCO: Paris, France, 2021. Available online: http://en.unesco.org/ (accessed on 12 April 2025).
  3. Kong, S.-C.; Cheung, W.M.-Y.; Zhang, G. Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Comput. Educ. Artif. Intell. 2021, 2, 100026. [Google Scholar] [CrossRef]
  4. Nguyen, L.A.T.; Habók, A. Tools for assessing teacher digital literacy: A review. J. Comput. Educ. 2023, 11, 305–346. [Google Scholar] [CrossRef]
  5. Jin, K.-Y.; Reichert, F.; Cagasan, L.P.; de la Torre, J.; Law, N. Measuring digital literacy across three age cohorts: Exploring test dimensionality and performance differences. Comput. Educ. 2020, 157, 103968. [Google Scholar] [CrossRef]
  6. Carretero, S.; Vuorikari, R.; Punie, Y. DigComp 2.1: The Digital Competence Framework for Citizens with Eight Proficiency Levels and Examples of Use; Publications of the European Union: Luxembourg, 2017. [Google Scholar] [CrossRef]
  7. Janssen, J.; Stoyanov, S.; Ferrari, A.; Punie, Y.; Pannekeet, K.; Sloep, P. Experts’ views on digital competence: Commonalities and differences. Comput. Educ. 2013, 68, 473–481. [Google Scholar] [CrossRef]
  8. Hernández-Martín, A.; Martín-del-Pozo, M.; Iglesias-Rodríguez, A. Pre-adolescents’ digital competences in the area of safety. Does frequency of social media use mean safer and more knowledgeable digital usage? Educ. Inf. Technol. 2021, 26, 1043–1067. [Google Scholar] [CrossRef]
  9. Kindarji, V.; Wong, W.H. Digital Literacy Will Be Key in a World Transformed by AI; Schwartz Reisman Institute: Toronto, ON, USA, 2023; Available online: https://srinstitute.utoronto.ca/news/digital-literacy-will-be-key-in-a-world-transformed-by-ai (accessed on 12 April 2025).
  10. OECD. Beyond Academic Learning: First Results from the Survey of Social and Emotional Skills; OECD Publishing: Paris, France, 2021. [Google Scholar] [CrossRef]
  11. Widowati, A.; Siswanto, I.; Wakid, M. Factors affecting students’ academic performance: Self-efficacy, digital literacy, and academic engagement effects. Int. J. Instr. 2023, 16, 885–898. [Google Scholar] [CrossRef]
  12. Mejías-Acosta, A.; D’Armas Regnault, M.; Vargas-Cano, E.; Cárdenas-Cobo, J.; Vidal-Silva, C. Assessment of digital competencies in higher education students: Development and validation of a measurement scale. Front. Educ. 2024, 9, 1497376. [Google Scholar] [CrossRef]
  13. Kryukova, N.I.; Chistyakov, A.A.; Shulga, T.I.; Omarova, L.B.; Tkachenko, T.V.; Malakhovsky, A.K.; Babieva, N.S. Adaptation of higher education students’ digital skills survey to Russian universities. Eurasia J. Math. Sci. Technol. Educ. 2022, 18, em2183. [Google Scholar] [CrossRef]
  14. Wulan, R.; Sintowoko, D.A.W.; Resmadi, I.; Yenni, S. Digital Skills in Education: Perspective from Teaching Capabilities in Technology. In Proceedings of the 9th BCM International Conference, Bandung, Indonesia, 1 September 2022; pp. 432–436. [Google Scholar] [CrossRef]
  15. Ferrari, A. Digital Competence in Practice: An Analysis of Frameworks; Publications Office of the EU: Luxembourg, 2012; Available online: https://ifap.ru/library/book522.pdf (accessed on 12 April 2025).
  16. Riina, V.; Stefano, K.; Yves, P. DigComp 2.2: The Digital Competence Framework for Citizens—With New Examples of Knowledge, Skills and Attitudes; JRC Research Reports JRC128415; Joint Research Centre: Brussels, Belgium, 2022. [Google Scholar]
  17. Blanc, S.; Conchado, A.; Benlloch-Dualde, J.V.; Monteiro, A.; Grindei, L. Digital competence development in schools: A study on the association of problem-solving with autonomy and digital attitudes. Int. J. STEM Educ. 2025, 12, 13. [Google Scholar] [CrossRef]
  18. Godaert, E.; Aesaert, K.; Voogt, J.; van Braak, J. Assessment of students’ digital competences in primary school: A systematic review. Educ. Inf. Technol. 2022, 27, 9953–10011. [Google Scholar] [CrossRef]
  19. Kim, D.; Vandenberghe, C. Ethical Leadership and Team Ethical Voice and Citizenship Behavior in the Military: The Roles of Team Moral Efficacy and Ethical Climate. Group Organ. Manag. 2020, 45, 514–555. [Google Scholar] [CrossRef]
  20. Shek, D.T.; Dou, D.; Zhu, X.; Chai, W. Positive youth development: Current perspectives. Adolesc. Health Med. Ther. 2019, 10, 131–141. [Google Scholar] [CrossRef]
  21. Estrada, F.J.R.; George-Reyes, C.E.; Glasserman-Morales, L.D. Security as an emerging dimension of Digital Literacy for education: A systematic literature review. J. E Learn. Knowl. Soc. 2022, 18, 22–33. [Google Scholar]
  22. Chiu, T.K. A holistic approach to the design of artificial intelligence (AI) education for K-12 schools. TechTrends 2021, 65, 796–807. [Google Scholar] [CrossRef]
  23. Carr, N. The Shallows: What the Internet Is Doing to Our Brains; WW Norton and Company: New York, NY, USA, 2020. [Google Scholar]
  24. Malik, A.; Khan, M.L.; Hussain, K.; Qadir, J.; Tarhini, A. AI in higher education: Unveiling academicians’ perspectives on teaching, research, and ethics in the age of ChatGPT. Interact. Learn. Environ. 2024, 33, 1–17. [Google Scholar] [CrossRef]
  25. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 342–363. [Google Scholar] [CrossRef]
  26. Cooper, G. Examining science education in ChatGPT: An exploratory study of generative artificial intelligence. J. Sci. Educ. Technol. 2023, 32, 444–452. [Google Scholar] [CrossRef]
  27. Gruenhagen, J.H.; Sinclair, P.M.; Carroll, J.-A.; Baker, P.R.A.; Wilson, A.; Demant, D. The rapid rise of generative AI and its implications for academic integrity: Students’ perceptions and use of chatbots for assistance with assessments. Comput. Educ. Artif. Intell. 2024, 7, 100273. [Google Scholar] [CrossRef]
  28. Mumtaz, S.; Parahoo, S.K.; Gupta, N.; Harvey, H.L. Tryst with the unknown: Navigating an unplanned transition to online examinations. Qual. Assur. Educ. 2023, 31, 4–17. [Google Scholar] [CrossRef]
  29. Smerdon, D. AI in essay-based assessment: Student adoption, usage, and performance. Comput. Educ. Artif. Intell. 2024, 7, 100288. [Google Scholar] [CrossRef]
  30. Intelligent. Nearly 1 in 3 College Students Have Used ChatGPT on Written Assignments; Intelligent: Seattle, WA, USA, 2023; Available online: https://intelligent.com (accessed on 12 April 2025).
  31. Yusuf, A.; Pervin, N.; Román-González, M. Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. Int. J. Educ. Technol. High. Educ. 2024, 21, 21–29. [Google Scholar] [CrossRef]
  32. Playfoot, D.; Quigley, M.; Thomas, A.G. Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing. Internet High. Educ. 2024, 62, 100950. [Google Scholar] [CrossRef]
  33. Chan, C.K.Y. Students’ perceptions of ‘AI-giarism’: Investigating changes in understandings of academic misconduct. Educ. Inform. Technol. 2024, 30, 8087–8108. [Google Scholar] [CrossRef]
  34. Chang, C.Y.; Kuo, H.C. The development and validation of the digital literacy questionnaire and the evaluation of students’ digital literacy. Educ. Inf. Technol. 2025; in press. [Google Scholar] [CrossRef]
  35. Redecker, C.; Punie, Y. Digital Competence Framework for Educators (DigCompEdu); European Union: Brussels, Belgium, 2017; Available online: https://publications.jrc.ec.europa.eu/repository/handle/JRC107466 (accessed on 12 April 2025).
  36. Zhu, W.; Huang, L.; Zhou, X.; Li, X.; Shi, G.; Ying, J.; Wang, C. Could AI Ethical Anxiety, Perceived Ethical Risks and Ethical Awareness About AI Influence University Students’ Use of Generative AI Products? An Ethical Perspective. Int. J. Hum. Comput. Interact. 2024, 41, 742–764. [Google Scholar] [CrossRef]
  37. Mekheimer, M.; Abdelhalim, W.M. The digital age students: Exploring leadership, freedom, and ethical online behavior: A quantitative study. Soc. Sci. Humanit. Open 2025, 11, 101325. [Google Scholar] [CrossRef]
  38. Santoni de Sio, F. Human Freedom in the Age of AI, 1st ed.; Routledge: London, UK, 2024. [Google Scholar]
  39. Burns, T.; Gottschalk, F. Educating 21st Century Children: Emotional Well-Being in the Digital Age, Educational Research and Innovation; OECD Publishing: Paris, France, 2019. [Google Scholar] [CrossRef]
  40. Hatlevik, O.E.; Tømte, K. Using multilevel analysis to examine the relationship between upper secondary students’ Internet safety awareness, social background and academic aspirations. Future Internet 2014, 6, 717–734. [Google Scholar] [CrossRef]
  41. Ahmad, M.A.; Teredesai, A.; Eckert, C. Fairness, Accountability, and Transparency in AI At Scale: Lessons from National Programs. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; p. 690. [Google Scholar]
  42. Microsoft. Responsible and Trusted AI. Available online: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai (accessed on 12 April 2025).
  43. Katende, E. Critical thinking and higher education: A historical, theoretical and conceptual perspective. J. Educ. Pract. 2023, 7, 19–39. [Google Scholar] [CrossRef]
  44. Lu, K.; Zhu, J.; Pang, F.; Shadiev, R. Understanding the relationship between college students’ artificial intelligence literacy and higher-order thinking skills using the 3P model: The mediating roles of behavioral engagement and peer interaction. Educ. Technol. Res. Dev. 2024. [Google Scholar] [CrossRef]
  45. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  46. Delcker, J.; Heil, J.; Ifenthaler, D.; Seufert, S.; Spirgi, L. First-year students AI-competence as a predictor for intended and de facto use of AI tools for supporting learning processes in higher education. Int. J. Educ. Technol. High. Educ. 2024, 21, 18. [Google Scholar] [CrossRef]
  47. Bakht Jamal, S.A.A.R. Analysis of the Digital Citizenship Practices among University Students in Pakistan. Pak. J. Distance Online Learn. 2023, 9, 50. [Google Scholar] [CrossRef]
  48. Fry, P.S. Fostering Children’s Cognitive Competence Through Mediated Learning Experiences: Frontiers and Futures; Charles C. Thomas, Publisher: Springfield, IL, USA, 1992. [Google Scholar]
  49. Vygotsky, L.S.; Cole, M. Mind in Society: The Development of Higher Psychological Processes; Harvard University Press: London, UK, 1978. [Google Scholar]
  50. Demetriou, A.; Spanoudis, G.C.; Greiff, S.; Makris, N.; Panaoura, R.; Kazi, S. Changing priorities in the development of cognitive competence and school learning: A general theory. Front. Psychol. 2022, 13, 954971. [Google Scholar] [CrossRef]
  51. Sun, R.C.; Hui, E.K. Cognitive competence as a positive youth development construct: A conceptual review. Sci. World J. 2012, 2012, 210953. [Google Scholar] [CrossRef] [PubMed]
  52. Facione, P.A.; Gittens, C.A.; Facione, N.C. Think Critically, 3rd ed.; Pearson: New York City, NY, USA, 2020. [Google Scholar]
  53. Fjeldheim, S.; Kleppe, L.C.; Stang, E.; Storen-Vaczy, B. Digital competence in social work education: Readiness for practice. Soc. Work Educ. 2024, 1–17. [Google Scholar] [CrossRef]
  54. Zhu, H.; Andersen, S.T. Digital competence in social work practice and education: Experiences from Norway. Nord. Soc. Work Res. 2021, 12, 823–838. [Google Scholar] [CrossRef]
  55. Zhu, C.; Sun, M.; Luo, J.; Li, T.; Wang, M. How to harness the potential of ChatGPT in education? Knowl. Manag. E Learn. 2023, 15, 133–152. [Google Scholar] [CrossRef]
  56. Lee, H.P.H.; Sarkar, A.; Tankelevitch, L.; Drosos, I.; Rintel, S.; Banks, R.; Wilson, N. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers; Microsoft: Redmond, WA, USA, 2025. [Google Scholar]
  57. Bandura, A. Self-Efficacy: The Exercise of Control; W.H. Freeman: New York, NY, USA, 1997. [Google Scholar]
  58. Wang, Y.Y.; Chuang, Y.W. Artificial intelligence self-efficacy: Scale development and validation. Educ. Inf. Technol. 2024, 29, 4785–4808. [Google Scholar] [CrossRef]
  59. Bewersdorff, A.; Hornberger, M.; Nerdel, C.; Schiff, D.S. AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students’ AI self-efficacy. Comput. Educ. Artif. Intell. 2025, 8, 100340. [Google Scholar] [CrossRef]
  60. Wang, B.; Rau, P.L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of the artificial intelligence literacy scale. Behav. Inform. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  61. Lee, Y.F.; Hwang, G.J.; Chen, P.Y. Impacts of an AI-based chatbot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educ. Technol. Res. Dev. 2022, 70, 1843–1865. [Google Scholar] [CrossRef]
  62. Chen, D.; Liu, W.; Liu, X. What drives college students to use AI for L2 learning? Modeling the roles of self-efficacy, anxiety, and attitude based on an extended technology acceptance model. Acta Psychol. 2024, 249, 104442. [Google Scholar] [CrossRef]
  63. Cabezas-González, M.; Casillas-Martín, S.; García-Valcárcel Muñoz Repiso, A. Influence of the use of video games, social media, and smartphones on the development of digital competence with regard to safety. Comput. Sch. 2024, 1–20. [Google Scholar] [CrossRef]
  64. Holmes, W.; Porayska-Pomsta, K. The Ethics of Artificial Intelligence in Education; Routledge: London, UK, 2023. [Google Scholar]
  65. O’Dea, X.; Ng, D.T.K.; O’Dea, M.; Shkuratskyy, V. Factors affecting university students’ generative AI literacy: Evidence and evaluation in the UK and Hong Kong contexts. Policy Futures Educ. 2024, 14782103241287401. [Google Scholar] [CrossRef]
  66. Kittredge, A.K.; Hopman, E.W.; Reuveni, B.; Dionne, D.; Freeman, C.; Jiang, X. Mobile language app learners’ self-efficacy increases after using generative AI. Front. Educ. 2025, 10, 1499497. [Google Scholar] [CrossRef]
  67. Siau, K.; Wang, W. Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. J. Database Manag. 2020, 31, 74–87. [Google Scholar] [CrossRef]
  68. López, C. Artificial Intelligence and Advanced Materials. Adv. Mater. 2023, 35, 2208683. [Google Scholar] [CrossRef] [PubMed]
  69. Gammoh, L.A. ChatGPT risks in academia: Examining university educators’ challenges in Jordan. Educ. Inform. Technol. 2025, 30, 3645–3667. [Google Scholar] [CrossRef]
  70. European Commission: Directorate-General for Education, Youth, Sport and Culture. In Key Competences for Lifelong Learning; Publications Office of the European Union: Luxembourg, 2019; Available online: https://data.europa.eu/doi/10.2766/569540 (accessed on 3 May 2025).
  71. Skov, A. The Digital Competence Wheel. Center for Digital Dannelse. Available online: https://digital-competence.eu/ (accessed on 12 April 2025).
  72. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Edu. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  73. Liu, Z. The digital competence of Chinese university students: A survey study. J. Educ. Educ. Res. 2023, 2, 35–38. [Google Scholar] [CrossRef]
  74. Tso, W.W.Y.; Reichert, F.; Law, N.; Fu, K.W.; de la Torre, J.; Rao, N.; Ip, P. Digital competence as a protective factor against gaming addiction in children and adolescents: A cross-sectional study in Hong Kong. Lancet Reg. Health West. Pac. 2022, 20, 100382. [Google Scholar] [CrossRef] [PubMed]
  75. Liu, C.-H.; Horng, J.-S.; Chou, S.-F.; Yu, T.-Y.; Lee, M.-T.; Lapuz, M.C.B. Digital capability, digital learning, and sustainable behaviour among university students in Taiwan: A comparison design of integrated mediation-moderation models. Int. J. Manag. Educ. 2023, 21, 100835. [Google Scholar] [CrossRef]
  76. Shek, D.T.L.; Yu, L.; Wu, F.K.Y.; Ng, C.S.M. General education program in a new 4-year university curriculum in Hong Kong: Findings based on multiple evaluation strategies. Int. J. Disabil. Hum. Dev. 2015, 14, 377–384. [Google Scholar] [CrossRef]
  77. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16. [Google Scholar]
  78. Ma, C.M.S.; Shek, D.T.L.; Fan, I.Y.H.; Zhu, X.X.; Hu, X.G.; Chan, K.; Yick, K.L.; Chu, R.W.C.; Lui, R.W.C. The Relationship Between the use of Generative Artificial Intelligence (GenAI) and Student Engagement: Insights from AI Self-Efficacy; AISE: Brussels, Belgium, Under peer review.
  79. Jöreskog, K.G. Structural analysis of covariance and correlation matrices. Psychometrika 1978, 43, 443–477. [Google Scholar] [CrossRef]
  80. Wolf, E.J.; Harrington, K.M.; Clark, S.L.; Miller, M.W. Sample size requirements for structural equation models: An evaluation of power, bias, and solution propriety. Educ. Psychol. Meas. 2013, 76, 913–934. [Google Scholar] [CrossRef] [PubMed]
  81. Browne, M.W.; Cudeck, R. Alternative Ways of Assessing Model Fit. In Testing Structural Equation Models; Bollen, K.A., Long, J.S., Eds.; Sage: Washington, DC, USA, 1993; pp. 136–162. [Google Scholar]
  82. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  83. Curran, P.J.; West, S.G.; Finch, J.F. The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychol. Methods 1996, 1, 16–29. [Google Scholar] [CrossRef]
  84. Shi, Y.; Qu, S. The effect of cognitive ability on academic achievement: The mediating role of self-discipline and the moderating role of planning. Front. Psychol. 2022, 13, 1014655. [Google Scholar] [CrossRef]
  85. Gök, B.; Erdoğan, T. The investigation of the creative thinking levels and the critical thinking disposition of pre-service elementary teachers. Ank. Univ. J. Fac. Educ. Sci. 2011, 44, 29–52. [Google Scholar] [CrossRef]
  86. Karakuş, İ. University students’ cognitive flexibility and critical thinking dispositions. Front. Psychol. 2024, 15, 1420272. [Google Scholar] [CrossRef] [PubMed]
  87. Getenet, S.; Cantle, R.; Redmond, P.; Albion, P. Students’ digital technology attitude, literacy and self-efficacy and their effect on online learning engagement. Int. J. Educ. Technol. High. Educ. 2024, 21, 3. [Google Scholar] [CrossRef]
  88. Pagani, L.; Argentin, G.; Gui, M.; Stanca, L. The impact of digital skills on educational outcomes: Evidence from performance tests. Educ. Stud. 2016, 42, 137–162. [Google Scholar] [CrossRef]
  89. Fagerlund, J.; Häkkinen, P.; Vesisenaho, M.; Viiri, J. Computational thinking in programming with Scratch in primary schools: A systematic review. Comput. Appl. Eng. Educ. 2021, 29, 12–28. [Google Scholar] [CrossRef]
  90. Luo, Y.; Xie, M.; Lian, Z. Emotional engagement and student satisfaction: A study of Chinese college students based on a nationally representative sample. Asia Pac. Educ. Res. 2019, 28, 283–292. [Google Scholar] [CrossRef]
  91. Dunn, T.; Kennedy, M. Technology Enhanced Learning in higher education; motivations, engagement and academic achievement. Comput. Educ. 2019, 137, 104–113. [Google Scholar] [CrossRef]
  92. Yilmaz, R.; Karaoglan Yilmaz, F.G. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Comput. Educ. Artif. Intell. 2023, 4, 100147. [Google Scholar] [CrossRef]
  93. Yang, Y. Influences of digital literacy and moral sensitivity on artificial intelligence ethics awareness among nursing students. Healthcare 2024, 12, 2172. [Google Scholar] [CrossRef]
  94. Kwak, Y.; Ahn, J.W.; Seo, Y.H. Influence of AI ethics awareness, attitude, anxiety, and self-efficacy on nursing students’ behavioral intentions. BMC Nurs. 2022, 21, 267. [Google Scholar] [CrossRef]
  95. Mogavi, H.; Deng, R.; Kim, C.J.; Zhou, J.; Kwon, P.; Hui, P. ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Comput. Hum. Behav. 2024, 2, 100027. [Google Scholar] [CrossRef]
  96. Devaux, M.; Sassi, F. Social disparities in hazardous alcohol use: Self-report bias may lead to incorrect estimates. Eur. J. Public Health 2016, 26, 129–134. [Google Scholar] [CrossRef] [PubMed]
  97. Porat, E.; Blau, I.; Barak, A. Measuring digital literacies: Junior high-school students’ perceived competencies versus actual performance. Comput. Educ. 2018, 126, 23–36. [Google Scholar] [CrossRef]
  98. Shahzad, M.F.; Xu, S.; Zahid, H. Exploring the impact of generative AI-based technologies on learning performance through self-efficacy, fairness & ethics, creativity, and trust in higher education. Educ. Inf. Technol. 2025, 30, 3691–3716. [Google Scholar] [CrossRef]
  99. Acosta-Enriquez, B.G.; Ballesteros, M.A.A.; Vargas, C.G.A.P.; Ulloa, M.N.O.; Ulloa, C.R.G.; Romero, J.M.P.; Jaramillo, N.D.G.; Orellana, H.U.C.; Anzoátegui, D.X.A.; Roca, C.L. Knowledge, attitudes, and perceived ethics regarding the use of ChatGPT among generation Z university students. Int. J. Educ. Integr. 2024, 20, 10. [Google Scholar] [CrossRef]
Figure 1. A hypothesized model of the mediating role of AI self-efficacy.
Figure 2. The impact of digital safety competence on cognitive competence, moral competence, AI self-efficacy, and AI ethics. All paths are standardized coefficients. ** p < 0.01.
Table 1. Descriptive statistics and correlations for all variables.
| Variable | M | SD | α | Skewness | Kurtosis | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. DSC | 4.12 | 0.71 | 0.81 | 0.71 | −0.01 | — | | | | |
| 2. AIET | 3.85 | 0.91 | — | 0.91 | 0.58 | 0.44 ** | — | | | |
| 3. MC | 3.24 | 0.57 | 0.48 | 0.57 | 0.62 | 0.17 * | 0.03 | — | | |
| 4. AISE | 3.36 | 0.81 | 0.87 | 0.81 | 1.63 | 0.29 ** | 0.24 ** | 0.03 | — | |
| 5. CC | 3.88 | 0.56 | 0.73 | 0.56 | −0.02 | 0.29 ** | 0.16 * | 0.29 ** | 0.13 | — |

DSC: digital safety competence; AIET: artificial intelligence ethics; MC: moral competence; AISE: artificial intelligence self-efficacy; CC: cognitive competence. ** p < 0.01; * p < 0.05.
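For readers who wish to reproduce the kinds of statistics reported in Table 1 (means, SDs, Cronbach’s alpha, skewness, kurtosis, and Pearson correlations) from raw questionnaire data, the computation can be sketched as follows. This is a minimal illustration using synthetic data, not the study’s dataset; all variable names are hypothetical, and the skewness/kurtosis formulas are the simple moment-based versions (statistical packages may apply small-sample bias corrections).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def describe(scores):
    """M, SD, and moment-based skewness / excess kurtosis, as in Table 1."""
    scores = np.asarray(scores, dtype=float)
    m, sd = scores.mean(), scores.std(ddof=1)
    z = (scores - m) / scores.std()  # population SD for the moment ratios
    return {"M": m, "SD": sd,
            "Skewness": (z ** 3).mean(),
            "Kurtosis": (z ** 4).mean() - 3.0}

# --- Synthetic illustration only (NOT the study's data): n = 159 ---
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(159, 4)).astype(float)  # four 5-point items
dsc = items.mean(axis=1)                                 # composite scale score
cc = dsc + rng.normal(0.0, 0.5, size=159)                # a correlated outcome

alpha = cronbach_alpha(items)
desc = describe(dsc)
r = np.corrcoef(dsc, cc)[0, 1]
# Significance of r via the t transform: for n = 159, |t| > ~1.96 ~ p < 0.05
t = r * np.sqrt((len(dsc) - 2) / (1.0 - r ** 2))
```

Note that the alpha here is computed on independent random items, so it will be near zero; with real scale items measuring one construct it would approach the 0.71–0.87 range shown in Table 1.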
Table 2. Results of linear mixed modeling for competence variables.
| | Model 1 | Model 2 | Estimate (Linear Term) | Estimate (Quadratic Term) |
|---|---|---|---|---|
| Cognitive competence | | | | |
| AIC | 259.81 | 259.39 | −0.79 | 0.12 * |
| BIC | 262.86 | 262.44 | | |
| Moral competence | | | | |
| AIC | 279.66 | 276.62 | −1.22 | 0.17 # |
| BIC | 282.72 | 279.67 | | |

Model 1: linear term only; Model 2: both linear and quadratic terms. # p = 0.01; * p < 0.05.
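The comparison in Table 2 rests on information criteria: a linear-only model (Model 1) is fitted, a quadratic term is added (Model 2), and the model with the lower AIC/BIC is preferred. The study used linear mixed modeling; the sketch below illustrates the same comparison logic with ordinary least squares on synthetic, purely illustrative data (the coefficients and variable names are assumptions, not the study’s estimates).

```python
import numpy as np

def fit_ols_ic(y, X):
    """OLS fit with Gaussian AIC/BIC up to an additive constant:
    AIC = n*ln(RSS/n) + 2k, BIC = n*ln(RSS/n) + k*ln(n),
    where k counts the regression coefficients plus the error variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1] + 1
    ll_part = n * np.log(float(resid @ resid) / n)
    return beta, ll_part + 2 * k, ll_part + k * np.log(n)

# Synthetic data: a mildly curved predictor-outcome relation, n = 159
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, 159)                           # hypothetical DSC scores
y = 3.0 - 0.3 * x + 0.1 * x ** 2 + rng.normal(0.0, 0.3, 159)

ones = np.ones_like(x)
_, aic1, bic1 = fit_ols_ic(y, np.column_stack([ones, x]))             # Model 1
beta2, aic2, bic2 = fit_ols_ic(y, np.column_stack([ones, x, x ** 2])) # Model 2

# A lower AIC/BIC for Model 2, together with a significant quadratic
# estimate (beta2[2]), mirrors the pattern of results reported in Table 2.
```

Because the synthetic outcome truly contains a quadratic component, Model 2 attains the lower AIC and BIC here, which is the same decision rule underlying the Model 1 versus Model 2 columns above.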
