Correction published on 20 March 2026, see Journal. Media 2026, 7(1), 67.
Article

Dissonance in the Algorithmic Era: Evaluating Showcase Digital Competence and Ethical Resilience in Communication Training

by
Esma Kucukalic Ibrahimovic
Department of International Relations, Faculty of Social Sciences & Communication, Universidad Europea de Valencia, 46010 Valencia, Spain
Journal. Media 2026, 7(1), 38; https://doi.org/10.3390/journalmedia7010038
Submission received: 7 January 2026 / Revised: 9 February 2026 / Accepted: 11 February 2026 / Published: 14 February 2026 / Corrected: 20 March 2026

Abstract

The disruptive acceleration of Generative Artificial Intelligence (GAI) has amplified the phenomenon of Global Friction (Globofriction), where technological speed undermines informational stability and weakens democratic resilience. Within higher education, this scenario demands training models capable of preparing future communicators to act as guarantors of truth amid automated erosion of discourse. This research evaluates the digital competence of Communication students through an interdisciplinary STEM-SSH (Science, Technology, Engineering, Mathematics—Social Sciences and Humanities) nexus approach based on the Kirkpatrick model. A mixed-methods methodology was employed, analyzing self-perception and cybersecurity data (n = 59), technical performance in the production of interactive infographics (n = 25), and qualitative evidence from reflection forums on systemic risks. The results reveal a “showcase digital competence”: a functional dissonance where future communicators demonstrate technical excellence under academic supervision but maintain negligent habits in their autonomous praxis. The study concludes that, given risks such as data porridge and strategic disinformation, it is urgent to transition toward a model of ethical resilience. This shift is imperative to reclaim the sovereignty of human judgment and ensure the integrity of public debate amidst current technological friction.

1. Introduction

Within the framework of ‘Globofriction’ (Aspachs, 2025), information integrity has evolved from a technical hurdle into a critical frontier for democratic resilience and social stability. The World Economic Forum (2025) reinforces this urgency by identifying AI-generated disinformation as the most severe short-term global risk, requiring ‘trust environments’ where human agency remains the ultimate guarantor of veracity. Consequently, evaluating whether future communicators are equipped to navigate this ‘clash phenomenon’—where systemic crises and mass disinformation outpace institutional response—becomes a priority (Kucukalic Ibrahimovic & Quirós-Fons, 2025). This institutional tension is exacerbated by what Simon et al. (2025) describe as a growing public anxiety over source traceability and the ‘data porridge’ effect in digital news environments.
Contemporary scholarship underscores a critical ambivalence in the adoption of Generative Artificial Intelligence (GAI) within the educational sphere. Blanco et al. (2026) identify a paradoxical perception among communication students, who categorize GAI as an indispensable instrument for professional productivity while simultaneously recognizing it as a primary vector of systemic vulnerability. Their study highlights that although future professionals exhibit high levels of usability and functional confidence, they remain profoundly concerned by the acceleration of disinformation flows and the resultant labor uncertainty. Consequently, these findings validate the necessity of an AI literacy framework that transcends mere technical proficiency to address the ethical, epistemic, and civic dimensions of synthetic content production.
Addressing such complexity demands a dual scientific approach rooted in STEM–SSH (Science, Technology, Engineering, Mathematics—Social Sciences and Humanities) methodological integration. As proposed by Hall et al. (2018) in the Science of Team Science framework, this synergy does not merely facilitate interdisciplinary encounters; it ensures the replicability needed for evaluation to move beyond operational efficiency toward a profound understanding of digital sovereignty. This approach is designed to satisfy the mandates of EU Regulation 2024/1689 (The European Parliament and the Council of the European Union, 2024), known as the AI Act, where AI literacy is defined not as an elective skill but as a requirement for technological sovereignty (Art. 4). Therefore, universities find themselves at a crossroads where they must transcend technical instruction to lead a robust system of algorithmic and ethical governance (Pedreño-Muñoz et al., 2024). This institutional imperative is further corroborated by interdisciplinary analyses that position GAI as a disruptive force against informational integrity. Al-kfairy et al. (2024) delineate a spectrum of critical risks—ranging from the algorithmic amplification of bias to the destabilization of democratic trust through synthetic media—that necessitate a shift in pedagogical priorities. Consequently, ethical resilience must transcend theoretical discourse to become a pragmatic component of training, ensuring that professional judgment remains traceable and human-centered under the pressures of algorithmic acceleration.
As Slimi and Carballido (2023) point out, transparency and the understanding of systems are fundamental to generating trust (Gallent-Torres et al., 2023; Sánchez-Rodríguez et al., 2024). This training, aligned with UNESCO (2025) guidelines and the European Digital Competence Framework (DigComp 2.2) (Vuorikari et al., 2022), is important not only for labor market success but also to bolster democratic resilience and consolidate a critical citizenship capable of navigating “polycrisis” environments (Sanchez-Acedo et al., 2024; Turpo-Gebera et al., 2025). However, the reality remains concerning: according to UNESCO (2025), despite technological progress, only 43% of countries have integrated media literacy into their curricula from a critical perspective. This regulatory gap is particularly alarming in the current AI ecology, where the proliferation of deepfakes and synthetic content demands competencies that transcend mere technical access. As noted by Sanchez-Acedo et al. (2024), the challenge lies in the students’ ability to detect algorithmic disinformation and successfully navigate environments of altered reality. This erosion of digital traceability is further exacerbated by the emerging patterns of consumption and production. Blanco et al. (2026) demonstrate that future practitioners are increasingly dependent on GAI systems that synthesize verified and fabricated data streams, effectively obscuring the provenance and accountability of information. This phenomenon aligns with broader scholarly concerns regarding the inherent opacity of generative models, where source attribution becomes not only ambiguous but technically irrecoverable. Such dynamics foster environments conducive to what this study conceptualizes as data porridge—a state in which informational coherence and transparency dissolve into an indistinguishable synthetic mixture.
The European Parliament (2025) advocates for a strategic convergence between legislation and education to fortify societal resilience against manipulation. This institutional mandate aligns with the evidence presented by Turpo-Gebera et al. (2025), who establish a direct correlation between critical judgment and the exercise of robust civic responsibility. Within this landscape, achieving a transition toward a sort of “ethical prosumer” model becomes essential—a perspective that, building upon the conscious production and consumption principles proposed by Pachuca Ortiz et al. (2025), empowers students to safeguard the integrity of public debate in the digital age.
Maintaining the university’s status as a “place of trust” amidst advancing automation is identified by the AI Observatory at Universidad Europea (2024b) as the primary challenge for higher education. This vision is consistent with the Spanish Conference of Rectors (CRUE, 2024) guidelines, which urge academic institutions to lead AI development in a way that enhances technological sovereignty. To operationalize these mandates, the Educational Innovation Unit (Universidad Europea, 2024a) has implemented a framework rooted in Bartram’s (2005) competency model. Under this schema, the responsible use of technology transcends mere technical dexterity; it is redefined as a comprehensive ethical capacity encompassing device protection, the secure management of digital identity, and the active safeguarding of privacy.
While these curricular efforts aim to cultivate responsible digital citizenship, the path from theory to practice reveals profound systemic friction. Although the institutional strategy is designed to deliver high-level capacity-building actions, empirical reality frequently exposes a persistent resistance within students’ autonomous habits. The drive for clear standards frequently clashes with the inertia of environments where transparency and responsible use (Slimi & Carballido, 2023; Pedreño-Muñoz et al., 2024) are not yet fully internalized. Only by confronting these daily practical challenges can higher education ensure the sovereignty of human judgment in the face of the systemic technological friction that defines the current era of Globofriction.

1.1. Background and Context

Under the classical media literacy paradigm, the validity of any training model depends on its capacity to transcend technical access and cultivate the critical faculties—rooted in the foundations established by Livingstone (2004)—necessary to analyze, evaluate, and create content within complex environments. This ideal requires higher education initiatives to be firmly aligned with indicators that provide a reliable scientific compass for learning assessment (Portillo Gil, 2021). Such rigor is a necessary response to the pervasiveness of the ‘Audit Society’ (Power, 1997), where administrative verification and operational efficiency often overshadow the qualitative depth and the genuine transformation of student competencies.
To move beyond this bureaucratic reductionism, this study adopts the Kirkpatrick and Kirkpatrick (2007) architecture—an evolution of the original 1970 four-level training evaluation model—to assess effectiveness through four progressive stages: Reaction (engagement); Learning (acquisition); Behavior (transfer); and Results (institutional impact). This evaluative logic transforms the research into a micro-experimental intervention designed to test the actual transfer of competencies through an Action-Research cycle. By utilizing the C-RIL methodology (Collaborative Research and Innovation-based Learning), students become co-creators of teaching materials—specifically through the “Countryfile” activity—allowing for a direct audit of how advanced knowledge is integrated into the core curriculum.
This study belongs to a longitudinal action-research process structured into two levels: a macro-institutional monitoring framework and a micro-experimental implementation. As part of this architecture, Universidad Europea (2025) leads a four-year project (2024–2028) to improve digital competence among undergraduate students using pre-test and post-test questionnaires based on DigComp 2.2 (Vuorikari et al., 2022). In its initial phase, the project reached n = 1738 responses, with 761 originating from the School of Social Sciences and Communication. While global results for Dimension 4 (Safety and Responsible Use) showed high self-perception (9.10 pre-test vs. 9.46 post-test), degree-specific analysis revealed significant asymmetries (ranging from 6.94 to 9.00). These disparities highlight the need for a dual scientific direction to adapt measurement instruments and reinforce ethical GAI integration.
A pilot intervention served as the direct precursor to the current proposal. This activity centered on an interactive infographic titled “Countryfile,” developed using Genially as a GAI tool (Martínez, 2024), to operationalize the higher levels of the Kirkpatrick model. The objective was to transform general self-perception into verifiable applied competence. The intervention followed three validation phases. In the diagnostic stage (n = 41), tests showed high theoretical understanding (8/10) in data protection and digital health. During technical execution, students successfully integrated digital security protocols (INCIBE, 2019, 2020; OSI, 2020) into their designs. However, the third phase of metacognitive reflection identified a clear “behavioral gap”: despite possessing regulatory knowledge, students reported only “occasional” proactive management of privacy and digital well-being (Kucukalic Ibrahimovic & Quirós-Fons, 2025).
The replicability of this model is evidenced by the activity developed at Universidad Rey Juan Carlos, awarded by the Global Female Trainers Fund 2025 from the Solutions Journalism Network (Kucukalic Ibrahimovic & Bañón Castellón, 2025). This project operationalized the “Three Cs” model—digital competence, gender-awareness conscience, and sustainability consideration—as a response to the phenomenon of ‘compassion fatigue’ and news sensationalism in climate emergencies, as conceptualized by Moeller (1999) and recently analyzed in the context of the 2024 DANA floods in Spain (a severe high-altitude isolated depression) (López Carrión & Llorca-Abad, 2025). Ultimately, this trajectory enables universities to function as “places of trust” (Universidad Europea, 2024b), bridging the gap between theoretical GAI knowledge and the daily habits required for resilient citizenship (UNESCO, 2025; Turpo-Gebera et al., 2025).

1.2. Objectives and Hypotheses

Based on the challenges previously outlined, this study addresses a fundamental research question: To what extent can an intervention model grounded in the STEM-SSH nexus effectively transform the technical digital literacy of future communicators into a proactive ethical governance of information? This question stems from the need to determine whether such pedagogical frameworks achieve a genuine change in habits or merely produce a “showcase effect” for academic compliance. To address this, the research evaluates the second edition of a pioneering pedagogical initiative at the School of Social Sciences and Communication (UEV), designed for the acquisition of knowledge, skills, and critical attitudes essential for secure professional praxis (Fernández-Torres et al., 2019) and the ethical use of GAI.
To operationalize this evaluation, the study aligns with the Kirkpatrick and Kirkpatrick (2007) model—an evolution of the 1970 training assessment framework—complementing quantitative analysis with a qualitative approach to capture the depth of metacognitive reflection in dialogic interaction spaces. This approach enables a traceability that transcends mere operational efficacy across four levels. At the Reaction level, the study seeks to validate Hypothesis 3 (H3: Professional Ethical Resilience), postulating that the use of reflective self-assessment instruments—aligned with EU Regulation 2024/1689 (The European Parliament and the Council of the European Union, 2024)—fosters a professional awareness capable of navigating the “data porridge” effect (Simon et al., 2025). This awareness is expected to strengthen social responsibility and technological sovereignty in the face of systemic disinformation (Turpo-Gebera et al., 2025).
Regarding learning, the intervention trains students in the critical auditing of GAI to discern reliable sources and identify threats such as phishing or algorithmic hallucinations. This seeks to confirm Hypothesis 2 (H2: GAI Auditing): that GAI-driven content production—specifically through intensive documentary verification in Genially (Martínez, 2024)—reinforces informational rigor. However, the backbone of the research lies at the Behavior level, which aims to test Hypothesis 1 (H1: The Prosumer Gap). This hypothesis suggests a persistent competency dissonance: students demonstrate high theoretical and technical performance (Level 2) but lack systematic security and verification habits in their autonomous praxis (Level 3), falling back into a “showcase” behavior rather than a practical transformation.
In practice, resolving this “showcase effect” involves achieving specific Learning Outcomes (LO) across five key areas: the protection of assets through robust security measures (INCIBE, 2020; OSI, 2020); the management of risks and personal data; documentary skills for identity protection; digital well-being against physical and mental health risks; and environmental sustainability aimed at reducing the technological footprint (Somos Digital, 2020). Thus, the research not only evaluates technical transfer—such as secure password selection or malware detection—but also measures the final impact (Level 4: Results) in shaping a culture of professional responsibility aligned with the DigComp 2.2 framework (Vuorikari et al., 2022) and the integrity standards of the World Economic Forum (2025).

2. Materials and Methods

The research is grounded in the interpretive-critical paradigm (Portillo Gil, 2021) and implemented through an Action-Research design. This framework allows the authors to actively intervene in the pedagogical process, using the “Countryfile” activity as a cycle of continuous improvement (Kucukalic Ibrahimovic & Quirós-Fons, 2025). Such an approach aligns with the necessity of navigating ethical challenges in Higher Education, where transparency and ethical policies serve as fundamental pillars of academic integrity (Slimi & Carballido, 2023).
The study was conducted at the Universidad Europea de Valencia during the 2025/2026 academic year. The sample consists of n = 59 undergraduate students from the School of Social Sciences and Communication. Participants were selected through non-probability convenience sampling, as they were enrolled in specific modules integrated into the university’s macro-institutional monitoring project (2024–2028). Despite its non-probabilistic nature, the sample is justified as a strategic case study; its highly international profile provides a “controlled laboratory” for analyzing the “behavioral gap” in future global actors who must navigate complex geopolitical data and international governance. The core intervention involved the creation of an interactive infographic titled “Countryfile” using the Genially tool (Universidad Europea, 2025; Martínez, 2024). This design is directly linked to the “Globofriction” scenario (Aspachs, 2025), requiring students to perform verifiable documentary searches and critical analysis of geopolitical data. The primary goal was for students to work securely and responsibly, protecting devices and content according to INCIBE (2019, 2020) and OSI (2020) protocols.

2.1. Disclosure of Generative AI Usage

In compliance with the journal’s transparency requirements, the authors disclose that GAI was utilized as an integral component of the experimental design. Specifically, GAI served as the technological object of study during the “Countryfile” activity, where students applied GAI for content co-creation and documentary verification. Furthermore, GAI was used by the author to assist in the linguistic refinement of the theoretical framework and the alignment of research instruments with international standards. All primary data analysis and the formulation of conclusions were conducted exclusively by the human researchers.

2.2. Instruments, Procedure, and Analytical Constraints

To ensure a deep-impact evaluation based on the Kirkpatrick and Kirkpatrick (2007) model, several instruments designed according to Universidad Europea (2024a) guidelines were applied. These tools were synchronized to monitor progress from initial self-perception to the actual transfer of competencies through the following structured battery, as detailed in Table 1:
  • Initial Monitoring Questionnaire: Based on the DigComp 2.2 framework (Vuorikari et al., 2022) and adapted from Somos Digital (2020) indicators. It evaluates self-perception across three dimensions: (1) Data protection and privacy; (2) Digital health and well-being; and (3) Environmental protection and sustainability. This instrument targets Kirkpatrick’s Levels 1 and 2.
  • Activity Table (Process Log): A tracking tool designed to monitor risk identification and privacy management during documentary research. It serves as the primary evidence for validating H2, linking technical praxis with the frameworks of Martínez (2024) and Bartram (2005).
  • Author-Developed Performance Rubric: This ad hoc instrument integrates Bartram’s (2005) competency framework with specific pedagogical criteria. It evaluates technical rigor, synthesis capacity, and—most critically—the veracity and verification of documentary sources. This component is specifically tailored to test student resilience within the “Globofriction” theoretical landscape (Aspachs, 2025; author’s own elaboration).
  • Reflective Checklist & Forum: These tools focus on self-assessment and qualitative interaction (Kirkpatrick’s Level 3). They are key for identifying the “behavioral gap” between known norms and executed practice (H1) and for collecting qualitative evidence of ethical awareness (H3) through content analysis (Kucukalic Ibrahimovic & Quirós-Fons, 2025). To ensure anonymity and ethical integrity in the Forum, participants were assigned alphanumeric codes (S1…).
  • Final Impact Triangulation: This final stage is specifically oriented toward evaluating Kirkpatrick’s Level 4. It correlates the quantitative data from the diagnostics with the qualitative outcomes of the intervention. This instrument assesses the broader impact of the training on democratic resilience and digital sovereignty, validating the transformation of students into “Ethical Prosecutors” capable of mitigating the automated erosion of discourse.
During the technical intervention phase, 25 infographics were submitted, reflecting both individual performance and cooperative co-creation. The triangulation of these results enables an evaluation of Kirkpatrick’s Level 4, focusing on the long-term impact of ethical resilience on professional identity. With full access to the statistical data, a robust correlation could be established between competency perception and actual performance. However, two main constraints were identified: (1) institutional time limitations, which prioritized data collection over preliminary interviews; and (2) the significant theoretical investment required to adapt the Kirkpatrick model to a GAI-specific activity. These factors reinforced an interpretive approach, prioritizing qualitative depth to evaluate the broader impact on digital sovereignty and the mitigation of ‘Globofriction’.

3. Results

3.1. Initial Assessment: Mastery of Regulatory Literacy (Kirkpatrick Level 2)

The findings confirm that students possess an exceptional level of theoretical knowledge regarding digital norms. As shown in Figure 1, 98% of the sample correctly identified identity theft prevention practices, highlighting the use of password managers (52.5%) and multi-factor authentication (45.7%). This pattern of excellence was consistent across infrastructure security: 95% recognized the importance of antivirus software and 83% identified secure Wi-Fi as a prerequisite for sensitive transactions.
In the SSH nexus dimensions (Digital Health and Ethics), the consensus was nearly total. A significant 96.6% demonstrated knowledge of practices to mitigate physical and mental risks (Figure 2). Furthermore, an identical percentage (96.6%) confirmed a solid ethical foundation by identifying that omitting information about digital risks violates the protection of vulnerable groups. This suggests that, at a cognitive level, the cohort has fully internalized the regulatory framework. The initial questionnaire reflects an exceptional level of regulatory literacy, placing the students at Level 2 of the Kirkpatrick scale (Learning).

3.2. Performance Evaluation: The Academic “Showcase” (n = 25)

The intervention phase resulted in the production of 25 interactive infographics. When analyzing the distribution of grades (Table 2), an academic success curve clearly skewed toward excellence is observed. The absence of grades below 7.0 among active participants indicates high technical proficiency in GAI management and information verification within the controlled environment of the intervention.
Furthermore, the almost complete absence of grades below 9.0 points across the 25 submissions validates the effectiveness of the pedagogical scaffolding (C-RIL). Students demonstrated both technical and strategic capacity under academic demand, meeting the expected quality standards for GAI management and design.
However, the honesty reflected in the Reflective Checklist reveals the central dissonance of this study. While students clearly “know” the norm and perform excellently under supervision, they admit that in their private, autonomous praxis:
  • File Scanning: Conducted only “occasionally.”
  • Privacy Policies: Read only “at times.”
  • Offline Mode: Systematically ignored.
This reveals that high academic performance (Kirkpatrick Level 2) does not automatically translate into a change in habits (Kirkpatrick Level 3), confirming a persistent gap between theoretical literacy and habitual security behaviors.

3.3. Self-Perception Assessment: The Prosumer Gap (H1)

The third phase of the study shifts the focus from academic performance to actual, autonomous behavior. It is here that the “Prosumer Gap” (H1) clearly emerges. Despite the high academic scores previously documented, the honesty of the self-assessment reveals a set of risky patterns (Figure 3) where digital threats are theoretically known but not systematically managed.
Based on the aggregated analysis of self-reports, the following behavioral patterns were identified, marking the transition from theoretical literacy to daily praxis:
  • Systemic Security Negligence: While students ensure that antivirus software is installed, the actual scanning of study material is frequently relegated to “sometimes” or directly ignored. The prevalence of this inconsistent management suggests that risks are recognized but not addressed through systematic protocols.
  • The Privacy Paradox: Students demonstrate a “check-the-box” mentality. They may adjust basic cookie preferences, yet they admit to “never” reading the privacy policies or terms and conditions of the GAI tools they utilize. This confirms a conscious negligence often driven by a perceived lack of time or interest.
  • Infrastructure Dependency and Self-Regulation: Working in “offline mode” is the most systematically rejected practice, identified by the majority as something they “didn’t do.” This dependency is linked to inconsistent self-regulation; students struggle to establish prudent screen time or use power-saving settings, even while admitting to significant digital fatigue.
This stark contrast between high scores in theoretical tests and the irregularity of private habits confirms the “Showcase Digital Competence” architecture. The student possesses the technical and theoretical tools to protect themselves—the “display case” of academic success—but lacks the attitudinal consistency to apply them once the external academic pressure is removed. This finding provides empirical evidence for H1, demonstrating that cognitive mastery (Kirkpatrick Level 2) is insufficient to ensure a genuine change in habits (Kirkpatrick Level 3).

3.4. Qualitative Analysis: Perceptions of Traceability and Governance from the Forum

Beyond the quantitative metrics, the dialogic reflections in the forum provided a dynamic window into the students’ internal critical process. Through axial coding of the testimonies collected, a sophisticated understanding of GAI emerged—not as a simple tool but as a systemic challenge to truth and governance.
The students’ discourse centers first on the erosion of epistemic integrity. There is a collective concern regarding the opacity of AI synthesis (Table 3); participants perceive the output not as a structured report but as a “data porridge”. This metaphor, which gained significant traction during the debates, describes a mixture so homogeneous that it effectively “erases” original authorship and makes bibliographic traceability an impossible task. This lack of transparency is seen as a direct violation of accountability, especially when the AI begins to blend verified facts with fabricated “hallucinations.”
This technical concern quickly scales to a geopolitical dimension. Students demonstrated a remarkable ability to project these algorithmic failures onto real-world social stability. In their analysis of crisis scenarios, GAI is no longer seen as a productivity aid but as a potential “disinformation weapon”. The fear expressed in the forum is that the AI’s tendency to “omit critical factors” or “minimize political actions” could deepen institutional distrust during emergencies, turning a technical bias into a democratic threat.
However, this diagnosis of risk leads to a powerful reclaiming of human agency. Faced with the prospect of total automation, the reflections show a unanimous rejection of any model where the machine replaces the person. Instead, students demand a shift toward an ancillary role for technology. In this vision, the AI is relegated to support tasks—brainstorming or conceptual clarification—while the sovereignty of human judgment remains the only non-transferable competence for academic rigor and ethical verification.
The qualitative forum acted as a space for deeper exploration, where students defined the boundaries of their own digital sovereignty. By identifying the “data porridge” that precludes traceability and recognizing the technology’s potential as a “disinformation weapon,” they move beyond passive consumption. They conclude that the only safeguard against algorithmic opacity is a return to human responsibility, setting the stage for a model of ethical resilience over mere technical automation.

4. Discussion and Conclusions

The pedagogical intervention developed in this study has allowed for a multidimensional evaluation of responsible technology use, addressing the fundamental tension between technical literacy and ethical governance. The integral analysis of the results reveals a complex competency architecture that culminates in what this study defines as “Showcase Digital Competence.” This term describes a profound behavioral dissonance: the subject demonstrates high technical and theoretical performance in controlled environments (Kirkpatrick Levels 1 and 2) yet maintains negligent habits in their autonomous digital praxis (Level 3). This paradox is empirically evidenced by the stark contrast between the 98% accuracy in initial security diagnostics and the academic success in the “Countryfile” project (with a concentration of grades >9.0), versus a confessed daily reality where reading privacy policies is non-existent (“never”) and malware scanning is an occasional practice.
This finding systematically validates the three research hypotheses. H1 (The Prosumer Gap) is confirmed, as students “activate” their rigor only under academic demand, demonstrating a 95% mastery of theoretical security that fails to translate into their digital autonomy. H2 (Critical AI Auditing) is validated, showing that under the structured C-RIL framework, students act as effective technology auditors, reinforcing informational rigor and mitigating the risk of algorithmic hallucinations. Finally, H3 (Professional Ethical Resilience) is ratified; the qualitative denunciation of GAI as a “data porridge”—a concept aligned with the latest reports from the Reuters Institute (Simon et al., 2025) on the erosion of news traceability—indicates that the intervention successfully fostered a critical awareness in 95% of the participants.
The central inquiry of this research sought to determine to what extent a STEM-SSH nexus model can effectively transform literacy into proactive ethical governance. The forum reflections provided a unique window into this transformation: students emerged as “Ethical Prosecutors,” capable of identifying systemic threats and demanding accountability from AI developers. They eloquently argued against the opacity of GAI, demonstrating a sophisticated sociopolitical awareness of how the “data porridge” facilitates the use of AI as a “disinformation weapon.” However, the evidence indicates that this “proactive governance” remains largely theoretical. This “showcase effect” warns of a significant gap: future communicators can perform as rigorous ethical judges in a public or academic debate while remaining vulnerable prosumers in their private lives. Professional awareness—aligned with EU Regulation 2024/1689, also known as the AI Act (The European Parliament and the Council of the European Union, 2024)—is successfully fostered, strengthening technological sovereignty (Turpo-Gebera et al., 2025), but the transition from “knowing” to “doing” is hindered by Globofriction (Aspachs, 2025). In this scenario, technological acceleration challenges informational integrity, making it difficult for students to ethically process innovation against the inertia of private habits.
To transcend mere academic compliance and achieve a genuine change in habits, following the insights of Kucukalic Ibrahimovic and Quirós-Fons (2025), it is necessary to move beyond technical instruction. It is not enough to teach how to use the tool; we must teach how to inhabit the uncertainty of the algorithmic era. As current literature demands, education must foster a “human sovereignty” that does not depend on constant supervision. This transformation requires an experiential and ethical pedagogy capable of simulating real-world digital frictions, enabling students to confront—in a safe environment—the tangible consequences of negligent digital behavior. Such learning must be sustained by institutionalized critical dialog, with permanent discussion spaces that reduce the gap between academic norms and private praxis. By encouraging an ethical stance that remains active beyond institutional demand, we cultivate a professional identity that internalizes rigor as part of its ordinary reflexes. Finally, the shift toward sustainable resilience protocols involves practical strategies that align digital well-being with professional responsibility (Vuorikari et al., 2022), ensuring that ethical governance becomes a “reflex action” rather than a context-dependent requirement.
To further operationalize the C-RIL framework, the pedagogical interventions integrate structured actions that directly address this behavioral transformation. First, advanced source-verification protocols—similar to those applied in capacity-building initiatives that combine digital competence, gender awareness, and sustainability—can be embedded into each phase of the activity to strengthen documentary traceability and counteract the “data porridge” phenomenon. Second, crisis-simulation projects, informed by analyses of informational crises such as the 2024 DANA (López Carrión & Llorca-Abad, 2025), expose students to real-time informational risks in situated, high-pressure contexts. Third, multidimensional assessment rubrics from the 3C model (Kucukalic Ibrahimovic & Bañón Castellón, 2025) can be aligned with C-RIL to reinforce accountability. Reflective checkpoints then consolidate habits, turning ethical reflection into continuous metacognitive practice. Complementing these strategies, structured self-governance routines—including weekly traceability audits, personal risk-awareness logs, offline verification sessions, and dual-layer assessment of both AI accuracy and verification rigor—ensure that ethical resilience is consolidated as a transferable, internalized habit rather than a supervised performance.
In the final analysis, this research confirms that in the face of Globofriction, technical training is a blunt tool if it is not accompanied by an attitudinal transformation. Integrating the STEM-SSH nexus is not merely a curricular improvement; it is an essential pillar of democratic and informative resilience. Following Hall et al. (2018), the success of these models lies in transdisciplinary scientific collaboration. Only by transcending technical instruction will it be possible to transform knowledge into defensive and responsible professional conduct, capable of preserving rigor and public debate in the era of Artificial Intelligence.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Committee of the Doctoral School of Research at the European University (protocol code 2026-122; date of approval 19 January 2026).

Informed Consent Statement

Informed consent was waived by the Ethics Committee of Universidad Europea due to the nature of the study as a Teaching Innovation Activity (Project Code CI: 2026-122). The research involved the analysis of pre-existing, fully anonymized pedagogical data collected during routine academic assessment, ensuring no risk to participants and maintaining complete confidentiality.

Data Availability Statement

The data presented in this study are available on request from the author. The data are not publicly available as they consist of internal student assessment records and primary educational materials.

Acknowledgments

The author would like to thank the students of the 2025/2026 academic cycle for their participation and honesty in the self-reflective phase of this project. Special thanks to Gloria Sánchez (Vice-Rectorate for Quality, Innovation, and Accreditation, UEV) for providing the anonymized data from the student questionnaires, and to Ali Esquembre for his technical support in data processing and the final visualization of the graphics. During the preparation of this manuscript, the author used Gemini 3 (Google) for the purposes of language translation from Spanish to English, stylistic polishing, and structural formatting of the academic text. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GAI: Generative Artificial Intelligence
C-RIL: Collaborative Research and Innovation-based Learning
STEM: Science, Technology, Engineering, and Mathematics
SSH: Social Sciences and Humanities
DigComp: European Digital Competence Framework for Citizens
INCIBE: Instituto Nacional de Ciberseguridad (Spanish National Cybersecurity Institute)
OSI: Oficina de Seguridad del Internauta (Internet User Security Office)
UEV: Universidad Europea de Valencia
CRUE: Conferencia de Rectores de las Universidades Españolas (Conference of Rectors of Spanish Universities)
SDG: Sustainable Development Goals
LO: Learning Outcome
H1, H2, H3: Research Hypotheses
DANA: Depresión Aislada en Niveles Altos (isolated high-altitude depression), the weather phenomenon that caused the extreme rainfall event of 2024.

References

  1. Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical challenges and solutions of generative AI: An interdisciplinary perspective. Informatics, 11(3), 58. [Google Scholar] [CrossRef]
  2. Aspachs, O. (2025). La globofricción. In Anuario internacional CIDOB 2026 (F. Fàbregues, & O. Farrés, Coord.; pp. 42–51). CIDOB edicions. [Google Scholar]
  3. Bartram, D. (2005). The great eight competencies: A criterion-centric approach to validation. Journal of Applied Psychology, 90(6), 1185–1203. [Google Scholar] [CrossRef] [PubMed]
  4. Blanco, S., Sánchez González, M., Martín-Martín, F. M., & Sánchez Gonzales, H. M. (2026). The challenge of generative artificial intelligence for future communication professionals: Experiences and usability. Telecommunications Policy, 50(1), 103083. [Google Scholar] [CrossRef]
  5. CRUE. (2024). La inteligencia artificial generativa en la docencia universitaria: Oportunidades, desafíos y recomendaciones. Crue Universidades Españolas. Available online: https://www.crue.org/wp-content/uploads/2024/03/Crue-Digitalizacion_IA-Generativa.pdf (accessed on 10 February 2026).
  6. European Parliament. (2025). Media literacy–EPRS briefing: Strengthening citizen resilience against disinformation. European Parliamentary Research Service. Available online: https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/772886/EPRS_BRI(2025)772886_EN.pdf (accessed on 10 February 2026).
  7. Fernández-Torres, Y., Gutiérrez-Fernández, M., & Palomo-Zurdo, R. (2019). ¿Cómo percibe la banca cooperativa el impacto de la transformación digital? CIRIEC-España, Revista de Economía Pública, Social y Cooperativa, (95), 233–264. [Google Scholar] [CrossRef]
  8. Gallent-Torres, C., Zapata-González, A., & Ortego-Hernando, J. L. (2023). The impact of Generative Artificial Intelligence in higher education: A focus on ethics and academic integrity. RELIEVE-Revista Electrónica de Investigación y Evaluación Educativa, 29(2), 1–19. [Google Scholar] [CrossRef]
  9. Hall, K. L., Vogel, A. L., Huang, G. C., Serrano, K. J., Rice, E. L., Tsakraklides, S. P., & Fiore, S. M. (2018). The science of team science: A review of the empirical evidence and research gaps on collaboration in science. Stanford University. Available online: https://domteamscience.stanford.edu/wp-content/uploads/2023/02/The-Science-of-Team-Science-A-Review-of-the-Empirical-Evidence-and-Research-Gaps-on-Collaboration-in-Science.pdf (accessed on 10 February 2026).
  10. INCIBE-Instituto Nacional de Ciberseguridad. (2019). Guía para aprender a identificar fraudes online. Available online: https://www.incibe.es/ciudadania/formacion/guias/guia-para-aprender-identificar-fraudes-online (accessed on 10 February 2026).
  11. INCIBE-Instituto Nacional de Ciberseguridad. (2020). Guía de ciberataques. Available online: https://www.incibe.es/ciudadania/formacion/guias/guia-de-ciberataques (accessed on 10 February 2026).
  12. Kirkpatrick, D. L., & Kirkpatrick, J. D. (2007). Implementing the four levels: A practical guide to effective training evaluation. Berrett-Koehler Publishers. [Google Scholar]
  13. Kucukalic Ibrahimovic, E., & Bañón Castellón, L. (2025). Integrando competencia digital, perspectiva de género y sostenibilidad en el periodismo de soluciones: Una estrategia de capacity building probada en el Máster de Periodismo Internacional. In A. Diestro Fernández (Ed.), Inteligencia artificial responsable y sostenibilidad curricular: Oportunidades y retos para la innovación docente (pp. 36–38). Universidad Europea de Madrid. Available online: https://hdl.handle.net/11268/16277 (accessed on 12 March 2026).
  14. Kucukalic Ibrahimovic, E., & Quirós-Fons, A. (2025). Educación superior y ciudadanía digital en la era de la IA: Un enfoque competencial necesario. In Propuestas educativas en la era de la IA. Regulación y uso ético (pp. 351–365). Dykinson. [Google Scholar]
  15. Livingstone, S. (2004). Media literacy and the challenge of new information and communication technologies. The Communication Review, 7(1), 3–14. [Google Scholar] [CrossRef]
  16. López Carrión, A. E., & Llorca-Abad, G. (2025). Desinformación durante la crisis producida por la DANA de 2024 en España: Análisis, características, tipologías y desmentidos. Revista Mediterránea De Comunicación, 16(2), e29303. [Google Scholar] [CrossRef]
  17. Martínez, S. (2024, January 14). Digital Heroes [Presentación de Genially]. Available online: https://view.genial.ly/65a413f729757a0013692dab/interactive-content-digital-heroes (accessed on 10 February 2026).
  18. Moeller, S. D. (1999). Compassion fatigue: How the media sell disease, famine, war and death. Routledge. [Google Scholar]
  19. OSI-Oficina de Seguridad del Internauta. (2020). Guía de privacidad y seguridad en Internet. Instituto Nacional de Ciberseguridad (INCIBE). Available online: https://www.osi.es/es/guias-y-infografias (accessed on 10 February 2026).
  20. Pachuca Ortiz, R., Hernández Pacheco, F. J., García Cerda, A., Ricárdez Cortés, R. A., & García Rivera, X. (2025). La alfabetización mediática desde una mirada crítica en tiempos digitales. Ciencia Latina Revista Científica Multidisciplinar, 9(3), 6255–6283. [Google Scholar] [CrossRef]
  21. Pedreño Muñoz, A., González Gosálbez, R., Mora Illán, T., Pérez Fernández, E., Ruiz Sierra, J., & Torres Penalva, A. (2024). Informe IA en universidades: Retos y oportunidades. 1MillionBot Group. Available online: https://raeia.org/books/la-inteligencia-artificial-en-las-universidades-retos-y-oportunidades/ (accessed on 10 February 2026).
  22. Portillo Gil, G. (2021). Modelo de evaluación Kirkpatrick en Educación para el Desarrollo y ciudadanía global: Recomendaciones para su adaptación en proyectos de Farmamundi [Master’s thesis, Universitat Politècnica de València]. Riunet. Available online: https://riunet.upv.es/server/api/core/bitstreams/4a80fed7-e515-42e0-81e6-4328bbc2aa2c/content (accessed on 10 February 2026).
  23. Power, M. (1997). The audit society: Rituals of verification. Oxford University Press. [Google Scholar]
  24. Sanchez-Acedo, A., Carbonell-Alcocer, A., Gertrudix, M., & Rubio-Tamayo, J. L. (2024). Retos de la Alfabetización Mediática e Informacional en la ecología de la Inteligencia Artificial: Deepfakes y desinformación. Communication & Society, 37(4), 223–239. [Google Scholar] [CrossRef]
  25. Sánchez Rodríguez, A. N., Martínez Romero, M. E., Rodríguez Agreda, C. J., Romero Saldarriaga, J. G., & Romero Saldarriaga, M. A. (2024). Impacto de la inteligencia artificial en las prácticas educativas: Percepciones y actitudes del profesorado. LATAM Revista Latinoamericana de Ciencias Sociales y Humanidades, 5(2), 1038–1055. [Google Scholar] [CrossRef]
  26. Simon, F. M., Nielsen, R. K., & Fletcher, R. (2025). Generative AI and news report 2025: How people think about AI’s role in journalism and society. Reuters Institute for the Study of Journalism. Available online: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2025-10/Gen_AI_and_News_Report_2025.pdf (accessed on 10 February 2026).
  27. Slimi, Z., & Carballido, B. V. (2023). Navigating the ethical challenges of artificial intelligence in higher education: An analysis of seven global ai ethics policies. TEM Journal, 12(2), 590–602. [Google Scholar] [CrossRef]
  28. Somos Digital. (2020). Guía tecnologías para la sostenibilidad ambiental. Available online: https://somos-digital.org/wp-content/uploads/2020/03/Guia-TecnologIas-para-la-sostenibilidad-ambiental_Asociacion_Somos_Digital.pdf (accessed on 10 February 2026).
  29. The European Parliament and the Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed on 10 February 2026).
  30. Turpo-Gebera, O., Rosales-Márquez, C., Gutiérrez-Aguilar, O., & Rivera-Mansilla, E. (2025). Alfabetización Mediática e Informacional y Formación Ciudadana en estudiantes universitarios. Revista Latina De Comunicación Social, (83), 1–23. [Google Scholar] [CrossRef]
  31. UNESCO. (2025). Media and information literacy for all: Closing the gaps in global policy and practice. United Nations Educational, Scientific and Cultural Organization. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000396030 (accessed on 10 February 2026).
  32. Universidad Europea. (2024a). Guía para el desarrollo de la competencia digital. Unidad de Innovación del Vicerrectorado de Profesorado e Investigación. [Google Scholar]
  33. Universidad Europea. (2024b). La universidad en la era de la Inteligencia Artificial (2.º Informe del Observatorio de IA). Available online: https://universidadeuropea.com/resources/media/documents/OBSERVATORIO_IA_-_Informe_Abril_24.pdf (accessed on 10 February 2026).
  34. Universidad Europea. (2025). Plan de desarrollo competencia digital: Presentación inicial curso 2025-2026. Unidad de Proyectos de Evaluación de Aprendizajes. [Google Scholar]
  35. Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The digital competence framework for citizens. Publications Office of the European Union. [Google Scholar] [CrossRef]
  36. World Economic Forum. (2025). Rethinking media literacy: A framework for information integrity. Available online: https://www.weforum.org/publications/rethinking-media-literacy-a-new-ecosystem-model-for-information-integrity/ (accessed on 10 February 2026).
Figure 1. Distribution of responses regarding practices to prevent identity theft (n = 59).
Figure 2. Knowledge of practices to prevent health risks in the use of digital technology (n = 59).
Figure 3. Primary Evidence of Student Self-Assessment. Sample of the reflective checklist used to monitor the transition from theoretical knowledge to autonomous digital habits. Source: Primary research data.
Table 1. Methodological Operationalization Matrix: Technical Details, Kirkpatrick Impact Levels, and Hypothesis Traceability.
Phase/Parameter | Kirkpatrick Level | Instrument/Action | Learning Outcome (LO) | Reference Framework
Population | … | Convenience Sampling | n = 59 (Social Sciences) | Universidad Europea (2024a)
Deliverables | … | Countryfile Activity | 25 Interactive Infographics | Universidad Europea (2024a)/Martínez (2024)
1. Diagnosis | Levels 1 and 2 | Initial Test (Google Forms) | Privacy, Health, and Sustainability | Vuorikari et al. (2022)/Somos Digital (2020)
2. Intervention | Level 2 | Process Log (Activity Table) | Risk identification and GAI management (H2) | Martínez (2024)/Bartram (2005)
3. Evaluation | Level 2 | Author-developed Rubric | Technical rigor and source verification | Aspachs (2025)/Self-elaboration
4. Attitude | Level 3 | Checklist and Forum | Behavioral gap (Norm vs. Practice) (H1) | Kirkpatrick and Kirkpatrick (2007)/Bartram (2005)
5. Final Impact | Level 4 | Result Triangulation | Democratic resilience and ethics (H3) | Kucukalic Ibrahimovic and Quirós-Fons (2025)/Self-elaboration
Note. GAI = Generative Artificial Intelligence; LO = Learning Outcome; H1–H3 = Research Hypotheses. All “Author-developed” instruments and the “Self-elaboration” frameworks are based on the Action-Research design implemented for the 2025/2026 academic cycle.
Table 2. Distribution of Academic Performance and Competency Achievement Levels (n = 25).
Grading Level | Frequency | Competency Achievement Level
Excellent (9.0–10.0) | High concentration | Kirkpatrick Level 2 (Superior Learning)
Good (7.0–8.0) | Residual concentration | Kirkpatrick Level 2 (Adequate Learning)
Sufficient/Low (<7.0) | Only non-submissions | Need for attitudinal reinforcement
Note. The absence of grades below 7.0 among active participants indicates high technical proficiency in GAI management and information verification within the controlled environment of the intervention.
Table 3. Qualitative Analysis Matrix: Student Testimonies on Ethical Awareness and GAI Governance Dimensions (Academic Cycle 2025/2026).
Category | Subject | Key Testimony (Direct Evidence) | Identified Impact
Epistemic Integrity | S1 | “The output is a ‘data porridge’; everything is mixed so homogeneously that you lose track of the original source.” | Erosion of rigor: Loss of bibliographic traceability.
Epistemic Integrity | S2 | “AI creates logical explanations, but mixes real facts with false information without showing references.” | Inconsistency: Risk of academic hallucinations.
Systemic Risk | S4 | “AI can act as a weapon of disinformation… it amplifies false narratives faster than we can debunk them.” | Manipulation: Threat to public opinion during crises.
Systemic Risk | S5 | “In the DANA case, AI tends to omit critical factors like climate change, generating distrust.” | Institutional Bias: Omission of political and climatic variables.
Ethical Governance | S7 | “It is useful for clarifying concepts, but it should never replace peer-reviewed sources.” | Ancillary Role: AI as a support tool, not a replacement.
Ethical Governance | S8 | “Sovereignty is ours; the researcher must take full responsibility for verification and bibliography.” | Human Agency: Reclaiming critical judgment.
Note. Testimonies were collected during the 2025/2026 academic cycle. GAI = Generative Artificial Intelligence. Participants are identified using alphanumeric codes (S1, S2, etc.) to ensure anonymity in accordance with ethical research protocols.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kucukalic Ibrahimovic, E. Dissonance in the Algorithmic Era: Evaluating Showcase Digital Competence and Ethical Resilience in Communication Training. Journal. Media 2026, 7, 38. https://doi.org/10.3390/journalmedia7010038
