Article

Perceptions of Pre-Service Teachers in Early Childhood and Primary Education on GenAI-Generated Deepfakes

by José María Campillo-Ferrer 1 and Pedro Miralles-Sánchez 2,*
1 Department of Didactics of Mathematical and Social Sciences, University of Murcia, 30100 Murcia, Spain
2 Department of Humanities and Educational Sciences, Public University of Navarra, 31006 Pamplona, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(4), 575; https://doi.org/10.3390/educsci16040575
Submission received: 26 January 2026 / Revised: 29 March 2026 / Accepted: 2 April 2026 / Published: 4 April 2026

Abstract

This study explored pre-service teachers’ views on the use of generative artificial intelligence (Gen AI) in the production of misinformation, addressing the potential challenges posed by deepfakes generated by these online resources. A quantitative approach was used: 133 pre-service teachers participated in the study, all of whom were enrolled in primary education degree programmes in the Region of Murcia, Spain. The results indicated a clear awareness of the risks posed by these digital tools in the generation of deepfakes. Respondents became aware of the potential threats deepfakes may pose on the internet, threats that can be further exacerbated when such content is disseminated in educational environments. Recognising the relevance of pre-service teachers’ concerns can help educators and educational administrations take steps to keep Gen AI within ethical parameters and thus reduce the spread of misinformation. In social science teaching and learning, further research is needed to equip students with the essential skills to distinguish between accurate and inaccurate information. For all these reasons, it seems essential to strengthen research in media literacy education so that identification skills can be applied in assessment processes. These improvements can take the form of evidence-based approaches, such as AI literacy programmes or media literacy modules, to facilitate student learning and ensure better quality education.

1. Introduction

In the era of digital media, Gen AI has added value to the digital world with powerful applications that are growing quickly and delivering digital services effectively, opening up new creative opportunities that have had a positive impact on advanced and transformative virtual scenarios around the world. These innovative proposals are boosting creative mindsets, approaches and workflows in many spheres, beginning with the ideation process (Lu et al., 2024). The widespread development of automated content catalysed by Gen AI through large language models (LLMs), image generation applications, audio generation systems, and code generation models is pushing the boundaries of current innovation. Specifically, these web-based technologies use machine learning techniques, such as deep learning and neural networks, to enhance scientific research, generate personalized experiences, provide customer advice, and improve talent management, among other benefits (Boussioux et al., 2024; Monteith et al., 2024). It is therefore necessary to raise awareness of Gen AI, which mainly involves understanding and analysing the diverse outputs it generates, in accordance with its main distinguishing feature: producing human-like content.
In the field of education, and given their current relevance, it is essential that educators familiarise themselves with the potential advantages and disadvantages of these technological resources to ensure that they are used primarily for educational purposes. These online tools are making their mark in a wide range of educational contexts by addressing students’ needs, improving school management and generating original educational materials, stimulating a transition towards more effective digital work processes (Özer, 2024). However, one of the main risks identified in their implementation, both in educational and non-educational contexts, is the progressive emergence of fake content generated by Gen AI, which inevitably exposes people to misleading and inaccurate information (Delchev et al., 2024). In this regard, misinformation is a global problem that reflects the vulnerability of institutions fighting subversive attempts to manipulate collective opinion through cyberattacks, deepfakes, and the biased dissemination of sensitive information. In particular, one of the challenges related to the spread of this misleading content is the vulnerability of citizens, especially younger internet users, who are often very active on social media and tend to take such content at face value (Lao et al., 2025).
In this research, we focus on the identification of AI-generated deepfakes as the central objective of the study. According to Kietzmann et al. (2020), the term ‘deepfake’ was coined by an internet user in 2017, who combined the terms ‘deep learning’ and ‘fakes’ and shared deepfakes on the social news platform Reddit, triggering an avalanche of false information. The authors identified the first deepfakes as applications of lip-syncing to existing audio clips, generating new fake video clips. The main targets of this misleading content are primarily politicians, athletes, footballers, actors and singers, and it should be noted that deepfakes are constantly increasing due to two important factors: credibility and accessibility.
Maham and Küspert (2023) classify the risks in this area into three broad categories: risks from unreliability, misuse risks and systemic risks. Firstly, risks arising from unreliability can lead to discrimination and the spread of stereotypes on the web, as well as misunderstandings due to reliance on inaccurate content. Secondly, misuse risks relate to malicious actions powered by AI, such as cybercrime, biosecurity concerns and politically driven misuse. Thirdly, the authors identify systemic risks concerning economic centralisation, ideological homogenisation and other risks stemming from inaccurate social adaptation. In this context, it should be noted that this study focuses on the first category and on the potential risks of deepfakes related to discrimination and social stereotypes. With this aim in mind, the Council of the EU approved the AI Act (Regulation EU 2024/1689, 2024), which establishes a regulatory framework for AI to reduce the risks associated with high-risk AI-powered content. The Act identifies deepfakes as a high-risk product due to their potential to spread false information and manipulatively influence citizens’ opinions, and it includes measures such as transparency mechanisms for deepfake generators and disseminators, restrictions on malicious deepfakes, and safety assessments for unethical AI practices.
In the context of education, given the increasingly sophisticated technology used to generate deepfakes, research into the factors contributing to deepfake knowledge and into new deep learning strategies for detection has grown rapidly. As Somoray et al. (2025) concluded, detection accuracy rates varied across studies, ranging from nearly 60% to 75%, leaving room for further improvement in results and refinement of methods to restrict and control deepfakes. In this regard, Bitton et al. (2025) analysed deepfake knowledge in a German context and found that digital skills and the use of social media were highly relevant to this issue. Murillo-Ligorred et al. (2023) found that university students under the age of 20 requested more digital literacy training to distinguish AI-generated fake images from real ones, as they felt that technology was developing faster than their knowledge and skills. Regarding the most widely used methods for detecting deepfakes, some studies point to visual checking for inconsistencies, biometric analysis, and other behavioural patterns, with the aim of identifying in real time whether the online content in question is fake (Agarwal et al., 2020; Liu et al., 2024). The promotion of digital literacy in the aforementioned detection measures, along with training in critical thinking, can enhance technology-based learning in educational contexts where problem-solving and analytical thinking skills are required (Furbani et al., 2025). In this sense, Ma et al. (2025) called for a curriculum reform that establishes an innovative normative basis against malicious fake content in an attempt to shield higher education institutions and students’ constitutional rights.
Specifically, in the field of history teaching, the development of reflective thinking is key to addressing these types of global challenges, as it encourages students to actively participate in the analysis, review and critical study of historical data, which can ultimately help to uncover patterns and anomalies over time, including the identification of inaccurate information. To activate historical thinking, students are provided with a wide range of sources that act as catalysts for the efficient analysis and interpretation of historical events. In this way, they often work with second-order concepts, which are tools used to shape and understand events throughout historical periods, discerning what is true and what is not. In the digital age, where everything is constantly evolving, students must be equipped with the skills necessary to critically evaluate the information available on the internet, as this will ultimately enable them to identify deepfakes more accurately. In this sense, being able to examine and compare digital information becomes essential for understanding current events during their learning processes. In the first stage, students must learn how these digital content-generating tools work and how to use them properly, while familiarising themselves with their advantages and potential risks (Campillo-Ferrer et al., 2025). In this regard, this study has a dual purpose: to show pre-service teachers to what extent Gen AI can be a tool for improving historical thinking and detecting deepfakes, while avoiding high-risk practices.
Pre-service teachers constitute a particularly relevant population for examining perceptions of AI-generated deepfakes. On the one hand, as future educators, they are increasingly exposed to digital and AI-mediated content, including potentially manipulated or synthetic media that may appear in online resources, social networks, or even educational materials. On the other hand, they are expected to play a key role in fostering students’ critical thinking, media literacy, and digital competence. This includes the ability to recognize and critically evaluate misinformation, such as deepfakes. Therefore, understanding how future teachers perceive and interpret AI-generated deepfakes is essential to inform teacher education programs and to ensure that they are adequately prepared to address the challenges posed by emerging forms of disinformation. Within this framework, the present study aims to analyze pre-service teachers’ perceptions of AI-generated deepfakes and their ability to identify them. To achieve this aim, the following research questions are proposed:
RQ1. What is the level of concern among pre-service teachers regarding AI-generated deepfakes and their potential for spreading false content on the Internet?
RQ2. Are there significant differences in pre-service teachers’ perceptions and opinions regarding the use of deepfakes for deceptive purposes?
RQ3. To what extent are pre-service teachers able to identify deepfakes, and how is this ability related to variables such as age, gender, digital competence, knowledge of GenAI applications, and frequency of use?

2. Materials and Methods

2.1. Participants and Context

In this study, the sample comprised 133 pre-service teachers aged between 18 and 37, selected using convenience sampling. The majority of participants were women: 104 women (78.2%) and 29 men (21.8%). This non-probabilistic technique favours the selection of participants based on ease of access, making the procedure less labour-intensive and less costly in this exploratory study. However, it has some associated drawbacks, such as selection bias and a lack of representativeness (Golzar et al., 2022). In this case, the gender imbalance, the number of participants and the fact that they were enrolled exclusively in a university degree programme in Social Sciences may be drawbacks of the study related to this technique. The pre-service teachers were taking the core subjects of Social Sciences and their Teaching and Teaching Methodology for Social Sciences in their second and third years of academic studies. The training programme on deepfake detection and the use of Gen-AI was carried out in the first term of the 2025/2026 academic year, and a survey design was implemented to analyse their opinions on these digital challenges for teaching and learning. Participants were informed about the implications of this study before participating in it and gave their consent both in writing and verbally, as required by the Research Ethics Committee of the University of Murcia (Spain). Respondents were informed about the anonymity of data collection and analysis before the survey was administered. Data anonymity was ensured by encrypting the identifiers associated with each respondent in the stored data. Personal information was requested from respondents to allow better categorisation and analysis of the independent variables (see Table 1).

2.2. Data Collection Instruments

Respondents were surveyed through a questionnaire that had previously been applied and validated by Neyazi et al. (2024). Surveys are among the most comprehensive data collection tools, as they provide anonymity through an appropriate and cost-effective data collection procedure for subsequent academic research. In addition, the instrument was adapted to the current learning scenario, and a focus group was conducted to improve the validity of the adapted version, particularly in terms of content relevance, clarity, and suitability for the target population. The reliability of the instrument was calculated using Cronbach’s alpha (α = 0.929), and previous studies have highlighted some advantages of this research tool, such as its ease of administration, simple reproducibility, and easy adaptability. Surveys are therefore suitable for effectively gathering the perspectives and opinions that participants communicate freely and openly (Alordiah & Ossai, 2023). However, this tool also has some disadvantages, such as the tendency of respondents to agree with the proposed items, which leads to acquiescence bias (Smith & Fischer, 2015). Although some items address the societal impact of deepfakes (e.g., their influence on political processes), these were included to assess participants’ broader understanding of misinformation risks. This is particularly relevant in the context of teacher education, as future teachers are expected to promote media literacy and critical thinking skills that enable students to navigate complex digital environments.
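The reported reliability coefficient (α = 0.929) follows the standard Cronbach’s alpha formula, computed from the item variances and the variance of the total scores. As an illustration only (the data and function below are synthetic sketches, not the study’s responses or analysis code), the coefficient can be obtained from a respondents-by-items score matrix as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

By convention, values above 0.9, such as the 0.929 reported here, are read as excellent internal consistency, meaning the items measure the underlying construct in a highly correlated way.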
The questionnaire used for the study consists of four sections. The first is an introductory section highlighting the relevance of the European Digital Competence Framework (DigComp) and providing initial instructions so that students understand the purpose of the research, how to proceed with the questions, and how to give accurate and precise answers, with clear guidelines on the format of the rating scale used; it also contains a series of screening questions to determine whether respondents meet the research criteria, including demographic data such as age, gender, programme of study and year of study, together with questions on digital literacy, frequency of AI use, and prior knowledge of AI’s potential to produce deepfakes, to ensure data quality. The second section includes six questions about respondents’ level of exposure to and concern about deepfakes. The third section includes four questions on the role of government in addressing the challenge of deepfakes and their influence on political elections. Each item was rated on a five-point Likert scale, ranging from 1 “strongly disagree” to 5 “strongly agree”. The fourth section went beyond gathering participants’ theoretical perspectives and tested their ability to identify deepfakes. This practical skills test involved showing all participants four potential deepfakes, each accompanied by the same question about their ability to distinguish them. By asking participants to identify them, objective data on media literacy were collected, enabling a more rigorous comparison of how effectively pre-service teachers engage with online content and critically filter sophisticated digital disinformation. The stimuli consisted of four images and videos, some real and some fake, which were not presented in sequential order, so that participants’ ability to identify deepfakes could be assessed more effectively.

2.3. Data Analysis

Once the data had been collected using the questionnaire, the participants’ responses were analysed using the Statistical Package for the Social Sciences (SPSS) v.29.0. Descriptive statistics were applied to summarise the data by calculating the mean, median and mode. Inferential statistics were then applied after Kolmogorov–Smirnov tests indicated that the data did not follow a normal distribution. Specifically, Mann–Whitney U and Kruskal–Wallis tests were performed to examine two or more independent groups in the sample, with the main aim of comparing whether the medians differed. In addition, Wilcoxon signed-rank tests were applied to analyse whether the pre-service teachers’ self-reported perceptions had changed significantly or remained unchanged.

2.4. Intervention Program

The study was conducted during the first term of the 2025/2026 academic year in the core subjects of Social Sciences and their Teaching and Teaching Methodology for Social Sciences. The Social Sciences teaching program was based on improving pre-service teachers’ social and digital skills across a wide range of disciplines, including history, geography, politics and ethics. The programme was based on an existing one that was analysed by Ayuso del Puerto and Gutiérrez Esteban (2022) and included the design of AI-driven educational projects carried out by student teachers through teamwork and following project-based guidelines.
The intervention program was designed as an awareness-raising activity aimed at introducing pre-service teachers to artificial intelligence tools and their educational implications, including the emergence of deepfakes. It was not intended as a specific training program for developing deepfake detection skills. The aim of the program was to improve intercultural understanding of the surrounding reality, focusing on citizenship and the development of civic duties, while engaging pre-service teachers in problem-solving tasks that required them to improve their reasoning skills. Reflection and critical capacity were developed continuously through theory and practice, as was the analysis of current challenges related to sustainability and human rights, under a critical paradigm based on the examination of contemporary social phenomena around the world, including immigration, poverty and war, from different perspectives, including divisive politics, controversial political leaders and their questionable decision-making processes.
The program also delved into Gen-AI literacy by introducing various types of AI-powered resources to achieve the objectives effectively. During the lessons, student teachers addressed the content by contextualising primary sources, examining research articles and engaging with other multi-disciplinary studies under a digital approach. Teamwork was promoted in class by involving groups of four or five students in cognitively demanding activities in which they had to design lesson plans and formulate strategies to address cross-cultural challenges that have arisen in contemporary societies in recent years. With this goal in mind, pre-service teachers had to use Gen AI-based applications that promoted the discussion of complex social issues and the development of student research, while also improving their academic performance. Specifically, they analysed texts with Chatpdf, made use of chatbot-based applications such as ChatGPT, Gemini and Hello History, and customised timelines, infographics and mind maps with the aid of Venngage, Learning studio AI and Piktochart. In this way, participants explored the potential of these technological tools to convey valuable information through AI-based data analysis, supported their learning process with the help of virtual assistants, and simplified complex issues by means of visual representations and graphic diagrams.
Simultaneously, they addressed the potential risks to public trust associated with Gen AI in the form of deepfakes. Participants were periodically presented with a wide range of online content and had to decide whether it was fake after examining it carefully. A detection approach was adopted based on a review of previous studies, such as those by Farid (2022) and Wang et al. (2024), which focused on applying deepfake detection techniques to social science videos: identifying visual and spatial inconsistencies, such as colour divergences between real and synthetic sections, and anomalous sound patterns. They also focused on temporal disparities in fake videos, such as inconsistencies between speech flow and mouth movement articulation. Another technique applied was tracking the means of dissemination; for example, participants checked troll accounts and fake profile scams that often distribute deepfakes on social media platforms. The source of the content, as well as its metadata, helped student teachers to identify deepfakes and uncover the malicious and fraudulent purposes behind them. After completing the tasks, student teachers were asked to design their own social studies lesson plans with the aim of learning how to plan sessions focused on detecting deepfakes and other types of disinformation, with the help of digital classroom materials that addressed how to effectively teach social studies content to primary school pupils (Martínez et al., 2023).
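Metadata checks of the kind described above can be partly automated. The sketch below is a minimal illustration, not part of the intervention’s materials, that uses Pillow to list an image’s EXIF tags; AI-generated or heavily re-encoded images often carry little or no camera metadata, which is one cue, though never proof, of synthetic or manipulated origin:

```python
from io import BytesIO
from PIL import Image, ExifTags

def exif_summary(image_bytes: bytes) -> dict:
    """Return an image's EXIF tags keyed by human-readable names.

    An empty result means the file carries no camera metadata, which is
    common for AI-generated or re-encoded images (a cue, not proof).
    """
    img = Image.open(BytesIO(image_bytes))
    exif = img.getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}
```

In classroom use, such a check would complement, not replace, the visual and source-tracking heuristics described above, since metadata can also be stripped from genuine images or forged in fake ones.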

3. Results

3.1. RQ1: Level of Concern About Deepfakes

This section presents the main results of the study in a series of tables, with the aim of providing a clear and logical overview of the findings obtained after data analysis. Table 2 shows descriptive statistics summarising perceptions of the proposed items before and after the academic term, including measures of central tendency (mean, median, mode) and dispersion (range, standard deviation). According to the results obtained, respondents’ ratings were very similar before and after the programme’s implementation. However, the median and mode values increased, indicating a higher middle value as well as a higher most frequent value in the analysed dataset.
Regarding possible differences in the perceptions and opinions of pre-service teachers, the findings revealed that male pre-service teachers worried more than female pre-service teachers about deepfakes used for entertainment (see Table 3). Moreover, female participants placed more trust than men in the information provided by government institutions and in the measures adopted.
As can be seen in Table 3, most of the pre-service teachers surveyed do not agree that the government looks after their interests or provides them with reliable information, with values lower than 3 for all the items presented in both the pretest and the posttest. Non-parametric tests were applied to determine whether significant differences existed between the perspectives expressed before and after the program’s implementation. The results showed that such differences did exist, for example, in respondents’ level of concern and awareness regarding exposure to deepfakes. The paired medians differed significantly when respondents were asked whether deepfakes concerned them, even when used for entertainment (p < 0.001). The results were also statistically significant regarding the use of AI to detect deepfakes (p < 0.001). However, their views on the measures and information provided by official institutions became more favourable, with more positive than negative values at the end of the period (see Table 4).

3.2. RQ2: Differences in Perceptions About Deceptive Uses

Significant differences were also found when considering deepfakes as a problem among pre-service teachers who were aware of the potential of Gen AI and those who were not (see Table 5). Respondents more aware of the potential of Gen-AI considered deepfakes a more serious problem than less aware pre-service teachers, even when used for entertainment purposes. They also reported being exposed to online misinformation more frequently than the second subgroup.
When analysing respondents’ ratings according to their frequency of use of Gen AI applications, significant differences with p-values below 0.005 were found between participants (see Table 6). Pre-service teachers who reported less frequent use of AI-powered resources rated deepfakes as a more serious problem, even for entertainment purposes, than those who used Gen-AI more frequently.

3.3. RQ3: Ability to Identify Deepfakes and Influencing Variables

When analysing pre-service teachers’ ability to identify deepfakes, the results showed that participants were better able to detect them once they had received training during the programme (see Table 7).
Additionally, the results showed that women were more adept than men at identifying deepfakes (see Table 8). Similarly, those who were aware of the potential of Gen-AI proved to be better at detecting deepfakes than those who were not.

4. Discussion

This study has delved into the perception of deepfakes, considered synthetic media that are not easily recognizable in the form of video, image or audio material (Altuncu et al., 2024). A comprehensive understanding of deepfake technology requires further research, as it is a relatively unexplored but rapidly growing digital area that can have harmful effects on the public. With this in mind, this study provides insights into pre-service teachers’ perceptions of deepfakes, which may inform future research and discussions on the role of AI in education and the risks of exposure to disinformation that these technological tools can cause, despite the widespread interest in the innovative opportunities this field offers.
In particular, RQ1 delves into the participants’ level of concern regarding deepfakes and their potential spread online, and the results revealed significant concern among pre-service teachers regarding the deceptive use of Gen-AI and the dissemination of deepfakes via the Internet. These findings align with previous studies, which highlighted that pre-service teachers perceived deepfakes as online risks, as they can undermine the validity of information and diminish the ability to distinguish the real from the false (Cochran & Napshin, 2021; Roe et al., 2025). However, respondents do not have much confidence in the information provided by government institutions or in the measures taken to address this digital challenge; lower institutional trust could influence the degree of concern about the societal impact of deepfakes. In this regard, Ognyanova et al. (2020) highlighted the role of deepfakes as a major factor eroding trust in public administrations, especially since information provided by official institutions may not be entirely accurate. In other words, deepfakes can be exploited to weaken and shape public trust when citizens perceive AI-generated content as a risk to their rights in a wide range of political, economic, or educational scenarios.
Regarding RQ2 of the study, which examined possible differences in future teachers’ perceptions regarding the spread of deepfakes, the results revealed that participants who were more aware of the value of using Gen-AI reported having been exposed to online misinformation more frequently than those who were less aware. These results may be related to those presented by Murillo-Ligorred et al. (2023), who found that pre-service teachers under 20 years of age requested greater digital literacy in their academic training to be able to identify fake content more easily, as they thought that the technology was evolving faster than their detection capabilities. In other words, the more aware students are of the usefulness and evolution of Gen-AI applications, the more perceptive they will be of the risks of exposure to this synthetic content, which may ultimately lead to a demand for better digital literacy training to address these digital challenges more effectively. Other studies, such as that of Bitton et al. (2025), also point to the development of digital skills and the use of social media as the main predictors of deepfake knowledge, which reinforces the idea of digital literacy training as one of the key solutions to this issue.
Regarding RQ3 of the study, which analysed pre-service teachers’ ability to identify deepfakes in relation to their age, gender, digital literacy, and knowledge of Gen-AI applications, the results showed that those familiar with the potential of Gen-AI reported a greater ability to detect deepfakes than those who were not, a finding closely related to the results of the second objective, which links the level of knowledge of Gen-AI applications to self-reported exposure to online disinformation. Additionally, women were more skilled than men at identifying deepfakes, which does not coincide with findings from previous research in which men outperformed women (Nadimpalli & Rattani, 2022; Lovato et al., 2024). However, the gender differences observed are not consistent enough to conclude that gender has a significant impact on the detection of deepfakes, for several reasons. Firstly, the results were significant for only 25% of the deepfakes presented; furthermore, recent studies highlight that there are no clear correlations between certain demographic characteristics, such as age or gender, and the ability to identify deepfakes (Nightingale et al., 2017; Tahir et al., 2021). Consequently, further research should analyse this feature as a potential variable influencing the level of deepfake detection.

5. Conclusions

The use of appropriate technologies that meet pupils’ learning needs in today’s technology-integrated classrooms presents a series of pedagogical challenges for the educational community regarding how pupils select suitable online resources from the wide range of available information sources, and how they decide to use these sources, taking into account whether they are genuine or fake. In this regard, the pre-service teachers who participated in this research expressed concern about these digital challenges, which provides a common basis for developing effective measures aimed at creating a safer learning environment. Indeed, pre-service teachers’ awareness appears to be linked to a greater recognition of the risks associated with Generative AI, which could ultimately lead to the promotion of standard protocols. Consequently, higher education institutions should promote the principles of academic integrity to guide students on how to use Generative AI appropriately to support their learning. As Wilson (2025) states, universities must strengthen policies that prevent the misuse of these technological resources, providing case studies that indicate specific examples reflecting the negative consequences of incorrect use, as well as highlighting best practices for the educational community. In this context, the Russell Group (2023), comprising 24 universities, jointly published a position statement on this issue, setting out some basic principles regarding the need to acquire knowledge about AI, the ethical use of Generative AI, academic rigour, and the benefits of effective practices that should be developed in accordance with the proposed academic standards. These skills and competences could be taught in specific teacher training modules, such as a module on detecting deepfakes within the Digital Competence Framework for Educators (DigCompEdu), which could serve as an international approach for digital literacy education.
Specifically, in the context of teaching and learning the social sciences, the development of historical literacy can also help students make ethical use of these online resources by critically analysing current and past content from original sources. Students should therefore be advised against collecting digital content, including videos, podcasts, and photos, from unknown references, and should log out of suspicious sessions that do not require advanced verification. Instead, they should use verified learning platforms that present materials from authentic sources to prevent misuse and misinformation (Monteagudo-Fernández et al., 2021). Additionally, students’ cognitive skills should be fostered by examining the causes and consequences of past events, reviewing relevant changes that have occurred over time, and enhancing their creativity and autonomy in addressing current social concerns. In this respect, the study and analysis of previously selected historical deepfakes could also serve as a valuable research resource in today’s educational landscape, as it offers teachers and students a way to assess unreliable multimedia content, interpret the underlying reasons for its creation and online dissemination, and evaluate its potential impact on different social groups.
Several limitations should be considered when interpreting the findings of this study. First, the sample exhibited an uneven gender distribution, which may limit the generalizability of the results. Second, participants were recruited through convenience sampling, potentially introducing selection bias. Third, the deepfake examples used in the intervention were selected subjectively, which could influence participants’ responses. Fourth, the absence of a control group means that causal inferences regarding the intervention’s impact cannot be made. Fifth, the study focused on pre-service teachers from specific education programmes, limiting the applicability of the findings to broader populations. Sixth, the quantitative method applied may oversimplify the results obtained. Although this issue is inherent in rating-scale surveys and therefore difficult to avoid, we selected a previously used data collection instrument, as recommended by Bee and Murdoch-Eaton (2016), which ensures sound questionnaire design; we also integrated detailed instructions, clarified the anonymity of the process, and separated the independent variables from the dependent ones. Finally, the intervention did not include a structured training component specifically focused on deepfake detection; future research could incorporate targeted instructional designs to further explore the development of these skills. Despite these constraints, the study provides valuable insights into pre-service teachers’ perceptions and abilities related to AI-generated deepfakes, which can inform future research and teacher education initiatives.

Author Contributions

Conceptualization, J.M.C.-F. and P.M.-S.; methodology, J.M.C.-F.; software, J.M.C.-F.; validation, J.M.C.-F. and P.M.-S.; formal analysis, J.M.C.-F.; investigation, J.M.C.-F. and P.M.-S.; resources, J.M.C.-F.; data curation, J.M.C.-F.; writing—original draft preparation, J.M.C.-F.; writing—review and editing, P.M.-S.; visualization, P.M.-S.; supervision, J.M.C.-F. and P.M.-S.; project administration, J.M.C.-F. and P.M.-S.; funding acquisition, J.M.C.-F. and P.M.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by MICIU/AEI/10.13039/501100011033/FEDER, EU, grant number PID2024-155318NB-C31.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Research Committee of the University of Murcia (protocol code 3113/2020, approved on 3 August 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Agarwal, S., Farid, H., El-Gaaly, T., & Lim, S. N. (2020). Detecting deep-fake videos from appearance and behavior. In 2020 IEEE international workshop on information forensics and security (WIFS) (pp. 1–6). IEEE. [Google Scholar] [CrossRef]
  2. Alordiah, C. O., & Ossai, J. N. (2023). Enhancing questionnaire design: Theoretical perspectives on capturing attitudes and beliefs in social studies research. International Journal of Innovative Science and Research Technology, 8(10), 603–614. [Google Scholar]
  3. Altuncu, E., Franqueira, V. N., & Li, S. (2024). Deepfake: Definitions, performance metrics and standards, datasets, and a meta-review. Frontiers in Big Data, 7, 1400024. [Google Scholar] [CrossRef]
  4. Ayuso del Puerto, D., & Gutiérrez Esteban, P. (2022). La Inteligencia Artificial como recurso educativo durante la formación inicial del profesorado. RIED-Revista Iberoamericana de Educación a Distancia, 25(2), 347–362. [Google Scholar] [CrossRef]
  5. Bee, D. T., & Murdoch-Eaton, D. (2016). Questionnaire design: The good, the bad and the pitfalls. Archives of Disease in Childhood-Education and Practice, 101(4), 210–212. [Google Scholar] [CrossRef]
  6. Bitton, D. B., Hoffmann, C. P., & Godulla, A. (2025). Deepfakes in the context of AI inequalities: Analysing disparities in knowledge and attitudes. Information, Communication & Society, 28(2), 295–315. [Google Scholar] [CrossRef]
  7. Boussioux, L., Lane, J. N., Zhang, M., Jacimovic, V., & Lakhani, K. R. (2024). The crowdless future? Generative AI and creative problem-solving. Organization Science, 35(5), 1589–1607. [Google Scholar] [CrossRef]
  8. Campillo-Ferrer, J. M., López-García, A., & Miralles-Sánchez, P. (2025). Student perceptions of the use of gen-AI in a higher education program in Spain. Digital, 5(3), 29. [Google Scholar] [CrossRef]
  9. Cochran, J. D., & Napshin, S. A. (2021). Deepfakes: Awareness, concerns, and platform accountability. Cyberpsychology, Behavior, and Social Networking, 24(3), 164–172. [Google Scholar] [CrossRef] [PubMed]
  10. Delchev, K., Safieddine, F., & Hammad, R. (2024). Identification of AI generated deep fake video by higher education students. In Science and Information Conference (pp. 473–489). Springer Nature Switzerland. [Google Scholar] [CrossRef]
  11. Farid, H. (2022). Creating, using, misusing, and detecting deep fakes. Journal of Online Trust and Safety, 1(4). [Google Scholar] [CrossRef]
  12. Furbani, W., Purnawanti, F., Dewi, A. E. R., Sari, N., & Thoriq, T. (2025). Digital literacy and critical thinking skills of students in the era industry 4.0. Juwara: Jurnal Wawasan Dan Aksara, 5(1), 136–148. [Google Scholar] [CrossRef]
  13. Golzar, J., Noor, S., & Tajik, O. (2022). Convenience sampling. International Journal of Education & Language Studies, 1(2), 72–77. [Google Scholar] [CrossRef]
  14. Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135–146. [Google Scholar] [CrossRef]
  15. Lao, Y., Hirvonen, N., & Larsson, S. (2025). Everyday encounters with deepfakes: Young people’s media and information literacy practices with AI-generated media. Journal of Documentation, 81(7), 216–235. [Google Scholar] [CrossRef]
  16. Liu, W., She, T., Liu, J., Li, B., Yao, D., & Wang, R. (2024). Lips are lying: Spotting the temporal inconsistency between audio and visual in lip-syncing deepfakes. Advances in Neural Information Processing Systems, 37, 91131–91155. [Google Scholar] [CrossRef]
  17. Lovato, J., St-Onge, J., Harp, R., Salazar Lopez, G., Rogers, S. P., Haq, I. U., Hébert-Dufresne, L., & Onaolapo, J. (2024). Diverse misinformation: Impacts of human biases on detection of deepfakes on networks. npj Complexity, 1(1), 5. [Google Scholar] [CrossRef]
  18. Lu, H., He, L., Yu, H., Pan, T., & Fu, K. (2024). A study on teachers’ willingness to use Generative AI technology and its influencing factors: Based on an integrated model. Sustainability, 16(16), 7216. [Google Scholar] [CrossRef]
  19. Ma, Y., Su, Y., Li, M., Zhang, Y., Chai, W., Huang, A., & Zhao, X. (2025). Preparing students for an AI-driven world: Generative AI and curriculum reform in higher education. Frontiers of Digital Education, 2(4), 30. [Google Scholar] [CrossRef]
  20. Maham, P., & Küspert, S. (2023). Governing general purpose AI: A comprehensive map of unreliability, misuse and systemic risks (Policy Brief). Stiftung Neue Verantwortung. Available online: https://www.interface-eu.org/publications/governing-general-purpose-ai-comprehensive-map-unreliability-misuse-and-systemic-risks (accessed on 28 March 2026).
  21. Martínez, P. M., Ferrer, J. M. C., & Cuevas, J. P. (2023). La enseñanza y el aprendizaje de las ciencias sociales en tiempos de incertidumbre. Áreas. Revista Internacional de Ciencias Sociales, (45), 5–9. [Google Scholar] [CrossRef]
  22. Monteagudo-Fernández, J., Gómez-Carrasco, C. J., & Chaparro-Sainz, Á. (2021). Heritage education and research in museums. Conceptual, intellectual and social structure within a knowledge domain (2000–2019). Sustainability, 13(12), 6667. [Google Scholar] [CrossRef]
  23. Monteith, S., Glenn, T., Geddes, J. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2024). Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 224(2), 33–35. [Google Scholar] [CrossRef]
  24. Murillo-Ligorred, V., Ramos-Vallecillo, N., Covaleda, I., & Fayos, L. (2023). Knowledge, integration and scope of deepfakes in arts education: The development of critical thinking in postgraduate students in primary education and master’s degree in secondary education. Education Sciences, 13(11), 1073. [Google Scholar] [CrossRef]
  25. Nadimpalli, A. V., & Rattani, A. (2022). GBDF: Gender balanced deepfake dataset towards fair deepfake detection. In International Conference on Pattern Recognition (pp. 320–337). Springer Nature Switzerland. [Google Scholar] [CrossRef]
  26. Neyazi, T. A., Nadaf, A. H., Tan, K. E., & Schroeder, R. (2024). Does trust in government moderate the perception towards deepfakes? Comparative perspectives from Asia on the risks of AI and misinformation for democracy. Government Information Quarterly, 41(4), 101980. [Google Scholar] [CrossRef]
  27. Nightingale, S. J., Wade, K. A., & Watson, D. G. (2017). Can people identify original and manipulated photos of real-world scenes? Cognitive Research: Principles and Implications, 2(1), 30. [Google Scholar] [CrossRef]
  28. Ognyanova, K., Lazer, D., Robertson, R. E., & Wilson, C. (2020). Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harvard Kennedy School Misinformation Review, 1(4). [Google Scholar] [CrossRef]
  29. Özer, M. (2024). Potential Benefits and Risks of Artificial Intelligence in Education. Bartin University Journal of Faculty of Education, 13(2), 232–244. [Google Scholar] [CrossRef]
  30. Regulation EU 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). (2024). Official Journal of the European Union. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed on 28 March 2026).
  31. Roe, J., Perkins, M., Somoray, K., Miller, D., & Furze, L. (2025). To Deepfake or not to Deepfake: Higher education stakeholders’ perceptions and intentions towards synthetic media. arXiv, arXiv:2502.18066. [Google Scholar] [CrossRef]
  32. Russell Group. (2023). Russell Group principles on the use of Generative AI tools in education. Available online: https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf (accessed on 28 March 2026).
  33. Smith, P. B., & Fischer, R. (2015). Acquiescence, extreme response bias and culture: A multilevel analysis. In Multilevel analysis of individuals and cultures (pp. 285–314). Psychology Press. [Google Scholar]
  34. Somoray, K., Miller, D. J., & Holmes, M. (2025). Human performance in deepfake detection: A systematic review. Human Behavior and Emerging Technologies, 2025(1), 1833228. [Google Scholar] [CrossRef]
  35. Tahir, R., Batool, B., Jamshed, H., Jameel, M., Anwar, M., Ahmed, F., Zaffar, M. A., & Zaffar, M. F. (2021). Seeing is believing: Exploring perceptual differences in deepfake videos. In Proceedings of the 2021 CHI conference on human factors in computing systems (pp. 1–16). Association for Computing Machinery. [Google Scholar] [CrossRef]
  36. Wang, T., Liao, X., Chow, K. P., Lin, X., & Wang, Y. (2024). Deepfake detection: A comprehensive survey from the reliability perspective. ACM Computing Surveys, 57(3), 1–35. [Google Scholar] [CrossRef]
  37. Wilson, T. D. (2025). The development of policies on generative artificial intelligence in UK universities. IFLA Journal, 51(3), 722–734. [Google Scholar] [CrossRef]
Table 1. Personal profile of respondents.

| Characteristic | Category | N | % |
|---|---|---|---|
| Sex | Men | 29 | 21.8 |
| | Women | 104 | 78.2 |
| Age | Under 20 | 24 | 18.1 |
| | 20 and over | 109 | 81.9 |
| Digital proficiency | Low level | 22 | 16.5 |
| | Intermediate level | 89 | 67.0 |
| | High level | 22 | 16.5 |
| Frequency of use | Sometimes | 60 | 45.1 |
| | Very often | 73 | 54.9 |
| Awareness of the potential of AI | They were aware | 73 | 54.9 |
| | They were not aware | 60 | 45.1 |
Table 2. Descriptive statistics of pre-service teachers’ level of concern before and after the programme.

| | X̄ | SD | M | Mo | R |
|---|---|---|---|---|---|
| Pre-test | 2.82 | 0.98 | 2 | 1 | 4 |
| Post-test | 2.83 | 1.02 | 2.5 | 3 | 4 |
Table 3. Descriptive statistics of the level of concern about deepfakes according to gender. Values are X̄ (SD); Men N = 29, Women N = 104.

| Item | Men pre-test | Men post-test | Women pre-test | Women post-test |
|---|---|---|---|---|
| Deepfakes are not a problem | 1.96 (1.29) | 1.64 (1.08) | 1.92 (1.37) | 1.66 (1.19) |
| When used for entertainment, they don’t worry me. | 2.51 (1.41) | 2.58 (1.23) | 2.26 (0.91) | 2.16 (1.13) |
| AI can be used to detect and debunk deepfakes | 2.67 (1.09) | 2.48 (1.02) | 2.55 (0.83) | 2.51 (0.98) |
| I have been exposed to misinformation online. | 3.78 (1.06) | 3.81 (1.19) | 3.84 (1.02) | 3.81 (0.98) |
| I have spread misinformation online, even unintentionally. | 2.51 (1.29) | 2.83 (1.36) | 2.39 (1.22) | 2.51 (1.11) |
| I am concerned that AI could be used for large-scale disinformation campaigns. | 4.21 (0.83) | 4.25 (0.81) | 4.43 (0.74) | 4.26 (0.91) |
| I am concerned that AI could be used to manipulate public opinion during elections. | 4.51 (0.69) | 4.29 (0.78) | 4.45 (0.76) | 4.35 (0.88) |
| Government information is reliable. | 2.07 (1.11) | 2.25 (1.06) | 2.21 (0.94) | 2.43 (0.98) |
| I trust the government to do the right thing. | 1.71 (0.89) | 2.10 (0.89) | 1.97 (0.95) | 2.21 (0.95) |
| The government looks out for my interests | 2.11 (1.16) | 2.12 (1.14) | 2.19 (0.91) | 2.31 (0.93) |
Table 4. Wilcoxon signed-rank test in relation to the level of concern about deepfakes.

| Item | Neg. ranks N | Mean rank | Sum of ranks | Pos. ranks N | Mean rank | Sum of ranks | Ties | Z | p |
|---|---|---|---|---|---|---|---|---|---|
| Deepfakes are not a problem | 41 | 58.76 | 2409 | 43 | 27.01 | 1161 | 49 | −2.80 | 0.004 * |
| When used for entertainment, they don’t worry me. | 93 | 50.52 | 4698.5 | 5 | 30.5 | 152.5 | 35 | −8.31 | <0.001 * |
| AI can be used to detect and debunk deepfakes | 90 | 56.11 | 5050 | 19 | 49.74 | 945 | 24 | −6.35 | <0.001 * |
| I have been exposed to misinformation online. | 94 | 55.98 | 5262.5 | 12 | 34.04 | 408.5 | 26 | −7.73 | <0.001 * |
| I have spread misinformation online, even unintentionally. | 49 | 50.22 | 2461 | 55 | 54.53 | 2999 | 29 | −0.89 | 0.373 |
| I am concerned that AI could be used for large-scale disinformation campaigns. | 69 | 50.93 | 3514 | 26 | 40.23 | 1046 | 38 | −4.73 | <0.001 * |
| I am concerned that AI could be used to manipulate public opinion during elections. | 107 | 62.66 | 6705 | 10 | 19.8 | 198 | 15 | −8.92 | <0.001 * |
| Government information is reliable. | 5 | 15.5 | 77.5 | 118 | 63.97 | 7548.5 | 10 | −9.51 | <0.001 * |
| I trust the government to do the right thing. | 2 | 22.5 | 45 | 121 | 62.65 | 7581 | 10 | −9.60 | <0.001 * |
| The government looks out for my interests | 36 | 48.46 | 1744.5 | 57 | 46.08 | 2626.5 | 39 | −1.74 | 0.081 |

* Based on negative ranks.
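The Wilcoxon signed-rank procedure summarised in Table 4 can be sketched as follows. This is an illustrative normal-approximation implementation on invented paired Likert ratings, not the authors’ analysis code; statistical packages such as SPSS apply additional tie corrections, so exact values may differ slightly.

```python
# Illustrative sketch of a Wilcoxon signed-rank test, mirroring the
# negative/positive ranks, ties, and Z layout of Table 4.
import math

def wilcoxon_signed_rank(pre, post):
    # Pairs with no change form the "Ties" column and are excluded.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    ties = len(pre) - len(diffs)
    # Rank the absolute differences, averaging ranks of equal values.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    n = len(diffs)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (min(w_pos, w_neg) - mean) / sd      # normal approximation, no tie correction
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return w_neg, w_pos, ties, z, p

# Invented paired 1-5 ratings (pre/post) for 12 hypothetical respondents.
pre = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 2]
post = [3, 4, 2, 4, 3, 4, 3, 2, 4, 3, 4, 3]
w_neg, w_pos, ties, z, p = wilcoxon_signed_rank(pre, post)
print(f"W- = {w_neg}, W+ = {w_pos}, ties = {ties}, Z = {z:.2f}, p = {p:.4f}")
```

As in the table, unchanged pairs are counted as ties rather than ranked, and a large negative Z indicates a systematic shift between pre-test and post-test ratings.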
Table 5. Results of the Mann–Whitney tests in relation to the opinions of student teachers on the generation, spread and detection of deepfakes.

| Item | Aware of the potential of Gen-AI N | Mean rank | Not aware N | Mean rank | U | z | p |
|---|---|---|---|---|---|---|---|
| Deepfakes are not a problem | 95 | 71.83 | 38 | 55 | 1349 | −2.635 | 0.008 |
| When used for entertainment, they don’t worry me. | 95 | 48.1 | 38 | 114.5 | 1100 | −11.48 | <0.001 |
| AI can be used to detect and debunk deepfakes | 95 | 68.38 | 38 | 63.54 | 1673 | −0.79 | 0.429 |
| I have been exposed to misinformation online. | 95 | 71.47 | 38 | 55.82 | 1380 | −2.195 | 0.028 |
| I have spread misinformation online, even unintentionally. | 95 | 65.99 | 38 | 69.53 | 1709 | −0.506 | 0.613 |
| I am concerned that AI could be used for large-scale disinformation campaigns. | 95 | 73.20 | 38 | 51.50 | 1216 | −3.077 | 0.002 |
| I am concerned that AI could be used to manipulate public opinion during elections. | 95 | 68.55 | 37 | 61.23 | 1562.5 | −1.018 | 0.309 |
| Government information is reliable. | 95 | 71.04 | 38 | 56.91 | 1421.5 | −2.084 | 0.037 |
| I trust the government to do the right thing. | 95 | 69.46 | 38 | 60.84 | 1571 | −1.296 | 0.195 |
| The government looks out for my interests | 94 | 67.37 | 38 | 64.36 | 1683.1 | −0.429 | 0.668 |
Table 6. Results of the Mann–Whitney tests in relation to the opinions of student teachers on the consideration of deepfakes according to the frequency of use of Gen-AI.

| Item | “I sometimes use Gen-AI” N | Mean rank | “I very often use Gen-AI” N | Mean rank | U | z | p |
|---|---|---|---|---|---|---|---|
| Deepfakes are not a problem | 60 | 30.5 | 73 | 97.1 | 1100 | −11.48 | <0.001 |
| When used for entertainment, they don’t worry me. | 60 | 74.6 | 73 | 60.75 | 1734 | −2.635 | 0.008 |
Table 7. Descriptive statistics on the success rate in detecting deepfakes (N = 133).

| | Pre-test success rate (%) | SD | Post-test success rate (%) | SD |
|---|---|---|---|---|
| Potential deepfake #1 | 33.1 | 0.48 | 98.5 | 0.12 |
| Potential deepfake #2 | 29.3 | 0.46 | 95.4 | 0.19 |
| Potential deepfake #3 | 30.8 | 0.47 | 88.7 | 0.26 |
| Potential deepfake #4 | 21.8 | 0.42 | 92.4 | 0.19 |
Table 8. Results of the Mann–Whitney tests in relation to the detection of deepfakes according to gender and awareness of Gen-AI applications.

Deepfake on US President (by gender):

| Men N | Mean rank | Women N | Mean rank | U | z | p |
|---|---|---|---|---|---|---|
| 29 | 70.17 | 104 | 62.77 | 1300 | −2.057 | 0.040 |

Deepfake on immigrant (by awareness of the potential of Gen-AI):

| Aware N | Mean rank | Not aware N | Mean rank | U | z | p |
|---|---|---|---|---|---|---|
| 95 | 57.73 | 38 | 71.46 | 1568 | −2.620 | 0.009 |
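The Mann–Whitney comparisons reported in Tables 5, 6 and 8 can be sketched in the same spirit. The implementation below uses the normal approximation without tie correction and invented ratings for two hypothetical independent groups; it is an illustration of the test, not the authors’ analysis code, and packaged routines may report slightly different z values when ties are heavy.

```python
# Illustrative sketch of a Mann-Whitney U test, mirroring the
# N / mean rank / U / z / p layout of Tables 5, 6 and 8.
import math

def mann_whitney_u(a, b):
    # Pool both groups, rank with ties averaged, then compare rank sums.
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    rank_sum_a = sum(r for r, (v, g) in zip(ranks, pooled) if g == 0)
    n1, n2 = len(a), len(b)
    u1 = rank_sum_a - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)                       # smaller of the two U statistics
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)    # normal approximation, no tie correction
    z = (u - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value
    return u, z, p

# Invented 1-5 ratings for two hypothetical independent groups.
aware = [4, 5, 4, 5, 3, 4]
not_aware = [2, 3, 2, 1, 2, 3]
u, z, p = mann_whitney_u(aware, not_aware)
print(f"U = {u}, z = {z:.3f}, p = {p:.4f}")
```

Unlike the Wilcoxon test above, the two groups here need not be paired or equal in size, which is why it suits comparisons such as aware vs. not aware (95 vs. 38 participants).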
