Article

Something Old, Something New: WebQuests and GenAI in Teacher Education

School of STEM Education, Innovation and Global Studies, Institute of Education, Dublin City University, D09 YT18 Dublin, Ireland
* Author to whom correspondence should be addressed.
AI Educ. 2026, 2(1), 7; https://doi.org/10.3390/aieduc2010007
Submission received: 15 December 2025 / Revised: 9 February 2026 / Accepted: 17 February 2026 / Published: 11 March 2026

Abstract

Generative artificial intelligence (GenAI) has rapidly emerged as a transformative educational technology, raising questions about how educators and pre-service teachers critically engage with AI-produced content. This case study investigates how WebQuests, a long-established, inquiry-based pedagogical model, can foster critical engagement with GenAI tools. Situated within an initial teacher education programme, a WebQuest, incorporating GenAI sources, was implemented with 24 pre-service language teachers, who engaged with curated resources alongside ChatGPT and Copilot to produce infographics for secondary school audiences. Data were collected through semi-structured interviews and were analysed using Braun and Clarke’s thematic analysis. Findings indicate that scaffolded engagement with GenAI encouraged participants to compare AI-generated outputs with trusted sources, critically evaluate accuracy and reliability, and reflect on integration into their future practice. Whilst pre-service teachers valued GenAI’s accessibility and efficiency, they expressed concerns about clarity, verbosity, and trustworthiness. The WebQuest model effectively supported synthesis of multiple information sources, fostering functional AI engagement and critical evaluation of its affordances and limitations. This case study concludes that integrating GenAI within structured, inquiry-based pedagogies advances digital and AI literacy in initial teacher education, whilst highlighting the need for institutional guidance, professional development, and further research in this area.

1. Introduction

Recent years have seen a constant stream of information, both research-led and industry-led, about artificial intelligence (AI). AI has piqued interest across society and industries (Donlon & Tiernan, 2023). Reports suggest AI may augment or replace tasks across industries (Eloundou et al., 2023; Ernst, 2022; Ghosh et al., 2025; Guimarães & Mazeda Gil, 2022; Leinen et al., 2020) and affect how we communicate and build relationships (Strengers, 2022; S. Wang, 2020), spend free time (Edgar, 2023), and parent (Radesky, 2024). In education, promised benefits include improved learner analytics, individualised assessments, and virtual assistants (Borisov & Stoyanova, 2024). Concerns about Generative AI (GenAI) are equally prominent (Gupta et al., 2025; Łodzikowski et al., 2024; Zapata-Rivera et al., 2024), including plagiarism (Hutson, 2024), potential harms to creativity, agency, and critical thinking (Bali, 2024), and inconsistent information quality and reliability (Sousa & Cardoso, 2025; Williams, 2024). GenAI may redefine information literacy, altering how students access, search, filter, and evaluate content (Tiernan et al., 2023). Within this context, this case study explores a structured approach to embedding GenAI in initial teacher education (ITE).

2. Context

This paper's theoretical and contextual frame centres on three interrelated areas: digital, information, and AI literacy; GenAI in education; and WebQuests as an inquiry-based approach to structuring engagement with complex concepts and tasks.

2.1. Digital and Information Literacy

Digital literacy, or perhaps more accurately ‘digital literacies’, is an umbrella term describing competencies essential to thriving in an increasingly digital world (List et al., 2020; Pérez-Escoda et al., 2019; Rohatgi et al., 2016). It incorporates operational and functional aspects of working with technology from earlier conceptualisations (e.g., ICT literacy and computer literacy), established since the 1970s (Martin, 2005; Oliver et al., 2000). In line with technological and societal developments, widely used frameworks such as DigComp 2.2 (Vuorikari et al., 2016), UNESCO (Law et al., 2018), and the Joint Information Systems Committee (JISC, 2015), expand digital literacies to include communicating and collaborating online; communicating across formats (video, text, audio); creating and remixing digital content; appreciating digital safety and wellbeing; and leveraging digital tools for professional development and learning. These frameworks are iterative, reflecting shifts in platforms, participation, and policy, signalling digital literacies are developmental rather than fixed. They also foreground ethical participation and respect for others in digital spaces. Most relevant to this study is the emergence of information literacy as a critical skill. Whilst no two frameworks or authors define information literacy identically, common threads recur. DigComp 2.2 frames it as locating and retrieving data, judging relevance and quality, and storing, organising, and managing that data. Similarly, the JISC capabilities framework defines information literacy as the ability to find, evaluate, and organise information. The academic literature expresses consensus around collecting, analysing and evaluating, and synthesising data from diverse sources (Churchill, 2020; Kim, 2019; Martin, 2005; W. Ng, 2012). 
Yet Brenner (2019) shows that students often struggle to internalise these concepts, and their understanding does not always align with established frameworks and definitions. Coupled with the advent of AI, this underscores Hicks’ (2018) account of the complex, situated, sociocultural nature of information literacy and the need to recognise how societal and institutional contexts shape engagements with information. These factors point to the need for information literacy to be clearly embedded in curricula and contextualised through meaningful, discipline-specific experiences.

2.2. AI Literacy

AI literacy builds on and extends digital literacy to include the specific competencies needed to understand, use, and engage with AI (Long & Magerko, 2020; D. T. K. Ng et al., 2021). AI literacy includes the functional knowledge necessary to operate AI-enabled tools, a basic understanding of algorithms, and the critical and ethical dimensions of using AI in education, work, and everyday life (Druga et al., 2017; Kong et al., 2023; Long & Magerko, 2020; D. T. K. Ng et al., 2021; Popenici & Kerr, 2017). Prominent framings of AI literacy include those by D. T. K. Ng et al. (2021), Long and Magerko (2020), and Kong et al. (2023). D. T. K. Ng et al. (2021) frame AI literacy around four interrelated areas: understanding AI concepts and processes; the appropriate use of AI tools; critiquing and evaluating AI outputs; and recognition of the societal and ethical implications of AI use. Similarly, Kong et al. (2023) framed AI literacy according to the cognitive (fundamental AI concepts), affective (confidence to use AI), and sociocultural (ethical use of AI) domains. Others have adopted a more granular approach. For example, Long and Magerko (2020) identified 17 AI competencies and 15 design considerations for educators to develop AI literacy. Casal-Otero et al. (2023) reviewed K–12 initiatives, identifying common components and persistent gaps, including limited assessment of student understanding, little attention to unintended or negative consequences, and the absence of sequenced, continuous, context-adjusted curricula. Yang and Capan (2025) outlined a practitioner-focused K–12 framework highlighting the interplay between conceptual knowledge, practical skills, ethics, and civic readiness. These works indicate common factors: AI literacy goes beyond operational competence to cultivate critical, ethical, and socially grounded engagement with AI systems across diverse educational and societal contexts. 
They emphasise transparency, accountability, and human oversight as requirements for adoption and classroom practice. They position learners not just as users but as co-creators of AI-enabled activities and artefacts.
These frameworks emphasise that AI literacy requires both cognitive and socio-technical competencies: interpreting AI-driven recommendations, questioning their validity, and situating them within disciplinary, cultural, and ethical contexts (Casal-Otero et al., 2023; Zhang & Dafoe, 2019). However, understanding remains fragmented and often shaped by media narratives, leading to misconceptions about capabilities, limitations, and risks (Cave et al., 2019; Druga et al., 2017). This gap between expert definitions and user perceptions parallels information literacy findings, where formal conceptualisations often fail to translate into practice. Touretzky et al. (2019) stress embedding AI literacy in curricula across all levels, not only to equip individuals with technical skills, but also to foster informed, critical engagement with AI systems. This requires a situated, interdisciplinary approach that accounts for AI’s complex, evolving, socially constructed nature, ensuring learners can navigate opportunities and challenges in their personal, professional, and civic lives. Addressing such misconceptions requires teaching about data, models, and limitations, coupled with opportunities to interrogate AI outputs in authentic tasks. This can be supported through discussion, collaborative inquiry, and iterative design.

2.3. GenAI in Education

GenAI is recognised as one of the most transformative technologies to emerge in recent years (Baidoo-anu & Ansah, 2023; Hu, 2022). Applications such as ChatGPT and Microsoft Copilot, known for intuitive interfaces and responsiveness (Gleason, 2022; Grasse et al., 2023; Richter et al., 2019), have accelerated adoption of AI tools in higher education. Whilst best known for text generation, these applications also produce video, images, 3D objects, audio, and code (Cao et al., 2023), and perform complex tasks such as summarising research papers, translating languages, and creating long-form content (O’Dea, 2024). They draw on large datasets and/or internet information (Lim et al., 2023), processed by text generators (e.g., ChatGPT’s Generative Pre-trained Transformer) built on large language models (LLMs). LLMs are models within natural language processing (NLP) (O’Dea & O’Dea, 2023) that recognise patterns and relationships in data to create statistically probable content, outputting it in human-like formats (Nah et al., 2023). A key feature is their ability to interpret user instructions and generate responses within seconds (O’Dea, 2024).
The literature identifies many opportunities for GenAI in education. Several relate to personalisation, including personalised interactions addressing unique learning needs (Lo, 2023), early identification of learning difficulties (Luckin et al., 2018; Panjwani-Charania & Zhai, 2024), assistive tools to help students overcome barriers (Dolan, 2021), and immediate feedback enabling personalised learning paths (Boulhrir & Hamash, 2025; Ilieva et al., 2023; Taranikanti & Davidson, 2023). GenAI can support academic research and writing by summarising documents, suggesting relevant studies, and aiding drafting (Eke, 2023; Sok & Heng, 2023). It may also contribute to cognitive skills such as creativity, critical thinking, and problem solving (Essel et al., 2024; Hwang & Chang, 2023; Jia & Tu, 2024; Wu et al., 2024). By giving students access to greater volumes of information, learners can deploy analytical and creative skills to experiment with concepts and approaches and explore complex topics from multiple perspectives (Bearman et al., 2023; Essien et al., 2024; Hooda et al., 2022; Sok & Heng, 2023). Students may even develop social and emotional capacities by engaging with chatbots in simulated real-world scenarios (Lin et al., 2024). For teachers, GenAI can streamline assessment generation and grading, content summarisation and creation, and analysis of student data (Božić & Poola, 2023; Cotton et al., 2024; Dai et al., 2023). Recent studies highlight several emerging uses of GenAI for teachers. For instance, GenAI can act as a co-teaching assistant, enabling instructors to cover more material while supporting student understanding (Alghazo et al., 2025). It also facilitates teacher collaboration and professional learning by co-constructing instructional resources within professional communities (Tan et al., 2025). 
In immersive learning contexts, GenAI-powered virtual characters can provide guidance and companionship, enriching VR-based lessons (Hemminki-Reijonen et al., 2025). In arts education, GenAI supports creative collaboration by co-composing and improvising with students (Tsao & Nogues, 2024). In STEM, it offers dynamic scaffolding, enabling adaptive, real-time tutoring that helps address misconceptions (M. A. Hamash et al., 2024; K. D. Wang et al., 2025). These applications position GenAI as both an efficiency tool and a partner in innovation across teaching practices.
The literature also documents challenges. Educators worry that overdependence on GenAI may erode students’ cognitive skills, reducing creativity, critical thinking, and problem-solving (Memarian & Doleck, 2023). Academic integrity and plagiarism are major concerns: AI-generated assessment content has become a significant problem (Ogurlu & Mossholder, 2023; Wild, 2023; Williams, 2024). GenAI makes it easy for students to generate essays, problem solutions, and other assignments (Lodge et al., 2023) that are not easily detected by academic staff or traditional plagiarism tools (Sullivan et al., 2023). Consequences include potential unfair advantage over peers (Cotton et al., 2024), damage to the integrity of academic qualifications (Balalle & Pannilage, 2025), reduced trust between teachers and students (Gratiot, 2023; Plé, 2023), and students spending more time avoiding AI detection than producing work that deepens their understanding (Luo, 2025).
Another set of concerns centres on data privacy and security. AI systems are trained on vast datasets collected and analysed, often without users’ knowledge or consent (Porayska-Pomsta et al., 2023). Large, highly profitable companies may use this data with little acknowledgement of copyright or the intellectual property rights of original creators and producers (Baer, 2025). Some institutions, wary of ethical implications, argue students should be banned from using AI (Eke, 2023). A further significant concern is the quality of data produced by generative AI and the implications of its use and proliferation. Because models like ChatGPT are trained predominantly on internet-based datasets, they can reflect inherent biases, incomplete information, and the underrepresentation of marginalised or minority groups (Angwin et al., 2016; Chan & Hu, 2023; Halaweh, 2023). These concerns are not abstract: Scatter Lab’s AI chatbot was reported to have used offensive language toward LGBTQ+ persons and people with disabilities (Perkins, 2020), and Meta systems powered by AI initially labelled videos of black men as primates (Dadkhahnikoo, 2020). Worries about data quality are exacerbated by limited transparency in the algorithms that aggregate and present information. Many AI systems operate as ‘black boxes’, with decision-making processes not understood by students, teachers, or even developers. If processes cannot be explained or audited, accountability for errors and biases may go unchecked (Noble, 2020; O’Neil, 2016). Studies also document inaccuracies, hallucinations, and false information produced by GenAI (Montenegro-Rueda et al., 2023). Outputs that appear credible can be misleading, inaccurate, or fabricated when scrutinised or cross-checked (Boyle, 2025; Kutty et al., 2024). Beyond the harms of false information, this amplifies risks of misinformation, disinformation, and malicious use (Shelby et al., 2023; Weidinger et al., 2021). 
Finally, concerns remain about equitable access to AI tools (Sullivan et al., 2023), with a potential new digital divide if students from low-income or disadvantaged areas lack resources to engage effectively (Luckin et al., 2018).

2.4. Teacher and Student Engagement with GenAI

At an institutional level, responses to GenAI are mixed, with some institutions embracing it as a potential catalyst for transforming teaching and learning (Xiao et al., 2023), while others discourage or ban its use because of ethical and academic integrity concerns (Sullivan et al., 2023). International studies indicate students are aware of AI and its capabilities (Abdelwahab et al., 2023; Chan & Hu, 2023; Farhi et al., 2023; Malik et al., 2023), including use for preparing for classes and exams, and completing homework and projects (Laird & Dwyer, 2023; Sidoti & Gottfried, 2025; Walczak & Cellary, 2023). Students view GenAI as support for creative writing (Farhi et al., 2023; Malik et al., 2023) and as a way to improve critical thinking and problem solving (Farhi et al., 2023; Walczak & Cellary, 2023). However, they recognise challenges. Studies show concerns about the quality and accuracy of AI-generated information (Jiang & Nakatani, 2025) and acknowledgement of the ethical issues involved in using AI for coursework, which many view as cheating (Matzinger, 2023; Rogers et al., 2024). Teachers are using AI. Many are optimistic about its potential (McGehee, 2023; Tiernan & Donlon, 2024; Woodruff et al., 2023), yet reported use centres on non-teaching activities such as lesson planning and idea generation (Walton Family Foundation, 2023) and exploring personalised learning (McGehee, 2023). Use for instruction is lower, shaped by concerns that include lack of training and expertise (Tiernan & Donlon, 2024; Whalen et al., 2025), absence of clear policy and guidance (Whalen et al., 2025), reduced human interaction and social engagement (McGehee, 2023), and potential negative impacts on student learning and development (Clarke, 2025; Ogurlu & Mossholder, 2023).
Teachers play a pivotal role in helping students understand the opportunities and challenges AI brings to education, requiring that they receive appropriate training and guidance for use in educational settings (Russell Group, 2023; Zhu et al., 2023). This role is particularly critical in inclusive classrooms, where educational technologies may support learners with diverse needs, including students with special educational needs and disabilities (M. Hamash & Mohamed, 2021). Many teachers feel overwhelmed by AI and its rapid development. This lack of confidence and knowledge can reduce meaningful integration and increase the risk of inappropriate use or misuse (Celik et al., 2022; Chan & Hu, 2023). Training should include a fundamental understanding of AI and how it works, including algorithms (Jovanović & Campbell, 2022; Richter et al., 2019), how algorithms operate and make decisions (Chan & Hu, 2023; Chen et al., 2023), and opportunities to practise classroom integration (Chan & Hu, 2023; Rachha & Seyam, 2023). Priority areas include personalised learning (Baidoo-anu & Ansah, 2023) and the development of innovative materials and exercises (Khalil et al., 2023), alongside explicit attention to pitfalls such as ethical issues and information quality concerns, and strategies to mitigate them (Aydin & Karaarslan, 2023; Celik et al., 2022). Teachers need clear guidelines and policies to prevent misuse and encourage responsible, pedagogically sound application of AI in education (Cotton et al., 2024; Qadir, 2023). These should emphasise transparency and accountability (Halaweh, 2023) so that teachers are equipped to navigate the opportunities and challenges AI presents (Luo, 2025). Teachers should also consider how GenAI aligns with curricular, pedagogical, and student needs (Rahman & Watanobe, 2023). 
Integration should enhance learning by engaging students in critical and creative thinking with AI (Al-Abdullatif & Alsubaie, 2024; Dwivedi et al., 2023; Esiyok et al., 2024; Essel et al., 2024; Laak et al., 2024), and by fostering understanding of how AI works, a critical approach to evaluating AI outputs (Zhai, 2022; Zhu et al., 2023), and awareness of the wider social implications of AI (Cervera & Caena, 2022; Kaur & Gill, 2024).
While much of the literature is valuable to the teaching and research communities, some scholars caution against overreliance on opinion pieces and literature reviews (Jiang & Nakatani, 2025). There are increased calls for more empirical studies that explore the practical implications of using GenAI in education and capture the experiences of teachers and students (Chan & Hu, 2023; Mishra et al., 2023; O’Dea, 2024).

2.5. WebQuests

WebQuests, first developed by Bernie Dodge in 1995, are inquiry-oriented learning activities that utilise web-based resources to foster critical thinking, problem-solving, and higher-order thinking skills such as analysis, synthesis, and evaluation of information. The core objective of WebQuests is to create a structured approach to web-based learning whilst promoting meaningful engagement by encouraging students to think critically about the information they encounter and apply it to a meaningful task (Vidoni & Maddux, 2002). In a WebQuest, students are given specific tasks that require them to use information from the web to form conclusions, solve problems, or create new content (Mohammadi et al., 2023). The structure of a WebQuest guides students through the inquiry process, helping them develop essential research skills, evaluate information credibility, and apply it by judging its relevance to complex tasks (Bui et al., 2018). By focusing on these higher levels of thinking, WebQuests prepare students for real-world problem-solving and decision-making, skills valued in both academic and professional settings. Research shows that inquiry-based activities like WebQuests help develop students’ critical thinking and collaborative skills, making them more engaged and independent learners (Abu-Tineh et al., 2019; Auditor & Roleda, 2013; Zhou et al., 2012).
The structured approach of WebQuests (see Table 1) allows students to engage with current and diverse sources of information while working together on tasks, sharing information, and solving problems collectively (Liang & Fung, 2020). WebQuests have evolved to integrate online platforms, multimedia content, and interactive exercises (Bui et al., 2018). They are versatile, being adapted to subjects ranging from science to social studies, and implemented at all levels of education, from primary to higher education (Abu-Tineh et al., 2019). Of particular relevance to this study, WebQuests have been employed in teacher education contexts to support pre-service teacher development. Smith et al. (2005) conducted a qualitative study examining the use of WebQuests in problem-based elementary methods courses (science and literacy), exploring their potential to address three key dilemmas in teacher education: modelling research-based instructional practices, providing pre-service teachers with sufficient pedagogical knowledge within limited course time, and preparing them to integrate technology meaningfully in their teaching. Their findings demonstrated that WebQuests effectively supported pre-service teachers’ construction of pedagogical content knowledge consistent with teaching standards, whilst simultaneously introducing them to multiple perspectives on educational issues and developing their ability to critically evaluate web-based information.
The structured nature of WebQuests has proven particularly valuable in teacher education, providing scaffolded experiences with problem-solving processes whilst requiring engagement with technology (F. Wang & Hannafin, 2008). Research suggests that with adequate WebQuest support, pre-service teachers may enhance their technology integration capabilities before entering the profession (Abu-Elwan, 2007; Bayram et al., 2019; Piro & Marksbury, 2012). The scaffolded approach inherent in WebQuests aims to develop not only the competence but also the willingness of pre-service teachers to undertake complex assignments independently (Belland et al., 2013). WebQuests are particularly effective in developing 21st-century skills such as digital literacy and communication skills (Zhukova et al., 2021). As technology continues to evolve, WebQuests have the potential to play a pivotal role in the modern educational landscape.

2.6. Research Gap and Study Purpose

Whilst the literature demonstrates growing interest in both GenAI’s educational potential and the importance of developing AI literacy, significant gaps remain in understanding how these technologies can be meaningfully integrated into initial teacher education. Existing research has largely focused on student use of GenAI or teacher attitudes towards AI, with limited investigation of structured pedagogical approaches that support pre-service teachers in developing critical AI literacy skills (Chan & Hu, 2023; Mishra et al., 2023). Furthermore, whilst AI literacy frameworks articulate what competencies are needed (Kong et al., 2023; Long & Magerko, 2020; D. T. K. Ng et al., 2021), there is insufficient guidance on how these competencies can be cultivated through specific pedagogical interventions in teacher education contexts. The WebQuest methodology, despite its established effectiveness in developing information literacy and critical thinking, has not been explored as a framework for scaffolding pre-service teachers’ engagement with GenAI tools. This represents an opportunity to leverage a proven inquiry-based approach to address emerging challenges in digital literacy education. Additionally, whilst calls have been made for authentic studies examining how educators critically engage with AI-generated content (Jovanović & Campbell, 2022; Rachha & Seyam, 2023), there remains a paucity of research documenting pre-service teachers’ experiences in evaluating, comparing, and synthesising information from both curated academic sources and AI-generated outputs within structured learning activities. This case study addresses these gaps by investigating how a WebQuest incorporating GenAI tools can support pre-service teachers in developing critical evaluation skills, functional AI competencies, and pedagogical awareness necessary for their future practice. 
By examining pre-service teachers’ experiences, evaluation processes, and synthesis strategies within a bounded, scaffolded context, this research contributes evidence to inform both teacher education practice and the integration of emerging technologies in initial teacher education programmes.
Against this backdrop, this study explores the following research questions:
  • What are pre-service teachers’ experiences in using GenAI tools as part of their WebQuest?
  • How do pre-service teachers consider the accuracy, reliability, and general validity of GenAI responses?
  • What are pre-service teachers’ experiences of synthesising information from curated sources and GenAI output for a WebQuest task?

3. Methodology

This case study was designed to explore pre-service teachers’ experiences in using GenAI tools as part of a WebQuest activity. The case study approach is appropriate given the bounded nature of the investigation, a single cohort of 24 pre-service teachers within a specific module context, and the aim to provide in-depth, contextualised insights into how structured pedagogical approaches can support critical engagement with emerging technologies (Stake, 1995; Yin, 2018). The study, which was situated within an initial teacher education module, gathered qualitative data to examine how pre-service teachers engaged with, evaluated, and reflected upon the use of GenAI content, alongside curated information sources.

3.1. Participation and Sample

The module cohort comprised 29 Year 2 students enrolled in a Bachelor of Education in Gaeilge and French, German or Spanish (BEDLAN) programme at the Institute of Education, Dublin City University. Students were undertaking a module on digital media and language learning, taught by two of the authors, two hours per week during semester two. This module introduces students to pedagogically grounded uses of technology in language education.
For the purpose of the research, a WebQuest activity was integrated into the module’s curriculum, focusing on the development of critical digital and information literacy skills in the context of emerging AI technologies. All 29 students completed the WebQuest activity as part of their regular coursework. However, participation in the research element, specifically the semi-structured interviews, was entirely voluntary, with no impact on academic assessment or progression. Of the 29 students in the cohort, 24 (82.8%) volunteered to participate in the research interviews, forming the sample for this case study (n = 24). This high participation rate suggests that the findings capture the experiences of the substantial majority of students who completed the WebQuest activity.

3.2. Description of Process

Over a two-week period, students participated in a scaffolded WebQuest structured around the WebQuest pedagogical framework (Dodge, 1995). The activity introduced the concept of digital and information literacy and guided students through an inquiry using two distinct source sets. The resources page of the WebQuest contained a structured collection of academic and credible online resources for pre-service teachers to review. It also contained links to ChatGPT (4.5) and Copilot (3.3.19) for students to pose questions on the topic. To aid with this process, students were provided with a set of prompts to gather information from the GenAI tools (see Table 2 below). Following the information gathering phase, students were asked to synthesise findings from both source sets into an infographic aimed at a secondary school student audience. The infographic task served multiple pedagogical purposes: it required students to evaluate and synthesise information from diverse sources, translate complex academic concepts into accessible formats, and consider age-appropriate communication strategies, all essential skills for future teachers. Additionally, the creative production task encouraged deeper processing of the content and provided an authentic assessment of students’ understanding of both digital literacy concepts and the comparative affordances of curated versus AI-generated sources.
Pre-service teachers were also asked to critically reflect on the accuracy, reliability, and potential bias of GenAI outputs, or put simply, to adopt a critical perspective on the information they received. This reflection was scaffolded through the WebQuest structure, which explicitly directed students to compare information across sources and evaluate credibility.

3.3. Instruments

Data collection was primarily carried out using semi-structured interviews, which explored pre-service teachers’ experiences with GenAI, their reflections, and their perceptions of the implications of its use. Exploratory questions included: ‘How would you describe the experience of using GenAI as part of a WebQuest?’, ‘Have you any comments on (a) the information provided by the AI tools, and (b) the process of asking the AI for information?’. Pre-service teachers were specifically asked, ‘Did you notice any difference between the responses you received from ChatGPT and Copilot?’ and ‘How did you find the process of combining the information you found through the links with the information provided to you by AI?’ Pre-service teachers were also asked to comment on the overall process: ‘How did you find WebQuests as a tool for finding information related to Digital Literacy? What are your overall thoughts on the experience?’

3.4. Ethical Considerations

This study received ethical approval from the university ethics committee prior to commencement (Approval No: DCUREC/2025/114). Particular care was taken due to the dual role of the authors as both lecturers and researchers. To minimise bias and ensure ethical integrity, students were informed that participation was voluntary and that their responses would remain anonymous, with no impact on their grades or course progress.

3.5. Data Analysis

In an effort to understand pre-service teachers’ perceptions and experiences, interview data were analysed using Braun and Clarke’s (2006) thematic analysis approach. This approach involves analysing the qualitative data for patterns in the words and phrases in pre-service teacher interviews, which were coded and grouped together as initial categories. As categories emerged, rules of inclusion were developed to ensure consistency in each category. If a piece of data did not meet the rules for inclusion, a new category was created. Two researchers independently coded the interview transcripts. Each researcher initially coded all 24 interviews individually, identifying preliminary codes and patterns in the data. Following independent coding, the researchers met to compare coding frameworks and discuss emergent themes. Where codes differed, researchers engaged in negotiated agreement through discussion of the underlying data, ultimately reaching consensus on a unified coding structure (Richards & Hemphill, 2018). Discrepancies were resolved through collaborative review of specific transcript segments and refinement of theme definitions to ensure consistency. To assess inter-rater reliability, the team calculated percentage agreement across a randomly selected subset of five interviews (21% of the sample), achieving 87% agreement prior to consensus discussions (McAlister et al., 2017). This iterative, collaborative approach enhanced rigour whilst allowing for multiple perspectives on the data to inform theme development. For presentation of data, quotations in the findings are identified using participant codes (P1, P2, etc., where P indicates ‘Participant’).
To illustrate the coding process: when Participant 5 stated, “The AI responses were lengthy and in-depth, from which you could pick and choose the information that was relevant,” this was initially coded as “AI verbosity” and “selective information use.” Similar patterns emerged across multiple participants, with Participant 20 noting “At times, AI answers were either too text heavy or didn’t give the information we were looking for,” coded as “excessive detail” and “relevance concerns.” Participant 9 observed “ChatGPT often gave a bit more content but occasionally waffled on a bit,” coded as “comprehensive but unfocused.” Through iterative refinement, these individual codes were grouped (as seen in Table 3) under the broader theme “Length and relevance of responses,” with sub-dimensions distinguishing positive aspects (comprehensive detail providing material for synthesis) and negative aspects (excessive verbosity requiring additional prompting). This theme captured pre-service teachers’ recognition that whilst AI-generated responses offered depth, they required critical evaluation and selective extraction to identify relevant information for the task at hand.
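The percentage-agreement measure described above can be illustrated with a minimal sketch. This is not the authors’ actual analysis procedure; the segment codes below are hypothetical examples modelled on the codes named in the text, and it shows only the simple calculation: the share of coded segments to which both researchers assigned the same code.

```python
def percentage_agreement(coder_a, coder_b):
    """Share of segments to which both coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of segments")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes applied by two researchers to six transcript segments.
coder_a = ["AI verbosity", "relevance concerns", "selective information use",
           "excessive detail", "AI verbosity", "comprehensive but unfocused"]
coder_b = ["AI verbosity", "relevance concerns", "selective information use",
           "AI verbosity", "AI verbosity", "comprehensive but unfocused"]

# Five of the six segments match, giving roughly 83% agreement.
print(f"Agreement: {percentage_agreement(coder_a, coder_b):.0%}")
```

In practice, agreement would be computed over the randomly selected subset of five interviews before the consensus discussions, with disagreements then resolved through negotiated agreement as described above.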

4. Findings and Discussion

This section presents the findings from the thematic analysis of the semi-structured interviews with pre-service teachers, followed by detailed discussion of their meaning and implications. The findings are organised according to the three research questions that guided this case study. Table 4, Table 5, Table 6, Table 7 and Table 8 illustrate the themes, the number of participants who expressed each theme, representative quotes, and the key principles underlying each theme. These tables demonstrate both the breadth of participant engagement with different aspects of the WebQuest experience and the depth of critical reflection evident in their responses. The findings address: (RQ1) pre-service teachers’ experiences using GenAI tools within the scaffolded WebQuest structure; (RQ2) their critical evaluation of AI-generated information quality, accuracy, and reliability; and (RQ3) their processes of synthesising information from both curated academic sources and GenAI outputs to complete an authentic pedagogical task.

4.1. RQ1: Experiences of Using GenAI as Part of a WebQuest

Pre-service teachers reported that using GenAI in the WebQuest offered valuable insights into applying GenAI and prompted reflection on outputs. The majority of participants indicated that they do not typically have opportunities to use AI as part of their coursework, making this integration particularly valuable. The structured WebQuest framework, particularly the provision of suggested prompts, scaffolded engagement with GenAI effectively. The evidence suggests that integrating GenAI into a scaffolded WebQuest can give pre-service teachers valuable opportunities to develop functional, critical, and reflective skills associated with digital literacy (Kong et al., 2023; Long & Magerko, 2020; Martin, 2005; D. T. K. Ng et al., 2021; Oliver et al., 2000). The findings align with calls from Chan and Hu (2023), Jovanović and Campbell (2022), and Rachha and Seyam (2023) for authentic studies of educator engagement with AI, demonstrating how structured pedagogical frameworks can support meaningful interaction with emerging technologies.
The WebQuest framework directly addressed critical educator preparation needs identified in the recent literature on GenAI in education. Both Jovanović and Campbell (2022) and Rachha and Seyam (2023) emphasise that educators require specialised preparation to evaluate AI-generated content, as the ‘black box’ nature of generative models makes it difficult to determine which features influence outputs or to assess their trustworthiness. Rachha and Seyam specifically note that educators need guidance in evaluating AI outputs for “utility, aesthetics, clarity, or similarity to real-world content” and that developing such evaluation skills requires moving beyond technical understanding to critical analytical capabilities. Our WebQuest structure addressed these needs through two key pedagogical mechanisms. First, the prompt scaffolding in Task 2 required pre-service teachers to systematically vary their prompts whilst observing how changes affected output quality, thereby developing intuition about GenAI model behaviour without requiring deep technical knowledge of the underlying algorithms. As P14 reflected, “I learned that the more specific you are with your prompt, the better quality response you will get.” Second, the comparison task in Task 3 required pre-service teachers to evaluate AI-generated responses against each other, directly building the critical evaluation skills that Jovanović and Campbell identify as essential for responsible AI use. This structured comparison approach enabled pre-service teachers to move beyond surface-level assessment; for example, P8 noted learning to evaluate “the accuracy and relevance of AI-generated content,” whilst P20 described developing awareness of “verifying the information provided by AI tools.” These reflections demonstrate how the WebQuest’s scaffolded structure successfully developed the critical evaluation capabilities that both papers identify as currently missing from educator preparation.

4.2. RQ2: Evaluating the Credibility and Accuracy of AI-Generated Information

Pre-service teachers in this case study demonstrated that the activity encouraged them to consider the quality of AI-generated information critically. Their reflections addressed multiple dimensions of quality, including clarity, accuracy, and appropriateness for purpose. Regarding clarity, responses were mixed. Some participants appreciated GenAI’s ability to simplify complex ideas and support understanding, particularly its capacity to break down concepts and explain them in accessible terms. As P22 noted, ‘Using AI allowed me to gain an easier understanding,’ whilst P18 appreciated how ‘AI breaks down the question you ask it’. However, not all experiences were positive, as some found AI responses harder to follow than traditional academic sources, suggesting that simplification does not uniformly enhance comprehension. P2, for instance, found that ‘the language and layout of the responses weren’t as well laid out so I got more confused and found the information from the links easier to understand’. Participants also discussed consistency and accuracy, demonstrating awareness of the need to evaluate AI-generated responses. There was recognition that whilst generally accurate outputs can appear credible, they can also be misleading or incorrect, necessitating verification against trusted sources. Whilst many valued AI for efficiency and accessibility, they simultaneously expressed reservations about trusting it completely. This tension is exemplified by P4’s observation: ‘You have to be careful with the information it gives you as it can be wrong,’ and P8’s caution that ‘its sources can often be flawed, and to not trust everything that it comes out with’. This tension between convenience and credibility represents an important finding: pre-service teachers are developing a nuanced understanding of GenAI as a tool that offers significant affordances but requires critical oversight.
This evaluative stance aligns with the goals of AI literacy frameworks that emphasise not just functional use but critical assessment of AI systems (Kong et al., 2021; D. T. K. Ng et al., 2021; Popenici & Kerr, 2017). The underlying mechanism appears to be one of comparative evaluation. By having access to both curated academic sources and AI-generated content within the same task, pre-service teachers were able to develop benchmarks for quality and identify where AI outputs fell short of academic standards. This comparative process supported the development of evaluative criteria that extended beyond surface-level assessments. Regarding differences between AI platforms, students noted variations between ChatGPT and Copilot across several dimensions. Whilst some perceived them as broadly equivalent, others highlighted differences in detail, length, structure, formatting, and style. For example, P9 observed that ‘ChatGPT often gave a bit more content but occasionally waffled on a bit whereas Copilot was quicker and more concise’. These observations demonstrate that scaffolded engagement can enable pre-service teachers to move beyond treating “AI” as a monolithic entity and instead recognise variation across platforms, developing more sophisticated understanding of different tools’ characteristics.
This evidence suggests that scaffolded activities which ask pre-service teachers to compare AI outputs with curated resources support critical evaluation central to digital and AI literacy (Churchill, 2020; Kim, 2019; Long & Magerko, 2020; Martin, 2005; D. T. K. Ng et al., 2021). The study extends existing work by showing how a WebQuest methodology can turn abstract calls for critical evaluation into concrete classroom practice, offering a replicable model for cultivating discernment in navigating AI-mediated information landscapes.

4.3. RQ3: Synthesising Information from Curated Sources and GenAI Outputs

Comments indicate the WebQuest approach allowed pre-service teachers to think critically about information provided and apply that information to the infographic task. The clear and easy-to-follow structure was noted by the majority of participants, who valued the comprehensive explanations for each step and the ability to navigate flexibly through the process. The guided approach and readily available resources meant that students could focus on higher-order tasks of synthesis and production rather than spending time verifying sources. Participants reported that the approach was more engaging than traditional methods, provided information from multiple perspectives, and was helpful in supporting learning. Several noted interest in using WebQuests as a resource in their future teaching, recognising the pedagogical value of the structured inquiry approach. The synthesis task, which required pre-service teachers to combine curated sources and GenAI outputs to create an infographic for secondary school audiences, revealed purposeful engagement with multiple source types. Participants recognised the value of drawing on diverse sources, with comments noting that multiple source types were “really good and helpful” (P7) and enabled “more in-depth explanations” (P13). This complementary approach allowed them to “gain access to a wide range of info very quickly” (P17), suggesting that the combination of curated and AI-generated content provided comprehensive coverage that neither source type could offer alone. For example, Figure 1 demonstrates how one pre-service teacher drew on information from digital literacy frameworks and ChatGPT to present their understanding of digital literacy.
However, participants did not treat all sources as equivalent. They demonstrated awareness of distinctions between source types, noting differences in “language used” (P2) that made it “obvious which resource was which” (P2). Critically, this awareness translated into differentiated trust and strategic deployment. Whilst some participants enthusiastically combined sources, others expressed clear preferences: “I had more confidence that the information I was receiving was correct” (P4) when using curated resources, with one participant noting, “I tried my best to stay away from AI… and focused more on finding information in the references provided” (P7). This hierarchy of trust represents a sophisticated evaluative stance, moving beyond uncritical acceptance of all available information.
The patterns of GenAI integration are particularly interesting. Rather than using AI as a primary information source, participants often assigned it specific, limited roles within their workflow. AI “responses were used as titles to my research” (P18), providing organisational structure, or were added “as quotes to back up the information we had found through the [WebQuest] links” (P22), functioning as supplementary rather than foundational content. This strategic positioning reflects understanding that different source types serve different purposes, a key dimension of information literacy. Figure 2 demonstrates evidence of this approach where pre-service teachers seem to have led with academic content and used GenAI to supplement this with additional detail and summary information. The underlying mechanism appears to be one of source triangulation and purposeful selection. By requiring engagement with both curated and AI-generated sources within a single task, the WebQuest created conditions for participants to directly compare source characteristics and develop informed preferences. This experiential learning, discovering through practice that curated sources felt more trustworthy or that AI was useful for specific purposes, likely supported more durable understanding than abstract instruction about source evaluation would have achieved. The synthesis requirement transformed what could have been a passive information-gathering exercise into an active evaluation and decision-making process. Pre-service teachers also expressed greater trust in curated sources; as P4 put it, they “had more confidence that the information I was receiving was correct.” Several participants indicated they preferentially used curated resources when accuracy was paramount, whilst strategically deploying AI for efficiency, organisation, or supplementary purposes.
This represents a sophisticated approach to source integration: rather than uncritically accepting all sources as equivalent, pre-service teachers developed hierarchies of trust and purpose-specific source selection strategies.
These data suggest that the underlying structure of the WebQuest methodology provided the mechanism for a comparative approach. WebQuests may be a powerful scaffold for inquiry-based learning, enabling pre-service teachers to engage critically with sources whilst producing meaningful outputs. The structured design and guided resources reflected strengths identified in earlier studies for fostering critical thinking, collaboration, and engagement (Abu-Tineh et al., 2019; Auditor & Roleda, 2013; Bui et al., 2018; Zhou et al., 2012), whilst extending these by incorporating GenAI as a complementary and contested information source. The findings show students valued the clarity, organisation, and time-saving benefits of the WebQuest structure and used it to interrogate the affordances and limitations of AI-generated content, echoing organisation, synthesis, and production dimensions in digital and AI literacy frameworks (Churchill, 2020; Kong et al., 2023; D. T. K. Ng et al., 2021; Vuorikari et al., 2016). The evaluative stance demonstrated aligns with calls for embedding critical engagement in authentic academic tasks, whilst the creative synthesis evident in the infographics illustrates how WebQuests can bridge curated and AI-based sources into coherent, relevant outputs.

5. Discussion

The findings from this case study can be systematically mapped onto established AI literacy frameworks, demonstrating how the WebQuest pedagogical approach supported development across multiple dimensions of AI literacy.

5.1. D. T. K. Ng et al. (2021) Four-Area Framework

D. T. K. Ng et al. (2021) conceptualise AI literacy around four interrelated areas: (1) understanding AI concepts and processes, (2) appropriate use of AI tools, (3) critiquing and evaluating AI outputs, and (4) recognition of societal and ethical implications. (1) Understanding AI concepts and processes: Whilst participants did not develop deep technical understanding of underlying algorithms, they did develop functional understanding of how GenAI tools operate, i.e., that they require clear prompting, that they can produce variable outputs, and that they draw on large datasets that may contain inconsistencies. Participant 8’s observation that “sometimes you have to be careful how you word a sentence” demonstrates emerging understanding of the relationship between input quality and output quality. (2) Appropriate use of AI tools: The findings demonstrate development of strategic, purpose-driven AI use. Participants learned to use AI tools effectively within an academic context, with many noting ease of use and quick response times. More significantly, they developed sophisticated strategies for when and how to deploy AI, using it for efficiency and organisation whilst preferencing curated sources for accuracy-critical content. This represents appropriate, rather than indiscriminate, use of AI. (3) Critiquing and evaluating AI outputs: This dimension was strongly supported by the case study findings. Participants developed critical evaluation skills, noting concerns about accuracy (“you have to be careful with the information it gives you as it can be wrong”), clarity, and verbosity. The comparative structure of the WebQuest, requiring engagement with both GenAI and curated sources, created conditions for developing evaluative benchmarks. 
(4) Recognition of societal and ethical implications: Whilst less explicitly addressed in the data, participants’ expressed concerns about trustworthiness and their preferential use of curated sources suggest emerging awareness that uncritical reliance on AI could have negative implications for information quality and academic integrity.

5.2. Long and Magerko (2020) AI Competencies

Long and Magerko (2020) identified 17 specific AI competencies organised around themes including recognising AI, understanding AI capabilities and limitations, and critically evaluating AI. (1) Recognising AI: Participants clearly recognised and distinguished between AI tools (ChatGPT and Copilot) and traditional information sources, noting differences in language used and presentation style. (2) Understanding AI capabilities: The findings show participants recognised AI’s capabilities for simplification (“AI will almost ‘dumb’ things down slightly”), speed (“came back with answers very quickly”), and comprehensiveness (“lengthy and in-depth”). They also recognised its ability to respond to natural language queries and adapt outputs based on prompting. (3) Understanding AI limitations: Critically, participants developed awareness of AI limitations, including potential inaccuracies, challenges with “minor dialects of English,” excessive verbosity, and variable quality. This nuanced understanding moves beyond both uncritical acceptance and blanket rejection. (4) Critically evaluating AI: The comparative evaluation demonstrated throughout the findings, assessing clarity, accuracy, relevance, and comparing platforms, represents emerging critical engagement with GenAI outputs.

5.3. Kong et al. (2023) Cognitive, Affective, and Sociocultural Domains

Kong et al. (2023) framed AI literacy across cognitive (fundamental AI concepts), affective (confidence to use AI), and sociocultural (ethical use of AI) domains. (1) Cognitive domain: Participants developed conceptual understanding of AI as a tool requiring skilled interaction (prompting), capable of both valuable and problematic outputs, and varying across platforms and contexts. (2) Affective domain: The findings demonstrate development of confidence in using AI tools. Twelve participants noted ease of use, and 20 indicated the experience was valuable and beneficial. Simultaneously, they developed appropriate caution rather than either over-confidence or technophobia, a balanced affective stance supportive of responsible AI engagement. (3) Sociocultural domain: Participants’ strategic choices about when to prioritise curated sources over AI, their recognition of the need to verify AI outputs, and their expressed intention to use similar structured approaches in their own teaching all suggest developing understanding of responsible, ethical AI use in educational contexts.

5.4. Development of Critical Thinking in Teacher Education

Critical thinking has long been recognised as a cornerstone of teacher education, yet effectively developing it remains challenging (Kuhn, 1999; Paul & Elder, 2001). The activity in this case study encouraged critical thinking through several interconnected mechanisms. (1) Comparison and contrast: The requirement to engage with multiple source types necessitated comparison, a fundamental critical thinking skill. Participants could not complete the task without evaluating relative strengths and weaknesses of different sources. (2) Evaluation of credibility: The activity required participants to assess accuracy, reliability, and trustworthiness, core components of critical evaluation. Unlike activities using a single source type, the deliberately diverse source set forced evaluative judgements. (3) Synthesis for authentic purpose: Creating infographics for secondary school audiences required analysis (breaking down complex concepts), evaluation (selecting appropriate content), and synthesis (combining information coherently), representing higher-order thinking. (4) Reflection on process: The interview protocol explicitly prompted reflection on the information-seeking and synthesis process, encouraging participants to think about their own thinking. For future teachers, this has particular significance. Pre-service teachers will need to model information literacy and critical thinking for their own students. Having experienced structured inquiry that develops these skills positions them to implement similar approaches in their classrooms.
Several participants explicitly recognised this potential, noting they would “definitely consider using this tool in my future classes” and that “these types of resources will be useful to us going forward, because if we can use them effectively in our class, we will be teaching different skills at once.” This meta-pedagogical awareness, understanding not just what they learned but how they learned it and how they might teach it, represents an important outcome for teacher education. Moreover, as GenAI tools become increasingly prevalent in educational settings, teachers will need to guide students in developing critical AI literacy. Having engaged in structured activities that support such development provides pre-service teachers with pedagogical models and personal experience to draw upon. This addresses calls from numerous scholars (Chan & Hu, 2023; Jovanović & Campbell, 2022; Rachha & Seyam, 2023) for teacher education to include authentic opportunities to develop AI-related pedagogical knowledge.

5.5. Theoretical Contribution

This case study makes several theoretical contributions. First, it demonstrates empirically how an established pedagogical framework (WebQuests) can be adapted to support development of emerging literacies (AI literacy), offering a practical model for technology integration that builds on rather than replaces proven approaches. Second, it provides evidence that comparative engagement with diverse source types, particularly the juxtaposition of curated and AI-generated content, supports development of evaluative criteria and critical thinking. Third, it illustrates how tensions and contradictions (e.g., between efficiency and trust) can be productive sites for learning, supporting development of nuanced rather than simplistic understandings. Finally, it contributes to the limited empirical literature on GenAI in teacher education specifically, addressing calls from multiple scholars (Chan & Hu, 2023; Mishra et al., 2023; O’Dea, 2024) for research that moves beyond speculation to examine actual experiences.

6. Conclusions

This case study explored how WebQuests, incorporating GenAI tools such as ChatGPT and Microsoft Copilot, can be used to develop AI literacy amongst pre-service teachers. By situating GenAI within a scaffolded inquiry-based framework, participants were encouraged not only to engage with emerging AI technologies but also to critically interrogate the accuracy, clarity, and reliability of their outputs. Findings indicate that WebQuests provide a valuable structure for supporting meaningful interaction with GenAI, prompting students to compare AI responses with curated sources, and to synthesise information into coherent outputs. The case study highlights that GenAI has the potential to support efficiency, accessibility, and creativity in information-seeking tasks, whilst also introducing new challenges related to accuracy, bias, and overreliance. Importantly, the WebQuest model encouraged pre-service teachers to develop evaluative skills and reflect on the pedagogical affordances and limitations of GenAI. Systematic mapping of findings to established AI literacy frameworks demonstrates that the structured approach supported development across cognitive, affective, and sociocultural dimensions of AI literacy. Taken together, these results suggest that integrating GenAI within structured, inquiry-driven pedagogies can advance both traditional information literacy and emerging AI literacy, preparing future teachers to navigate and critically engage with digital technologies in their professional practice. The case study approach, whilst limited in generalisability, provides rich, contextualised insights into how pedagogical design can mediate the integration of emerging technologies in teacher education.

7. Recommendations

The findings of this case study suggest several important recommendations for teacher education, policy, and future research. Within teacher education programmes, scaffolded, inquiry-based approaches such as WebQuests should be more widely adopted to provide pre-service teachers with practical opportunities to engage with generative AI. Such approaches can help future educators move beyond functional use, fostering critical evaluation of AI outputs and enabling them to integrate these tools into pedagogically meaningful classroom practices. At an institutional and policy level, there is a pressing need to develop clear guidelines that promote the responsible, transparent, and pedagogically sound use of GenAI in educational contexts. These guidelines should be supported by sustained professional development opportunities to ensure educators remain confident and informed as AI technologies continue to evolve. Finally, future research should build on this work by conducting larger-scale, longitudinal studies to explore how GenAI tools influence teacher learning and practice over time. Further investigations might also examine how these approaches can be adapted across different subjects and educational levels, and explore their impact on student learning outcomes in school settings.

8. Limitations

Whilst this case study offers valuable insights, several limitations must be acknowledged. First, the research was conducted with a relatively small sample (n = 24) of pre-service language teachers at a single institution. The case study design prioritises depth over breadth, providing rich contextualised insights into a bounded phenomenon rather than generalisable findings applicable across all contexts. As such, the findings should be understood as illustrative of what is possible within particular conditions rather than prescriptive for all teacher education settings. Second, the study relied on self-reported perceptions through interviews, which, whilst rich, may be influenced by social desirability bias or participants’ limited prior experience with GenAI. Third, the intervention took place over a short period (two weeks), meaning that longer-term impacts on digital and AI literacy could not be assessed. Future longitudinal research could examine whether the critical evaluation skills and strategic source-use approaches demonstrated in this study persist and transfer to other contexts. Finally, the dual role of the researchers as both lecturers and investigators, whilst carefully mitigated through ethical procedures, may nonetheless have influenced participants’ responses. Future studies should seek to address these limitations by engaging larger, more diverse cohorts, adopting mixed-methods approaches, and examining the longitudinal effects of integrating GenAI into teacher education curricula.

Author Contributions

Conceptualization, P.T., E.D., M.H. and J.L.; methodology, P.T., E.D., M.H. and J.L.; validation, P.T. and E.D.; formal analysis, P.T. and E.D.; investigation, P.T., E.D., M.H. and J.L.; data curation, P.T., E.D., M.H. and J.L.; writing—original draft preparation, P.T., E.D., M.H. and J.L.; writing—review and editing, P.T., E.D., M.H. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Dublin City University (DCUREC/2025/114, 20 June 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdelwahab, H. R., Rauf, A., & Chen, D. (2023). Business students’ perceptions of Dutch higher educational institutions in preparing them for artificial intelligence work environments. Industry and Higher Education, 37(1), 22–34. [Google Scholar] [CrossRef]
  2. Abu-Elwan, R. (2007). The use of Webquest to enhance the mathematical problem-posing skills of pre-service teachers. International Journal for Technology in Mathematics Education, 14, 31. [Google Scholar]
  3. Abu-Tineh, A., Murphy, C., Calder, N., & Mansour, N. (2019). The use of Webquests in developing inquiry based learning: Views of teachers and students in Qatar. International Journal of Educational and Pedagogical Sciences, 13(10), 1334–1337. [Google Scholar]
  4. Al-Abdullatif, A. M., & Alsubaie, M. A. (2024). ChatGPT in learning: Assessing students’ use intentions through the Lens of perceived value and the influence of AI literacy. Behavioral Sciences, 14(9), 845. [Google Scholar] [CrossRef]
  5. Alghazo, R., Fatima, G., Malik, M., Abdelhamid, S. E., Jahanzaib, M., Nayab, D. E., & Raza, A. (2025). Exploring ChatGPT’s role in higher education: Perspectives from Pakistani University students on academic integrity and ethical challenges. Education Sciences, 15(2), 158. [Google Scholar] [CrossRef]
  6. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed on 21 November 2025).
  7. Auditor, E., & Roleda, L. S. (2013). The WebQuest: Its impact on students’ critical thinking, performance, & perceptions in physics. International Journal of Research Studies in Educational Technology, 3(1), 3–21. [Google Scholar] [CrossRef]
  8. Aydin, Ö., & Karaarslan, E. (2023). Is ChatGPT leading generative AI? What is beyond expectations? Academic Platform Journal of Engineering and Smart Systems, 11(3), 118–134. [Google Scholar] [CrossRef]
  9. Baer, A. (2025). Unpacking predominant narratives about generative AI and education: A starting point for teaching critical AI literacy and imagining better futures. Library Trends, 73(3), 141–159. [Google Scholar] [CrossRef]
  10. Baidoo-anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62. [Google Scholar] [CrossRef]
  11. Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299. [Google Scholar] [CrossRef]
  12. Bali, M. (2024). Different critiques of AI in education. Educational Technology, 7, 1077–1084. Available online: https://blog.mahabali.me/educational-technology-2/different-critiques-of-ai-in-education/ (accessed on 20 November 2025).
  13. Bayram, D., Kurt, G., & Atay, D. (2019). The implementation of WebQuest-supported critical thinking instruction in pre-service English teacher education: The Turkish context. Participatory Educational Research, 6(2), 144–157. [Google Scholar] [CrossRef]
  14. Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86(2), 369–385. [Google Scholar] [CrossRef]
  15. Belland, B. R., Kim, C., & Hannafin, M. J. (2013). A framework for designing scaffolds that improve motivation and cognition. Educational Psychologist, 48(4), 243–270. [Google Scholar] [CrossRef]
  16. Borisov, B., & Stoyanova, T. (2024). Artificial intelligence in higher education: Pros and cons. SCIENCE International Journal, 3(2), 1–7. [Google Scholar] [CrossRef]
  17. Boulhrir, T., & Hamash, M. (2025). Unpacking artificial intelligence in elementary education: A comprehensive thematic analysis systematic review. Computers and Education: Artificial Intelligence, 9, 100442. [Google Scholar] [CrossRef]
  18. Boyle, C. (2025). ChatGPT, Gemini, & Copilot: Using generative AI as a tool for information literacy instruction. The Reference Librarian, 66(1–2), 13–29. [Google Scholar] [CrossRef]
  19. Božić, V., & Poola, I. (2023). Chat GPT and education. ResearchGate. Available online: https://www.researchgate.net/publication/369926506_Chat_GPT_and_education (accessed on 17 November 2025).
  20. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [Google Scholar] [CrossRef]
  21. Brenner, K. C. (2019). Examining student perspectives on information literacy [Ph.D. thesis, University of New England]. Available online: https://dune.une.edu/theses/271/?utm_source=dune.une.edu%2Ftheses%2F271&utm_medium=PDF&utm_campaign=PDFCoverPages (accessed on 16 October 2025).
  22. Bui, L. D., Kim, Y. G., Ho, W., Ho, H. T. T., & Pham, N. K. (2018). Developing WebQuest 2.0 model for promoting computational thinking skill. International Journal of Engineering and Technology, 7(2), 140–144. [Google Scholar] [CrossRef]
  23. Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., & Sun, L. (2023). A comprehensive survey of AI-generated content (AIGC): A history of generative AI from GAN to ChatGPT. arXiv, arXiv:2303.04226. [Google Scholar] [CrossRef]
  24. Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro, S. (2023). AI literacy in K-12: A systematic literature review. International Journal of STEM Education, 10(1), 29. [Google Scholar] [CrossRef]
  25. Cave, S., Coughlan, K., & Dihal, K. (2019, January 27–28). ‘Scary robots’: Examining public responses to AI. 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 331–337), Honolulu, HI, USA. [Google Scholar] [CrossRef]
  26. Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66(4), 616–630. [Google Scholar] [CrossRef]
  27. Cervera, M., & Caena, F. (2022). Teachers’ digital competence for global teacher education. European Journal of Teacher Education, 45(4), 451–455. [Google Scholar] [CrossRef]
  28. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  29. Chen, Y., Jensen, S., Albert, L. J., Gupta, S., & Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Information Systems Frontiers, 25(1), 161–182. [Google Scholar] [CrossRef]
  30. Churchill, N. (2020). Development of students’ digital literacy skills through digital storytelling with mobile devices. Educational Media International, 57(3), 271–284. [Google Scholar] [CrossRef]
  31. Clarke, S. (2025). Exploring the landscape of GenAI and education literature: A taxonomy of themes and sub-themes. British Educational Research Journal, 51(5), 2573–2604. [Google Scholar] [CrossRef]
  32. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
  33. Dadkhahnikoo, N. (2020). Incident 113: Facebook’s AI put ‘primates’ label on video featuring black men. Available online: https://incidentdatabase.ai/cite/113 (accessed on 13 October 2025).
  34. Dai, W., Lin, J., Jin, H., Li, T., Tsai, Y.-S., Gašević, D., & Chen, G. (2023, July 10–13). Can large language models provide feedback to students? A case study on ChatGPT. 2023 IEEE International Conference on Advanced Learning Technologies (ICALT) (pp. 323–325), Orem, UT, USA. [Google Scholar] [CrossRef]
  35. Dodge, B. (1995). Some thoughts about WebQuests. Available online: https://edweb.sdsu.edu/courses/edtec596/about_webquests.html (accessed on 16 October 2025).
  36. Dolan, R. (2021). Assistive technologies for dyslexia: AI’s role in supporting learning and accessibility. Educational Technology Research & Development, 69(2), 401–415. [Google Scholar]
  37. Donlon, E., & Tiernan, P. (2023). Chatbots and citations: An experiment in academic writing with generative AI. Irish Journal of Technology Enhanced Learning, 7(2), 75–87. [Google Scholar] [CrossRef]
  38. Druga, S., Williams, R., Breazeal, C., & Resnick, M. (2017, June 27–30). ‘Hey google is it ok if I eat you?’: Initial explorations in child-agent interaction. Proceedings of the 2017 Conference on Interaction Design and Children (pp. 595–600), Stanford, CA, USA. [Google Scholar] [CrossRef]
  39. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., … Wright, R. (2023). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. [Google Scholar] [CrossRef]
  40. Edgar, A. (2023). Sport and AI. Sport, Ethics and Philosophy, 17(3), 275–277. [Google Scholar] [CrossRef]
  41. Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060. [Google Scholar] [CrossRef]
  42. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv, arXiv:2303.10130. [Google Scholar] [CrossRef]
  43. Ernst, E. (2022). Artificial intelligence: Productivity growth and the transformation of capitalism. In A. Bounfour (Ed.), Platforms and artificial intelligence: The next generation of competences (pp. 149–181). Springer International Publishing. [Google Scholar] [CrossRef]
  44. Esiyok, E., Gokcearslan, S., & Kucukergin, K. G. (2024). Acceptance of educational use of AI chatbots in the context of self-directed learning with technology and ICT self-efficacy of undergraduate students. International Journal of Human–Computer Interaction, 41(1), 641–650. [Google Scholar] [CrossRef]
  45. Essel, H. B., Vlachopoulos, D., Essuman, A. B., & Amankwa, J. O. (2024). ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Computers and Education: Artificial Intelligence, 6, 100198. [Google Scholar] [CrossRef]
  46. Essien, A., Bukoye, O. T., O’Dea, X., & Kremantzis, M. (2024). The influence of AI text generators on critical thinking skills in UK business schools. Studies in Higher Education, 49(5), 865–882. [Google Scholar] [CrossRef]
  47. Farhi, F., Jeljeli, R., Aburezeq, I., Dweikat, F. F., Al-shami, S. A., & Slamene, R. (2023). Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Computers and Education: Artificial Intelligence, 5, 100180. [Google Scholar] [CrossRef]
  48. Ghosh, D., Ghosh, R., Roy Chowdhury, S., & Ganguly, B. (2025). AI-exposure and labour market: A systematic literature review on estimations, validations, and perceptions. Management Review Quarterly, 75(1), 677–704. [Google Scholar] [CrossRef]
  49. Gleason, N. (2022). ChatGPT and AI text generators: How HE can respond|THE campus learn, Share, connect. Available online: https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond (accessed on 3 September 2025).
  50. Grasse, O., Mohr, A., Lange, A.-K., & Jahn, C. (2023, August 29–30). AI approaches in education based on individual learner characteristics: A review. 2023 IEEE 12th International Conference on Engineering Education (ICEED) (pp. 50–55), Shah Alam, Malaysia. [Google Scholar] [CrossRef]
  51. Gratiot, C. (2023). AI and the erosion of trust in higher Ed (No. 13; The Bravery Media). Available online: https://bravery.co/podcast/ai-and-erosion-of-trust-in-higher-ed/ (accessed on 15 October 2025).
  52. Guimarães, L., & Mazeda Gil, P. (2022). Looking ahead at the effects of automation in an economy with matching frictions. Journal of Economic Dynamics and Control, 144, 104538. [Google Scholar] [CrossRef]
  53. Gupta, A., Agrawal, N., & Agrawal, H. (2025). Generative AI in education: A review report on current and future trends. International Journal of Engineering Applied Sciences and Technology, 9, 96–99. [Google Scholar] [CrossRef]
  54. Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. Available online: https://eric.ed.gov/?id=EJ1385551 (accessed on 16 September 2025). [CrossRef]
  55. Hamash, M., & Mohamed, H. (2021). BASAER team: The first Arabic robot team for building the capacities of visually impaired students to build and program robots. International Journal of Emerging Technologies in Learning (iJET), 16(24), 91–107. [Google Scholar] [CrossRef]
  56. Hamash, M. A., Mohamed, H., & Tiernan, P. (2024). Developing a new model for achieving flow state in STEAM education: A mixed-method Investigation. Sains Humanika, 16(3), 101–111. [Google Scholar] [CrossRef]
  57. Hemminki-Reijonen, U., Hassan, N. M. A. M., Huotilainen, M., Koivisto, J.-M., & Cowley, B. U. (2025). Design of generative AI-powered pedagogy for virtual reality environments in higher education. Npj Science of Learning, 10(1), 31. [Google Scholar] [CrossRef]
  58. Hicks, A. E. (2018). Making the case for a sociocultural perspective on information literacy. Library Juice Press. [Google Scholar]
  59. Hooda, M., Rana, C., Dahiya, O., Rizwan, A., & Hossain, M. S. (2022). Artificial intelligence for assessment and feedback to enhance student success in higher education. Mathematical Problems in Engineering, 2022(1), 5215722. [Google Scholar] [CrossRef]
  60. Hu, L. (2022). Generative AI and future. Medium. Available online: https://pub.towardsai.net/generative-ai-and-future-c3b1695876f2 (accessed on 8 September 2025).
  61. Hutson, J. (2024). The rise of AI in academic inquiry. IGI Global. [Google Scholar]
  62. Hwang, G.-J., & Chang, C.-Y. (2023). A review of opportunities and challenges of chatbots in education. Interactive Learning Environments, 31(7), 4099–4112. [Google Scholar] [CrossRef]
  63. Ilieva, G., Yankova, T., Klisarova-Belcheva, S., Dimitrov, A., Bratkov, M., & Angelov, D. (2023). Effects of generative chatbots in higher education. Information, 14(9), 492. [Google Scholar] [CrossRef]
  64. Jia, X.-H., & Tu, J.-C. (2024). Towards a new conceptual model of AI-enhanced learning for college students: The roles of artificial intelligence capabilities, general self-efficacy, learning motivation, and critical thinking awareness. Systems, 12(3), 74. [Google Scholar] [CrossRef]
  65. Jiang, Y., & Nakatani, K. (2025). Exploring implementations of GenAI in teaching IS subjects and student perceptions. Journal of Information Systems Education, 35(2), 180–194. [Google Scholar] [CrossRef]
  66. JISC. (2015). Building digital capabilities: The six elements defined. In Building capability for new digital leadership, pedagogy and efficiency (pp. 1–3). JISC. [Google Scholar]
  67. Jovanović, M., & Campbell, M. (2022). Generative artificial intelligence: Trends and prospects. IEEE Computer, 55, 107–112. [Google Scholar] [CrossRef]
  68. Kaur, P., & Gill, D. R. K. (2024). The dual impact of AI in education: Unraveling the opportunities and challenges of ChatGPT. Indexed and Peer Reviewed Journal, 11, 86–91. [Google Scholar]
  69. Khalil, G. I., Mohammad Sajjad, H., Sohail, M., & Ishfaq, Z. (2023, February 16). Role of AI in the education sector in the kingdom of Bahrain. 2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE) (pp. 992–997), Jakarta, Indonesia. [Google Scholar] [CrossRef]
  70. Kim, K. T. (2019). The structural relationship among digital literacy, learning strategies, and core competencies among South Korean college students. Educational Sciences: Theory and Practice, 19(2), 3–21. [Google Scholar]
  71. Kong, S.-C., Cheung, W. M.-Y., & Zhang, G. (2023). Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness. Educational Technology & Society, 26(1), 16–30. [Google Scholar]
  72. Kong, S.-C., Man-Yin Cheung, W., & Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Computers and Education: Artificial Intelligence, 2, 100026. [Google Scholar] [CrossRef]
  73. Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28(2), 16–46. [Google Scholar] [CrossRef]
  74. Kutty, S., Chugh, R., Perera, P., Neupane, A., Jha, M., Li, L., Gunathilake, W., & Perera, N. C. (2024). Generative AI in higher education: Perspectives of students, educators and administrators. Journal of Applied Learning and Teaching, 7(2), 47–60. [Google Scholar] [CrossRef]
  75. Laak, K.-J., Abdelghani, R., & Aru, J. (2024). Personalisation is not guaranteed: The challenges of using generative AI for personalised learning. In Y.-P. Cheng, M. Pedaste, E. Bardone, & Y.-M. Huang (Eds.), Innovative technologies and learning (pp. 40–49). Springer Nature Switzerland. [Google Scholar] [CrossRef]
  76. Laird, E., & Dwyer, M. (2023). Report—Off task: EdTech threats to student privacy and equity in the age of AI. Center for Democracy and Technology. Available online: https://cdt.org/insights/report-off-task-edtech-threats-to-student-privacy-and-equity-in-the-age-of-ai/ (accessed on 11 November 2025).
  77. Law, N., Woo, D., de la Torre, J., & Wong, G. (2018). A global framework of reference on digital literacy skills for indicator 4.4.2. UNESCO. Available online: https://uis.unesco.org/sites/default/files/documents/ip51-global-framework-reference-digital-literacy-skills-2018-en.pdf (accessed on 11 November 2025).
  78. Leinen, P., Esders, M., Schütt, K. T., Wagner, C., Müller, K.-R., & Tautz, F. S. (2020). Autonomous robotic nanofabrication with reinforcement learning. Science Advances, 6(36), eabb6987. [Google Scholar] [CrossRef]
  79. Liang, W., & Fung, D. (2020). Development and evaluation of a WebQuest-based teaching programme: Students’ use of exploratory talk to exercise critical thinking. International Journal of Educational Research, 104, 101652. [Google Scholar] [CrossRef] [PubMed]
  80. Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. [Google Scholar] [CrossRef]
  81. Lin, X., Wang, X., Shao, B., & Taylor, J. (2024). How chatbots augment human intelligence in customer services: A mixed-methods study. Journal of Management Information Systems, 41(4), 1016–1041. [Google Scholar] [CrossRef]
  82. List, A., Brante, E. W., & Klee, H. L. (2020). A framework of pre-service teachers’ conceptions about digital literacy: Comparing the United States and Sweden. Computers & Education, 148, 103788. [Google Scholar] [CrossRef]
  83. Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. [Google Scholar] [CrossRef]
  84. Lodge, J. M., Thompson, K., & Corrin, L. (2023). Mapping out a research agenda for generative artificial intelligence in tertiary education. Australasian Journal of Educational Technology, 39(1), 1–8. [Google Scholar] [CrossRef]
  85. Long, D., & Magerko, B. (2020, April 25–30). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16), Honolulu, HI, USA. [Google Scholar] [CrossRef]
  86. Luckin, R., Holmes, W., & Forcier, L. B. (2018). An argument for AI in education. UCL Knowledge Lab. [Google Scholar]
  87. Luo, J. (2025). How does GenAI affect trust in teacher-student relationships? Insights from students’ assessment experiences. Teaching in Higher Education, 30(4), 991–1006. [Google Scholar] [CrossRef]
  88. Łodzikowski, K., Foltz, P. W., & Behrens, J. T. (2024). Generative AI and its educational implications. In D. Kourkoulou, A.-O. (Olnancy) Tzirides, B. Cope, & M. Kalantzis (Eds.), Trust and inclusion in AI-mediated education: Where human learning meets learning machines (pp. 35–57). Springer Nature. [Google Scholar] [CrossRef]
  89. Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., Darwis, A., & Marzuki. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5, 100296. [Google Scholar] [CrossRef]
  90. Martin, A. (2005). DigEuLit—A European framework for digital literacy: A progress report. Journal of eLiteracy, 2, 130–136. [Google Scholar]
  91. Matzinger, K. (2023). The rising trend of teens using AI for schoolwork. Junior Achievement. Available online: https://jausa.ja.org/news/blog/the-rising-trend-of-teens-using-ai-for-schoolwork (accessed on 28 November 2025).
  92. McAlister, A., Lee, D., Ehlert, K., Kajfez, R., Faber, C., & Kennedy, M. (2017). Qualitative coding: An approach to assess inter-rater reliability. In 2017 ASEE annual conference & exposition proceedings, Columbus, OH, USA, June 25–28 (p. 28777). American Society for Engineering Education. [Google Scholar] [CrossRef]
  93. McGehee, N. (2023). Balancing the risks and rewards of AI integration for Michigan teachers. Michigan Virtual. Available online: https://michiganvirtual.org/research/publications/balancing-the-risks-and-rewards-of-ai-integration-for-michigan-teachers/ (accessed on 4 November 2025).
  94. Memarian, B., & Doleck, T. (2023). ChatGPT in education: Methods, potentials, and limitations. Computers in Human Behavior: Artificial Humans, 1(2), 100022. [Google Scholar] [CrossRef]
  95. Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235–251. [Google Scholar] [CrossRef]
  96. Mohammadi, A., Modarres, M., Khakbazan, Z., Hoseini, A. S. S., Asghari-Jafarabadi, M., & Geranmayeh, M. (2023). Effect of WebQuest-based education on critical thinking and academic self-efficacy of midwifery students: Study protocol of a randomized, controlled crossover trial. Journal of Education and Health Promotion, 12(1), 395. [Google Scholar] [CrossRef]
  97. Montenegro-Rueda, M., Fernández-Cerero, J., Fernández-Batanero, J. M., & López-Meneses, E. (2023). Impact of the implementation of ChatGPT in education: A systematic review. Computers, 12(8), 153. [Google Scholar] [CrossRef]
  98. Nah, F. F.-H., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. [Google Scholar] [CrossRef]
  99. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. [Google Scholar] [CrossRef]
  100. Ng, W. (2012). Can we teach digital natives digital literacy? Computers & Education, 59(3), 1065–1078. [Google Scholar] [CrossRef]
  101. Noble, S. U. (2020). Algorithms of oppression: How search engines reinforce racism. New York University Press. [Google Scholar] [CrossRef]
  102. O’Dea, X. (2024). Generative AI: Is it a paradigm shift for higher education? Studies in Higher Education, 49(5), 811–816. [Google Scholar] [CrossRef]
  103. O’Dea, X., & O’Dea, M. (2023). Can AI support academic research? Wonkhe. Available online: https://wonkhe.com/blogs/can-ai-support-academic-research/ (accessed on 28 August 2025).
  104. Ogurlu, U., & Mossholder, J. (2023). The perception of ChatGPT among educators: Preliminary findings. Research in Social Sciences and Technology, 8(4), 196–215. [Google Scholar] [CrossRef]
  105. Oliver, R., Towers, S., & Oliver, H. (2000). Information and communications technology literacy—Getting serious about IT (pp. 862–867). Association for the Advancement of Computing in Education (AACE). Available online: https://www.learntechlib.org/primary/p/16174/ (accessed on 19 August 2025).
  106. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. [Google Scholar]
  107. Panjwani-Charania, S., & Zhai, X. (2024). AI for students with learning disabilities: A systematic review. In X. Zhai, & J. Krajcik (Eds.), Uses of artificial intelligence in STEM education. University Press. [Google Scholar]
  108. Paul, R., & Elder, L. (2001). Critical thinking: Tools for taking charge of your learning and your life. Prentice Hall. [Google Scholar]
  109. Perkins, K. (2020). Incident 106: Korean chatbot luda made offensive remarks towards minority groups. Available online: https://incidentdatabase.ai/cite/106 (accessed on 14 December 2025).
  110. Pérez-Escoda, A., García-Ruiz, R., & Aguaded, I. (2019). Dimensions of digital literacy based on five models of development. Cultura y Educacion, 31(2), 232–266. [Google Scholar] [CrossRef]
  111. Piro, J. M., & Marksbury, N. (2012). Technologizing teaching: Using the WebQuest to enhance pre-service education. In R. Ronau, C. Rakes, & M. Niess (Eds.), Educational technology (pp. 228–250). IGI Global Scientific Publishing. [Google Scholar]
  112. Plé, L. (2023). Should we trust students in the age of generative AI? Times Higher Education (THE). Available online: https://www.timeshighereducation.com/campus/should-we-trust-students-age-generative-ai (accessed on 28 August 2025).
  113. Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1), 22. [Google Scholar] [CrossRef]
  114. Porayska-Pomsta, K., Holmes, W., & Nemorin, S. (2023). The ethics of AI in education. In B. du Boulay, A. Mitrovic, & K. Yacef (Eds.), Handbook of artificial intelligence in education (pp. 571–604). Edward Elgar Publishing. [Google Scholar]
  115. Qadir, J. (2023, May 1–4). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. 2023 IEEE Global Engineering Education Conference (EDUCON) (pp. 1–9), Kuwait, Kuwait. [Google Scholar] [CrossRef]
  116. Rachha, A., & Seyam, M. (2023, April 13–16). Explainable AI in education: Current trends, challenges, and opportunities. SoutheastCon 2023 (pp. 232–239), Orlando, FL, USA. [Google Scholar] [CrossRef]
  117. Radesky, J. (2024). AI, parenting, and child development. Journal of Developmental & Behavioral Pediatrics, 45(1), e2. [Google Scholar] [CrossRef]
  118. Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Applied Sciences, 13(9), 5783. [Google Scholar] [CrossRef]
  119. Richards, K. A. R., & Hemphill, M. A. (2018). A practical guide to collaborative qualitative data analysis. Journal of Teaching in Physical Education, 37(2), 225–231. [Google Scholar] [CrossRef]
  120. Richter, A., Gacic, T., Koelmel, B., Waidelich, L., & Glaser, P. (2019). A review of fundamentals and influential factors of artificial intelligence. International Journal of Computer and Information Technology, 8(4), 142–156. [Google Scholar]
  121. Rogers, M. P., Hillberg, H. M., & Groves, C. L. (2024, March 20–23). Attitudes towards the use (and misuse) of ChatGPT: A preliminary study. 55th ACM Technical Symposium on Computer Science Education (Vol. 1, pp. 1147–1153), Portland, OR, USA. [Google Scholar] [CrossRef]
  122. Rohatgi, A., Scherer, R., & Hatlevik, O. E. (2016). The role of ICT self-efficacy for students’ ICT use and their achievement in a computer and information literacy test. Computers & Education, 102, 103–116. [Google Scholar] [CrossRef]
  123. Russell Group. (2023). Principles on the use of generative AI tools in education. Russell Group. Available online: https://www.russellgroup.ac.uk/policy/policy-briefings/principles-use-generative-ai-tools-education (accessed on 12 September 2025).
  124. Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2023, August 8–10). Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 723–741), Montréal, QC, Canada. [Google Scholar] [CrossRef]
  125. Sidoti, O., & Gottfried, J. (2025). About a quarter of U.S. teens have used ChatGPT for schoolwork—Double the share in 2023. Pew Research Center. Available online: https://www.pewresearch.org/short-reads/2025/01/15/about-a-quarter-of-us-teens-have-used-chatgpt-for-schoolwork-double-the-share-in-2023/ (accessed on 15 September 2025).
  126. Smith, L. K., Draper, R. J., & Sabey, B. L. (2005). The promise of technology to confront dilemmas in teacher education: The use of WebQuests in problem-based methods courses. Journal of Computing in Teacher Education, 21(4), 99–108. [Google Scholar]
  127. Sok, S., & Heng, K. (2023). ChatGPT for education and research: A review of benefits and risks. Cambodian Journal of Educational Research, 3, 110–121. [Google Scholar] [CrossRef]
  128. Sousa, A., & Cardoso, P. (2025). Use of generative AI by higher education students. Electronics, 14(7), 1258. [Google Scholar] [CrossRef]
  129. Stake, R. E. (1995). The art of case study research. SAGE Publications. [Google Scholar]
  130. Strengers, Y. (2022). AI at home: An urgent urban policy and research Agenda. Urban Policy and Research, 40(3), 250–258. [Google Scholar] [CrossRef]
  131. Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), 31–40. [Google Scholar] [CrossRef]
  132. Tan, X., Cheng, G., & Ling, M. H. (2025). Artificial intelligence in teaching and teacher professional development: A systematic review. Computers and Education: Artificial Intelligence, 8, 100355. [Google Scholar] [CrossRef]
  133. Taranikanti, V., & Davidson, C. J. (2023). Metacognition through an iterative anatomy AI chatbot: An innovative playing field for educating the future generation of medical students. Anatomia, 2(3), 271–281. [Google Scholar] [CrossRef]
  134. Tiernan, P., Costello, E., Donlon, E., Parysz, M., & Scriney, M. (2023). Information and media literacy in the age of AI: Options for the future. Education Sciences, 13(9), 906. [Google Scholar] [CrossRef]
  135. Tiernan, P., & Donlon, E. (2024). The perception of teachers about the future impact of AI and on teachers’ awareness of student perceptions of AI (Preparing Teachers for the AI Development in Education as an Innovative Asset). Franchetti Centro Studi Villa Montesca. Available online: https://www.academia.edu/129729750/The_perception_of_teachers_about_the_future_impact_of_AI_and_on_teachers_awareness_of_student_perceptions_of_AI (accessed on 21 August 2025).
  136. Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for K-12: What should every child know about AI? Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 9795–9799. [Google Scholar] [CrossRef]
  137. Tsao, J., & Nogues, C. (2024). Beyond the author: Artificial intelligence, creative writing and intellectual emancipation. Poetics, 102, 101865. [Google Scholar] [CrossRef]
  138. Vidoni, K. L., & Maddux, C. D. (2002). WebQuests: Can they be used to improve critical thinking skills in students? Computers in the Schools, 19(1–2), 101–117. [Google Scholar] [CrossRef]
  139. Vuorikari, R., Punie, Y., Carretero, S., & Van Den Brande, L. (2016). DigComp 2.0: The digital competence framework for citizens. Joint Research Centre. [Google Scholar] [CrossRef]
  140. Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of widespread access to generative AI. Economics and Business Review, 9(2), 71–100. [Google Scholar] [CrossRef]
  141. Walton Family Foundation. (2023). ChatGPT used by teachers more than students. Walton Family Foundation. Available online: https://www.waltonfamilyfoundation.org/chatgpt-used-by-teachers-more-than-students-new-survey-from-walton-family-foundation-finds (accessed on 15 August 2025).
  142. Wang, F., & Hannafin, M. J. (2008). Integrating WebQuests in preservice teacher education. Educational Media International, 45(1), 59–73. [Google Scholar] [CrossRef]
  143. Wang, K. D., Wu, Z., Tufts, L., Wieman, C., Salehi, S., & Haber, N. (2025, April 22–25). Scaffold or crutch? Examining college students’ use and views of generative AI tools for STEM education. 2025 IEEE Global Engineering Education Conference (EDUCON) (pp. 1–10), London, UK. [Google Scholar] [CrossRef]
  144. Wang, S. (2020). Calculating dating goals: Data gaming and algorithmic sociality on Blued, a Chinese gay dating app. Information, Communication & Society, 23(2), 181–197. [Google Scholar] [CrossRef]
  145. Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, C., Birhane, A., Haas, J., Rimell, L., Hendricks, L. A., … Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv, arXiv:2112.04359. [Google Scholar] [CrossRef]
  146. Whalen, J., Grube, W., Xu, C., & Trust, T. (2025). K-12 educators’ reactions and responses to ChatGPT and GenAI during the 2022–2023 school year. TechTrends, 69(1), 125–137. [Google Scholar] [CrossRef]
  147. Wild, B. (2023). ChatGPT: Cardiff students admit using AI on essays. Available online: https://www.bbc.com/news/uk-wales-65167321 (accessed on 19 November 2025).
  148. Williams, A. (2024). Comparison of generative AI performance on undergraduate and postgraduate written assessments in the biomedical sciences. International Journal of Educational Technology in Higher Education, 21(1), 52. [Google Scholar] [CrossRef]
  149. Woodruff, K., Hutson, J., & Arnone, K. (2023). Perceptions and barriers to adopting artificial intelligence in K-12 education: A survey of educators in fifty states. In Reimagining education—The role of e-learning, creativity, and technology in the post-pandemic era. IntechOpen. [Google Scholar] [CrossRef]
  150. Wu, D., Zhang, S., Ma, Z., Yue, X.-G., & Dong, R. K. (2024). Unlocking potential: Key factors shaping undergraduate self-directed learning in AI-enhanced educational environments. Systems, 12(9), 332. [Google Scholar] [CrossRef]
  151. Xiao, P., Chen, Y., & Bao, W. (2023). Waiting, banning, and embracing: An empirical analysis of adapting policies for generative AI in higher education (SSRN Scholarly Paper No. 4458269). Social Science Research Network. [Google Scholar] [CrossRef]
  152. Yang, H., & Capan, S. (2025). Promoting AI literacy in K-12: Components, challenges, and opportunities. SRI International. [Google Scholar]
  153. Yin, R. K. (2018). Case study research and applications: Design and methods. Sage Publication, Inc. [Google Scholar]
  154. Zapata-Rivera, J.-D., Torre, I., Lee, C.-S., Cabezuelo, A. S., Ghergulescu, I., & Libbrecht, P. (2024). Editorial: Generative AI in education. Frontiers in Artificial Intelligence, 7, 1532896. [Google Scholar] [CrossRef] [PubMed]
  155. Zhai, X. (2022). ChatGPT user experience: Implications for education (SSRN Scholarly Paper No. 4312418). Social Science Research Network. [Google Scholar] [CrossRef]
  156. Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends (SSRN Scholarly Paper No. 3312874). Social Science Research Network. [Google Scholar] [CrossRef]
  157. Zhou, Q., Ma, L., Huang, N., Liang, Q., Yue, H., & Peng, T. (2012). Integrating WebQuest into chemistry classroom teaching to promote students’ critical thinking. Creative Education, 3(3), 369–374. [Google Scholar] [CrossRef]
  158. Zhu, C., Sun, M., Luo, J., Li, T., & Wang, M. (2023). How to harness the potential of ChatGPT in education? Knowledge Management & E-Learning, 15(2), 133–152. [Google Scholar]
  159. Zhukova, O., Nalyvaiko, O., Shvedova, Y., & Nalyvaiko, N. (2021). Creation of WebQuest as a form of development of students’ digital competence. Professional Education: Methodology, Theory and Technologies, 14, 172–195. [Google Scholar] [CrossRef]
Figure 1. Drawing on multiple sources.
Figure 2. Using different sources for different purposes.
Table 1. WebQuest structure.
Page | Description
Introduction | The introduction sets the stage for the activity, presenting the topic and providing background information that prepares the students for the task. It also aims to capture their interest and motivate them to engage with the activity.
Task | The task outlines what the students are expected to accomplish. It is usually a creative and challenging task that requires higher-order thinking. Tasks often involve real-world scenarios that necessitate problem-solving, decision-making, or creative output.
Process | This section provides step-by-step instructions for how students should proceed in completing the task. It often includes directions for how to find and use the resources provided.
Resources | A key feature of WebQuests is that they provide curated lists of resources, typically in the form of hyperlinks to relevant web pages. This ensures that students access credible and relevant information while sparing them the time-consuming task of searching for materials independently.
Evaluation | The evaluation section usually includes a rubric that clearly defines how the students’ work will be assessed. This ensures that students understand the criteria for success and provides a transparent method of assessment.
Conclusion | The conclusion brings closure to the activity, often encouraging students to reflect on what they have learned and how they can apply this knowledge in the future.
Table 2. Sample prompts.
Category | Sample Prompts
Understanding digital literacy
  • What is digital literacy, and how is it defined in education contexts?
  • Explain the key components of digital literacy and why they are important
  • How does digital literacy differ from information literacy and media literacy?
Exploring frameworks
  • What is the DigComp 2.2 framework, and how does it support digital literacy development?
  • What are some globally recognized frameworks for teaching digital literacy in schools?
Current trends and research
  • What are the latest trends or challenges in digital literacy education for secondary schools?
  • How do digital literacy skills affect students’ ability to evaluate online information?
  • Provide examples of how digital literacy is being taught in classrooms around the world
The role of digital literacy in education
  • Why is digital literacy considered a critical skill for secondary school students?
  • How does digital literacy impact students’ readiness for higher education and the workforce?
  • What role do teachers play in fostering digital literacy skills in their classrooms?
Real-world applications
  • Provide examples of how digital literacy can be applied to solve real-world problems
  • How does digital literacy help students critically evaluate fake news or misinformation?
  • What are the connections between digital literacy and responsible social media use?
Case studies and best practices
  • What are some case studies that show effective integration of digital literacy into school curricula?
  • How have schools or educators successfully promoted digital literacy skills among students?
  • What are best practices for incorporating digital literacy into secondary education?
Table 3. Themes identified through thematic analysis.
Research Question | Themes
Research Question 1: Pre-service teachers’ experiences using GenAI as part of a WebQuest
  • Valuable opportunity for practical engagement
  • Ease of use and accessibility
  • Value of structured prompts
  • Importance of clear questioning
  • Encouraged comparison and reflection
Research Question 2: Evaluating the accuracy, reliability, and validity of GenAI responses
  • Clarity and comprehensibility of AI outputs
  • Consistency and accuracy concerns
  • Length and relevance of responses
  • Platform-specific differences (ChatGPT vs. Copilot)
Research Question 3: Synthesising information from curated sources and GenAI outputs
  • Clear and structured approach of WebQuest
  • Time efficiency through curated resources
  • Engagement and novelty of approach
  • Support for collaborative and independent work
  • Relevance for future teaching practice
  • Complementary nature of multiple sources
  • Differences in language and tone between source types
  • Greater trust in curated sources
  • Strategic use of AI for efficiency and structure
Table 4. Pre-service Teachers’ Experiences Using GenAI in the WebQuest.
Theme | No. of Participants | Representative Quotes | Key Principles
Valuable opportunity for practical engagement | n = 20 | “The majority of the time we do not get to use AI as part of our work” (P4); “Insightful” (P1); “A very beneficial part of the WebQuest” (P17) | Pre-service teachers valued hands-on experience with GenAI in an academic context; scaffolded engagement supported authentic learning
Ease of use and accessibility | n = 12 | “It was easy to use” (P1); “Simple to ask a question” (P5); “Very easy to use and came back with answers very quickly” (P6) | GenAI tools were perceived as user-friendly and responsive; low barrier to entry for engagement
Value of structured prompts | n = 8 | “The prompts allowed us to receive relevant and detailed answers” (P3); “The prompts were quite helpful as the AI gave us a range of discussion points” (P7) | Scaffolding through provided prompts supported effective interaction; reduced cognitive load of formulating queries
Importance of clear questioning | n = 5 | “Sometimes you have to be careful how you word a sentence” (P8); “It doesn’t understand minor dialects of English too well” (P8) | Recognition that prompt quality affects output quality; awareness of limitations in natural language understanding
Encouraged comparison and reflection | n = 8 | “Compare information that AI contributed and information that I had researched myself” (P10); “The information we gather from our own research can differ from what AI tells us” (P20) | WebQuest structure prompted critical comparison between AI and curated sources; developed evaluative stance
Table 5. Pre-service teachers’ evaluation of AI-generated information.
Theme | No. of Participants | Representative Quotes | Key Principles
Clarity and comprehensibility (positive: simplification) | n = 4 | “AI will almost ‘dumb’ things down slightly and just overall make the information easier to understand” (P9); “I like how AI breaks down the question you ask it” (P18); “Using AI allowed me to gain an easier understanding” (P22) | AI ability to simplify complex concepts valued; supported accessibility and understanding
Clarity and comprehensibility (negative: confusing presentation) | n = 2 | “The language and layout of the responses weren’t as well laid out so I got more confused and found the information from the links easier to understand” (P2) | Not all found AI clearer; some preferred traditional academic sources
Consistency and accuracy | n = 5 | “You have to be careful with the information it gives you as it can be wrong” (P4); “Its sources can often be flawed, and to not trust everything that it comes out with” (P8) | Awareness of need to verify AI outputs; critical stance toward AI-generated information
Length and relevance (positive: comprehensive detail) | n = 3 | “The AI responses were lengthy and in-depth, from which you could pick and choose the information that was relevant” (P5) | Detailed responses provided material for selection and synthesis
Length and relevance (negative: excessive verbosity) | n = 3 | “At times, AI answers were either too text heavy or didn’t give the information we were looking for so we had to ask it to simplify/shorten the response” (P20) | Over-elaboration sometimes hindered rather than helped; required additional prompting
Table 6. Comparison of ChatGPT and Copilot.
Dimension | Theme | No. of Participants | Representative Quotes
Overall similarity | Minimal difference | n = 6 | “I found the responses to be quite similar” (P2); “They both provide more or less the same service” (P8)
Detail and length | ChatGPT more verbose | n = 4 | “ChatGPT gave more information as it had a longer response” (P17); “ChatGPT often gave a bit more content but occasionally waffled on a bit whereas Copilot was quicker and more concise” (P9)
Structure and formatting | ChatGPT more segmented | n = 3 | “ChatGPT broke the questions down… Copilot generalised the information” (P16); “ChatGPT provided a well-structured prompt… Copilot was slightly more direct” (P18)
Style and tone | Different presentation | n = 2 | “The way the AI tools phrase the information is very different” (P22); “Copilot to be slightly more direct” (P6)
Table 7. Pre-service Teachers’ Experiences with the WebQuest Structure.
Theme | No. of Participants | Representative Quotes | Key Principles
Clear and structured approach | n = 19 | “Broke down the project process… and it was easy to follow the steps” (P1); “Comprehensive explanation for each step” (P5); “Go back to another step if you needed to” (P4) | WebQuest structure provided clear scaffolding; supported navigation and flexibility
Time efficiency | n = 5 | “Having the links and resources readily available” (P5); did not need to spend time “cross checking resources” (P16); “Focus on producing their infographic to a high standard” (P18) | Curated resources reduced search time; allowed focus on synthesis and production
Engagement and novelty | n = 4 | “Very different from what I am used to” (P7); “More engaging than traditional approaches” (P1); information “from different angles and perspectives” (P12) | Novel approach increased engagement; multiple perspectives enriched learning
Support for collaborative and independent work | n = 5 | “Layout of the WebQuest and the instructions made it very easy for us to divide the work” (P2); “Able to do the work independently and understand it” (P6); “My partner and I put together a great infographic” (P7) | Structure supported both individual and collaborative approaches; flexible working
Relevance for future teaching | n = 4 | “I would definitely consider using this tool in my future classes” (P1); “These types of resources will be useful to us going forward” (P19) | Pre-service teachers recognised pedagogical value; considered applicability to their own teaching
Table 8. Synthesising Curated and AI-Generated Sources.
Theme | No. of Participants | Representative Quotes | Key Principles
Complementary sources | n = 7 | “Process was really good and helpful” (P11); “Take information from one and add it to the other to create more in-depth explanations” (P13); “Mixture of ChatGPT and the info from the links allowed me to gain access to a wide range of info very quickly” (P17) | Multiple source types provided comprehensive coverage; synthesis created richer understanding
Differences in language and tone | n = 3 | “There was a bit of a difference in the language used… it was obvious which resource was which” (P2); “Information from the links was a bit clearer” (P9) | Recognisable distinctions between source types; academic sources often perceived as clearer
Greater trust in curated sources | n = 2 | “I had more confidence that the information I was receiving was correct” (P4) when using curated resources; “I tried my best to stay away from AI… and focused more on finding information in the references provided” (P7) | Curated sources viewed as more reliable; preference for traditional academic materials when accuracy critical
AI for efficiency and structure | n = 3 | “The AI responses were used as titles to my research” (P18); “We decided to add it [AI] in as quotes to back up the information we had found through the [WebQuest] links” (P22); “Definitely speeds up the process” (P8) | Strategic use of AI for organisational purposes; efficiency gains in workflow
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Tiernan, P.; Donlon, E.; Hamash, M.; Lovatt, J. Something Old, Something New: WebQuests and GenAI in Teacher Education. AI Educ. 2026, 2, 7. https://doi.org/10.3390/aieduc2010007