Article

A Comparative Analysis of AI Use in Scientific Inquiry Learning Among Gifted and Non-Gifted Students

Department of Special Education, National Taiwan Normal University, Taipei 10610, Taiwan
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(12), 1611; https://doi.org/10.3390/educsci15121611
Submission received: 25 August 2025 / Revised: 17 November 2025 / Accepted: 25 November 2025 / Published: 29 November 2025
(This article belongs to the Special Issue Inquiry-Based Learning and Student Engagement)

Abstract

This study examined the utilization of artificial intelligence (AI) in inquiry-based science learning among gifted and non-gifted students. The participants included 484 students (197 gifted and 287 non-gifted; 226 males and 233 females) who completed three validated questionnaire instruments: the AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ), the AI-Assisted Science Learning Questionnaire (AASLQ), and the AI Literacy Questionnaire (AILQ). Factor analyses confirmed four latent constructs in the AASILQ, two in the AASLQ, and four in the AILQ, with all scales demonstrating strong internal consistency. Group comparisons were conducted according to educational placement and gender. The results indicated significant differences regarding educational placement: gifted students reported lower levels of AI-Assisted Scientific Inquiry Learning yet demonstrated higher AI literacy and greater confidence in the safe use of AI. Gender analyses revealed that female students expressed heightened concern regarding privacy issues. These findings extend the literature on AI integration in science education by highlighting nuanced differences in how gifted and non-gifted learners engage with AI, thereby offering implications for the design of equitable and responsive AI-supported learning environments.

1. Introduction

Artificial Intelligence (AI) has transformed scientific inquiry learning by creating adaptive, learner-centered environments (Kunnath & Botes, 2025). Through real-time feedback and interactive guidance, AI tools allow students to engage with scientific concepts at their own pace and according to individual needs (Nursurila, 2025). Adaptive learning systems and intelligent tutors further support acceleration and enrichment—core principles of gifted education—while promoting self-regulation and reducing teachers’ repetitive workload (Kim, 2023; Trpin, 2024). Across regions, studies demonstrate diverse approaches in AI-supported inquiry: European research emphasizes teacher–AI co-orchestration and formative feedback, whereas Asian studies highlight student autonomy and digital experimentation (Chang et al., 2023).
However, effectively integrating AI into inquiry-based learning (IBL) remains a significant challenge. Algorithmic bias and limited natural-language comprehension may hinder meaningful student–AI interaction (Chang et al., 2023), while over-reliance on AI or unequal access to technology may widen learning disparities. Moreover, AI-generated content often lacks contextual and cultural sensitivity, emphasizing the need for context-aware and pedagogically grounded AI design.
Despite growing attention to AI in education, few empirical studies have systematically compared gifted and non-gifted students in their use of AI for scientific inquiry learning. Prior research tends to emphasize cognitive outcomes or technical affordances rather than the behavioral and attitudinal aspects of AI-assisted inquiry across gender and ability groups. Additionally, while AI literacy studies address students’ abilities to interpret and apply AI-generated content, they rarely connect these skills with the inquiry cycle described by Pedaste et al. (2015).
Based on the integrated framework above, the present study investigates the following research questions:
  • What are the gender-based differences between gifted and non-gifted students in their use of AI for scientific inquiry learning (AASIL)?
  • What are the gender-based differences between gifted and non-gifted students in their use of AI for general science learning (AASL)?
  • How does AI literacy (AIL) differ between gifted and non-gifted students across gender?
  • Which AI tools are most commonly utilized by students in their daily learning practices?
  • What are the gender-based differences between gifted and non-gifted students in their levels of concern regarding the use of AI in science learning?
Collectively, these questions aim to clarify the interaction between AI literacy and inquiry-based learning within the contexts of gifted and general education. The findings will contribute to the development of more equitable and pedagogically sound models of AI integration in science education, providing educational recommendations for gifted education.

1.1. The Application of AI in Science Inquiry

AI-supported inquiry-based learning has grown robustly with regional distinctions (Akhmadieva et al., 2023). For instance, Asian studies often prioritize student autonomy through conversational agents like Inquirybot (Chang et al., 2023), while North American research explores adaptive personalization in gifted education (Siegle, 2024). However, no empirical study has examined behavioral and attitudinal AI use across giftedness and gender or linked AI literacy to Pedaste et al.’s (2015) full inquiry cycle—critical gaps this study addresses (Akhmadieva et al., 2023).
Despite these affordances, significant limitations persist. Rule-based chatbots struggle with the nuances of natural language, resulting in superficial interactions (Chang et al., 2023). Equitable access remains uneven due to connectivity and device disparities, and over-reliance on AI risks diminishing authentic inquiry depth and teacher facilitation if not balanced with human guidance (Kunnath & Botes, 2025).
To address these gaps, this study develops the AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ), grounded in AI literacy and inquiry-based learning (IBL) frameworks. The conceptual alignment (see Figure 1) links four AI literacy dimensions—information management, conceptualization, experimental design, and data analysis—to the core phases of scientific inquiry. This framework guided our analyses of how students’ AI use varies across gender and giftedness groups. The resulting insights establish the conceptual basis for examining how students use AI during inquiry activities and how such practices differ across learner groups, thus directly informing the research gap addressed in this study.

1.2. Cognitive and Behavioral Characteristics of Gifted Students in Inquiry Contexts

In the 1920s, Terman defined geniuses as individuals with an IQ of 140 or above (a percentile rank above 99) and conducted long-term follow-up research on the psychological, physical, and personality traits of gifted children; in this tradition, “giftedness” became synonymous with intelligence. Following Terman’s study, an IQ of 130 (a percentile rank above 97 on intelligence tests) became the standard for identifying gifted children. However, identifying giftedness solely through a single intelligence test has since been replaced by the concept of multiple intelligences. Renzulli (1978), noting the emergence of many underachieving gifted children, proposed that giftedness develops through the interaction of three characteristics: (1) above-average ability, (2) creativity, and (3) task commitment. According to Renzulli’s Three-Ring Conception of Giftedness, gifted performance does not necessarily depend on a high IQ but rather on strong perseverance and dedication. Howard Gardner’s (1983) theory of multiple intelligences strongly opposed defining intelligence solely by IQ, arguing that no single assessment tool can adequately capture the complexity of human cognitive abilities. The Actiotope Model of Giftedness (AMG) holds that giftedness is not an innate personal attribute but a dynamic, socially constructed phenomenon arising from the interaction between an individual’s actions and their evolving environment (Ziegler, 2005).
Based on the development of multiple concepts of giftedness and respect for various ethnic groups, the National Association for Gifted Children (2014) stated the following: “Students with gifts and talents perform—or have the capability to perform—at higher levels compared to others of the same age, experience, and environment in one or more domains. They require modification(s) to their educational experience(s) to learn and realize their potential.”
Gifted students’ cognitive characteristics include the effective use of acquired knowledge. They prefer complex and challenging environments and fast problem-solving. They are capable of presenting and classifying problems efficiently, as well as integrating procedural knowledge. They also possess flexible problem-solving skills, excellent metacognition, and self-regulatory ability (Shore & Kanevsky, 1993). Jackson and Butterfield (1986) argued that the significant difference between gifted individuals and others lies in their unusual ability to analyze and process various types of information during problem-solving, as well as their capacity to make effective adaptations; these abilities can be empirically verified through experimental observation. Metacognitive ability enables individuals to be aware of, plan, monitor, evaluate, and regulate their behaviors or approaches to tasks so that the outcomes of their actions may approach perfection. Pfeiffer (2008), by contrast, observed that “special needs children” is a phrase most people associate with students who struggle to overcome learning and physical disabilities, as well as problem behaviors that interfere with achieving full academic potential.
In conclusion, the concept of giftedness is multifaceted, and its development results from the interaction of innate and environmental factors. Therefore, the assessment of giftedness also requires multiple approaches and long-term observation. Against this theoretical backdrop, the present study investigates how gifted students use AI to support scientific inquiry, drawing on these models of giftedness to interpret their AI-related behaviors and attitudes and to explain individual differences in AI-supported inquiry performance.

1.3. Scientific Inquiry and Gifted Education

Scientific inquiry is an authentic process through which students construct and refine scientific understanding by formulating questions, investigating phenomena, and reasoning from evidence (Tang et al., 2009; Bybee, 2006). Inquiry-based learning (IBL) translates this process into classroom practice, positioning students as active constructors of knowledge who engage in cycles of exploration, experimentation, and reflection. Pedaste et al. (2015) synthesized existing inquiry models into a comprehensive framework comprising five iterative phases—Orientation, Conceptualization, Investigation, Conclusion, and Discussion—each representing a core component of authentic scientific inquiry. This cyclical structure fosters scientific reasoning, argumentation, and evidence-based sense-making, rather than relying on procedural routines.
In Taiwan, the Curriculum Guidelines of 12-Year Basic Education: Natural Science Domain (Ministry of Education, 2018) closely align with these international inquiry frameworks. The standards emphasize the mutually reinforcing relationship between learning performance (skills and attitudes) and content knowledge. Inquiry-based learning is articulated through four stages—observation and problem identification, planning and execution, analysis and discovery, and discussion and communication—which correspond conceptually to the phases identified by Pedaste et al. (2015). Together, these stages underscore a shared emphasis on integrating observation, experimentation, analysis, and reflective communication as essential components of science learning.
Gifted education provides a context in which IBL can be particularly effective. Contemporary theories of giftedness—such as Renzulli’s Three-Ring Conception (1978), Gardner’s Multiple Intelligences (1983), and Ziegler’s Actiotope Model of Giftedness (2005)—emphasize that giftedness is multidimensional, emerging through the interaction of ability, creativity, and environmental opportunity. Inquiry-based approaches are especially well suited to the cognitive profiles of gifted students. Studies indicate that IBL enhances gifted students’ conceptual understanding, self-efficacy, and scientific motivation (Özgür & Yilmaz, 2017; Eysink et al., 2015). When designed with adequate scaffolding, IBL environments enable gifted students to engage deeply in scientific reasoning and metacognitive reflection, cultivating both conceptual mastery and intrinsic motivation.
In summary, the convergence of these frameworks underscores that AI-assisted inquiry can further enhance metacognitive regulation and self-directed learning among diverse learners. However, empirical evidence comparing gifted and non-gifted students in AI-supported inquiry remains limited. This study addresses that gap by examining how AI tools are integrated into the inquiry processes of students across ability and gender groups, drawing from both international and Taiwanese inquiry frameworks.

1.4. AI and Science Inquiry in Gifted Education

The integration of AI into gifted education has created new opportunities for personalization, differentiation, and inquiry-driven learning. Gifted students, who often demonstrate rapid information processing, curiosity, and metacognitive strength, require cognitively challenging and interest-aligned content. Drawing on constructivist and inquiry-based learning (IBL) theories, recent research underscores AI’s capacity to support higher-order thinking and self-regulated learning through adaptive feedback and task scaffolding.
Empirical evidence illustrates how AI can operationalize inquiry principles in gifted education. For instance, Kahraman and Kıyıcı (2025) analyzed ChatGPT-4-generated science lesson plans for Turkey’s national curriculum and found strong alignment with the IBL phases—orientation, investigation, and evidence collection. These AI-generated plans promoted hypothesis-driven exploration and real-world relevance, though they struggled to sustain curiosity or provide affective feedback. Similarly, Chang et al. (2023) developed “Inquirybot,” a conversational agent designed to guide elementary gifted students through inquiry lessons on sound transmission. The bot effectively prompted planning and evidence evaluation but lacked nuanced emotional engagement, highlighting current technical limits in dialogic AI.
Generative AI further enhances interdisciplinary inquiry by assisting gifted students in synthesizing diverse data sources. Tools such as ChatGPT, Perplexity, and LabXchange facilitate data summarization, concept mapping, and report drafting—core competencies for student-led investigations and competitions in gifted programs (Kahraman & Kıyıcı, 2025). Moreover, AI tools supporting creative expression (e.g., DALL·E, MuseNet) broaden opportunities for multimodal creativity, allowing students to visualize abstract concepts, compose music, or design scientific visualizations (Siegle, 2023). These applications align with the goals of gifted education to nurture originality, autonomy, and transdisciplinary thinking.
At the instructional level, AI also benefits teachers by streamlining differentiation and feedback. AI-assisted systems can suggest alternative learning tasks, design tiered lesson plans, and provide formative feedback, reducing teachers’ cognitive load and ensuring appropriate challenge levels for diverse learners (Trpin, 2024). However, ethical and pedagogical concerns persist. Over-reliance on AI-generated materials risks superficial understanding and cultural bias (Kim, 2023). Gifted students also require mentorship and socio-emotional support that AI cannot yet replicate. Additionally, issues of academic honesty and overdependence on automation necessitate explicit AI literacy education and ethical guidelines for responsible use.
In synthesis, AI integration in gifted education shows strong potential to enhance inquiry-based learning by supporting autonomy, creativity, and metacognitive engagement. However, limitations in emotional intelligence, contextual sensitivity, and ethical judgment underscore the need for AI to serve as a co-facilitator rather than a substitute for human instruction. To investigate how students actually engage with AI during learning, this study employs the AI-assisted science learning questionnaire, which captures students’ AI-assisted learning behaviors across key cognitive and metacognitive dimensions. By comparing gifted and non-gifted students, as well as gender groups, the study addresses a critical gap in understanding how AI-mediated learning practices differ across diverse learners.

1.5. AI and the Gender Gap

A substantial and persistent gender gap has emerged in the adoption of generative AI, shaped by differences in knowledge, confidence, ethics, and sociocultural access (Russo et al., 2025; Otis et al., 2025). Large-scale international studies indicate that women are approximately 20% less likely than men to use chat-based AI tools such as ChatGPT, Gemini, or Copilot. This disparity reflects not only access but also deeper differences in familiarity, self-efficacy, and perceived usefulness. Women reported lower confidence in querying and applying AI, greater reliance on training before independent use, and reduced persistence following unsuccessful attempts, whereas men tended to engage in more spontaneous and sustained exploration (Otis et al., 2025).
Gendered patterns also extend to attitudes and ethics. Female users expressed more caution regarding the educational use of AI and were more likely to view it as academically risky or socially disruptive (Russo et al., 2025). They reported greater concern about academic integrity, job displacement, and weakened interpersonal communication, alongside higher levels of AI-related anxiety, which predicted lower willingness to adopt AI tools.
Sociocultural and structural factors further exacerbate these patterns. Limited exposure to emerging technologies, gendered differences in professional networks, and lower representation in STEM disciplines collectively restrict women’s opportunities to engage with AI (Otis et al., 2025). Even when access barriers are minimized, usage disparities persist, suggesting that structural equity alone cannot close the gap (Russo et al., 2025). Without targeted interventions, these disparities risk reinforcing a cycle in which AI systems, trained on datasets with limited representation of women’s experiences, perpetuate unequal participation and outcomes (Guilbeault et al., 2024).
In summary, the gender gap in AI engagement reflects not only differential access but also disparities in confidence, perception, and cultural representation. The findings above highlight the importance of examining gender as a moderating factor in AI-assisted scientific inquiry. By investigating how male and female students—both gifted and non-gifted—interact with AI tools in science learning contexts, this study contributes to understanding how gendered patterns of AI engagement may shape inquiry participation, metacognitive development, and learning equity.

2. Materials and Methods

2.1. Research Participants

In Taiwan, gifted students are identified through the Gifted Identification and Placement Committee under the Ministry of Education, based on the principles of multiple intelligences. Identification typically involves one or more of the following criteria:
A.
scoring two standard deviations above the mean (97th percentile or above) on academic aptitude tests and being recommended by professionals or teachers;
B.
receiving awards in national or international academic competitions;
C.
demonstrating outstanding performance in academic seminars; or
D.
publishing research reports or receiving formal recommendations for exceptional academic achievement.
Gifted students receive specialized educational support, including accelerated coursework, enriched curricula, and opportunities for independent research. It is common for them to present scientific inquiry projects at the end of each semester.
The participants in this study were junior high school students aged 13–15 from schools across northern Taiwan that offered both gifted and regular classes. A stratified purposive sampling approach was employed to ensure balanced representation across gender, school type, and giftedness categories. In total, 527 questionnaires were distributed, and 484 valid responses were obtained. Table 1 shows the distribution of participants. Overall, 197 were gifted students (GSs; 116 males, 68 females, and 13 undisclosed gender), and 287 were non-gifted students (NGSs; 110 males, 165 females, and 12 undisclosed gender). The survey was conducted online, emphasizing anonymity and voluntary participation. Cases with missing gender data were retained for overall analyses but excluded from gender-based comparisons.

2.2. Instruments

This study employed three survey instruments: (1) the AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ), a self-developed tool designed to examine students’ experiences with AI-assisted scientific inquiry learning; (2) the AI-Assisted Science Learning Questionnaire (AASLQ), also self-developed, designed to examine students’ attitudes toward learning science with AI; and (3) the AI Literacy Questionnaire (AILQ; Ng et al., 2023), constructed to capture multiple dimensions of AI literacy, including affective, behavioral, and cognitive aspects. All questionnaires were administered to all participants and used to explore differences across student groups based on gender, giftedness, and academic specialization.

2.2.1. AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ)

The AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ; 46 items) was developed to assess students’ engagement with AI tools across the stages of scientific inquiry. The instrument was aligned with both the Taiwanese Science Inquiry Curriculum and the international IBL framework proposed by Pedaste et al. (2015). Expert reviews and pilot testing refined the instrument, which uses a five-point frequency scale (0–4).
The dataset was suitable for factor analysis (KMO = 0.985; Bartlett’s test, p < .001). Exploratory Factor Analysis (EFA) using principal axis factoring and varimax rotation yielded a four-factor structure, explaining 80.14% of the total variance (see Appendix A and Appendix B). These factors were:
(1)
ASDAR—AI-Supported Data Analysis and Reporting
(2)
ASEDM—AI-Supported Experimental Design and Methods
(3)
ASCE—AI-Supported Conceptualization and Explanation
(4)
ASIMS—AI-Supported Information Management and Synthesis
Table 2 shows the alignment of AASILQ subscales with the phases of IBL, the stages of TSCI, and the associated questionnaire items.
Factor loadings exceeded .47, supporting convergent validity. Reliability analysis revealed excellent internal consistency (α = .989; split-half = .968; ω = .992), all surpassing the .70 benchmark (Drost, 2011; see Table 3).
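For readers who wish to reproduce this kind of analysis, the sketch below illustrates the factor-analytic and reliability workflow described above using open-source Python tools. It is an illustration only, not the authors’ analysis code (the study used SPSS): simulated responses stand in for the 46 AASILQ items, the item names are hypothetical, and factor_analyzer’s 'principal' method is used as an approximation of principal axis factoring.

```python
# Hedged sketch of the EFA and reliability workflow described above (the study itself used SPSS).
# Simulated 0-4 responses stand in for the 46 AASILQ items; item names are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(42)
aasilq = pd.DataFrame(rng.integers(0, 5, size=(484, 46)),
                      columns=[f"item_{i:02d}" for i in range(1, 47)]).astype(float)

# Sampling adequacy and sphericity (cf. KMO = .985 and Bartlett p < .001 reported above)
chi_square, bartlett_p = calculate_bartlett_sphericity(aasilq)
_, kmo_total = calculate_kmo(aasilq)

# Four-factor extraction with varimax rotation ('principal' approximates principal axis factoring)
efa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
efa.fit(aasilq)
loadings = pd.DataFrame(efa.loadings_, index=aasilq.columns)
_, _, cumulative = efa.get_factor_variance()

# Internal consistency (cf. alpha = .989); split-half and omega are not shown here
alpha, _ = pg.cronbach_alpha(data=aasilq)
print(round(kmo_total, 3), round(bartlett_p, 3), round(cumulative[-1], 3), round(alpha, 3))
```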
Pearson correlation coefficients were computed to examine the relationships among the subscales of the three questionnaires. As shown in Table 6, the four AASIL dimensions were significantly intercorrelated at the .01 level (two-tailed). The correlations ranged from r = .846 to r = .921 (p < .01), providing evidence of high convergent validity for the instrument.
In sum, the AASILQ demonstrated strong psychometric properties and is a reliable tool for assessing students’ use of AI in scientific inquiry learning.

2.2.2. AI-Assisted Science Learning Questionnaire (AASLQ)

The AI-Assisted Science Learning Questionnaire (AASLQ) was respecified into two factors:
(1)
AALA—AI-Assisted Learning Applications, and
(2)
AASDL—AI-Assisted Self-Directed Learning
The data were suitable for factor analysis, as indicated by a high KMO value (.931) and a significant Bartlett’s test (p < .001). Exploratory factor analysis (principal axis factoring with varimax rotation) confirmed a two-factor solution, with all loadings above .50 and 55.00% of the total variance explained (see Appendix C and Appendix D). Reliability analyses showed strong internal consistency (α = .934; split-half = .832; ω = .938), all exceeding the .70 benchmark (see Table 4). Pearson correlation coefficients were computed to examine the relationships among the subscales. As shown in Table 6, the two AASL dimensions were weakly but significantly correlated (r = .277, p < .01, two-tailed). Robust tests of equality of means using the Welch and Brown–Forsythe statistics indicated significant group differences across AALA, AASDL, and the AASL total (all p < .01). These results support the robustness of the findings even when the assumption of homogeneity of variance is relaxed (see Appendix G).
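As an illustration of the variance-robust comparison mentioned here, the sketch below shows how Welch’s ANOVA could be run in Python with pingouin. The Brown–Forsythe statistic reported from SPSS is not reproduced, and the synthetic data, group labels, and column names are hypothetical stand-ins.

```python
# Hedged sketch of the variance-robust means comparison (Welch's ANOVA via pingouin).
# Synthetic data and hypothetical column names stand in for the real AASL scores.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": rng.choice(["GM", "GF", "NGM", "NGF"], size=459),
    "AALA": rng.normal(1.7, 0.9, 459),
    "AASDL": rng.normal(3.0, 0.6, 459),
})
df["AASL_total"] = df[["AALA", "AASDL"]].mean(axis=1)

for dv in ["AALA", "AASDL", "AASL_total"]:
    print(dv)
    print(pg.welch_anova(data=df, dv=dv, between="group").round(3))
```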

2.2.3. AI Literacy Questionnaire (AILQ)

The AI Literacy Questionnaire (AILQ; Ng et al., 2023) assesses the affective, behavioral, and cognitive dimensions of AI literacy using a 5-point Likert scale. The data were suitable for factor analysis (KMO = .965; Bartlett’s test, p < .001), and all factor loadings exceeded .50, confirming convergent validity. Exploratory factor analysis supported a four-factor structure—Ethics and Responsibility in AI (AIER), AI Self-Efficacy (AISE), AI Learning Engagement (AILE), and AI Application and Interest (AIAI)—explaining 63.88% of the variance (see Appendix E and Appendix F). Reliability was excellent, with an overall Cronbach’s alpha of .966, a Guttman split-half coefficient of .891, and a McDonald’s omega of .938 (see Table 5). These results indicate that the AILQ is a reliable and valid instrument for assessing students’ AI literacy.
Pearson correlation coefficients were computed to examine the relationships among the three questionnaires. As shown in Table 6, four AIL dimensions are significantly correlated at the .01 level (two-tailed). The correlations range from moderate (r = .526, p < .01) to high (r = .779, p < .01), providing evidence of convergent validity for the instrument.

2.2.4. Correlations Among the AASILQ, AASLQ, and AILQ

Table 6 shows the Pearson correlations among the AASILQ, AASLQ, and AILQ subscales. The correlation coefficients were computed to examine the relationships among the 10 factors of the three instruments. As shown in Table 6, all factors were significantly correlated at the p < .01 level (two-tailed), providing evidence of convergent validity for the instruments.
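A correlation table such as Table 6 can be assembled as in the sketch below. Synthetic scores stand in for the ten subscale means, and the column names simply reuse the acronyms defined above; neither the data nor the layout is taken from the study.

```python
# Hedged sketch of the Table 6 correlation analysis across the ten subscale scores.
import numpy as np
import pandas as pd
from scipy import stats

factors = ["ASDAR", "ASEDM", "ASCE", "ASIMS",      # AASILQ
           "AALA", "AASDL",                        # AASLQ
           "AIER", "AISE", "AILE", "AIAI"]         # AILQ
rng = np.random.default_rng(3)
scores = pd.DataFrame(rng.normal(size=(459, 10)), columns=factors)

r_matrix = scores.corr(method="pearson")           # Pearson r for every pair

# Two-tailed p-values (pandas .corr() reports r only)
p_matrix = pd.DataFrame(index=factors, columns=factors, dtype=float)
for a in factors:
    for b in factors:
        p_matrix.loc[a, b] = stats.pearsonr(scores[a], scores[b])[1]
print(r_matrix.round(3))
```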

2.2.5. Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM)

Table 7 presents the confirmatory factor analysis (CFA) results and model fit indices for the three instruments: AASILQ, AASLQ, and AILQ. The overall model fit was evaluated using AMOS 30.0. Although the RMSEA (.113) and AGFI (.853) values indicated limitations in absolute fit, most incremental fit indices (CFI, IFI, TLI) and parsimony indices (PGFI, PNFI, AIC) met recommended thresholds.
These results demonstrate acceptable construct validity for the three measurement models—AASILQ, AASLQ, and AILQ—and provide a sound basis for subsequent Structural Equation Modeling (SEM) analyses.
As shown in Figure 2, the latent constructs were positively correlated. AASIL demonstrated a high correlation with AASL (r = .90, p < .001) and a moderate correlation with AIL (r = .30, p < .001). Furthermore, AIL and AASL were positively correlated (r = .41, p < .001). All standardized factor loadings (λ) were satisfactory (e.g., AASIL: .91–.97; AIL: .72–.92), with the exception of the AASDL indicator for the AASL construct (λ = .31). This low loading indicates a weaker representation of the subconstruct, suggesting a need for item refinement in future studies.
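The measurement models were estimated in AMOS 30.0. For illustration only, a roughly comparable specification in Python’s semopy (which uses lavaan-style syntax) might look like the sketch below; synthetic subscale scores with a built-in three-factor structure stand in for the real data, and all variable names simply reuse the acronyms from the article.

```python
# Illustrative CFA sketch only; the published models were estimated in AMOS 30.0.
# Synthetic subscale scores with a built-in three-factor structure stand in for the real data.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(5)
n = 459
cov = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.4], [0.3, 0.4, 1.0]]     # correlated latent factors
latent = rng.multivariate_normal([0, 0, 0], cov, size=n)
cols = {"ASDAR": 0, "ASEDM": 0, "ASCE": 0, "ASIMS": 0,        # AASIL indicators
        "AALA": 1, "AASDL": 1,                                # AASL indicators
        "AIER": 2, "AISE": 2, "AILE": 2, "AIAI": 2}           # AIL indicators
scores = pd.DataFrame({c: latent[:, f] + rng.normal(scale=0.6, size=n) for c, f in cols.items()})

model_desc = """
AASIL =~ ASDAR + ASEDM + ASCE + ASIMS
AASL  =~ AALA + AASDL
AIL   =~ AIER + AISE + AILE + AIAI
"""
model = semopy.Model(model_desc)
model.fit(scores)                   # latent covariances are estimated by default
print(model.inspect())              # loadings and factor covariances
print(semopy.calc_stats(model))     # chi2/df, CFI, TLI, RMSEA, AIC, and related indices
```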

2.3. Data Collection and Analysis

After the questionnaires were collected, the data were coded and analyzed using SPSS Statistics version 29. Descriptive statistics (means, standard deviations, and frequencies) were computed to summarize participants’ background information and AI usage patterns.
To address the research questions, inferential statistics were employed. Independent-sample t-tests and two-way ANOVAs (giftedness × gender) were used to compare group differences in AI-Assisted Scientific Inquiry Learning, AI-Assisted Science Learning, AI literacy, and AI-related concerns. Pearson product–moment correlations were calculated to examine the relationships among the dimensions of the AI literacy and science learning constructs. The statistical significance was set at p < .05.
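The sketch below outlines how the inferential steps described in this section could be carried out in Python rather than SPSS. A small synthetic dataset stands in for the real data, and all column names ('gifted', 'gender', 'AASIL_total') are hypothetical placeholders.

```python
# Hedged sketch of the inferential workflow (t-test gated by Levene's check, two-way ANOVA).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gifted": rng.choice(["GS", "NGS"], size=200),
    "gender": rng.choice(["M", "F"], size=200),
    "AASIL_total": rng.normal(1.4, 1.0, size=200),
})

# Independent-samples t-test: gifted vs. non-gifted on the AASIL total score
gs = df.loc[df["gifted"] == "GS", "AASIL_total"]
ngs = df.loc[df["gifted"] == "NGS", "AASIL_total"]
_, levene_p = stats.levene(gs, ngs)                                 # homogeneity of variance
t_stat, p_val = stats.ttest_ind(gs, ngs, equal_var=levene_p > .05)

# Two-way ANOVA: giftedness x gender on the same outcome
ols_fit = smf.ols("AASIL_total ~ C(gifted) * C(gender)", data=df).fit()
print(t_stat, p_val)
print(sm.stats.anova_lm(ols_fit, typ=2))
```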

3. Results

To assess the normality of the data distributions, Q–Q plots were generated for the three main measures. As shown in Figure 3, Figure 4 and Figure 5, the data points generally align with the reference line, suggesting an approximate normal distribution, though slight deviations can be observed at the tails. These results support the use of parametric tests (e.g., t-tests and ANOVA) in subsequent analyses.
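A normality check of this kind can be generated as in the sketch below; synthetic scores stand in for the three total scores, and the column names are hypothetical.

```python
# Hedged sketch of the Q-Q normality check against a normal distribution.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "AASIL_total": rng.normal(1.4, 1.0, 484),
    "AASL_total": rng.normal(2.3, 0.6, 484),
    "AIL_total": rng.normal(3.3, 0.7, 484),
})

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, col in zip(axes, df.columns):
    stats.probplot(df[col], dist="norm", plot=ax)  # points near the line suggest normality
    ax.set_title(col)
plt.tight_layout()
plt.show()
```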

3.1. Differences in AI-Assisted Scientific Inquiry Learning (AASIL) Among Gifted and Non-Gifted Students of Different Genders

3.1.1. Group Differences in AI-Assisted Scientific Inquiry Learning

To examine whether gifted students (GSs) and non-gifted students (NGSs) differed in their AI-Assisted Scientific Inquiry Learning, independent-sample t-tests were conducted. Levene’s test indicated that the assumption of equal variances was met for all variables (p > .05). Therefore, the results were interpreted under the equal variances assumed condition. As shown in Table 8, gender differences were not statistically significant (all p > .05), with trivial effect sizes (η2 < .01). However, as presented in Table 9, significant group differences emerged between gifted students (GSs) and non-gifted students (NGSs). Non-gifted students scored significantly higher than gifted students on all four AASIL dimensions, and the total AASIL score (p < .01). The observed effect sizes ranged from small to moderate (η2 = .02–.04).
These results suggest that non-gifted students may rely more heavily on AI tools to support their scientific inquiry processes, particularly in data analysis, experimental design, and conceptualization. In contrast, gifted students may engage in these tasks with greater independence or prefer traditional analytical methods that demand higher cognitive control.
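For context, one common way to obtain the eta-squared values reported for these t-tests is directly from the t statistic and its degrees of freedom, as the small worked sketch below shows; the numbers in the example are illustrative, not taken from the tables.

```python
# Worked illustration of eta-squared for an independent-samples t-test:
# eta^2 = t^2 / (t^2 + df), where df = n1 + n2 - 2.
def eta_squared_from_t(t: float, df: int) -> float:
    """Proportion of variance in the outcome explained by group membership."""
    return t ** 2 / (t ** 2 + df)

# Illustrative values only: t = 3.0 with df = 457 gives roughly eta^2 = .019 (a small effect).
print(round(eta_squared_from_t(3.0, 457), 3))
```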

3.1.2. Differences in AI-Assisted Scientific Inquiry Learning (AASIL) Among Gifted and Non-Gifted Students of Different Genders

A one-way ANOVA (see Table 10) revealed significant group differences across all four subscales and the total scale. For AI-Supported Data Analysis and Reporting (ASDAR), both non-gifted males (M = 1.55; SD = 1.11) and non-gifted females (M = 1.53; SD = 1.07) scored significantly higher than gifted females (M = 0.93; SD = 0.93), F(3, 455) = 6.93, p < .001. A similar pattern was observed for AI-Supported Experimental Design and Methods (ASEDM), where NGMs (M = 1.56; SD = 1.13) and NGFs (M = 1.43; SD = 1.09) outperformed GFs (M = 0.93; SD = 0.97), F(3, 455) = 5.19, p = .002.
For AI-Supported Conceptualization and Explanation (ASCE), again, NGFs (M = 1.54; SD = 1.07) and NGMs (M = 1.59; SD = 1.13) obtained higher mean scores compared with GFs (M = 0.97; SD = 0.89), F(3, 455) = 5.70, p = .001. In AI-Supported Information Management and Synthesis (ASIMS), NGMs (M = 1.67; SD = 1.16) scored significantly higher than GFs (M = 1.18; SD = 0.99), F(3, 455) = 3.41, p = .018.
Finally, in terms of the total AASILQ score, both non-gifted males (M = 1.59; SD = 1.08) and non-gifted females (M = 1.52; SD = 1.03) significantly outperformed gifted females (M = 1.00; SD = 0.89), F (3, 455) = 5.63, p = .001.
GFs consistently scored significantly lower than NGFs and NGMs across most subscales, with small to moderate effect sizes. GMs also showed lower scores in some subscales, though the differences were less consistent. The regression analyses of group differences across the AASILQ subscales and total score, with effect sizes (partial η2), noncentrality parameters (λ), and observed power reported, can be seen in Appendix H. The high observed power for significant comparisons indicates that these results are statistically robust.
Figure 6 shows the distribution of AASIL scores across four student groups. Non-gifted males (NGMs) and females (NGFs) achieved higher median scores than both gifted groups, while gifted females (GFs) demonstrated the lowest overall performance. The narrower interquartile ranges among non-gifted students indicate more consistent engagement with AI-assisted inquiry learning, whereas gifted students, especially females, showed lower and more variable levels of AI use.
In summary, gifted females reported the lowest engagement in AI-assisted scientific inquiry, while non-gifted males scored the highest across subscales. Gifted males performed moderately, though still below non-gifted peers in AI-supported data analysis and reporting (ASDAR). These results suggest that gender and giftedness jointly influence students’ AI-assisted inquiry, with gifted females being the least engaged and non-gifted students showing more consistent use of AI tools.

3.2. Differences in AI-Assisted Science Learning (AASL) Among Gifted and Non-Gifted Students of Different Genders

3.2.1. Group Differences in AI-Assisted Science Learning (AASL)

To examine whether differences in gender or educational placement affected students’ AI-Assisted Science Learning, independent-samples t-tests were conducted. Levene’s test indicated that the assumption of equal variances was met for all variables (p > .05). Therefore, the results were interpreted under the equal variances assumed condition. Table 11 presents the results, showing no significant gender differences on the total AASL scale; however, males scored slightly higher on AI-Assisted Self-Directed Learning (AASDL). In contrast, significant differences emerged according to educational placement: non-gifted students consistently reported higher scores than gifted students on AI-Assisted Learning Applications (AALA), AASDL, and the overall AASL measure (Table 12).

3.2.2. Differences Among GSs and NGSs of Different Genders in AASL

A two-way ANOVA was conducted to examine the effects of educational placement (gifted vs. non-gifted) and gender (male vs. female) on the two dimensions of AI-Assisted Science Learning and the total scale. Descriptive statistics for the four groups (gifted male, gifted female, non-gifted male, and non-gifted female) across the two dimensions and the total scale are presented in Table 13.
For AI-Assisted Learning Applications, non-gifted female students (M = 1.90; SD = 0.95) and non-gifted male students (M = 1.84; SD = 0.95) reported higher means than gifted male students (M = 1.65; SD = 0.91) and gifted female students (M = 1.52; SD = 0.83). For AI-Assisted Self-Directed Learning, again, non-gifted male students (M = 3.13; SD = 0.66) and non-gifted female students (M = 3.06; SD = 0.58) scored higher than gifted male students (M = 3.00; SD = 0.65) and gifted female students (M = 2.79; SD = 0.51).
Finally, on the total scale, non-gifted male students (M = 2.48; SD = 0.58) and non-gifted female students (M = 2.39; SD = 0.61) scored higher than gifted male students (M = 2.33; SD = 0.64) and gifted female students (M = 2.16; SD = 0.55). These patterns consistently indicate that, regardless of gender, non-gifted students reported higher engagement in AI-Assisted Science Learning activities than gifted students.
Figure 7 presents the boxplot of AI-Assisted Science Learning (AASL) scores across the four student groups. As shown, non-gifted students (both male and female) generally scored higher than their gifted counterparts, with gifted females reporting the lowest median scores.

3.3. Differences in AI Literacy Among Gifted and Non-Gifted Students of Different Genders

3.3.1. Group Differences in AI Literacy

To examine group differences in AI literacy, independent-samples t-tests were conducted. Levene’s test indicated that the assumption of equal variances was met for all variables (p > .05). Therefore, the results were interpreted under the equal variances assumed condition. Table 14 presents the descriptive statistics and independent-sample t-test results for the AI literacy dimensions, categorized by gender and educational placement. Gender-based comparisons revealed that male students reported significantly higher scores than female students in AI Ethics and Responsibility (AIER) (t = 2.19, p < .05), AI Self-Efficacy (AISE) (t = 4.21, p < .001), and AI Learning Engagement (AILE) (t = 4.27, p < .001). No significant gender differences were observed for AI Application and Interest (AIAI) or the total AI literacy score.
Comparisons by educational placement showed that gifted students scored significantly higher than non-gifted students on AI Ethics and Responsibility (AIER) (t = 4.15, p < .001), AI Application and Interest (AIAI) (t = 1.85, p < .05), and the total AI literacy score (t = 1.85, p < .05). No significant differences were found for AISE or AILE. These findings suggest that male students tend to demonstrate higher self-efficacy and greater engagement with AI in learning contexts, and that gifted students report greater awareness of the ethical and responsible use of AI (Table 15).

3.3.2. Gender-Based Differences in AI Literacy Among Gifted and Non-Gifted Groups

A one-way ANOVA was employed to investigate potential differences among the four student groups across the AI literacy dimensions. The analyses revealed significant main effects for all subscales (p < .01). Post-hoc comparisons (Games–Howell and Scheffé tests) further identified specific between-group differences. Detailed results of the ANOVA and post-hoc analyses are presented in Table 16.
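The sketch below illustrates how the one-way ANOVA and Games–Howell post-hoc comparisons could be reproduced with pingouin; the Scheffé test also used by the authors is not shown, and the synthetic data, group labels, and column name are hypothetical.

```python
# Hedged sketch: one-way ANOVA across the four groups plus Games-Howell post-hoc tests.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "group": rng.choice(["GM", "GF", "NGM", "NGF"], size=459),
    "AIER": rng.normal(3.5, 0.8, 459),
})

print(pg.anova(data=df, dv="AIER", between="group", detailed=True).round(3))        # F, p, np2
print(pg.pairwise_gameshowell(data=df, dv="AIER", between="group").round(3))        # pairwise tests
```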
The results revealed significant group differences for all four dimensions of AI literacy and the total scale (p < .001).
AIER (Ethics and Responsibility in AI): Significant differences were found among groups, F (3, 455) = 7.00, p < .001. The post-hoc Games–Howell tests indicated that gifted male students scored significantly higher than non-gifted males (p < .05) and non-gifted females (p < .001), and that gifted females also scored higher than non-gifted females (p < .05).
AISE (AI Self-Efficacy): Significant differences emerged, F (3, 455) = 6.54, p < .001. Gifted males reported higher self-efficacy than gifted females (p < .05) and non-gifted females (p < .01), while non-gifted males also scored higher than non-gifted females (p < .05).
AILE (AI Learning Engagement): The group effect was significant, F (3, 455) = 6.96, p < .001. Scheffé tests revealed that non-gifted males scored significantly higher than gifted females and non-gifted females (p < .01).
AIAI (AI Application and Interest): Group differences were also significant, F (3, 455) = 7.04, p < .001. Gifted males had the highest mean (M = 3.79; SD = .90), but post-hoc comparisons showed no significant pairwise differences beyond the overall variation.
Total AI literacy: A significant overall effect was found, F (3, 455) = 7.04, p < .001. Scheffé tests indicated that gifted males scored significantly higher than non-gifted females (p < .01).
Gifted male students scored the highest on AIER, AISE, and the total AI literacy scale. In contrast, non-gifted male students obtained the highest mean score for AILE, performing significantly higher than both gifted and non-gifted female students. Overall, female students, regardless of giftedness, demonstrated comparatively lower mean scores across most dimensions.
Figure 8 presents a boxplot of the AI literacy scores across four student groups (GM, GF, NGM, and NGF). The plot shows that GMs had the highest median scores, followed by NGMs and NGFs, while GFs demonstrated relatively lower central tendency and variability. These findings highlight consistent gender-based and educational placement-related differences in AI literacy, with gifted males maintaining an overall advantage.

3.4. AI Tools Commonly Used in the Participants’ Daily Learning

To address RQ4, descriptive statistics were calculated to examine how students conceptualize AI and which AI tools they commonly use in their daily learning.
Perceptions of AI: As shown in Table 17, most students regarded AI as a technology that helps answer or solve problems (91.5%). A large proportion also viewed it as a tool that can assist with writing, calculations, and programming (81.2%). Additionally, 75.4% described AI as a chatting or drawing program, and 72.1% acknowledged AI as a powerful but potentially dangerous tool. Only a small number of students (3.7%) provided other descriptions. These findings suggest that students generally have a practical, problem-solving-oriented perspective on AI, while also acknowledging its risks.
AI tools used: Table 18 presents the specific AI tools reported by students. The vast majority had used ChatGPT (95.2%), followed by AI translation tools (77.5%), AI voice assistants (54.8%), and AI drawing tools (49.4%). Some students also reported using localized educational platforms such as e-du on the Taiwan Adaptive Learning Platform (TALP) (34.5%) and Cool AI (CooC-Cloud’s AI learning assistant) (23.3%). More advanced or specialized tools, such as Gemini (24.6%), Copilot (14.7%), Gamma (12.0%), and Suno (8.9%), were less frequently used. Only a small percentage (1.9%) reported other tools. These results demonstrate that while ChatGPT and translation tools dominate students’ AI usage, a variety of other generative and educational AI platforms are also being incorporated into their learning experiences.

3.5. Concerns About Using AI in Science Learning

3.5.1. Differences in AI Concerns Between Gifted and Non-Gifted Students Across Genders

To examine whether gifted and non-gifted students, across genders, differed in their concerns about using AI in science learning, independent-sample t-tests were conducted, and the results are presented in Table 19. To assess potential response bias, one item (“AI can be used safely without concern”) was designed as a reverse-coded question. For analysis, this item was recoded so that higher scores consistently reflected greater concern about the use of AI in science learning.
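Recoding a reverse-keyed item of this kind is a one-line operation, as in the sketch below; the 1–5 scoring and the column name are assumptions for illustration, not details taken from the instrument.

```python
# Hedged sketch of the recoding step: assumes a 1-5 response scale and a hypothetical column name.
import pandas as pd

df = pd.DataFrame({"ai_safe_without_concern": [1, 3, 5, 4]})           # toy responses
df["ai_safe_without_concern_rc"] = 6 - df["ai_safe_without_concern"]   # higher = greater concern
print(df)
```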
Independent-sample t-tests revealed that male and female students reported similar levels of concern across most dimensions. However, significant gender differences were observed for data security and privacy (t = −2.63, p < .01), with female students (M = 3.14; SD = 0.66) expressing greater concern than male students (M = 2.95; SD = 0.90). In addition, females were more likely to disagree with the statement that ‘AI can be used safely without concern’ (t = −2.58, p < .01), suggesting relatively greater caution among female students.
When comparing educational placements, significant differences emerged for concerns about content accuracy (t = 2.90, p < .01) and safety perceptions (t = 3.81, p < .001). Gifted students were more concerned about AI content being inaccurate or inappropriate (M = 3.25; SD = 0.72) than non-gifted students (M = 3.05; SD = 0.74). Conversely, gifted students also expressed greater confidence that AI can be used safely in science learning (M = 3.38; SD = 0.83) than non-gifted students (M = 3.06; SD = 0.94). No significant group differences were found for concerns about over-reliance or data security/privacy.

3.5.2. Gender and Giftedness in Concerns About Using AI in Science Learning

A one-way ANOVA further explored group differences across the four subgroups (gifted male, gifted female, non-gifted male, non-gifted female). As shown in Table 20, the results demonstrate significant effects for content accuracy concerns (F (3, 455) = 3.38, p = .018), with post-hoc tests indicating that gifted males reported significantly greater concerns than non-gifted males. For data security and privacy, gifted females reported greater concern than gifted males (F (3, 455) = 3.12, p < .05). The largest effect appeared for safety perceptions (F (3, 455) = 15.39, p < .001), where gifted males, gifted females, and non-gifted females all scored significantly higher than non-gifted males, who expressed the lowest sense of safety.
Taken together, these findings suggest that female students are more cautious about privacy and safety when using AI. In contrast, gifted students exhibit both heightened concern about accuracy and greater confidence in the safe use of AI. Notably, non-gifted male students consistently expressed the lowest level of concern about AI risks, particularly regarding safe use.

4. Discussion

This study explores how gender and giftedness jointly influence students’ engagement with AI-assisted learning, encompassing scientific inquiry, science learning, and AI literacy. The findings highlight nuanced behavioral and cognitive patterns that align with existing theories while revealing culturally specific dynamics in Taiwan’s education context. Although stratified purposive sampling enhanced representativeness, the regional sample may limit the generalizability of results to other contexts.

4.1. Gender and Giftedness in AI-Assisted Scientific Inquiry Learning

The findings revealed that gifted students, particularly gifted females, reported lower use of AI in scientific inquiry learning compared to their non-gifted peers, which contrasts with much of the existing literature (Kahraman & Kıyıcı, 2025; Kim, 2023). This pattern can be explained through Bandura’s self-efficacy theory, which emphasizes that individuals’ confidence in their abilities affects their willingness to engage with technology. Gifted students’ high self-regulation and metacognitive control (Shore & Kanevsky, 1993) may lead them to rely less on AI, preferring autonomous reasoning and independent problem-solving. The relatively low engagement among gifted females may further reflect cultural and gender norms in Taiwan, where social expectations often discourage assertive technological exploration among girls. Consistent with prior research, female students generally display more cautious and conservative attitudes toward AI, highlighting broader gender disparities in confidence and perceptions of AI use (Otis et al., 2025; Russo et al., 2025).
Educational Implications: AI should serve as a scaffold rather than a substitute for reasoning. Teacher-led modeling of AI-supported inquiry could strengthen gifted females’ self-efficacy and reduce hesitation in AI adoption. Implementing AI-based differentiated instruction may help tailor inquiry tasks to students’ confidence levels and learning autonomy.

4.2. Gender-Based Differences in the Use of AI for General Science Learning

In line with AASIL findings, gifted students relied less on AI in general science learning, consistent with Ziegler’s Actiotope Model of Giftedness, which emphasizes the interaction between personal resources (motivation, cognition) and environmental factors. Gifted learners, operating with higher internalized competence, may perceive AI as supplementary rather than essential. Within Taiwan’s inquiry-oriented curriculum (Ministry of Education, 2018), these students internalize analytic and reflective approaches, leading to selective AI use. Hence, lower reliance on AI reflects not deficiency but autonomy and cognitive sophistication.
Educational Implications: Teachers should recognize that gifted students may require less AI assistance in basic science learning but can benefit from AI-supported enrichment tasks. Designing tiered assignments that allow gifted students to use AI in advanced applications—such as simulation, prediction, or interdisciplinary exploration—may enhance engagement without undermining their autonomy.

4.3. Differences in AI Literacy (AIL) Across Gender and Giftedness

In contrast to their lower AI use, gifted students exhibited higher AI literacy, especially in self-efficacy and reflective engagement, supporting the notion that cognitive potential translates into greater metacognitive awareness. According to the Technology Acceptance Model (TAM), higher perceived usefulness and self-efficacy foster more adaptive AI use. Gifted students’ ability to critically evaluate AI outputs allows them to integrate these tools purposefully. However, the relatively lower AI self-efficacy of female students highlights persistent gender gaps in technology confidence.
This finding is noteworthy, as prior research has rarely documented differences in AI literacy among gifted populations specifically. It resonates with previous findings on the metacognitive strengths of gifted students, including their ability to plan, monitor, and regulate their learning (Shore & Kanevsky, 1993; Eysink et al., 2015). By showing higher levels of AI literacy, gifted students may be better positioned to integrate AI into complex inquiry tasks, balancing the potential risks and opportunities that AI tools present in science learning (Kahraman & Kıyıcı, 2025).
Educational Implications: Schools could provide AI literacy training not only for non-gifted students, who may need additional support, but also for gifted students, to ensure they are challenged to apply AI literacy in authentic inquiry settings. Curriculum designers can emphasize critical evaluation of AI outputs, ethical considerations, and reflective practices to deepen students’ metacognitive engagement with AI.

4.4. Commonly Used AI Tools in Students’ Daily Learning Practices

The survey results indicated that students frequently used a limited range of AI tools, with preferences that differed from those highlighted in the international literature. For example, while platforms such as Magic School, Edpuzzle, and LabXchange have been noted to enhance inquiry learning through simulations and guided experimentation (Kunnath & Botes, 2025; Chang et al., 2023), students in this study reported more frequent use of general-purpose tools (e.g., ChatGPT) in their daily practices. This suggests a contextual distinction: Taiwanese students may rely more heavily on widely available generative AI platforms than on specialized educational applications, reflecting both accessibility factors and cultural differences in technology adoption.
Educational Implications: Educators could introduce students to a broader range of AI-based science tools beyond general-purpose chatbots. By integrating specialized educational AI platforms, such as TALP, into classroom activities, teachers can better align AI usage with inquiry-based learning goals, thus helping students move beyond surface-level applications toward deeper scientific engagement.

4.5. Gender Differences in AI Anxiety and Cultural Context

Female students expressed greater concern about AI use, reflecting broader cultural patterns of gendered technology anxiety observed in Taiwan. This aligns with global patterns, which show that women report higher AI-related anxiety, lower confidence in AI adoption, and greater ethical reservations about AI applications (Otis et al., 2025; Russo et al., 2025). These findings reinforce the importance of considering societal gender roles in AI education. Without targeted interventions, these differences may reinforce existing gender disparities in STEM participation and technology adoption (Guilbeault et al., 2024).
Educational Implications: Teachers should create supportive learning environments that allow female students to voice their concerns and engage in discussions about ethical AI. Embedding discussions on responsible use and critical evaluation into science lessons can help reduce anxiety about AI. Moreover, mentorship programs featuring female role models in STEM and AI fields may provide additional encouragement and support.

4.6. Model Fit and Improvement Considerations

The CFA model for this study revealed mixed fit results. While the incremental fit indices were strong (e.g., CFI = .956, TLI = .938), meeting recommended thresholds and indicating a good comparative fit against the null model, the absolute fit indices suggested a need for improvement. Specifically, the χ2/df ratio (7.189) and the SRMR (.057) exceeded their respective thresholds. Most notably, the RMSEA (.113) was markedly high, indicating substantial unexplained variance between the proposed model and the observed data. Furthermore, Hoelter’s CN (.05) value of 97 fell below the threshold of 200, suggesting that the sample size may be insufficient for the model’s complexity.
An examination of the model suggests two primary explanations for the poor absolute fit:
  • Construct Overlap and Correlated Errors: The CFA path diagram revealed a very high correlation (r = .90) between the AASIL and AASL constructs. This likely stems from semantic overlap between items in the AI-Assisted Science Learning Questionnaire (AASLQ) and the AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ). When items from different constructs describe similar AI-supported learning behaviors, it can introduce correlated measurement errors, which, in turn, inflates the RMSEA.
  • Lack of Indicator Representation: Within the AASL construct, the indicator potentially corresponding to the AI-Assisted Self-Directed Learning (AASDL) subscale loaded weakly (λ = .31), confirming its weakness as an indicator of this latent variable. The item content may be overly focused on “using AI to find answers” rather than on higher-order inquiry or reflection, resulting in weak theoretical coherence with the broader science learning construct.
To address these limitations and improve model fit, future refinements should be considered:
(1)
Item Revision: Prioritize the revision or broadening of the problematic AASDL item to better capture deeper aspects of inquiry-based self-directed learning, such as hypothesis generation and evidence evaluation.
(2)
Specify Error Covariances: Based on theoretical justification, allow error covariances between semantically similar items across the AASILQ and AASLQ. This would account for shared linguistic variance not captured by the latent constructs (a minimal illustration appears after this list).
(3)
Assess Discriminant Validity: Given the high correlation (r = .90) between AASIL and AASL, their discriminant validity must be rigorously tested. If the constructs prove to be empirically indistinct, the model could be simplified by merging them into a single, more parsimonious construct. The SEM analysis revealed acceptable fit indices overall but a relatively high RMSEA, suggesting that model improvement is needed.
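As a minimal illustration of points (2) and (3), the sketch below shows how one error covariance and a merged-construct alternative model could be written in lavaan-style syntax (here via Python’s semopy). All item names and the item-level DataFrame `item_scores` are hypothetical placeholders, not taken from the instruments.

```python
# Illustrative only: lavaan-style syntax for one added error covariance and a merged-factor
# comparison model. Item names and `item_scores` are hypothetical placeholders.
import semopy

three_factor = """
AASIL =~ asil_01 + asil_02 + asil_03
AASL  =~ aasl_01 + aasl_02 + aasl_03
AIL   =~ ail_01 + ail_02 + ail_03
asil_02 ~~ aasl_01
"""
# Alternative in which AASIL and AASL collapse into a single construct
merged = """
INQ =~ asil_01 + asil_02 + asil_03 + aasl_01 + aasl_02 + aasl_03
AIL =~ ail_01 + ail_02 + ail_03
"""

for desc in (three_factor, merged):
    m = semopy.Model(desc)
    m.fit(item_scores)                # hypothetical item-level DataFrame
    print(semopy.calc_stats(m))       # compare RMSEA, CFI, and AIC across the two models
```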

4.7. Practical and Policy Implications

This study contributes to the emerging field of AI-supported science education by demonstrating that AI integration must be pedagogically differentiated. Policymakers and curriculum designers should:
(1)
Integrate AI literacy as a cross-curricular competency aligned with inquiry-based science frameworks.
(2)
Encourage teacher professional development focused on balancing AI facilitation and human inquiry.

5. Conclusions

This study contributes to the growing understanding of how gender and giftedness jointly influence students’ engagement in AI-assisted scientific inquiry learning, AI-assisted science learning, and AI literacy. By integrating structural modeling, psychometric validation, and comparative analyses, several key insights emerge.

5.1. Theoretical Contributions

This study extends the literature on AI in education by positioning AI-assisted learning within inquiry-based learning (IBL) frameworks (Pedaste et al., 2015; Bybee, 2006) and theories of gifted education (Renzulli, 1978; Shore & Kanevsky, 1993). The findings extend theoretical discussions on AI in education by integrating Bandura’s self-efficacy theory, the Technology Acceptance Model (TAM), and Ziegler’s Actiotope Model of Giftedness. Gifted students’ higher AI literacy but lower AI reliance suggest that strong metacognitive and self-regulatory abilities moderate their engagement with AI, reflecting self-efficacy-driven autonomy.
This research further refines conceptual models of AI literacy by highlighting the influence of cognitive and sociocultural factors, especially gender norms, on technology confidence and use. By situating AI-assisted learning within the Inquiry-Based Learning (IBL) cycle (Pedaste et al., 2015), this study provides a theoretical bridge linking AI integration, scientific reasoning, and learner agency.

5.2. Practical Contributions

From a pedagogical perspective, AI should function as a scaffold for reasoning, not a replacement for students’ cognitive engagement. Teachers can guide students in using AI to model hypothesis generation, data analysis, and reflection, thereby strengthening their inquiry competence. AI literacy training combined with ethical discussions may reduce anxiety and increase confidence. For gifted females, who showed the lowest AI engagement, interventions such as teacher-guided modeling and confidence-building workshops can promote more assertive AI exploration.
At the curriculum level, aligning AI use with Taiwan’s 12-Year Basic Education Curriculum Guidelines can ensure that technology integration enhances inquiry authenticity and cultural relevance. Furthermore, to address the gender gap identified in this study, policies should prioritize enhancing female students’ confidence and participation in science and technology learning. Teacher professional development should emphasize both technical competence and gender sensitivity, preparing educators to bridge confidence gaps and foster equitable AI literacy.

5.3. Methodological Contributions

This study makes key methodological contributions by developing two original instruments—the AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ) and the AI-Assisted Science Learning Questionnaire (AASLQ)—and adapting the AI Literacy Questionnaire (AILQ) for the Taiwanese context. The AASILQ and AASLQ were designed based on the Inquiry-Based Learning (IBL) framework and AI-assisted scientific practices, while the AILQ was translated and localized from an established international scale (Ng et al., 2023).
Using Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM), the study confirmed strong construct validity and reliability across all three instruments, providing robust psychometric evidence for future research on AI-assisted learning. Although the RMSEA value exceeded the recommended threshold, suggesting limited absolute fit, the consistently strong incremental indices (CFI, IFI, TLI) affirmed the structural soundness of the models. Future model refinements may consider adding error covariances or revising latent path structures to further enhance parsimony and absolute fit.
Moreover, reporting Cohen’s d and η2 across gender and educational placement comparisons enhanced the methodological transparency and replicability of the analyses. These effect size indicators provided a more nuanced interpretation of group differences, contributing to emerging quantitative reporting standards in AI-in-education research.
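As an illustration of the reporting conventions described above, the short Python sketch below reproduces the effect sizes for the total AASIL comparison between gifted and non-gifted students (Table 9) from the reported means, standard deviations, and t value, using the standard pooled-SD formula for Cohen’s d and t²/(t² + df) for η².

```python
import numpy as np

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with a pooled standard deviation."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def eta_squared_from_t(t, df):
    """Eta-squared for an independent-samples t-test: t^2 / (t^2 + df)."""
    return t**2 / (t**2 + df)

# Total AASIL, gifted (n = 197) vs. non-gifted (n = 287) students, values from Table 9.
d = cohens_d(1.19, 1.02, 197, 1.54, 1.05, 287)
eta2 = eta_squared_from_t(-3.652, 197 + 287 - 2)
print(f"d = {d:.2f}, eta^2 = {eta2:.3f}")  # approximately -0.34 and .027, as reported
```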

5.4. Policy Recommendations

At the policy level, the integration of AI into science and gifted education should:
(1)
Promote AI literacy standards that emphasize ethical awareness, responsible use, and critical evaluation of AI applications in science learning.
(2)
Support teacher training programs focused on AI-facilitated inquiry, equitable learning design, and gender-responsive pedagogy to ensure inclusive classroom practices.
(3)
Encourage AI-based differentiated instruction models, enabling both gifted and non-gifted students to engage with AI tools at their optimal challenge level, thereby enhancing motivation and learning depth.
(4)
Establish national evaluation frameworks to monitor the effectiveness, equity, and ethical implications of AI-assisted learning environments across different educational contexts.
(5)
Strengthen gifted education policy by promoting the responsible and innovative use of AI for talent development. This includes providing gifted students with access to advanced AI-supported research opportunities, fostering creativity and scientific reasoning, and ensuring that AI use complements, rather than replaces, the development of higher-order thinking skills.

6. Limitations and Future Directions

This study is limited by its regional sampling (northern Taiwan) and reliance on self-reported data, which may not fully capture authentic AI use. The cross-sectional design also restricts causal inference.
Future research could employ longitudinal or experimental approaches to explore how AI self-efficacy and literacy evolve across diverse educational settings. Cross-cultural comparisons could further reveal how educational systems and cultural contexts shape AI adoption behaviors. Expanding the sample beyond northern Taiwan and incorporating qualitative data (e.g., think-aloud protocols) could enrich understanding of students’ cognitive and emotional engagement with AI tools.
AI holds transformative potential for gifted education by enabling personalized, inquiry-driven, and creative learning experiences. While its integration presents ethical, technical, and pedagogical challenges, the strategic application of AI—grounded in educational theory and critical oversight—can enhance both instruction and student agency. As the field advances, interdisciplinary collaboration and robust evaluation will be key to realizing AI’s full potential in nurturing gifted learners for an AI-driven world. Integrating AI-enhanced science inquiry into national gifted curricula should be prioritized. Platforms such as Inquirybot offer scalable models for embedding AI within hands-on learning without sacrificing instructional quality or inquiry fidelity (Chang et al., 2023). The development of AI systems that incorporate not only inquiry structure but also effective and curiosity-driven prompts is crucial to achieving high-level engagement in science learning (Kahraman & Kıyıcı, 2025).

Author Contributions

Conceptualization, M.-H.L. and C.-W.W.; methodology, M.-H.L. and C.-C.K.; software, M.-H.L. and C.-C.K.; formal analysis, M.-H.L. and C.-C.K.; investigation, M.-H.L. and C.-W.W.; data curation, M.-H.L. and C.-W.W.; writing—original draft preparation, M.-H.L. and C.-W.W.; writing—review and editing, C.-C.K. and C.-W.W.; visualization, M.-H.L. and C.-C.K.; supervision, C.-C.K.; project administration, M.-H.L. and C.-W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study. Because school teachers conducted the survey and students were not required to provide their names, the study fulfilled the IRB waiver criteria in Taiwan. Under Taiwan’s Ministry of Education regulations, teachers conducting teaching-related research at the elementary and secondary school level are not required to apply for IRB approval. The study used an anonymous, non-interactive, and non-invasive survey; no personally identifiable information was collected, and respondents cannot be individually identified from the data obtained. In accordance with the relevant ethical review regulations in Taiwan, research that meets these conditions is exempt from formal Institutional Review Board (IRB) review.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are unavailable due to privacy concerns.

Acknowledgments

The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AASILQ: AI-Assisted Scientific Inquiry Learning Questionnaire
AALA: AI-Assisted Learning Applications
AASL: AI-Assisted Science Learning
AASDL: AI-Assisted Self-Directed Learning
AI: Artificial Intelligence
AIAI: AI Application and Interest
AIER: Ethics and Responsibility in AI
AILE: AI Learning Engagement
AILQ: AI Literacy Questionnaire
AISE: AI Self-Efficacy
ASDAR: AI-Supported Data Analysis and Reporting
ASCE: AI-Supported Conceptualization and Explanation
ASEDM: AI-Supported Experimental Design and Methods
ASIMS: AI-Supported Information Management and Synthesis
EFA: Exploratory Factor Analysis
GF: Gifted Female
GM: Gifted Male
GS: Gifted Student
IBL: Inquiry-Based Learning
KMO: Kaiser–Meyer–Olkin
NGF: Non-Gifted Female
NGM: Non-Gifted Male
NGS: Non-Gifted Student

Appendix A

Table A1. Factor analysis for the AI-Assisted Scientific Inquiry Learning Questionnaire (AASILQ).
No.ItemsFactors
1234
27I used AI to generate charts during data analysis..690.362.359.239
10I used AI’s suggestions for data analysis or chart presentation..676.401.411.238
22I input data into AI for analysis..672.315.396.326
14I used AI to tabulate experimental data..668.438.395.222
17I used AI to analyze data, create experimental charts, and identify data trends..662.442.379.232
15I used AI to summarize correlations when analyzing data..646.308.417.356
12I used AI to identify relationships among data during data analysis..643.338.437.310
28I used AI to create charts for the final report..640.396.373.211
25I used AI to create engaging reports and concise conclusions..627.412.402.237
8I used AI to analyze experimental data and subsequently generate or revise scientific explanations..623.426.429.255
29I used AI to generate presentations for reports and then revise them to save time..570.377.424.279
13I used AI to help write and revise reports..552.362.498.307
1I used AI to list or summarize the experimental results..540.439.469.358
9I used AI to verify the consistency between the conclusion and the experiment..539.418.518.202
20I used AI to generate conclusions in scientific inquiry or project work..517.395.451.357
34I used AI to review and refine my writing for clarity and flow when preparing reports..513.362.377.328
40I used AI to design data recording tables and recommend measurement methods..370.795.308.193
42I used AI to design the inquiry process and list the experimental materials and procedures..348.789.304.224
38I used AI to create tables of multiple experimental and control groups during experiment design..381.779.331.194
39I used AI to help precisely control variables and develop the experimental design..383.777.281.242
43I consulted AI on possible variables to guide my experimental design..339.747.317.260
36I asked AI to draft an experimental design process and verify its suitability..408.747.316.237
45I asked AI to recommend effective digital tools..280.699.344.223
41I asked AI about the principles behind the experiment or related reactions..239.698.421.268
44I used AI to search for information, find references, design answer sheets, and brainstorm experimental methods..243.628.386.333
37I used AI to search for related experiments conducted by others..358.524.388.389
46I used AI to pose questions to me or suggest questions to ask..441.473.312.239
5I used AI to identify the scientific concepts contained in the data to help me construct scientific concepts..345.361.758.210
7I asked AI for possible explanations of phenomena and the scientific concepts involved..333.356.756.216
11I asked AI about the definitions of scientific terms or concepts and their applications..329.301.743.275
18I used AI to help understand complex and abstract concepts..337.293.721.256
4I used AI to search for scientific concepts and focus my thinking..371.335.711.282
6I used AI to verify the accuracy of experimental arguments..415.399.676.219
3I used AI to provide theories or doctrines related to scientific concepts..426.385.666.241
2I used AI to generate or revise scientific explanations..493.416.615.190
19I aligned my reasoning with AI and clarify misunderstandings..511.312.577.266
21I used AI to create mind maps or concept maps to support learning..475.409.567.146
16I used AI to summarize reports to support learning..509.378.566.262
24I used AI to find supporting evidence when preparing reports..373.362.552.437
23I used AI to compile a list of potential references..405.392.533.413
35I used AI to extract keywords or help outline the data..319.385.399.622
26I used AI to gather and integrate related information..466.439.360.573
31I used AI to summarize key points from collected data and analyze their relevance to the experiment..463.448.348.543
30I used AI to organize collected data into lists or tables for faster, clearer presentation..511.427.330.521
33I used AI to organize data and keep a record of my notes..500.451.335.511
32I used AI to support data collection methods, not to produce false data..473.416.380.496
Percentage of Variance Explained (%)23.65923.24622.77410.435
Cumulative Percentage of Variance Explained (%)80.14%
Note: Extraction method: principal axis factoring. Rotation method: varimax rotation. Factor 1 = ASDAR; Factor 2 = ASEDM; Factor 3 = ASCE; Factor 4 = ASIMS.
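For readers who wish to reproduce an analysis of this kind, the sketch below shows how a four-factor solution with principal-axis extraction and varimax rotation, together with the KMO and Bartlett checks, could be obtained in Python. It assumes the factor_analyzer package and a hypothetical aasilq_items.csv file holding the 46 item responses, and is illustrative rather than the authors’ procedure.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Item responses, one column per AASILQ item (hypothetical file name).
df = pd.read_csv("aasilq_items.csv")

# Sampling adequacy and sphericity checks commonly reported alongside EFA.
kmo_per_item, kmo_total = calculate_kmo(df)
chi_square, p_value = calculate_bartlett_sphericity(df)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Four factors, principal-axis extraction, varimax rotation (as in Table A1).
efa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
efa.fit(df)
loadings = pd.DataFrame(efa.loadings_, index=df.columns)
print(loadings.round(3))
print("Variance explained:", efa.get_factor_variance())
```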

Appendix B

Table A2. Exploratory Factor Analysis: Explained Total Variance of the AASILQ.
ComponentInitial EigenvaluesExtraction Sums of Squared LoadingsRotation Sums of Squared Loadings
Total% of VarianceCumulative %Total% of VarianceCumulative %Total% of VarianceCumulative %
133.93773.77673.77633.74173.35173.35110.88323.65923.659
21.6873.66877.4441.5223.30876.65910.69323.24646.905
31.1392.47579.919.9532.07178.73010.47622.77469.679
4.8311.80781.727.6361.38380.1144.80010.43580.114
5.7301.58683.313
6.5641.22784.539
7.432.93885.478
8.386.84086.317
9.367.79887.116
10.339.73787.853
11.328.71288.566
12.298.64889.214
13.280.60989.823
14.270.58790.410
15.260.56690.976
16.255.55591.531
17.232.50592.036
18.211.45992.495
19.205.44792.941
20.201.43793.378
21.190.41493.792
22.184.40094.192
23.179.39094.581
24.167.36294.944
25.166.36095.304
26.156.33995.643
27.153.33395.975
28.144.31396.288
29.143.31196.599
30.135.29396.892
31.126.27497.166
32.123.26797.434
33.115.25097.683
34.110.24097.923
35.108.23498.157
36.098.21298.370
37.096.21098.579
38.090.19598.774
39.087.19098.964
40.081.17699.140
41.080.17599.315
42.073.15999.474
43.070.15199.625
44.064.13899.763
45.057.12399.886
46.052.114100.000
Extraction Method: Principal Component Analysis

Appendix C

Table A3. Factor analysis for the AI-Assisted Science Learning Questionnaire (AASLQ).
No.ItemsFactors
12
1I used AI to find, collect, and compile information, and organize my notes..810.151
2I used AI to clarify concepts, search for related ideas, and verify their accuracy..808.201
3I asked AI to provide questions related to a specific topic, observed phenomena, or data..804.091
4I engaged with AI to discuss problems and ask about theorems or concepts I don’t understand..799.161
5I used AI to analyze data and produce result descriptions..791.102
6I asked AI to gather questions on related inquiry topics from the internet..779.155
7I asked AI for alternative ideas and to assess the quality of my chosen topic..773.130
8I used AI to seek inspiration and gather suggestions..765.122
9.I used AI to assist in completing assignments or reports..732.037
10I used AI to create learning outputs, generating text, articles, reports, and presentations..724.023
11I used AI to explain difficult scientific concepts..707.161
12I used AI to generate practice questions and answers..689.133
13I used AI to complete assignments or reports..655.073
14I used AI as a learning partner or personal tutor..605.105
15I used AI to research and translate information..520.086
16I used AI in self-directed learning to get immediate answers to scientific questions I don’t understand..145.793
17I used AI to collect information and literature related to the question..067.774
18I used AI to find answers to questions..035.757
19I used AI to verify the accuracy of information..182.664
20I used AI to improve my questioning skills..116.587
Percentage of Variance Explained (%)40.91414.095
Cumulative Percentage of Variance Explained (%)55.00%
Note: Extraction method: principal axis factoring. Rotation method: varimax rotation. Factor 1 = AALA; Factor 2 = AASDL.

Appendix D

Table A4. Exploratory Factor Analysis: Explained Total Variance of the AASLQ.
ComponentInitial EigenvaluesExtraction Sums of Squared LoadingsRotation Sums of Squared Loadings
Total% of VarianceCumulative %Total% of VarianceCumulative %Total% of VarianceCumulative %
19.08345.41645.4168.66143.30343.3038.18340.91440.914
22.79013.94959.3652.34111.70655.0092.81914.09555.009
31.0075.03664.402
4.9334.66469.065
5.7673.83472.899
6.6983.49176.390
7.5842.91879.309
8.5722.86282.170
9.4692.34484.515
10.4592.29486.808
11.3561.77988.588
12.3401.69890.285
13.3291.64391.929
14.3041.52193.449
15.2661.33294.781
16.2521.26196.042
17.2271.13697.178
18.2171.08598.263
19.188.93899.200
20.160.800100.000
Extraction Method: Principal Component Analysis.

Appendix E

Table A5. Factor analysis for the AI Literacy Questionnaire (AILQ).
No.ItemsFactors
1234
1I believe users should be informed of the purpose, operation, and potential limitations of AI systems..823.130.103.160
3I believe people should be responsible for their use of AI systems..784.162.049.213
2I believe AI systems should comply with ethical and legal standards..780.108.093.146
4I believe users have a responsibility to understand the design and decision-making processes of the AI they use..744.231.198.058
6I understand that the misuse of AI can pose tangible risks to humans..710.245.101.067
5I believe AI systems should undergo rigorous testing to ensure they operate as intended..709.202.112.029
7I believe AI can be used to help disadvantaged groups..701.097.130.255
8I believe AI systems should benefit everyone, regardless of physical condition or gender..676.113.116.185
9I know how to use AI applications (e.g., Siri, chatbots)..542.300.169.323
17I can evaluate AI applications and concepts based on the needs of different contexts..499.468.317.147
10I can use AI applications to solve problems..485.359.212.395
21I know what AI is and can recall its definition..403.388.392.082
12I am confident in my ability to perform well in AI-related tasks..225.775.331.189
11I believe I can acquire AI knowledge and skills..251.770.267.176
13I believe I can achieve good results in AI-related assessments..188.749.360.164
14I am confident I can excel in AI-related projects..167.748.383.180
15I am confident in my ability to perform well in AI-related tasks..203.715.331.279
16I can understand AI-related resources and tools..367.594.224.258
20I will keep myself up to date with the latest AI technologies..276.470.439.414
19I can compare different AI concepts (e.g., deep learning, machine learning) and their differences..298.457.452.127
18I can develop AI-driven solutions (e.g., chatbots, robotics) to solve problems..316.443.313.236
22I often try to explain what I’ve learned about AI to classmates or friends..053.248.803.061
23I often discuss AI-related topics with classmates in my free time..027.288.798.090
24I will try to work with classmates to complete AI learning tasks and projects..187.261.701.229
25I will actively participate in AI-related learning activities..224.425.651.296
26I am highly engaged in AI-related learning content..219.410.628.318
27I plan to spend time in the future exploring new features of AI applications..251.392.539.394
28AI is related to my daily life (e.g., personal, work)..301.197.206.562
30Learning about AI is interesting to me..229.350.447.559
29I will continue to use AI in the future..504.239.104.551
31I am curious about exploring new AI technologies..262.468.332.504
32Learning about AI makes my daily life more meaningful..198.354.479.491
Percentage of Variance Explained (%)20.85218.44015.7138.873
Cumulative Percentage of Variance Explained (%)63.878
Note: Extraction method: principal axis factoring. Rotation method: varimax rotation. Factor 1 = AIER; Factor 2 = AISE; Factor 3 = AILE; Factor 4 = AIAI.

Appendix F

Table A6. Exploratory Factor Analysis: Explained Total Variance of the AILQ.
ComponentInitial EigenvaluesExtraction Sums of Squared LoadingsRotation Sums of Squared Loadings
Total% of VarianceCumulative %Total% of VarianceCumulative %Total% of VarianceCumulative %
115.81149.40849.40815.81149.40849.4086.75321.10421.104
23.55511.10960.5173.55511.10960.5176.50720.33641.440
31.2693.96564.4821.2693.96564.4825.16916.15357.592
41.1873.71068.1921.1873.71068.1923.39210.60068.192
5.9633.01071.203
6.6632.07373.275
7.6472.02175.296
8.6091.90377.199
9.5481.71378.912
10.5111.59680.508
11.4931.54282.050
12.4681.46383.513
13.4161.30184.815
14.4011.25386.067
15.3731.16687.233
16.3491.09188.324
17.3411.06689.390
18.3341.04390.434
19.303.94691.380
20.293.91592.295
21.267.83593.130
22.263.82393.952
23.246.76994.722
24.229.71695.438
25.227.70996.147
26.219.68596.832
27.196.61197.444
28.187.58598.029
29.171.53598.564
30.166.51899.082
31.151.47199.553
32.143.447100.000
Extraction Method: Principal Component Analysis.

Appendix G

Table A7. Robust Tests of Equality of Means Using Welch and Brown–Forsythe Statistics for AASL.
Source | Test | Statistic (F) a | df1 | df2 | p
AALA | Welch | 4.207 | 3 | 214.120 | .006
AALA | Brown–Forsythe | 3.974 | 3 | 396.741 | .008
AASDL | Welch | 5.518 | 3 | 216.563 | .001
AASDL | Brown–Forsythe | 4.647 | 3 | 405.250 | .003
AASL | Welch | 6.601 | 3 | 217.153 | <.001
AASL | Brown–Forsythe | 6.107 | 3 | 409.569 | <.001
a Asymptotically F distributed. NOTE: AALA: AI-Assisted Learning Applications, AASDL: AI-Assisted Self-Directed Learning, AASL: AI-Assisted Science Learning.
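A minimal sketch of how robust tests of this kind could be computed is given below. It assumes statsmodels’ anova_oneway function and a hypothetical aasl_scores.csv file with one row per student, and it is not the authors’ analysis script.

```python
import pandas as pd
from statsmodels.stats.oneway import anova_oneway

# One row per student, with columns "group" (GM/GF/NGM/NGF) and "AASL" (hypothetical file).
df = pd.read_csv("aasl_scores.csv")
samples = [g["AASL"].to_numpy() for _, g in df.groupby("group")]

# Welch's F: does not assume equal variances across the four groups.
welch = anova_oneway(samples, use_var="unequal", welch_correction=True)
print("Welch F:", round(welch.statistic, 3), "p:", round(welch.pvalue, 4))

# Brown-Forsythe adjusted F as a second robust check.
bf = anova_oneway(samples, use_var="bf")
print("Brown-Forsythe F:", round(bf.statistic, 3), "p:", round(bf.pvalue, 4))
```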

Appendix H

Table A8. Tests of Between-Subjects Effects in AASILQ Subscales and Total Score.
SubscalesGroupBStd. ErrortSig.
(2-Tailed)
95% Confidence Interval Partial η2λObserved Power
LowerUpper
1. ASDARIntercept1.528.08318.463<0.0011.3651.691.42818.4631.000
GM−.3150.129−2.4470.015−0.568−0.062.0132.447.685
GF−0.5950.153−3.885<0.001−0.896−0.294.0323.885.972
NGM0.0210.1310.1640.870−0.2360.279.0000.164.053
NGF0 a
2. ASEDMIntercept1.4250.08516.791<.0011.2581.592.38316.7911.000
GM−0.1540.132−1.1630.245−0.4130.106.0031.163.213
GF−0.4980.157−3.1730.002−0.807−0.190.0223.173.886
NGM0.1310.1340.9790.328−0.1320.395.0020.979.165
NGF0 a
3. ASCEIntercept1.5420.08418.421<.0011.3771.706.42718.4211.000
GM−0.1690.130−1.2980.195−0.4250.087.0041.298.254
GF−0.5770.155−3.723<.001−0.881−0.272.0303.723.960
NGM0.0430.1320.3240.746−0.2170.303.0000.324.062
NGF0 a
4. ASIMSIntercept1.5740.08618.205<.0011.4041.744.42118.2051.000
GM−0.1920.135−1.4240.155−0.4560.073.0041.424.295
GF−0.3920.160−2.4520.015−0.707−0.078.0132.452.687
NGM0.0970.1370.7130.476−0.1710.366.0010.713.110
NGF0 a
ASILQIntercept1.5170.08018.893<.0011.3591.675.44018.8931.000
GM−0.2070.125−1.6590.098−0.4530.038.0061.659.381
GF−0.5160.149−3.469<.001−0.808−0.224.0263.469.933
NGM0.0730.1270.5770.564−0.1760.323.0010.577.089
NGF0 a
NOTE: ASDAR: AI-Supported Data Analysis and Reporting, ASEDM: AI-Supported Experimental Design and Methods, ASCE: AI-Supported Conceptualization and Explanation, ASIMS: AI-Supported Information Management and Synthesis, AASILQ: AI-assisted scientific inquiry learning Questionnaire, GM: gifted male, GF: gifted female, NGM: non-gifted male, and NGF: non-gifted female. a: this parameter is set to zero because it is redundant.

References

1. Akhmadieva, R. S., Udina, N. N., Kosheleva, Y. P., Zhdanov, S. P., Timofeeva, M. O., & Budkevich, R. L. (2023). Artificial intelligence in science education: A bibliometric review. Contemporary Educational Technology, 15(4), ep460.
2. Bybee, R. W. (2006). Scientific inquiry and science teaching. In L. B. Flick, & N. G. Lederman (Eds.), Scientific inquiry and nature of science (pp. 1–14). Springer.
3. Chang, J., Park, J., & Park, J. (2023). Using an artificial intelligence chatbot in scientific inquiry: Focusing on a guided-inquiry activity using Inquirybot. Asia-Pacific Science Education, 9, 44–74.
4. Drost, E. A. (2011). Validity and reliability in social science research. Education Research and Perspectives, 38, 105–123.
5. Eysink, T., Gersen, L., & Gijlers, H. (2015). Inquiry learning for gifted children. High Ability Studies, 26(1), 63–74.
6. Gardner, H. (1983). Frames of mind. Bantam Books.
7. Guilbeault, D., Delecourt, S., Hull, T., Desikan, B. S., Chu, M., & Nadler, E. (2024). Online images amplify gender bias. Nature, 626, 1049–1055.
8. Jackson, N. E., & Butterfield, E. C. (1986). A conception of giftedness designed to promote research. In R. J. Sternberg, & J. E. Davidson (Eds.), Conceptions of giftedness (pp. 151–181). Cambridge University Press.
9. Kahraman, N., & Kıyıcı, G. (2025). Evaluating the efficacy of AI-generated inquiry-based lesson plans in science. Sakarya University Journal of Education, 15(1), 40–53.
10. Kim, J. (2023). The use of AI-based writing feedback for gifted elementary students: A case study. Asia-Pacific Journal of Gifted and Talented Education, 35(2), 119–136.
11. Kunnath, A. J., & Botes, W. (2025). Transforming science education with artificial intelligence: Enhancing inquiry-based learning and critical thinking in South African science classrooms. EURASIA Journal of Mathematics, Science and Technology Education, 21(6), em2655.
12. Ministry of Education. (2018). Curriculum guidelines of 12-year basic education. Ministry of Education.
13. National Association for Gifted Children. (2014). What is giftedness? Available online: http://www.nagc.org/resources-publications/resources/definitions-giftedness (accessed on 25 August 2025).
14. Ng, D. T. K., Leung, J. K. L., Su, J., Chu, S. K. W., & Qiao, M. S. (2023). Teachers’ AI digital competencies and twenty-first century skills in the post-pandemic world. Educational Technology Research and Development, 71, 137–161.
15. Nursurila, N. (2025). Analysis of artificial intelligence assistance in inquiry learning model on students’ critical thinking. Journal of Physics, Science, and Technology Education, 1(1), 30–35.
16. Otis, N. G., Delecourt, S., Cranney, K., & Koning, R. (2025). Global evidence on gender gaps and generative AI. Harvard Business School.
17. Özgür, S. D., & Yilmaz, A. (2017). The effect of inquiry-based learning on gifted and talented students’ understanding of acid–base concepts and motivation. Journal of Baltic Science Education, 16(6), 994–1008.
18. Pedaste, M., Mäeots, M., Siiman, L. A., De Jong, T., Van Riesen, S. A., Kamp, E. T., Manoli, C. C., Zacharia, Z. C., & Tsourlidaki, E. (2015). Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review, 14, 47–61.
19. Pfeiffer, S. I. (Ed.). (2008). Handbook of giftedness in children: Psychoeducational theory, research, and best practices. Springer Science + Business Media.
20. Renzulli, J. S. (1978). What makes giftedness? Reexamining a definition. Phi Delta Kappan, 60, 180–184.
21. Russo, C., Romano, L., Clemente, D., Iacovone, L., Gladwin, T. E., & Panno, A. (2025). Gender differences in artificial intelligence: The role of artificial intelligence anxiety. Frontiers in Psychology, 16, 1559457.
22. Shore, B. M., & Kanevsky, L. S. (1993). Thinking processes: Being and becoming gifted. In International handbook of research and development of giftedness and talent (pp. 133–147). Pergamon Press.
23. Siegle, D. (2023). A role for ChatGPT and AI in gifted education. Gifted Child Today, 46(3), 211–219.
24. Siegle, D. (2024). Using Artificial Intelligence (AI) technology to support the three legs of talent development. Gifted Child Today, 47(3), 221–227.
25. Tang, X., Coffey, J. E., Elby, A., & Levin, D. M. (2009). The scientific method and scientific inquiry: Tensions in teaching and learning. Science Education, 94, 29–47.
26. Trpin, A. (2024). Teaching gifted students in mathematics: A literature review. International Journal of Childhood Education, 5(1), 1–13.
27. Ziegler, A. (2005). The actiotope model of giftedness. In R. J. Sternberg, & J. E. Davidson (Eds.), Conceptions of giftedness (2nd ed., pp. 411–436). Cambridge University Press.
Figure 1. Conceptual Alignment Between AI Literacy Dimensions and Inquiry-Based Learning Phases.
Figure 2. Structural Equation Model (SEM) diagram. Note: ASDAR: AI-Supported Data Analysis and Reporting; ASEDM: AI-Supported Experimental Design and Methods; ASCE: AI-Supported Conceptualization and Explanation; ASIMS: AI-Supported Information Management and Synthesis; AASIL: AI-Assisted Scientific Inquiry Learning; AALA: AI-Assisted Learning Applications; AASDL: AI-Assisted Self-Directed Learning; AASL: AI-Assisted Science Learning; AIER: Ethics and Responsibility in AI; AISE: AI Self-Efficacy; AILE: AI Learning Engagement; AIAI: AI Application and Interest; AIL: AI literacy.
Figure 3. Q–Q plot of AI-Assisted Scientific Inquiry Learning (AASIL).
Figure 4. Q–Q plot of AI-Assisted Science Learning (AASL).
Figure 5. Q–Q plot of AI literacy (AIL).
Figure 6. Distribution of AASILQ scores across gifted and non-gifted students according to gender.
Figure 7. Boxplot of AI-Assisted Science Learning (AASL) scores across student groups.
Figure 8. Boxplot of AI literacy (AIL) scores across student groups.
Table 1. Distribution of Participants.
Group | Male | Female | Undisclosed | Total
Gifted Students (GSs) | 116 | 68 | 13 | 197
Non-Gifted Students (NGSs) | 110 | 165 | 12 | 287
Total | 226 | 233 | 25 | 484
Table 2. Mapping of AASILQ Subscales to IBL Phases, TSCI Stages, and Questionnaire Items.
AASILQ Subscale | Corresponding IBL Phase | TSCI Stage (Taiwan Curriculum) | Questionnaire Items
ASDAR | Conclusion/Discussion | Analysis and Discovery | 1, 8, 9, 10, 12, 13, 14, 15, 17, 20, 22, 25, 27, 28, 29, 34
ASEDM | Investigation | Planning and Execution | 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46
ASCE | Conceptualization | Discussion and Communication | 2, 3, 4, 5, 6, 7, 11, 16, 18, 19, 21, 23, 24
ASIMS | Orientation | Observation and Problem Definition | 26, 30, 31, 32, 33, 35
Note. AASILQ = AI-assisted Scientific Inquiry Learning Questionnaire; IBL = Inquiry-Based Learning framework (Pedaste et al., 2015); TSCI = Taiwanese Science Curriculum Inquiry framework (Ministry of Education, 2018).
Table 3. The reliability of the AASILQ.
Subscale | Number of Items | Cronbach’s α | Guttman Split-Half | McDonald’s ω
ASDAR | 16 | .983 | .979 | .983
ASEDM | 11 | .975 | .959 | .975
ASCE | 13 | .980 | .966 | .980
ASIMS | 6 | .966 | .965 | .966
AASILQ | 46 | .992 | .968 | .992
Note. ASDAR: AI-Supported Data Analysis and Reporting; ASEDM: AI-Supported Experimental Design and Methods; ASCE: AI-Supported Conceptualization and Explanation; ASIMS: AI-Supported Information Management and Synthesis; AASILQ: AI-Assisted Scientific Inquiry Learning Questionnaire.
Table 4. The reliability of the AASL.
Subscale | Number of Items | Cronbach’s α | Guttman Split-Half | McDonald’s ω
AALA | 15 | .947 | .941 | .948
AASDL | 5 | .841 | .729 | .835
AASL | 20 | .934 | .832 | .938
Note. AALA: AI-Assisted Learning Applications; AASDL: AI-Assisted Self-Directed Learning; AASL: AI-Assisted Science Learning.
Table 5. The reliability of the AILQ.
Subscale | Number of Items | Cronbach’s α | Guttman Split-Half | McDonald’s ω
AIER | 12 | .932 | .888 | .931
AISE | 9 | .939 | .873 | .939
AILE | 6 | .921 | .850 | .919
AIAI | 5 | .871 | .848 | .870
AILQ | 32 | .966 | .891 | .965
AIER: Ethics and Responsibility in AI; AISE: AI Self-Efficacy; AILE: AI Learning Engagement; AIAI: AI Application and Interest. AILQ: AI Literacy Questionnaire.
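For reference, the Cronbach’s α values reported in Tables 3–5 follow the standard formula α = (k/(k−1))(1 − Σσ²_item/σ²_total). The minimal Python sketch below implements this formula on a small illustrative score matrix; the data are invented for demonstration only and are not taken from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: 5 respondents, 4 Likert-type items (illustrative data only).
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [2, 2, 1, 2],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(scores), 3))
```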
Table 6. Pearson correlations among AASILQ, AASLQ, and AILQ (N = 484).
ASDARASEDMASCEASIMSAASILQAALAAASDLAASLQAIERAISEAILEAIAIAILQ
1.ASDAR1
2.ASEDM.869 **1
3.ASCE.921 **.846 **1
4.ASIMS.901 **.864 **.867 **1
AASILQ.968 **.939 **.953 **.953 **1
1.AALA.747 **.766 **.755 **.752 **.792 **1
2.AASDL.218 **.232 **.224 **.213 **.233 **.277 **1
AASLQ.658 **.679 **.667 **.659 **.698 **.873 **.711 **1
1.AIER.061.035.071.049.057.110 *.256 **.211 **1
2.AISE.270 **.290 **.271 **.219 **.275 **.305 **.313 **.382 **.672 **1
3.AILE.346 **.351 **.345 **.305 **.353 **.347 **.343 **.429 **.526 **.779 **1
4.AIAI.237 **.237 **.256 **.191 **.242 **.329 **.380 **.433 **.677 **.763 **.709 **1
AILQ.273 **.274 **.281 **.228 **.277 **.321 **.372 **.424 **.798 **.922 **.878 **.899 **1
ASDAR: AI-Supported Data Analysis and Reporting; ASEDM: AI-Supported Experimental Design and Methods; ASCE: AI-Supported Conceptualization and Explanation; ASIMS: AI-Supported Information Management and Synthesis; AASILQ: AI-Assisted Scientific Inquiry Learning Questionnaire; AALA: AI-Assisted Learning Applications; AASDL: AI-Assisted Self-Directed Learning; AASLQ: AI-Assisted Science Learning Questionnaire; AIER: Ethics and Responsibility in AI; AISE: AI Self-Efficacy; AILE: AI Learning Engagement; AIAI: AI Application and Interest. AILQ: AI Literacy Questionnaire. Note: ** p < .01; * p < .05.
Table 7. CFA Results and Model Fit Indices for the AASILQ, AASLQ, and AILQ.
Fit Index | Recommended Threshold | Obtained Value | Model Evaluation
χ²/df | <5.00 | 7.189 | Poor fit
GFI | >.90 | .915 | Good
AGFI | >.90 | .853 | Marginal
SRMR | <.05 | .057 | Slightly above threshold
RMSEA | <.08 | .113 | Weak absolute fit
ECVI | < independence model (9.371) | .571 | Good
NFI | >.90 | .949 | Good
RFI | >.90 | .928 | Good
IFI | >.90 | .956 | Good
TLI (NNFI) | >.90 | .938 | Good
CFI | >.90 | .956 | Good
PGFI | >.50 | .532 | Acceptable
PNFI | >.50 | .675 | Acceptable
AIC | < independence model (4526.284) | 276.033 | Good (parsimonious)
CAIC | < independence model (4578.105) | 395.221 | Good
Hoelter’s CN (0.05) | >200 | 97 | Below threshold (sample limitation)
Note. GFI = Goodness-of-Fit Index; AGFI = Adjusted GFI; RMSEA = Root Mean Square Error of Approximation; ECVI = Expected Cross-Validation Index; NFI = Normed Fit Index; RFI = Relative Fit Index; IFI = Incremental Fit Index; TLI = Tucker–Lewis Index; CFI = Comparative Fit Index; PGFI = Parsimony Goodness-of-Fit Index; PNFI = Parsimony Normed Fit Index; AIC = Akaike Information Criterion; CAIC = Consistent AIC. Values meeting or exceeding recommended thresholds indicate acceptable or strong model fit.
Table 8. T-test Results for AASIL Dimensions by Gender.
DimensionMale
(n = 226)
M (SD)
Female
(n = 233)
M (SD)
t-Test Cohen’s d [95% CI] η2Effect Size
Interpretation
ASDAR1.38 (1.11)1.35 (1.06).22 (.413)0.02 [−0.16, 0.20].0001Trivial
ASEDM1.41(1.13)1.28 (1.08)1.266 (.103)0.12 [−0.07, 0.30].004Negligible
ASCE1.48 (1.13)1.37 (1.05)1.01 (.158)0.09 [−0.09, 0.28].002Trivial
ASIMS1.52(1.15)1.46 (1.09).609 (.272)0.06 [−0.13, 0.24].001Trivial
Total AASIL1.45 (1.08)1.37 (1.02).815 (.208)0.08 [−0.11, 0.26].001Trivial
Note. None of the gender-based differences reached statistical significance (p > .05). Effect sizes (η2 < 0.01) indicate trivial gender influence across all AASIL dimensions.
Table 9. T-test Results for AASIL Dimensions by Educational Placement (Gifted vs. Non-Gifted).
DimensionGS
(n = 197)
M (SD)
NGS
(n = 287)
M (SD)
t-TestCohen’s d [95% CI] η2Effect Size Interpretation
ASDAR1.12 (1.04)1.54 (1.08)−4.247 ***−0.39 [−0.58, −0.21].036Small–Medium
ASEDM1.15 (1.08)1.47 (1.09)−3.226 ***−0.30 [−0.48, −0.12].021Small
ASCE1.22 (1.07)1.56 (1.09)−3.455 ***−0.32 [−0.50, −0.14].024Small
ASIMS1.30 (1.09)1.61 (1.13)−3.006 **−0.28 [−0.46, −0.10].019Small
Total AASIL1.19 (1.02)1.54 (1.05)−3.652 ***−0.34 [−0.52, −0.16].027Small–Medium
Note: *** p < .001; ** p < .01; ASDAR: AI-Supported Data Analysis and Reporting; ASEDM: AI-Supported Experimental Design and Methods; ASCE: AI-Supported Conceptualization and Explanation; ASIMS: AI-Supported Information Management and Synthesis; AASIL: AI-Assisted Scientific Inquiry Learning; GS: gifted students; NGS: non-gifted students.
Table 10. Gender-based differences in AASIL among gifted and non-gifted students.
DimensionsNMSDSourceSum of SquaresMean
Square
FpPost-Hoc
ASDAR NGF > GF
NGM > GF
GM1161.2131.083between23.50937.8366.934.000 ***
GF68.933.932
NGM1101.5491.111within514.2094551.130
NGF1651.5281.067
Total4591.3651.084total537.719458
ASEDM NGF > GF
NGM > GF
GM1161.2711.120between18.50036.1675.191.002 **
GF680.927.972
NGM1101.5561.126within540.5484551.188
NGF1651.4251.090
Total4591.3441.105total559.048458
ASCE
GM1161.3731.124between19.76136.5875.699.001 **NGF > GF
NGM > GF
GF68.965.891
NGM1101.5851.131within525.8534551.156
NGF1651.5421.071
Total4591.4241.091total545.614458
ASIMS
GM1161.3821.125between12.59434.1983.405.018 *NGM > GF
GF681.181.992
NGM1101.6711.155within561.0314551.233
NGF1651.5741.115
Total4591.4911.119total573.626458
AASIL
GM1161.3101.063between17.98135.9945.634.001 **NGF > GF
NGM > GF
GF681.001.894
NGM1101.5901.075within484.0424551.064
NGF1651.5171.031
Total4591.4061.047total502.022458
Note: *** p < .001; ** p < .01; * p < .05. ASDAR: AI-supported data Analysis and Reporting; ASEDM: AI-Supported Experimental Design and Methods; ASCE: AI-Supported Conceptualization and Explanation; ASIMS: AI-Supported Information Management and Synthesis; AASIL: AI-Assisted Scientific Inquiry Learning; GM: gifted males; GF: gifted females; NGM: non-gifted males; NGF: non-gifted females.
Table 11. Descriptive statistics and independent samples t-test results for AASL dimensions according to gender.
DimensionMale
(n = 226)
M (SD)
Female
(n = 233)
M (SD)
t-Test Cohen’s d [95% CI]η2Effect Size
Interpretation
AALA1.74 (0.931)1.79 (0.877)−0.583−0.054 [−0.237, 0.129].001Trivial
AASDL3.06 (0.655)2.98 (0.571)1.471 *0.137 [−0.046, 0.320].005Trivial
Total AASL2.40 (0.616)2.39 (0.612).3050.029 [−0.154, 0.212].000Trivial
Note: * p < .05. AALA: AI-Assisted Learning Applications; AASDL: AI-Assisted Self-Directed Learning; AASL: AI-Assisted Science Learning; M: mean; SD: standard deviation.
Table 12. Descriptive statistics and t-test results for AASL dimensions by educational placement.
DimensionGS
(n = 197)
M (SD)
NGS
(n = 287)
M (SD)
t-Test Cohen’s d [95% CI]η2Effect Size
Interpretation
AALA1.62 (.894)1.88 (.902)−3.15 *** −0.291 [−0.473, −0.109].021Small
AASDL2.932 (.624)3.078 (.629)−2.45 ** −0.227 [−0.409, −0.045].013Small
Total AASL2.272 (.614)2.48 (.611)3.567 ***−0.330 [−0.512, −0.147].026Small–Medium
Note: ** p < 0.01; *** p < 0.001. AALA: AI-Assisted Learning Applications; AASDL: AI-Assisted Self-Directed Learning; AASL: AI-Assisted Science Learning; GS: gifted students (n = 197); NGS: non-gifted students (n = 287); M: mean; SD: standard deviation.
Table 13. Comparison of AASL among different groups.
NMSDSourceSum of SquaresMean SquareFpScheffe
AALA
GM1161.65.91between9.45233.1513.937.009NGF > GF
GF681.52.83
NGM1101.84.95within364.115455.800
NGF1651.90.87
Total4591.77.90total373.568458
AASDL
GM1163.00.65between5.03431.6784.542.004NGM > GF
GF682.79.51 NGM > GF
NGM1103.13.66within168.108455.369
NGF1653.06.58
Total4593.02.61total173.142458
AASL
GM1162.33.64between6.49432.1655.909<.001NGM > GF
GF682.16.55 NGF > GF
NGM1102.48.58within166.681455.366
NGF1652.48.62
Total4592.39.61total173.175458
Note: AALA: AI-Assisted Learning Applications; AASDL: AI-Assisted Self-Directed Learning; AASL: AI-Assisted Science Learning; GM: gifted male students; GF: gifted female students; NGM: non-gifted male students; NGF: non-gifted female students.
Table 14. Differences in AI literacy according to gender.
DimensionMale
(n = 226)
M (SD)
Female
(n = 233)
M (SD)
t-Test Cohen’s d [95% CI]η2Effect Size Interpretation
AIER3.54 (0.75)3.41 (0.60)2.19 * 0.204 [0.020, 0.387].010Small
AISE3.47 (0.90)3.15 (0.75)4.21 ***0.393 [0.208, 0.577].037Small–Medium
AILE3.30 (0.97)2.93 (0.86)4.27 ***0.399 [0.214, 0.583].038Small–Medium
AIAI3.71 (0.89)3.59 (0.76)1.50 0.140 [−0.043, 0.324].005Trivial–Small
Total AI Literacy3.51 (0.78)3.27 (0.63)3.57 ***0.333 [0.149, 0.517].027Small
Note: * p < .05; *** p < .001. AIER: Ethics and Responsibility in AI; AISE: AI Self-Efficacy; AILE: AI Learning Engagement; AIAI: AI Application and Interest; M: mean; SD: standard deviation.
Table 15. Differences in AI literacy according to educational placement.
DimensionGS
(n = 197)
M (SD)
NGS
(n = 287)
M (SD)
t-Test Cohen’s d [95% CI]η2Effect Size
Interpretation
AIER3.61 (0.69)3.35 (0.68)4.15 *** 0.384 [0.200, 0.566].035Small–Medium
AISE3.40 (0.87)3.25 (0.83)1.94 0.180 [−0.002, 0.361].008Small
AILE3.07 (0.95)3.13 (0.94)−0.68−0.063 [−0.244, 0.118].001Trivial
AIAI3.73 (0.84)3.58 (0.82)1.85 *0.172 [−0.010, 0.353].007Small
Total AI Literacy3.45 (0.72)3.33 (0.73)1.85 *0.172 [−0.010, 0.353].007Small
Note. * p < .05; *** p < .001. Positive d indicates higher scores for gifted students (GS). Overall effect sizes are small, with η² < .04, indicating modest group differences in AI literacy.
Table 16. Comparison of four groups across the dimensions of AI literacy.
MSDSourceSum of SquaresMean
Square
FpPost-Test
AIER GM > NGM, GF > NGF (p < .05)
GM > NGF (p < .001)
GM3.68.73between9.41733.1397.000.000
GF3.57.52
NGM3.40.76within204.0474553.139
NGF3.34.62
Total3.47.68total213.464458
AISE GM > GF, NGM > NGF (p < .05)
GM > NGF (p < .01)
GM3.54.92Between13.36634.4556.536.000
GF3.20.68
NGM3.40.87Within310.156455.682
NGF3.13.78
Total3.31.84total323.523458
AILE NGM > GF, NGM > NGF (p < .01)
GM3.211.01Between17.48135.8276.959.000
GF2.90.80
NGM3.40.93Within380.976455.837
NGF2.95.88
Total3.11.93total398.457458
AIAI
GM3.79.90Between3.81631.2727.036.000
GF3.66.68
NGM3.61.87Within307.509455.676
NGF3.56.79
Total3.65.82total311.325458
Total scale GM > NGF (p < .01)
GM3.56.79Between7.35132.4507.036.000
GF3.33.53
NGM3.45.78Within228.512455.502
NGF3.25.66
Total3.39.72total235.862458
Note: AIER: Ethics and Responsibility in AI; AISE: AI Self-Efficacy; AILE: AI Learning Engagement; AIAI: AI Application and Interest; GM: gifted male students, GF: gifted female students, NGM: non-gifted male students; NGF: non-gifted female students; M: mean; SD: standard deviation.
Table 17. Descriptive statistics of students’ perceptions of AI (N = 484).
Item | N | % | Observed %
A technology that helps answer or solve problems | 443 | 28.3 | 91.5
A tool that assists with writing, calculations, and programming | 393 | 25.1 | 81.2
A chatting or drawing program | 365 | 23.3 | 75.4
A powerful but potentially dangerous tool | 349 | 22.3 | 72.1
Other | 18 | 1.1 | 3.7
Total | 1568 | 100.0 | 324
Note: Participants could select multiple responses; therefore, the observed percentages exceed 100%. Observed percentages are calculated based on the total sample size (N = 484), representing the proportion of students who endorsed each statement.
Table 18. Descriptive statistics of AI tools used by students (N = 484).
AI Tool | N | % | Observed %
ChatGPT | 461 | 23.8 | 95.2
AI translation tools | 375 | 19.3 | 77.5
AI voice assistants | 265 | 13.7 | 54.8
AI drawing tools | 239 | 12.3 | 49.4
e-du (TALP) | 167 | 8.6 | 34.5
Gemini | 119 | 6.1 | 24.6
Cool AI (CooC-Cloud) | 113 | 5.8 | 23.3
Copilot | 71 | 3.7 | 14.7
Gamma | 58 | 3.0 | 12.0
Suno | 43 | 2.2 | 8.9
Aisk (CooC+) | 20 | 1.0 | 4.1
Other | 9 | 0.5 | 1.9
Total | 1940 | 100.0 | 400.8
Note: Participants could select multiple AI tools; therefore, the percentages exceed 100%. Percentages are calculated based on the total sample size (N = 484), representing the proportion of students who reported using each tool.
Table 19. Gender and educational placement differences regarding concerns about AI use in science learning.
Dimension | Male M (SD) | Female M (SD) | t-Test | GS M (SD) | NGS M (SD) | t-Test
Content inaccuracy/inappropriateness | 3.12 (0.82) | 3.15 (0.66) | −0.38 | 3.25 (0.72) | 3.05 (0.74) | 2.90 **
Overreliance on AI | 2.95 (0.92) | 3.04 (0.78) | −1.20 | 3.03 (0.90) | 2.96 (0.84) | 0.80
Data security and privacy | 2.95 (0.90) | 3.14 (0.66) | −2.63 ** | 2.98 (0.88) | 3.09 (0.74) | −1.41
AI can be used safely without concern | 3.10 (0.99) | 3.31 (0.79) | −2.58 * | 3.38 (0.83) | 3.06 (0.94) | 3.81 ***
Note: * p < .05; ** p < .01; *** p < .001. Male (n = 226), Female (n = 233); GS: gifted students (n = 197); NGS: non-gifted students (n = 287); M: mean; SD: standard deviation.
Table 20. Group differences in AI-related concerns in science learning.
MSDSourceSSMSFpPost-Test
Content GM > NGM
GM3.26.80between5.4631.8213.375.018
GF3.24.55
NGM2.97.81within245.43455.539
NGF3.11.70
Total3.13.74total250.89458
Over-reliance
GM2.96.96Between2.123.708.970.407
GF3.15.76
NGM2.94.88Within331.87455.729
NGF3.00.79
Total3.00.85total333.99458
Data security GF > GM
GM2.87.98Between5.8531.9503.123.026
GF3.18.57
NGM3.03.81Within284.2455.625
NGF3.13.70
Total3.05.80total290.4458
Without concern GM > NGM, GF > NGM, NGF > NGM
GM3.44.81Between34.21311.40315.389.000
GF3.41.74
NGM2.741.04Within337.13455.741
NGF3.27.81
Total3.21.90total371.34458
Note: GM: gifted male students; GF: gifted female students; NGM: non-gifted male students; NGF: non-gifted female students; M: mean; SD: standard deviation.