
AI in Education

AI in Education is an international, peer-reviewed, scholarly, open access journal on the theoretical and practical applications of artificial intelligence (AI) in educational environments, published quarterly online by MDPI.

All Articles (6)

Understanding how learners process data visualizations with seductive details is essential for improving comprehension and engagement. This study examined the influence of task-relevant and task-irrelevant seductive details on attentional distribution and comprehension in the context of data story learning, using COVID-19 data visualizations as experimental materials. A gaze-based methodology was applied, using eye-movement data and saliency maps to visualize learners’ attentional patterns while processing bar graphs with varying embellishments. Results showed that task-relevant seductive details supported comprehension for learners with higher visuospatial abilities by guiding attention toward textual information, while task-irrelevant details hindered comprehension, particularly for those with lower visuospatial abilities who focused disproportionately on visual elements. Working memory capacity emerged as a significant predictor of attentional distribution. Additionally, repeated exposure to data visualizations enhanced participants’ ability to recognize visualization types, improving efficiency and reducing reliance on legends and supplementary text. Overall, this study highlights the cognitive mechanisms underlying visualization processing in data story learning and provides practical implications for education, human–computer interaction, and adaptive technology design, emphasizing the importance of tailoring visualization strategies to individual learner differences.
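The gaze-based methodology above aggregates eye-movement data into saliency maps of attentional distribution. As a purely hypothetical illustration (not the authors' actual pipeline), a duration-weighted fixation saliency map can be sketched in a few lines of NumPy; the coordinates, durations, and `sigma` smoothing value are invented:

```python
import numpy as np

def fixation_saliency_map(fixations, durations, shape, sigma=30.0):
    """Build a duration-weighted saliency map from gaze fixations.

    fixations: list of (x, y) pixel coordinates.
    durations: fixation durations in ms (used as weights), same length.
    shape: (height, width) of the stimulus image.
    sigma: Gaussian spread in pixels (hypothetical default).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros((h, w), dtype=float)
    for (x, y), d in zip(fixations, durations):
        # Each fixation contributes a Gaussian blob weighted by its duration.
        sal += d * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()  # normalise to [0, 1]
    return sal

# Two invented fixations: a long one on the chart area, a short one on the legend.
m = fixation_saliency_map([(40, 30), (120, 80)], [600, 150], shape=(100, 160))
```

Overlaying such a map on the stimulus (e.g. a bar graph with embellishments) makes it easy to see whether attention concentrates on textual information or on decorative visual elements.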

22 January 2026

Example of a learning task with COVID-19-relevant seductive details.

Statistical education is a crucial yet often overlooked aspect of AI in higher education: traditional approaches usually focus heavily on procedural knowledge, leaving students anxious about statistics and less confident in applying concepts to real-world problems. This study examines a method that enhances statistical learning outcomes by integrating data visualization and gamification strategies. The enhanced curriculum was delivered in a college statistics course offered to both engineering and math majors. Students were randomly assigned to either a control group (CG) or an intervention group (IG), and each group was further divided into teams. IG students applied data visualization and gamification in a hands-on group project aimed at solving a real-world problem and competed as they presented their results. The effectiveness of this approach was assessed through statistical analyses comparing the performance of IG and CG on surveys, final grades, and project grades. Results across these evaluation methods indicated that IG students outperformed CG students, demonstrating a positive impact of gamification on statistics education.
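The IG-versus-CG comparison described above is a standard two-sample design. As a hypothetical sketch only (the study's raw grade data are not public, and the abstract does not name its exact tests), a Welch two-sample t-test on invented final-grade lists would look like:

```python
from scipy import stats

# Invented final grades; NOT the study's data.
ig = [85, 78, 92, 88, 75, 90, 84, 79]   # intervention group
cg = [72, 68, 80, 75, 70, 77, 74, 69]   # control group

# Welch's t-test does not assume equal variances across the two groups,
# which is the safer default when group sizes or spreads may differ.
t, p = stats.ttest_ind(ig, cg, equal_var=False)
```

A positive `t` with a small `p` would indicate the intervention group scored significantly higher on this outcome.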

12 December 2025

The Conceptual Framework Linking our Intervention.

We surveyed 297 STEM undergraduates at a single English-medium Sino–UK joint institution to document perceptions of AI chatbots for learning. Students reported high willingness to adopt AI chatbots (78%; 95% CI: 73.1–82.4) alongside concerns about over-reliance (67%; 95% CI: 61.4–72.1), content quality (52%; 95% CI: 46.2–57.5), and reduced human interaction (42%; 95% CI: 36.5–47.8). Over half (52%; 95% CI: 46.3–57.7) requested language/terminology support features, whereas only 16.8% reported language-related barriers. We attempted exploratory factor analysis and k-means clustering, but neither met the inclusion criteria; therefore, we report item-level frequencies only. The findings are descriptive and not generalisable (53% first-year, 80% male convenience sample). These patterns generate testable hypotheses about verification scaffolds, language support utility, and human–AI balance that warrant investigation through controlled studies.
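The proportion estimates above are reported with 95% confidence intervals, though the abstract does not state which interval method was used. One common choice for survey proportions is the Wilson score interval, sketched here with an assumed count of 232 out of 297 respondents (~78%):

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default).

    k: number of successes; n: sample size; z: normal critical value.
    """
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 232/297 is an assumed count consistent with the reported 78%.
lo, hi = wilson_ci(232, 297)
```

For these assumed counts the interval comes out close to the reported 73.1–82.4%, but that agreement does not confirm the authors used this method.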

18 November 2025

Distribution of survey respondents by (a) programme and (b) year of study.

This study examines how cognitive biases may shape ethical decision-making in AI-mediated environments, particularly within education and research. As AI tools increasingly influence human judgment, biases such as normalization, complacency, rationalization, and authority bias can lead to ethical lapses, including academic misconduct, uncritical reliance on AI-generated content, and acceptance of misinformation. To explore these dynamics, we developed an LLM-generated synthetic behavior estimation framework that modeled six decision-making scenarios with probabilistic representations of key cognitive biases. The scenarios addressed issues ranging from loss of human agency to biased evaluations and homogenization of thought. Statistical summaries of the synthetic dataset indicated that 71% of agents engaged in unethical behavior influenced by biases like normalization and complacency, 78% relied on AI outputs without scrutiny due to automation and authority biases, and misinformation was accepted in 65% of cases, largely driven by projection and authority biases. These statistics are descriptive of this synthetic dataset only and are not intended as inferential claims about real-world populations. The findings nevertheless suggest the potential value of targeted interventions—such as AI literacy programs, systematic bias audits, and equitable access to AI tools—to promote responsible AI use. As a proof-of-concept, the framework offers controlled exploratory insights, but all reported outcomes reflect text-based pattern generation by an LLM rather than observed human behavior. Future research should validate and extend these findings with longitudinal and field data.

4 October 2025

Flow diagram of the LLM-generated synthetic behavior estimation pipeline. Each scenario–bias pair was iteratively prompted 10,000 times under fixed parameters (temperature, top-p, seed). Outputs were coded against a predefined taxonomy of 15 cognitive biases, aggregated into frequency counts, and organized into contingency tables. Descriptive statistics (chi-square, odds ratios, confidence intervals, Nagelkerke R²) were then applied. Validation steps—including anchoring to the bias taxonomy, manual review, and ethical filtering—were integrated to reduce spurious or misleading results.
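The pipeline above aggregates coded outputs into contingency tables and reports chi-square statistics, odds ratios, and confidence intervals. A minimal sketch of those computations for a single 2×2 table could look like the following; the counts are invented for illustration and are not taken from the paper:

```python
from math import sqrt, exp, log

def two_by_two_stats(a, b, c, d):
    """Pearson chi-square, odds ratio, and 95% CI (log method)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Pearson chi-square without continuity correction.
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    orr = (a * d) / (b * c)
    # Standard error of the log odds ratio (Woolf's method).
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = (exp(log(orr) - 1.96 * se), exp(log(orr) + 1.96 * se))
    return chi2, orr, ci

# Invented counts: unethical vs. ethical responses, with vs. without a bias cue.
chi2, orr, ci = two_by_two_stats(710, 290, 430, 570)
```

An odds ratio whose confidence interval excludes 1 would indicate an association between the bias cue and the coded outcome within the synthetic dataset only, mirroring the paper's caution that these are descriptive, not inferential, claims.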

AI Educ. - ISSN 3042-8130