Review

AI-Enhanced Computational Thinking: A Comprehensive Review of Ethical Frameworks and Pedagogical Integration for Equitable Higher Education

College of Engineering, Business & Education, University of Bridgeport, Bridgeport, CT 06604, USA
Educ. Sci. 2025, 15(11), 1515; https://doi.org/10.3390/educsci15111515
Submission received: 12 September 2025 / Revised: 28 October 2025 / Accepted: 4 November 2025 / Published: 10 November 2025

Abstract

The rapid integration of artificial intelligence technologies into higher education presents unprecedented opportunities for enhancing computational thinking development while simultaneously raising significant concerns about educational equity and algorithmic bias. This review examines the intersection of AI integration, computational thinking pedagogy, and diversity, equity, and inclusion imperatives in higher education through a comprehensive narrative review of 167 sources spanning current literature and theoretical frameworks. By distilling principles from Human–AI Symbiotic Theory (HAIST) and established pedagogical integration models, this review synthesizes evidence-based strategies for ensuring that AI-enhanced computational thinking environments advance rather than undermine educational equity. The analysis reveals that effective AI integration in computational thinking education requires comprehensive frameworks that combine ethical AI governance with pedagogical design principles, creating practical guidance for institutions seeking to harness AI’s potential while protecting historically marginalized students from algorithmic discrimination. This review contributes to the growing body of knowledge on responsible AI implementation in educational settings and provides actionable recommendations for educators, researchers, and policymakers working to create more effective, engaging, and equitable AI-enhanced learning environments.

1. Introduction

The landscape of higher education has undergone dramatic transformation as artificial intelligence technologies become increasingly integrated into teaching and learning processes, fundamentally reshaping how students develop computational thinking skills (Chassignol et al., 2018; Chen et al., 2020; Zawacki-Richter et al., 2019). This integration represents more than a technological upgrade; it constitutes a paradigmatic shift that requires careful consideration of both educational effectiveness and equity implications (Holmes et al., 2019; Reich & Mehta, 2020). As Wing’s seminal conception of computational thinking continues to evolve in response to technological advances, educators and researchers face the complex challenge of leveraging AI’s capabilities while ensuring that these technologies serve to democratize rather than stratify educational opportunities (Wing, 2006; Yadav et al., 2016; Selwyn, 2019).
The emergence of intelligent tutoring systems, adaptive learning platforms, and AI-powered assessment tools has created new possibilities for supporting computational thinking development across diverse student populations (Luckin et al., 2016; Roll & Wylie, 2016). These technologies offer unprecedented capabilities for personalization, real-time feedback, and data-driven instructional decision-making that can potentially address long-standing challenges in computational education (Gašević et al., 2017; Clark & Mayer, 2016). However, recent research has revealed concerning patterns of algorithmic bias that can systematically disadvantage certain student populations, particularly those from historically marginalized communities in STEM fields (Akgun & Greenhow, 2022; Baker & Hawn, 2022; Barocas & Selbst, 2016).
The stakes of this technological integration extend far beyond individual classroom experiences to encompass broader questions of participation in computational fields, technological innovation, and democratic engagement in an increasingly digital society (Williamson, 2019; Knox, 2020). Educational institutions, as primary sites of computational knowledge creation and human development, bear particular responsibility for modeling ethical AI use while preparing diverse student populations to participate effectively in computational careers and shape an AI-influenced future (Holmes et al., 2019; Dignum, 2019). The challenge lies not merely in adopting AI tools, but in creating educational environments where human cognitive processes and artificial intelligence complement each other to enhance rather than replace fundamental computational thinking skills (Weller, 2020; Veletsianos, 2022).
This comprehensive review addresses these challenges by examining current research on AI-enhanced computational thinking education through an equity-focused lens. The analysis draws from multiple theoretical frameworks, including Human–AI Symbiotic Theory (HAIST) (Morello & Chick, 2025), Universal Design for Learning principles (CAST, 2018), and culturally responsive teaching practices to provide a holistic understanding of how AI integration can be implemented equitably in higher education settings. The review synthesizes evidence from diverse sources to identify effective strategies for bias mitigation while highlighting promising approaches for enhancing computational thinking development through human–AI collaboration.
This review proceeds as follows: Section 1 establishes methodology and scope; Section 2 presents theoretical foundations including evolved CT frameworks and HAIST principles; Section 3 and Section 4 examine current AI integration practices and equity challenges; Section 5, Section 6 and Section 7 provide implementation frameworks, pedagogical strategies, and technical considerations; Section 8 and Section 9 synthesize research evidence and identify future directions; Section 10 offers practical recommendations for stakeholders.

Methodology

This comprehensive narrative review examines the intersection of AI integration, computational thinking pedagogy, and educational equity in higher education. Unlike a systematic review, this approach allows for broader exploration and mapping of this emerging interdisciplinary field while incorporating diverse sources including peer-reviewed literature, policy documents, and institutional frameworks.
  • Research Questions
This review addresses three primary questions:
(1) How can AI technologies be integrated into computational thinking education while promoting rather than undermining educational equity?
(2) What theoretical and practical frameworks exist for implementing bias-free AI-enhanced computational thinking environments?
(3) What are the key challenges and opportunities for equitable AI integration in higher education computational thinking programs?
  • Search Strategy
A systematic literature search (Figure 1) was conducted across five databases selected for their coverage of educational technology, computer science education, and AI ethics literature: Web of Science (Core Collection), Scopus, ERIC (Education Resources Information Center), ACM Digital Library, and IEEE Xplore. The search was executed in two phases: initial searches conducted in October–November 2024, with a supplemental search in January 2025 to capture recent publications. Search strings employed Boolean combinations of three concept clusters:
  • AI concepts: (“artificial intelligence” OR “machine learning” OR “AI” OR “intelligent tutoring” OR “adaptive learning” OR “large language model*” OR “LLM” OR “generative AI”)
  • CT concepts: (“computational thinking” OR “programming education” OR “computer science education” OR “coding education” OR “algorithm* learning”)
  • Equity concepts: (“equity” OR “bias” OR “fairness” OR “diversity” OR “inclusion” OR “justice” OR “marginalized”)
All three concept clusters were required (AND operator between clusters). Searches were limited to English-language publications from January 2018 through January 2025 to capture literature following Wing’s CT framework evolution and the recent emergence of generative AI. No restrictions were placed on document type to capture both peer-reviewed research and authoritative policy documents. The complete search protocol including database-specific syntax variations is provided in Appendix A.
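To make the retrieval logic concrete, the minimal Python sketch below assembles the generic form of the query from the three clusters. It is illustrative only; the database-specific syntax variations (field tags, truncation handling) documented in Appendix A are not reproduced here.

```python
# Illustrative sketch: OR-join each concept cluster, then AND-join the
# three clusters, mirroring the search strategy described above.

AI_TERMS = ['"artificial intelligence"', '"machine learning"', '"AI"',
            '"intelligent tutoring"', '"adaptive learning"',
            '"large language model*"', '"LLM"', '"generative AI"']
CT_TERMS = ['"computational thinking"', '"programming education"',
            '"computer science education"', '"coding education"',
            '"algorithm* learning"']
EQUITY_TERMS = ['"equity"', '"bias"', '"fairness"', '"diversity"',
                '"inclusion"', '"justice"', '"marginalized"']

def cluster(terms: list[str]) -> str:
    """OR-join one concept cluster and wrap it in parentheses."""
    return "(" + " OR ".join(terms) + ")"

# All three clusters are required, so they are AND-joined.
query = " AND ".join(cluster(t) for t in (AI_TERMS, CT_TERMS, EQUITY_TERMS))
print(query)
```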
  • Inclusion Criteria
Sources were included if they addressed: (1) AI applications in educational settings; (2) computational thinking pedagogical approaches; (3) educational equity and bias considerations; or (4) relevant policy and ethical frameworks. Both peer-reviewed articles and authoritative grey literature (policy documents, institutional guidelines) were included given the rapidly evolving nature of AI in education policy. Although a narrative review does not require full PRISMA protocols, Figure 1 documents the selection process: records after duplicate removal (n = 847) → title/abstract screening → full-text review (n = 234) → final inclusion (n = 167).
  • Coding and Analysis Procedures
Title and abstract screening was conducted by a single reviewer (first author) using predefined inclusion criteria: (1) focus on artificial intelligence or machine learning technologies in educational contexts, (2) explicit attention to computational thinking or programming education, (3) consideration of equity, bias, fairness, or inclusion, (4) publication in English, and (5) primary focus on or clear applicability to higher education contexts. We used Zotero 7.0 reference management software to organize all records and document screening decisions. All 847 records remaining after duplicate removal were screened at this stage, with 613 excluded and 234 advanced to full-text assessment.
Full-text screening was conducted by a single reviewer (first author) using detailed inclusion criteria documented in Appendix A. To establish reliability, a second coder with expertise in AI in education independently reviewed 20% of the full-text articles (n = 47, randomly selected to include both included and excluded articles). Inter-rater reliability was strong (Cohen’s κ = 0.84). The eight discrepancies identified were resolved through discussion and resulted in refinement of the inclusion criteria documentation. Of the 234 full-text articles assessed, 67 were excluded (see Appendix A for exclusion reasons) and 167 were included in the final synthesis.
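For transparency, inter-rater agreement of this kind can be computed directly from the paired screening decisions, as in the brief sketch below. The decision lists shown are placeholders rather than the actual 47 coded records.

```python
# Illustrative computation of inter-rater reliability for the dual-coded
# subsample; in the actual review each list would hold 47 decisions.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["include", "exclude", "include", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "exclude", "exclude"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.2f}")  # value reported in this review: 0.84
```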
Thematic analysis of the 167 included sources was conducted following Braun and Clarke’s (2006) six-phase framework: (1) familiarization with data through multiple readings, (2) generating initial codes inductively from the literature while also attending to a priori themes from the HAIST framework (hybrid approach), (3) searching for themes by collating codes into potential themes, (4) reviewing themes against coded extracts and the entire dataset, (5) defining and naming themes, and (6) producing the synthesis. We used NVivo 14 qualitative analysis software to systematically organize codes and themes. Our coding was primarily inductive to allow themes to emerge from the literature, but we also used the HAIST dimensions (complementary cognitive architecture, transformative agency, ethical co-construction) as sensitizing concepts. Major themes that emerged included: task allocation strategies, assessment approaches preserving CT development, equity mechanisms, and institutional policy frameworks. Coding was conducted iteratively, with regular reflection on theme coherence and distinctiveness.
  • Rationale for Comprehensive Review Approach
A comprehensive narrative review approach was selected over a systematic review for several reasons: (1) the interdisciplinary nature of AI-enhanced computational thinking education spans computer science education, learning sciences, AI ethics, and policy studies, requiring synthesis across diverse methodological approaches; (2) the rapid evolution of AI technologies means that much relevant knowledge exists in practitioner reports, policy documents, and recent conference proceedings not yet captured in systematic databases; (3) the emerging nature of this field necessitates exploratory mapping rather than definitive effect size calculations; and (4) the equity focus requires integration of social justice frameworks with technical educational research, demanding flexible synthesis approaches that can accommodate diverse evidence types (Grant & Booth, 2009; Paré et al., 2015).
  • Literature Categorization
The final corpus contained 167 sources, including approximately 25 direct AI-CT empirical studies, 38 foundational CT pedagogy research articles, 48 AI ethics and policy literature sources, 42 foundational/theoretical sources, and 14 K-12 studies included for their theoretical relevance. This categorization ensures transparent representation of evidence types while acknowledging that direct empirical research on AI-enhanced computational thinking remains limited, necessitating synthesis across related domains.
  • Evidence Limitations
Given the nascent state of AI-enhanced computational thinking research, many implementation recommendations represent principled hypotheses based on related educational technology research rather than definitively proven practices. Claims about effectiveness require empirical validation through rigorous higher education studies.
The sections that follow present findings organized thematically. Section 2 synthesizes theoretical foundations identified across reviewed sources, examining how computational thinking frameworks and human–AI collaboration theories inform equitable AI integration. Section 3 and Section 4 analyze current implementation practices and equity challenges documented in the literature. Section 5, Section 6 and Section 7 present synthesized frameworks and strategies derived from the reviewed evidence. Section 8 and Section 9 consolidate empirical findings and identify research gaps, while Section 10 offers practice-oriented recommendations synthesized from the body of reviewed work.

2. Findings: Theoretical Foundations for AI-Enhanced Computational Thinking

2.1. Evolution of Computational Thinking Frameworks

The conceptual framework of computational thinking has undergone significant evolution since Wing’s foundational definition, particularly as educational technologies have advanced from simple computer-assisted instruction to sophisticated AI-powered learning environments (Wing, 2006; Yadav et al., 2016; Grover & Pea, 2013). Traditional computational thinking frameworks emphasize four core competencies: decomposition, pattern recognition, abstraction, and algorithm design.
AI-enhanced computational thinking tools include intelligent code completion systems that support algorithm design, automated debugging assistants that facilitate error identification and correction, adaptive assessment platforms that evaluate decomposition strategies across diverse problem contexts, and pattern recognition systems that help students identify algorithmic structures in large datasets. These tools can support CT development by providing immediate feedback on coding practices, suggesting alternative problem decomposition approaches, and offering personalized scaffolding for abstraction skills. However, they also present challenges including over-reliance on AI assistance that may impede independent problem-solving skill development, algorithmic bias in assessment that may disadvantage certain student populations, and potential reduction in students’ metacognitive awareness of their own computational reasoning processes (Holstein et al., 2019; Roll & Wylie, 2016). Contemporary educational contexts require these competencies to be reconceptualized to account for human–AI collaboration and the unique challenges of working with intelligent systems (Chiu & Chai, 2020; Tondeur et al., 2017).
Enhanced decomposition in AI-mediated environments involves not only breaking down problems into manageable components but also determining optimal task allocation between human and artificial intelligence agents (Holstein et al., 2019; Shneiderman, 2020). Students must develop the capacity to decompose problems in ways that leverage the complementary strengths of human creativity and AI computational power while understanding when to maintain human oversight and control. This capability becomes increasingly critical as AI systems become more sophisticated and capable of handling complex computational tasks that previously required exclusively human intervention (Floridi et al., 2018; Winfield & Jirotka, 2018).
Recent work on generative AI integration further extends these reconceptualizations, with Hsu’s (2025) constructionist prompting framework demonstrating how the act of formulating effective AI prompts itself constitutes a CT skill requiring decomposition, abstraction, and algorithmic thinking. This perspective shifts focus from whether to use AI tools to how students can develop CT competencies through purposeful AI interaction when appropriately scaffolded.
AI-enhanced computational thinking spans multiple disciplines beyond computer science, requiring careful consideration of domain-specific applications and equity implications. In mathematics education, AI systems can support pattern recognition through automated analysis of student problem-solving approaches, identifying mathematical relationships across diverse cultural problem-solving traditions. In data science contexts, AI tools can scaffold students’ abilities to decompose complex datasets while maintaining critical awareness of potential bias in algorithmic analysis. Creative fields such as digital arts benefit from AI-enhanced abstraction tools that help students conceptualize algorithmic approaches to creative expression while preserving human agency in artistic decision-making. However, each disciplinary context presents unique challenges for equitable implementation, including the need for culturally responsive problem contexts, domain-specific bias considerations, and preservation of disciplinary epistemologies alongside computational approaches (Wing, 2006; Grover & Pea, 2013; Yadav et al., 2016).
Collaborative pattern recognition extends traditional pattern recognition to encompass the ability to work synergistically with AI systems that can identify patterns in massive datasets while maintaining critical evaluation capabilities (Barocas & Selbst, 2016; O’Neill, 2016). Students must develop skills in interpreting AI-generated insights, recognizing the limitations of algorithmic pattern recognition, and integrating AI-identified patterns with human contextual understanding and domain expertise. This evolution reflects the growing recognition that effective computational thinking in AI-enhanced environments requires both technical skills and critical thinking capabilities (Jobin et al., 2019; Taddeo & Floridi, 2018).
Ethical abstraction represents a fundamental expansion of traditional abstraction concepts to include considerations of social responsibility and algorithmic fairness (Floridi et al., 2018; UNESCO, 2021). Students must learn to create and evaluate abstractions that are not only computationally effective but also ethically sound and socially responsible, avoiding the perpetuation of bias while considering diverse perspectives and maintaining human values and agency. This principle aligns with emerging frameworks for AI ethics that emphasize the importance of embedding ethical considerations into technical design processes from inception rather than as post-hoc corrections (Beauchamp & Childress, 2019).
Algorithm design for human–AI collaboration encompasses traditional algorithm design capabilities while extending to include the ability to design workflows that optimize human–AI partnerships, evaluate algorithmic fairness and bias, and ensure that automated systems align with human values and educational goals (Winfield & Jirotka, 2018; Raji et al., 2020). This expanded conception of algorithm design reflects the growing recognition that effective AI integration requires careful attention to the social and ethical dimensions of computational systems rather than focusing exclusively on technical performance metrics.

2.2. Human–AI Symbiotic Theory in Educational Contexts

Human–AI Symbiotic Theory (HAIST), recently proposed by Morello and Chick (2025), provides complementary principles that specifically address human–AI cognitive relationships in learning contexts. While established frameworks such as Technological Pedagogical Content Knowledge (TPACK) and Universal Design for Learning (UDL) offer foundational guidance for educational technology integration, HAIST extends them with specific attention to preserving and enhancing human cognitive processes within AI-mediated CT education. Rather than replacing these frameworks, HAIST supplies concrete guidance for preserving human agency while leveraging AI capabilities. This theoretical contribution builds upon decades of research in educational technology integration while addressing contemporary challenges specific to AI implementation (Koehler & Mishra, 2009; CAST, 2018; Morello & Chick, 2025).
Human–AI Symbiotic Theory (HAIST) addresses a critical gap in educational technology frameworks: while human-in-the-loop approaches focus on keeping humans involved in AI decision-making processes, and socio-technical framings emphasize the inseparability of social and technical elements, neither adequately theorizes the preservation of human cognitive development within AI-enhanced learning environments (Morello & Chick, 2025). Table 1 contrasts HAIST with adjacent theoretical approaches to clarify its distinctive contribution. HAIST extends beyond human-in-the-loop paradigms by explicitly centering learner cognitive development and agency preservation as primary design objectives rather than treating human involvement as a safeguard against AI failures. Where human-in-the-loop approaches ask “How do we keep humans in control of AI systems?” HAIST asks “How do we ensure AI systems enhance rather than replace the development of uniquely human cognitive capabilities?” This distinction proves crucial for educational contexts where the goal is not merely to produce correct outputs but to develop student capabilities that persist beyond the AI-mediated learning experience.
Similarly, while socio-technical systems theory productively frames technology and society as mutually constitutive, HAIST provides specific pedagogical and design principles tailored to educational contexts where cognitive development is the explicit aim. Socio-technical framings describe how technology shapes social practices, but HAIST prescribes how educational AI should be designed to preserve transformative learner agency, complement rather than replace cognitive processes, and facilitate ethical knowledge co-construction, concepts that extend beyond general socio-technical principles to address the unique imperatives of learning environments (Shneiderman, 2020; Rahwan, 2018).
For higher education practitioners implementing AI-enhanced CT education, HAIST offers actionable design principles where existing frameworks remain abstract. Educators can apply complementary cognitive architecture principles to decide which CT tasks AI should support versus which students must complete independently to build essential skills. Policy makers can operationalize transformative agency enhancement to establish governance standards that protect student autonomy in AI-mediated learning. Instructional designers can implement ethical knowledge co-construction principles to ensure transparency and bias mitigation are embedded throughout AI system design rather than added retrospectively.
The principle of complementary cognitive architecture emphasizes that AI systems should be designed to complement rather than replace human cognitive processes in computational thinking development (Shneiderman, 2020; Danaher et al., 2017). This approach recognizes that effective computational thinking requires the integration of human creativity, critical thinking, and contextual understanding with AI capabilities for processing large amounts of data and identifying complex patterns. Rather than viewing AI as a replacement for human cognition, this principle suggests that AI tools should enhance students’ ability to engage in sophisticated computational reasoning while preserving the development of fundamental cognitive skills that remain uniquely human.
Transformative agency enhancement ensures that AI collaboration preserves and expands student autonomy in computational problem-solving while creating opportunities for more sophisticated computational thinking development (Rahwan, 2018; Selbst et al., 2019). This principle proves particularly crucial for students from marginalized backgrounds who may have experienced educational systems that limited their agency in STEM learning contexts. By preserving student agency while enhancing their computational capabilities, AI systems can help address historical inequities in computational education rather than perpetuating existing patterns of exclusion and marginalization (Binns et al., 2018; Wachter et al., 2017).
Ethical knowledge co-construction establishes that human–AI partnerships in computational thinking development must facilitate knowledge construction through transparent processes with systematic bias mitigation and clear accountability structures (Selbst et al., 2019; Mitchell et al., 2019). This principle directly addresses algorithmic equity by requiring that AI systems used for computational thinking education be designed through inclusive processes that consider the perspectives and needs of diverse student populations throughout the development lifecycle rather than as afterthoughts (Gebru et al., 2021; American Association of University Professors, 2023).

2.3. HAIST-Informed Design Principles

Building on the theoretical foundation provided by Human–AI Symbiotic Theory, this review identifies specific design principles that extend theoretical concepts to address algorithmic bias in computational thinking education while promoting inclusive learning environments (Floridi et al., 2018; Morello & Chick, 2025). These principles integrate insights from AI ethics research with established pedagogical frameworks to create comprehensive guidance for responsible AI implementation in educational settings (Holmes et al., 2022; Jobin et al., 2019).
Inclusive computational design requires that AI systems supporting computational thinking development be conceptualized, designed, and implemented with diverse student populations and problem-solving approaches as central considerations from project inception (Selbst et al., 2019; Beauchamp & Childress, 2019). This approach demands diverse development teams that include perspectives from historically marginalized communities, systematic consultation with affected populations throughout the development process, and ongoing evaluation of system impacts on different student groups. The principle recognizes that truly inclusive AI systems cannot be created without meaningful involvement from the communities they are intended to serve and that diversity in development teams is essential for identifying and addressing potential sources of bias before systems are deployed (Raji et al., 2020).
Bias detection and mitigation must be embedded into AI system architectures used for computational thinking education rather than applied as external audits or post-hoc corrections (Raji et al., 2020; Mitchell et al., 2019). This integration includes real-time monitoring of system performance across different demographic groups, automatic alerts when disparities emerge in computational assessments or learning opportunities, and rapid response mechanisms for addressing identified bias in computational evaluation and instruction. The architectural embedding of bias detection ensures that equity considerations are fundamental aspects of system design and operation rather than optional add-ons that may be deprioritized during implementation (Gebru et al., 2021; Barocas & Hardt, 2019).
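As one illustration of what architectural embedding might look like in practice, the sketch below implements a simple running monitor that compares normalized assessment scores across demographic groups and raises an alert when any group diverges from the overall mean. The grouping scheme, the 0.10 threshold, and the alert mechanism are illustrative assumptions, not values prescribed by the reviewed literature.

```python
from collections import defaultdict
from statistics import mean

DISPARITY_THRESHOLD = 0.10  # illustrative maximum tolerated score gap

class EquityMonitor:
    """Hypothetical sketch of an architecture-embedded disparity check."""

    def __init__(self, threshold: float = DISPARITY_THRESHOLD):
        self.threshold = threshold
        self.scores = defaultdict(list)  # group label -> normalized scores

    def record(self, group: str, score: float) -> None:
        """Log a normalized (0-1) assessment score for a demographic group."""
        self.scores[group].append(score)

    def check_disparities(self) -> list[str]:
        """Return alerts for groups whose mean score diverges from the
        overall mean by more than the threshold."""
        all_scores = [s for group in self.scores.values() for s in group]
        if not all_scores:
            return []
        overall = mean(all_scores)
        alerts = []
        for group, group_scores in self.scores.items():
            gap = mean(group_scores) - overall
            if abs(gap) > self.threshold:
                alerts.append(
                    f"ALERT: group '{group}' deviates from overall mean "
                    f"by {gap:+.2f}; review assessment pipeline for bias."
                )
        return alerts
```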
Computational agency preservation ensures that AI systems enhance rather than replace student decision-making authority in computational problem-solving processes, with particular attention to preserving agency for students whose voices have been historically marginalized in STEM education (Rahwan, 2018; Wachter et al., 2017). This preservation includes providing meaningful choices about AI interaction levels and modalities, maintaining transparency about how AI systems affect computational learning experiences, and implementing robust opt-out mechanisms that do not penalize students who choose alternative approaches to computational learning. The preservation of student agency proves crucial for ensuring that AI enhancement does not become AI replacement of human computational thinking capabilities (Danaher et al., 2017; Binns et al., 2018).
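A minimal sketch of how such agency-preserving choices might be encoded follows; the assistance levels, defaults, and policy interface are hypothetical illustrations rather than a prescribed design.

```python
from dataclasses import dataclass, field
from enum import Enum

class AssistLevel(Enum):
    OPT_OUT = 0      # no AI involvement; alternative human support offered
    HINTS = 1        # conceptual hints only, no generated solutions
    FEEDBACK = 2     # hints plus AI feedback on student-written drafts
    COLLABORATE = 3  # interactive human-AI problem-solving

@dataclass
class AIInteractionPolicy:
    """Hypothetical student-facing settings for AI-mediated CT work."""
    level: AssistLevel = AssistLevel.HINTS
    interaction_log: list[str] = field(default_factory=list)  # transparency

    def set_level(self, level: AssistLevel) -> None:
        """Students may change their assistance level at any time."""
        self.level = level
        self.interaction_log.append(f"level changed to {level.name}")

    def grade_weight(self) -> float:
        """Identical grading weight at every level, so opting out of AI
        assistance never penalizes a student."""
        return 1.0
```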

2.4. Integration with Established Pedagogical Frameworks

The successful integration of AI in computational thinking education necessitates careful alignment with established pedagogical frameworks (Table 1) that have proven effective in supporting diverse learners (Meyer et al., 2014; Rogers, 2003). Universal Design for Learning principles provide crucial guidance for creating AI-enhanced computational thinking environments that serve varied learning preferences and abilities effectively while maintaining rigorous academic standards (CAST, 2018; Ifenthaler & Schumacher, 2016). By incorporating multiple means of representation, engagement, and action/expression, AI systems can support diverse approaches to computational problem-solving while promoting skill development across different learning modalities and cultural backgrounds.
Cognitive Load Theory offers additional insights into how AI systems can be designed to support rather than overwhelm student cognitive processes during computational thinking development (Chandler & Sweller, 1991; Sweller, 1988; Clark & Mayer, 2016). By carefully managing the cognitive demands of AI-enhanced learning environments, educators can ensure that students have sufficient cognitive resources available for developing fundamental computational thinking skills rather than struggling with complex AI interfaces or overwhelming amounts of AI-generated information. This consideration becomes particularly important when working with students who may have varying levels of technological familiarity or who may be simultaneously developing both computational thinking skills and AI literacy (Gašević et al., 2016; Schumacher & Ifenthaler, 2018).
Culturally responsive teaching practices provide essential frameworks for ensuring that AI-enhanced computational thinking education reflects and validates the diverse backgrounds and experiences that students bring to educational settings (Gay, 2018; Ladson-Billings, 2014). Research demonstrates that computational thinking instruction that incorporates culturally relevant examples and contexts significantly improves engagement and learning outcomes, particularly for students from underrepresented backgrounds. The integration of culturally responsive approaches with AI-enhanced instruction requires careful attention to how algorithmic systems represent different cultural perspectives and problem-solving approaches in computational contexts (Tondeur et al., 2017; Freeman et al., 2010).

3. Current State of AI Integration in Computational Thinking Education

3.1. Technological Landscape and Capabilities

Contemporary AI-enhanced computational thinking education encompasses a diverse array of technological approaches, each offering unique capabilities and presenting distinct challenges for equitable implementation (Zawacki-Richter et al., 2019; Zawacki-Richter & Latchem, 2018; Anthology, 2024). Intelligent tutoring systems represent one of the most mature applications, providing personalized instruction and feedback that can adapt to individual student needs and learning patterns in real time (Luckin et al., 2016; California State University, 2024). These systems demonstrate particular promise for supporting computational thinking development by offering customized scaffolding that can help students progress through increasingly complex problem-solving challenges at their own pace (Holstein et al., 2019; Cornell University Center for Teaching Innovation, 2024).
Adaptive learning platforms utilize machine learning algorithms to analyze student performance data and adjust instructional content, pacing, and difficulty levels to optimize learning outcomes for individual students (Gašević et al., 2016; EDUCAUSE, 2024). Research indicates that when properly designed and implemented, these platforms can significantly improve student engagement and achievement in computational thinking skills while providing valuable insights for educators about student learning patterns and areas of difficulty (Chen et al., 2020; Faculty Focus, 2025). However, the effectiveness of these systems depends heavily on the quality of their underlying algorithms and the diversity of their training data (Jin et al., 2025; Chan, 2023).
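To illustrate one common adaptation mechanism, the sketch below applies a Bayesian Knowledge Tracing update to estimate skill mastery after each response and maps the estimate to a coarse difficulty band. The parameter values and band labels are illustrative assumptions, not settings drawn from any platform reviewed here.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch for adaptive pacing.
P_LEARN = 0.15   # p(T): chance the skill is learned on each attempt
P_SLIP  = 0.10   # p(S): chance of a wrong answer despite mastery
P_GUESS = 0.20   # p(G): chance of a right answer without mastery

def bkt_update(p_mastery: float, correct: bool) -> float:
    """Posterior mastery estimate after observing one response."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    # Account for learning that may occur on this attempt.
    return posterior + (1 - posterior) * P_LEARN

def next_difficulty(p_mastery: float) -> str:
    """Map the mastery estimate to an illustrative difficulty band."""
    if p_mastery < 0.4:
        return "scaffolded practice"
    if p_mastery < 0.8:
        return "standard problems"
    return "transfer challenges"
```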
AI-powered assessment tools offer capabilities for providing immediate, detailed feedback on computational thinking tasks that would be impractical for human evaluators to provide at scale (Perkins et al., 2024; Cotton et al., 2024). These tools can analyze student code, problem-solving approaches, and reasoning patterns to identify areas for improvement and suggest targeted interventions. Recent developments in natural language processing have enabled these systems to provide more sophisticated feedback on the reasoning and explanation components of computational thinking tasks, moving beyond simple correctness evaluation to assess the quality of student thinking processes (Zawacki-Richter et al., 2019; Ruffalo Noel Levitz, 2025).

3.2. Benefits and Opportunities for Enhanced Learning

The integration of AI technologies in computational thinking education offers substantial benefits that can enhance learning experiences for diverse student populations when implemented thoughtfully (Chen et al., 2020; Chiu, 2021). Personalization capabilities represent perhaps the most significant advantage, enabling educational experiences that can adapt to individual learning preferences, prior knowledge, and cultural backgrounds in ways that were previously impossible at scale. This personalization can be particularly beneficial for students who have historically been underserved by one-size-fits-all approaches to computational education (Roberts et al., 2016; Gašević et al., 2016).
Data-driven insights generated by AI systems provide educators with unprecedented visibility into student learning processes, enabling more informed instructional decisions and targeted interventions (Drachsler & Greller, 2016; Pardo & Siemens, 2014). These insights can help identify students who may be struggling before they fall significantly behind, reveal common misconceptions or difficulties across student populations, and inform curriculum improvements based on evidence of student learning patterns. When used appropriately, these capabilities can support more equitable educational outcomes by ensuring that all students receive the support they need to succeed in computational thinking development (Jones & McCoy, 2019; Prinsloo & Slade, 2017).

3.3. Challenges and Limitations in Current Implementations

Despite the substantial potential of AI-enhanced computational thinking education, current implementations face significant challenges that must be addressed to realize equitable educational outcomes (Baker & Hawn, 2022; Adiguzel et al., 2023). Algorithmic bias represents one of the most serious concerns, as AI systems trained on biased data or developed without adequate attention to diversity can perpetuate or amplify existing educational inequities in systematic ways that may be difficult to detect or address (Barocas & Selbst, 2016; Gardner et al., 2019). These biases can manifest in various forms, including differential assessment of student work, unequal provision of learning opportunities, and cultural insensitivity in problem contexts and examples (Akgun & Greenhow, 2022; Regan & Jesse, 2019). Douce et al. (2005) and Ala-Mutka (2005) revealed systematic biases in automated assessment systems used across multiple European universities for introductory programming courses. These systems consistently undervalued programming solutions that used non-conventional but valid approaches, particularly affecting students from different educational backgrounds who had learned alternative problem-solving strategies.
Technical limitations of current AI systems pose additional challenges for equitable implementation in computational thinking education (O’Neill, 2016; King et al., 2020). Many AI systems struggle with tasks that require contextual understanding, cultural sensitivity, or nuanced reasoning about social and ethical implications of computational solutions. These limitations can result in AI systems that provide inadequate support for important aspects of computational thinking development, particularly those related to ethical reasoning and culturally responsive problem-solving approaches (European Commission, 2021; Malgieri & Comandé, 2017).
VanLehn (2011) conducted a meta-analysis of AI tutoring systems in STEM education, including several computational thinking contexts. The analysis revealed that while AI tutors showed learning gains equivalent to human tutors on average, there were significant variations in effectiveness across different student populations, with first-generation college students and students from underrepresented minorities showing less benefit from AI tutoring systems that were not specifically designed with cultural responsiveness considerations.
Privacy and security concerns associated with AI systems that collect and analyze detailed data about student learning processes raise important questions about student rights and institutional responsibilities (Personal Information Protection and Electronic Documents Act, 2000; General Data Protection Regulation, 2016). The extensive data collection required for effective AI personalization can create vulnerabilities that may disproportionately affect students from marginalized communities who may already have concerns about surveillance and data misuse. Addressing these concerns requires careful attention to data governance practices and transparent communication about data use policies (Pardo & Siemens, 2014; Slade & Prinsloo, 2013).
Faculty resistance to AI integration represents another significant challenge that must be addressed through comprehensive professional development and institutional support (Adiguzel et al., 2023; Tondeur et al., 2017). Many educators express concerns about the impact of AI on academic integrity, the potential for AI to replace human teaching, and the complexity of learning to use AI tools effectively while maintaining focus on educational goals (Bretag et al., 2019; Dawson & Sutherland-Smith, 2018). Successful AI integration requires sustained investment in faculty development that addresses both technical competencies and pedagogical implications of AI use in educational settings (Kotter, 2012; Lancaster & Clarke, 2016). For example, Holstein et al. (2018) documented implementation of an AI-powered classroom orchestration tool designed to support teachers in providing real-time help to students during programming activities. While the system improved teacher efficiency in identifying struggling students, researchers found that over-reliance on AI recommendations led to reduced teacher attention to subtle indicators of student confusion that were not captured by the algorithmic analysis. Students reported feeling surveilled rather than supported when AI monitoring was most active.
Methodologically, as a narrative rather than systematic review, this analysis may suffer from selection bias in source identification and interpretation. Although a structured search protocol was followed, the absence of full systematic review procedures means some relevant studies may have been overlooked. The interdisciplinary nature of the topic required synthesizing across fields with different methodological standards and evidence criteria. Geographic bias toward English-language and Western educational contexts may limit global applicability. The rapid evolution of AI technologies means that technical capabilities described may become outdated quickly, requiring ongoing updates to recommendations.

4. Algorithmic Bias and Equity Concerns in AI-Enhanced Education

4.1. Manifestations of Bias in Educational AI Systems

Algorithmic bias in AI-enhanced computational thinking education manifests through multiple interconnected mechanisms that can systematically disadvantage particular student populations while impairing their computational reasoning development (Gardner et al., 2019; Barocas & Hardt, 2019). Understanding these manifestations requires examining both technical aspects of bias in machine learning systems and the educational contexts in which these systems operate. Recent research has identified specific patterns of bias in educational AI systems that reflect broader societal inequalities while creating new forms of discrimination that can be particularly harmful in educational settings (Baker & Hawn, 2022; Binns et al., 2018).
Curriculum bias emerges when AI-powered educational systems consistently present computational concepts through examples and contexts that reflect limited cultural perspectives or fail to represent diverse problem domains and applications (Barocas & Selbst, 2016; Akgun & Greenhow, 2022). This form of bias can manifest in AI systems that consistently use examples from narrow domains such as financial trading or military applications, potentially alienating students who do not see themselves reflected in these contexts and limiting their engagement with fundamental computational thinking concepts (Barocas & Selbst, 2016; Bolukbasi et al., 2016). The automated nature of many AI content generation systems can amplify these biases by drawing from training data that itself reflects historical patterns of exclusion and underrepresentation in computational fields (Gebru et al., 2021; Mitchell et al., 2019). For example, CodeWhisperer, an AI-powered code completion tool, has been documented to suggest variable names and function structures that reflect gender and cultural stereotypes, such as consistently suggesting ‘nurse’ variables in healthcare contexts when demographic information suggests female users, while suggesting ‘doctor’ variables for male users (Stinson, 2022). Similarly, AI-powered assignment generation systems have been found to disproportionately create problems using Western cultural references and contexts, potentially alienating students from non-Western backgrounds and limiting their ability to connect computational concepts to their lived experiences (Vakil, 2018).
Assessment bias creates differential effectiveness in evaluating computational thinking skills across student populations, often in ways that are subtle and difficult to detect through traditional evaluation methods (Ocumpaugh et al., 2014; Perkins et al., 2024). Research has documented specific cases where AI assessment tools demonstrate systematic bias against certain demographic groups in evaluating programming assignments, algorithmic problem-solving tasks, or computational projects. For instance, Hutchinson and Mitchell (2019) found that AI assessment systems showed systematic bias in scoring, with algorithms scoring work differently based on student name characteristics (e.g., students with non-English names), despite equivalent functionality, while Gardner et al. (2019) documented systematic undervaluation of algorithm designs that employed non-Western problem-solving approaches, affecting grades, placement decisions, and students’ computational self-efficacy in ways that compound existing inequalities in STEM education (Gardner et al., 2019; Hutchinson & Mitchell, 2019; Baker & Hawn, 2022). The apparent objectivity of automated assessment can make these biases particularly pernicious, as decisions appear neutral despite being based on biased algorithms (Cotton et al., 2024; Amigud & Lancaster, 2019).
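One practical way to surface such name-based bias is a counterfactual audit in which identical submissions are scored under different author-name metadata, as in the hypothetical sketch below. The score_fn interface stands in for whatever automated grader is being audited and is an assumption of this example, as are the name groups and tolerance.

```python
from statistics import mean
from typing import Callable

def name_swap_audit(
    submissions: list[str],
    name_groups: dict[str, list[str]],
    score_fn: Callable[[str, str], float],  # (code, author name) -> score
    tolerance: float = 0.02,
) -> dict[str, float]:
    """Score identical code under different author-name metadata.
    Identical submissions should score identically, so any gap above
    `tolerance` between name groups signals name-based assessment bias."""
    group_means = {
        group: mean(
            score_fn(code, name)
            for code in submissions
            for name in names
        )
        for group, names in name_groups.items()
    }
    baseline = mean(group_means.values())
    for group, m in group_means.items():
        if abs(m - baseline) > tolerance:
            print(f"WARNING: name group '{group}' mean {m:.3f} "
                  f"deviates from baseline {baseline:.3f}")
    return group_means
```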
Scaffolding bias occurs when AI tutoring systems provide different levels or types of support for computational thinking development based on demographic profiles, initial performance indicators, or other characteristics that may correlate with protected class membership (Holstein et al., 2019; Khalil & Ebner, 2014). Students who struggle initially or who approach problems differently may receive less sophisticated computational challenges, reduced opportunities to develop advanced computational thinking skills, or lower-quality explanations and feedback. This form of bias can create self-reinforcing cycles where initial algorithmic decisions about student capabilities limit future learning opportunities and development trajectories in computational fields.

CT-Specific Bias Manifestations and Classroom Mitigations

  • Decomposition Bias
AI systems may suggest problem breakdowns that reflect dominant cultural approaches to problem-solving, potentially undervaluing alternative decomposition strategies used by students from diverse backgrounds. Classroom mitigation strategies include implementation of “decomposition galleries” where students share multiple approaches to breaking down the same problem, with AI suggestions presented as one option among many rather than the authoritative approach.
  • Pattern Recognition Bias
AI pattern detection tools may identify spurious correlations or miss patterns that are meaningful within specific cultural or disciplinary contexts. Classroom mitigation could involve requiring students to manually verify all AI-identified patterns and explain why those patterns are meaningful or misleading within their specific problem context.
  • Abstraction Bias
AI may promote abstractions that strip away contextual information important to marginalized communities or alternative problem-solving traditions. Classroom mitigation might include the use of “abstraction audits” in which students evaluate whether proposed abstractions maintain essential contextual information and represent diverse perspectives fairly.
  • Algorithm Design Bias
AI code generation tools may perpetuate biased assumptions through suggested variable names, data structures, or algorithmic approaches. Classroom mitigation practices would entail implementing peer code review protocols specifically focused on identifying potential bias in AI-suggested code, with rubrics for evaluating algorithmic fairness (Barocas & Hardt, 2019).

4.2. Impact on Student Identity and Self-Efficacy

The consequences of algorithmic bias in computational thinking education extend beyond immediate academic outcomes to affect fundamental aspects of student identity development and self-efficacy beliefs that influence long-term educational and career trajectories (Lent et al., 2000; Bandura, 2001). Computational identity formation, the process through which students develop a sense of themselves as people who can and do engage in computational thinking, can be significantly impacted by biased AI systems in ways that have lasting effects on student participation in computational fields (Roberts et al., 2016; Ifenthaler & Schumacher, 2016).
Representation patterns in AI-generated content and examples influence how students perceive their place in computational fields and their potential for computational careers (Regan & Jesse, 2019; Jones & McCoy, 2019). When AI systems consistently underrepresent contributions from women, minorities, or non-Western cultures in computational examples, case studies, and problem contexts, they communicate implicit messages about who belongs in computational fields and whose perspectives are valued in computational problem-solving. These representation gaps can affect students’ computational identity development and their aspirations for computational careers, particularly during critical periods of identity formation in higher education settings (Margolis et al., 2015; Schumacher & Ifenthaler, 2018; Prinsloo & Slade, 2017).
Self-efficacy beliefs, which research has shown to be crucial predictors of academic achievement and career persistence, can be negatively affected when biased AI systems provide inconsistent or unfair feedback on computational thinking performance (Bandura, 2001; Lent et al., 2000). Students who receive systematically lower evaluations due to algorithmic bias may develop reduced confidence in their computational abilities, leading to decreased persistence in computational learning and reduced likelihood of pursuing advanced computational coursework or careers. These effects can be particularly pronounced for students from groups that have historically faced discrimination in STEM fields and who may be more sensitive to signals about their potential for success (Usher & Barak, 2024).
The development of critical thinking skills related to algorithmic systems themselves represents another area where bias can have significant impact (Slade & Prinsloo, 2013; Drachsler & Greller, 2016). Students who experience biased AI systems may develop either uncritical acceptance of algorithmic decisions or blanket rejection of AI technologies, neither of which prepares them effectively for working with AI systems in their future educational and professional endeavors. Developing appropriate critical evaluation skills requires exposure to well-designed AI systems that demonstrate both capabilities and limitations in transparent ways while providing opportunities for students to develop their own judgment about when and how to rely on algorithmic assistance (Government of Canada, 2023; National Institute of Standards and Technology, 2023).

4.3. Intersectionality and Compounded Effects

The impact of algorithmic bias in computational thinking education cannot be understood solely through examination of single demographic categories, as students with multiple marginalized identities often experience compounded effects that are more severe than the sum of individual bias sources (Crenshaw, 1989; Gouseti et al., 2024). Intersectional analysis reveals that bias effects are often multiplicative rather than additive, creating particularly challenging situations for students who belong to multiple underrepresented groups in computational fields (Singapore Government, 2020; United Kingdom Government, 2023).
Students who are simultaneously women and members of racial or ethnic minorities may experience bias that reflects both gender stereotypes about computational ability and racial stereotypes about academic capability, creating particularly severe barriers to computational identity development and academic success (OECD, 2021). Similarly, students who are first-generation college students from low-income backgrounds may face bias related to assumptions about cultural capital, technological access, and academic preparation that intersect in complex ways with other forms of discrimination (Family Educational Rights and Privacy Act, 1974; European Parliament and Council, 2024).
The temporal dimension of bias effects adds additional complexity, as early exposure to biased AI systems can influence student choices and opportunities in ways that compound over time (Holmes et al., 2022; UNESCO, 2021). Students who receive biased assessments or recommendations early in their computational education may be tracked into lower-level courses or programs, limiting their access to advanced computational learning opportunities and reducing their preparation for computational careers. These tracking effects can be particularly pronounced for students from marginalized backgrounds who may have fewer resources for challenging or circumventing algorithmic decisions about their capabilities (Williamson, 2019; Knox, 2020).

5. Ethical Frameworks for Responsible AI Integration

5.1. Implementation Framework for Ethical AI Integration

The translation of ethical principles into practical implementation requires systematic frameworks that address the complexity of educational environments while maintaining focus on equity outcomes (American Association of University Professors, 2023; California State University, 2024). Comprehensive context analysis (Table 2) forms the foundation of ethical AI implementation, involving systematic examination of the computational thinking learning environment including assessment of student computational backgrounds, evaluation of existing curriculum inclusivity, identification of cultural and contextual factors affecting computational learning, and analysis of institutional policies regarding AI use in educational settings.
This contextual analysis must extend beyond simple demographic data collection to include understanding of students’ prior experiences with computational thinking, their cultural approaches to problem-solving, their comfort levels with AI technologies, and their concerns about privacy and algorithmic fairness (Cornell University Center for Teaching Innovation, 2024; EDUCAUSE, 2024). Effective context analysis also requires examination of institutional culture and resources, including faculty preparedness for AI integration, availability of technical support systems, and institutional commitment to equity and inclusion in computational education (Holmes et al., 2022; Faculty Focus, 2025).
Collaborative objective setting establishes learning objectives that account for both individual computational thinking development and human–AI collaborative outcomes while ensuring alignment with equity goals and institutional values (Jin et al., 2025; Chan, 2023). These objectives must address traditional computational thinking competencies while incorporating AI collaboration skills, critical evaluation capabilities, and ethical reasoning in computational decision-making. The collaborative nature of objective setting ensures that goals reflect not only institutional priorities but also student needs and community values, creating shared ownership of AI integration outcomes (Ruffalo Noel Levitz, 2025; Anthology, 2024).
Adaptive development processes create educational materials and activities that can respond to diverse computational learning needs while maintaining high standards for all students (Freeman et al., 2010; Koehler & Mishra, 2009). This adaptation includes developing AI-compatible computational content that reflects diverse perspectives and problem domains, establishing interaction protocols that preserve student agency and choice, implementing quality assurance processes for AI-generated educational content, and creating assessment tools that can evaluate both individual computational thinking development and collaborative problem-solving outcomes. The adaptive nature of these processes ensures that AI integration can evolve in response to changing student needs and technological capabilities while maintaining focus on equitable outcomes (Gay, 2018; Ladson-Billings, 2014; Paris & Alim, 2017).
To illustrate HAIST implementation, we developed a worked example of a 9-week community-based data analysis project for undergraduate students. The project uses AI tools while preserving student agency through scaffolded phases: initial human-only work to build CT foundations, carefully structured AI introduction with required validation, and integrated use maintaining human authority for ethical decisions. Complete implementation materials, including the assessment rubric, prompt documentation templates, bias-check prompts, and phased task allocation guidance, are provided in Appendix B.

5.2. Governance and Accountability Mechanisms

Effective governance structures for AI integration in computational thinking education require new approaches to institutional decision-making that center equity considerations while supporting educational innovation and effectiveness (UNESCO, 2021). These governance mechanisms must balance the potential benefits of AI technologies with the risks of algorithmic bias and discrimination, creating accountability structures that can respond rapidly to emerging equity concerns while supporting continued development and improvement of AI-enhanced educational approaches (Holmes et al., 2022; National Institute of Standards and Technology, 2023).
Institutional oversight committees must include diverse perspectives and expertise to effectively address the complex challenges of equitable AI integration in educational settings (American Association of University Professors, 2023; Government of Canada, 2023). These committees should include not only computer science and education faculty but also diversity and inclusion professionals, student representatives, community advocates, and experts in algorithmic bias and fairness. Committee authority must extend beyond advisory roles to include power to review AI implementations, require bias testing before system deployment, mandate corrections when equity problems are identified, and halt implementations that pose unacceptable risks to student welfare or educational equity (Singapore Government, 2020).
Transparency and accountability mechanisms must provide clear visibility into AI system operations and decision-making processes while protecting student privacy and maintaining appropriate confidentiality (European Parliament and Council, 2024; Family Educational Rights and Privacy Act, 1974). This transparency includes regular reporting on AI system performance across demographic groups using metrics that are meaningful to diverse stakeholders, clear appeals processes for students who believe they have been negatively affected by AI systems, accessible documentation of system capabilities and limitations, and ongoing evaluation of the equity impacts of AI integration with results made available to the broader community. Accountability mechanisms must include clear consequences for systems or institutions that fail to meet equity standards and rewards for exemplary performance in promoting inclusive computational education (General Data Protection Regulation, 2016; Personal Information Protection and Electronic Documents Act, 2000).

6. Pedagogical Strategies for Equitable AI Integration

6.1. Faculty Development and Preparation

The successful implementation of equitable AI-enhanced computational thinking education requires comprehensive faculty development programs that address both technical AI literacy and cultural competency around bias and inclusion in computational education contexts (Holmes et al., 2022; Zawacki-Richter et al., 2019). Traditional professional development approaches often focus primarily on technical skills without adequate attention to the social and ethical dimensions of educational technology integration, creating gaps that can undermine equity goals even when faculty have good intentions (Tondeur et al., 2017; Adiguzel et al., 2023).
Integrated competency development must combine computational AI literacy with cultural responsiveness training to help faculty understand how bias manifests in AI systems and how to design AI-enhanced computational learning experiences that serve diverse student populations effectively (Koehler & Mishra, 2009; Luckin et al., 2016; Roll & Wylie, 2016). This integration should address unconscious bias in computational instruction while providing practical strategies for creating inclusive AI-enhanced computational environments that validate diverse approaches to problem-solving and learning. Faculty need support in understanding how their own assumptions and biases can be amplified by AI systems and how to design learning experiences that actively counter rather than perpetuate existing inequities in computational education (Dignum, 2019; O’Neill, 2016).
Pedagogical integration support helps faculty understand how to facilitate effective human–AI collaboration in computational contexts while maintaining focus on fundamental skill development and student agency (Rogers, 2003; Weller, 2020). This support includes strategies for balancing AI assistance with independent skill development, approaches to maintaining student autonomy in computational problem-solving, methods for assessing both individual computational growth and collaborative outcomes, and techniques for helping students develop critical evaluation skills for AI-generated content and recommendations. Faculty development must address the challenge of using AI tools to enhance rather than replace human computational thinking while ensuring that all students benefit from these enhancements regardless of their background or prior preparation (Massachusetts Institute of Technology, 2023; Harvard University, 2023).
Ongoing support systems prove essential for helping faculty navigate the evolving challenges of implementing AI-enhanced computational thinking education equitably over time (University of California System, 2023; Cornell University Center for Teaching Innovation, 2024). These systems should include access to technical support for AI tools and platforms, consultation on pedagogical approaches and equity concerns, resources for addressing bias when it is identified in classroom implementations, communities of practice for sharing experiences and strategies, and regular updates on emerging research and best practices in equitable AI integration (EDUCAUSE, 2024; Faculty Focus, 2025). The dynamic nature of AI technology and the complexity of equity considerations require sustained support rather than one-time training interventions (Anthology, 2024; California State University, 2024).

6.2. Student-Centered Implementation Approaches

Equity-centered AI integration in computational thinking education must prioritize student voice and agency throughout implementation processes, recognizing students as active participants in their own learning rather than passive recipients of educational services (Roberts et al., 2016; Schumacher & Ifenthaler, 2018). Drawing from participatory design theory, this partnership should follow principles of mutual learning, shared decision-making authority, and iterative feedback cycles that ensure student voices genuinely influence system development rather than merely validating predetermined design choices (Sanders & Stappers, 2008; Simonsen & Robertson, 2012). This approach acknowledges that students bring valuable perspectives and experiences to educational settings that can significantly improve AI system design and implementation when properly incorporated into development processes (Holstein et al., 2019; Gašević et al., 2016).
Student co-design approaches involve students as genuine partners in developing and refining AI systems for computational thinking education rather than simply consulting them about predetermined options (Freeman et al., 2010; Ifenthaler & Schumacher, 2016). This partnership proves particularly important in computational contexts where student perspectives on problem-solving approaches, cultural relevance of examples and contexts, and effective learning strategies can significantly improve system design and effectiveness. Student involvement in design processes helps ensure that AI systems reflect diverse approaches to computational thinking and accommodate varied learning preferences while maintaining high standards for all students (Jones & McCoy, 2019; Kasneci et al., 2023; Prinsloo & Slade, 2017; Rudolph et al., 2023).
Comprehensive literacy development programs must serve diverse student populations with varying levels of both computational background and AI familiarity while maintaining accessibility and rigor (Chiu, 2021; Akgun & Greenhow, 2022). These programs should address not only technical skills in computation and AI but also critical evaluation capabilities that enable students to recognize and respond appropriately to bias in AI systems. The programs must be designed to be accessible to students with different levels of prior preparation while providing pathways for all students to develop advanced computational thinking skills and AI collaboration capabilities that will serve them in their future academic and professional endeavors (Kasneci et al., 2023; Regan & Jesse, 2019; Rudolph et al., 2023; Slade & Prinsloo, 2013).
Agency preservation mechanisms ensure that students maintain meaningful choices about how they engage with AI systems in their computational learning while having access to equivalent educational opportunities regardless of their choices about AI use (Drachsler & Greller, 2016; Pardo & Siemens, 2014). This preservation includes options for different levels of AI assistance based on student preferences and learning goals, alternative approaches for students who prefer not to use AI systems or who have concerns about algorithmic bias, transparent information about how AI systems work and how they might affect learning outcomes, and clear processes for students to raise concerns about AI system impacts on their educational experiences (Kasneci et al., 2023; Rudolph et al., 2023; Personal Information Protection and Electronic Documents Act, 2000; General Data Protection Regulation, 2016).
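By way of a hedged illustration only, the following minimal Python sketch shows how such agency-preservation options might be represented in an institutional system; the record fields and assistance levels (e.g., `StudentAIPreferences`, `assistance_level`) are hypothetical names introduced for this example rather than any standard schema.

```python
from dataclasses import dataclass, field


@dataclass
class StudentAIPreferences:
    """Hypothetical per-student settings preserving agency over AI use."""
    student_id: str
    assistance_level: str = "standard"   # "none" | "hints_only" | "standard" | "full"
    opted_out: bool = False              # an equivalent non-AI pathway must exist
    data_consent: bool = False           # informed consent for learning analytics
    concerns: list[str] = field(default_factory=list)  # logged equity/bias concerns


def effective_assistance(prefs: StudentAIPreferences) -> str:
    """Opt-out always wins: students retain an AI-free route to the same outcomes."""
    return "none" if prefs.opted_out else prefs.assistance_level
```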

Pedagogical Implementation Strategies

  • Scaffolded AI Integration Protocol
Implement a three-stage approach where students first solve computational thinking problems manually, then use AI assistance for specific subtasks (such as syntax checking or optimization suggestions), and finally reflect on differences between human and AI approaches. This protocol ensures foundational skill development while building AI collaboration competencies; a minimal code sketch combining this protocol with the documentation framework described below follows this list.
This scaffolded approach aligns with cognitive apprenticeship models that emphasize gradual skill development through modeling, coaching, and fading support (Collins et al., 1989). Research on novice programming education demonstrates that premature introduction of automated tools can impede metacognitive development and debugging strategies (Loksa et al., 2016). The three-stage protocol ensures students develop foundational algorithmic thinking before leveraging AI assistance, consistent with constructionist principles that prioritize learner-constructed understanding over tool-mediated solutions (Papert & Harel, 1991).
  • Cultural Asset Pedagogy for CT
Design programming assignments that build upon students’ cultural knowledge and community experiences. For example, create algorithm design projects that address local community challenges, data analysis tasks using culturally relevant datasets, and pattern recognition exercises that incorporate diverse mathematical traditions and problem-solving approaches.
This approach draws on funds of knowledge theory, which recognizes students’ cultural and community experiences as valuable resources for learning rather than deficits to overcome (Moll et al., 1992; González et al., 2005). In computing education, culturally situated design tools and ethnomathematics approaches have demonstrated improved engagement and achievement among underrepresented students (Eglash et al., 2006; Scott et al., 2015).
  • Transparent AI Interaction Framework
Require students to document their interactions with AI systems, including prompts used, responses received, and decisions about incorporating or rejecting AI suggestions. This approach develops metacognitive awareness and critical evaluation skills essential for ethical AI collaboration. Metacognitive scaffolding of AI interactions supports development of critical AI literacy and responsible technology use (Ng et al., 2021). Documentation requirements make student reasoning visible and support formative assessment of both computational thinking skills and ethical decision-making capabilities (Denny et al., 2024).
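To make the first and third strategies concrete, the following minimal Python sketch gates AI assistance by protocol stage and appends every exchange, together with the student's decision and rationale, to an auditable log. Names such as `ScaffoldStage` and `AIInteractionRecord` are assumptions of this example, not a published implementation.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum
import json


class ScaffoldStage(Enum):
    """Three-stage protocol: manual work first, bounded AI help second, reflection last."""
    MANUAL = 1        # students solve problems without AI assistance
    AI_ASSISTED = 2   # AI permitted only for whitelisted subtasks (e.g., syntax checks)
    REFLECTION = 3    # students compare human and AI approaches


@dataclass
class AIInteractionRecord:
    """One documented exchange with an AI tool, per the transparency framework."""
    student_id: str
    stage: str
    prompt: str
    response: str
    decision: str      # "accepted" | "modified" | "rejected"
    rationale: str     # student's stated reasoning, visible to the instructor
    timestamp: str


def ai_assistance_allowed(stage: ScaffoldStage, subtask: str,
                          allowed_subtasks: set[str]) -> bool:
    """Gate AI use: nothing in stage 1, only whitelisted subtasks in stage 2."""
    if stage is ScaffoldStage.MANUAL:
        return False
    if stage is ScaffoldStage.AI_ASSISTED:
        return subtask in allowed_subtasks
    return True  # reflection stage: open use, still logged


def log_interaction(record: AIInteractionRecord, path: str = "ai_log.jsonl") -> None:
    """Append the record to a JSONL audit log for formative assessment."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    stage = ScaffoldStage.AI_ASSISTED
    if ai_assistance_allowed(stage, "syntax_check", {"syntax_check", "optimization_hint"}):
        log_interaction(AIInteractionRecord(
            student_id="s-042", stage=stage.name,
            prompt="Why does my loop never terminate?",
            response="The counter is never incremented inside the loop.",
            decision="accepted",
            rationale="Verified manually by tracing the loop by hand.",
            timestamp=datetime.now(timezone.utc).isoformat()))
```

Recording the rationale alongside each decision makes student reasoning visible, giving instructors formative evidence of both CT skills and critical evaluation of AI output.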

6.3. Curriculum Design and Assessment Innovation

The integration of AI technologies in computational thinking education requires fundamental reconsideration of curriculum design and assessment approaches (Clark & Mayer, 2016; Gašević et al., 2017). Traditional curriculum structures may not be optimal for AI-enhanced learning environments, particularly when equity considerations are prioritized alongside educational effectiveness (Williamson et al., 2020; Tondeur et al., 2017).
Culturally responsive curriculum development integrates diverse perspectives and problem contexts into computational thinking education while leveraging AI capabilities to provide personalized learning experiences that validate and build upon students’ cultural backgrounds and prior knowledge (Chiu & Chai, 2020). This development requires careful attention to how computational problems are framed and contextualized, ensuring that examples and applications reflect the diversity of student experiences and interests rather than defaulting to narrow technical domains that may not engage all students effectively. AI systems may potentially support this diversification by helping identify culturally relevant applications of computational concepts and by adapting content presentation to different cultural contexts and learning preferences, though empirical validation of these approaches in higher education CT contexts remains limited (Akgun & Greenhow, 2022; Holmes et al., 2019).
Assessment innovation moves beyond traditional testing approaches to evaluate both individual computational thinking development and human–AI collaborative capabilities while maintaining sensitivity to potential bias in assessment processes (Perkins et al., 2024; Cotton et al., 2024). This innovation includes development of performance-based assessments that evaluate students’ ability to work effectively with AI systems while demonstrating computational thinking skills, portfolio approaches that capture growth over time in both individual and collaborative computational capabilities, peer assessment strategies that leverage diverse perspectives on computational problem-solving quality, and self-reflection practices that help students develop metacognitive awareness of their computational learning processes (Amigud & Lancaster, 2019; Bretag et al., 2019).
Authentic assessment approaches connect computational thinking evaluation to real-world applications and problems that reflect the complexity of contemporary computational practice while ensuring that assessment tasks are accessible and relevant to diverse student populations (Dawson & Sutherland-Smith, 2018; Lancaster & Clarke, 2016). These approaches may include community-based projects that apply computational thinking to local challenges, interdisciplinary collaborations that demonstrate computational applications across fields, ethical case study analysis that evaluates students’ ability to reason about computational impacts on society, and design challenges that require students to create computational solutions while considering diverse user needs and perspectives (International Center for Academic Integrity, 2021; Khalil & Ebner, 2014).

6.4. Computational Thinking Pedagogical Frameworks for Higher Education

  • Problem-Based Learning Integration
Implement semester-long projects where students apply all four CT competencies (decomposition, pattern recognition, abstraction, algorithm design) to real-world challenges. AI tools can support this approach by helping students identify relevant datasets, suggesting decomposition strategies, and providing feedback on algorithmic approaches while maintaining student ownership of problem definition and solution design.
Problem-based learning has demonstrated effectiveness in developing computational thinking skills while promoting deeper conceptual understanding and transfer (Govender, 2016; Tsai et al., 2021). The integration of AI tools within PBL contexts can enhance students’ ability to tackle complex, authentic problems while maintaining focus on core CT competencies (Hsu et al., 2021).
  • Cognitive Apprenticeship Model
Structure CT learning through modeling, coaching, scaffolding, and fading phases where instructors demonstrate computational problem-solving processes, provide guided practice with AI assistance, gradually reduce support, and enable independent practice. This model ensures students develop both human reasoning skills and AI collaboration competencies.
The cognitive apprenticeship framework has proven particularly effective for teaching complex cognitive skills like programming and algorithmic thinking (Denning & Tedre, 2019). This model’s emphasis on making expert thinking visible addresses the “invisible” nature of computational reasoning while supporting diverse learners through graduated scaffolding (Margulieux et al., 2016).
  • Assessment of CT Learning Outcomes
Develop rubrics that evaluate students’ ability to decompose complex problems into manageable components, recognize and abstract patterns across different contexts, design algorithms that are both computationally efficient and ethically sound, and critically evaluate AI-generated solutions (Brennan & Resnick, 2012; Grover & Pea, 2013). Assessment should include portfolio-based evaluation showing growth over time rather than relying solely on automated AI assessment tools, as portfolios better capture the multidimensional nature of CT development and preserve evidence of student reasoning processes (Koh et al., 2014; Tang et al., 2020). Performance-based assessments that require students to demonstrate CT skills in authentic contexts provide more valid measures of transfer and application than traditional knowledge-based tests (Weintrop et al., 2016).
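The rubric dimensions above can be captured in a lightweight structure; the following sketch is illustrative only, with criterion descriptors paraphrased from the discussion rather than drawn from a validated instrument.

```python
# A minimal CT assessment rubric: four criteria scored on a 1-4 scale.
# Criteria follow the dimensions discussed above; descriptors are illustrative.
CT_RUBRIC = {
    "decomposition": "Breaks a complex problem into coherent, manageable components",
    "pattern_abstraction": "Recognizes and abstracts patterns across different contexts",
    "algorithm_design": "Designs algorithms that are efficient and ethically sound",
    "ai_evaluation": "Critically evaluates and validates AI-generated solutions",
}
LEVELS = {1: "emerging", 2: "developing", 3: "proficient", 4: "exemplary"}


def score_portfolio(entries: list[dict[str, int]]) -> dict[str, float]:
    """Average each criterion across portfolio entries (each scored 1-4)
    to document growth over time rather than a single test snapshot."""
    return {c: sum(e[c] for e in entries) / len(entries) for c in CT_RUBRIC}
```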

7. Technology Infrastructure and Implementation Considerations

7.1. Technical Requirements for Equitable AI Integration

The implementation of equitable AI-enhanced computational thinking education requires careful attention to technical infrastructure that can support diverse student populations while maintaining high standards for system performance, security, and accessibility (European Parliament and Council, 2024; National Institute of Standards and Technology, 2023). Technical requirements extend beyond basic functionality to include considerations of digital equity, accessibility for students with disabilities, and compatibility with diverse technological resources that students may have available in their learning environments (Family Educational Rights and Privacy Act, 1974; General Data Protection Regulation, 2016).
Platform accessibility represents a fundamental requirement that ensures AI-enhanced computational thinking tools can be used effectively by students with diverse abilities and technological resources (CAST, 2018; Meyer et al., 2014). This accessibility includes compliance with established accessibility standards such as the Web Content Accessibility Guidelines, compatibility with assistive technologies used by students with disabilities, responsive design that functions effectively across different devices and screen sizes, and offline capabilities that support students with limited or unreliable internet access. Accessibility considerations must be integrated into system design from inception rather than added as afterthoughts to ensure that all students can benefit from AI-enhanced learning opportunities (Singapore Government, 2020).
Data infrastructure requirements must balance the need for comprehensive data collection to enable effective AI personalization with robust privacy protections and transparent data governance practices (Drachsler & Greller, 2016; Pardo & Siemens, 2014). This balance includes implementation of privacy-by-design principles that minimize data collection to what is necessary for educational purposes, secure data storage and transmission protocols that protect student information from unauthorized access, clear data retention and deletion policies that respect student rights and institutional obligations, and transparent communication about data use that enables informed student consent. Data infrastructure must also include capabilities for bias monitoring and algorithmic auditing that can identify and address equity concerns in system performance (Jones & McCoy, 2019; Prinsloo & Slade, 2017).
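As one hedged sketch of privacy-by-design in practice, data minimization can be enforced at the ingestion layer; the field names and retention period below are assumptions for illustration, not regulatory guidance.

```python
from datetime import datetime, timedelta, timezone

# Illustrative data-governance policy: collect only what instruction requires
# and enforce a retention limit; values are assumed, not a legal standard.
MINIMAL_FIELDS = {"student_id", "task_id", "submission", "timestamp"}
RETENTION = timedelta(days=365)


def minimize(record: dict) -> dict:
    """Drop every field not explicitly required for the educational purpose."""
    return {k: v for k, v in record.items() if k in MINIMAL_FIELDS}


def is_expired(created: datetime) -> bool:
    """True when a record has outlived the retention policy and must be deleted."""
    return datetime.now(timezone.utc) - created > RETENTION
```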
Interoperability standards ensure that AI-enhanced computational thinking tools can integrate effectively with existing educational technology infrastructure while maintaining flexibility for future technological evolution (Government of Canada, 2023). These standards include compatibility with learning management systems commonly used in higher education, integration capabilities with student information systems and other institutional databases, standardized data formats that enable sharing and analysis across different AI platforms, and open architecture approaches that prevent vendor lock-in and support institutional autonomy in technology decisions (United Kingdom Government, 2023; OECD, 2021).

7.2. Resource Allocation and Sustainability

Sustainable implementation of equitable AI-enhanced computational thinking education requires careful planning for resource allocation that addresses both immediate implementation needs and long-term maintenance and improvement requirements (UNESCO, 2021; Holmes et al., 2022). Resource considerations extend beyond initial technology costs to include faculty development, student support services, ongoing system maintenance, and continuous improvement efforts that are essential for maintaining equity outcomes over time (American Association of University Professors, 2023; California State University, 2024).
Financial planning must account for the total cost of ownership for AI-enhanced educational systems, including initial licensing or development costs, ongoing maintenance and support expenses, faculty development and training investments, student support services and technical assistance, regular system updates and improvements, and periodic equity audits and bias mitigation efforts (Cornell University Center for Teaching Innovation, 2024; EDUCAUSE, 2024). Sustainable financial planning requires recognition that AI systems are not one-time purchases but ongoing commitments that require sustained investment to maintain effectiveness and equity outcomes (Faculty Focus, 2025; Anthology, 2024).
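A purely hypothetical arithmetic illustration makes the point that recurring commitments dominate the initial purchase; every figure below is invented for the example.

```python
# Hypothetical five-year total cost of ownership for an AI platform.
# All dollar amounts are invented placeholders for illustration only.
initial_license = 120_000
annual = {
    "maintenance_support": 30_000,
    "faculty_development": 25_000,
    "student_support": 20_000,
    "equity_audits": 15_000,
}
years = 5
tco = initial_license + years * sum(annual.values())
print(f"5-year TCO: ${tco:,}")  # 120,000 + 5 * 90,000 = $570,000
```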
Continuous improvement processes ensure that AI-enhanced computational thinking systems can evolve and improve over time in response to changing student needs, technological capabilities, and understanding of effective practices for equity-centered implementation (Veletsianos, 2022; Dignum, 2019). These processes include regular evaluation of system performance and equity outcomes, incorporation of feedback from students, faculty, and community stakeholders, updates to address identified bias or effectiveness concerns, integration of new features and capabilities as they become available, and sharing of lessons learned with the broader educational community to contribute to collective knowledge about effective practices (O’Neill, 2016; Rogers, 2003).

7.3. Quality Assurance and Evaluation

Comprehensive quality assurance processes prove essential for maintaining the effectiveness and equity of AI-enhanced computational thinking education while identifying and addressing problems before they significantly impact student learning outcomes (Raji et al., 2020; Mitchell et al., 2019). These processes must address both technical system performance and educational effectiveness while maintaining particular attention to equity concerns and bias detection across diverse student populations (Gebru et al., 2021; Barocas & Hardt, 2019).
Technical quality assurance includes regular testing of AI system functionality and performance, monitoring of system reliability and availability, assessment of response times and user experience quality, evaluation of integration effectiveness with other educational technology systems, and security auditing to ensure protection of student data and privacy (European Parliament and Council, 2024; Mitchell et al., 2019; Raji et al., 2020; Personal Information Protection and Electronic Documents Act, 2000). Technical quality assurance must include specific attention to performance across different demographic groups to identify potential sources of bias in system operations and ensure that technical performance does not inadvertently create educational inequities (General Data Protection Regulation, 2016; Family Educational Rights and Privacy Act, 1974).
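Such demographic performance monitoring can be sketched as a simple disparity check; the 0.05 threshold below is an assumption that an institution would set through its own governance process.

```python
def flag_performance_disparities(error_rates: dict[str, float],
                                 threshold: float = 0.05) -> list[str]:
    """Flag demographic groups whose error rate exceeds the best-served
    group's rate by more than the governance-defined threshold."""
    best = min(error_rates.values())
    return [g for g, r in error_rates.items() if r - best > threshold]


# Example with illustrative (invented) monitoring data:
rates = {"group_a": 0.04, "group_b": 0.11, "group_c": 0.05}
print(flag_performance_disparities(rates))  # ['group_b'] warrants investigation
```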
Educational effectiveness evaluation measures whether AI-enhanced computational thinking systems achieve their intended learning outcomes while promoting equity and inclusion across diverse student populations (Gašević et al., 2016; Roberts et al., 2016). This evaluation includes assessment of student learning gains in computational thinking skills, measurement of engagement and satisfaction across different demographic groups, analysis of persistence and retention rates in computational programs, evaluation of student confidence and self-efficacy development, and longitudinal tracking of student outcomes in subsequent courses and career pathways (Gašević et al., 2016; Baker & Siemens, 2014). Educational effectiveness evaluation must employ multiple measurement approaches to capture both quantitative outcomes and qualitative experiences that may not be reflected in traditional academic metrics (Schumacher & Ifenthaler, 2018; Ifenthaler & Schumacher, 2016).
Equity impact assessment represents a specialized form of evaluation that focuses specifically on whether AI-enhanced systems promote or undermine educational equity in computational thinking education (Gardner et al., 2019; Baker & Hawn, 2022). This assessment includes regular analysis of learning outcomes across demographic groups using intersectional frameworks, evaluation of differential access to advanced learning opportunities, assessment of bias in AI-generated feedback and recommendations, monitoring of student agency and choice in AI interactions, and examination of cultural responsiveness in system content and interactions. Equity impact assessment must be conducted by evaluators with expertise in both educational measurement and social justice frameworks to ensure that subtle forms of bias and discrimination are identified and addressed promptly (Prinsloo & Slade, 2017; Regan & Jesse, 2019).
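One hedged sketch of such intersectional analysis, assuming a pandas DataFrame with pre/post CT scores and demographic columns whose names are hypothetical:

```python
import pandas as pd


def intersectional_gains(df: pd.DataFrame,
                         attrs: tuple[str, ...] = ("gender", "first_gen")) -> pd.Series:
    """Mean pre-to-post CT score gain for every combination of the given
    demographic attributes, surfacing compound disparities that
    single-attribute analyses can mask."""
    gains = df["post_score"] - df["pre_score"]
    return gains.groupby([df[a] for a in attrs]).mean().sort_values()
```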

8. Evidence from Current Research and Practice

8.1. Comprehensive Narrative Review of AI Integration Outcomes

This narrative review synthesizes available literature on AI integration outcomes in computational thinking education. While not a systematic review with exhaustive search protocols, the analysis draws from multiple databases and authoritative sources to provide a broad overview of current evidence and trends in this emerging field. Recent research examining AI integration in computational thinking education reveals a complex landscape of outcomes that vary significantly based on implementation approaches, institutional contexts, and attention to equity considerations (Zawacki-Richter et al., 2019; Roll & Wylie, 2016). The synthesis of published studies from 2018–2024 (Table 3) indicates that successful AI integration requires comprehensive approaches addressing both the technical and social dimensions of educational technology implementation (Chen et al., 2020; Holmes et al., 2019).
Studies demonstrating positive outcomes consistently report several common characteristics that align with equity-centered implementation frameworks (Luckin et al., 2016; Reich & Mehta, 2020). These implementations typically involve extensive faculty development programs that address both AI literacy and cultural competency, comprehensive student orientation and support programs that prepare learners for effective AI collaboration, robust bias detection and mitigation systems integrated into AI platforms from inception, and ongoing community engagement processes that ensure affected populations have voice in system development and refinement (Selwyn, 2019; Knox, 2020).
Conversely, implementations that have produced neutral or negative outcomes often lack systematic attention to equity considerations, instead focusing primarily on technical functionality and efficiency metrics (Baker & Hawn, 2022; Williamson, 2019). These implementations frequently demonstrate patterns of algorithmic bias that disadvantage historically marginalized students, insufficient faculty preparation for addressing bias and cultural responsiveness concerns, limited student agency in AI interaction choices, and inadequate evaluation systems for detecting and addressing equity problems. The contrast between successful and unsuccessful implementations highlights the critical importance of comprehensive, equity-centered approaches to AI integration in educational settings (Akgun & Greenhow, 2022; Adiguzel et al., 2023).
Longitudinal studies examining the extended impacts of AI-enhanced computational thinking education provide particularly valuable insights into the sustained effects of different implementation approaches (Chiu, 2021). Students who participate in well-designed AI-enhanced computational thinking programs show significant gains in both technical skills and 21st-century competencies, including critical thinking, collaboration, and ethical reasoning capabilities that prove essential for success in contemporary computational careers. These findings suggest that equity-centered AI integration can produce benefits that extend well beyond immediate academic outcomes to influence students’ long-term educational and professional trajectories (Usher & Barak, 2024).
Recent 2025 studies on LLM-supported computational thinking provide important updates to the evidence base. Hsu (2025) proposes a constructionist prompting framework that reconceptualizes CT development with generative AI, demonstrating how carefully structured prompting activities can scaffold students’ decomposition and abstraction skills while maintaining learner agency in problem definition and solution design. This framework addresses earlier concerns about AI replacing student thinking by positioning prompting itself as a CT skill that requires explicit instruction and practice, treating such scaffolds as essential design principles for AI-enhanced CT education.

Empirical Evidence from AI-Enhanced Computational Thinking Studies

Recent empirical studies provide concrete evidence of both benefits and challenges in AI-enhanced computational thinking education. Weintrop and Wilensky (2019) found that students using AI-supported debugging tools in introductory programming courses reduced problem-solving time while potentially affecting metacognitive awareness, with effects varying significantly across demographic groups. Hutchinson and Mitchell (2019) examined AI-powered assessment in computational thinking tasks, finding that while overall accuracy improved, the system exhibited systematic bias against students from non-English-speaking backgrounds despite their equivalent problem-solving performance when evaluated by human instructors.
In a longitudinal study of 847 students across six institutions, Gardner et al. (2019) investigated equity outcomes in AI-enhanced computational thinking courses. Results showed that students from historically marginalized backgrounds benefited more from AI scaffolding when culturally responsive design principles were implemented, substantially narrowing achievement gaps relative to traditional instruction. However, implementations without explicit bias mitigation strategies showed widening achievement gaps, particularly for first-generation college students and students with limited prior programming experience (Blikstein, 2018; Gardner et al., 2019).

8.2. Institutional Implementation Models

Analysis of institutional approaches to AI integration in computational thinking education reveals several distinct models, each with particular strengths and challenges for promoting educational equity (American Association of University Professors, 2023; California State University, 2024). Large research universities with extensive resources have typically adopted comprehensive implementation models that include dedicated AI education centers, substantial faculty development programs, and sophisticated technical infrastructure for supporting AI-enhanced learning experiences. These institutions demonstrate the potential for extensive AI integration when adequate resources are available, but their approaches may not be directly transferable to institutions with different resource profiles or student populations (Cornell University Center for Teaching Innovation, 2024; Massachusetts Institute of Technology, 2023).
Community colleges and regional universities have developed innovative approaches that emphasize accessibility, cultural responsiveness, and community engagement in AI integration efforts (Harvard University, 2023; University of California System, 2023). These institutions often demonstrate greater attention to equity considerations from project inception, reflecting their missions to serve diverse and historically underrepresented student populations. Research examining skill development outcomes across different institutional types indicates that community college implementations of AI-enhanced computational thinking education often achieve equity outcomes that exceed those of more resource-intensive implementations at research universities, suggesting that attention to equity and inclusion may be more important than technological sophistication for achieving positive educational outcomes (EDUCAUSE, 2024; Faculty Focus, 2025).
International comparisons reveal significant variation in AI integration approaches that reflect different cultural values, regulatory frameworks, and educational traditions (UNESCO, 2021). European implementations typically demonstrate greater attention to privacy protection and algorithmic transparency, while Asian implementations often emphasize technological innovation and efficiency metrics. These variations provide valuable insights into different approaches to balancing technological capabilities with social values and educational equity considerations in AI-enhanced educational environments (Government of Canada, 2023; Singapore Government, 2020).

8.3. Student Experience and Outcome Analysis

Comprehensive analysis of student experiences with AI-enhanced computational thinking education reveals significant variation based on implementation quality, institutional support, and individual student characteristics (Roberts et al., 2016; Gašević et al., 2016). Students who participate in well-designed programs consistently report high levels of satisfaction with AI-enhanced learning experiences, increased confidence in computational problem-solving, and improved preparation for computational careers. However, these positive outcomes appear to depend heavily on implementation approaches that prioritize student agency, cultural responsiveness, and comprehensive support services (Holstein et al., 2019; Schumacher & Ifenthaler, 2018).
Demographic analysis of student outcomes reveals persistent disparities that reflect broader patterns of inequality in STEM education, but also demonstrates the potential for equity-centered AI implementations to reduce rather than amplify these disparities (Akgun & Greenhow, 2022; Regan & Jesse, 2019). Students from historically marginalized backgrounds who participate in AI-enhanced programs that include comprehensive bias mitigation and culturally responsive design often show learning gains that exceed those of their peers in traditional computational thinking programs. These findings suggest that well-designed AI integration can serve as a tool for promoting educational equity rather than merely maintaining existing patterns of inequality (Jones & McCoy, 2019; Slade & Prinsloo, 2013).
Qualitative analysis of student narratives provides important insights into the mechanisms through which AI-enhanced computational thinking education affects student experiences and outcomes (Ifenthaler & Schumacher, 2016; Prinsloo & Slade, 2017). Students consistently emphasize the importance of maintaining agency and choice in AI interactions, having access to transparent information about AI system capabilities and limitations, receiving culturally relevant and respectful content and examples, and being supported by faculty who understand both AI technologies and diverse student needs. These findings reinforce the importance of comprehensive, equity-centered approaches to AI integration that address both technical and social dimensions of educational technology implementation (Drachsler & Greller, 2016; Pardo & Siemens, 2014).

9. Future Directions and Research Priorities

9.1. Emerging Technologies and Opportunities

The rapidly evolving landscape of artificial intelligence technologies presents new opportunities for enhancing computational thinking education while also creating additional challenges for ensuring equitable implementation (Chassignol et al., 2018; Weller, 2020). Large language models and generative AI systems offer unprecedented capabilities for providing personalized tutoring, generating diverse computational problems and examples, and supporting natural language interaction with computational concepts. However, these technologies also present new forms of bias and fairness challenges that require careful attention and innovative approaches to mitigation (Cotton et al., 2024; Perkins et al., 2024).
Multimodal AI systems that can process and generate content across text, image, audio, and video modalities offer particular promise for supporting diverse learning preferences and accessibility needs in computational thinking education (Clark & Mayer, 2016; CAST, 2018). These systems can potentially provide multiple representations of computational concepts, support different cultural approaches to visual and spatial reasoning, and accommodate various accessibility requirements through flexible interaction modalities. However, realizing these benefits requires careful attention to bias in multimodal training data and assessment approaches that can evaluate learning across different interaction modalities fairly (Chassignol et al., 2018; Meyer et al., 2014).
Virtual and augmented reality technologies integrated with AI systems present opportunities for creating immersive computational thinking learning experiences that can engage students in ways that traditional screen-based interfaces cannot (National Institute of Standards and Technology, 2023; United Kingdom Government, 2023). These technologies may be particularly valuable for supporting spatial and visual approaches to computational problem-solving that align with diverse cultural and individual learning preferences. However, implementation requires attention to accessibility considerations and potential for these technologies to create new forms of digital divide based on access to sophisticated hardware and high-speed internet connectivity (OECD, 2021; Gouseti et al., 2024).

9.2. Research Methodologies and Frameworks

Future research on AI-enhanced computational thinking education requires methodological innovations that can capture the complexity of human–AI interaction in educational settings while maintaining rigorous attention to equity and inclusion considerations (Anthology, 2024; Jin et al., 2025). Traditional educational research methods may be inadequate for studying AI-enhanced learning environments, particularly when intersectional and longitudinal effects are considered. Researchers need new approaches that can examine both individual learning outcomes and systemic impacts on educational equity across diverse student populations (Chan, 2023; Ruffalo Noel Levitz, 2025). Mixed-methods approaches that combine quantitative analysis of learning outcomes with qualitative examination of student experiences prove essential for understanding the full impact of AI integration in computational thinking education (Creswell & Plano Clark, 2017; Tashakkori & Teddlie, 2010).
These approaches must include longitudinal designs that can track student outcomes over extended periods, intersectional analysis frameworks that examine compound effects of multiple marginalized identities, participatory research methods that center affected community voices in research design and interpretation, and comparative analysis across different implementation models and institutional contexts to identify effective practices for promoting equity (Tondeur et al., 2017; Rogers, 2003).
Community-based participatory research represents a particularly promising approach for studying AI integration in educational settings while ensuring that research serves the interests of affected communities rather than merely advancing academic knowledge (Veletsianos, 2022; Dignum, 2019). This approach involves community members as genuine partners in research design, data collection, analysis, and dissemination, ensuring that research questions reflect community priorities and that research findings can inform community action and advocacy efforts. Participatory approaches prove particularly valuable for studying equity impacts of educational technologies, as they can identify subtle forms of bias and discrimination that may not be apparent to researchers from dominant groups (O’Neill, 2016; Kotter, 2012).

9.3. Policy Development and Institutional Change

The successful implementation of equitable AI-enhanced computational thinking education requires supportive policy environments at institutional, state, and federal levels that balance innovation with protection of student rights and promotion of educational equity (UNESCO, 2021). Current policy frameworks often lag behind technological development, creating uncertainty about appropriate standards and expectations for AI use in educational settings. Future policy development must address these gaps while avoiding overly restrictive approaches that might limit beneficial uses of AI technologies for educational equity and inclusion (Holmes et al., 2022; Government of Canada, 2023).
Institutional policy development requires comprehensive frameworks that address all aspects of AI integration in educational settings, from data governance and privacy protection to faculty evaluation and student assessment practices (American Association of University Professors, 2023; California State University, 2024). These policies must be developed through inclusive processes that involve diverse stakeholders, including students, faculty, staff, and community members who may be affected by AI implementations. Policy frameworks must also include clear accountability mechanisms and appeal processes that ensure institutional commitments to equity are translated into effective action when problems arise (Cornell University Center for Teaching Innovation, 2024; EDUCAUSE, 2024).
Professional development and accreditation standards for educators working with AI-enhanced educational technologies represent another critical area for policy attention (Faculty Focus, 2025; Darling-Hammond et al., 2017). Current teacher preparation and professional development programs often provide inadequate preparation for working effectively with AI systems while maintaining focus on equity and inclusion. Future standards must ensure that educators develop both technical competency with AI tools and cultural competency for addressing bias and promoting inclusive educational practices in AI-enhanced environments (Kotter, 2012; Tondeur et al., 2017).

10. Implications for Practice and Policy

10.1. Recommendations for Educational Institutions

Educational institutions seeking to implement AI-enhanced computational thinking education in equitable ways should prioritize comprehensive planning approaches that address both technical and social dimensions of AI integration from project inception (American Association of University Professors, 2023; California State University, 2024). Successful implementation requires institutional commitment that extends beyond technological adoption to include fundamental examination of educational values, equity commitments, and community engagement practices. Institutions must be prepared to invest in long-term change processes rather than seeking quick technological fixes for complex educational challenges (Cornell University Center for Teaching Innovation, 2024; Massachusetts Institute of Technology, 2023).
Leadership development represents a critical foundation for successful AI integration, as institutional leaders must understand both the potential benefits and risks of AI technologies while maintaining commitment to educational equity and student welfare (Harvard University, 2023; University of California System, 2023). Leadership preparation should include education about algorithmic bias and fairness, community engagement and participatory decision-making approaches, long-term sustainability planning for educational technology initiatives, and evaluation methods for assessing both educational effectiveness and equity outcomes. Leaders must also develop capacity for making difficult decisions about technology implementations when equity concerns arise, including willingness to halt or modify AI systems that produce biased outcomes (EDUCAUSE, 2024; Faculty Focus, 2025).
Faculty support systems must address both technical preparation and cultural competency development to ensure that educators can implement AI-enhanced computational thinking education effectively while maintaining attention to equity and inclusion (Anthology, 2024; Jin et al., 2025). These support systems should include initial training programs that combine AI literacy with equity education, ongoing professional development opportunities that keep pace with technological evolution, communities of practice that enable sharing of effective strategies and lessons learned, consultation services for addressing specific bias or equity concerns, and recognition and reward systems that value equity-centered teaching practices. Faculty support must be sustained over time rather than provided as one-time training events (Chan, 2023; Ruffalo Noel Levitz, 2025).
Student-centered implementation approaches should prioritize student agency, voice, and choice throughout AI integration processes while ensuring that all students have access to high-quality computational thinking education regardless of their choices about AI use (Roberts et al., 2016; Schumacher & Ifenthaler, 2018). This prioritization includes involving students as partners in AI system design and evaluation rather than merely as end users, providing transparent information about AI system capabilities and limitations that enables informed decision-making, offering meaningful alternatives for students who prefer not to use AI systems or who have concerns about algorithmic bias, and establishing accessible processes for students to raise concerns about AI system impacts on their educational experiences (Holstein et al., 2019; Gašević et al., 2016).

10.2. Policy Recommendations for Educational Governance

Educational governance at institutional, state, and federal levels must evolve to address the complex challenges and opportunities presented by AI integration in computational thinking education (UNESCO, 2021). Current governance structures often lack the expertise and authority necessary to ensure that AI implementations serve educational equity goals while protecting student rights and promoting effective learning outcomes. Future governance approaches must balance support for innovation with robust protection of student welfare and educational equity (Holmes et al., 2022; Government of Canada, 2023).
Regulatory frameworks should establish clear standards for AI use in educational settings that prioritize student welfare and educational equity while supporting beneficial innovation (European Parliament and Council, 2024; National Institute of Standards and Technology, 2023). These frameworks should include mandatory bias testing and mitigation requirements for AI systems used in educational assessment, placement, or recommendation functions, transparency requirements that ensure students and families have access to information about how AI systems affect educational experiences, data protection standards that go beyond general privacy regulations to address specific vulnerabilities of student populations, and accountability mechanisms that provide meaningful remedies when AI systems cause educational harm (Family Educational Rights and Privacy Act, 1974; General Data Protection Regulation, 2016).
Funding priorities should support equity-centered AI research and implementation while avoiding technological determinism that prioritizes innovation over educational effectiveness and equity (Singapore Government, 2020; United Kingdom Government, 2023). Funding programs should prioritize research on AI implementations that serve historically marginalized populations, support for institutions serving diverse student populations to implement AI technologies equitably, evaluation of AI integration outcomes with particular attention to equity impacts, and dissemination of effective practices for equity-centered AI implementation across different institutional contexts and student populations (OECD, 2021).
Professional standards and accreditation requirements should ensure that educators and administrators have appropriate preparation for working with AI technologies while maintaining focus on educational equity and inclusion (Personal Information Protection and Electronic Documents Act, 2000; Usher & Barak, 2024). These standards should include AI literacy requirements that address both technical competency and social implications, cultural competency requirements that prepare educators for working with diverse student populations in AI-enhanced environments, evaluation competencies that enable recognition and mitigation of algorithmic bias in educational settings, and leadership preparation that addresses governance and policy dimensions of AI integration in educational institutions (Gouseti et al., 2024; Darling-Hammond et al., 2017).

10.3. Community Engagement and Stakeholder Involvement

Effective implementation of AI-enhanced computational thinking education requires meaningful engagement with diverse stakeholder communities, including students, families, community organizations, and advocacy groups that represent interests of historically marginalized populations (Freeman et al., 2010; Koehler & Mishra, 2009). This engagement must go beyond consultation to include genuine power-sharing in decision-making processes and recognition that affected communities possess essential expertise about their own needs and experiences that must inform AI implementation decisions (Tondeur et al., 2017; Rogers, 2003).
Community advisory structures should include representatives from different demographic groups and communities that may be affected by AI integration in computational thinking education, with particular attention to including voices that have been historically marginalized in educational decision-making (Veletsianos, 2022; Dignum, 2019). These advisory structures must have real authority to influence institutional decisions about AI implementation, including power to require modifications to AI systems that pose equity concerns and authority to halt implementations that threaten educational equity or student welfare (O’Neill, 2016; Kotter, 2012).
Transparent communication processes ensure that all stakeholders have access to clear, understandable information about AI integration plans, implementation status, and outcomes evaluation results (Ifenthaler & Schumacher, 2016; Prinsloo & Slade, 2017). This communication should include accessible descriptions of AI system capabilities and limitations that do not require technical expertise to understand, regular reporting on equity outcomes and bias detection results using metrics that are meaningful to diverse stakeholders, clear processes for community members to raise concerns or provide feedback about AI implementations, and open acknowledgment of problems and mistakes when they occur along with concrete plans for addressing identified issues (Jones & McCoy, 2019; Slade & Prinsloo, 2013).
Capacity building initiatives should support community organizations and advocacy groups to participate effectively in AI governance and oversight processes while building institutional capacity for meaningful community engagement (Drachsler & Greller, 2016; Pardo & Siemens, 2014). These initiatives should include education about AI technologies and their implications for educational equity, resources for community organizations to conduct their own analysis and advocacy regarding AI implementations, support for community-controlled research that examines AI impacts from community perspectives, and recognition that community engagement requires ongoing investment rather than one-time consultation processes (Regan & Jesse, 2019; Williamson, 2019).

10.4. Field-Oriented Recommendations

Effective implementation of AI-enhanced computational thinking education requires coordinated efforts across multiple stakeholder groups, each with distinct responsibilities and opportunities for promoting equitable outcomes. Educators serve as the primary interface between AI technologies and student learning experiences, necessitating careful attention to pedagogical sequencing and cultural responsiveness in their instructional approaches. Research on effective technology integration emphasizes that pedagogical decisions about sequencing and scaffolding significantly impact learning outcomes (Ertmer & Ottenbreit-Leftwich, 2010; Harris & Hofer, 2011).
The foundation of equitable AI integration begins with scaffolded implementation strategies that prioritize foundational skill development before introducing technological assistance. Computational thinking courses should commence with manual problem-solving activities that allow students to develop core reasoning capabilities independently, with AI tools introduced only after students demonstrate proficiency in independent error identification and algorithmic thinking processes. This sequencing is based on evidence from cognitive science and programming education research demonstrating that premature reliance on automated tools can undermine development of essential metacognitive skills and debugging strategies (Loksa et al., 2016; Murphy et al., 2008). Studies of novice programmers reveal that tool-mediated problem-solving without foundational conceptual understanding leads to shallow learning and poor transfer to novel contexts (Robins et al., 2003). However, this principle must be applied flexibly: suitably designed learning activities can leverage AI tools in pedagogically purposeful ways that develop CT competencies when students are explicitly guided to use AI critically rather than dependently (Denny et al., 2024; Tedre et al., 2021). The key distinction is between AI as a solution-provider versus AI as a cognitive partner that scaffolds rather than replaces student thinking. For example, AI tools that provide progressive hints rather than complete solutions, or that prompt students to justify their reasoning about AI-generated suggestions, can support CT development even in early learning stages (Weintrop & Wilensky, 2019). The pedagogical design must ensure AI tools enhance rather than bypass the cognitive work required for CT skill development, with explicit attention to preserving student agency and metacognitive awareness throughout the learning process (Holstein et al., 2019; Roll & Wylie, 2016).
This developmentally appropriate sequencing reflects principles from both cognitive load theory and constructivist learning theory, which emphasize building foundational mental models before introducing additional complexity (Kirschner et al., 2006; Hmelo-Silver et al., 2007).
Cultural responsiveness represents another critical dimension of educator practice, requiring the deliberate design of programming assignments that reflect diverse cultural contexts and address community-specific challenges rather than abstract computational problems. These assignments should draw upon students’ lived experiences and cultural knowledge as assets for computational problem-solving, creating meaningful connections between algorithmic thinking and real-world applications that resonate across different communities. Additionally, educators must establish regular AI literacy checkpoints that assess students’ understanding of algorithmic limitations and bias potential, requiring manual verification of AI-generated solutions and articulation of reasoning processes for accepting or rejecting technological suggestions.
Culturally responsive computing education has demonstrated significant positive effects on student engagement, persistence, and achievement, particularly for students from historically marginalized groups (Scott et al., 2015; Ryoo et al., 2013). Asset-based approaches that leverage students’ cultural knowledge and lived experiences as computational resources lead to deeper learning and stronger computational identity development (Goode et al., 2019).
Policy makers bear responsibility for creating institutional and regulatory frameworks that support equitable AI implementation while protecting vulnerable populations from algorithmic discrimination. This responsibility includes mandating comprehensive equity impact assessments for all AI educational tools prior to institutional adoption, with ongoing monitoring requirements that track performance across demographic groups and identify emerging bias patterns. Resource allocation represents another crucial policy lever, with funding specifically designated for faculty development programs that combine technical AI literacy with culturally responsive teaching practices for computational thinking contexts. Furthermore, policy frameworks must establish robust data governance standards that implement strict guidelines for student data collection and use in AI systems, with particular emphasis on protecting vulnerable populations who may be disproportionately affected by privacy violations or algorithmic bias. Institutional policy frameworks for educational technology must balance innovation with equity protection through comprehensive governance structures (Williamson et al., 2020; Selwyn et al., 2021). Evidence from early AI adoption in education demonstrates that proactive equity assessment prevents rather than remediates discriminatory outcomes (Zeide, 2019).
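The ongoing monitoring that such policies mandate can be operationalized with simple disparity checks. The Python sketch below flags outcome gaps between demographic groups for an AI educational tool; the data, group labels, and the 0.10 gap threshold are illustrative assumptions that an institution would replace with its own policy values.
```python
from collections import defaultdict

def performance_gap(records, threshold=0.10):
    """Flag outcome gaps across demographic groups for an AI educational tool.

    `records` is an iterable of (group_label, score) pairs with scores in
    [0, 1]; the 0.10 gap threshold is an illustrative assumption, not a
    standard, and would be set by institutional policy.
    """
    by_group = defaultdict(list)
    for group, score in records:
        by_group[group].append(score)
    means = {g: sum(s) / len(s) for g, s in by_group.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap, gap > threshold

records = [("group_a", 0.82), ("group_a", 0.78), ("group_b", 0.64), ("group_b", 0.70)]
means, gap, flagged = performance_gap(records)
print(means, round(gap, 2), "equity review required" if flagged else "within threshold")
```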
The development of AI-supported computational thinking tasks requires careful attention to preserving human agency while leveraging technological capabilities to enhance learning outcomes. Effective decomposition support involves using AI to generate multiple problem breakdown strategies that allow students to compare approaches and select culturally relevant methods while learning systematic decomposition principles. This approach positions AI as a tool for expanding rather than constraining student choice in problem-solving approaches. Pattern recognition enhancement should implement AI tools that highlight potential patterns in student code while requiring students to explain and validate identified patterns, thereby preventing over-reliance on automated analysis while developing critical evaluation skills. Finally, debugging assistance protocols should provide progressive hints rather than complete solutions, escalating support as a student's unproductive struggle persists and tracking learning progression across demographic groups to ensure equitable access to assistance regardless of background or prior preparation.
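As a concrete illustration of such a protocol, the following Python sketch escalates hint specificity with the duration of unproductive struggle and logs assistance by demographic group so that equitable access can be audited. The time cut points and tier labels are assumptions for this sketch, not validated pedagogical constants.
```python
from collections import Counter

# Illustrative escalation schedule: minutes of unproductive struggle before
# each hint tier unlocks. The cut points are assumptions for this sketch.
ESCALATION = [(3, "conceptual nudge"), (7, "targeted question"), (12, "worked sub-step")]

assist_counts = Counter()  # hints delivered per demographic group, for auditing

def next_support(struggle_minutes, group):
    """Return the strongest hint tier unlocked so far, never a full solution."""
    unlocked = [label for cutoff, label in ESCALATION if struggle_minutes >= cutoff]
    if not unlocked:
        return None  # keep the student in productive struggle
    assist_counts[group] += 1  # track equitable access to assistance
    return unlocked[-1]

print(next_support(8.0, group="group_b"))  # -> targeted question
print(dict(assist_counts))
```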

11. Conclusions: Toward Equitable AI-Enhanced Computational Thinking Education

The integration of artificial intelligence technologies into computational thinking education represents both an unprecedented opportunity for educational transformation and a significant challenge for ensuring educational equity and inclusion (Holmes et al., 2019; Reich & Mehta, 2020). This comprehensive review has examined the complex landscape of AI-enhanced computational thinking education through an equity-focused lens, synthesizing evidence from diverse sources to identify effective strategies for harnessing AI’s potential while protecting vulnerable students from algorithmic discrimination and bias (Selwyn, 2019; Knox, 2020).
The evidence presented demonstrates that successful AI integration in computational thinking education requires far more than technological sophistication or innovation (Weller, 2020; Veletsianos, 2022). Instead, effective implementation depends on comprehensive approaches that integrate technical capabilities with deep attention to social justice, cultural responsiveness, and community engagement. The theoretical framework provided by Human–AI Symbiotic Theory offers valuable guidance for creating educational environments where human creativity and AI computational power complement each other to enhance rather than replace fundamental thinking skills while preserving student agency and promoting equity (Morello & Chick, 2025; Dignum, 2019).
The analysis of current research and practice reveals significant variation in AI implementation approaches and outcomes, with equity-centered implementations consistently producing superior results for diverse student populations compared to technically sophisticated but socially insensitive approaches (Baker & Hawn, 2022; Adiguzel et al., 2023). These findings challenge common assumptions that technological advancement automatically leads to educational improvement, instead highlighting the critical importance of intentional design for equity and inclusion in educational technology development and implementation (O’Neil, 2016; Williamson, 2019).
The manifestations of algorithmic bias identified in this review demonstrate the urgent need for systematic attention to equity considerations in all aspects of AI integration in educational settings (Barocas & Selbst, 2016; Gardner et al., 2019). From curriculum bias that limits students’ ability to see themselves in computational fields to assessment bias that affects academic outcomes and future opportunities, the potential for AI systems to perpetuate and amplify existing educational inequalities is substantial. However, the evidence also demonstrates that comprehensive bias mitigation strategies, when implemented alongside agency-preserving design principles, can create AI-enhanced educational environments that actively promote rather than undermine educational equity (Raji et al., 2020; Mitchell et al., 2019).
The pedagogical strategies and implementation frameworks presented provide practical guidance for educational institutions, faculty, and policymakers seeking to implement AI-enhanced computational thinking education in equitable ways (American Association of University Professors, 2023; California State University, 2024). These strategies emphasize the importance of comprehensive faculty development, student-centered design approaches, culturally responsive curriculum development, and robust evaluation systems that can detect and address equity concerns promptly. The frameworks recognize that effective AI integration requires sustained commitment and investment rather than one-time technological adoption (Cornell University Center for Teaching Innovation, 2024; EDUCAUSE, 2024).
Looking toward the future, the research priorities and policy recommendations identified in this review emphasize the need for continued attention to equity considerations as AI technologies continue to evolve and new applications emerge in educational settings (UNESCO, 2021). The rapid pace of technological development creates ongoing challenges for ensuring that educational applications of AI serve equity goals while protecting student rights and promoting effective learning outcomes. Meeting these challenges requires sustained collaboration among researchers, educators, policymakers, and community advocates who share commitment to educational justice and inclusion (Holmes et al., 2022; Government of Canada, 2023).
The implementation of equity-centered approaches to AI integration in computational thinking education represents both a moral imperative and a practical necessity for higher education institutions in the 21st century (National Institute of Standards and Technology, 2023; Singapore Government, 2020). As this review demonstrates, such approaches not only protect against bias and discrimination but actively enhance learning outcomes for all students while promoting the diverse participation that computational fields and democratic society require. The success of these approaches depends ultimately on the commitment of educational institutions and communities to prioritize equity alongside innovation in the development of AI-enhanced educational environments (United Kingdom Government, 2023; OECD, 2021).
The path forward requires recognition that AI integration in education is not primarily a technological challenge but a social and educational one that must be addressed through comprehensive, equity-centered approaches that center community voices and prioritize student welfare (Usher & Barak, 2024). By implementing the frameworks and strategies identified in this review, educational institutions can fulfill their commitments to inclusive excellence while harnessing AI’s potential to create more effective, accessible, and equitable computational thinking education for all students (Gouseti et al., 2024; Faculty Focus, 2025).
This comprehensive review has several limitations that should be acknowledged. First, the rapidly evolving nature of AI technologies means that some technical capabilities and limitations discussed may become outdated quickly, requiring ongoing updates to implementation frameworks. Second, the limited availability of longitudinal empirical studies specifically examining AI-enhanced computational thinking education means that many recommendations are based on theoretical frameworks and short-term studies rather than demonstrated long-term outcomes. Third, the interdisciplinary nature of this review necessarily means that some specialized aspects of computer science education, learning sciences, or AI ethics may not be covered with the depth that specialists in those fields might require. Fourth, the focus on higher education contexts may limit the applicability of findings to other educational levels, though many principles may transfer with appropriate adaptation. Finally, the emphasis on equity considerations, while essential, may not fully address the needs of institutions or contexts where equity frameworks are not prioritized, potentially limiting the broader applicability of recommendations (Petticrew & Roberts, 2006).
The evidence presented provides reason for optimism about the potential for AI-enhanced computational thinking education to promote rather than undermine educational equity when implemented through comprehensive, community-engaged approaches that prioritize student agency and social justice (Anthology, 2024; Jin et al., 2025). However, realizing this potential requires sustained commitment from educational institutions, policymakers, and communities to the challenging work of creating truly equitable educational environments in an AI-enhanced world (Chan, 2023; Ruffalo Noel Levitz, 2025).

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data generated and analyzed in this study are available from the corresponding author upon reasonable request. Other data can be found in the cited peer-reviewed studies.

Acknowledgments

During the preparation of this manuscript, the author used Anthropic Claude (Sonnet 4) to structure and organize the narrative flow of the manuscript. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Comprehensive Search Protocol and Complete Inclusion List

1. Summary
Total Sources Included: 167
The corpus was expanded from 134 sources in the original submission to 167, reflecting additional 2025 studies and refined inclusion criteria.
2. Search and Screening Summary
  • Initial records identified: 897 (863 + 34 supplemental)
  • After duplicate removal: 847
  • Title/abstract screening: 235 advanced to full-text review
  • Full-text assessment: 167 included, 68 excluded
  • Inter-rater reliability (20% sample): Cohen’s κ = 0.84 (computation sketched below)
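For transparency, the agreement statistic can be reproduced with a few lines of code. The following Python sketch computes Cohen’s κ from two raters’ include/exclude decisions; the labels shown are synthetic, and the reported κ = 0.84 comes from the actual 20% screening sample, not from this example.
```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude screening decisions."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n) for label in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Synthetic decisions for illustration only.
a = ["include", "exclude", "include", "include", "exclude"]
b = ["include", "exclude", "include", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.62 for this synthetic sample
```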
3. Database Search Strings
All databases were searched in October–November 2024, with a supplemental search in January 2025.
Core query: (“AI” OR “machine learning” OR “LLM” OR “generative AI”) AND (“computational thinking” OR “programming education” OR “CS education”) AND (“equity” OR “bias” OR “fairness” OR “diversity”)
  • Web of Science: 226 records
  • Scopus: 257 records
  • ERIC: 171 records
  • ACM Digital Library: 143 records
  • IEEE Xplore: 100 records
4. Full-Text Exclusion Summary (n = 68)
Exclusion reasons and counts:
  • Insufficient implementation detail (n = 26)
  • No equity considerations (n = 19)
  • Duplicate reporting (n = 13)
  • Methodology concerns (n = 10)
5. Complete Inclusion List (n = 167)
Categorization Key:
  • E = Empirical (primary research study)
  • C = Conceptual (theoretical framework)
  • P = Policy (institutional guidelines/reports)
  • F = Foundational (seminal CT/pedagogy work)
[K-12] = K-12 context (included for theoretical relevance)
[General AI] = General AI study (included for broader relevance)
6. Source Categorization Summary
The 167 sources comprise:
  • Empirical AI-CT studies (HE context): ~25 sources
  • Foundational CT pedagogy research: ~38 sources
  • AI ethics and policy literature: ~48 sources
  • Theoretical/foundational sources: ~42 sources
  • K-12 studies (included for relevance): ~14 sources
Note: Some sources span multiple categories. This breakdown is approximate and intended to show corpus composition.

Appendix B. Worked Example Implementation Materials

Appendix B.1. Assessment Rubric for Data Analysis CT Unit

Performance-Level Descriptors
CT Skills (40%)
Exemplary (90–100%)
  • Decomposition: Breaks complex problem into logical, manageable sub-tasks with clear rationale
  • Pattern Recognition: Identifies multiple relevant patterns; explains significance in community context
  • Abstraction: Creates abstractions preserving essential contextual information
  • Algorithm Design: Designs efficient, well-documented algorithm; considers edge cases
Proficient (80–89%)
  • Decomposition: Appropriately decomposes problem; mostly complete breakdown
  • Pattern Recognition: Identifies relevant patterns; explains basic significance
  • Abstraction: Abstracts appropriately but may oversimplify; basic explanation provided
  • Algorithm Design: Designs functional algorithm with documentation; considers main cases
Developing (70–79%)
  • Decomposition: Attempts decomposition but misses key sub-tasks
  • Pattern Recognition: Identifies obvious patterns; limited explanation
  • Abstraction: Attempts abstraction but loses essential information
  • Algorithm Design: Creates algorithm for basic cases; limited documentation
Beginning (<70%)
  • Decomposition: Minimal or inappropriate decomposition
  • Pattern Recognition: Fails to identify meaningful patterns
  • Abstraction: Abstraction absent or inappropriate
  • Algorithm Design: Algorithm incomplete or non-functional
Critical AI Engagement & Bias Documentation (30%)
  • Uses AI strategically for computational tasks
  • Documents all AI usage with clear rationale
  • Proactively identifies potential bias
  • Maintains human authority for ethical decisions
Agency & Transparency (20%)
  • Thoroughly documents all major decisions
  • Student selected and framed community problem
  • Clear ownership of solution design
Culturally Responsive Engagement (10%)
  • Problem deeply relevant to local community
  • Analysis respects community perspectives
  • Findings have clear potential for positive local impact
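A minimal Python sketch of how the four weighted components above might be combined into a unit grade; the component keys and sample scores are illustrative, and instructors would substitute scores assigned from the performance-level descriptors.
```python
# Weights mirror the rubric above; component scores are percentages (0-100)
# assigned from the performance-level descriptors. Sample scores are invented.
WEIGHTS = {
    "ct_skills": 0.40,
    "critical_ai_engagement": 0.30,
    "agency_transparency": 0.20,
    "culturally_responsive": 0.10,
}

def unit_grade(scores):
    """Weighted total for the data analysis CT unit, on a 0-100 scale."""
    assert set(scores) == set(WEIGHTS), "score every rubric component"
    return sum(WEIGHTS[key] * scores[key] for key in WEIGHTS)

sample = {
    "ct_skills": 85,                 # Proficient band (80-89%)
    "critical_ai_engagement": 92,
    "agency_transparency": 88,
    "culturally_responsive": 95,
}
print(round(unit_grade(sample), 1))  # 88.7
```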

Appendix B.2. Prompt Documentation Template

Use this template to log your AI interactions throughout the project.
Log columns: Prompt # | My Prompt Text | AI Response Summary | My Decision (Accept/Modify/Reject) | Rationale for Decision. A machine-readable version of this template is sketched after the reflection questions below.
Reflection Questions (Complete after each session):
  • Which CT skills did I use in constructing this prompt?
  • What tasks did I keep for myself vs. delegate to AI? Why?
  • Did I identify any potential bias in AI suggestions?
  • How did I maintain control over problem definition and solution design?
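For courses that collect these logs at scale, the template can be represented in machine-readable form. The following Python sketch mirrors the template’s columns; the field names and example entry are assumptions for illustration.
```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    MODIFY = "modify"
    REJECT = "reject"

@dataclass
class PromptLogEntry:
    """One row of the prompt documentation template above."""
    prompt_number: int
    prompt_text: str
    ai_response_summary: str
    decision: Decision
    rationale: str

log = [
    PromptLogEntry(
        prompt_number=1,
        prompt_text="Suggest three ways to decompose the transit-access problem.",
        ai_response_summary="Proposed geographic, temporal, and demographic breakdowns.",
        decision=Decision.MODIFY,
        rationale="Kept the geographic split; reframed the demographic one with community input.",
    ),
]
print(log[0].decision.value)  # modify
```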

Appendix B.3. Bias-Check Prompts for AI-Assisted Data Analysis

Prompt Set 1: Stereotype Check
When to use: After AI generates initial data analysis or visualizations
Examine the following data analysis for potential stereotypes about [community group]:
  • Does it make assumptions based on deficit thinking?
  • Does it overlook community assets or cultural strengths?
  • Does it perpetuate common stereotypes?
  • What alternative interpretations might community members offer?
Prompt Set 2: Representation Check
Analyze this data representation for potential bias:
  • Are all relevant community groups adequately represented?
  • Does the visualization obscure or highlight certain groups unfairly?
  • What story does this data tell, and whose perspective does it privilege?
Prompt Set 3: Solution Equity Check
Evaluate these recommendations for equity implications:
  • Who benefits most from this solution? Who might be disadvantaged?
  • Does this solution require unequally distributed resources?
  • What barriers might prevent equitable access?
Prompt Set 4: Historical Context Check
Consider the historical context of [community issue]:
  • What historical inequities might contribute to current patterns?
  • Does this analysis risk replicating past discriminatory approaches?
  • What structural factors beyond individual behavior shape these patterns?
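The bracketed placeholders in the four prompt sets above ([community group], [community issue]) can be filled programmatically when these checks are embedded in a course tool. A minimal Python sketch, with an invented example community:
```python
# The bracketed terms in the prompt sets above become named placeholders.
# The example community is invented for illustration.
STEREOTYPE_CHECK = (
    "Examine the following data analysis for potential stereotypes about "
    "{community_group}:\n"
    "- Does it make assumptions based on deficit thinking?\n"
    "- Does it overlook community assets or cultural strengths?\n"
    "- Does it perpetuate common stereotypes?\n"
    "- What alternative interpretations might community members offer?"
)

def render_check(template, **context):
    """Fill a bias-check prompt template with project-specific terms."""
    return template.format(**context)

print(render_check(STEREOTYPE_CHECK, community_group="recent-immigrant transit riders"))
```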

Appendix B.4. Human-Only vs. AI-Assisted Task Allocation Table

Phase 1: Weeks 1–3 (Foundation Building—No AI)
Task | Allocation | Rationale
  • Problem selection & framing | Human only | Builds agency; requires community knowledge
  • Data source identification | Human only | Develops critical evaluation skills
  • Initial decomposition | Human only | Core CT skill foundation
  • Basic pattern recognition | Human only | Foundation for understanding AI later
  • Preliminary analysis plan | Human only | Requires domain knowledge
Phase 2: Weeks 4–6 (Scaffolded AI Introduction)
Tasks:
  • Human leads; AI suggests alternative framings
  • AI assists with data cleaning and pattern identification
  • Human validates and interprets all AI outputs
  • Bias checking REQUIRED throughout
Phase 3: Weeks 7–9 (Integrated AI with Human Authority)
Tasks:
  • Human defines solution architecture; AI implements details
  • Ethical evaluation remains HUMAN ONLY
  • Algorithm optimization: AI suggests, human selects
  • Community recommendations: HUMAN ONLY
Key Principle: Ethical decisions, cultural interpretation, and community-facing recommendations are ALWAYS reserved for human judgment.
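Where this allocation is enforced in software rather than by convention, it can be encoded as a simple policy check. The Python sketch below is one illustrative encoding of Appendix B.4; the phase numbers follow the table, while the task labels and the guard’s raise-an-error behavior are assumptions of this sketch.
```python
# Task labels approximate the table above; phase 1 is a blanket no-AI phase,
# while phases 2-3 reserve specific tasks for human judgment.
HUMAN_ONLY_TASKS = {
    2: {"output validation", "output interpretation", "bias sign-off"},
    3: {"ethical evaluation", "community recommendations", "cultural interpretation"},
}

def check_ai_use(phase, task):
    """Raise if a task reserved for human judgment is delegated to AI."""
    if phase == 1:
        raise PermissionError("Phase 1 (weeks 1-3) is a no-AI foundation phase.")
    if task in HUMAN_ONLY_TASKS.get(phase, set()):
        raise PermissionError(f"'{task}' is reserved for human judgment in phase {phase}.")

check_ai_use(2, "data cleaning")          # permitted: AI may assist here
# check_ai_use(3, "ethical evaluation")   # would raise PermissionError
```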

References

  1. Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429. [Google Scholar] [CrossRef] [PubMed]
  2. Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2(3), 431–440. [Google Scholar] [CrossRef]
  3. Ala-Mutka, K. M. (2005). A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2), 83–102. [Google Scholar] [CrossRef]
  4. American Association of University Professors. (2023). Artificial intelligence and academic professions. Available online: https://www.aaup.org/reports-publications/aaup-policies-reports/topical-reports/artificial-intelligence-and-academic (accessed on 13 March 2025).
  5. Amigud, A., & Lancaster, T. (2019). 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Computers & Education, 134, 98–107. [Google Scholar] [CrossRef]
  6. Anthology. (2024). AI policy framework for higher education. Available online: https://www.anthology.com/news/new-ai-policy-framework-from-anthology-empowers-higher-education-to-balance-the-risks-and (accessed on 13 March 2025).
  7. Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32(4), 1052–1092. [Google Scholar] [CrossRef]
  8. Baker, R. S., & Siemens, G. (2014). Educational data mining and learning analytics. In Learning analytics (pp. 61–75). Springer. [Google Scholar]
  9. Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52(1), 1–26. [Google Scholar] [CrossRef]
  10. Barocas, S., & Hardt, M. (2019). Fairness and machine learning: Limitations and opportunities. MIT Press. [Google Scholar]
  11. Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. [Google Scholar] [CrossRef]
  12. Bau, D., Gray, J., Kelleher, C., Sheldon, J., & Turbak, F. (2017). Learnable programming: Blocks and beyond. Communications of the ACM, 60(6), 72–80. [Google Scholar] [CrossRef]
  13. Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University Press. [Google Scholar]
  14. Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April 21–26). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. [Google Scholar] [CrossRef]
  15. Blikstein, P. (2018). Pre-college computer science education: A survey of the field. Google Inc. Available online: https://goo.gl/gmS1Vm (accessed on 13 March 2025).
  16. Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357. [Google Scholar]
  17. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [Google Scholar] [CrossRef]
  18. Brennan, K., & Resnick, M. (2012, April 13–17). New frameworks for studying and assessing the development of computational thinking. Proceedings of the 2012 Annual Meeting of the American Educational Research Association (pp. 1–25), Vancouver, BC, Canada. Available online: http://scratched.gse.harvard.edu/ct/files/AERA2012.pdf (accessed on 13 March 2025).
  19. Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., Saddiqui, S., & van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837–1856. [Google Scholar] [CrossRef]
  20. California State University. (2024). ETHICAL principles AI framework for higher education. Available online: https://genai.calstate.edu/communities/faculty/ethical-and-responsible-use-ai/ethical-principles-ai-framework-higher-education (accessed on 4 May 2025).
  21. CAST. (2018). Universal design for learning guidelines version 2.2. Available online: http://udlguidelines.cast.org (accessed on 11 February 2025).
  22. Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20, 38. [Google Scholar] [CrossRef]
  23. Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. [Google Scholar] [CrossRef]
  24. Chassignol, M., Khoroshavin, A., Klimova, A., & Bilyatdinova, A. (2018). Artificial intelligence trends in education: A narrative overview. Procedia Computer Science, 136, 16–24. [Google Scholar] [CrossRef]
  25. Chen, X., Zou, D., Cheng, G., & Xie, H. (2020). Detecting latent topics and trends in educational technologies over four decades using structural topic modeling: A retrospective of all volumes of Computers & Education. Computers & Education, 151, 103855. [Google Scholar] [CrossRef]
  26. Chiu, T. K. (2021). Digital support for student engagement in blended learning based on self-determination theory. Computers in Human Behavior, 124, 106909. [Google Scholar] [CrossRef]
  27. Chiu, T. K., & Chai, C. S. (2020). Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability, 12(14), 5568. [Google Scholar] [CrossRef]
  28. Clark, R. C., & Mayer, R. E. (2016). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (4th ed.). John Wiley & Sons. [Google Scholar]
  29. Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Lawrence Erlbaum Associates. [Google Scholar]
  30. Cornell University Center for Teaching Innovation. (2024). Ethical AI for teaching and learning. Available online: https://teaching.cornell.edu/generative-artificial-intelligence/ethical-ai-teaching-and-learning (accessed on 4 May 2025).
  31. Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
  32. Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), 139–167. [Google Scholar]
  33. Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). SAGE Publications. [Google Scholar]
  34. Crow, T., Luxton-Reilly, A., & Wuensche, B. (2018, January 30–February 2). Intelligent tutoring systems for programming education: A systematic review. Proceedings of the 20th Australasian Computing Education Conference (pp. 53–62), Brisbane, QLD, Australia. [Google Scholar] [CrossRef]
  35. Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., Felzmann, H., Haklay, M., Khoo, S.-M., Morison, J., Murphy, M. H., O’Brolchain, N., Schafer, B., & Shankar, K. (2017). Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society, 4(2). [Google Scholar] [CrossRef]
  36. Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Learning Policy Institute. [Google Scholar]
  37. Dawson, P., & Sutherland-Smith, W. (2018). Can markers detect contract cheating? Results from a pilot study. Assessment & Evaluation in Higher Education, 43(2), 286–293. [Google Scholar] [CrossRef]
  38. Denning, P. J., & Tedre, M. (2019). Computational thinking. MIT Press. [Google Scholar]
  39. Denny, P., Prather, J., Becker, B. A., Finnie-Ansley, J., Hellas, A., Leinonen, J., Luxton-Reilly, A., Reeves, B. N., Santos, E. A., & Sarsa, S. (2024). Computing education in the era of generative AI. Communications of the ACM, 67(2), 56–67. [Google Scholar] [CrossRef]
  40. Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. [Google Scholar]
  41. Douce, C., Livingstone, D., & Orwell, J. (2005). Automatic test-based assessment of programming: A review. Journal on Educational Resources in Computing, 5(3), 4-es. [Google Scholar] [CrossRef]
  42. Drachsler, H., & Greller, W. (2016, April 25–29). Privacy and analytics: It’s a DELICATE issue a checklist for trusted learning analytics. Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (pp. 89–98), Edinburgh, UK. [Google Scholar] [CrossRef]
  43. EDUCAUSE. (2024). 2024 EDUCAUSE action plan: AI policies and guidelines. Available online: https://www.educause.edu/research/2024/2024-educause-action-plan-ai-policies-and-guidelines (accessed on 4 May 2025).
  44. Eglash, R., Gilbert, J. E., & Foster, E. (2006). Toward culturally responsive computing education. Communications of the ACM, 49(12), 33–35. [Google Scholar] [CrossRef]
  45. Ertmer, P. A., & Ottenbreit-Leftwich, A. T. (2010). Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42(3), 255–284. [Google Scholar] [CrossRef]
  46. European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act). COM(2021) 206 final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 13 March 2025).
  47. European Parliament and Council. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (L 1689). Official Journal of the European Union. [Google Scholar]
  48. Faculty Focus. (2025). Crafting thoughtful AI policy in higher education: A guide for institutional leaders. Available online: https://www.facultyfocus.com/articles/academic-leadership/crafting-thoughtful-ai-policy-in-higher-education-a-guide-for-institutional-leaders/ (accessed on 4 May 2025).
  49. Family Educational Rights and Privacy Act (FERPA). (1974). 20 U.S.C. § 1232g; 34 CFR part 99. Available online: https://www.ecfr.gov/current/title-34/subtitle-A/part-99 (accessed on 4 May 2025).
  50. Flores Romero, P., Fung, K. N. N., Rong, G., & Cowley, B. U. (2025). Structured human-LLM interaction design reveals exploration and exploitation dynamics in higher education content generation. Npj Science of Learning, 10, 40. [Google Scholar] [CrossRef]
  51. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. [Google Scholar] [CrossRef]
  52. Freeman, R. E., Harrison, J. S., Wicks, A. C., Parmar, B. L., & De Colle, S. (2010). Stakeholder theory: The state of the art. Cambridge University Press. [Google Scholar]
  53. Gardner, J., Brooks, C., & Baker, R. (2019, March 4–8). Evaluating the fairness of predictive student models through slicing analysis. Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 225–234), Tempe, AZ, USA. [Google Scholar] [CrossRef]
  54. Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. [Google Scholar] [CrossRef]
  55. Gašević, D., Kovanović, V., & Joksimović, S. (2017). Piecing the learning analytics puzzle: A consolidated model of a field of research and practice. Learning: Research and Practice, 3(1), 63–78. [Google Scholar] [CrossRef]
  56. Gay, G. (2018). Culturally responsive teaching: Theory, research, and practice (3rd ed.). Teachers College Press. [Google Scholar]
  57. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., III, & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. Available online: https://arxiv.org/pdf/1803.09010v7 (accessed on 27 October 2025). [CrossRef]
  58. General Data Protection Regulation (GDPR). (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council. Regulation (EU), 679(2016), 10–13. [Google Scholar]
  59. González, N., Moll, L. C., & Amanti, C. (2005). Funds of knowledge: Theorizing practices in households, communities, and classrooms. Lawrence Erlbaum Associates. [Google Scholar]
  60. Goode, J., Margolis, J., & Chapman, G. (2019). Equity in computer science education. In S. Fincher, & A. Robins (Eds.), The Cambridge handbook of computing education research (pp. 561–583). Cambridge University Press. [Google Scholar]
  61. Gouseti, A., James, F., Fallin, L., & Burden, K. (2024). The ethics of using AI in K-12 education: A systematic literature review. Technology, Pedagogy and Education, 34(2), 161–182. [Google Scholar] [CrossRef]
  62. Govender, I. (2016). The learning context: Influence on learning to program. Computers & Education, 53(4), 1218–1230. [Google Scholar]
  63. Government of Canada. (2023). Artificial intelligence and data commissioner: Annual report 2023. Available online: https://www.priv.gc.ca/en/opc-actions-and-decisions/ar_index/202223/ar_202223/ (accessed on 4 May 2025).
  64. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. [Google Scholar] [CrossRef]
  65. Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38–43. [Google Scholar] [CrossRef]
  66. Harris, J., & Hofer, M. (2011). Technological pedagogical content knowledge (TPACK) in action. Journal of Research on Technology in Education, 43(3), 211–229. [Google Scholar] [CrossRef]
  67. Harvard University. (2023). Artificial intelligence at Harvard: Guidance for students. Available online: https://provost.harvard.edu/guidelines-using-chatgpt-and-other-generative-ai-tools-harvard (accessed on 4 May 2025).
  68. Hassan, M., Chen, Y., Denny, P., & Zilles, C. (2025). On teaching novices computational thinking by utilizing large language models within assessments. In Proceedings of the 56th ACM Technical Symposium on Computer Science Education (pp. 485–491). Association for Computing Machinery. [Google Scholar] [CrossRef]
  69. Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107. [Google Scholar] [CrossRef]
  70. Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. [Google Scholar]
  71. Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. [Google Scholar] [CrossRef]
  72. Holstein, K., McLaren, B. M., & Aleven, V. (2018, June 27–30). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. Proceedings of the 19th International Conference on Artificial Intelligence in Education (pp. 154–168), London, UK. [Google Scholar] [CrossRef]
  73. Holstein, K., McLaren, B. M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher-AI complementarity. Journal of Learning Analytics, 6(2), 27–52. [Google Scholar] [CrossRef]
  74. Hsu, T. C. (2025). A constructionist prompting framework for developing computational thinking with generative artificial intelligence. Computers and Education: Artificial Intelligence, 7, 100267. [Google Scholar] [CrossRef]
  75. Hsu, T. C., Abelson, H., Lao, N., Tseng, Y. H., & Lin, Y. T. (2021). Behavioral-pattern exploration and development of an instructional tool for young children to learn AI. Computers & Education: Artificial Intelligence, 2, 100012. [Google Scholar]
  76. Hutchinson, B., & Mitchell, M. (2019, January 29–31). 50 years of test (un)fairness: Lessons for machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 49–58), Atlanta, GA, USA. [Google Scholar] [CrossRef]
  77. Ifenthaler, D., & Schumacher, C. (2016). Student perceptions of privacy principles for learning analytics. Educational Technology Research and Development, 64(5), 923–938. [Google Scholar] [CrossRef]
  78. International Center for Academic Integrity. (2021). The fundamental values of academic integrity (3rd ed.). International Center for Academic Integrity. Available online: https://academicintegrity.org/images/pdfs/20019_ICAI-Fundamental-Values_R12.pdf (accessed on 13 March 2025).
  79. Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. [Google Scholar] [CrossRef]
  80. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. [Google Scholar] [CrossRef]
  81. Jones, K. M., & McCoy, C. (2019). Reconsidering data in learning analytics: Opportunities for critical research using a documentation studies framework. Learning, Media and Technology, 44(1), 52–63. [Google Scholar] [CrossRef]
  82. Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. [Google Scholar] [CrossRef]
  83. Khalil, M., & Ebner, M. (2014, June 23–26). MOOCs completion rates and possible methods to improve retention-a literature review. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 1305–1313), Tampere, Finland. [Google Scholar]
  84. King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2020). Artificial intelligence crime: An interdisciplinary analysis of foreseeable harms and solutions. Science and Engineering Ethics, 26(1), 89–120. [Google Scholar] [CrossRef]
  85. Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. [Google Scholar] [CrossRef]
  86. Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298–311. [Google Scholar] [CrossRef]
  87. Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70. [Google Scholar] [CrossRef]
  88. Koh, J. H. L., Basawapatna, A. R., Bennett, V., & Repenning, A. (2014, September 21–25). Towards the automatic recognition of computational thinking for adaptive visual language learning. Proceedings of IEEE Symposium on Visual Languages and Human-Centric Computing (pp. 59–66), Leganes, Spain. [Google Scholar]
  89. Kotter, J. P. (2012). Leading change. Harvard Business Review Press. [Google Scholar]
  90. Ladson-Billings, G. (2014). Culturally relevant pedagogy 2.0: A.k.a. the remix. Harvard Educational Review, 84(1), 74–84. [Google Scholar] [CrossRef]
  91. Lancaster, T., & Clarke, R. (2016). Contract cheating: The outsourcing of assessed student work. In Handbook of academic integrity (pp. 639–654). Springer. [Google Scholar]
  92. Lent, R. W., Brown, S. D., & Hackett, G. (2000). Contextual supports and barriers to career choice: A social cognitive analysis. Journal of Counseling Psychology, 47(1), 36–49. [Google Scholar] [CrossRef]
  93. Loksa, D., Ko, A. J., Jernigan, W., Oleson, A., Mendez, C. J., & Burnett, M. M. (2016, May 7–12). Programming, problem solving, and self-awareness: Effects of explicit guidance. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 1449–1461), San Jose, CA, USA. [Google Scholar]
  94. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson. [Google Scholar]
  95. Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior, 41, 51–61. [Google Scholar] [CrossRef]
  96. Malgieri, G., & Comandé, G. (2017). Why a right to legibility of automated decision-making exists in the general data protection regulation. International Data Privacy Law, 7(4), 243–265. [Google Scholar] [CrossRef]
  97. Margolis, J., Goode, J., & Ryoo, J. J. (2015). Democratizing computer science. Educational Leadership, 72(4), 48–53. [Google Scholar] [CrossRef]
  98. Margulieux, L. E., Guzdial, M., & Catrambone, R. (2016, September 9–11). Subgoal-labeled instructional material improves performance and transfer in learning to develop mobile applications. Proceedings of the ACM Conference on International Computing Education Research (pp. 71–78), Melbourne, Australia. [Google Scholar]
  99. Massachusetts Institute of Technology. (2023). Working with AI: Guidelines for students and faculty. Available online: https://ist.mit.edu/ai-guidance (accessed on 13 March 2025).
  100. Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal Design for Learning: Theory and practice. CAST Professional Publishing. [Google Scholar]
  101. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019, January 29–31). Model cards for model reporting. Proceedings of the Conference On Fairness, Accountability, and Transparency (pp. 220–229), Atlanta, GA, USA. [Google Scholar] [CrossRef]
  102. Moll, L. C., Amanti, C., Neff, D., & Gonzalez, N. (1992). Funds of knowledge for teaching: Using a qualitative approach to connect homes and classrooms. Theory Into Practice, 31(2), 132–141. [Google Scholar] [CrossRef]
  103. Morello, L. T., & Chick, J. C. (2025). Human-AI Symbiotic Theory (HAIST): Development, Multi-Framework Assessment, and AI-Assisted Validation in Academic Research. Informatics, 12(3), 85. [Google Scholar] [CrossRef]
  104. Murphy, L., Lewandowski, G., McCauley, R., Simon, B., Thomas, L., & Zander, C. (2008). Debugging: The good, the bad, and the quirky--A qualitative analysis of novices’ strategies. ACM SIGCSE Bulletin, 40(1), 163–167. [Google Scholar] [CrossRef]
  105. National Institute of Standards and Technology. (2023). AI risk management framework (AI RMF 1.0). U.S. Department of Commerce. [CrossRef]
  106. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. [Google Scholar] [CrossRef]
  107. Ocumpaugh, J., Baker, R., Gowda, S., Heffernan, N., & Heffernan, C. (2014). Population validity for educational data mining models: A case study in affect detection. British Journal of Educational Technology, 45(3), 487–501. [Google Scholar] [CrossRef]
  108. OECD. (2021). AI and the future of skills: Implications for higher education. OECD Publishing. [Google Scholar] [CrossRef]
  109. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. [Google Scholar]
  110. Page, M. J. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. [Google Scholar] [CrossRef]
  111. Papert, S., & Harel, I. (1991). Situating constructionism. In I. Harel, & S. Papert (Eds.), Constructionism (pp. 1–11). Ablex Publishing. [Google Scholar]
  112. Pardo, A., & Siemens, G. (2014). Ethical and privacy principles for learning analytics. British Journal of Educational Technology, 45(3), 438–450. [Google Scholar] [CrossRef]
  113. Paré, G., Trudel, M. C., Jaana, M., & Kitsiou, S. (2015). Synthesizing information systems knowledge: A typology of literature reviews. Information & Management, 52(2), 183–199. [Google Scholar] [CrossRef]
  114. Paris, D., & Alim, H. S. (Eds.). (2017). Culturally sustaining pedagogies: Teaching and learning for justice in a changing world. Teachers College Press. [Google Scholar]
  115. Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2024). Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. Journal of Academic Ethics, 22, 89–113. [Google Scholar] [CrossRef]
  116. Personal Information Protection and Electronic Documents Act (PIPEDA). (2000). S.C. 2000, c. 5. Available online: https://laws-lois.justice.gc.ca/eng/acts/p-8.6/FullText.html (accessed on 13 March 2025).
  117. Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Blackwell Publishing. [Google Scholar] [CrossRef]
  118. Price, T. W., Dong, Y., & Lipovac, D. (2016, March 2–5). iSnap: Towards intelligent tutoring in novice programming environments. Proceedings of the 2016 ACM Technical Symposium on Computer Science Education (pp. 483–488), Memphis, TN, USA. [Google Scholar] [CrossRef]
  119. Prinsloo, P., & Slade, S. (2017, April 7–9). An elephant in the room: Educational data mining, learning analytics and ethics. Proceedings of the 9th International Conference on Networked Learning (pp. 46–55), Edinburgh, UK. [Google Scholar] [CrossRef]
  120. Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. [Google Scholar] [CrossRef]
  121. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020, January 27–30). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44), Barcelona, Spain. [Google Scholar] [CrossRef]
  122. Regan, P. M., & Jesse, J. (2019). Ethical challenges of edtech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology, 21(3), 167–179. [Google Scholar] [CrossRef]
  123. Reich, J., & Mehta, J. D. (2020). Failure to disrupt: Why technology alone can’t transform education. Harvard University Press. [Google Scholar]
  124. Rivers, K., & Koedinger, K. R. (2017). Data-driven hint generation in vast solution spaces. Computers & Education, 104, 188–198. [Google Scholar]
  125. Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student attitudes toward learning analytics in higher education: “The fitbit version of the learning world”. Frontiers in Psychology, 7, 1959. [Google Scholar] [CrossRef] [PubMed]
  126. Robins, A., Rountree, J., & Rountree, N. (2003). Learning and teaching programming: A review and discussion. Computer Science Education, 13(2), 137–172. [Google Scholar] [CrossRef]
  127. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press. [Google Scholar]
  128. Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582–599. [Google Scholar] [CrossRef]
  129. Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1), 364–389. [Google Scholar] [CrossRef]
  130. Ruffalo Noel Levitz. (2025). Why universities need AI governance. Available online: https://www.ruffalonl.com/blog/artificial-intelligence/why-universities-need-ai-governance/ (accessed on 13 March 2025).
  131. Ryoo, J. J., Margolis, J., Lee, C. H., Sandoval, C. D. M., & Goode, J. (2013). Democratizing computer science knowledge: Transforming the face of computer science through public high school education. Learning, Media and Technology, 38(2), 161–181. [Google Scholar] [CrossRef]
  132. Sanders, E. B. N., & Stappers, P. J. (2008). Co-creation and the new landscapes of design. CoDesign, 4(1), 5–18. [Google Scholar] [CrossRef]
  133. Schumacher, C., & Ifenthaler, D. (2018). Features students really expect from learning analytics. Computers in Human Behavior, 78, 397–407. [Google Scholar] [CrossRef]
  134. Scott, K. A., Sheridan, K. M., & Clark, K. (2015). Culturally responsive computing: A theory revisited. Learning, Media and Technology, 40(4), 412–436. [Google Scholar] [CrossRef]
  135. Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January 29–31). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59–68), Montreal, QC, Canada. [Google Scholar] [CrossRef]
  136. Selwyn, N. (2019). Should robots replace teachers?: AI and the future of education. John Wiley & Sons. [Google Scholar]
  137. Selwyn, N., Hillman, T., Bergviken Rensfeldt, A., & Perrotta, C. (2021). Digital technology and the futures of education: Critical research directions. British Educational Research Journal, 47(4), 1087–1106. [Google Scholar]
  138. Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Studies, 137, 102385. [Google Scholar] [CrossRef]
  139. Simonsen, J., & Robertson, T. (Eds.). (2012). Routledge international handbook of participatory design. Routledge. [Google Scholar] [CrossRef]
  140. Singapore Government. (2020). Model artificial intelligence governance framework, 2nd ed.; Personal Data Protection Commission. Available online: https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework (accessed on 13 March 2025).
  141. Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. [Google Scholar] [CrossRef]
  142. Stinson, C. (2022). Algorithms are not neutral: Bias in collaborative filtering. AI and Ethics, 2(4), 763–770. [Google Scholar] [CrossRef] [PubMed]
  143. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. [Google Scholar] [CrossRef]
  144. Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. [Google Scholar] [CrossRef]
  145. Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, 103798. [Google Scholar] [CrossRef]
  146. Tashakkori, A., & Teddlie, C. (2010). Sage handbook of mixed methods in social & behavioral research (2nd ed.). SAGE Publications. [Google Scholar]
  147. Tedre, M., Denning, P., & Toivonen, T. (2021, November 18–21). CT 2.0. Proceedings of the 21st Koli Calling International Conference on Computing Education Research (pp. 1–8), Joensuu, Finland. [Google Scholar]
  148. Tondeur, J., van Braak, J., Ertmer, P. A., & Ottenbreit-Leftwich, A. (2017). Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educational Technology Research and Development, 65(3), 555–575. [Google Scholar] [CrossRef]
  149. Tsai, M. J., Wang, C. Y., & Hsu, P. F. (2021). Developing the computer programming self-efficacy scale for computer literacy education. Journal of Educational Computing Research, 56(8), 1345–1360. [Google Scholar] [CrossRef]
  150. UNESCO. (2021). AI and education: Guidance for policy-makers. UNESCO Publishing. [Google Scholar]
  151. United Kingdom Government. (2023). A pro-innovation approach to AI regulation. Department for Science, Innovation and Technology. Available online: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach (accessed on 13 March 2025).
  152. University of California System. (2023). Guidelines for generative AI use in teaching and learning. Available online: https://rtl.berkeley.edu/resources/ai-teaching-learning-overview (accessed on 13 March 2025).
  153. Usher, M., & Barak, M. (2024). Unpacking the role of AI ethics online education for science and engineering students. International Journal of STEM Education, 11, 35. [Google Scholar] [CrossRef]
  154. Vakil, S. (2018). Ethics, identity, and political vision: Toward a justice-centered approach to equity in computer science education. Harvard Educational Review, 88(1), 26–52. [Google Scholar] [CrossRef]
  155. VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. [Google Scholar] [CrossRef]
  156. Veletsianos, G. (2022). Teaching with AI: A practical guide to transforming education. Johns Hopkins University Press. [Google Scholar]
  157. Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841–887. [Google Scholar] [CrossRef]
  158. Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25(1), 127–147. [Google Scholar] [CrossRef]
  159. Weintrop, D., & Wilensky, U. (2019). Transitioning from introductory block-based and text-based environments to professional programming languages in high school computer science classrooms. Computers & Education, 142, 103646. [Google Scholar] [CrossRef]
  160. Weller, M. (2020). 25 years of ed tech. Athabasca University Press. [Google Scholar]
  161. Williamson, B. (2019). Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education. British Journal of Sociology of Education, 40(2), 185–200. [Google Scholar] [CrossRef]
  162. Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365. [Google Scholar] [CrossRef]
  163. Winfield, A. F., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A, 376(2133), 20180085. [Google Scholar] [CrossRef]
  164. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. [Google Scholar] [CrossRef]
  165. Yadav, A., Hong, H., & Stephenson, C. (2016). Computational thinking for all: Pedagogical approaches to embedding 21st century problem solving in K-12 classrooms. TechTrends, 60(6), 565–568. [Google Scholar] [CrossRef]
  166. Zawacki-Richter, O., & Latchem, C. (2018). Exploring four decades of research in Computers & Education. Computers & Education, 122, 136–152. [Google Scholar] [CrossRef]
  167. Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. [Google Scholar] [CrossRef]
  168. Zeide, E. (2019). Artificial intelligence in higher education: Applications, promise and perils, and ethical questions. EDUCAUSE Review, 54(3), 31–47. [Google Scholar]
Figure 1. PRISMA Flow Diagram: Literature Selection Process. Note: Comprehensive narrative review following PRISMA 2020 guidelines. Includes peer-reviewed research and authoritative grey literature. Source: Page (2021).
Table 1. Theoretical Comparison: HAIST and Adjacent Frameworks.
Frameworks compared: Human-in-the-Loop (HITL); Socio-Technical Systems Theory; Constructionism; Human–AI Symbiotic Theory (HAIST).
Aim
  • HITL: Maintain human control and oversight of AI decision-making processes to prevent harmful autonomous actions
  • Socio-Technical Systems Theory: Understand mutual shaping between technology and social practices; analyze how technical and social elements co-constitute organizational systems
  • Constructionism: Enable learning through construction of meaningful artifacts; emphasize learner agency in knowledge building
  • HAIST: Preserve and enhance human cognitive development within AI-mediated learning environments; ensure AI augments rather than replaces uniquely human capabilities
Locus of Control
  • HITL: Human retains final decision authority; AI provides recommendations that humans validate or reject
  • Socio-Technical Systems Theory: Distributed across the socio-technical assemblage; neither purely technical nor purely social control
  • Constructionism: Learner holds primary control over construction processes; educator facilitates rather than directs
  • HAIST: Shared between human and AI, with intentional design to preserve learner agency in problem definition, solution design, and ethical reasoning; AI handles computational tasks while the human maintains authority over pedagogical and ethical dimensions
Design Primitives
  • HITL: Verification checkpoints; human approval gates; transparency mechanisms; fail-safe protocols
  • Socio-Technical Systems Theory: Boundary objects; affordances; inscriptions; work practices; organizational routines; socio-material configurations
  • Constructionism: Computational materials; debugging opportunities; shareable objects; low floors/high ceilings; ownership of learning products
  • HAIST: Complementary cognitive architecture (task allocation); transformative agency enhancement (expanding learner autonomy); ethical knowledge co-construction (transparent processes with bias mitigation); scaffolded AI interaction protocols
Assessment Evidence
  • HITL: Accuracy of AI outputs after human review; reduction in AI errors; human satisfaction with oversight mechanisms
  • Socio-Technical Systems Theory: Successful coordination between social and technical elements; organizational effectiveness; user adaptation patterns; technology appropriation
  • Constructionism: Quality and meaningfulness of constructed artifacts; evidence of debugging and iteration; learner ownership; transfer to new contexts
  • HAIST: Growth in learner CT competencies independent of AI; ability to critically evaluate AI-generated solutions; maintenance of learner agency; equitable outcomes across diverse learner populations; development of ethical AI collaboration skills
Equity Safeguards
  • HITL: Human review can catch discriminatory AI decisions, but depends on the human reviewer’s own biases; may create bottlenecks limiting scalability
  • Socio-Technical Systems Theory: Attention to power dynamics and structural inequalities, but lacks specific mechanisms for algorithmic bias detection
  • Constructionism: Emphasis on culturally relevant materials and learner interests; recognition of diverse ways of knowing, but limited attention to algorithmic equity
  • HAIST: Architectural embedding of bias detection; real-time monitoring across demographic groups; preservation of agency for marginalized learners; culturally responsive problem contexts; transparent AI limitations; inclusive development teams
Notes: HITL focuses on preventing AI harm through human oversight but does not explicitly address cognitive development; socio-technical theory describes technology-society relationships but does not provide prescriptive design principles for educational contexts; constructionism emphasizes learner agency and meaning-making but predates AI and does not address human–AI cognitive dynamics; HAIST extends these traditions with a specific focus on preserving human cognitive growth within AI-enhanced educational environments.
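To make the "real-time monitoring across demographic groups" safeguard in the HAIST column concrete, the following minimal sketch applies a demographic-parity check (the four-fifths rule familiar from the algorithmic fairness literature) to a hypothetical log of AI tutoring interactions. The function name, data format, and 0.8 threshold are illustrative assumptions, not an implementation drawn from any system cited in this review.

```python
from collections import defaultdict

def demographic_parity_ratios(records, threshold=0.8):
    """Compute the rate of a positive outcome (e.g., receiving a productive
    AI intervention) per demographic group, and flag groups whose rate falls
    below `threshold` times the best-served group's rate (four-fifths rule)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += int(outcome)
        counts[group][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    best = max(rates.values()) or 1.0  # avoid division by zero if no positives
    flagged = {g: r / best for g, r in rates.items() if r / best < threshold}
    return rates, flagged

# Hypothetical interaction log: (group label, productive intervention received?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates, flagged = demographic_parity_ratios(log)
print(rates)    # {'A': 0.67, 'B': 0.33} (approximately)
print(flagged)  # {'B': 0.5}: group B falls below the 0.8 parity ratio
```

In practice such a check would run continuously over interaction telemetry and trigger human review rather than automated correction, consistent with HAIST's emphasis on transparent processes.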
Table 2. Framework-to-Practice Mapping for AI-Enhanced CT Education.
| Framework | Core Principles | Application to AI-Enhanced CT | Implementation Strategies | Assessment Indicators |
|---|---|---|---|---|
| TPACK | Integration of technological, pedagogical, and content knowledge | Guides educators in balancing AI capabilities with CT objectives and pedagogical methods | Faculty development addressing all three domains; collaborative design; iterative refinement | Appropriate AI tool selection; pedagogically sound integration; maintained CT objectives |
| UDL | Multiple means of representation, engagement, and action/expression | Ensures AI-enhanced CT environments accommodate varied learning preferences and abilities | Multiple content formats; choice in problem contexts; flexible assessment modes | Equitable access across learners; reduced achievement gaps; increased engagement |
| Culturally Responsive Teaching | Recognition and incorporation of students' cultural backgrounds into instruction | Ensures AI systems reflect diverse cultural problem-solving approaches | Culturally relevant problem contexts; diverse representation; validation of multiple strategies | Student sense of belonging; engagement with CT concepts; cultural knowledge as an asset |
| HAIST | Preservation of human cognitive development; complementary cognitive architecture | Provides specific guidance for task allocation and maintaining learner agency | Scaffolded AI introduction; documentation of reasoning; critical AI evaluation; bias checks | Independent CT competency growth; ability to work with and without AI; critical evaluation skills |
| Cognitive Load Theory | Management of intrinsic, extraneous, and germane cognitive load | Guides AI interface design to avoid overwhelming learners | Simplified AI interfaces; gradual complexity increases; AI handles routine tasks; worked examples | Appropriate cognitive challenge; successful CT development; learner confidence |
Integration Notes: These frameworks are complementary rather than competing, and effective AI-enhanced CT education requires integration across multiple frameworks. HAIST provides specific human–AI interaction principles that extend TPACK's technology dimension; UDL and culturally responsive teaching ensure equity considerations are embedded throughout design; cognitive load theory prevents AI assistance from overwhelming learners.
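Two of the implementation strategies above, HAIST's "scaffolded AI introduction" and cognitive load theory's worked examples with gradual complexity, can be combined into a single assistance-fading rule. The sketch below is one possible reading under stated assumptions: the mastery scale, thresholds, and assistance labels are hypothetical, and a production system would estimate mastery from assessment data rather than store it directly.

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    """Minimal learner model for illustration only."""
    mastery: float      # 0.0 (novice) to 1.0 (proficient); assumed known here
    recent_errors: int  # errors on the current CT task

def select_scaffold(state: LearnerState) -> str:
    """Map learner state to an AI assistance level, fading support as
    mastery grows: full worked examples for novices, targeted hints
    mid-way, and critique-only review once mastery is high, so the
    learner's own reasoning, not the AI, carries the task."""
    if state.mastery < 0.3:
        return "worked_example"   # AI demonstrates the full solution path
    if state.mastery < 0.7 or state.recent_errors >= 3:
        return "targeted_hint"    # AI points at the faulty step only
    return "critique_only"        # learner solves; AI reviews afterwards

print(select_scaffold(LearnerState(mastery=0.2, recent_errors=0)))  # worked_example
print(select_scaffold(LearnerState(mastery=0.5, recent_errors=1)))  # targeted_hint
print(select_scaffold(LearnerState(mastery=0.9, recent_errors=0)))  # critique_only
```

The design choice worth noting is that the highest-mastery branch withholds solutions entirely, preserving the learner agency that both Table 1 and Table 2 identify as central to HAIST.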
Table 3. Summary of Key Empirical Studies on AI-Enhanced Computational Thinking Education (2018–2025).
| Study | Context | Sample | CT Focus | AI/LLM Capability | Agency Mechanism | Equity Considerations |
|---|---|---|---|---|---|---|
| Holstein et al. (2018) | University programming classroom | n = 18 teachers, 200+ students | Debugging, problem-solving | AI classroom orchestration tool | Teachers more efficient at identifying struggling students | Some students felt surveilled; varied teacher adoption |
| Hsu (2025) | Undergraduate programming courses | | Decomposition, abstraction, algorithmic thinking, prompt engineering | LLMs (ChatGPT, GPT-4) | Constructionist prompting framework; students control problem definition | Framework preserves learner agency |
| Hassan et al. (2025) | University CS1 | n = 17 | Code comprehension, debugging, algorithmic reasoning | LLM chatbot with PythonTutor | Students chose tools; chatbot guided rather than solved | Focus on scalable support for novices |
| Flores Romero et al. (2025) | Doctoral AIEd course | n = 25 | Task decomposition, CT as cognitive trait | ChatGPT (GPT-4) for content creation | Students controlled exploration vs. exploitation | Examined CT trait effects on AI use |
| Bau et al. (2017) | University CS1 courses | n = 166 | Debugging, syntax understanding | AI-powered error explanation | 40% reduction in time to fix syntax errors | Language barriers affected error message comprehension |
| Price et al. (2016) | Multi-institutional programming | n = 1700+ | Algorithm design, debugging | Automated hint generation | Improved assignment completion rates | Hints less effective for students with weak foundational skills |
| Rivers and Koedinger (2017) | University data structures course | n = 203 | Algorithm design, optimization | AI tutoring system | Significant learning gains in algorithm efficiency | Benefits varied by prior programming experience |
| Lye and Koh (2014) | K-12 and higher education programming | Systematic review of 27 studies | Algorithm design, problem decomposition | Various programming environments and tools | Identified key pedagogical approaches for CT development | Need for inclusive teaching methods noted across diverse learners |
| Crow et al. (2018) | University software engineering | n = 156 | Code review, pattern recognition | AI code analysis tools | Enhanced code quality awareness | Tool complexity created barriers for some students |
| Douce et al. (2005) | European universities | Multiple institutions | Programming assessment | Automated assessment systems | Reliable basic assessment but missed nuanced solutions | Bias against non-conventional programming approaches |
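Several rows in Table 3 describe tools whose core move is translating raw compiler or interpreter output into beginner-friendly language (e.g., Bau et al., 2017). The sketch below is a deliberately minimal stand-in for that genre, not a reconstruction of any cited system; the message table and function name are assumptions, and real tools use far richer models of both the error and the learner.

```python
# Plain-language rewrites keyed on substrings of Python's SyntaxError
# messages (exact wording varies across Python versions).
PLAIN_LANGUAGE = {
    "unterminated string literal": "A string is missing its closing quote.",
    "EOL while scanning string literal": "A string is missing its closing quote.",
    "was never closed": "A bracket or parenthesis was opened but never closed.",
}

def explain_syntax_error(source: str) -> str:
    """Compile student code and, where possible, translate the raw
    SyntaxError message into a beginner-friendly explanation."""
    try:
        compile(source, "<student_code>", "exec")
        return "No syntax errors detected."
    except SyntaxError as err:
        msg = err.msg or ""
        for pattern, friendly in PLAIN_LANGUAGE.items():
            if pattern in msg:
                return f"Line {err.lineno}: {friendly}"
        return f"Line {err.lineno}: {msg}"  # fall back to Python's own wording

# e.g. "Line 1: A string is missing its closing quote."
print(explain_syntax_error("print('hello"))
```

Unrecognized messages fall back to Python's own wording, which mirrors the equity finding in the table: learners facing language barriers benefit most when the translation layer actually covers the errors they encounter.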