AI-Enhanced Computational Thinking: A Comprehensive Review of Ethical Frameworks and Pedagogical Integration for Equitable Higher Education
Abstract
1. Introduction
Methodology
- Research Questions
- (1) How can AI technologies be integrated into computational thinking education while promoting rather than undermining educational equity?
- (2) What theoretical and practical frameworks exist for implementing bias-free AI-enhanced computational thinking environments?
- (3) What are the key challenges and opportunities for equitable AI integration in higher education computational thinking programs?
- Search Strategy
- AI concepts: (“artificial intelligence” OR “machine learning” OR “AI” OR “intelligent tutoring” OR “adaptive learning” OR “large language model*” OR “LLM” OR “generative AI”)
- CT concepts: (“computational thinking” OR “programming education” OR “computer science education” OR “coding education” OR “algorithm* learning”)
- Equity concepts: (“equity” OR “bias” OR “fairness” OR “diversity” OR “inclusion” OR “justice” OR “marginalized”)
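Within each concept group the terms were combined with OR, and the three groups were joined with AND. A minimal sketch of how such a query string might be assembled programmatically (the `build_query` helper and group names are illustrative, not part of the protocol):

```python
# Illustrative assembly of the Boolean search string from the three
# concept groups listed above. Term lists are copied from the protocol;
# the helper function itself is an editorial sketch.

AI_TERMS = ['"artificial intelligence"', '"machine learning"', '"AI"',
            '"intelligent tutoring"', '"adaptive learning"',
            '"large language model*"', '"LLM"', '"generative AI"']
CT_TERMS = ['"computational thinking"', '"programming education"',
            '"computer science education"', '"coding education"',
            '"algorithm* learning"']
EQUITY_TERMS = ['"equity"', '"bias"', '"fairness"', '"diversity"',
                '"inclusion"', '"justice"', '"marginalized"']

def build_query(*term_groups):
    """OR the terms within each group, then AND the groups together."""
    return " AND ".join("(" + " OR ".join(g) + ")" for g in term_groups)

query = build_query(AI_TERMS, CT_TERMS, EQUITY_TERMS)
```

The same string can then be adapted to each database's advanced-search syntax (field tags and wildcard handling differ between, e.g., Scopus and Web of Science).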
- Inclusion Criteria
- Coding and Analysis Procedures
- Rationale for Comprehensive Review Approach
- Literature Categorization
- Evidence Limitations
2. Findings: Theoretical Foundations for AI-Enhanced Computational Thinking
2.1. Evolution of Computational Thinking Frameworks
2.2. Human–AI Symbiotic Theory in Educational Contexts
2.3. HAIST-Informed Design Principles
2.4. Integration with Established Pedagogical Frameworks
3. Current State of AI Integration in Computational Thinking Education
3.1. Technological Landscape and Capabilities
3.2. Benefits and Opportunities for Enhanced Learning
3.3. Challenges and Limitations in Current Implementations
4. Algorithmic Bias and Equity Concerns in AI-Enhanced Education
4.1. Manifestations of Bias in Educational AI Systems
CT-Specific Bias Manifestations and Classroom Mitigations
- Decomposition Bias
- Pattern Recognition Bias
- Abstraction Bias
- Algorithm Design Bias
4.2. Impact on Student Identity and Self-Efficacy
4.3. Intersectionality and Compounded Effects
5. Ethical Frameworks for Responsible AI Integration
5.1. Implementation Framework for Ethical AI Integration
5.2. Governance and Accountability Mechanisms
6. Pedagogical Strategies for Equitable AI Integration
6.1. Faculty Development and Preparation
6.2. Student-Centered Implementation Approaches
Pedagogical Implementation Strategies
- Scaffolded AI Integration Protocol
- Cultural Asset Pedagogy for CT
- Transparent AI Interaction Framework
6.3. Curriculum Design and Assessment Innovation
6.4. Computational Thinking Pedagogical Frameworks for Higher Education
- Problem-Based Learning Integration
- Cognitive Apprenticeship Model
- Assessment of CT Learning Outcomes
7. Technology Infrastructure and Implementation Considerations
7.1. Technical Requirements for Equitable AI Integration
7.2. Resource Allocation and Sustainability
7.3. Quality Assurance and Evaluation
8. Evidence from Current Research and Practice
8.1. Comprehensive Narrative Review of AI Integration Outcomes
Empirical Evidence from AI-Enhanced Computational Thinking Studies
8.2. Institutional Implementation Models
8.3. Student Experience and Outcome Analysis
9. Future Directions and Research Priorities
9.1. Emerging Technologies and Opportunities
9.2. Research Methodologies and Frameworks
9.3. Policy Development and Institutional Change
10. Implications for Practice and Policy
10.1. Recommendations for Educational Institutions
10.2. Policy Recommendations for Educational Governance
10.3. Community Engagement and Stakeholder Involvement
10.4. Field-Oriented Recommendations
11. Conclusions: Toward Equitable AI-Enhanced Computational Thinking Education
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Comprehensive Search Protocol and Complete Inclusion List
- Initial records identified: 897 (863 + 34 supplemental)
- After duplicate removal: 847
- Title/abstract screening: 235 to full-text
- Full-text assessment: 167 included, 68 excluded
- Inter-rater reliability (20% sample): Cohen’s κ = 0.84
- Web of Science: 226 records
- Scopus: 257 records
- ERIC: 171 records
- ACM Digital Library: 143 records
- IEEE Xplore: 100 records
| Exclusion Reason | Count |
| Insufficient implementation detail | 26 |
| No equity considerations | 19 |
| Duplicate reporting | 13 |
| Methodology concerns | 10 |
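As a quick arithmetic check, the exclusion-reason table above can be reconciled with the reported full-text outcomes. A minimal sketch (counts are copied from this appendix; the variable names are an editorial aid, not part of the review protocol):

```python
# Tally the stated exclusion reasons against the reported full-text
# outcomes. All counts are copied from Appendix A.

exclusion_reasons = {
    "Insufficient implementation detail": 26,
    "No equity considerations": 19,
    "Duplicate reporting": 13,
    "Methodology concerns": 10,
}

included = 167  # sources retained after full-text assessment
excluded = sum(exclusion_reasons.values())

# The four stated reasons fully account for the reported 68 exclusions,
# and included + excluded recovers the number of full texts assessed.
assert excluded == 68
full_texts_assessed = included + excluded  # 167 + 68 = 235
```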
- E = Empirical (primary research study)
- C = Conceptual (theoretical framework)
- P = Policy (institutional guidelines/reports)
- F = Foundational (seminal CT/pedagogy work)
- Empirical AI-CT studies (HE context): ~25 sources
- Foundational CT pedagogy research: ~38 sources
- AI ethics and policy literature: ~48 sources
- Theoretical/foundational sources: ~42 sources
- K-12 studies (included for relevance): ~14 sources
Appendix B. Worked Example Implementation Materials
Appendix B.1. Assessment Rubric for Data Analysis CT Unit
| Level | Decomposition | Pattern Recognition | Abstraction | Algorithm Design |
| Exemplary (90–100%) | Breaks complex problem into logical, manageable sub-tasks with clear rationale | Identifies multiple relevant patterns; explains significance in community context | Creates abstractions preserving essential contextual information | Designs efficient, well-documented algorithm; considers edge cases |
| Proficient (80–89%) | Appropriately decomposes problem; mostly complete breakdown | Identifies relevant patterns; explains basic significance | Abstracts appropriately but may oversimplify; basic explanation provided | Designs functional algorithm with documentation; considers main cases |
| Developing (70–79%) | Attempts decomposition but misses key sub-tasks | Identifies obvious patterns; limited explanation | Attempts abstraction but loses essential information | Creates algorithm for basic cases; limited documentation |
| Beginning (<70%) | Minimal or inappropriate decomposition | Fails to identify meaningful patterns | Abstraction absent or inappropriate | Algorithm incomplete or non-functional |
- Uses AI strategically for computational tasks
- Documents all AI usage with clear rationale
- Proactively identifies potential bias
- Maintains human authority for ethical decisions
- Thoroughly documents all major decisions
- Student selected and framed community problem
- Clear ownership of solution design
- Problem deeply relevant to local community
- Analysis respects community perspectives
- Findings have clear potential for positive local impact
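The percentage bands in the rubric above lend themselves to a simple scoring helper. A minimal sketch, assuming grades on a 0–100 scale (the `rubric_level` function is illustrative, not part of the published materials; the band boundaries come from the table):

```python
# Hypothetical helper mapping a 0-100 score to the Appendix B.1
# performance band. Cut-offs follow the rubric table above.

def rubric_level(score: float) -> str:
    """Return the rubric band for a percentage score."""
    if score >= 90:
        return "Exemplary"
    if score >= 80:
        return "Proficient"
    if score >= 70:
        return "Developing"
    return "Beginning"
```

In practice each of the four CT dimensions (decomposition, pattern recognition, abstraction, algorithm design) would be scored separately before any aggregation.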
Appendix B.2. Prompt Documentation Template
| Prompt # | My Prompt Text | AI Response Summary | My Decision (Accept/Modify/Reject) | Rationale for Decision |
- Which CT skills did I use in constructing this prompt?
- What tasks did I keep for myself vs. delegate to AI? Why?
- Did I identify any potential bias in AI suggestions?
- How did I maintain control over problem definition and solution design?
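The template columns above map naturally onto a small record type that a course could use to collect prompt logs programmatically. A hypothetical sketch (the `PromptLogEntry` class, `Decision` enum, and sample entry are illustrative, not part of the published template):

```python
# Illustrative data-structure rendering of the prompt documentation
# template. Field names mirror the table columns above.

from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "Accept"
    MODIFY = "Modify"
    REJECT = "Reject"

@dataclass
class PromptLogEntry:
    number: int            # Prompt #
    prompt_text: str       # My Prompt Text
    response_summary: str  # AI Response Summary
    decision: Decision     # My Decision (Accept/Modify/Reject)
    rationale: str         # Rationale for Decision

# Hypothetical sample entry for a community data-analysis unit.
entry = PromptLogEntry(
    number=1,
    prompt_text="Summarize the cleaned survey data by neighborhood.",
    response_summary="Grouped counts plus a suggested bar chart.",
    decision=Decision.MODIFY,
    rationale="Kept the grouping; replaced the chart so smaller "
              "community groups are not visually obscured.",
)
```

The reflection questions that follow the table can then be answered against a complete, structured log rather than from memory.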
Appendix B.3. Bias-Check Prompts for AI-Assisted Data Analysis
- Does it make assumptions based on deficit thinking?
- Does it overlook community assets or cultural strengths?
- Does it perpetuate common stereotypes?
- What alternative interpretations might community members offer?
- Are all relevant community groups adequately represented?
- Does the visualization obscure or highlight certain groups unfairly?
- What story does this data tell, and whose perspective does it privilege?
- Who benefits most from this solution? Who might be disadvantaged?
- Does this solution require unequally distributed resources?
- What barriers might prevent equitable access?
- What historical inequities might contribute to current patterns?
- Does this analysis risk replicating past discriminatory approaches?
- What structural factors beyond individual behavior shape these patterns?
Appendix B.4. Human-Only vs. AI-Assisted Task Allocation Table
| Task | Allocation | Rationale |
| Problem selection & framing | Human only | Builds agency; requires community knowledge |
| Data source identification | Human only | Develops critical evaluation skills |
| Initial decomposition | Human only | Core CT skill foundation |
| Basic pattern recognition | Human only | Foundation for understanding AI later |
| Preliminary analysis plan | Human only | Requires domain knowledge |
References
- Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429. [Google Scholar] [CrossRef] [PubMed]
- Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2(3), 431–440. [Google Scholar] [CrossRef]
- Ala-Mutka, K. M. (2005). A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2), 83–102. [Google Scholar] [CrossRef]
- American Association of University Professors. (2023). Artificial intelligence and academic professions. Available online: https://www.aaup.org/reports-publications/aaup-policies-reports/topical-reports/artificial-intelligence-and-academic (accessed on 13 March 2025).
- Amigud, A., & Lancaster, T. (2019). 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Computers & Education, 134, 98–107. [Google Scholar] [CrossRef]
- Anthology. (2024). AI policy framework for higher education. Available online: https://www.anthology.com/news/new-ai-policy-framework-from-anthology-empowers-higher-education-to-balance-the-risks-and (accessed on 13 March 2025).
- Baker, R. S., & Hawn, A. (2022). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32(4), 1052–1092. [Google Scholar] [CrossRef]
- Baker, R. S., & Siemens, G. (2014). Educational data mining and learning analytics. In Learning analytics (pp. 61–75). Springer. [Google Scholar]
- Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52(1), 1–26. [Google Scholar] [CrossRef]
- Barocas, S., & Hardt, M. (2019). Fairness and machine learning: Limitations and opportunities. MIT Press. [Google Scholar]
- Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. [Google Scholar] [CrossRef]
- Bau, D., Gray, J., Kelleher, C., Sheldon, J., & Turbak, F. (2017). Learnable programming: Blocks and beyond. Communications of the ACM, 60(6), 72–80. [Google Scholar] [CrossRef]
- Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University Press. [Google Scholar]
- Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April 21–26). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. [Google Scholar] [CrossRef]
- Blikstein, P. (2018). Pre-college computer science education: A survey of the field. Google Inc. Available online: https://goo.gl/gmS1Vm (accessed on 13 March 2025).
- Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357. [Google Scholar]
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. [Google Scholar] [CrossRef]
- Brennan, K., & Resnick, M. (2012, April 13–17). New frameworks for studying and assessing the development of computational thinking. Proceedings of the 2012 Annual Meeting of the American Educational Research Association (pp. 1–25), Vancouver, BC, Canada. Available online: http://scratched.gse.harvard.edu/ct/files/AERA2012.pdf (accessed on 13 March 2025).
- Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., Saddiqui, S., & van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837–1856. [Google Scholar] [CrossRef]
- California State University. (2024). ETHICAL principles AI framework for higher education. Available online: https://genai.calstate.edu/communities/faculty/ethical-and-responsible-use-ai/ethical-principles-ai-framework-higher-education (accessed on 4 May 2025).
- CAST. (2018). Universal design for learning guidelines version 2.2. Available online: http://udlguidelines.cast.org (accessed on 11 February 2025).
- Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20, 38. [Google Scholar] [CrossRef]
- Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. [Google Scholar] [CrossRef]
- Chassignol, M., Khoroshavin, A., Klimova, A., & Bilyatdinova, A. (2018). Artificial intelligence trends in education: A narrative overview. Procedia Computer Science, 136, 16–24. [Google Scholar] [CrossRef]
- Chen, X., Zou, D., Cheng, G., & Xie, H. (2020). Detecting latent topics and trends in educational technologies over four decades using structural topic modeling: A retrospective of all volumes of Computers & Education. Computers & Education, 151, 103855. [Google Scholar] [CrossRef]
- Chiu, T. K. (2021). Digital support for student engagement in blended learning based on self-determination theory. Computers in Human Behavior, 124, 106909. [Google Scholar] [CrossRef]
- Chiu, T. K., & Chai, C. S. (2020). Sustainable curriculum planning for artificial intelligence education: A self-determination theory perspective. Sustainability, 12(14), 5568. [Google Scholar] [CrossRef]
- Clark, R. C., & Mayer, R. E. (2016). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (4th ed.). John Wiley & Sons. [Google Scholar]
- Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Lawrence Erlbaum Associates. [Google Scholar]
- Cornell University Center for Teaching Innovation. (2024). Ethical AI for teaching and learning. Available online: https://teaching.cornell.edu/generative-artificial-intelligence/ethical-ai-teaching-and-learning (accessed on 4 May 2025).
- Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. [Google Scholar] [CrossRef]
- Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1989(1), 139–167. [Google Scholar]
- Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). SAGE Publications. [Google Scholar]
- Crow, T., Luxton-Reilly, A., & Wuensche, B. (2018, January 30–February 2). Intelligent tutoring systems for programming education: A systematic review. Proceedings of the 20th Australasian Computing Education Conference (pp. 53–62), Brisbane, QLD, Australia. [Google Scholar] [CrossRef]
- Danaher, J., Hogan, M. J., Noone, C., Kennedy, R., Behan, A., De Paor, A., Felzmann, H., Haklay, M., Khoo, S.-M., Morison, J., Murphy, M. H., O’Brolchain, N., Schafer, B., & Shankar, K. (2017). Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society, 4(2). [Google Scholar] [CrossRef]
- Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Learning Policy Institute. [Google Scholar]
- Dawson, P., & Sutherland-Smith, W. (2018). Can markers detect contract cheating? Results from a pilot study. Assessment & Evaluation in Higher Education, 43(2), 286–293. [Google Scholar] [CrossRef]
- Denning, P. J., & Tedre, M. (2019). Computational thinking. MIT Press. [Google Scholar]
- Denny, P., Prather, J., Becker, B. A., Finnie-Ansley, J., Hellas, A., Leinonen, J., Luxton-Reilly, A., Reeves, B. N., Santos, E. A., & Sarsa, S. (2024). Computing education in the era of generative AI. Communications of the ACM, 67(2), 56–67. [Google Scholar] [CrossRef]
- Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. [Google Scholar]
- Douce, C., Livingstone, D., & Orwell, J. (2005). Automatic test-based assessment of programming: A review. Journal on Educational Resources in Computing, 5(3), 4-es. [Google Scholar] [CrossRef]
- Drachsler, H., & Greller, W. (2016, April 25–29). Privacy and analytics: It’s a DELICATE issue a checklist for trusted learning analytics. Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (pp. 89–98), Edinburgh, UK. [Google Scholar] [CrossRef]
- EDUCAUSE. (2024). 2024 EDUCAUSE action plan: AI policies and guidelines. Available online: https://www.educause.edu/research/2024/2024-educause-action-plan-ai-policies-and-guidelines (accessed on 4 May 2025).
- Eglash, R., Gilbert, J. E., & Foster, E. (2006). Toward culturally responsive computing education. Communications of the ACM, 49(12), 33–35. [Google Scholar] [CrossRef]
- Ertmer, P. A., & Ottenbreit-Leftwich, A. T. (2010). Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42(3), 255–284. [Google Scholar] [CrossRef]
- European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act). COM(2021) 206 final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 13 March 2025).
- European Parliament and Council. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (L 1689). Official Journal of the European Union. [Google Scholar]
- Faculty Focus. (2025). Crafting thoughtful AI policy in higher education: A guide for institutional leaders. Available online: https://www.facultyfocus.com/articles/academic-leadership/crafting-thoughtful-ai-policy-in-higher-education-a-guide-for-institutional-leaders/ (accessed on 4 May 2025).
- Family Educational Rights and Privacy Act (FERPA). (1974). 20 U.S.C. § 1232g; 34 CFR part 99. Available online: https://www.ecfr.gov/current/title-34/subtitle-A/part-99 (accessed on 4 May 2025).
- Flores Romero, P., Fung, K. N. N., Rong, G., & Cowley, B. U. (2025). Structured human-LLM interaction design reveals exploration and exploitation dynamics in higher education content generation. Npj Science of Learning, 10, 40. [Google Scholar] [CrossRef]
- Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. [Google Scholar] [CrossRef]
- Freeman, R. E., Harrison, J. S., Wicks, A. C., Parmar, B. L., & De Colle, S. (2010). Stakeholder theory: The state of the art. Cambridge University Press. [Google Scholar]
- Gardner, J., Brooks, C., & Baker, R. (2019, March 4–8). Evaluating the fairness of predictive student models through slicing analysis. Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 225–234), Tempe, AZ, USA. [Google Scholar] [CrossRef]
- Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. [Google Scholar] [CrossRef]
- Gašević, D., Kovanović, V., & Joksimović, S. (2017). Piecing the learning analytics puzzle: A consolidated model of a field of research and practice. Learning: Research and Practice, 3(1), 63–78. [Google Scholar] [CrossRef]
- Gay, G. (2018). Culturally responsive teaching: Theory, research, and practice (3rd ed.). Teachers College Press. [Google Scholar]
- Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., III, & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. Available online: https://arxiv.org/pdf/1803.09010v7 (accessed on 27 October 2025). [CrossRef]
- General Data Protection Regulation (GDPR). (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council. Regulation (EU), 679(2016), 10–13. [Google Scholar]
- González, N., Moll, L. C., & Amanti, C. (2005). Funds of knowledge: Theorizing practices in households, communities, and classrooms. Lawrence Erlbaum Associates. [Google Scholar]
- Goode, J., Margolis, J., & Chapman, G. (2019). Equity in computer science education. In S. Fincher, & A. Robins (Eds.), The Cambridge handbook of computing education research (pp. 561–583). Cambridge University Press. [Google Scholar]
- Gouseti, A., James, F., Fallin, L., & Burden, K. (2024). The ethics of using AI in K-12 education: A systematic literature review. Technology, Pedagogy and Education, 34(2), 161–182. [Google Scholar] [CrossRef]
- Govender, I. (2016). The learning context: Influence on learning to program. Computers & Education, 53(4), 1218–1230. [Google Scholar]
- Government of Canada. (2023). Artificial intelligence and data commissioner: Annual report 2023. Available online: https://www.priv.gc.ca/en/opc-actions-and-decisions/ar_index/202223/ar_202223/ (accessed on 4 May 2025).
- Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. [Google Scholar] [CrossRef]
- Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38–43. [Google Scholar] [CrossRef]
- Harris, J., & Hofer, M. (2011). Technological pedagogical content knowledge (TPACK) in action. Journal of Research on Technology in Education, 43(3), 211–229. [Google Scholar] [CrossRef]
- Harvard University. (2023). Artificial intelligence at Harvard: Guidance for students. Available online: https://provost.harvard.edu/guidelines-using-chatgpt-and-other-generative-ai-tools-harvard (accessed on 4 May 2025).
- Hassan, M., Chen, Y., Denny, P., & Zilles, C. (2025). On teaching novices computational thinking by utilizing large language models within assessments. In Proceedings of the 56th ACM Technical Symposium on Computer Science Education (pp. 485–491). Association for Computing Machinery. [Google Scholar] [CrossRef]
- Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107. [Google Scholar] [CrossRef]
- Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign. [Google Scholar]
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504–526. [Google Scholar] [CrossRef]
- Holstein, K., McLaren, B. M., & Aleven, V. (2018, June 27–30). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. Proceedings of the 19th International Conference on Artificial Intelligence in Education (pp. 154–168), London, UK. [Google Scholar] [CrossRef]
- Holstein, K., McLaren, B. M., & Aleven, V. (2019). Co-designing a real-time classroom orchestration tool to support teacher-AI complementarity. Journal of Learning Analytics, 6(2), 27–52. [Google Scholar] [CrossRef]
- Hsu, T. C. (2025). A constructionist prompting framework for developing computational thinking with generative artificial intelligence. Computers and Education: Artificial Intelligence, 7, 100267. [Google Scholar] [CrossRef]
- Hsu, T. C., Abelson, H., Lao, N., Tseng, Y. H., & Lin, Y. T. (2021). Behavioral-pattern exploration and development of an instructional tool for young children to learn AI. Computers and Education: Artificial Intelligence, 2, 100012. [Google Scholar]
- Hutchinson, B., & Mitchell, M. (2019, January 29–31). 50 years of test (un)fairness: Lessons for machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 49–58), Atlanta, GA, USA. [Google Scholar] [CrossRef]
- Ifenthaler, D., & Schumacher, C. (2016). Student perceptions of privacy principles for learning analytics. Educational Technology Research and Development, 64(5), 923–938. [Google Scholar] [CrossRef]
- International Center for Academic Integrity. (2021). The fundamental values of academic integrity (3rd ed.). International Center for Academic Integrity. Available online: https://academicintegrity.org/images/pdfs/20019_ICAI-Fundamental-Values_R12.pdf (accessed on 13 March 2025).
- Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. [Google Scholar] [CrossRef]
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. [Google Scholar] [CrossRef]
- Jones, K. M., & McCoy, C. (2019). Reconsidering data in learning analytics: Opportunities for critical research using a documentation studies framework. Learning, Media and Technology, 44(1), 52–63. [Google Scholar] [CrossRef]
- Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. [Google Scholar] [CrossRef]
- Khalil, M., & Ebner, M. (2014, June 23–26). MOOCs completion rates and possible methods to improve retention-a literature review. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 1305–1313), Tampere, Finland. [Google Scholar]
- King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2020). Artificial intelligence crime: An interdisciplinary analysis of foreseeable harms and solutions. Science and Engineering Ethics, 26(1), 89–120. [Google Scholar] [CrossRef]
- Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. [Google Scholar] [CrossRef]
- Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298–311. [Google Scholar] [CrossRef]
- Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70. [Google Scholar] [CrossRef]
- Koh, J. H. L., Basawapatna, A. R., Bennett, V., & Repenning, A. (2014, September 21–25). Towards the automatic recognition of computational thinking for adaptive visual language learning. Proceedings of IEEE Symposium on Visual Languages and Human-Centric Computing (pp. 59–66), Leganes, Spain. [Google Scholar]
- Kotter, J. P. (2012). Leading change. Harvard Business Review Press. [Google Scholar]
- Ladson-Billings, G. (2014). Culturally relevant pedagogy 2.0: A.k.a. the remix. Harvard Educational Review, 84(1), 74–84. [Google Scholar] [CrossRef]
- Lancaster, T., & Clarke, R. (2016). Contract cheating: The outsourcing of assessed student work. In Handbook of academic integrity (pp. 639–654). Springer. [Google Scholar]
- Lent, R. W., Brown, S. D., & Hackett, G. (2000). Contextual supports and barriers to career choice: A social cognitive analysis. Journal of Counseling Psychology, 47(1), 36–49. [Google Scholar] [CrossRef]
- Loksa, D., Ko, A. J., Jernigan, W., Oleson, A., Mendez, C. J., & Burnett, M. M. (2016, May 7–12). Programming, problem solving, and self-awareness: Effects of explicit guidance. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 1449–1461), San Jose, CA, USA. [Google Scholar]
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson. [Google Scholar]
- Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior, 41, 51–61. [Google Scholar] [CrossRef]
- Malgieri, G., & Comandé, G. (2017). Why a right to legibility of automated decision-making exists in the general data protection regulation. International Data Privacy Law, 7(4), 243–265. [Google Scholar] [CrossRef]
- Margolis, J., Goode, J., & Ryoo, J. J. (2015). Democratizing computer science. Educational Leadership, 72(4), 48–53. [Google Scholar] [CrossRef]
- Margulieux, L. E., Guzdial, M., & Catrambone, R. (2016, September 9–11). Subgoal-labeled instructional material improves performance and transfer in learning to develop mobile applications. Proceedings of the ACM Conference on International Computing Education Research (pp. 71–78), Melbourne, Australia. [Google Scholar]
- Massachusetts Institute of Technology. (2023). Working with AI: Guidelines for students and faculty. Available online: https://ist.mit.edu/ai-guidance (accessed on 13 March 2025).
- Meyer, A., Rose, D. H., & Gordon, D. (2014). Universal Design for Learning: Theory and practice. CAST Professional Publishing. [Google Scholar]
- Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019, January 29–31). Model cards for model reporting. Proceedings of the Conference On Fairness, Accountability, and Transparency (pp. 220–229), Atlanta, GA, USA. [Google Scholar] [CrossRef]
- Moll, L. C., Amanti, C., Neff, D., & Gonzalez, N. (1992). Funds of knowledge for teaching: Using a qualitative approach to connect homes and classrooms. Theory Into Practice, 31(2), 132–141. [Google Scholar] [CrossRef]
- Morello, L. T., & Chick, J. C. (2025). Human-AI Symbiotic Theory (HAIST): Development, Multi-Framework Assessment, and AI-Assisted Validation in Academic Research. Informatics, 12(3), 85. [Google Scholar] [CrossRef]
- Murphy, L., Lewandowski, G., McCauley, R., Simon, B., Thomas, L., & Zander, C. (2008). Debugging: The good, the bad, and the quirky—A qualitative analysis of novices’ strategies. ACM SIGCSE Bulletin, 40(1), 163–167. [Google Scholar] [CrossRef]
- National Institute of Standards and Technology. (2023). AI risk management framework (AI RMF 1.0). U.S. Department of Commerce. [CrossRef]
- Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. [Google Scholar] [CrossRef]
- Ocumpaugh, J., Baker, R., Gowda, S., Heffernan, N., & Heffernan, C. (2014). Population validity for educational data mining models: A case study in affect detection. British Journal of Educational Technology, 45(3), 487–501. [Google Scholar] [CrossRef]
- OECD. (2021). AI and the future of skills: Implications for higher education. OECD Publishing. [Google Scholar] [CrossRef]
- O’Neill, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group. [Google Scholar]
- Page, M. J. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. [Google Scholar] [CrossRef]
- Papert, S., & Harel, I. (1991). Situating constructionism. In I. Harel, & S. Papert (Eds.), Constructionism (pp. 1–11). Ablex Publishing. [Google Scholar]
- Pardo, A., & Siemens, G. (2014). Ethical and privacy principles for learning analytics. British Journal of Educational Technology, 45(3), 438–450. [Google Scholar] [CrossRef]
- Paré, G., Trudel, M. C., Jaana, M., & Kitsiou, S. (2015). Synthesizing information systems knowledge: A typology of literature reviews. Information & Management, 52(2), 183–199. [Google Scholar] [CrossRef]
- Paris, D., & Alim, H. S. (Eds.). (2017). Culturally sustaining pedagogies: Teaching and learning for justice in a changing world. Teachers College Press. [Google Scholar]
- Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2024). Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. Journal of Academic Ethics, 22, 89–113. [Google Scholar] [CrossRef]
- Personal Information Protection and Electronic Documents Act (PIPEDA). (2000). S.C. 2000, c. 5. Available online: https://laws-lois.justice.gc.ca/eng/acts/p-8.6/FullText.html (accessed on 13 March 2025).
- Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Blackwell Publishing. [Google Scholar] [CrossRef]
- Price, T. W., Dong, Y., & Lipovac, D. (2016, March 2–5). iSnap: Towards intelligent tutoring in novice programming environments. Proceedings of the 2016 ACM Technical Symposium on Computer Science Education (pp. 483–488), Memphis, TN, USA. [Google Scholar] [CrossRef]
- Prinsloo, P., & Slade, S. (2017, April 7–9). An elephant in the room: Educational data mining, learning analytics and ethics. Proceedings of the 9th International Conference on Networked Learning (pp. 46–55), Edinburgh, UK. [Google Scholar] [CrossRef]
- Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. [Google Scholar] [CrossRef]
- Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020, January 27–30). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44), Barcelona, Spain. [Google Scholar] [CrossRef]
- Regan, P. M., & Jesse, J. (2019). Ethical challenges of edtech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology, 21(3), 167–179. [Google Scholar] [CrossRef]
- Reich, J., & Mehta, J. D. (2020). Failure to disrupt: Why technology alone can’t transform education. Harvard University Press. [Google Scholar]
- Rivers, K., & Koedinger, K. R. (2017). Data-driven hint generation in vast solution spaces. Computers & Education, 104, 188–198. [Google Scholar]
- Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student attitudes toward learning analytics in higher education: “The fitbit version of the learning world”. Frontiers in Psychology, 7, 1959. [Google Scholar] [CrossRef] [PubMed]
- Robins, A., Rountree, J., & Rountree, N. (2003). Learning and teaching programming: A review and discussion. Computer Science Education, 13(2), 137–172. [Google Scholar] [CrossRef]
- Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press. [Google Scholar]
- Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582–599. [Google Scholar] [CrossRef]
- Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1), 364–389. [Google Scholar] [CrossRef]
- Ruffalo Noel Levitz. (2025). Why universities need AI governance. Available online: https://www.ruffalonl.com/blog/artificial-intelligence/why-universities-need-ai-governance/ (accessed on 13 March 2025).
- Ryoo, J. J., Margolis, J., Lee, C. H., Sandoval, C. D. M., & Goode, J. (2013). Democratizing computer science knowledge: Transforming the face of computer science through public high school education. Learning, Media and Technology, 38(2), 161–181. [Google Scholar] [CrossRef]
- Sanders, E. B. N., & Stappers, P. J. (2008). Co-creation and the new landscapes of design. CoDesign, 4(1), 5–18. [Google Scholar] [CrossRef]
- Schumacher, C., & Ifenthaler, D. (2018). Features students really expect from learning analytics. Computers in Human Behavior, 78, 397–407. [Google Scholar] [CrossRef]
- Scott, K. A., Sheridan, K. M., & Clark, K. (2015). Culturally responsive computing: A theory revisited. Learning, Media and Technology, 40(4), 412–436. [Google Scholar] [CrossRef]
- Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January 29–31). Fairness and abstraction in sociotechnical systems. Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59–68), Montreal, QC, Canada. [Google Scholar] [CrossRef]
- Selwyn, N. (2019). Should robots replace teachers?: AI and the future of education. John Wiley & Sons. [Google Scholar]
- Selwyn, N., Hillman, T., Bergviken Rensfeldt, A., & Perrotta, C. (2021). Digital technology and the futures of education: Critical research directions. British Educational Research Journal, 47(4), 1087–1106. [Google Scholar]
- Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Studies, 137, 102385. [Google Scholar] [CrossRef]
- Simonsen, J., & Robertson, T. (Eds.). (2012). Routledge international handbook of participatory design. Routledge. [Google Scholar] [CrossRef]
- Singapore Government. (2020). Model artificial intelligence governance framework, 2nd ed.; Personal Data Protection Commission. Available online: https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework (accessed on 13 March 2025).
- Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. [Google Scholar] [CrossRef]
- Stinson, C. (2022). Algorithms are not neutral: Bias in collaborative filtering. AI and Ethics, 2(4), 763–770. [Google Scholar] [CrossRef] [PubMed]
- Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. [Google Scholar] [CrossRef]
- Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. [Google Scholar] [CrossRef]
- Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, 103798. [Google Scholar] [CrossRef]
- Tashakkori, A., & Teddlie, C. (2010). Sage handbook of mixed methods in social & behavioral research (2nd ed.). SAGE Publications. [Google Scholar]
- Tedre, M., Denning, P., & Toivonen, T. (2021, November 18–21). CT 2.0. Proceedings of the 21st Koli Calling International Conference on Computing Education Research (pp. 1–8), Joensuu, Finland. [Google Scholar]
- Tondeur, J., van Braak, J., Ertmer, P. A., & Ottenbreit-Leftwich, A. (2017). Understanding the relationship between teachers’ pedagogical beliefs and technology use in education: A systematic review of qualitative evidence. Educational Technology Research and Development, 65(3), 555–575. [Google Scholar] [CrossRef]
- Tsai, M. J., Wang, C. Y., & Hsu, P. F. (2021). Developing the computer programming self-efficacy scale for computer literacy education. Journal of Educational Computing Research, 56(8), 1345–1360. [Google Scholar] [CrossRef]
- UNESCO. (2021). AI and education: Guidance for policy-makers. UNESCO Publishing. [Google Scholar]
- United Kingdom Government. (2023). A pro-innovation approach to AI regulation. Department for Science, Innovation and Technology. Available online: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach (accessed on 13 March 2025).
- University of California System. (2023). Guidelines for generative AI use in teaching and learning. Available online: https://rtl.berkeley.edu/resources/ai-teaching-learning-overview (accessed on 13 March 2025).
- Usher, M., & Barak, M. (2024). Unpacking the role of AI ethics online education for science and engineering students. International Journal of STEM Education, 11, 35. [Google Scholar] [CrossRef]
- Vakil, S. (2018). Ethics, identity, and political vision: Toward a justice-centered approach to equity in computer science education. Harvard Educational Review, 88(1), 26–52. [Google Scholar] [CrossRef]
- VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. [Google Scholar] [CrossRef]
- Veletsianos, G. (2022). Teaching with AI: A practical guide to transforming education. Johns Hopkins University Press. [Google Scholar]
- Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841–887. [Google Scholar] [CrossRef]
- Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25(1), 127–147. [Google Scholar] [CrossRef]
- Weintrop, D., & Wilensky, U. (2019). Transitioning from introductory block-based and text-based environments to professional programming languages in high school computer science classrooms. Computers & Education, 142, 103646. [Google Scholar] [CrossRef]
- Weller, M. (2020). 25 years of ed tech. Athabasca University Press. [Google Scholar]
- Williamson, B. (2019). Policy networks, performance metrics and platform markets: Charting the expanding data infrastructure of higher education. British Journal of Sociology of Education, 40(2), 185–200. [Google Scholar] [CrossRef]
- Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365. [Google Scholar] [CrossRef]
- Winfield, A. F., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A, 376(2133), 20180085. [Google Scholar] [CrossRef]
- Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. [Google Scholar] [CrossRef]
- Yadav, A., Hong, H., & Stephenson, C. (2016). Computational thinking for all: Pedagogical approaches to embedding 21st century problem solving in K-12 classrooms. TechTrends, 60(6), 565–568. [Google Scholar] [CrossRef]
- Zawacki-Richter, O., & Latchem, C. (2018). Exploring four decades of research in Computers & Education. Computers & Education, 122, 136–152. [Google Scholar] [CrossRef]
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. [Google Scholar] [CrossRef]
- Zeide, E. (2019). Artifical intelligence in higher education: Applications, promise and perils, and ethical questions. EDUCAUSE Review, 54(3), 31–47. [Google Scholar]

| Dimension | Human-in-the-Loop (HITL) | Socio-Technical Systems Theory | Constructionism | Human–AI Symbiotic Theory (HAIST) |
|---|---|---|---|---|
| Aim | Maintain human control and oversight of AI decision-making processes to prevent harmful autonomous actions | Understand mutual shaping between technology and social practices; analyze how technical and social elements co-constitute organizational systems | Enable learning through construction of meaningful artifacts; emphasize learner agency in knowledge building | Preserve and enhance human cognitive development within AI-mediated learning environments; ensure AI augments rather than replaces uniquely human capabilities |
| Locus of Control | Human retains final decision authority; AI provides recommendations that humans validate or reject | Distributed across socio-technical assemblage; neither purely technical nor purely social control | Learner holds primary control over construction processes; educator facilitates rather than directs | Shared between human and AI, with intentional design to preserve learner agency in problem definition, solution design, and ethical reasoning; AI handles computational tasks while human maintains authority over pedagogical and ethical dimensions |
| Design Primitives | Verification checkpoints; human approval gates; transparency mechanisms; fail-safe protocols | Boundary objects; affordances; inscriptions; work practices; organizational routines; socio-material configurations | Computational materials; debugging opportunities; shareable objects; low floors/high ceilings; ownership of learning products | Complementary cognitive architecture (task allocation); transformative agency enhancement (expanding learner autonomy); ethical knowledge co-construction (transparent processes with bias mitigation); scaffolded AI interaction protocols |
| Assessment Evidence | Accuracy of AI outputs after human review; reduction in AI errors; human satisfaction with oversight mechanisms | Successful coordination between social and technical elements; organizational effectiveness; user adaptation patterns; technology appropriation | Quality and meaningfulness of constructed artifacts; evidence of debugging and iteration; learner ownership; transfer to new contexts | Growth in learner CT competencies independent of AI; ability to critically evaluate AI-generated solutions; maintenance of learner agency; equitable outcomes across diverse learner populations; development of ethical AI collaboration skills |
| Equity Safeguards | Human review can catch discriminatory AI decisions, but depends on human reviewer’s own biases; may create bottlenecks limiting scalability | Attention to power dynamics and structural inequalities, but lacks specific mechanisms for algorithmic bias detection | Emphasis on culturally relevant materials and learner interests; recognition of diverse ways of knowing, but limited attention to algorithmic equity | Architectural embedding of bias detection; real-time monitoring across demographic groups; preservation of agency for marginalized learners; culturally responsive problem contexts; transparent AI limitations; inclusive development teams |
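The HAIST equity safeguards above include "real-time monitoring across demographic groups." As a minimal illustration of what that monitoring could look like in practice, the sketch below aggregates a per-student learning metric (here a hypothetical `ct_gain` score on an independent CT assessment) by demographic group and flags groups whose mean falls notably below the overall mean. The field names, threshold, and metric are illustrative assumptions, not part of the HAIST specification.

```python
from collections import defaultdict

def equity_gap_report(records, metric="ct_gain", gap_threshold=0.1):
    """Flag demographic groups whose mean learning metric falls more than
    `gap_threshold` below the overall mean. Each record is a dict with a
    'group' key and a numeric metric key (field names are illustrative)."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r["group"]].append(r[metric])

    total = sum(sum(vals) for vals in by_group.values())
    count = sum(len(vals) for vals in by_group.values())
    overall_mean = total / count

    report = {}
    for group, vals in by_group.items():
        group_mean = sum(vals) / len(vals)
        report[group] = {
            "mean": round(group_mean, 3),
            "flagged": overall_mean - group_mean > gap_threshold,
        }
    return report
```

A flagged group would then trigger human review of the AI tutor's behavior for that population, rather than any automated intervention, consistent with the shared-control stance HAIST describes.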
| Framework | Core Principles | Application to AI-Enhanced CT | Implementation Strategies | Assessment Indicators |
|---|---|---|---|---|
| TPACK | Integration of technology, pedagogical, and content knowledge | Guides educators in balancing AI capabilities with CT objectives and pedagogical methods | Faculty development addressing all three domains; collaborative design; iterative refinement | Appropriate AI tool selection; pedagogically sound integration; maintained CT objectives |
| UDL | Multiple means of representation, engagement, and action/expression | Ensures AI-enhanced CT environments accommodate varied learning preferences and abilities | Multiple content formats; choice in problem contexts; flexible assessment modes | Equitable access across learners; reduced achievement gaps; increased engagement |
| Culturally Responsive Teaching | Recognition and incorporation of students’ cultural backgrounds into instruction | Ensures AI systems reflect diverse cultural problem-solving approaches | Culturally relevant problem contexts; diverse representation; validation of multiple strategies | Student sense of belonging; engagement with CT concepts; cultural knowledge as asset |
| HAIST | Preservation of human cognitive development; complementary cognitive architecture | Provides specific guidance for task allocation and maintaining learner agency | Scaffolded AI introduction; documentation of reasoning; critical AI evaluation; bias checks | Independent CT competency growth; ability to work with/without AI; critical evaluation skills |
| Cognitive Load Theory | Management of intrinsic, extraneous, and germane cognitive load | Guides AI interface design to avoid overwhelming learners | Simplified AI interfaces; gradual complexity increases; AI handles routine tasks; worked examples | Appropriate cognitive challenge; successful CT development; learner confidence |
| Study | Context | Sample | CT Focus | AI/LLM Capability | Agency Mechanism | Equity Considerations |
|---|---|---|---|---|---|---|
| Holstein et al. (2018) | University programming classroom | n = 18 teachers, 200+ students | Debugging, problem-solving | AI classroom orchestration tool | Teachers more efficient at identifying struggling students | Some students felt surveilled; varied teacher adoption |
| Hsu (2025) | Undergraduate programming courses | Not reported | Decomposition, abstraction, algorithmic thinking, prompt engineering | LLMs (ChatGPT, GPT-4) | Constructionist prompting framework; students control problem definition | Framework preserves learner agency |
| Hassan et al. (2025) | University CS1 | n = 17 | Code comprehension, debugging, algorithmic reasoning | LLM chatbot with PythonTutor | Students chose tools; chatbot guided rather than solved | Focus on scalable support for novices |
| Flores Romero et al. (2025) | Doctoral AIEd course | n = 25 | Task decomposition, CT as cognitive trait | ChatGPT (GPT-4) for content creation | Students controlled exploration vs. exploitation | Examined CT trait effects on AI use |
| Bau et al. (2017) | University CS1 courses | n = 166 | Debugging, syntax understanding | AI-powered error explanation | 40% reduction in time to fix syntax errors | Language barriers affected error message comprehension |
| Price et al. (2016) | Multi-institutional programming | n = 1700+ | Algorithm design, debugging | Automated hint generation | Improved assignment completion rates | Hints less effective for students with weak foundational skills |
| Rivers and Koedinger (2017) | University data structures course | n = 203 | Algorithm design, optimization | AI tutoring system | Significant learning gains in algorithm efficiency | Benefits varied by prior programming experience |
| Lye and Koh (2014) | K-12 and higher education programming | Systematic review of 27 studies | Algorithm design, problem decomposition | Various programming environments and tools | Identified key pedagogical approaches for CT development | Need for inclusive teaching methods noted across diverse learners |
| Crow et al. (2018) | University software engineering | n = 156 | Code review, pattern recognition | AI code analysis tools | Enhanced code quality awareness | Tool complexity created barriers for some students |
| Douce et al. (2005) | European universities | Multiple institutions | Programming assessment | Automated assessment systems | Reliable basic assessment but missed nuanced solutions | Bias against non-conventional programming approaches |
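Several of the studies above (e.g., the Bau et al. row) involve AI-powered error explanation that translates raw compiler or interpreter output into learner-facing guidance. The sketch below is a deliberately simplified, non-LLM stand-in for that idea: it maps common Python exception types to guiding questions rather than fixes, mirroring the "guided rather than solved" agency mechanism in the table. The message wording and function name are hypothetical, not taken from any of the cited systems.

```python
# Illustrative sketch only: rule-based stand-in for AI error explanation.
# Hints deliberately guide the student instead of supplying the fix.
EXPLANATIONS = {
    SyntaxError: "Check for a missing colon, bracket, or quote near the reported line.",
    NameError: "A name is used before it is defined. Is it spelled the same way everywhere?",
    IndexError: "An index went past the end of a sequence. What is the largest valid index?",
}

def explain(code: str) -> str:
    """Compile and run a student snippet; return a guiding hint
    instead of a raw traceback when a known error type occurs."""
    try:
        exec(compile(code, "<student>", "exec"), {})
        return "No error raised."
    except tuple(EXPLANATIONS) as exc:
        return f"{type(exc).__name__}: {EXPLANATIONS[type(exc)]}"
```

A production system would of course cover far more error types and tailor wording to the student's context; the point here is only the design choice of hint-over-answer.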
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chick, J.C. AI-Enhanced Computational Thinking: A Comprehensive Review of Ethical Frameworks and Pedagogical Integration for Equitable Higher Education. Educ. Sci. 2025, 15, 1515. https://doi.org/10.3390/educsci15111515
