Paying the Cognitive Debt: An Experiential Learning Framework for Integrating AI in Social Work Education
Abstract
1. Introduction
2. The Specter of Cognitive Debt in the Age of AI
2.1. From Clinical Neurology to Digital Pedagogy: Redefining Cognitive Debt
2.2. Mechanisms of Accrual: Cognitive Offloading and Automation Bias
2.3. The Erosion of Critical Thinking and Professional Judgment
3. A Pedagogical Framework for Critical Engagement
3.1. Grounding AI Integration in Learning Theory
3.2. An Experiential Framework for Mitigating Cognitive Debt
3.3. Micro, Mezzo, and Macro Level Applications
4. The Institutional and Professional Context
4.1. Ethical Guardrails and Professional Standards
4.2. Institutional Responsibility: Crafting Policies for Academic Integrity and Disclosure
5. Discussion
5.1. Implications for Faculty Development
5.2. Implications for Assessment and Accreditation
5.3. The Future of Social Work in a Post-AI World
5.4. A Forward-Looking Research Agenda
5.5. Limitations of the Framework
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
AC | Abstract Conceptualization |
AE | Active Experimentation |
AI | Artificial Intelligence |
CE | Concrete Experience |
CK | Content Knowledge |
CSWE | Council on Social Work Education |
EPAS | Educational Policy and Accreditation Standards |
NASW | National Association of Social Workers |
PK | Pedagogical Knowledge |
RNT | Repetitive Negative Thinking |
RO | Reflective Observation |
SAMR | Substitution, Augmentation, Modification, and Redefinition |
TK | Technological Knowledge |
TPACK | Technological Pedagogical Content Knowledge |
References
- Ahn, H., Smith, L., & Jones, R. (2025). Artificial intelligence (AI) literacy for social work: Implications for core competencies. Journal of Social Work Education, 61(2), 214–229.
- Amini, M., Lee, K.-F., Yiqiu, W., & Ravindran, L. (2025). Proposing a framework for ethical use of AI in academic writing based on a conceptual review: Implications for quality education. Interactive Learning Environments, 1–25.
- Avgerinou, M. D., Karampelas, A., & Stefanou, V. (2023). Building the plane as we fly it: Experimenting with GenAI for scholarly writing. Irish Journal of Technology Enhanced Learning, 7(2), 61–74.
- Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information, 15(11), 676.
- Cong-Lem, N., Nguyen, T. T., & Nguyen, K. N. H. (2025). Critical thinking in the age of generative AI: Effects of a short-term experiential learning intervention on EFL learners. International Journal of TESOL Studies, 250522, 1–21.
- Council on Social Work Education. (2022). 2022 educational policy and accreditation standards. Available online: https://www.cswe.org/accreditation/policies-process/2022epas/ (accessed on 23 July 2025).
- Council on Social Work Education. (2024). Strategic plan 2026–2030. Available online: https://www.cswe.org/about-cswe/2026-strategic-plan/ (accessed on 23 July 2025).
- Digital Education Council. (2025a, January 28). What faculty want: Key results from the global AI faculty survey 2025. Available online: https://www.digitaleducationcouncil.com/post/what-faculty-want-key-results-from-the-global-ai-faculty-survey-2025 (accessed on 23 July 2025).
- Digital Education Council. (2025b, March 3). DEC AI literacy framework. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-ai-literacy-framework (accessed on 23 July 2025).
- Dolan, E. W. (2025, March 21). AI tools may weaken critical thinking skills by encouraging cognitive offloading, study suggests. PsyPost. Available online: https://www.psypost.org/ai-tools-may-weaken-critical-thinking-skills-by-encouraging-cognitive-offloading-study-suggests/ (accessed on 23 July 2025).
- Easter, T., Lee, A., & Bailey, D. (2024). Artificial intelligence in language education: A mixed-methods study of teacher perspectives and challenges. National Social Science Technology Journal, 12(1), 203–234.
- Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060.
- Flaherty, H. B., Henshaw, L. A., Lee, S. R., Herrera, C., Whitney, K., Auerbach, C., & Beckerman, N. L. (2025). Harnessing new technology and simulated role plays for enhanced engagement and academic success in online social work education. Studies in Clinical Social Work: Transforming Practice, Education and Research, 1–21.
- Freire, P. (1970). Pedagogy of the oppressed. Herder and Herder.
- Gkintoni, E., Antonopoulou, H., Sortwell, A., & Halkiopoulos, C. (2025). Challenging cognitive load theory: The role of educational neuroscience and artificial intelligence in redefining learning efficacy. Brain Sciences, 15(2), 203.
- Guichard, E., & Smith, M. (2024, December 11). Keeping it real—The value of human-centered skills. AACSB. Available online: https://www.aacsb.edu/insights/articles/2024/12/keeping-it-real-the-value-of-human-centered-skills (accessed on 23 July 2025).
- Hodgson, D., Goldingay, S., Boddy, J., Nipperess, S., & Watts, L. (2021). Problematising artificial intelligence in social work education: Challenges, issues and possibilities. The British Journal of Social Work, 52(4), 2119–2137.
- hooks, b. (1994). Teaching to transgress: Education as the practice of freedom. Taylor & Francis Group.
- Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 248.
- Kahn, L., Probasco, E. S., & Kinoshita, R. (2024, November). AI safety and automation bias: The downside of human-in-the-loop. Center for Security and Emerging Technology. Available online: https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/ (accessed on 23 July 2025).
- Kaushik, A., Yadav, S., Browne, A., Lillis, D., Williams, D., McDonnell, J., Grant, P., Connolly Kernan, S., Sharma, S., & Arora, M. (2025). Exploring the impact of generative artificial intelligence in education: A thematic analysis. arXiv.
- Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.
- Knowles, M. S. (1980). The modern practice of adult education: From pedagogy to andragogy (2nd ed.). Cambridge Book Company.
- Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv, arXiv:2501.10134.
- Marchant, N. L., & Howard, R. J. (2015). Cognitive debt and Alzheimer’s disease. Journal of Alzheimer’s Disease, 44, 755–770.
- Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
- Mollick, E., & Mollick, L. (2023, March 16). Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. SSRN. Available online: https://ssrn.com/abstract=4391243 (accessed on 23 July 2025).
- National Association of Social Workers. (2021). Code of ethics. National Association of Social Workers.
- National Association of Social Workers. (2025). NASW standards for clinical social work in social work practice. National Association of Social Workers.
- National Association of Social Workers, Association of Social Work Boards, Council on Social Work Education & Clinical Social Work Association. (2017). NASW, ASWB, CSWE, & CSWA standards for technology in social work practice. National Association of Social Workers.
- Ng, B. Y., Li, J., Tong, X., Ye, K., Yenne, G., Chandrasekaran, V., & Li, J. (2025). Analyzing security and privacy challenges in generative AI usage guidelines for higher education. arXiv.
- Nuwasiima, M., Ahonon, M. P., & Kadiri, C. (2024). The role of artificial intelligence (AI) and machine learning in social work practice. World Journal of Advanced Research and Reviews, 24(1), 080–097.
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishers.
- Partnership on AI. (2023, February 27). PAI’s responsible practices for synthetic media: A framework for collective action. Available online: https://syntheticmedia.partnershiponai.org/ (accessed on 23 July 2025).
- Parveen, N. K., & Kumar, N. R. (2024). Cognitive offloading: A review. The International Journal of Indian Psychology, 12(2), 4668–4681.
- Petricini, T., Wu, C., & Zipf, S. (2024). Perceptions about artificial intelligence and ChatGPT use by faculty and students. Transformative Dialogues: Teaching and Learning Journal, 17(2), 63–87.
- Puentedura, R. R. (n.d.). Building transformation: An introduction to the SAMR model [PowerPoint slides]. Hippasus. Available online: https://www.hippasus.com/rrpweblog/archives/2014/08/22/BuildingTransformation_AnIntroductionToSAMR.pdf (accessed on 23 July 2025).
- Reamer, F. G. (2023). Artificial intelligence in social work: Emerging ethical issues. International Journal of Social Work Values and Ethics, 20(2), 52–71.
- Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688.
- Rodriguez, M. Y., Goldkind, L., Victor, B. G., Hiltz, B., & Perron, B. E. (2024). Introducing generative artificial intelligence into the MSW curriculum: A proposal for the 2029 Educational Policy and Accreditation Standards. Journal of Social Work Education, 60(2), 174–182.
- Romeo, G., & Conti, D. (2025). Exploring automation bias in human-AI collaboration: A review and implications for explainable AI. AI & Society.
- Sharma, S., Singh, A., & Goel, D. (2025). A study on embracing digital media and AI in higher education: From chalkboards to chatbots. International Journal for Research Publication and Seminar, 16(2), 124–133.
- Singer, J. B., Creswell Báez, J., & Rios, J. A. (2023). AI creates the message: Integrating AI language learning models into social work education and practice. Journal of Social Work Education, 59(2), 294–302.
- Sisk, V. F., Burgoyne, A. P., Sun, J., Butler, J. L., & Macnamara, B. N. (2018). To what extent and under which circumstances are growth mind-sets important to academic achievement? Two meta-analyses. Psychological Science, 29(4), 549–571.
- Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
- UNESCO. (2022). Recommendation on the ethics of artificial intelligence. UNESCO.
- UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 23 July 2025).
- Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15.
CSWE Competency | Concrete Experience (CE) | Reflective Observation (RO) | Abstract Conceptualization (AC) | Active Experimentation (AE) |
---|---|---|---|---|
Demonstrate Ethical and Professional Behavior | Use an AI tool to generate a response to a complex ethical dilemma (e.g., a dual relationship scenario). | Analyze the AI’s response against the NASW Code of Ethics. Identify where the AI’s logic deviates from professional standards and discuss the limitations of algorithmic “ethics.” | Research and synthesize ethical frameworks for AI in human services. Develop a personal ethical decision-making model for AI-assisted practice. | Rewrite the AI’s response to the ethical dilemma, providing a new response that is fully aligned with social work values, citing specific ethical codes to justify changes. |
Engage Anti-Racism, Diversity, Equity, and Inclusion (ADEI) | Use an AI image generator with a prompt like “a social worker helping a family.” This is an act of creating “synthetic media” (Partnership on AI, 2023). | Critically analyze the generated images for racial, gender, class, and ability-based stereotypes. Discuss the inherent bias “towards U.S. culture and thinking” often found in AI (Easter et al., 2024). | Research how bias is encoded in AI training data (O’Neil, 2016). Formulate principles for creating culturally sensitive and anti-oppressive AI prompts. | Collaboratively write and test a series of new, highly specific prompts designed to generate a diverse range of images. Compare outputs and refine the prompts iteratively. |
Engage in Policy Practice | Use an AI tool to summarize a piece of proposed legislation relevant to social work. | Fact-check the AI’s summary against the original bill text. Identify any key omissions or misinterpretations, which are forms of extrinsic and intrinsic hallucination (Ji et al., 2023). | Research the political and economic interests behind the legislation. Conceptualize a policy brief that outlines the bill’s potential impact on vulnerable populations, using a social justice lens. | Draft a formal policy brief that uses the AI-generated summary as a starting point but corrects its flaws and adds a critical social work perspective and specific recommendations. |
Engage in Practice-Informed Research and Research-Informed Practice | Use an AI tool to generate a literature review on a specific clinical intervention. | Evaluate the AI-generated review for accuracy and bias, a process that can be “very disruptive for the general flow of writing” but is essential for critical AI literacy (Avgerinou et al., 2023). Check if cited sources are real and relevant. | Learn the principles of systematic literature reviews. Develop a search strategy and inclusion/exclusion criteria for a proper review of the topic. | Conduct a small-scale, authentic literature search using university library databases. Write a critical comparison between their rigorous search and the initial AI-generated review, highlighting the value of professional research skills. |
AI Use Case in Social Work Education | Data Privacy/Confidentiality Risk | Algorithmic Bias Risk | Academic Integrity Risk | Cognitive Debt Risk | Required Human Oversight |
---|---|---|---|---|---|
Brainstorming research topics | Low: Assumes no personal/client data is used in prompts. | Medium: AI may suggest topics based on biased data, over-representing mainstream issues. | Low: Considered a legitimate pre-writing strategy. Disclosure of use is best practice. | Low: Can be a useful starting point, but must be followed by independent research and refinement. | Medium: Instructor should guide students to critique and broaden the AI’s suggestions. |
Summarizing academic articles | Low: If using public articles. High: If uploading copyrighted or sensitive documents to a public tool. | Medium: AI may misinterpret nuance or fail to capture the author’s critical stance, producing a flattened summary. | High: Submitting an AI summary as one’s own work is plagiarism. Use requires explicit permission and attribution. | High: Offloads the core skill of reading for comprehension and synthesis, a key driver of cognitive debt (Risko & Gilbert, 2016). | High: Students must compare the summary to the original text to identify inaccuracies and omissions (hallucinations) (Ji et al., 2023). |
Drafting client case notes | Extreme: Using any real client information in a public AI tool is a severe ethical and legal violation (HIPAA, NASW Code) (Ng et al., 2025; Reamer, 2023). | High: AI may use stereotypical language or make biased assumptions based on demographic information included in a hypothetical prompt. | Medium: In a professional context, this could be fraud. In an educational one, it bypasses skill development. | High: Prevents students from learning the critical skill of translating complex human interactions into professional documentation. | Absolute: Professional social worker must write/rewrite all notes, assuming full responsibility. AI should not replace human judgment (National Association of Social Workers, 2025). |
Simulating a client interview | Low: If the simulation uses a hypothetical persona and no real data is shared. | High: The AI “client” will reflect the biases of its training data, potentially responding in stereotypical or unrealistic ways. | Low: The simulation is a means to an end; the assessment is on the student’s reflective analysis of the interaction. | Low: If followed by the full Kolb cycle (reflection, conceptualization, experimentation). High: If used as a simple Q&A without critical debriefing. | High: Instructor must facilitate a thorough debriefing, focusing on skill application and the AI’s limitations. |
Analyzing community data | Medium: Depends on the dataset. Public census data is low-risk; anonymized agency data is higher risk. | High: The choice of data and the interpretation of patterns can be subject to significant bias, leading to flawed conclusions about community needs (O’Neil, 2016). | Medium: Failure to critically evaluate the data sources and the AI’s analytical methods constitutes poor scholarship. | Medium: Can automate complex calculations but risks creating a false sense of objectivity if the user does not understand the underlying principles. | High: Instructor and student must critically vet data sources, question algorithmic assumptions, and supplement quantitative findings with qualitative understanding. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Watts, K. J. (2025). Paying the Cognitive Debt: An Experiential Learning Framework for Integrating AI in Social Work Education. Education Sciences, 15(10), 1304. https://doi.org/10.3390/educsci15101304