Article

Paying the Cognitive Debt: An Experiential Learning Framework for Integrating AI in Social Work Education

College of Social Work, University of Kentucky, Lexington, KY 40508, USA
Educ. Sci. 2025, 15(10), 1304; https://doi.org/10.3390/educsci15101304
Submission received: 24 July 2025 / Revised: 23 September 2025 / Accepted: 26 September 2025 / Published: 2 October 2025

Abstract

The rapid integration of Generative Artificial Intelligence in higher education challenges social work education, as student adoption outpaces pedagogical guidance. This paper argues that the unguided use of AI fosters cognitive debt: a cumulative deficit in critical thinking, ethical reasoning, and professional judgment that arises from offloading cognitive tasks. To counter this risk, a pedagogical model is proposed, synthesizing experiential learning, andragogy, and critical pedagogies. The framework reframes AI from a passive information tool into an active object of critical inquiry. Through structured assignments across micro, mezzo, and macro practice, the model guides students through cycles of concrete experience with AI, reflective observation of its biases, abstract conceptualization of ethical principles, and active experimentation with responsible professional use. Aligned with professional ethical standards, the model aims to prepare future social workers to scrutinize and shape AI as a tool for social justice. The paper concludes with implications for faculty development, institutional policy, accreditation, and a forward-looking research agenda.

1. Introduction

The landscape of higher education is being irrevocably reshaped by the proliferation of Generative Artificial Intelligence (AI). Recent surveys reveal a staggering rate of adoption among students; a 2024 global survey indicates that 86% are now using AI tools in their academic work (Sharma et al., 2025). This student-led revolution in learning practices has created a significant pedagogical paradox. While students have eagerly integrated AI into their academic workflows, faculty and institutional readiness have lagged alarmingly behind. A 2025 global survey found that while a majority of faculty have used AI in teaching, their use is overwhelmingly minimal (Digital Education Council, 2025a). This is compounded by a profound lack of institutional guidance, as 80% of faculty report a lack of clarity on how to apply AI in their teaching, and only 4% feel their institution’s AI guidelines are fully comprehensive (Digital Education Council, 2025a). A systematic analysis of university policies confirms they are often vague and place the primary burden of responsible use on individual students and faculty (Ng et al., 2025). This has left many institutions reactive, creating a pedagogical vacuum where the primary users of a transformative technology are developing habits of engagement largely devoid of expert guidance or structured learning objectives (Batista et al., 2024; Eke, 2023; Walter, 2024).
The existence of this vacuum is not merely an academic concern; it is a critical risk factor for the development of professional competence. The core issue is not the technology itself, but the asymmetrical adoption rate between students and educators. This disparity ensures that the default mode of AI use is driven by the path of least cognitive resistance—efficiency and task completion—rather than by pedagogical goals. In the absence of formal instruction on how to engage with AI critically, students naturally gravitate toward its most obvious affordance: the ability to reduce the time and mental effort required for academic tasks. This unguided, efficiency-focused behavior directly fosters cognitive offloading and automation bias, which are the primary mechanisms for accruing a new and insidious form of cognitive debt. The problem, therefore, is not simply that AI can be misused, but that the current educational environment actively encourages its misuse by failing to provide a structured alternative (Kirschner & van Merriënboer, 2013).
This risk is particularly acute in a profession like social work. The field is already confronting the multifaceted impacts of AI, from its use in high-stakes predictive risk modeling in child welfare (Nuwasiima et al., 2024) to its deployment in mental health chatbots (Reamer, 2023). Scholars within the field have issued an urgent call to action, arguing that it is an ethical imperative for educators to engage with this technology to ensure it aligns with the profession’s core values (Reamer, 2023; Singer et al., 2023). Professional bodies like the Council on Social Work Education (CSWE) recognize the imperative to prepare students for a digital working life, explicitly including AI in their strategic planning (Council on Social Work Education, 2024). However, this preparation must be carefully balanced with the profession’s foundational commitment to ethical, relational, and justice-oriented practice (Hodgson et al., 2021). The uncritical use of AI threatens to erode the very skills that form the bedrock of social work competence: empathy, nuanced professional judgment, and the capacity for deep critical thinking (Hodgson et al., 2021).
While the ethical challenges of AI are global, the specific professional standards and educational context for this analysis are primarily rooted in the United States. This paper introduces the concept of AI-induced cognitive debt as the central risk stemming from this unguided adoption. It posits that to mitigate this debt and harness AI’s potential for good, social work education must transcend simplistic technology integration models. It must adopt a robust pedagogical framework grounded in experiential and critical learning theories. Such a framework is designed to intentionally shift the use of AI from a tool for passive cognitive offloading to an object of active, critical, and reflective inquiry. By doing so, social work education can align technological advancement with the development of professional competence and ethical integrity, preparing graduates who are not only technologically literate but also critically conscious and profoundly human.
To build this argument and construct the proposed framework, this work employs a synthesis of foundational theories from education and social work, combined with a review of the emerging empirical and ethical literature on AI, to propose a novel pedagogical model. It is aligned with a tradition of scholarly work that aims to build new frameworks to address complex educational challenges (e.g., Mishra & Koehler, 2006). The goal is not to present new empirical data, but to provide a theory-driven and practical framework to guide future pedagogy, policy, and research in social work education. The subsequent sections will detail the theoretical underpinnings of this model, its practical application, and its alignment with the institutional and professional context of social work.

2. The Specter of Cognitive Debt in the Age of AI

The rapid integration of AI into the academic lives of students necessitates a new conceptual language to describe the potential cognitive consequences. The term cognitive debt, borrowed and adapted from clinical neurology, provides a powerful framework for understanding the risks associated with the uncritical use of these technologies.

2.1. From Clinical Neurology to Digital Pedagogy: Redefining Cognitive Debt

The concept of “Cognitive Debt” was originally formulated by researchers Natalie L. Marchant and Robert J. Howard in the context of Alzheimer’s disease research. They proposed it to “characterize thoughts and behaviors that increase vulnerability to symptomatic Alzheimer’s disease” (Marchant & Howard, 2015). Their model suggests that certain cognitive processes, far from building mental acuity, can actively deplete cognitive reserves. The primary mechanism they identified is Repetitive Negative Thinking (RNT), a perseverative and distressing thought process common to conditions like depression and anxiety (Marchant & Howard, 2015). This form of cognitive debt is not merely the absence of protective factors like high educational attainment (known as cognitive reserve), but an active process of resource depletion that increases vulnerability to neuropathology.
This article extends this powerful concept from the realm of internal, negative thought loops to the domain of externalized cognitive processes facilitated by AI. Recent neuroscientific research provides a compelling basis for this extension. A study using electroencephalography to monitor the brain activity of students writing essays found that participants who used Large Language Models for assistance displayed the most diminished connectivity and showed signs of cognitive under-engagement compared to those who used only their brains or a search engine (Kosmyna et al., 2025). This finding suggests a new pathway for accruing cognitive debt: not through the over-exertion of negative internal processes, but through the chronic under-engagement of essential cognitive functions due to their externalization onto an AI tool (Kosmyna et al., 2025). Such reframing is critical for understanding the pedagogical challenge. The clinical form of cognitive debt described by Marchant and Howard is a liability accrued through the presence of a harmful internal process (RNT) that depletes a finite resource. The educational form of cognitive debt, in contrast, is an opportunity cost accrued through the absence of a beneficial internal process—namely, the effortful thinking, deep processing, and schema-building that constitute genuine learning (Gkintoni et al., 2025; Sweller, 1988). In the language of Cognitive Load Theory, uncritical AI use minimizes the germane cognitive load—the productive mental effort required for deep learning—while often increasing the extraneous cognitive load—the unproductive effort spent navigating a tool’s interface or correcting its flawed output (Gkintoni et al., 2025). Every time a student offloads a critical thinking task to an AI without a structured pedagogical purpose, they miss a vital opportunity to invest in their own cognitive reserves. This failure to build cognitive assets, repeated over time, results in a significant and potentially debilitating cognitive debt.

2.2. Mechanisms of Accrual: Cognitive Offloading and Automation Bias

Two primary mechanisms drive the accumulation of this educational cognitive debt: cognitive offloading and automation bias. Cognitive Offloading is the act of using external tools or physical actions to reduce the mental processing requirements of a task (Parveen & Kumar, 2024; Risko & Gilbert, 2016). This is a natural human strategy for managing the inherent limitations of working memory; writing a shopping list or saving a phone number in a contact list are everyday examples of benign cognitive offloading (Risko & Gilbert, 2016). However, when applied to the complex tasks central to higher education, this practice becomes problematic. The over-reliance on AI to perform core academic functions—such as summarizing texts, brainstorming ideas, structuring arguments, or analyzing data—can prevent the deep, effortful cognitive engagement required to construct robust, durable mental models (Sweller, 1988). This can lead to a failure to internalize information and skills, leaving the student with a superficial, fragile understanding of the material (Kosmyna et al., 2025). The student may complete the assignment, but the learning that the assignment was designed to produce does not occur. Long-term reliance on offloading can lead to cognitive “skill decay”, which is the gradual deterioration of a developed ability (Risko & Gilbert, 2016).
Automation Bias is the tendency to over-rely on, or place undue trust in, the recommendations and outputs of automated systems, often leading individuals to disregard their own judgment or contradictory information (Kahn et al., 2024; Romeo & Conti, 2025). This cognitive shortcut, where automated output replaces information seeking and processing, is particularly pronounced under conditions of high cognitive load, task complexity, and time pressure—all of which are common in a student’s life (Romeo & Conti, 2025). Furthermore, research shows that automation bias is more prevalent among inexperienced users, a category that includes students who are novices in their field of study (Romeo & Conti, 2025). This bias manifests in two distinct error types: errors of commission, where a student accepts and incorporates incorrect AI-generated information into their work, and errors of omission, where a student fails to identify a critical issue or take necessary action because the AI did not prompt them to do so (Kahn et al., 2024).

2.3. The Erosion of Critical Thinking and Professional Judgment

The confluence of cognitive offloading and automation bias creates a situation where students become dependent on AI to perform cognitive tasks, thereby failing to develop and internalize the underlying skills themselves. A growing body of research points to a significant negative correlation between frequent AI tool usage and critical thinking abilities (e.g., Dolan, 2025). This is not a hypothetical risk; educators are already observing a tangible decline in students’ capacity to apply knowledge and their willingness to persevere through complex problems (Easter et al., 2024; Kaushik et al., 2025). While the term cognitive debt is a novel adaptation for social work education, the underlying concern that students are using AI to “avoid putting in the effort they need to learn” (Easter et al., 2024, p. 223) and are being “robbed…of the opportunity to actually learn” (p. 224) is a well-documented theme among educators observing these tools in practice. Indeed, one educator lamented that for many students, “The art of taking time to figure out a problem has been lost” (Kaushik et al., 2025, p. 6).
For a profession like social work, this erosion of critical thinking translates directly into an erosion of professional judgment. Professional judgment in social work is not a simple, mechanical application of rules but a sophisticated and dynamic interplay of theoretical knowledge, ethical principles, empathic understanding, and critical analysis of complex, often ambiguous, human situations. The uncritical use of AI fundamentally threatens this delicate process. If students become habituated to seeking and accepting instant, algorithmically generated answers, they risk failing to develop the capacity for independent, reasoned judgment. The danger is amplified by the known limitations of AI tools, which are prone to producing biased, inaccurate, or entirely fabricated information, often referred to as “hallucinations” (Ji et al., 2023). A social worker relying on such flawed outputs to inform a clinical assessment, a treatment plan, or a policy recommendation would not only be providing substandard service but would be committing a profound failure of professional judgment and a significant ethical breach (National Association of Social Workers, 2021; Reamer, 2023). The cognitive debt accrued in the classroom is thus cashed out as real-world risk in professional practice, with potentially devastating consequences for vulnerable clients and communities.

3. A Pedagogical Framework for Critical Engagement

To counteract the accumulation of cognitive debt, social work education requires a pedagogical antidote that is as sophisticated as the technology it seeks to address. A simple list of “dos and don’ts” is insufficient. What is needed is a flexible framework that transforms AI from a potential cognitive liability into a catalyst for deeper learning. Such a framework can be constructed by synthesizing foundational theories of experiential, adult, and critical learning.

3.1. Grounding AI Integration in Learning Theory

An effective pedagogy for the age of AI cannot be developed in a theoretical vacuum. It must be grounded in established principles of how people, and specifically adult learners in professional programs, learn best. By weaving together the insights of David Kolb, Malcolm Knowles, Paulo Freire, and bell hooks, we can construct a multi-layered approach that addresses the process of learning, the stance of the learner, and the critical ethos required to engage with a powerful and ethically complex technology.
David Kolb’s experiential learning theory provides the foundational process for our framework. Kolb famously defined learning as “the process whereby knowledge is created through the transformation of experience” (Kolb, 1984). This is not a linear event but a continuous, cyclical process. It begins with a Concrete Experience (CE), which involves a direct, tangible engagement with something new. Following this experience, the learner moves to Reflective Observation (RO), stepping back to review and reflect upon what happened. Through this reflection, the learner engages in Abstract Conceptualization (AC), forming new ideas, modifying existing concepts, and drawing conclusions. The cycle culminates in Active Experimentation (AE), where the learner applies these newly formed concepts to new situations, thereby testing their validity and beginning the cycle anew (Kolb, 1984). This four-stage cycle offers a powerful cognitive scaffold to combat the one-dimensional, input-output nature of simplistic AI use. Instead of a single step—“ask AI, get answer”—educators can design learning activities that intentionally guide students through all four stages of the cycle. The CE can be a structured interaction with an AI tool, but the pedagogical value lies in the subsequent, mandatory stages. The processes of RO, AC, and AE ensure that the student, not the AI, performs the essential cognitive labor of deconstruction, meaning-making, and knowledge construction. Recent empirical research has confirmed that even a short-term intervention explicitly based on Kolb’s cycle can produce statistically significant and large improvements in students’ critical thinking skills when using AI (Cong-Lem et al., 2025). This deliberate structure transforms the AI from a cognitive crutch into a specific object of inquiry, a catalyst for a deeper and more durable learning process.
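To illustrate how this four-stage scaffold can be encoded into assignment design, the following sketch represents an AI-focused activity as an ordered sequence of Kolb stages, each of which must yield its own assessable artifact. It is illustrative only; the stage prompts and artifact labels are hypothetical and are not drawn from any published instrument.

```python
from dataclasses import dataclass

@dataclass
class KolbStage:
    name: str      # CE, RO, AC, or AE
    prompt: str    # the task given to the student
    artifact: str  # the assessable product this stage must yield

def kolb_ai_assignment(topic: str) -> list[KolbStage]:
    """Scaffold an AI-focused activity so that every stage of Kolb's
    cycle demands cognitive work from the student, not from the AI."""
    return [
        KolbStage("Concrete Experience (CE)",
                  f"Use an AI tool to draft a response on: {topic}.",
                  "AI transcript"),
        KolbStage("Reflective Observation (RO)",
                  "Annotate the transcript: where is the output biased, generic, or wrong?",
                  "annotated critique"),
        KolbStage("Abstract Conceptualization (AC)",
                  "Derive principles for responsible AI use from your critique.",
                  "principles memo"),
        KolbStage("Active Experimentation (AE)",
                  "Revise the AI draft under your principles, justifying each change.",
                  "revised draft with justifications"),
    ]

for stage in kolb_ai_assignment("an intake summary for a new client"):
    print(f"{stage.name}: {stage.prompt} -> submit {stage.artifact}")
```

Because each stage must produce its own artifact, a student cannot complete the activity by submitting AI output alone; the design forces the reflective and conceptual labor back onto the learner.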
While Kolb’s cycle provides the process, Malcolm Knowles’ theory of andragogy defines the learner’s stance. Knowles (1980) argued that adult learners are fundamentally different from children and that their learning is most effective when it acknowledges their unique characteristics. His core principles state that adults are most motivated to learn when they understand why they are learning something and perceive its immediate relevance. Furthermore, effective adult learning respects that they are self-directed, allows them to draw upon their lived experiences as a rich resource, is oriented toward real-life tasks, and is problem-centered rather than subject-centered. Finally, Knowles (1980) posits that adults are driven primarily by intrinsic motivation, such as the desire for personal growth or career advancement. These principles are profoundly resonant for social work students, who are adult learners preparing for a demanding professional role. They are not interested in learning for its own sake; they need to understand how new knowledge and skills apply directly to the complex problems they will face in practice. Therefore, an effective AI pedagogy must respect their autonomy and leverage their existing knowledge base. A prohibitive stance of simply demanding that students avoid using AI is destined to fail because it is authoritarian and disconnected from their reality, clashing with the adult learner’s self-concept of self-directivity (Knowles, 1980). In contrast, a problem-centered approach that challenges students to use AI to solve a relevant social work problem, and to grapple with the ethical dilemmas that arise, is far more likely to be engaging, motivating, and educationally effective.
If Kolb provides the process and Knowles defines the learner, the critical pedagogies of Paulo Freire and bell hooks supply the essential critical ethos. Freire famously critiqued the traditional banking model of education, in which the teacher deposits knowledge into the minds of passive students, who are treated as empty receptacles (Freire, 1970). He argued this model is inherently oppressive, as it discourages critical thought and reinforces existing power structures. The uncritical use of AI represents the ultimate instantiation of the banking model: the AI acts as the omniscient, all-knowing teacher that deposits seemingly perfect information into the passive student, who is merely required to copy and paste. Freire advocated for a problem-posing model of education, where teachers and students engage in a critical, co-creative dialogue to understand and act upon the world (Freire, 1970). bell hooks extended this vision, calling for an engaged pedagogy that embraces the whole person—intellectual, emotional, and spiritual—and that “teaches to transgress” the boundaries of traditional education and systems of domination (hooks, 1994). For hooks, education must be a practice of freedom that actively challenges dominant narratives and empowers marginalized voices (hooks, 1994).
Applying this lens to AI integration is transformative. A critical pedagogy frames AI not as a neutral, objective tool, but as a powerful cultural artifact that is deeply embedded with the biases, values, and power structures of the society that created it (Easter et al., 2024; Hodgson et al., 2021; O’Neil, 2016). The training data for large language models reflects historical and ongoing injustices, and the algorithms themselves are designed and controlled by a small number of powerful corporations (Hodgson et al., 2021). The pedagogical goal, therefore, is not just to use AI, but to critique it. This involves making the AI itself the object of study, deconstructing its outputs, questioning its sources of authority, and empowering students to resist its potential to perpetuate stereotypes and social injustice (Amini et al., 2025).
The synthesis of these pedagogical theories creates a powerful, unified framework. Kolb, Knowles, Freire, and hooks are not competing alternatives but are nested, synergistic layers. Kolb provides the process (the four-stage cycle), Knowles defines the learner’s stance (self-directed and problem-focused), and Freire and hooks provide the critical ethos (challenging power and practicing freedom). A learning activity designed at the intersection of all three becomes far more than a simple exercise. The task is no longer “use AI to write a case summary”. Instead, within this synthesized framework, the task becomes: “You are a social worker with a client from a marginalized community. Use an AI tool to help you draft an initial case summary (CE framed as a relevant problem). Now, as a group, let us deconstruct the language, assumptions, and potential biases in the AI’s output. What stereotypes might be reinforced? What important cultural context is missing? (RO guided by a critical, transgressive lens). Based on our critique and your knowledge of social work ethics, develop a set of principles for ethically editing and augmenting AI-generated clinical notes to ensure they are anti-oppressive (AC as a self-directed process). Finally, rewrite the case summary according to your new principles, justifying every change you made from the original AI output (AE as a practice of freedom)”. This integrated approach transforms the activity from a demonstration of technical skill into a profound practice of critical, ethical, professional development, directly inoculating the student against cognitive debt.

3.2. An Experiential Framework for Mitigating Cognitive Debt

Operationalizing this synthesized pedagogical theory requires a practical framework that guides curriculum design and classroom practice. To ensure that the educational philosophies of Kolb, Knowles, Freire, and hooks are translated into effective, well-structured classroom activities, it is useful to employ established models for technology integration. These models provide a common language and a set of decision-making tools for educators, ensuring that the use of AI is always purposeful and pedagogically sound. Technology integration models like TPACK and SAMR make it possible to create a structured approach that ensures AI is used not just effectively but also thoughtfully and ethically, transforming it from an “answer machine” into an object of critical inquiry.
The Technological Pedagogical Content Knowledge (TPACK) framework provides the guiding philosophy for this process. Developed by Mishra and Koehler (2006), TPACK posits that effective technology integration is not about technology alone, but occurs at the dynamic intersection of three core domains of knowledge: Technological Knowledge (TK) (understanding the tools), Pedagogical Knowledge (PK) (understanding how people learn), and Content Knowledge (CK) (understanding the subject matter). TPACK’s central tenet is that pedagogical goals and content must drive technology choices; it explicitly guards against adopting tools for their own sake (Mishra & Koehler, 2006).
While TPACK provides the strategic “why”, the SAMR model, developed by Ruben Puentedura (n.d.), offers a tactical toolkit for the “how”. SAMR provides a four-level spectrum for classifying and evaluating the degree of technology integration in a learning task. These levels are often grouped into two categories, moving from lower-order to higher-order thinking. The first category, Enhancement, includes Substitution, where technology acts as a direct substitute with no functional change, and Augmentation, where technology substitutes a traditional tool but offers functional improvements. The second category, Transformation, represents a more significant pedagogical shift. It includes Modification, where technology allows for a significant redesign of the task, and Redefinition, where technology enables the creation of entirely new tasks that were previously inconceivable (Puentedura, n.d.). Empirical evidence associates stronger learning effect sizes with transformative-level tasks, reinforcing the move beyond simple substitution (Puentedura, n.d.). For this paper’s framework, TPACK acts as the strategic filter to select a SAMR level that best achieves the desired learning goal. This ensures that every use of AI is a deliberate pedagogical choice designed to maximize deep learning rather than superficial engagement.
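As a concrete illustration of how TPACK can act as a strategic filter over SAMR levels, the sketch below encodes the four levels and flags enhancement-level AI tasks for possible redesign while requiring that a stated learning goal drive approval. This is a hypothetical decision aid under the assumptions described above, not an established instrument from either framework.

```python
from enum import IntEnum

class SAMR(IntEnum):
    SUBSTITUTION = 1   # direct substitute, no functional change
    AUGMENTATION = 2   # substitute with functional improvement
    MODIFICATION = 3   # significant redesign of the task
    REDEFINITION = 4   # a task previously inconceivable

def is_transformative(level: SAMR) -> bool:
    """Modification and Redefinition form SAMR's 'Transformation' tier."""
    return level >= SAMR.MODIFICATION

def review_ai_task(learning_goal: str, level: SAMR) -> str:
    """A TPACK-style check: the pedagogical goal, not the tool, drives
    the decision; enhancement-level uses are flagged for redesign."""
    if not learning_goal.strip():
        return "reject: no pedagogical goal stated"
    if is_transformative(level):
        return "approve: transformative use"
    return "revise: consider redesigning beyond simple enhancement"

print(review_ai_task("deconstruct bias in AI-generated clinical notes",
                     SAMR.REDEFINITION))
```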
To mitigate cognitive debt, the curriculum must be fundamentally reoriented. The goal is to shift the perception and use of AI from an omniscient oracle that provides final answers to a flawed, biased, and powerful object of study that demands scrutiny. This pivot aligns with the growing consensus that AI literacy is a crucial competency for all students (Digital Education Council, 2025b; UNESCO, 2023; Walter, 2024). A social work curriculum should therefore scaffold AI literacy in a progressive and integrated manner. An effective way to structure this is by synthesizing two leading models: the competency framework from UNESCO and the developmental levels from the Digital Education Council (DEC).
The UNESCO framework defines AI literacy across three core competencies: (1) Knowing and Understanding AI, which involves foundational knowledge of the technology and its limits; (2) Judging and Evaluating AI, which is the critical ability to assess outputs for bias and ethical issues; and (3) Doing and Applying AI, which involves the practical and ethical use of AI tools. To teach these competencies progressively, this paper adopts the three levels of mastery proposed by the DEC: an Introductory level (Awareness), an Intermediate level (Application), and an Advanced level (Leadership/Optimization). At the introductory level, students build skills in Knowing and Understanding AI. As they move to the intermediate level, the focus shifts to Judging and Evaluating AI. Finally, at the advanced level, students are challenged with tasks centered on ethically Doing and Applying AI in professional contexts.
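The resulting scaffold can be summarized compactly, as in the sketch below, where each DEC mastery level foregrounds one UNESCO competency per the synthesis above; the example tasks are hypothetical illustrations, not items from either framework.

```python
# Pairing of DEC mastery levels with UNESCO AI-literacy competencies,
# following the synthesis proposed in the text; example tasks are invented.
AI_LITERACY_SCAFFOLD = [
    ("Introductory (Awareness)", "Knowing and Understanding AI",
     "explain what a large language model can and cannot do"),
    ("Intermediate (Application)", "Judging and Evaluating AI",
     "audit an AI-generated assessment for bias and fabrication"),
    ("Advanced (Leadership/Optimization)", "Doing and Applying AI",
     "design an ethical AI-assisted workflow for an agency"),
]

for dec_level, unesco_competency, example_task in AI_LITERACY_SCAFFOLD:
    print(f"{dec_level}: {unesco_competency} (e.g., {example_task})")
```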

3.3. Micro, Mezzo, and Macro Level Applications

The following section details specific, replicable example assignments that operationalize the experiential framework across micro, mezzo, and macro levels of social work practice. Each assignment is structured around Kolb’s four-stage learning cycle:
Micro Practice (Clinical Skills Development). In a micro-practice assignment focused on clinical skills, the learning cycle begins with a CE where students use an AI chatbot to conduct a simulated intake interview, an approach shown to increase student self-efficacy (Flaherty et al., 2025). Following this, students engage in RO by reviewing the interview transcript and writing a process recording to analyze their clinical skills and the AI’s realism, a crucial debriefing step for learning from simulations (Flaherty et al., 2025). The next stage, AC, requires students to research ethical guidelines for digital practice from the National Association of Social Workers (NASW) and formulate best-practice principles for using AI as a clinical training tool (National Association of Social Workers, 2025). The cycle concludes with AE, where students work in small groups to redesign the initial AI prompt, creating a more complex client persona that incorporates cultural background and social determinants of health, and then test the new simulation’s effectiveness.
Mezzo Practice (Social Service Agency Policy). For a mezzo-practice scenario centered on agency policy, the CE involves tasking students with using an AI tool to draft a new client data privacy policy. Subsequently, during RO, student groups critique this draft through an anti-oppressive lens, questioning its assumptions and potential to create barriers for marginalized populations (Hodgson et al., 2021). This critique leads to AC, where students research relevant federal regulations, state laws, and professional ethical guidelines to synthesize a rubric for an equitable data privacy policy (National Association of Social Workers, 2025; UNESCO, 2022). Finally, in the AE phase, each group uses their rubric to rewrite the policy, submitting their version with a memorandum that justifies their changes and explains its superior alignment with social work values.
Macro Practice (Community Needs Assessment). At the macro level, an assignment on community needs assessment can begin with a CE in which students use AI data analysis tools to examine public datasets and identify social problems, a common application for AI in human services (Nuwasiima et al., 2024). This is followed by a RO phase, where students must write a reflection paper that critically questions the data-driven narrative, considering which populations might be rendered invisible and how data categories could perpetuate stereotypes (O’Neil, 2016). For AC, students learn the principles of community-based participatory research and design a mixed-methods approach that ethically integrates AI-powered analysis with qualitative methods. The cycle concludes with AE, as students develop a comprehensive grant proposal detailing this mixed-methods design and outlining procedures to ensure community members are treated as research partners, not mere data points.
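To make the micro-practice AE step above concrete, the following sketch shows how students might redesign a generic intake-simulation prompt into a structured client persona that obliges the AI to reflect cultural background and social determinants of health. The field names, wording, and example values are hypothetical, offered only as one possible shape for such an exercise.

```python
# A minimal sketch (hypothetical fields and wording) of the AE step:
# turning a generic "role-play a client" prompt into a structured persona.
def build_persona_prompt(name, age, cultural_background,
                         presenting_concern, social_determinants):
    determinants = "; ".join(social_determinants)
    return (
        f"Role-play a client named {name}, age {age}, from a "
        f"{cultural_background} background, presenting with "
        f"{presenting_concern}. Relevant social determinants of health: "
        f"{determinants}. Stay in character, respond ambivalently at "
        "times, and never volunteer a diagnosis."
    )

prompt = build_persona_prompt(
    name="Rosa", age=67,
    cultural_background="first-generation immigrant",
    presenting_concern="housing instability and social isolation",
    social_determinants=["limited English proficiency",
                         "no reliable transportation",
                         "fixed income"],
)
print(prompt)
```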
To further operationalize this approach, Table 1 provides a structured guide for educators, mapping core social work competencies from the CSWE Educational Policy and Accreditation Standards (EPAS) to the four stages of Kolb’s learning cycle with specific, AI-focused activities.

4. The Institutional and Professional Context

The successful implementation of such a pedagogical framework depends on a supportive ecosystem of ethical guardrails, clear institutional policies, and a commitment to ongoing professional development. The integration of AI into social work education must be deliberately aligned with the profession’s core values, accreditation standards, and global ethical principles.

4.1. Ethical Guardrails and Professional Standards

The responsible use of AI in social work education is not merely a matter of technical proficiency; it is a fundamental ethical imperative (Reamer, 2023; Singer et al., 2023). The proposed experiential framework is designed to be in direct alignment with the standards set by the profession’s key governing and guiding bodies.
CSWE, through its EPAS, sets the benchmark for quality social work education in the United States. EPAS mandates that accredited programs prepare students to demonstrate a set of core competencies, including the ability to “demonstrate ethical and professional behavior” and “engage anti-racism, diversity, equity, and inclusion” (Council on Social Work Education, 2022). The EPAS explicitly states that ethical behavior includes understanding “the ethical use of technology in social work practice” (Council on Social Work Education, 2022). The experiential learning framework directly addresses these mandates by making ethics, the critique of bias, and the critical evaluation of information central to the learning process. This approach is consistent with the emerging scholarly consensus that AI literacy is not merely a technical skill but an essential component of every core competency, from ethical behavior to engaging diversity (Ahn et al., 2025). Indeed, some scholars have formally proposed the creation of a new, standalone EPAS competency focused explicitly on generative AI to ensure its systematic integration into all accredited curricula (Rodriguez et al., 2024). The written reflections and policy drafts produced through these assignments serve as tangible, assessable artifacts that demonstrate student mastery of these core competencies (Ahn et al., 2025).
The NASW Code of Ethics is the cornerstone of professional conduct, built upon core values including social justice, integrity, and competence (National Association of Social Workers, 2021). Uncritical AI use directly threatens these values. Relying on AI tools known to be biased violates the principle of social justice. Using inaccurate or “hallucinated” AI output in a clinical setting is a failure of competence. The proposed framework, by building skills in critical evaluation and ethical reasoning, helps students operationalize the NASW Code of Ethics in a digital context. It reframes AI literacy not as an optional technical skill but as an essential component of ethical practice, aligning with the joint NASW, ASWB, CSWE, & CSWA Standards for Technology and Social Work Practice (National Association of Social Workers et al., 2017). Furthermore, the CSWE’s 2026–2030 strategic plan explicitly identifies as its primary goal the need to “anticipate the implications and opportunities of artificial intelligence (AI) and other disruptive technologies” (Council on Social Work Education, 2024). The adoption of a formal, evidence-informed pedagogical framework like the one proposed here is a direct and proactive response to this strategic directive, positioning social work education as a leader, rather than a follower, in this critical area.
The challenge of ethical AI is global, and social work education should situate its response within international standards. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2022) provides the first global standard-setting instrument on this issue. It is grounded in core values of human rights and diversity, and it articulates key principles that should govern AI, including fairness, transparency, accountability, and extensive human oversight. The framework is also consistent with UNESCO’s more recent guidance on generative AI, which calls for a human-centered approach and warns against AI use that could lead to a “reduc[tion] of human cognitive capacities” (UNESCO, 2023). The experiential framework operationalizes these high-level principles in the classroom. For instance, the principle of Human Oversight is practiced and reinforced during the RO stage of every assignment, where students are required to critically evaluate and assume responsibility for AI-generated content. The principle of Accountability is taught by requiring students to verify AI outputs and to transparently disclose and justify their use of the technology. Finally, the principles of Fairness and Non-Discrimination are actively explored in activities that require students to deconstruct biased AI outputs and design more equitable alternatives. By explicitly linking classroom pedagogy to these global principles, social work programs can ensure they are preparing graduates to be responsible global citizens.

4.2. Institutional Responsibility: Crafting Policies for Academic Integrity and Disclosure

The current lack of formal guidance for students’ AI use is largely a product of institutional inaction. Many universities have struggled to create clear policies, leading to confusion and inconsistent enforcement (Petricini et al., 2024). Research shows that university AI guidelines are often vague and devolve accountability to individual users (Ng et al., 2025). Policies must move beyond simple prohibition and serve learning. Based on the current literature, this paper recommends that institutional AI policies should be pedagogically grounded, empowering instructors to define acceptable use based on specific learning objectives rather than imposing a universal ban (Walter, 2024). Policies must also mandate disclosure, requiring students to be transparent about their AI use and to justify it as a methodological choice (Amini et al., 2025; Partnership on AI, 2023). Furthermore, they must be centered on data privacy and ethics, educating the campus community about significant security risks (Ng et al., 2025). The ultimate goal is for policy to be focused on skill-building, promoting academic integrity by developing students’ AI literacy and critical thinking skills rather than focusing exclusively on punishment (Amini et al., 2025). To aid educators and administrators in this task, Table 2 presents a dimensional risk assessment framework. This framework supports course-level decisions by weighting risks differently across use cases rather than applying blanket bans. This tool is designed to facilitate a more nuanced, context-dependent evaluation of AI applications in social work education.
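As a hedged illustration of how such a dimensional framework might operate in practice, the sketch below weights the same instructor risk ratings differently by use case, so that a single blanket ban is never the output. The dimensions, weights, and rating scale here are hypothetical and are not drawn from Table 2.

```python
# Hypothetical risk dimensions, weights, and 1-5 rating scale for
# illustration only; an actual instrument would define its own anchors.
RISK_DIMENSIONS = ("data_privacy", "bias_amplification",
                   "skill_displacement", "accuracy_dependence")

USE_CASE_WEIGHTS = {
    # Brainstorming: low privacy exposure, real skill-displacement risk.
    "brainstorming":       {"data_privacy": 0.1, "bias_amplification": 0.2,
                            "skill_displacement": 0.5, "accuracy_dependence": 0.2},
    # Drafting clinical notes: privacy and accuracy weigh most heavily.
    "clinical_note_draft": {"data_privacy": 0.4, "bias_amplification": 0.2,
                            "skill_displacement": 0.1, "accuracy_dependence": 0.3},
}

def risk_score(use_case: str, ratings: dict[str, int]) -> float:
    """Combine instructor ratings (1 = low risk ... 5 = high risk) with
    use-case-specific weights; higher scores call for tighter guardrails."""
    weights = USE_CASE_WEIGHTS[use_case]
    return sum(weights[d] * ratings[d] for d in RISK_DIMENSIONS)

print(risk_score("clinical_note_draft",
                 {"data_privacy": 5, "bias_amplification": 3,
                  "skill_displacement": 2, "accuracy_dependence": 4}))  # 4.0
```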

5. Discussion

The successful integration of the proposed experiential framework carries significant implications for faculty development, program accreditation, the future of the social work profession, and the direction of educational research.

5.1. Implications for Faculty Development

A persistent barrier to responsible AI integration is the widespread lack of faculty training and institutional support (Batista et al., 2024; Digital Education Council, 2025a). The proposed framework offers a clear roadmap for faculty development. Training initiatives must move beyond generic workshops and be grounded in the core tenets of the framework: adult learning principles (Knowles), experiential pedagogy (Kolb), critical theory (Freire and hooks), and practical technology integration models (TPACK, SAMR). This ensures that faculty development is not just about learning to use a new tool, but about learning how to teach with it in a way that deepens critical thinking and upholds professional ethics (Mollick & Mollick, 2023).

5.2. Implications for Assessment and Accreditation

A key component of implementing this framework is the development of methods for assessing the very skills it aims to cultivate. To address the challenge of quantifying the mitigation of cognitive debt, a multi-faceted approach to assessment is required. First, the experiential assignments detailed in Table 1 can be evaluated using detailed, competency-based rubrics. These rubrics should be designed to measure specific dimensions of critical thinking, such as the ability to identify bias, the logical coherence of an argument, and the ethical application of professional standards. Second, programs can employ pre- and post-intervention measures to assess changes in student self-efficacy regarding the ethical use of technology. Validated instruments, such as the Social Work Self-Efficacy Scale (SWSE), have been used in experimental studies to demonstrate that structured, technology-based interventions can significantly increase student confidence and competence (Flaherty et al., 2025). Finally, the qualitative analysis of student-produced artifacts, such as the reflective observation papers and policy memorandums, can provide rich, nuanced evidence of their ethical reasoning and critical consciousness, offering a deeper insight into their learning process.
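As a minimal illustration of the pre-/post-intervention comparison described above, the sketch below computes paired gain scores and a paired-samples effect size (Cohen's d on the gains). The SWSE totals shown are fabricated for demonstration; a real evaluation would use the validated instrument and an appropriate inferential test.

```python
# Paired pre/post comparison on hypothetical SWSE totals (fabricated data).
from statistics import mean, stdev

pre  = [62, 70, 58, 75, 66, 61, 72, 68]   # same students, before
post = [66, 71, 65, 74, 73, 62, 75, 70]   # and after the intervention

gains = [b - a for a, b in zip(pre, post)]
d = mean(gains) / stdev(gains)            # Cohen's d for paired designs
print(f"mean gain = {mean(gains):.1f}, paired d = {d:.2f}")
```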
This approach has a symbiotic relationship with accreditation. A robust pedagogical framework serves as the linchpin connecting AI adoption, faculty training, and accreditation. By adopting a formal framework, a social work program creates a specific agenda for its faculty development efforts. In turn, the experiential assignments and reflective assessments that emerge from this pedagogy generate precisely the kind of tangible evidence of student learning that is required for CSWE accreditation. It provides a mechanism for programs to demonstrate innovation and to produce artifacts that map directly to EPAS core competencies, such as ethical behavior and engaging with diversity (Ahn et al., 2025; Rodriguez et al., 2024). This transforms the “AI problem” from a threat to academic integrity into a demonstrable opportunity for educational advancement and rigorous quality assurance.

5.3. The Future of Social Work in a Post-AI World

The long-term impact of AI on the social work profession will be profound. It is likely that AI will automate many routine administrative and data analysis tasks (Nuwasiima et al., 2024). This technological shift will not render social workers obsolete; rather, it will elevate the importance of the uniquely human skills that are difficult, if not impossible, to automate (Guichard & Smith, 2024). The future value of a social worker will reside ever more in their capacity for complex critical thinking, genuine empathy, sophisticated ethical reasoning, creativity, and the ability to build authentic therapeutic relationships (Hodgson et al., 2021). Social work education must pivot its focus to explicitly teach, cultivate, and assess these core human competencies, using AI as a tool to free up the cognitive space for this deeper work. Furthermore, the profession is uniquely positioned to play a leading role in shaping the ethical development and deployment of AI in the broader human services sector (Singer et al., 2023). Social workers’ expertise in systems thinking, social justice, and the dignity and worth of the person is desperately needed to identify and mitigate the algorithmic biases that can harm vulnerable populations (Reamer, 2023). The curriculum must therefore expand to prepare students for this critical advocacy role, equipping them to be not just consumers of technology, but its ethical conscience.

5.4. A Forward-Looking Research Agenda

The framework proposed in this paper is grounded in established theory, but as a novel pedagogical model, its efficacy has not yet been empirically validated. Therefore, it is essential that its implementation be accompanied by a rigorous research agenda to guide the field’s evolution. Key priorities should include the quasi-experimental evaluation of the framework, using pre-test/post-test control group designs to compare student outcomes in critical thinking, ethical reasoning, and self-efficacy (Cong-Lem et al., 2025). Such research should employ validated measures to avoid the methodological flaws that have weakened other areas of educational research (Sisk et al., 2018). Longitudinal studies are also required to follow graduates into the field to examine the long-term impact on professional practice. Concurrently, qualitative and mixed-methods research is needed to identify the most effective models for faculty development and the institutional factors that support responsible AI integration. Finally, the field should pursue a research agenda focused on tool development and bias auditing, creating and testing AI applications specifically designed for social work education and validating new methods for auditing third-party tools for alignment with social work values.

5.5. Limitations of the Framework

It is important to acknowledge the limitations of the proposed framework. First and foremost, as previously noted, it is a conceptual model that requires empirical validation. Its effectiveness is, at present, theoretical. Second, the framework’s successful implementation hinges on significant institutional investment in faculty development and support. The barriers to technology integration, including faculty resistance and resource constraints, are substantial and cannot be overlooked (Flaherty et al., 2025). Third, the framework is intentionally flexible and may require significant adaptation to suit the diverse contexts of different social work programs, from large research universities to smaller teaching-focused colleges. Finally, while the pedagogical and ethical principles discussed are broadly applicable, the specific professional standards and examples used are primarily rooted in the United States (e.g., CSWE, NASW). Future work should focus on adapting and testing the framework in international contexts with different accreditation and professional standards.

6. Conclusions

The unguided, asymmetrical integration of AI into social work education is not a neutral development; it poses a significant and immediate risk of fostering cognitive debt, thereby undermining the critical thinking, ethical reasoning, and professional judgment that are the hallmarks of the profession. To simply ban this technology is untenable, and to ignore it is irresponsible. The most effective and ethical path forward is to reclaim AI for humanistic purposes through a deliberately constructed pedagogical framework. This paper has argued for such a framework, one that synthesizes the process-oriented cycle of experiential learning, the learner-centered principles of andragogy, and the justice-oriented ethos of critical pedagogy. This approach does not reject technology but rather subordinates it to the goals of human development and professional competence. It transforms AI from an answer key into a complex object of inquiry, compelling students to engage in the effortful cognitive work of reflection, conceptualization, and ethical application. By designing assignments that require students to critique AI, deconstruct its biases, and take responsibility for its output, we can mitigate the risk of cognitive offloading and automation bias. This experiential framework provides a practical roadmap for social work educators, a structured agenda for faculty development, and a clear mechanism for meeting accreditation standards. It situates social work education within a global consensus on AI ethics and prepares students for a future where their most valuable skills will be their most human ones. The future-ready social worker must be both technologically literate and profoundly critical, capable of leveraging powerful tools without surrendering professional judgment. The journey to becoming that professional begins not with a software update, but with a pedagogical transformation in the classroom.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AC: Abstract Conceptualization
ADEI: Anti-Racism, Diversity, Equity, and Inclusion
AE: Active Experimentation
AI: Artificial Intelligence
CE: Concrete Experience
CK: Content Knowledge
CSWE: Council on Social Work Education
DEC: Digital Education Council
EPAS: Educational Policy and Accreditation Standards
NASW: National Association of Social Workers
PK: Pedagogical Knowledge
RNT: Repetitive Negative Thinking
RO: Reflective Observation
SAMR: Substitution, Augmentation, Modification, and Redefinition
SWSE: Social Work Self-Efficacy Scale
TK: Technological Knowledge
TPACK: Technological Pedagogical Content Knowledge

References

1. Ahn, H., Smith, L., & Jones, R. (2025). Artificial intelligence (AI) literacy for social work: Implications for core competencies. Journal of Social Work Education, 61(2), 214–229.
2. Amini, M., Lee, K.-F., Yiqiu, W., & Ravindran, L. (2025). Proposing a framework for ethical use of AI in academic writing based on a conceptual review: Implications for quality education. Interactive Learning Environments, 1–25.
3. Avgerinou, M. D., Karampelas, A., & Stefanou, V. (2023). Building the plane as we fly it: Experimenting with GenAI for scholarly writing. Irish Journal of Technology Enhanced Learning, 7(2), 61–74.
4. Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information, 15(11), 676.
5. Cong-Lem, N., Nguyen, T. T., & Nguyen, K. N. H. (2025). Critical thinking in the age of generative AI: Effects of a short-term experiential learning intervention on EFL learners. International Journal of TESOL Studies, 250522, 1–21.
6. Council on Social Work Education. (2022). 2022 educational policy and accreditation standards. Available online: https://www.cswe.org/accreditation/policies-process/2022epas/ (accessed on 23 July 2025).
7. Council on Social Work Education. (2024). Strategic plan 2026–2030. Available online: https://www.cswe.org/about-cswe/2026-strategic-plan/ (accessed on 23 July 2025).
8. Digital Education Council. (2025a, January 28). What faculty want: Key results from the global AI faculty survey 2025. Available online: https://www.digitaleducationcouncil.com/post/what-faculty-want-key-results-from-the-global-ai-faculty-survey-2025 (accessed on 23 July 2025).
9. Digital Education Council. (2025b, March 3). DEC AI literacy framework. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-ai-literacy-framework (accessed on 23 July 2025).
10. Dolan, E. W. (2025, March 21). AI tools may weaken critical thinking skills by encouraging cognitive offloading, study suggests. PsyPost. Available online: https://www.psypost.org/ai-tools-may-weaken-critical-thinking-skills-by-encouraging-cognitive-offloading-study-suggests/ (accessed on 23 July 2025).
11. Easter, T., Lee, A., & Bailey, D. (2024). Artificial intelligence in language education: A mixed-methods study of teacher perspectives and challenges. National Social Science Technology Journal, 12(1), 203–234.
12. Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060.
13. Flaherty, H. B., Henshaw, L. A., Lee, S. R., Herrera, C., Whitney, K., Auerbach, C., & Beckerman, N. L. (2025). Harnessing new technology and simulated role plays for enhanced engagement and academic success in online social work education. Studies in Clinical Social Work: Transforming Practice, Education and Research, 1–21.
14. Freire, P. (1970). Pedagogy of the oppressed. Herder and Herder.
15. Gkintoni, E., Antonopoulou, H., Sortwell, A., & Halkiopoulos, C. (2025). Challenging cognitive load theory: The role of educational neuroscience and artificial intelligence in redefining learning efficacy. Brain Sciences, 15(2), 203.
16. Guichard, E., & Smith, M. (2024, December 11). Keeping it real—The value of human-centered skills. AACSB. Available online: https://www.aacsb.edu/insights/articles/2024/12/keeping-it-real-the-value-of-human-centered-skills (accessed on 23 July 2025).
17. Hodgson, D., Goldingay, S., Boddy, J., Nipperess, S., & Watts, L. (2021). Problematising artificial intelligence in social work education: Challenges, issues and possibilities. The British Journal of Social Work, 52(4), 2119–2137.
18. hooks, b. (1994). Teaching to transgress: Education as the practice of freedom. Taylor & Francis Group.
19. Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 248.
20. Kahn, L., Probasco, E. S., & Kinoshita, R. (2024, November). AI safety and automation bias: The downside of human-in-the-loop. Center for Security and Emerging Technology. Available online: https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/ (accessed on 23 July 2025).
21. Kaushik, A., Yadav, S., Browne, A., Lillis, D., Williams, D., McDonnell, J., Grant, P., Connolly Kernan, S., Sharma, S., & Arora, M. (2025). Exploring the impact of generative artificial intelligence in education: A thematic analysis. arXiv.
22. Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.
23. Knowles, M. S. (1980). The modern practice of adult education: From pedagogy to andragogy (2nd ed.). Cambridge Book Company.
24. Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
25. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv, arXiv:2501.10134.
26. Marchant, N. L., & Howard, R. J. (2015). Cognitive debt and Alzheimer’s disease. Journal of Alzheimer’s Disease, 44, 755–770.
27. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
28. Mollick, E., & Mollick, L. (2023, March 16). Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. SSRN. Available online: https://ssrn.com/abstract=4391243 (accessed on 23 July 2025).
29. National Association of Social Workers. (2021). Code of ethics. National Association of Social Workers.
30. National Association of Social Workers. (2025). NASW standards for clinical social work in social work practice. National Association of Social Workers.
31. National Association of Social Workers, Association of Social Work Boards, Council on Social Work Education & Clinical Social Work Association. (2017). NASW, ASWB, CSWE, & CSWA standards for technology in social work practice. National Association of Social Workers.
32. Ng, B. Y., Li, J., Tong, X., Ye, K., Yenne, G., Chandrasekaran, V., & Li, J. (2025). Analyzing security and privacy challenges in generative AI usage guidelines for higher education. arXiv.
33. Nuwasiima, M., Ahonon, M. P., & Kadiri, C. (2024). The role of artificial intelligence (AI) and machine learning in social work practice. World Journal of Advanced Research and Reviews, 24(1), 080–097.
34. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishers.
35. Partnership on AI. (2023, February 27). PAI’s responsible practices for synthetic media: A framework for collective action. Available online: https://syntheticmedia.partnershiponai.org/ (accessed on 23 July 2025).
36. Parveen, N. K., & Kumar, N. R. (2024). Cognitive offloading: A review. The International Journal of Indian Psychology, 12(2), 4668–4681.
37. Petricini, T., Wu, C., & Zipf, S. (2024). Perceptions about artificial intelligence and ChatGPT use by faculty and students. Transformative Dialogues: Teaching and Learning Journal, 17(2), 63–87.
38. Puentedura, R. R. (n.d.). Building transformation: An introduction to the SAMR model [PowerPoint slides]. Hippasus. Available online: https://www.hippasus.com/rrpweblog/archives/2014/08/22/BuildingTransformation_AnIntroductionToSAMR.pdf (accessed on 23 July 2025).
39. Reamer, F. G. (2023). Artificial intelligence in social work: Emerging ethical issues. International Journal of Social Work Values and Ethics, 20(2), 52–71.
40. Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688.
41. Rodriguez, M. Y., Goldkind, L., Victor, B. G., Hiltz, B., & Perron, B. E. (2024). Introducing generative artificial intelligence into the MSW curriculum: A proposal for the 2029 Educational Policy and Accreditation Standards. Journal of Social Work Education, 60(2), 174–182.
42. Romeo, G., & Conti, D. (2025). Exploring automation bias in human-AI collaboration: A review and implications for explainable AI. AI & Society.
43. Sharma, S., Singh, A., & Goel, D. (2025). A study on embracing digital media and AI in higher education: From chalkboards to chatbots. International Journal for Research Publication and Seminar, 16(2), 124–133.
44. Singer, J. B., Creswell Báez, J., & Rios, J. A. (2023). AI creates the message: Integrating AI language learning models into social work education and practice. Journal of Social Work Education, 59(2), 294–302.
45. Sisk, V. F., Burgoyne, A. P., Sun, J., Butler, J. L., & Macnamara, B. N. (2018). To what extent and under which circumstances are growth mind-sets important to academic achievement? Two meta-analyses. Psychological Science, 29(4), 549–571.
46. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
47. UNESCO. (2022). Recommendation on the ethics of artificial intelligence. UNESCO.
48. UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 23 July 2025).
49. Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15.
Table 1. Experiential Learning Activities for AI Integration Across the Social Work Curriculum.
CSWE Competency: Demonstrate Ethical and Professional Behavior
Concrete Experience (CE): Use an AI tool to generate a response to a complex ethical dilemma (e.g., a dual relationship scenario).
Reflective Observation (RO): Analyze the AI’s response against the NASW Code of Ethics. Identify where the AI’s logic deviates from professional standards and discuss the limitations of algorithmic “ethics.”
Abstract Conceptualization (AC): Research and synthesize ethical frameworks for AI in human services. Develop a personal ethical decision-making model for AI-assisted practice.
Active Experimentation (AE): Rewrite the AI’s response to the ethical dilemma, providing a new response that is fully aligned with social work values and citing specific ethical codes to justify the changes.

CSWE Competency: Engage Anti-Racism, Diversity, Equity, and Inclusion (ADEI)
Concrete Experience (CE): Use an AI image generator with a prompt like “a social worker helping a family.” This is an act of creating “synthetic media” (Partnership on AI, 2023).
Reflective Observation (RO): Critically analyze the generated images for racial, gender, class, and ability-based stereotypes. Discuss the inherent “bias towards U.S. culture and thinking” often found in AI (Easter et al., 2024).
Abstract Conceptualization (AC): Research how bias is encoded in AI training data (O’Neil, 2016). Formulate principles for creating culturally sensitive and anti-oppressive AI prompts.
Active Experimentation (AE): Collaboratively write and test a series of new, highly specific prompts designed to generate a diverse range of images. Compare outputs and refine the prompts iteratively.

CSWE Competency: Engage in Policy Practice
Concrete Experience (CE): Use an AI tool to summarize a piece of proposed legislation relevant to social work.
Reflective Observation (RO): Fact-check the AI’s summary against the original bill text. Identify any key omissions or misinterpretations, which are forms of extrinsic and intrinsic hallucination (Ji et al., 2023).
Abstract Conceptualization (AC): Research the political and economic interests behind the legislation. Conceptualize a policy brief that outlines the bill’s potential impact on vulnerable populations, using a social justice lens.
Active Experimentation (AE): Draft a formal policy brief that uses the AI-generated summary as a starting point but corrects its flaws and adds a critical social work perspective and specific recommendations.

CSWE Competency: Engage in Practice-Informed Research and Research-Informed Practice
Concrete Experience (CE): Use an AI tool to generate a literature review on a specific clinical intervention.
Reflective Observation (RO): Evaluate the AI-generated review for accuracy and bias, a process that can be “very disruptive for the general flow of writing” but is essential for critical AI literacy (Avgerinou et al., 2023). Check whether cited sources are real and relevant.
Abstract Conceptualization (AC): Learn the principles of systematic literature reviews. Develop a search strategy and inclusion/exclusion criteria for a proper review of the topic.
Active Experimentation (AE): Conduct a small-scale, authentic literature search using university library databases. Write a critical comparison between this rigorous search and the initial AI-generated review, highlighting the value of professional research skills.
Table 2. A Dimensional Risk Assessment Framework for AI Tools in Social Work Education.
AI Use Case: Brainstorming research topics
Data Privacy/Confidentiality Risk: Low. Assumes no personal/client data is used in prompts.
Algorithmic Bias Risk: Medium. AI may suggest topics based on biased data, over-representing mainstream issues.
Academic Integrity Risk: Low. Considered a legitimate pre-writing strategy; disclosure of use is best practice.
Cognitive Debt Risk: Low. Can be a useful starting point, but must be followed by independent research and refinement.
Required Human Oversight: Medium. Instructor should guide students to critique and broaden the AI’s suggestions.

AI Use Case: Summarizing academic articles
Data Privacy/Confidentiality Risk: Low if using public articles; High if uploading copyrighted or sensitive documents to a public tool.
Algorithmic Bias Risk: Medium. AI may misinterpret nuance or fail to capture the author’s critical stance, producing a flattened summary.
Academic Integrity Risk: High. Submitting an AI summary as one’s own work is plagiarism; use requires explicit permission and attribution.
Cognitive Debt Risk: High. Offloads the core skill of reading for comprehension and synthesis, a key driver of cognitive debt (Risko & Gilbert, 2016).
Required Human Oversight: High. Students must compare the summary to the original text to identify inaccuracies and omissions (hallucinations) (Ji et al., 2023).

AI Use Case: Drafting client case notes
Data Privacy/Confidentiality Risk: Extreme. Using any real client information in a public AI tool is a severe ethical and legal violation (HIPAA, NASW Code) (Ng et al., 2025; Reamer, 2023).
Algorithmic Bias Risk: High. AI may use stereotypical language or make biased assumptions based on demographic information included in a hypothetical prompt.
Academic Integrity Risk: Medium. In a professional context, this could constitute fraud; in an educational one, it bypasses skill development.
Cognitive Debt Risk: High. Prevents students from learning the critical skill of translating complex human interactions into professional documentation.
Required Human Oversight: Absolute. A professional social worker must write or rewrite all notes, assuming full responsibility; AI should not replace human judgment (National Association of Social Workers, 2025).

AI Use Case: Simulating a client interview
Data Privacy/Confidentiality Risk: Low if the simulation uses a hypothetical persona and no real data is shared.
Algorithmic Bias Risk: High. The AI “client” will reflect the biases of its training data, potentially responding in stereotypical or unrealistic ways.
Academic Integrity Risk: Low. The simulation is a means to an end; the assessment is on the student’s reflective analysis of the interaction.
Cognitive Debt Risk: Low if followed by the full Kolb cycle (reflection, conceptualization, experimentation); High if used as a simple Q&A without critical debriefing.
Required Human Oversight: High. Instructor must facilitate a thorough debriefing, focusing on skill application and the AI’s limitations.

AI Use Case: Analyzing community data
Data Privacy/Confidentiality Risk: Medium. Depends on the dataset; public census data is low-risk, while anonymized agency data carries higher risk.
Algorithmic Bias Risk: High. The choice of data and the interpretation of patterns can be subject to significant bias, leading to flawed conclusions about community needs (O’Neil, 2016).
Academic Integrity Risk: Medium. Failure to critically evaluate the data sources and the AI’s analytical methods constitutes poor scholarship.
Cognitive Debt Risk: Medium. Can automate complex calculations but risks creating a false sense of objectivity if the user does not understand the underlying principles.
Required Human Oversight: High. Instructor and student must critically vet data sources, question algorithmic assumptions, and supplement quantitative findings with qualitative understanding.