Article

Bridging the Engagement–Regulation Gap: A Longitudinal Evaluation of AI-Enhanced Learning Attitudes in Social Work Education

1 Center for Teacher Education, Chaoyang University of Technology, Taichung 41349, Taiwan
2 Department of Aeronautical Engineering, Chaoyang University of Technology, Taichung 41349, Taiwan
* Author to whom correspondence should be addressed.
Information 2026, 17(1), 107; https://doi.org/10.3390/info17010107
Submission received: 16 December 2025 / Revised: 14 January 2026 / Accepted: 19 January 2026 / Published: 21 January 2026

Abstract

The rapid adoption of generative artificial intelligence (AI) in higher education has intensified a pedagogical dilemma: while AI tools can increase immediate classroom engagement, they do not necessarily foster the self-regulated learning (SRL) capacities required for ethical and reflective professional practice, particularly in human-service fields. In this two-time-point, pre-post cohort-level (repeated cross-sectional) evaluation, we examined a six-week AI-integrated curriculum incorporating explicit SRL scaffolding among social work undergraduates at a Taiwanese university (pre-test N = 37; post-test N = 35). Because the surveys were administered anonymously and individual responses could not be linked across time, pre-post comparisons were conducted at the cohort level using independent samples. The participating students completed the AI-Enhanced Learning Attitude Scale (AILAS), a 30-item instrument grounded in the Technology Acceptance Model, Attitude Theory and SRL frameworks that assesses six dimensions of AI-related learning attitudes. Prior pilot evidence suggested an engagement–regulation gap, characterized by relatively strong learning process engagement but weaker learning planning and learning habits. Accordingly, the curriculum incorporated weekly goal-setting activities, structured reflection tasks, peer accountability mechanisms, explicit instructor modeling of SRL strategies and simple progress-tracking tools. Psychometric analyses demonstrated excellent internal consistency for the total scale at the post-test stage (Cronbach’s α = 0.95). Independent-samples t-tests indicated that the post-test cohort reported higher mean scores across most dimensions, with the largest cohort-level differences in Learning Habits (Cohen’s d = 0.75, p = 0.003) and Learning Process (Cohen’s d = 0.79, p = 0.002). After Bonferroni adjustment, improvements in the Learning Desire, Learning Habits and Learning Process dimensions and in the Overall Attitude score remained statistically robust. In contrast, the Learning Planning dimension showed only marginal improvement (d = 0.46, p = 0.064), suggesting that higher-order planning skills may require longer or more sustained instructional support. No statistically significant gender differences were identified at the post-test stage. Taken together, the findings offer preliminary, design-consistent evidence that SRL-oriented pedagogical scaffolding, rather than AI technology itself, may help narrow the engagement–regulation gap, while the consolidation of autonomous planning capacities remains an ongoing instructional challenge.


1. Introduction

The rapid emergence of generative artificial intelligence (GenAI) has introduced a distinct set of challenges for social work education. On the one hand, tools such as ChatGPT hold clear potential for reducing the administrative load associated with case documentation and report writing. On the other hand, their reliance on probabilistic text generation is often accompanied by factual inaccuracies or latent algorithmic bias; this sits uneasily with the profession’s long-standing emphasis on human empathy, contextual judgment and strict ethical accountability [1,2].
As GenAI tools increasingly enter the classroom, concerns about student readiness have become more visible. Much of the current pedagogical discourse has focused on technical competence, that is, the ability to prompt systems effectively or to generate usable outputs, while devoting less attention to the cognitive and ethical skills required to critically evaluate those outputs. Early observations suggest that although students tend to adopt these tools quickly, demonstrating high levels of engagement, they often struggle to apply the metacognitive scrutiny needed to assess accuracy, bias and professional appropriateness [3,4]. This mismatch between enthusiastic use and limited regulatory control, referred to here as the engagement–regulation gap, increases the risk that AI may function as a cognitive shortcut, bypassing the critical inquiry that is essential to responsible social work practice [5,6].
In response to this concern, the present study examines an instructional intervention grounded in SRL theory. Rather than restricting access to AI tools, the curriculum was designed to integrate explicit planning and monitoring scaffolds that guide how students engage with GenAI. The central aim was to explore whether such structured support could shift learning behavior away from passive reliance on AI-generated content toward more deliberate, reflective and professionally grounded interaction. In doing so, the study seeks to clarify whether SRL-oriented scaffolding can support a form of AI-enhanced learning that remains both technologically adaptive and aligned with the ethical and disciplinary expectations of social work education.

2. Theoretical Background

2.1. The AI Imperative in Social Work Education

A growing body of research has begun to document a recurring tension between students’ enthusiasm for generative AI and their ability to regulate its use effectively. Although GenAI tools substantially lower the threshold for content creation, they also raise persistent concerns related to academic integrity and the responsible handling of AI-generated material [7,8,9]. These concerns echo broader debates on academic integrity and assessment reliability in the era of generative AI, particularly regarding unsupervised or high-stakes educational contexts [10]. Previous studies suggest that perceptions of usefulness, ease of use and trust play an important role in shaping students’ willingness to engage with these technologies [11]. However, widespread adoption does not necessarily imply that students are engaging in the kinds of reflective and self-directed learning processes required for professional development. Recent rapid reviews further indicate that the educational impact of generative AI remains mixed, with reported benefits for engagement and productivity accompanied by persistent concerns regarding assessment validity, learning depth, and pedagogical alignment in higher education settings [12]. Recent studies have also emphasized that the integration of generative AI in higher education presents both opportunities and structural challenges, requiring careful pedagogical design to balance innovation with ethical, instructional, and institutional considerations [13].
This disconnect between active use and regulatory depth, referred to here as the engagement–regulation gap, is particularly consequential in social work education [14]. Unlike purely technical disciplines, social work training is not aimed at producing efficiency alone, but at cultivating practitioners who can critically evaluate algorithmic outputs, recognize ethical risks and uphold the dignity and contextual needs of service users [15]. In this setting, unreflective reliance on AI-generated content may conflict directly with core professional values [16].
Empirical evidence from higher education further supports this concern. Research examining students’ acceptance of AI tools often reports high levels of perceived usefulness and strong intentions to adopt these technologies; simultaneously, it highlights substantial variation in how learners regulate their engagement over time [17]. Studies of AI-supported instructional activities show that such tools can increase task engagement and facilitate completion; however, gains in planning, monitoring and reflective control tend to emerge only when self-regulated learning strategies are explicitly embedded within course design [18]. Recent reviews further highlight that without explicit instructional guidance, students’ AI use may remain surface-level, underscoring the need for pedagogical designs that actively support reflective and self-regulated engagement [19,20]. This distinction is especially salient in professionally oriented programs, where learners must routinely exercise judgment, ethical reasoning and contextual interpretation.
In the absence of structured reflection and goal-setting activities, students may come to depend heavily on AI outputs, achieving efficient performance without corresponding growth in regulatory competence [21,22]. Assessment-focused studies further indicate that this pattern can promote cognitive offloading, particularly among novice learners, reinforcing surface-level engagement rather than the development of autonomous regulation [23]. Altogether, the literature points to a consistent pattern: while AI integration often succeeds in stimulating engagement and positive learning attitudes, the development of self-regulatory capacity remains uneven and highly dependent on pedagogical design.
This empirical backdrop directly informs the present study. Building on prior exploratory pilot work that identified early engagement patterns in an independent cohort, the current investigation extends this line of inquiry by examining whether a targeted instructional intervention, explicitly grounded in self-regulated learning principles, can address the engagement–regulation gap within AI-integrated social work curricula.

2.2. An Integrated Theoretical Framework

To examine the engagement regulation gap in a conceptually coherent manner, this study adopts an integrated theoretical framework that draws on three complementary perspectives: the Technology Acceptance Model (TAM) [17], Attitude Theory/Theory of Planned Behavior (TPB) [24] and SRL theory [25]. Each of these frameworks addresses a different but interrelated aspect of how students perceive, engage with and regulate their learning when interacting with AI technologies.
The TAM has been widely used to explain technology adoption by emphasizing perceived usefulness and perceived ease of use. Within educational contexts, it is particularly effective for capturing students’ initial acceptance of AI tools and their willingness to incorporate such technologies into learning activities. However, TAM primarily focuses on the decision to adopt a tool, offering limited insight into how learners manage and sustain their engagement over time.
Attitude Theory and the TPB extend this perspective by incorporating the cognitive, affective and behavioral components of attitude formation, as well as perceived behavioral control. Beyond initial technology acceptance, these frameworks emphasize the role of perceived behavioral control and intention in shaping sustained learning behaviors [26,27]. This extension allows for a more nuanced understanding of learners’ motivation, intention and perceived capacity to act, especially in situations where the use of technology involves judgment and self-direction rather than simple compliance.
SRL theory provides the crucial process-oriented lens that is missing from adoption-focused models. Rather than treating learning as a static outcome, SRL conceptualizes it as a cyclical process involving forethought (planning), performance (monitoring) and self-reflection. From this perspective, effective learning with AI depends both on students’ acceptance of the technology and on how they plan its use, monitor their strategies and reflect on their outcomes over time.
As summarized in Table 1, these theoretical strands were deliberately integrated to inform the design of the AILAS. This mapping highlights a key limitation in much of the existing literature: studies relying solely on TAM tend to capture early-stage acceptance and engagement (e.g., Dimensions A and F) while overlooking the regulatory mechanisms, such as learning methods, planning and habit formation (Dimensions C, D and E), that are essential for sustained and professionally meaningful learning. By combining TAM, TPB and SRL, the present framework provides a more comprehensive lens for examining how students engage with AI and whether such engagement is accompanied by the development of self-regulatory capacity.

2.3. Research Objectives and Hypotheses

Building on this integrated model, the present study employs a quasi-experimental design to pursue three specific objectives, as outlined here.
Objective 1: To examine the psychometric performance of the AILAS within the context of an AI-integrated curriculum, focusing on internal consistency and structural coherence at the cohort level rather than on individual change.
H1a. The AILAS will demonstrate strong internal consistency (α > 0.90) at the post-test stage.
H1b. Inter-dimensional correlations will remain stable, indicating a coherent attitudinal structure.
Objective 2: To examine whether an AI-integrated curriculum with explicit SRL-oriented scaffolding is associated with cohort-level shifts in students’ AI-related learning attitudes over a short instructional period, with particular attention to behavioral routines, planning-related attitudes and overall engagement, while acknowledging the exploratory, non-causal nature of the pre-post cohort-level design.
H2a. At the cohort level, students in the post-test cohort will report higher levels of Learning Habits (E) and Learning Planning (D) than students in the pre-test cohort following exposure to SRL-oriented instructional scaffolding. This hypothesis focuses on whether structured goal-setting activities and routine-building practices are associated with short-term shifts in behavioral engagement and planning-related learning attitudes.
H2b. This study also examines whether the Learning Process (F) dimension, which reflects students’ perceived engagement and active participation in AI-supported learning activities, remains stable or shows modest improvement from the pre-test to the post-test stage. This hypothesis is based on the expectation that SRL-oriented scaffolding does not detract from engagement and may instead help sustain or slightly enhance students’ involvement in the learning process.
H2c. Overall, at the post-test stage, the cohort is expected to demonstrate higher aggregate scores in AI-related learning attitudes compared with the pre-test stage. This hypothesis reflects the assumption that continued exposure to SRL-oriented scaffolding may be associated with cumulative shifts across motivational, behavioral, and process-related dimensions of learning, rather than changes confined to a single domain.
Objective 3: To explore potential cohort-level differences in AI-related learning attitudes across selected demographic characteristics (gender and academic level); these analyses are treated as descriptive and exploratory, given the modest sample size and unlinked responses.

3. Methods

3.1. Research Design

In this study, we adopted a quasi-experimental, pre-post cohort-level design to examine an AI-integrated curriculum that is designed to incorporate explicit SRL scaffolding. This design can be characterized as a two-wave repeated cross-sectional (cohort-level) pre–post evaluation. The six-week instructional intervention was conducted during the Fall 2024 semester, spanning from late September to early November. The baseline (pre-test) data were collected in the first week of the course (23 September 2024) and the follow-up (post-test) data were gathered in the final instructional week (3 November 2024). All participants received the same instructional design. However, because the pre-test and post-test questionnaires were administered anonymously, individual responses could not be linked across time. Therefore, pre-post comparisons were conducted at the cohort level, treating the pre-test (N = 37) and post-test (N = 35) samples as independent groups for inferential analyses; accordingly, the study is longitudinal at the course/cohort level (two time points) but not at the individual level.
The absence of a concurrent non-intervention control group constitutes a primary limitation of this research design. This design choice was driven by ethical and logistical considerations within the specific educational context: because only a single course section was available, dividing students into experimental and control groups would have denied some participants access to the SRL scaffolding that was considered to be pedagogically essential. In addition, because the surveys were administered anonymously, individual responses could not be linked across measurement occasions. As a result, the analyses compare independent cohorts at the pre-course and post-course stages, rather than identifying within-person changes. Accordingly, these findings are best read as cohort-level patterns that are consistent with the study design, rather than being read as evidence of the effects of the intervention.
This design supports a context-sensitive description of cohort-level trends over time; however, it does not permit within-person change analyses or strong causal inference about intervention effects. Accordingly, the findings are interpreted as design-consistent, cohort-level associations rather than as intervention effects.

3.2. Participants

The participants were primarily undergraduate students from the Department of Social Work at Chaoyang University of Technology, Taiwan. The pre-test sample comprised 37 students, drawn from a course offering distinct from that of our earlier pilot study. The pilot study and the intervention cohort were independent samples collected in different academic semesters, with no participant overlap. The post-test yielded 35 valid questionnaires (compared with 37 at pre-test). Because responses were anonymous and unlinked, this difference is described as a reduction in the number of returned questionnaires rather than as individual-level attrition. Because no identifying or linkable information was collected, the pre-test and post-test respondents cannot be confirmed as being the same individuals. Accordingly, all inferential analyses adopt an independent-cohort approach.

3.3. Intervention: AI-Integrated Curriculum with SRL Scaffolding

The 6-week intervention integrated generative AI tools into two core courses: Introduction to Social Work Practice and Human Behavior and the Social Environment. The participating students used three AI platforms (ChatGPT (https://chatgpt.com/), Claude (https://claude.ai/) and Google Gemini (https://gemini.google.com/), all accessed on 15 December 2025) for their case analyses, client simulations and reflective writing activities. AI was positioned not as an answer machine but as a partner for brainstorming, perspective-taking and rehearsal of professional communication [32]. This instructional framing emphasized critical engagement with AI outputs rather than efficiency or automation alone.
To specifically address the engagement regulation gap, we implemented five SRL scaffolding mechanisms:
(1) Weekly goal-setting templates (Weeks 1–6): Students set concrete learning goals for their AI use each week (e.g., use AI to generate alternative intervention options, then compare them with textbook guidelines), directly targeting Dimension D (Learning Planning).
(2) Reflection prompts (Weeks 2–6): Beginning in Week 2, students completed short, structured reflection sheets designed to encourage metacognitive review of their AI use. These prompts asked students to consider how they had used AI in their coursework, which approaches they found helpful, which practices felt overly passive and how they might adjust their strategies in subsequent weeks. This reflective activity was intended to support the development of learning methods and to reinforce more consistent Learning Habits (Dimensions C and E).
(3) Peer accountability partnerships (Weeks 3–6): From Week 3 onward, students were paired with a peer partner to share their weekly learning goals and discuss their progress. These brief peer check-ins provided opportunities for mutual feedback and informal accountability, helping students to sustain their planning and reflection routines over time rather than treating them as isolated tasks.
(4) SRL strategy modeling (Weeks 1–6): The instructors explicitly verbalized their own planning, monitoring and evaluation steps when demonstrating AI use, making otherwise-invisible SRL processes observable to the students.
(5) Progress-tracking dashboards (Weeks 5–6): Simple visual dashboards summarized completion rates for goal-setting activities and reflections at the class level, reinforcing collective responsibility and sustaining engagement. These dashboards provided feedback at the group level without evaluating individual performances.
Intervention fidelity was high: instructors implemented all planned SRL modeling activities and students completed 87% of their goal-setting templates and 91% of their reflection prompts. These completion rates were calculated from the instructors’ weekly records of submitted templates and reflection sheets.

3.4. Measures

3.4.1. AI-Enhanced Learning Attitude Scale

AILAS is a 30-item instrument assessing attitudes across six dimensions (Table 1): A (Self-Perception, 5 items), B (Learning Desire, 5 items), C (Learning Methods, 7 items), D (Learning Planning, 5 items), E (Learning Habits, 4 items) and F (Learning Process, 4 items). All items are rated on 5-point Likert scales (1 = strongly disagree to 5 = strongly agree). The following three items (5, 17, 18) were reverse-coded to minimize acquiescence bias:
Item 5: Enhanced AI coursework is difficult for me.
Item 17: I often memorize AI content without understanding it.
Item 18: AI learning has little relevance to my life.
Dimensional scores were calculated as the means of constituent items after reverse-coding. The overall score was the mean of all 30 items. Pilot testing (N = 37) established excellent reliability for the total scale (α = 0.93) and acceptable reliability for the subscales (α = 0.74–0.84). The instrument was administered in Traditional Chinese, following back-translation procedures [23] to support semantic equivalence with the original conceptual definitions. Given the exploratory, intervention-focused nature of the study, subscale reliabilities were evaluated with a consideration of the item count and construct breadth rather than strict cutoff thresholds. Consistent with exploratory scale development norms, some subscales exhibited lower reliability in the intervention sample, which is addressed explicitly in the Results and Limitations Sections.
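To make the scoring rule concrete, the sketch below illustrates reverse-coding and dimension-mean computation for 1–5 Likert items. It is an illustrative Python analogue rather than the authors' procedure; the column names (item_1 ... item_30) and the item-to-dimension mapping are assumptions, with only the reverse-coded items (5, 17 and 18) and the per-dimension item counts taken from the text.

```python
import pandas as pd

# Illustrative AILAS scoring sketch (assumed column names: item_1 ... item_30, each rated 1-5).
REVERSE_ITEMS = [5, 17, 18]  # reverse-coded items named in the text

def score_ailas(responses: pd.DataFrame, dimension_items: dict) -> pd.DataFrame:
    """Return dimension means and the overall mean after reverse-coding."""
    items = responses[[f"item_{i}" for i in range(1, 31)]].copy()
    for i in REVERSE_ITEMS:
        items[f"item_{i}"] = 6 - items[f"item_{i}"]  # maps 1<->5 and 2<->4 on a 5-point scale
    scores = pd.DataFrame(index=responses.index)
    for dim, item_numbers in dimension_items.items():
        scores[dim] = items[[f"item_{i}" for i in item_numbers]].mean(axis=1)
    scores["overall"] = items.mean(axis=1)  # mean of all 30 items
    return scores

# Hypothetical item-to-dimension assignment: only the item counts per dimension
# (A: 5, B: 5, C: 7, D: 5, E: 4, F: 4) follow the text; the numbering is illustrative.
DIMENSIONS = {
    "A_self_perception":   [1, 2, 3, 4, 5],
    "B_learning_desire":   [6, 7, 8, 9, 10],
    "C_learning_methods":  [11, 12, 13, 14, 15, 16, 17],
    "D_learning_planning": [18, 19, 20, 21, 22],
    "E_learning_habits":   [23, 24, 25, 26],
    "F_learning_process":  [27, 28, 29, 30],
}
# Example usage with a hypothetical response table: scores = score_ailas(survey_df, DIMENSIONS)
```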

3.4.2. Demographic Variables

We collected demographic information through a brief questionnaire covering four variables relevant to sample description and exploratory analyses (RQ1, RQ2). Gender was measured with four options: male, female, other and prefer not to say. For post-test between-gender comparisons, we compared male students (n = 13, 37.1%) and female students (n = 22, 62.9%), as no participants selected the other two categories. Demographic variables were collected at both time points using the same brief questionnaire section in the pre-test and post-test surveys.
Academic level was assessed by asking students whether they were in their first, second, third or fourth year. At post-test, most participants were first-year students (n = 32, 91.4%), with two second-year students (5.7%) and one third-year student (2.9%). A similar distribution was observed at pre-test (Year 1: n = 34; Year 2: n = 1; Year 3: n = 1; other: n = 1). Because of this imbalance, academic level was used descriptively rather than as a formal between-group factor. The participants’ ages ranged from 18 to 23 years (M = 19.2, SD = 1.1), reflecting typical undergraduate patterns. We did not include age as a predictor in the analyses because it showed limited variability and was highly correlated with academic level (r = 0.76).
Prior AI experience was assessed with the following question: Before this course, how many times have you used generative AI tools (e.g., ChatGPT, Claude, Google Gemini) for academic purposes? The response options were as follows: (1) none; (2) 1–5 times; (3) 6–15 times; (4) 16 or more times. Notably, 28 students (80.0%) reported no prior academic use of generative AI, 6 (17.1%) reported limited experience, 1 (2.9%) reported moderate experience and none reported extensive experience. Because most students started from a near-zero baseline, we treated prior AI use as contextual information rather than a key moderator. Accordingly, prior AI experience was not included as a covariate in inferential analyses. The demographic section was presented before the main scale items to support complete reporting and reduce missingness.

3.5. Data Collection and Analysis

3.5.1. Data Collection

To establish the suitability of the instrument for the present context, the scale was first piloted with a group of 37 undergraduate students. The results from this preliminary phase indicated satisfactory internal consistency, with Cronbach’s alpha values exceeding 0.70 for all subscales. Inter-dimensional correlations further suggested a stable underlying structure. Based on feedback from the pilot administration, minor wording adjustments were made to improve clarity and contextual relevance.
Following this validation step, the main study was conducted using an online questionnaire administered via Google Forms during regularly scheduled class sessions. To reduce potential social desirability effects, the course instructor left the classroom while the students completed the survey. The participants were informed that completion was voluntary, that responses were anonymous and that the results would not affect their course grades. The survey required approximately 15 min to complete.

3.5.2. Data Analysis

Statistical analyses were performed using SPSS Version 25.0 [33], with the significance level set at α = 0.05. Prior to analysis, the dataset was examined for entry errors and extreme values and reverse-coded items were processed accordingly. Assumptions of normality were evaluated for each dimension at both the pre-test and post-test stages using Shapiro–Wilk tests in combination with a visual inspection of the Q-Q plots. Given the modest sample size, the test statistics were interpreted alongside the graphical evidence. Overall, no substantial deviations from normality were detected.
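As a rough illustration of this assumption check (the authors used SPSS; this is only a sketch), the snippet below applies the Shapiro–Wilk test to each dimension within each cohort. The inputs `scores_pre` and `scores_post` are assumed DataFrames of dimension scores, such as those produced by the scoring sketch in Section 3.4.1.

```python
from scipy import stats

def normality_report(scores_pre, scores_post):
    """Shapiro-Wilk statistic and p-value per AILAS dimension and cohort.

    W close to 1 with p > 0.05 suggests no marked departure from normality;
    in practice this is read alongside Q-Q plots, as described above.
    """
    for label, scores in (("pre-test", scores_pre), ("post-test", scores_post)):
        for dim in scores.columns:
            w, p = stats.shapiro(scores[dim].dropna())
            print(f"{label:9s} {dim:22s} W = {w:.3f}, p = {p:.3f}")
```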
Reliability (H1a): Cronbach’s α was computed for the total scale and each subscale at the pre-test and post-test stages.
Structural consistency (H1b): Pearson correlations among the six dimensions were computed at each time point. Absolute differences between corresponding coefficients (pre vs. post) were used to assess stability, with |Δ| ≤ 0.10 considered indicative of high stability.
Intervention effectiveness (H2a–H2c): Because the pre-test and post-test questionnaires were administered anonymously and individual responses could not be linked across time, inferential pre-post comparisons were conducted at the cohort level. Independent-samples t-tests were used to compare the pre-test (N = 37) and post-test (N = 35) cohort means for each AILAS dimension and for the Overall Attitude score. Effect sizes are reported as Cohen’s d for independent groups, calculated as the mean difference between cohorts divided by the pooled standard deviation. Consistent with conventional benchmarks, values of 0.20, 0.50 and 0.80 were interpreted as small, medium and large effects, respectively.
Demographic moderators (RQ1, RQ2): We examined gender differences at each measurement occasion using independent-samples t-tests and additionally tested a time (pre vs. post) × gender interaction using a two-way between-subjects ANOVA (given the anonymous, unlinked responses). Academic-level comparisons were descriptive only, given the small number of upper-level students (n = 3). Because responses were unlinked, time was treated as a between-subjects factor rather than a within-subjects repeated measure. An illustrative sketch of these analysis steps is provided below.
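The following sketch illustrates, under simplified assumptions, the core computations named above: Cronbach’s α, the comparison of inter-dimensional correlations, the independent-samples t-test with Cohen’s d based on the pooled standard deviation, and the two-way time × gender ANOVA. It is a minimal Python analogue of the SPSS procedures rather than the authors' code; inputs such as `scores_pre`, `scores_post` and a combined data frame with `overall`, `time` and `gender` columns are assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are (reverse-coded) items."""
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_variance)

def correlation_stability(scores_pre: pd.DataFrame, scores_post: pd.DataFrame) -> pd.DataFrame:
    """Absolute pre/post differences in inter-dimensional Pearson correlations.
    Values <= 0.10 are read as indicating high structural stability (H1b)."""
    return (scores_post.corr() - scores_pre.corr()).abs()

def cohort_comparison(pre: pd.Series, post: pd.Series):
    """Independent-samples t-test and Cohen's d using the pooled standard deviation."""
    t, p = stats.ttest_ind(post, pre)  # post-test minus pre-test cohort
    n1, n2 = len(pre), len(post)
    pooled_sd = np.sqrt(((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (post.mean() - pre.mean()) / pooled_sd  # 0.20 / 0.50 / 0.80 = small / medium / large
    return t, p, d

def time_by_gender_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Two-way between-subjects ANOVA (time x gender) on the overall attitude score.
    Assumes columns: overall (numeric), time ('pre'/'post') and gender."""
    model = ols("overall ~ C(time) * C(gender)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)
```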

4. Results

4.1. Participant Characteristics

The participants were undergraduate students enrolled in social work courses with integrated AI applications at Chaoyang University of Technology. A convenience sampling strategy was employed by inviting the entire class cohort to participate. In the pre-test phase, 37 valid questionnaires were collected. The sample consisted primarily of first-year students from the Department of Social Work, including 15 males (40.5%) and 22 females (59.5%). In the post-test phase, 35 valid questionnaires were collected, including 13 males (37.1%) and 22 females (62.9%). The reduction from 37 to 35 questionnaires was primarily attributable to non-response (e.g., student absence on the survey day) rather than systematic dropout; however, because surveys were anonymous and responses could not be linked across time points, individual-level attrition or retention cannot be confirmed. As shown in Table 2, the demographic characteristics were similar across the two measurement occasions, supporting the comparability of the pre-test and post-test cohorts for cohort-level analyses.

4.2. Intervention Effectiveness: Pre–Post Changes

Due to the anonymity of the survey administration, independent-samples t-tests were employed to analyze differences between the pre-test (N = 37) and post-test (N = 35) cohorts. As individual-level matching was not possible, paired-samples tests were not conducted and the results are reported as pre-test versus post-test cohort differences. Because the surveys were administered anonymously, individual responses could not be linked across the two stages; this prevents within-person analyses and limits interpretation to cohort-level differences between the pre-test and post-test samples. Consequently, observed differences may reflect both curriculum exposure and sampling variations across measurement occasions (e.g., who was present on the survey day), as well as other threats to internal validity such as maturation or history effects. Table 3 summarizes the comparisons between the pre-test and post-test cohorts, with statistically significant gains observed in several dimensions and modest positive trends observed in others.
When the two cohorts were compared, students in the post-test group generally reported higher mean scores on dimensions related to engagement and everyday study practices. The clearest differences emerged in the Learning Habits (E) (t = −3.10, p = 0.003) and Learning Process (F) (t = −3.25, p = 0.002) dimensions, both of which reached conventional levels of statistical significance. In practical terms, these results suggest that students in the post-test cohort were more likely to describe regular learning behaviors, such as previewing and reviewing course materials, as well as more active forms of classroom participation, including note-taking practices and asking questions.
Although positive mean differences were also observed for the Learning Methods (C) and Learning Planning (D) dimensions, these changes fell just short of the conventional significance threshold (p = 0.055 and p = 0.064, respectively). The marginal nature of these findings points to a plausible developmental pattern: affective engagement and habitual behaviors may respond relatively quickly to instructional support, whereas the acquisition of more complex cognitive strategies and forward-looking planning skills may require a longer period of sustained practice. At the aggregate level, the Overall Attitude score increased significantly, indicating a broadly more positive learning orientation in the post-test cohort. Given the cohort-level design, these differences are interpreted as descriptive patterns rather than as evidence of causal intervention effects.
Because multiple comparisons were conducted across six dimensions in addition to the overall score, the robustness of the findings was examined more closely. After applying Bonferroni adjustment (α = 0.05/7 ≈ 0.007), the observed improvements in the Learning Desire (B), Learning Habits (E) and Learning Process (F) dimensions remained statistically robust (all p ≤ 0.005). In contrast, the results for the Learning Methods (C) and Learning Planning (D) dimensions should be interpreted with greater caution.
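The Bonferroni criterion used here simply divides the family-wise alpha by the number of comparisons (0.05/7 ≈ 0.007). As a sketch of an equivalent check, not the authors' procedure, the adjusted decisions could also be obtained with statsmodels by passing the seven observed p-values (six dimensions plus the overall score):

```python
from statsmodels.stats.multitest import multipletests

def bonferroni_decisions(p_values, family_alpha=0.05):
    """Return reject flags and Bonferroni-adjusted p-values for a family of tests.

    With seven comparisons, the per-test criterion is family_alpha / 7 (about 0.007),
    matching the adjustment reported above.
    """
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=family_alpha,
                                             method="bonferroni")
    return reject, p_adjusted
```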
From a psychometric perspective, most AILAS subscales demonstrated acceptable internal consistency (α > 0.70). However, the reliability estimates for the Learning Methods (C) and Learning Planning (D) dimensions were lower (α = 0.57–0.59). While coefficients of this magnitude are not uncommon in exploratory studies employing short subscales, they remain below the thresholds typically expected of established instruments. Lower internal consistency increases measurement error and reduces confidence in the stability of observed differences. Taken together with the marginal p-value for Dimension D, these considerations support treating the planning-related findings as tentative rather than definitive.
The overall pattern is also evident in the visual profiles shown in Figure 1. Across all six dimensions, the post-test means exceed the pre-test baselines, indicating a generally positive shift in learning attitudes following the instructional period. The most pronounced separations appear in the Learning Habits (E) and Learning Process (F) dimensions, closely mirroring the statistical results.
The effect size estimates (Figure 2) further clarify the practical significance of these changes. The Learning Planning (D) and Learning Methods (C) dimensions showed small-to-medium effects (d ≈ 0.45), whereas the Learning Process (F) (d = 0.79) and Learning Habits (E) (d = 0.75) dimensions approached the threshold for large effects. This distribution suggests that the most immediate impact of the AI-integrated curriculum was concentrated in domains related to engagement and routine learning behaviors. In contrast, more demanding planning-related competencies appear to develop more gradually, likely requiring extended instructional exposure over time.

4.3. Psychometric Robustness: Structural Stability and Item Analysis

To examine the stability of the instrument across the intervention period (H1b), we compared the inter-dimensional correlations at the pre-test and post-test stages. Overall, the pattern of correlations remained largely consistent over time. As shown in Table 4, most correlation differences fell within the predefined stability range (|Δ| ≤ 0.10), providing support for the structural coherence of the scale across measurement occasions.
One exception was observed in the association between the Learning Planning (D) and Learning Habits (E) dimensions. The correlation between these two dimensions increased from 0.68 at the pre-test stage to 0.80 at the post-test stage (|Δ| = 0.12). Although this change slightly exceeds the conservative stability threshold, it does not necessarily indicate structural instability. From a theoretical perspective, the strengthened association is consistent with the intent of the intervention. As students progressed through the course, planning-related attitudes appeared to become more closely aligned with habitual learning behaviors, suggesting a gradual integration of cognitive planning into everyday study routines.

Psychometric Properties: Item Analysis

Given the relatively low internal consistency observed for the Learning Planning dimension (Dimension D; α = 0.57–0.59), a post hoc item analysis was conducted to explore potential sources of measurement error (Table 5). This analysis indicated that Item D1 exhibited a weaker corrected item-total correlation than the other items in the subscale. From a purely statistical standpoint, removing this item would have increased the alpha coefficient to above 0.65.
However, the item was retained for the final analyses in order to preserve content coverage and ensure comparability between the pre-test and post-test stages. Excluding the item at this stage would have altered the conceptual scope of the dimensions and compromised their longitudinal consistency. Accordingly, findings related to the Learning Planning (D) dimension should be interpreted with appropriate caution, with the recognition that measurement precision for this dimension is more limited than that for other subscales.
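As a rough illustration of this post hoc diagnostic (a sketch, not the SPSS output), the snippet below computes corrected item-total correlations and alpha-if-item-deleted values for a set of subscale items; it reuses the illustrative `cronbach_alpha` helper defined in the Section 3.5.2 sketch.

```python
import pandas as pd

def item_diagnostics(subscale_items: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total correlation and alpha-if-deleted for each item in a subscale."""
    rows = []
    for col in subscale_items.columns:
        remaining = subscale_items.drop(columns=col)
        corrected_r = subscale_items[col].corr(remaining.sum(axis=1))  # item vs. rest of subscale
        rows.append({
            "item": col,
            "corrected_item_total_r": corrected_r,
            "alpha_if_deleted": cronbach_alpha(remaining),  # helper from the Section 3.5.2 sketch
        })
    return pd.DataFrame(rows)
```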

4.4. Demographic Variations

For RQ1 (gender differences), descriptive analyses were conducted due to uneven group sizes and the use of independent cohorts. Both male and female students showed positive shifts in post-test scores relative to pre-test baselines. When post-test scores were compared directly, independent-samples t-tests revealed no statistically significant difference in Overall Attitude scores between male (M = 3.65) and female students (M = 3.58; p > 0.05). This pattern suggests that the instructional approach was broadly inclusive, with comparable benefits observed across genders.
For RQ2 (academic-level differences), the sample was heavily concentrated in first-year students (over 90%), which precluded meaningful inferential comparisons across academic levels. Descriptive inspection indicated that first-year students largely accounted for the significant cohort-level effects reported in the main analyses. The small number of upper-level students (n ≈ 3) showed higher initial baseline scores but followed a similar positive trajectory over time. Future research employing larger and more evenly distributed samples across academic years would be necessary to determine whether students at different stages of their academic development respond differently to SRL-oriented AI integration.

5. Discussion

5.1. Summary of Key Findings

This study evaluated the effectiveness of a 6-week AI-integrated curriculum with explicit SRL scaffolding among social work undergraduates. Utilizing independent-samples t-tests to compare pre-test (N = 37) and post-test (N = 35) cohorts, the results revealed a distinct pattern of behavioral and motivational activation.
Four principal findings emerged. First, in contrast to the pilot study, where changes were limited, the current intervention was associated with statistically significant cohort-level improvements across the majority of AILAS dimensions, including the Overall Attitude score (p = 0.003). Second, the most robust gains were observed in the Learning Habits (E) and Learning Process (F) dimensions, followed closely by the Learning Desire (B) and Self-Perception (A) dimensions. This suggests that the curriculum motivated students to adopt consistent study routines (e.g., reviewing materials) and to engage actively in the classroom (e.g., asking questions), helping to allay concerns that AI tools might induce passivity. Third, while positive trends were recorded for the Learning Methods (C) and Learning Planning (D) dimensions, these dimensions achieved only marginal significance (p > 0.05). This suggests a developmental sequence in which motivational and behavioral engagement improves rapidly, whereas the consolidation of specific cognitive strategies and long-term planning skills requires a more extended maturation period. Finally, no significant gender differences were observed (RQ1), indicating that the intervention appeared to benefit male and female students comparably.

5.2. Theoretical Implications: From Motivation to Behavioral Habituation

The findings diverge from the initial expectation that the Learning Planning dimension would be the first dimension to shift. Instead, the data point to a behavioral activation model. The highly significant improvements in the Learning Desire (B), Learning Habits (E) and Learning Process (F) dimensions suggest that the AI-integrated curriculum acted as a catalyst for immediate engagement. Theoretically, this aligns with the phases of SRL. According to Zimmerman’s cyclical model, forethought (planning) often precedes performance (habits/process). However, in the context of adopting novel technologies such as generative AI, the entry point appears to be motivational interest (desire), which drives immediate behavioral experimentation (habits/process). The fact that the Learning Planning (D) dimension showed slower growth (marginal significance) implies that while students are engaging more (habits) and enjoying it more (desire), the higher-order cognitive skill of structuring that learning takes longer to develop. This interpretation offers a nuanced counter-narrative to the AI-dependency argument. Rather than replacing student effort, the scaffolding provided in this course appeared to amplify student agency, coinciding with more active notetaking and questioning (Dimension F) rather than passive consumption of AI outputs.

5.3. Practical Implications for Social Work Educators

For social work educators, these results offer both encouragement and a strategic roadmap. First, the significant increase in learning habits and process engagement suggests that integrating AI does not necessarily reduce student effort. Educators can leverage this by designing tasks that require iterative interaction with AI, such as the AI Ethical Pause protocol used in this study, to foster active classroom participation. Second, the marginal significance observed for the Learning Planning (D) dimension may be interpreted as a tentative instructional consideration, suggesting that a 6-week intervention could be sufficient to initiate interest and basic routines but may be insufficient to fully consolidate more complex planning strategies. Future curricula might need to be longer or include more explicit planning workshops in which students specifically practice drafting long-term study schedules involving AI, rather than just using it for immediate tasks. Third, the absence of gender differences suggests that SRL-oriented AI integration is an inclusive pedagogical strategy; the structure of the course appears to support diverse learners without the need for extensive gender-specific tailoring. Finally, in the specific context of social work, where practitioners must maintain epistemic authority over algorithmic suggestions [34], the increase in Self-Perception (Dimension A) is vital. This increase implies that students are graduating from the course with a stronger belief in their ability to understand and control the technology, which is a prerequisite for ethical practice in digital human services [6,35].

5.4. Limitations and Future Directions

In light of the methodological constraints described above, the present findings should be interpreted with caution. Most importantly, the use of a single-group, pre-post cohort-level design, while ethically necessary to ensure that all enrolled students received equivalent pedagogical support, inevitably limits the internal validity of the findings. In the absence of a non-intervention control group, alternative explanations for the observed improvements (e.g., maturation effects, contextual influences during the semester, or cohort-specific dynamics) cannot be fully excluded. Future work would therefore benefit from quasi-experimental designs that incorporate non-equivalent comparison groups or multi-site implementations to more directly examine the causal contribution of SRL-oriented scaffolding.
The modest scale of the study represents a further limitation. Because enrollment was determined by the natural size of the course (N ≈ 35), statistical power was constrained, increasing the risk of a Type II error. This limitation may help explain why the Learning Planning and Learning Methods dimensions exhibited only marginal statistical significance despite small-to-moderate effect sizes. Larger samples will be necessary to more reliably detect and substantiate these subtler cognitive-regulatory changes.
Another important boundary condition concerns participants’ limited baseline familiarity with generative AI. Many students reported little to no prior academic use of GenAI tools before the intervention. As a result, the observed gains may reflect an initial onboarding or novelty effect associated with early exposure to AI technologies rather than stable patterns of sustained self-regulated learning among more experienced users. This characteristic constrains the generalizability of the findings to contexts in which AI use is already pervasive, such as STEM disciplines or highly digitized educational systems in developed countries, where learners may encounter AI technologies from the early stages of their formal education. Accordingly, the present curriculum is best understood as a framework for introducing GenAI to novice populations in non-technical professional programs, rather than as a model that is readily transferable to digitally saturated learning environments.
Reliance on self-report measures introduces a further limitation. Although the survey instruments provide valuable insights into students’ perceived attitudes and readiness, they remain susceptible to social desirability bias and do not directly capture behavioral competence. Future studies would benefit from triangulating self-report data with objective indicators, such as learning management system activity logs or analyses of student AI interaction artifacts.
Finally, the relatively short duration of the intervention warrants consideration. Because the curriculum spanned only six weeks, the long-term persistence of the observed behavioral changes remains unknown. Longitudinal cohort-level investigations extending beyond a single semester are needed to determine whether the initial gains in engagement and habit formation endure and whether higher-order planning competencies continue to develop as students gain sustained experience with generative AI tools.

5.5. Conclusion of Discussion

Taken together, the findings offer design-consistent support for the view that explicit SRL-oriented scaffolding can play a constructive role in addressing the engagement–regulation gap within AI-enhanced learning environments. Rather than diminishing student agency, the introduction of structured planning and reflective support was associated with improvements in learning habits and self-perception, while levels of engagement were maintained. This pattern stands in contrast to concerns that generative AI tools inevitably undermine self-regulation [36] and instead aligns with the argument that pedagogical design, rather than the technology itself, determines whether AI use promotes autonomy or dependency [11].
In the context of social work education, where professionals are expected to critically evaluate algorithmic outputs and exercise ethical judgment, these findings are particularly salient. They suggest that responsible AI integration is less a matter of restricting technology than of embedding it within instructional designs that foreground reflection, goal setting and professional values. Viewed in this way, the present study offers not a prescriptive solution but a practical framework for aligning AI-enhanced instruction with the humanistic foundations of social work practice.

6. Conclusions

This study provides preliminary, design-consistent evidence regarding the integration of SRL scaffolding within AI-enhanced social work education. The findings suggest that even a brief, six-week instructional period may be associated with meaningful cohort-level shifts in students’ learning behaviors, particularly in relation to engagement and study habits.
At the same time, the results must be interpreted within the clear boundaries imposed by the study’s exploratory design. Because the surveys were administered anonymously and individual responses could not be linked across time, the analyses compare independent pre-course and post-course cohorts rather than within-person change. In addition, the modest sample size and the absence of a non-intervention control group preclude the possibility of causal inference. Accordingly, the observed improvements are best understood as preliminary cohort-level patterns rather than definitive intervention effects.
Within these constraints, the findings support the broader proposition that pedagogical design, rather than AI technology itself, plays a central role in determining whether AI integration supports or undermines self-regulated learning. For social work education, where the critical evaluation of algorithmic outputs is professionally essential, this study offers a practical and theoretically grounded framework for responsible AI integration. Future research employing controlled, multi-site and longitudinal designs will be necessary to determine whether the observed patterns translate into sustained self-regulatory competence and professional judgment in digitally mediated practice contexts.

Author Contributions

Conceptualization, D.-H.H. and Y.-C.W.; methodology, D.-H.H. and Y.-C.W.; data collection, D.-H.H.; formal analysis, D.-H.H. and Y.-C.W.; writing—original draft preparation, Y.-C.W.; writing—review and editing, D.-H.H. and Y.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study involved anonymous questionnaire data collected as part of routine educational activities and posed minimal risk to participants. No personally identifiable or sensitive information was collected. In accordance with institutional guidelines for minimal-risk, anonymous educational research, formal IRB review was not required.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study. Prior to data collection, participants were informed of the purpose of the study, the voluntary nature of participation and their right to withdraw at any time without academic penalty. All data were collected anonymously, and no personal, identifiable information was recorded.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

During the preparation of this manuscript, the authors used AI-assisted tools solely for language refinement and proofreading purposes, including improving readability and grammatical accuracy. In addition, the manuscript underwent professional academic English editing to ensure clarity and stylistic consistency. All substantive aspects of the work including study design, data analysis, interpretation of results and final wording decisions were carried out entirely by the authors, who take full responsibility for the content of the publication.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  2. Xia, Q.; Weng, X.; Ouyang, F.; Lin, T.J.; Chiu, T.K. A scoping review on how generative artificial intelligence transforms assessment in higher education. Int. J. Educ. Technol. High. Educ. 2024, 21, 40. [Google Scholar] [CrossRef]
  3. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  4. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  5. Mhlanga, D. Open AI in Education, The Responsible and Ethical Use of ChatGPT Towards Lifelong Learning, in FinTech and Artificial Intelligence for Sustainable Development: The Role of Smart Technologies in Achieving Development Goals; Springer: Cham, Switzerland, 2023; pp. 387–409. [Google Scholar]
  6. Schwab, K. The Fourth Industrial Revolution; Crown Business: New York, NY, USA, 2017; p. 192. [Google Scholar]
  7. Garkisch, M.; Goldkind, L. Considering a Unified Model of Artificial Intelligence Enhanced Social Work: A Systematic Review. J. Hum. Rights Soc. Work. 2025, 10, 23–42. [Google Scholar] [CrossRef]
  8. Reamer, F.G. Artificial intelligence in social work: Emerging ethical issues. Int. J. Soc. Work. Values Ethics 2023, 20, 52–71. [Google Scholar] [CrossRef]
  9. Holmes, W.; Miao, F. Guidance for Generative AI in Education and Research; Unesco Publishing: Paris, France, 2023. [Google Scholar]
  10. Susnjak, T.; McIntosh, T.R. ChatGPT: The end of online exam integrity? Educ. Sci. 2024, 14, 656. [Google Scholar] [CrossRef]
  11. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 342–363. [Google Scholar] [CrossRef]
  12. Lo, C.K. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  13. Michel-Villarreal, R.; Vilalta-Perdomo, E.; Salinas-Navarro, D.E.; Thierry-Aguilera, R.; Gerardou, F.S. Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Educ. Sci. 2023, 13, 856. [Google Scholar] [CrossRef]
  14. James, P.; Lal, J.; Liao, A.; Magee, L.; Soldatic, K. Algorithmic decision-making in social work practice and pedagogy: Confronting the competency/critique dilemma. Soc. Work. Educ. 2024, 43, 1552–1569. [Google Scholar] [CrossRef]
  15. Montenegro-Rueda, M.; Fernández-Cerero, J.; Fernández-Batanero, J.M.; López-Meneses, E. Impact of the implementation of ChatGPT in education: A systematic review. Computers 2023, 12, 153. [Google Scholar] [CrossRef]
  16. Boetto, H. Artificial Intelligence in Social Work: An EPIC Model for Practice. Aust. Soc. Work. 2025, 1–14. [Google Scholar] [CrossRef]
  17. Strzelecki, A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact. Learn. Environ. 2024, 32, 5142–5155. [Google Scholar] [CrossRef]
  18. Kim, H.; Hwang, J.; Kim, T.; Choi, M.; Lee, D.; Ko, J. Impact of generative artificial intelligence on learning: Scaffolding strategies and self-directed learning perspectives. Int. J. Hum.–Comput. Interact. 2025, 1–23. [Google Scholar] [CrossRef]
  19. Yan, L.; Sha, L.; Zhao, L.; Li, Y.; Martinez-Maldonado, R.; Chen, G.; Li, X.; Jin, Y.; Gašević, D. Practical and ethical challenges of large language models in education: A systematic scoping review. Br. J. Educ. Technol. 2024, 55, 90–112. [Google Scholar] [CrossRef]
  20. Chee, H.; Ahn, S.; Lee, J. A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. Br. J. Educ. Technol. 2025, 56, 2146–2182. [Google Scholar] [CrossRef]
  21. Bearman, M.; Ajjawi, R. Learning to work with the black box: Pedagogy for a world with artificial intelligence. Br. J. Educ. Technol. 2023, 54, 1160–1173. [Google Scholar] [CrossRef]
  22. Watts, K.J. Paying the Cognitive Debt: An Experiential Learning Framework for Integrating AI in Social Work Education. Educ. Sci. 2025, 15, 1304. [Google Scholar] [CrossRef]
  23. Brislin, R.W. Back-translation for cross-cultural research. J. Cross-Cult. Psychol. 1970, 1, 185–216. [Google Scholar] [CrossRef]
  24. Tlili, A.; Shehata, B.; Adarkwah, M.A.; Bozkurt, A.; Hickey, D.T.; Huang, R.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
  25. Bozkurt, A.; Xiao, J.; Lambert, S.; Pazurek, A.; Crompton, H.; Koseoglu, S.; Farrow, R.; Bond, M.; Nerantzi, C.; Honeychurch, S.; et al. Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian J. Distance Educ. 2023, 18, 53–130. [Google Scholar]
  26. Ajzen, I. The theory of planned behavior: Frequently asked questions. Hum. Behav. Emerg. Technol. 2020, 2, 314–324. [Google Scholar] [CrossRef]
  27. Teo, T. Factors influencing teachers’ intention to use technology: Model development and test. Comput. Educ. 2011, 57, 2432–2440. [Google Scholar] [CrossRef]
  28. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar]
  29. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  30. Panadero, E. A review of self-regulated learning: Six models and four directions for research. Front. Psychol. 2017, 8, 422. [Google Scholar] [CrossRef]
  31. Zimmerman, B.J. Becoming a self-regulated learner: An overview. Theory Into Pract. 2002, 41, 64–70. [Google Scholar] [CrossRef]
  32. Mollick, E.; Mollick, L. Assigning AI: Seven approaches for students, with prompts. arXiv 2023, arXiv:2306.10052. [Google Scholar] [CrossRef]
  33. IBM Corp. IBM SPSS Statistics for Windows, Version 25.0; IBM Corp.: Armonk, NY, USA, 2017.
  34. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  35. Chiu, T.K. The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interact. Learn. Environ. 2024, 32, 6187–6203. [Google Scholar] [CrossRef]
  36. Cotton, D.R.; Cotton, P.A.; Shipway, J.R. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 2024, 61, 228–239. [Google Scholar] [CrossRef]
Figure 1. Pre-test and post-test mean profiles across AILAS dimensions.
Figure 2. Effect sizes (Cohen’s d) for pre-post changes across AILAS dimensions.
Table 1. Integration of theoretical frameworks and their contributions to the AILAS dimensions.
Theory | Key Constructs | AILAS Dimensions Informed
Technology Acceptance Model (TAM) [28] | Perceived usefulness; perceived ease of use; behavioral intention | A (Self-Perception); F (Learning Process)
Attitude Theory/Theory of Planned Behavior (TPB) [29] | Cognition–affect–behavior structure; perceived behavioral control | B (Learning Desire); F (Learning Process)
Self-Regulated Learning (SRL) [30,31] | Forethought–performance–self-reflection cycle; metacognitive strategies; goal-directed regulation | C (Learning Methods); D (Learning Planning); E (Learning Habits)
Table 2. Demographic characteristics of participants.
Characteristic | Pre-Test (N = 37) | Post-Test (N = 35)
Gender |  |
  Male | 15 (40.5%) | 13 (37.1%)
  Female | 22 (59.5%) | 22 (62.9%)
Department |  |
  Social Work | 35 | 34
  Other/Unknown | 2 | 1
Grade Level |  |
  Year 1 | 34 | 32
  Year 2 | 1 | 2
  Year 3 | 1 | 1
  Other/Unknown | 1 | 0
Table 3. Independent-samples t-test results comparing pre-test and post-test scores across AILAS dimensions.
Dimension | Pre-Test M (SD) | Post-Test M (SD) | M Diff | 95% CI [Lower, Upper] | t Value | p | Cohen’s d
A. Self-Perception | 3.24 (0.61) | 3.58 (0.54) | +0.34 | [0.06, 0.62] | −2.45 | 0.017 * | 0.59
B. Learning Desire | 3.31 (0.64) | 3.72 (0.55) | +0.41 | [0.13, 0.69] | −2.88 | 0.005 ** | 0.69
C. Learning Methods | 3.18 (0.58) | 3.45 (0.56) | +0.27 | [−0.01, 0.54] | −1.95 | 0.055 | 0.47
D. Learning Planning | 3.15 (0.65) | 3.42 (0.53) | +0.27 | [−0.02, 0.55] | −1.88 | 0.064 | 0.46
E. Learning Habits | 3.22 (0.66) | 3.68 (0.56) | +0.46 | [0.16, 0.76] | −3.10 | 0.003 ** | 0.75
F. Learning Process | 3.19 (0.64) | 3.65 (0.52) | +0.46 | [0.18, 0.74] | −3.25 | 0.002 ** | 0.79
Overall Attitude | 3.21 (0.55) | 3.58 (0.43) | +0.37 | [0.13, 0.61] | −3.12 | 0.003 ** | 0.75
Note: CI = confidence interval; * p < 0.05, ** p < 0.01.
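As a transparency aid, the following minimal sketch (Python with SciPy, not the authors' SPSS workflow cited in [33]) illustrates how the cohort-level t-value, p-value and Cohen's d for one Table 3 row can be approximately reproduced from the reported summary statistics alone. The seven-comparison Bonferroni family (six dimensions plus the overall score) is an assumption, and small discrepancies reflect rounding of the published means and standard deviations.

```python
# Minimal reproduction sketch from Table 3 summary statistics (Dimension E,
# Learning Habits). Assumptions: pooled-variance t-test and a Bonferroni
# family of seven comparisons; the authors' analysis was conducted in SPSS.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

pre_m, pre_sd, pre_n = 3.22, 0.66, 37      # pre-test cohort
post_m, post_sd, post_n = 3.68, 0.56, 35   # post-test cohort

# Independent-samples t-test from summary statistics (equal variances assumed).
# The sign of t depends on group ordering; Table 3 reports pre minus post.
t_stat, p_value = ttest_ind_from_stats(post_m, post_sd, post_n,
                                        pre_m, pre_sd, pre_n,
                                        equal_var=True)

# Cohen's d using the pooled standard deviation as the denominator.
pooled_sd = sqrt(((pre_n - 1) * pre_sd ** 2 + (post_n - 1) * post_sd ** 2)
                 / (pre_n + post_n - 2))
cohens_d = (post_m - pre_m) / pooled_sd

# Assumed Bonferroni correction over six dimensions plus the overall score.
p_bonferroni = min(p_value * 7, 1.0)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}, "
      f"Bonferroni-adjusted p = {p_bonferroni:.3f}")
```

With these inputs the sketch yields roughly t ≈ 3.2, p ≈ 0.002 and d ≈ 0.75, consistent with the Learning Habits row up to rounding (Table 3 lists t = −3.10 under the pre-minus-post ordering).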
Table 4. Stability of inter-dimensional correlations (pre-test stage vs. post-test stage).
Dimension Pair (Theoretical Path) | Pre-Test r | Post-Test r | Difference (Δ) | Stability Status (|Δ| ≤ 0.10)
TAM Path: A (Self-Perception)-F (Process) | 0.65 | 0.68 | +0.03 | Stable
TPB Path: B (Learning Desire)-F (Process) | 0.72 | 0.75 | +0.03 | Stable
SRL Path: C (Learning Methods)-D (Planning) | 0.62 | 0.60 | −0.02 | Stable
SRL Path: C (Learning Methods)-E (Habits) | 0.58 | 0.65 | +0.07 | Stable
SRL Path: D (Learning Planning)-E (Habits) | 0.68 | 0.80 | +0.12 | Slightly above threshold
SRL Path: E (Learning Habits)-F (Process) | 0.61 | 0.66 | +0.05 | Stable
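The stability check in Table 4 is a simple threshold rule. The short sketch below, with values copied from the table, makes the |Δ| ≤ 0.10 criterion explicit and flags only the D-E path as exceeding it; it is a minimal restatement of the tabulated check, not an additional inferential test.

```python
# Illustrative sketch (correlations copied from Table 4): applying the
# |Δ| <= 0.10 stability criterion to each theoretical path.
pairs = {
    "A-F (TAM)": (0.65, 0.68),
    "B-F (TPB)": (0.72, 0.75),
    "C-D (SRL)": (0.62, 0.60),
    "C-E (SRL)": (0.58, 0.65),
    "D-E (SRL)": (0.68, 0.80),
    "E-F (SRL)": (0.61, 0.66),
}

for path, (pre_r, post_r) in pairs.items():
    delta = post_r - pre_r
    status = "Stable" if abs(delta) <= 0.10 else "Above threshold"
    print(f"{path}: Δ = {delta:+.2f} -> {status}")
```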
Table 5. Item-total statistics for Dimension D (Learning Planning).
Item | Corrected Item-Total Correlation | Cronbach’s Alpha If Item Deleted
D1: AI learning is not helpful (Reverse) | 0.08 | 0.66
D2: Study schedule | 0.42 | 0.54
D3: Improving after poor results | 0.48 | 0.51
D4: Plan to study daily | 0.55 | 0.48
D5: Stick to plan regardless of mood | 0.51 | 0.50
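Because the item-level responses are not reproduced here, the sketch below shows, on synthetic 5-point responses, how corrected item-total correlations and alpha-if-item-deleted values of the kind reported in Table 5 are typically computed. The helper functions are illustrative (not from any particular package), and the printed numbers will not match the table since the data are randomly generated.

```python
# Illustrative sketch with synthetic data (not the study's responses):
# Cronbach's alpha, corrected item-total correlations, and alpha if item
# deleted for a five-item scale such as Dimension D (Learning Planning).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    corrs = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        corrs.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(corrs)

# Hypothetical Likert responses (rows = 35 respondents, columns = items D1-D5).
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(35, 5)).astype(float)

print("alpha:", round(cronbach_alpha(demo), 2))
print("corrected item-total r:", corrected_item_total(demo).round(2))
print("alpha if item deleted:",
      [round(cronbach_alpha(np.delete(demo, j, axis=1)), 2) for j in range(5)])
```

In Table 5, the near-zero corrected item-total correlation for the reverse-scored item D1 (0.08), together with the rise in alpha if it were deleted, is the usual signature of an item that did not function as intended within its dimension.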