Article

From Reluctance to Engagement: Aligning Institutional Policy with “Human-in-the-Loop” Pedagogy

1 Department of Biology, Hamline University, Saint Paul, MN 55104, USA
2 Department of English and Communication Studies, Hamline University, Saint Paul, MN 55104, USA
3 Center of Educational Innovation, University of Minnesota, Minneapolis, MN 55455, USA
4 Department of Digital and Studio Arts, Hamline University, Saint Paul, MN 55104, USA
* Author to whom correspondence should be addressed.
Trends High. Educ. 2026, 5(2), 30; https://doi.org/10.3390/higheredu5020030
Submission received: 30 December 2025 / Revised: 16 March 2026 / Accepted: 18 March 2026 / Published: 26 March 2026

Abstract

The rapid rise of generative AI (GenAI) in higher education creates a tension between institutional goals for AI literacy and everyday classroom practice: while universities increasingly call for ethical and skills-focused engagement, faculty adoption is uneven and often constrained by uncertainty. To examine this gap, we combined campus-wide attitude surveys, a longitudinal content analysis of 1716 syllabi, and a review of publicly available GenAI assignment collections. Results revealed a persistent implementation gap: although sustained professional development was associated with shifts in faculty perspectives, including growing acceptance of the need for GenAI literacy, a majority of course-level policies remained prohibitive or punitive, at odds with both the institutional push for transparency in GenAI literacy and the faculty’s stated stance toward the need for teaching GenAI skills. Our analysis of publicly available GenAI-themed assignments demonstrated that engaging students with GenAI can take various shapes depending on instructor and course goals. This work positions AI-themed assignments as a practical solution to faculty reluctance, providing a promising pathway for hesitant educators to integrate AI literacy into their courses and meet the evolving needs of their students.

1. Introduction

The emergence of Generative AI (GenAI) as a fixture in higher education has precipitated a fundamental reshaping of the methods used to create, share, and apply knowledge. Recent data confirm that GenAI is here to stay, with usage rates soaring among undergraduates who now view these tools as essential for both academic and personal tasks [1]. This ubiquity presents a dual reality: GenAI can serve as a powerful cognitive partner that extends human thinking capacity, yet it is simultaneously a wicked problem requiring urgent interdisciplinary study. Early studies show that skillful engagement with GenAI is linked to better content mastery, deeper understanding, and higher-quality outcomes [2,3,4,5,6,7]. At the same time, over-reliance on GenAI for reasoning and problem-solving has been linked to cognitive atrophy and metacognitive laziness, eroding essential mental skills and leading to shallower understanding and lower information and skill retention [8,9,10,11]. Those who wish to opt out of engaging with AI entirely can rarely do so, considering that AI has permeated most digital systems of everyday use, ranging from smartphones to banking to healthcare to social media and entertainment, and so opting out usually means engaging blindly [12,13,14]. Higher education must treat GenAI both as a phenomenon to be rigorously studied and as a tool to be cautiously adopted, ensuring that efficiency does not come at the cost of human agency or critical judgment.
Sampling of current student behaviors highlights a growing pedagogical divide: learners are integrating GenAI more rapidly than their instructors [15]. Although an estimated 20% of students use GenAI to bypass rigorous assessments (what is commonly considered “cheating”), many more employ it to enhance their learning through drafting, revision, and inquiry [16,17,18]. To navigate this landscape, students require a robust mastery of GenAI literacy that transcends basic technical skills. This includes the ability to interrogate systems, craft effective prompts, and critically audit outputs for bias and accuracy. However, the development of these skills is currently hindered by a pervasive stigma around, and a lack of clear instruction in, effective use of GenAI. When institutional discourse focuses primarily on academic integrity rather than skill acquisition, students internalize a binary view where any GenAI interaction is framed as cheating or a threat to their future career success. In this confusion, students report a desire for guidance rather than bans, noting that hidden use prevents them from learning how to verify information or spot hallucinations [1,10,18,19,20]. Therefore, educators must equip students with the core knowledge for the intentional use of GenAI—to ensure that students understand GenAI applications and capabilities, so that they can make educated decisions about when, if, and how to use GenAI tools. In contexts where GenAI deployment has become commonplace, faculty should prepare students to engage effectively and ethically. This means teaching students how to treat GenAI as a sparring partner rather than an oracle, fostering intentionality in navigating these tools, and ensuring that they know how to be the ‘human-in-the-loop’ who retains ownership of the learning process regardless of the extent to which they choose to utilize the technology.
Successful integration of GenAI literacy relies heavily on faculty who are themselves equipped to lead in this evolving ecosystem. Faculty training, in this environment, is not a luxury but an essential investment, as educators pivot from a policing role to one of coaching and mentorship. Current research highlights a significant gap in educator preparedness, with many faculty feeling ill-equipped to integrate GenAI meaningfully into their pedagogy [15,21,22,23,24]. For many institutions with modest resources, particularly small liberal arts and comprehensive colleges, navigating this shift has been especially complex. Unlike large research universities supported by emerging AI research centers or externally funded pilot programs, smaller institutions often must adapt with limited resources, relying on distributed faculty expertise to manage the transition [15,25,26]. Early reactions across the sector have been generally marked by uncertainty and skepticism, with initial discourse centered on restriction, prohibition, and the preservation of traditional definitions of academic integrity [18,27,28]. As the limitations of detection software became apparent and the professional necessity of AI literacy became undeniable, the conversation began to shift toward adaptation and engagement.
This paper explores that critical transition in faculty attitudes toward GenAI—from reluctance to engagement—through two complementary lenses: an institutional case study of policy and attitude evolution, and a pedagogical analysis of effective assignment design. We examine the institutional landscape of GenAI integration at a small, private, comprehensive university in Minnesota (US). By analyzing syllabus language over three academic years and surveying faculty, staff, and students, we trace the messy reality of GenAI adoption. Adoption began with early policies favoring control and prohibition and evolved into persistently disparate perspectives among campus stakeholders, which prevented a cohesive institutional approach to GenAI from emerging. Our findings highlight the tension between the institutional desire to teach AI literacy, the faculty’s hesitation to implement it, and students’ reluctance to use it. Our data suggest that consistent professional development, anchored in student learning and cognitive development, can lead to rapid outcomes where it matters most. Such results include shifts in professed faculty attitudes toward embedding key AI literacy into the curriculum and an increase in faculty use of GenAI tools to assist with research and personal tasks. Despite this progress, specific paths to implementing GenAI literacy teaching within actual courses remain unclear and elusive.
Considering the positive impact of professional development on faculty attitudes thus far, we posit that providing ongoing and consistent opportunities for faculty to explore the impact of GenAI tools on student learning (positive and negative), society, and the environment is likely to continue building faculty confidence and willingness to explore these tools in their classrooms. This is especially true when such professional development highlights the positive impact of new pedagogies on student learning and efficacy. It is important to remember that syllabus policies primarily operate as instruments of behavioral regulation, clarifying expectations and defining acceptable conduct. Assignments, in contrast, shape what students actually practice, reflect upon, and learn. Our analysis of three national GenAI assignment collections suggests that one such path, student-centered and assignment-based professional development, could help shift faculty perceptions of GenAI from a tool for cheating to a partner for critical thinking. We argue that it is essential, in the context of engaging faculty scattered along the spectrum of GenAI attitudes, to affirm that there is no single right way to adopt GenAI within the higher education classroom; rather, we need to ensure that faculty have the opportunity to explore diverse entry points into GenAI investigation and integration, ranging from low-stakes assistance to transformative collaboration. Supporting faculty in constructing well-designed assignments that maximize transformative student learning is thus key to moving faculty from a stance of reluctance to one of pedagogical confidence.
In this study, several terms are used in specific ways. By AI/GenAI literacy we do not merely refer to technical proficiency in using AI tools, but rather to the capacity to evaluate outputs, understand limitations and bias, and make intentional decisions about when and how GenAI tools should be used in learning contexts. Human-in-the-loop pedagogy denotes pedagogical design in which students retain cognitive responsibility for judgment, verification, and making meaning rather than delegating these functions to GenAI models.

2. Materials and Methods

This study employs a mixed-methods approach to examine the integration of Generative AI (GenAI) in higher education, combining a longitudinal institutional case study with a broader qualitative analysis of pedagogical artifacts. It addresses the following four complementary research questions.
1. What are faculty, staff, and student attitudes toward GenAI, and how did these attitudes shift over time in relation to institutional professional development?
2. How are GenAI policies articulated by faculty across the institution?
3. What assignment design patterns are implemented in publicly available GenAI assignments?
4. To what extent are faculty attitudes (and subsequent course-level practices) influenced by institutional policy and professional development?

2.1. Institutional Case Study Context and Setting

Hamline University is a small, modestly endowed, private master’s comprehensive university in urban St. Paul, Minnesota, known for its emphasis on experiential learning, service, and social justice. In 2025, 71% of Hamline’s 1600 undergraduates were new majority students: 42% were first-generation college students, 40% were students of color, and 36% of students were eligible for Pell grants. The proportion of these new majority students has steadily increased over the last decade, mirroring the demographic shifts in the greater Twin Cities region. The University has undergone repeated academic restructuring aimed at reducing administrative barriers and increasing cross-unit collaboration among approximately 115 full-time faculty who typically teach about 80% of its courses. In December 2024, Hamline full-time faculty unanimously adopted a policy that requires each syllabus to describe the corresponding course’s approach to GenAI use, and encourages faculty to refrain from using any GenAI-detection software. Since January of 2023, the University’s Center for Teaching and Learning has offered targeted, persistent, and consistently delivered professional development opportunities related to the pedagogical implications of GenAI use by faculty and students and to AI literacy (Supplemental Figure S1).

2.2. Campus-Wide Attitude Surveys

To capture stakeholder perspectives on GenAI, we administered a series of surveys to faculty (Spring 2024 and Spring 2025), staff (Spring 2025), and students (Spring 2025 and Fall 2025). The survey instruments assessed participants’ willingness to embed GenAI in assignments, their perception of GenAI’s impact on higher education, and their views on the institution’s role in teaching ethical AI use. The 2024 survey included 24 faculty responses and served as a preliminary pilot during the initial campus rollout of GenAI—the addition of Gemini to the Google Suite available to all University stakeholders. The 2025 surveys included 53 faculty responses (47% of full-time faculty), 20 staff responses, and 211 student responses (13% of the undergraduate student population).
All surveys were administered through Google Forms and no identifying information was collected. Participants were invited to respond to surveys through a campus-wide email sent, respectively, to all full-time faculty, all staff, and all undergraduate students. The study was conducted in accordance with the Declaration of Helsinki, and determined to be exempt by the Institutional Review Board of Hamline University (protocol code 2024-4-301E; 23 April 2024). Only completed surveys were included in the analysis. The surveys consisted of a combination of multiple-choice and open-ended questions. The 2024 and 2025 surveys included the same set of questions. The faculty and student surveys conducted at different time points likely included overlapping sets of participants; however, responses were not attributed to specific individuals and were not analyzed as such. As the response rates were uneven across groups, the survey results may be affected by nonresponse and self-selection bias.
Statistical analyses were conducted in R. All hypothesis tests were two-sided. Mann–Whitney U tests were used for ordinal/attitudinal comparisons, and chi-square (χ²) tests were used for contingency tables. For all relevant comparisons, we report the statistical significance level and a standardized effect size (r for Mann–Whitney U tests and Cramér’s V for chi-square tests), with interpretation thresholds of 0.10 (small), 0.30 (medium), and 0.50 (large).
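The effect sizes above follow directly from the test statistics. The study’s analyses were run in R; the sketch below is an illustrative Python equivalent in which the survey values, wave labels, and contingency counts are hypothetical, showing how r (from the normal approximation of the Mann–Whitney U statistic) and Cramér’s V (from the chi-square statistic) can be derived:

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses from two survey waves
# (illustrative values only, not the study's data).
wave_1 = np.array([4, 3, 5, 2, 4, 3, 4, 5, 3, 2, 4, 3])
wave_2 = np.array([2, 3, 2, 1, 3, 2, 4, 2, 3, 1, 2, 3])

# Two-sided Mann-Whitney U test for ordinal attitude data
u_stat, p_value = stats.mannwhitneyu(wave_1, wave_2, alternative="two-sided")

# Effect size r = Z / sqrt(N), using the normal approximation of U
# (no tie correction here, so this is approximate with tied ranks)
n1, n2 = len(wave_1), len(wave_2)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u_stat - mu_u) / sigma_u
r = z / np.sqrt(n1 + n2)

# Chi-square test on a 2x2 contingency table (e.g., agree vs. disagree
# across two waves), with Cramer's V as the standardized effect size
table = np.array([[122, 89], [93, 118]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}, r = {r:.2f}")
print(f"Chi-square = {chi2:.2f}, p = {p_chi:.3f}, V = {cramers_v:.2f}")
```

Both effect sizes are scale-free, which is what allows the shared 0.10/0.30/0.50 interpretation thresholds across the two test families.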
Open-ended survey responses were coded by members of the research team. A codebook was developed iteratively (initial codes derived deductively from the conceptual frameworks and then refined inductively). Two research team members independently coded responses. A random subsample was double-coded to check reliability and disagreements were resolved by consensus discussion. A “text segment” was defined as a single respondent comment or discrete clause expressing one idea.

2.3. Longitudinal Syllabus Analysis

To evaluate the evolution of institutional messaging and faculty policy regarding GenAI, we conducted a comprehensive content analysis of all course syllabi submitted as part of the institution’s regular annual syllabus collection (primarily for accreditation purposes) spanning three academic years (Fall 2022 through Spring 2025). Access for research purposes was authorized by the Provost Office and all data were handled under institutional confidentiality protocols. When quoting policy or course text verbatim, these excerpts were carefully de-identified to ensure that neither the instructor nor the specific course could be identified. A total of 1716 syllabi (representing 82% of all courses) were collected from undergraduate and graduate courses across 47 disciplines, including the arts, business, education, humanities, and natural and social sciences. Our analysis does not include syllabi from 18% of courses offered during the study period as no syllabus was available. Because syllabi were absent primarily due to posting practices rather than the presence or absence of GenAI policy language, we treat the analyzed dataset as a large, representative sample of posted syllabi. However, estimates of policy prevalence should be interpreted as applying to publicly available syllabi rather than to all courses offered.
Syllabi were coded by members of the research team, who first participated in a shared training session to develop rubrics and decision criteria and apply them to a pilot subset of syllabi. Following calibration, each syllabus was coded independently. Inter-coder agreement exceeded 90% across all five dimensions for a random subset of 85 syllabi (5% of the sample) that were double-coded. Disagreements were resolved through discussion and minor rubric refinement before the final dataset was consolidated.
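The double-coding reliability check described above reduces to a simple percent-agreement calculation over the double-coded subset. A minimal sketch follows; the coder labels and category names are hypothetical, not the study’s coding data:

```python
# Two coders' labels for a small double-coded subset of syllabi
# on a single coding dimension (hypothetical example data).
coder_a = ["prohibitive", "permissive", "absent", "prohibitive", "conditional",
           "absent", "prohibitive", "permissive", "conditional", "prohibitive"]
coder_b = ["prohibitive", "permissive", "absent", "conditional", "conditional",
           "absent", "prohibitive", "permissive", "conditional", "prohibitive"]

def percent_agreement(a, b):
    """Share of double-coded items on which both coders assigned the same code."""
    assert len(a) == len(b), "coders must rate the same items"
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

agreement = percent_agreement(coder_a, coder_b)
print(f"Inter-coder agreement: {agreement:.0%}")

# Items where the coders disagreed, flagged for consensus discussion
disagreements = [i for i, (x, y) in enumerate(zip(coder_a, coder_b)) if x != y]
print(f"Items needing consensus discussion: {disagreements}")
```

In this toy example the coders agree on 9 of 10 items (90%), and the single disagreement would be resolved through the consensus discussion the methods describe.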
GenAI policies outlined in the syllabi were coded using a multi-dimensional approach informed by three recent frameworks. A practical 4E Adoption Framework for GenAI in education foregrounds institutional and instructional readiness, implementation pathways, and the organizational conditions that shape GenAI engagement [24]. The AI Assessment Scale (AIAS) is a rubric-style framework that centers the ethical integration of GenAI into assessment practice, highlighting issues such as fairness, academic integrity, transparency, and alignment between tasks and learning outcomes [29]. A comprehensive AI policy-for-education framework treats AI integration ecologically, connecting policy, curricular design, governance, and support structures across the institution [30]. We used these three complementary frameworks together because, taken as a set, they capture the key dimensions relevant to syllabus policy, adoption process and readiness, assessment ethics and alignment, and institutional ecosystem, moving beyond binary classifications of permit vs. prohibit to characterize how courses frame, scaffold, and govern GenAI use.
Each syllabus was coded for presence, alignment with course goals, disclosure expectations, AI literacy outcomes, and GenAI policy tone (Table 1).

2.4. AI Assignment Analysis Framework and Analysis of AI-Themed Assignments

To explore best practices in AI-infused pedagogical design, we conducted a qualitative thematic analysis of publicly available AI-themed assignments from three national AI assignment collections: The AI Pedagogy Project (Harvard University; https://aipedagogy.org/assignments/ (accessed on 11 September 2025)), AI Assignment Library (University of North Dakota; https://commons.und.edu/ai-assignment-library/ (accessed on 11 September 2025)), and Teaching Repository for AI-Induced Learning (TRAIIL; University of Central Florida; https://stars.library.ucf.edu/traiil/ (accessed on 11 September 2025)). All publicly available GenAI-related assignments from the three national repositories available in October 2025 were included and no topical or quality-based exclusions were applied. Analysis was conducted in accordance with each repository’s licensing and terms of use for publicly shared materials. All content explicitly cited in the analysis was properly attributed to the source repository and, where applicable, to the original instructors or institutions, following repository guidelines (Supplemental Table S2).
Assignments were analyzed to identify patterns in design, scaffolding, and learning outcomes. We utilized the AI Assessment Scale [29] to evaluate assignments across six dimensions (Table 2, Supplemental Table S1). Additionally, learning outcomes explicitly stated in all included assignments were qualitatively analyzed for core learning outcome themes. ChatGPT 5.0 and Gemini 2.5 were used as assistants to human researchers in conducting initial qualitative data analysis for rubric development and assignment analysis; raw PDF files of the assignments were submitted to these AI tools. The authors have reviewed and edited the output, re-coded all assignments, and take full responsibility for the content of this publication.

3. Results

This study combines three complementary sources of evidence to trace one institution’s path through the complex landscape of GenAI engagement. Using campus-wide attitude surveys of faculty, staff, and students, a longitudinal content analysis of 1716 course syllabi, and a qualitative review of three national GenAI assignment repositories, we examine (a) stakeholder attitudes toward GenAI, (b) course-level policy and practice, and (c) assignment designs that might engage faculty. Our primary goals are (1) to map the alignment (and possible implementation gaps) between institutional directives, faculty readiness, and actual classroom practice, and (2) to identify possible approaches to professional development through GenAI-themed assignments, designed to build strong critical thinking and metacognition skills while developing AI literacy.

3.1. Divergent Perspectives on the Need for GenAI Education

Data from the Campus-Wide GenAI Attitude Surveys revealed a notable paradox. Contrary to the narrative of faculty resistance, instructors at this institution have quietly integrated GenAI into their professional workflows at rates exceeding those of their students. While, in Spring 2025, only 26% of Hamline faculty reported never using GenAI for academic purposes (and 30% for personal purposes), a significantly higher proportion of Hamline students reported avoiding these tools entirely (49% for academic and 54% for personal purposes; Figure 1).
This usage gap correlated with a deep divide in professional outlook: while over 80% of faculty viewed GenAI as a valuable assistant and believed learning to work with it was essential, nearly half of surveyed students (47%) registered skepticism about GenAI as a significant influence in their chosen field (Table 3).
While faculty and students shared deep concerns about GenAI eroding critical thinking and ethical standards, their practical anxieties diverged significantly. Faculty focused on the deficit of institutional support and training needed to manage the technology, whereas students focused on the tool’s unreliability and its potential to displace their future careers (Table 4).
Crucially, the Campus-Wide GenAI Attitude Surveys exposed an implementation gap. In March 2025, 98% of faculty and 96% of staff respondents noted that the institution bears responsibility for teaching ethical and effective GenAI use (Figure 2a). However, despite recognizing this imperative, faculty respondents reported hesitation to deliver this instruction themselves (Figure 2b). Fewer than 50% of faculty expressed interest in designing assignments that require or expose students to GenAI applications. This reluctance correlated with a generally pessimistic outlook: approximately half of the faculty respondents predicted that the overall impact of GenAI on higher education would be negative (Figure 2c). These data suggest a tension between faculty recognition of the institutional imperative to teach GenAI literacy, on the one hand, and a lack of confidence or desire to implement this goal in their own teaching, on the other.
This hesitation may be associated with a measurable increase in student skepticism. In March 2025, fewer than 60% of student respondents agreed that the university should teach AI skills, and student agreement with this sentiment subsequently declined rapidly. Between March and November 2025, the proportion of student respondents who claimed that GenAI will have a negative or extremely negative impact on higher education surged from 66% to 82% (Mann–Whitney U test; p-value < 0.01; effect size −0.187). Similarly, pessimism regarding GenAI’s impact on their daily work grew from 53% to 65% (Mann–Whitney U test; p-value < 0.05; effect size −0.117). Rather than translating into a desire to prepare for this perceived threat, growing pessimism coincided with a significant dip in student demand for formal instruction: the proportion of student respondents who indicated that the university should teach effective GenAI use dropped from 58% to just 44% over the same period (chi-square test; p-value < 0.01; effect size −0.146). Together, these findings suggest that in the absence of clear instructional pathways, GenAI is increasingly framed by students as a liability rather than a professional competency.

3.2. Syllabus Policies: Hesitation, Inconsistency, and Punitive Framing

The longitudinal analysis of 1716 syllabi, spanning from Fall 2022 through Spring 2025, confirms that the hesitation observed in faculty responses to the Campus-Wide GenAI Attitude Surveys translated into the documents that set the parameters of student learning across academic curricula (Figure 3a). Despite a clear, faculty-adopted institution-wide directive requiring transparent GenAI guidelines in each class, policy implementation by individual faculty remained incomplete. As of the 2025 academic year, approximately six months after the faculty vote, only 50% of course syllabi included a specific GenAI policy. Translated into student experience, these data suggest that in half of their courses students were left to navigate the ambiguity of GenAI usage without written guidance, often relying on verbal instructions or assuming a default stance, likely of prohibition.
When GenAI policies did appear in syllabi, they presented a complex reality of idiosyncratic policy implementation. Syllabus AI policies were governed almost exclusively by individual faculty preference rather than by departmental standards, course levels, or program requirements. Consequently, students within the same program of study often encountered contradictory messages regarding GenAI usage, navigating a fragmented landscape where expectations shifted radically from one course to the next.
Furthermore, the prevailing tone of existing syllabus policies defaulted to a cautious posture. In the academic year (AY) 2024–25, approximately 50% of the instructors who did include a GenAI policy in their syllabus framed GenAI use in strictly prohibitive or punitive terms. Prohibitive/punitive policies predominantly focused on academic dishonesty rather than on acceptable use, equated GenAI use with plagiarism or cheating, warned students of detection software, and stipulated grading penalties (Figure 3b). Only five percent of existing syllabi policies explicitly connected GenAI use to course goals and invited student discussion and inquiry (Figure 3b). Some of the most developed policies utilized the syllabus to educate students about the technology’s limitations, such as hallucinations and bias, and encouraged critical engagement rather than blind reliance.
The physical placement of the GenAI policies within the syllabus document often signaled their perceived role in the course. Fifteen percent of all syllabi (34% of syllabi with existing GenAI policies) embedded the GenAI guidelines directly within sections titled “Academic Integrity,” “Academic Honesty,” or “Plagiarism” (Figure 3a). This contextual framing reinforces a binary view in which GenAI is a threat to be managed and a potential conduct violation. The proportion of syllabi that contained a stand-alone, clearly labeled GenAI use section (e.g., “Generative Artificial Intelligence Policy”) and introduced the topic as a distinct and significant element of the academic landscape grew from 15% in AY2023–24 to 30% in AY2024–25.
While many syllabi tended toward strict prohibition or extreme restriction to protect the development of voice and analytical skills, some instructors adopted highly functional policies that distinguished between the process of learning and the product of assessment. These policies frequently delineated acceptable use, such as debugging code or brainstorming research topics, while strictly banning the use of GenAI for generating assessable content and frequently requiring detailed transparency and explicit disclosure of the ways the GenAI tools were used.
Inclusion of GenAI skills as an explicit learning outcome was rare, confined to fewer than 2% of analyzed syllabi. Even in courses where GenAI use was permitted, it was seldom tied to specific competency goals. Taken altogether, the data suggest that, across the institution, GenAI tools are either viewed as a logistical challenge to be regulated rather than a professional competency to be developed or, equally likely, that faculty at this point still lack the crucial confidence that would help them integrate GenAI literacies into their courses in ways that would prepare students to navigate an AI-infused world effectively.
Ultimately, our syllabus analysis revealed the partial effectiveness of a policy-first approach and the need for additional, alternative professional development approaches. The Campus-Wide GenAI Attitude Surveys student data suggest that as long as GenAI is managed solely through restrictive clauses in a syllabus, students will continue to view it as a liability to be hidden rather than a competency to be developed. Because syllabus policies primarily function as behavioral governance tools, setting boundaries, signaling expectations, and defining compliance, institutional policy can regulate GenAI use but has limited impact on developing AI literacy.

3.3. Assignment Analysis: Multiple Entry Points for Integration

How can institutions move from this defensive posture to active engagement? We hypothesized that the solution might lie in affirming faculty reluctance, and in encouraging our colleagues to translate it into course assignments that make their questions about GenAI transparent to their students. Transparent assignments that lead students to experience GenAI shortcomings, fallibility, hallucinations, and other areas that currently give faculty pause when they retreat from teaching GenAI altogether could be a productive entry point into teaching GenAI literacies. Such learning experiences can support students’ critical thinking, reflection, ethical reasoning, information literacy, creativity, learning autonomy, and collaboration and communication skills, while simultaneously helping students develop AI literacy. We were interested in investigating what approaches are taken by instructors who designed publicly available AI-themed assignments.
To explore pedagogical alternatives to restrictive policy frameworks, we analyzed publicly available AI-themed assignments from three national AI assignment collections. Building on the AI Assessment Scale [29], we developed the AI Assignment Analysis Framework, which evaluates assignments across six dimensions capturing the degree and nature of GenAI integration and the nature of collaboration between humans and GenAI (Table 2 and Table S1, Figure 4). The AAC&U VALUE framework situates assessment design, including the use of rubrics aligned to higher-order learning outcomes, as a vehicle for centering student work and guiding instructional improvement, recognizing that assignments crafted for complexity and transparency are more likely to reveal meaningful evidence of student learning and foster reflective engagement with criteria [31]. We were particularly interested in documenting where AI-themed assignments land on the dimensions of cognitive complexity and metacognition. As no assessment of student learning was evaluated directly in this study, this assignment analysis identified recurring design patterns and pedagogical examples rather than evidence of learning effectiveness within our institutional sample.
Quantitative coding of the sixty-seven assignments showed that AI-themed assignments targeted high-level cognitive complexity (80% of assignments were coded as "Create/Evaluate"), indicating that these assignments consistently tasked students with creating or evaluating knowledge rather than merely understanding or remembering information. These assignments also prioritized the process of learning, as evidenced by requirements for prompt logs and iterative drafts rather than a focus on the final deliverable alone (more than 95% of analyzed assignments were coded as "Balanced between process and product" or "Process-driven" on the Assignment Emphasis dimension). In contrast, the Cognitive Role of AI and Nature of AI Integration dimensions showed greater variation, suggesting that, while the assignments uniformly required rigorous human cognitive work, they differed in how GenAI tools were integrated, from using GenAI as a basic assistant or supplemental tool to deploying it as a thinking partner and learning guide.
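The proportion summaries above reduce to a simple tally over coded rubric levels. The sketch below is illustrative only: the dictionary keys mirror two framework dimensions, but the three-item sample is a hypothetical stand-in, not the study's 67 coded assignments.

```python
from collections import Counter

# Hypothetical coded sample (illustrative; the study coded 67 assignments
# against the full framework in Table 2 and Table S1):
coded = [
    {"Cognitive Complexity": "Create/Evaluate", "Assignment Emphasis": "Process-driven"},
    {"Cognitive Complexity": "Create/Evaluate", "Assignment Emphasis": "Balanced between process and product"},
    {"Cognitive Complexity": "Analyze", "Assignment Emphasis": "Product-oriented"},
]

def dimension_proportions(assignments, dimension):
    """Percentage of assignments coded at each rubric level for one dimension."""
    counts = Counter(a[dimension] for a in assignments)
    total = len(assignments)
    return {level: round(100 * n / total, 1) for level, n in counts.items()}

complexity = dimension_proportions(coded, "Cognitive Complexity")
print(complexity)  # e.g. {'Create/Evaluate': 66.7, 'Analyze': 33.3}
```

The same tally, applied per dimension across the full sample, yields the per-level percentages plotted in Figure 4.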
Qualitative textual analysis of 116 assignment learning outcomes (Table 5) revealed a decisive pedagogical shift toward active production, with “Creation and Generation” (37.9%) and “Application and Practice” (36.2%) emerging as the dominant trends. Rather than fostering passive consumption, these assignments utilized GenAI to drive high-level cognitive tasks, with nearly one-third of outcomes explicitly requiring “Critical Analysis and Evaluation” (31.9%) of AI-generated content. Furthermore, the prevalence of “Reflection” and “Information Literacy” (both ~20%) underscored a commitment to “human-in-the-loop” competencies, ensuring students retain agency by verifying and refining algorithmic outputs.
Across the assignments, five recurring design patterns emerged:
1. GenAI as an Object of Critique and Verification: A primary trend involved shifting the student’s role from a passive consumer of information to an active evaluator. In these assignment designs, GenAI output was treated not as a final answer but as a flawed text requiring rigorous verification. For example, students compared a human-written assessment with an AI-generated critique to identify gaps in nuance and accuracy. Similarly, finance students were tasked with instructing GenAI to perform calculations and then auditing the model’s output against their own manual work. These tasks forced students to exercise validation of GenAI output and clarify their own understanding by identifying where and why the model failed.
2. Mandatory Metacognition and Process Documentation: To ensure students embraced the role of the “human-in-the-loop” in their interactions with GenAI, robust assignments consistently mandated metacognitive reflection. Rather than grading AI-generated products alone, instructors assessed “process evidence”—such as logs of prompts students used and narratives explaining why certain GenAI outputs were accepted or rejected. Students were frequently asked to reflect on their learning journey, documenting how their prompting strategies evolved or how the GenAI outputs influenced their thinking.
3. Operationalizing Ethical Reasoning: Beyond abstract discussions, assignments operationalized ethics by requiring students to confront bias and misinformation directly. Some tasks guided students to intentionally misuse GenAI to generate false narratives, thereby exposing the mechanisms of disinformation. Others involved analyzing images generated for specific professional prompts (e.g., "doctor" vs. "nurse") to reveal and critique the stereotypes embedded in training data. These assignments moved ethics from a compliance checklist to a core learning objective, asking students to grapple with the societal impact of the tools they used.
4. GenAI for Skill Rehearsal and Role-Playing: A significant number of assignments utilized GenAI as a thinking partner for low-stakes practice and simulation. In clinical fields, students engaged in simulated triage with voice-mode chatbots to practice professional communication in real time. In the humanities and social sciences, students prepared for ethnographic interviews by rehearsing with a chatbot prompted to adopt the persona of a specific subject. This approach allowed students to refine rhetorical and active listening skills in a safe environment before applying them in high-stakes real-world contexts.
5. Critique of the Tool and Interface: Some assignments treated the GenAI model itself as a subject of technical and critical study by requiring students to use GenAI to find edge cases in GenAI’s own code, turning the tool into a quality assurance mechanism, or by analyzing how the personality assigned to a GenAI influences user trust and interaction.
Further thematic analysis of the sampled assignments showed that there is no single best way to integrate GenAI into assignments; instead, it highlighted multiple pedagogical entry points that vary with the desired level of student engagement and the learning outcomes. We positioned the assignments along four axes (Figure 5; Supplemental Table S2): (1) Is the assignment discipline-specific? (2) Is the assignment focused on the process of learning and creating or on the final product? (3) Does the assignment emphasize skills in using GenAI or GenAI's impact on society, the environment, or human well-being? (4) How extensively are students engaged in actually using GenAI tools to complete the assignment?
The analyzed AI-themed assignments were distributed across these dimensions and offered multiple entry points for engaging students with GenAI, depending on course learning goals, instructor goals, and the desired level of engagement with GenAI. This strongly suggests that successful GenAI adoption does not require a one-size-fits-all overhaul, but rather a strategic alignment in which the depth of GenAI engagement is purposefully matched to the instructor's goals and the course's learning outcomes.

4. Discussion

Generative AI has emerged as a dual force in higher education: a global phenomenon that demands rigorous study and a set of co-intelligence tools that can augment human thinking, writing, and professional practice. As a phenomenon, GenAI is reshaping the cognitive and social environments our students will inhabit [13,14]; as a tool, it can function as a thinking partner that deepens critical reasoning, strengthens communication, and supports complex problem solving [6,10,25,32,33]. When used casually, poorly, or blindly, by contrast, GenAI can be harmful: it can produce inaccurate information, reinforce widespread biases, and erode users' cognitive skills [34,35,36]. Given these stakes, universities cannot remain passive observers; instead, institutions must actively shape how GenAI is investigated, taught, and evaluated. Yet our data show that strong institutional intent has not straightforwardly translated into classroom practice.
A critical implementation gap exists between institutional desire for general GenAI integration and the reluctance of faculty to quickly execute it. While institutional rhetoric, professional development opportunities, and institutional policies encourage instructor engagement, our survey data revealed that faculty confidence remained low, owing to the rapid emergence of GenAI tools, constant change within the field, and, perhaps more importantly, persistent questions about GenAI's efficacy, accuracy, environmental footprint, and potential negative impact on student learning and cognitive development. This aligns with recent global findings: while most faculty anticipate having to use GenAI, the vast majority of educators are wary of potential pitfalls and of GenAI's cognitive, societal, and environmental impacts, feel ill-equipped, and lack the pedagogical clarity to implement it effectively [27,37,38]. Instead of designing the structural curriculum changes necessary to make learning AI-resilient and AI-enhanced [39], many instructors retreat to discursive solutions—adding restrictive boilerplate text to syllabi without fundamentally altering how learning is facilitated, supported, and assessed.
This faculty hesitancy manifests in a chaotic course-level policy landscape. Our longitudinal analysis of GenAI syllabus policy statements reveals that students are navigating a fragmented environment in which GenAI philosophies and attitudes, with their attendant expectations and value judgments, shift radically from one course to the next. In the absence of sufficient faculty confidence and skill development, many policies default to prohibition, framing GenAI use strictly through the lens of academic misconduct. The resulting mixed messaging, in which some instructors, sometimes teaching within the same program, embrace GenAI as a professional competency while others ban it as cheating, creates significant student anxiety. Importantly, this confusion, and even outright bans, do not actually prevent GenAI use; rather, as current research has demonstrated, they drive it underground. Like other "absence only" policies that merely accentuate illicit engagement, covert GenAI use leads not only to missed opportunities to educate students on proper use and protection, but frequently to the very outcomes that faculty bans and prohibitions are trying to prevent, such as cognitive outsourcing, cognitive decline, and a sharp decrease in student confidence and efficacy [40]. Policies that are strictly prohibitive or conspicuously absent leave students without the tools and skills required to adopt GenAI meaningfully [41]. Traditional enforcement and detection strategies are woefully insufficient, and assessment design must be reconceptualized [39].
If current research is to be believed, and the data suggest that it should be, students are using GenAI extensively but secretly, with over 90% of students regularly relying on GenAI in a variety of settings [1]. Perhaps not surprisingly, when asked, students rarely admit to this widespread usage. Following national trends [18,42], about half of the students in our data set claimed never to use GenAI tools for academic purposes. This discrepancy likely reflects a reporting bias triggered by pervasive stigma and the predominantly punitive syllabus language identified in our analysis. When 50% of syllabi frame GenAI solely through the lens of academic misconduct, students may under-report their usage to avoid the risk of being labeled as cheaters. Such prohibitive framing may also contribute to the growing pessimism among students, reported in our campus-wide survey, regarding GenAI's impact on their future careers and on higher education overall.
In this context, the only way forward is to provide students with sufficient opportunities to learn about GenAI tools so that they can make educated decisions about if, when, and how to deploy them, and do so skillfully, ethically, and effectively. It is not sufficient to proclaim that this should be so through administrative statements and top-down institutional policies. Even faculty-driven policies, however unanimously adopted, are insufficient on their own to ensure that students have consistent opportunities to learn about and engage with complex, constantly changing GenAI tools, as is evident in the gap we discovered between professed faculty attitudes toward teaching GenAI and the pedagogical practices actually captured in syllabi. To truly shift institutional approaches in this new educational landscape, institutions must adopt consistent, longitudinal, multi-pronged strategies that provide the training and long-term support faculty need to continue exploring, learning about, and safely testing GenAI tools in their professional lives and classrooms.
Our data demonstrate that longitudinal participation in professional development workshops is associated with measurable shifts in faculty perspectives, reported practices, and actual course-level GenAI policies. Recent multi-institutional work similarly documents variability in faculty readiness and highlights how targeted professional development increases instructor willingness to experiment with GenAI in teaching contexts [22]. While this indicates that professional development is an effective lever for reframing pedagogical approaches to GenAI, our findings show that it is a gradual one that must be sustained over time. In addition, while faculty often recognize GenAI's potential and report intentions to use it, actual adoption depends heavily on trust in GenAI output, transparency in its use, and perceived usefulness for both faculty work and student learning. Consequently, professional development must go beyond awareness-raising to build trust, model concrete pedagogical uses, and strengthen faculty self-efficacy through sustained, community-based support if positive attitudes are to translate into durable classroom integration [43,44]. Professional development focused on syllabus policies alone cannot bridge the implementation gap, because syllabus policies regulate student behavior but do not cultivate actual AI literacy. To learn the skills necessary to navigate an AI-enhanced future, students must encounter plentiful opportunities to explore and think critically about GenAI tools through class-related assignments anchored in course- and program-level learning outcomes, which means that faculty need support in designing and implementing such assignments.
For professional development to translate into sustained classroom change, trust and clarity must be embedded in training by modeling transparent assignments in which instructors and students jointly engage with, question, and learn from exploring GenAI impacts and/or developing GenAI collaborative skills. Professional development initiatives should be directly informed by student perceptions, learning outcomes, and structured student feedback. Our analysis of publicly available national AI-themed assignment collections provides a framework and exemplar prompts that instructors—whether skeptical, curious, or enthusiastic—can adapt as entry points for themselves and their students to build cognitive complexity and metacognitive reflection. Rather than mandating a specific curriculum, we want to encourage and support instructors to “jump in”—try small, reflective tasks or deeper projects—and learn by doing, because experimentation is how practical pedagogy evolves.
Finally, we acknowledge the limitations of this study. This single-institution study relies largely on self-report, syllabus analysis, and assignment coding collected during a period of rapid technological change; these constraints temper generalizability and preclude strong causal claims. Future work should pursue longitudinal and observational designs that trace whether attitude shifts induced by professional development translate into persistent classroom practice and improved student learning, and should test which institutional levers most reliably convert short-term uptake into durable, equitable change.

5. Conclusions

Our study demonstrated a clear implementation gap: while institutions can articulate a commitment to AI literacy, course-level practice remains fragmented, often punitive, and poorly aligned with stated goals. Sustained, community-based professional development was associated with measurable shifts in faculty attitudes. Analysis of national assignment repositories showed multiple viable pedagogical entry points to engaging students in exploration of GenAI tools.
Based on these findings, we recommend four practical, institution-level approaches: (1) encourage faculty to align GenAI course policies with explicit AI-literacy and course learning outcomes; (2) invest in ongoing, scaffolded professional development that models transparent, assignment-focused practices, so faculty can adopt approaches that fit their course goals; (3) encourage program-level coordination of GenAI policies to reduce student confusion while preserving multiple, discipline-appropriate pathways for engagement with GenAI; and (4) lean into faculty reluctance by encouraging faculty to develop assignments that critically evaluate GenAI's impact.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/higheredu5020030/s1, Figure S1: GenAI Timeline: External Pressure and Internal Response at Hamline University; Table S1: The GenAI Assignment Analysis Framework; Table S2. GenAI Assignment Categories.

Author Contributions

Conceptualization, I.M., C.H., M.K. and J.G.; methodology, I.M., C.H., M.K. and J.G.; data curation, I.M., C.H., M.K. and J.G.; writing—original draft preparation, I.M., C.H., M.K. and J.G.; writing—review and editing, I.M., C.H., M.K. and J.G.; visualization, I.M., C.H., M.K. and J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and determined to be exempt by the Institutional Review Board of Hamline University (protocol code 2024-4-301E; 23 April 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The raw data supporting the syllabus analysis and survey results are available from the corresponding author upon reasonable request, subject to privacy protections for human subjects. The assignment descriptions are publicly available through their respective institutional repositories cited above.

Acknowledgments

The authors are grateful for the support of Hamline faculty, students, and staff. During the preparation of this manuscript/study, the authors used ChatGPT 5.0 and Gemini 2.5 as assistants to human researchers in conducting qualitative data analysis. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GenAI: Generative Artificial Intelligence

References

  1. Freeman, J. Student Generative AI Survey; Higher Education Policy Institute Report 2025 HEPI Report; Higher Education Policy Institute: Oxford, UK, 2025; Available online: https://www.hepi.ac.uk/reports/student-generative-ai-survey-2025/ (accessed on 15 December 2025).
  2. Pallant, J.L.; Blijlevens, J.; Campbell, A.; Jopp, R. Mastering knowledge: The impact of generative AI on student learning outcomes. Stud. High. Educ. 2025, 1–22. [Google Scholar] [CrossRef]
  3. Iqbal, J.; Hashmi, Z.F.; Asghar, M.Z.; Abid, M.N. Generative AI tool use enhances academic achievement in sustainable education through shared metacognition and cognitive offloading among preservice teachers. Sci. Rep. 2025, 15, 16610. [Google Scholar] [CrossRef] [PubMed]
  4. Tu, Y.-F. Roles and functionalities of ChatGPT for students with different growth mindsets: Findings of drawing analysis. Educ. Technol. Soc. 2025, 27, 198–214. [Google Scholar]
  5. Bittle, K.; El-Gayar, O. Generative AI and academic integrity in higher education: A systematic review and research agenda. Information 2025, 16, 296. [Google Scholar] [CrossRef]
  6. Wang, J.; Fan, W. The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Int. J. Educ. Technol. High. Educ. 2024, 21, 35. [Google Scholar] [CrossRef]
  7. Bai, Y.; Wang, S. Impact of generative AI interaction and output quality on university students’ learning outcomes: A technology-mediated and motivation-driven approach. Sci. Rep. 2025, 15, 24054. [Google Scholar] [CrossRef]
  8. Dergaa, I.; Saad, H.B.; Glenn, J.M.; Amamou, B.; Aissa, M.B.; Guelmami, N.; Fekih-Romdhane, F.; Chamari, K. From tools to threats: A reflection on the impact of artificial-intelligence chatbots on cognitive health. Front. Psychol. 2024, 15, 1259845. [Google Scholar] [CrossRef]
  9. Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  10. Nasr, N.R.; Tu, C.-H.; Werner, J.; Bauer, T.; Yen, C.-J.; Sujo-Montes, L. Exploring the impact of generative AI ChatGPT on critical thinking in higher education: Passive AI-directed use or human–AI supported collaboration? Educ. Sci. 2025, 15, 1198. [Google Scholar] [CrossRef]
  11. Hassen, M.Z. The impact of AI on students’ reading, critical thinking, and problem-solving skills. Am. J. Educ. Inf. Technol. 2025, 9, 82–90. [Google Scholar] [CrossRef]
  12. Shin, Y. Toward human-centered artificial intelligence for users’ digital well-being: Systematic review, synthesis, and future directions. JMIR Hum. Factors 2025, 12, e69533. [Google Scholar] [CrossRef]
  13. Chang, J.P.-C.; Cheng, S.-W.; Chang, S.M.-J.; Su, K.-P. Navigating the digital maze: A review of AI bias, social media, and mental health in Generation Z. AI 2025, 6, 118. [Google Scholar] [CrossRef]
  14. Storey, V.C.; Yue, W.T.; Zhao, J.L.; Lukyanenko, R. Generative artificial intelligence: Evolving technology, growing societal impact, and opportunities for information systems research. Inf. Syst. Front. 2025, 27, 2081–2102. [Google Scholar] [CrossRef]
  15. Wang, H.; Dang, A.; Wu, Z.; Mac, S. Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Comput. Educ. Artif. Intell. 2024, 7, 100326. [Google Scholar] [CrossRef]
  16. Ravšelj, D.; Keržič, D.; Tomaževič, N.; Umek, L.; Brezovar, N.; Iahad, N.A.; Abdulla, A.A.; Akopyan, A.; Segura, M.W.A.; AlHumaid, J.; et al. Higher education students’ perceptions of ChatGPT: A global study of early reactions. PLoS ONE 2025, 20, e0315011. [Google Scholar] [CrossRef]
  17. Lee, V.R.; Pope, D.; Miles, S.; Zárate, R.C. Cheating in the age of generative AI: A high-school survey study of cheating behaviours before and after the release of ChatGPT. Comput. Educ. Artif. Intell. 2024, 7, 100253. [Google Scholar] [CrossRef]
  18. Johnston, H.; Wells, R.F.; Shanks, E.M.; Boey, T.; Parsons, B.N. Student perspectives on the use of generative artificial intelligence technologies in higher education. Int. J. Educ. Integr. 2024, 20, 2. [Google Scholar] [CrossRef]
  19. Blahopoulou, J.; Ortiz-Bonnin, S. Student perceptions of ChatGPT: Benefits, costs, and attitudinal differences between users and non-users toward AI integration in higher education. Educ. Inf. Technol. 2025, 30, 19741–19764. [Google Scholar] [CrossRef]
  20. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  21. Ayyoub, A.M.; Khlaif, Z.N.; Shamali, M.; Abu Eideh, B.; Assali, A.; Hattab, M.K.; Barham, K.A.; Tahani, R.K. Advancing higher education with GenAI: Factors influencing educator AI literacy. Front. Educ. 2025, 10, 1530721. [Google Scholar] [CrossRef]
  22. Almisad, B.; Aleidan, A. Faculty perspectives on generative artificial intelligence: Insights into awareness, benefits, concerns, and uses. Front. Educ. 2025, 10, 1632742. [Google Scholar] [CrossRef]
  23. Khlaif, Z.N.; Alkouk, W.A.; Salama, N.; Abu Eideh, B. Redesigning assessments for AI-enhanced learning: A framework for educators in the generative AI era. Educ. Sci. 2025, 15, 174. [Google Scholar] [CrossRef]
  24. Shailendra, S.; Kadel, R.; Sharma, A. Framework for adoption of generative artificial intelligence (GenAI) in education. IEEE Trans. Educ. 2024, 67, 777–785. [Google Scholar] [CrossRef]
  25. Jin, Y.; Yan, L.; Echeverria, V.; Gašević, D.; Martinez-Maldonado, R. Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Comput. Educ. Artif. Intell. 2025, 8, 100348. [Google Scholar] [CrossRef]
  26. Sutedjo, A.; Liu, S.P.; Chowdhury, M. Generative AI in higher education: A cross-institutional study on faculty preparation and resources. Stud. Technol. Enhanc. Learn. 2025, 4. [Google Scholar] [CrossRef]
  27. Wu, F.; Dang, Y.; Li, M. A systematic review of responses, attitudes, and utilization behaviors on generative AI for teaching and learning in higher education. Behav. Sci. 2025, 15, 467. [Google Scholar] [CrossRef]
  28. Kizilcec, R.F.; Huber, E.; Papanastasiou, E.C.; Cram, A.; Makridis, C.A.; Smolansky, A.; Zeivots, S.; Raduescu, C. Perceived impact of generative AI on assessments: Comparing educator and student perspectives in Australia, Cyprus, and the United States. Comput. Educ. Artif. Intell. 2024, 7, 100269. [Google Scholar] [CrossRef]
  29. Perkins, M.; Furze, L.; Roe, J.; MacVaugh, J. The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. J. Univ. Teach. Learn. Pract. 2024, 21, 49–66. [Google Scholar] [CrossRef]
  30. Chan, C.K.Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 2023, 20, 38. [Google Scholar] [CrossRef]
  31. Rhodes, T.L.; Finley, A. Using the VALUE Rubrics for Improvement of Learning and Authentic Assessment; Association of American Colleges and Universities: Washington, DC, USA, 2013. [Google Scholar]
  32. Zhao, G.; Sheng, H.; Wang, Y.; Cai, X.; Long, T. Generative Artificial Intelligence amplifies the role of critical thinking skills and reduces reliance on prior knowledge while promoting in-depth learning. Educ. Sci. 2025, 15, 554. [Google Scholar] [CrossRef]
  33. Qian, Y. Pedagogical applications of Generative AI in Higher Education: A systematic review of the field. TechTrends 2025, 69, 1105–1120. [Google Scholar] [CrossRef]
  34. Gerlich, M. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
  35. Hua, W.; Wang, J.; Wang, Y. The effects of over-reliance on AI dialogue systems on students’ critical thinking, analytical thinking, and ethical concerns. Smart Learn. Environ. 2024, 11, 16. [Google Scholar]
  36. Fan, Y.; Tang, L.; Le, H.; Shen, K.; Tan, S.; Zhao, Y.; Shen, Y.; Li, X.; Gašević, D. Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. Br. J. Educ. Technol. 2024, 56, 489–530. [Google Scholar] [CrossRef]
  37. Lyu, W.; Zhang, S.; Chung, T.R.; Sun, Y.; Zhang, Y. Understanding the practices, perceptions, and (dis)trust of generative AI among instructors: A mixed-methods study in the U.S. higher education. Comput. Educ. Artif. Intell. 2025, 8, 100383. [Google Scholar] [CrossRef]
  38. Digital Education Council. Digital Education Council Global AI Faculty Survey 2025: AI meets Academia—What Faculty Think. 2025. Available online: https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey (accessed on 15 December 2025).
  39. Corbin, T.; Bearman, M.; Boud, D.; Dawson, P. The wicked problem of AI and assessment. Assess. Eval. High. Educ. 2025, 1–17. [Google Scholar] [CrossRef]
  40. Stone, B.W. Generative AI in higher education: Uncertain students, ambiguous use cases, and mercenary perspectives. Teach. Psychol. 2025, 52, 347–356. [Google Scholar] [CrossRef]
  41. Strzelecki, A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact. Learn. Environ. 2023, 32, 5142–5155. [Google Scholar] [CrossRef]
  42. Kirsanov, O.; Kushwah, L.; Selvaretnam, G. Beyond Detection: How Students Use—And Hide—AI in Online Assessments and What Authentic Tasks Can Do About It. J. Acad. Ethics 2026, 24, 14. [Google Scholar] [CrossRef]
  43. Lelescu, A.; Sava, S.; Grosseck, G.; Malita, L. Exploring trust in generative AI for higher education institutions: A systematic literature review focused on educators. Humanit. Soc. Sci. Commun. 2025, 12, 1961. [Google Scholar] [CrossRef]
  44. Shata, A.; Hartley, K. Artificial intelligence and communication technologies in academia: Faculty perceptions and the adoption of generative AI. Int. J. Educ. Technol. High Educ. 2025, 22, 14. [Google Scholar] [CrossRef]
Figure 1. Faculty report using GenAI tools more frequently than students for both academic (Mann–Whitney U test; p-value < 0.01; effect size 0.245) and personal (Mann–Whitney U test; p-value < 0.001; effect size 0.275) purposes. Surveys were conducted in Spring 2025.
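The group comparison reported in the Figure 1 caption is a Mann–Whitney U test. The stdlib-only sketch below is illustrative, using made-up data rather than the study's survey responses; it assigns average ranks to ties, uses the normal approximation for the two-sided p-value (without a tie correction of the variance), and reports the rank-biserial correlation as an effect-size magnitude.

```python
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Mann-Whitney U with a two-sided normal-approximation p-value
    and the rank-biserial correlation magnitude as the effect size."""
    n1, n2 = len(x), len(y)
    pooled = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        # extend j over a run of tied values
        while j + 1 < n1 + n2 and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U statistic for sample x
    u = min(u1, n1 * n2 - u1)                 # conventional (smaller) U
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma                      # z <= 0 since u is the min
    p = 2 * 0.5 * (1 + erf(z / sqrt(2)))      # two-sided p-value
    effect = 1 - 2 * u / (n1 * n2)            # rank-biserial magnitude, 0..1
    return u, p, effect

# Illustrative samples only (e.g. Likert-style usage-frequency responses):
u, p, effect = mann_whitney_u([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
```

For small samples an exact test (as offered by statistical packages) is preferable to the normal approximation used here.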
Figure 2. Campus attitudes about the role of GenAI in the curriculum.
Figure 3. Evolution of the GenAI syllabus policies across three years. (a) The pattern of GenAI policy integration based on different course syllabi. (b) The tone of GenAI policy based on individual instructors.
Figure 4. Evaluation of AI-themed assignments from national collections along the six dimensions identified by the AI Assignment Analysis Framework. Six dimensions are shown in individual rows. Levels 1–4 correspond to the coding rubric levels (Table 2 and Table S1). The proportion of assignments in each category is shown as a percentage relative to the overall number of assignments (67).
Figure 5. AI-themed assignments from national AI assignment collections differ in focus and the level of GenAI use. Each cell represents the assignments that fall into a particular category based on assignment characteristics and the level of GenAI use. The color scale corresponds to the proportion of assignments in each category, from white as 0% to black as 10%. While all assignments engage GenAI as either a study object or a tool, they represent different approaches that could be implemented based on the course objectives and the instructor’s goals.
Table 1. The GenAI syllabus policy analysis framework.
Dimension | Level A | Level B | Level C
Presence of AI policy | No AI policy | Brief/embedded mention of GenAI with few details | Clearly labeled section summarizing GenAI policy for the course
Policy alignment with course goals | No rationale | General rationale not tied to specific course goals | Rationale for the policy is described and explicitly tied to course goals
AI use disclosure | Documenting AI use is not required | AI use disclosure is required | Disclosure of AI use is required, providing guidelines for AI use citations
AI literacy as explicit learning outcome (LO) | No mention of learning to use GenAI | Vague language about learning how to use GenAI, no specific AI-literacy LOs | Lists specific AI-literacy learning outcomes
Learning climate related to GenAI use | Explicitly punitive/deterrent framing | Mixed tone; GenAI use mostly forbidden | Language is aligned with course values, encourages curiosity, and invites questions and dialogue
Table 2. The GenAI assignment analysis framework.

| Dimension | Level 1 | Level 2 | Level 3 | Level 4 |
|---|---|---|---|---|
| Assignment Emphasis | Focused exclusively on product | Product-oriented | Balanced between product and process | Process-driven |
| Focus of Student Effort | Metacognition is absent | Metacognition as an add-on | Metacognition as a component | Metacognition as the core of the assignment |
| Nature of GenAI Integration | GenAI is exclusively an object of study, not a tool | GenAI use is supplemental | GenAI collaboration is required for part of the project | GenAI collaboration is at the core of the assignment |
| Cognitive Role of GenAI in Learning | GenAI as an object of study, not a tool | GenAI as an assistant | GenAI as a thinking partner | GenAI as a guide for learning |
| Ethical Consideration | No consideration | Implicit expectation | Explicit requirement | Critical engagement |
| Cognitive Complexity | Understand/Remember | Analyze | Apply | Create/Evaluate |
Note: For a complete version of the rubric, see Supplemental Table S1.
Table 3. Role and Impact of GenAI as perceived by faculty and students.

| Question / Response option | Faculty | Students |
|---|---|---|
| How do you perceive the future role of AI in your chosen field of study or career? 1 | | |
|   • AI will do most of the work; entry-level positions in my field will be scarce | 14% | 9% |
|   • AI will not be a significant influence in my field | 6% | 47% |
|   • AI will work as a human assistant; learning to work with AI is essential | 80% | 44% |
| I believe generative AI tools will have a ___ impact on my day-to-day work 2 | | |
|   • Extremely negative | 8% | 34% |
|   • Somewhat negative | 11% | 19% |
|   • Neither positive nor negative | 32% | 23% |
|   • Somewhat positive | 40% | 19% |
|   • Extremely positive | 9% | 5% |

1 Chi-square test; p < 0.001; effect size 0.345. 2 Mann–Whitney U test; p < 0.001; effect size −0.40. Note: Campus-Wide GenAI Attitude Surveys were conducted in March 2025.
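For readers unfamiliar with the tests reported in the footnote, the chi-square statistic and its contingency-table effect size (the reported 0.345 is consistent with Cramér's V) can be computed directly from observed counts. The following is a minimal sketch using illustrative counts only, not the study's raw data; the function name and the example table are our own.

```python
def chi_square_and_cramers_v(table):
    """Chi-square statistic and Cramér's V for a contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(table), len(table[0]))      # smaller table dimension
    v = (chi2 / (n * (k - 1))) ** 0.5       # Cramér's V effect size
    return chi2, v

# Illustrative 2x3 table (rows: faculty, students; columns: the three
# "future role of AI" options). These are hypothetical counts.
observed = [[14, 6, 80],
            [9, 47, 44]]
chi2, v = chi_square_and_cramers_v(observed)
```

In practice one would use a statistics library (e.g., a chi-square contingency routine) to also obtain the p-value; the sketch above only shows where the statistic and effect size come from.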
Table 4. GenAI-related concerns expressed by faculty and students.

| Theme (with representative excerpt) | Faculty | Students |
|---|---|---|
| Erosion of Critical Thinking and Skill Loss: "Students won't actually learn how to learn, think, and write for themselves." | 54% | 31% |
| Institutional Support and Implementation: "Lack of training and professional development to master AI tools and lack of time to become proficient." | 34% | 1% |
| Environmental and Ethical Values: "Between the electricity, water use, and carbon emissions, GenAI could have really harmful consequences." | 16% | 12% |
| Career Impact: "It's replacing jobs in my field." | 6% | 18% |
| Academic Integrity and Trust: "It will normalize cheating." | 11% | 7% |

The five most prominent themes are quantified based on the total number of text segments analyzed (456 for students; 86 for faculty). Categories are not mutually exclusive, as some responses corresponded to multiple concerns. Representative excerpts illustrating each theme are provided to contextualize the coded frequencies.
Table 5. Learning outcomes (LOs) targeted by GenAI assignments.

| Theme | Frequency (% of LOs) | Key Words |
|---|---|---|
| Creation and Generation | 38% | Create, Develop, Design, Build, Draft |
| Application and Practice | 36% | Apply, Demonstrate, Use, Practice, Implement |
| Critical Analysis and Evaluation | 32% | Critically Evaluate, Analyze, Critique, Assess |
| AI Literacy and Technical Proficiency | 28% | Prompt, Iterate, Refine, Navigate, Tools |
| Reflection and Metacognition | 20% | Reflect, Self-Assess, Process, Insight |
| Information Literacy and Verification | 20% | Verify, Fact-Check, Sources, Accuracy, Validity |
| Understanding and Comprehension | 20% | Understand, Recognize, Identify, Define |
| Ethical Reasoning and Responsibility | 15% | Ethics, Bias, Privacy, Intellectual Property |
| Collaboration (Human–AI) | 14% | Collaborate, Co-Create, Partner, Dialogue |
| Communication and Argumentation | 9% | Articulate, Discuss, Argue, Explain |

Note: Themes were evaluated across 116 separate learning outcomes. Percentages do not sum to 100%, as some learning outcomes map to multiple themes.
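The note explains why the frequencies exceed 100%: a single learning outcome can be coded under several themes, so each theme's percentage is taken over the number of outcomes, not the number of theme labels. A minimal sketch of this multi-label tally, using hypothetical coded outcomes rather than the study's data:

```python
from collections import Counter

# Hypothetical multi-labeled learning outcomes: each outcome carries one
# or more theme codes, so per-theme percentages can sum past 100%.
coded_outcomes = [
    {"Creation and Generation", "AI Literacy and Technical Proficiency"},
    {"Application and Practice"},
    {"Critical Analysis and Evaluation", "Reflection and Metacognition"},
    {"Creation and Generation"},
]

# Count how many outcomes mention each theme, then divide by the
# number of outcomes (not the number of labels).
theme_counts = Counter(theme for outcome in coded_outcomes for theme in outcome)
n_outcomes = len(coded_outcomes)
percentages = {theme: 100 * count / n_outcomes
               for theme, count in theme_counts.items()}
```

With these toy data, "Creation and Generation" appears in two of four outcomes (50%), and the percentages across all themes total 150%, mirroring how Table 5's column can exceed 100%.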
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Makarevitch, I.; Kostihova, M.; Hilk, C.; Gumiela, J. From Reluctance to Engagement: Aligning Institutional Policy with “Human-in-the-Loop” Pedagogy. Trends High. Educ. 2026, 5, 30. https://doi.org/10.3390/higheredu5020030

