
A Critical AI Media Literacy Perspective on the Future of Higher Education with Artificial Intelligence Through Communities of Practice on Reddit

by
Olivia G. Stewart
Department of Education Specialties, St. John’s University, 8000 Utopia Parkway, Jamaica, NY 11439, USA
AI Educ. 2026, 2(1), 5; https://doi.org/10.3390/aieduc2010005
Submission received: 29 January 2026 / Revised: 23 February 2026 / Accepted: 2 March 2026 / Published: 9 March 2026

Abstract

As artificial intelligence (AI) becomes increasingly integrated into higher education, instructors and institutions face urgent questions about its implications for teaching, learning, and scholarly practice as well as power, agency, and access. This study draws on a critical AI media literacy framework to analyze user-generated discussions in the two largest higher education subreddits on Reddit.com. Through thematic content analysis, I explore faculty perceptions, pedagogical tensions, and imaginative possibilities surrounding AI’s academic role in shaping the current and future landscape of higher education. Findings reveal that discussions of student cheating, AI policies, writing practices, and faculty labor are not merely technical debates but sites where surveillance regimes, accountability structures, and academic precarity are negotiated in real time. Ultimately, I argue that AI in higher education is not simply a technological shift but a structural transformation requiring deliberate, critically informed governance grounded in equity and human agency.

1. Introduction

As artificial intelligence (AI), Generative AI (GenAI), and Large Language Models (LLMs) like ChatGPT, Claude, and Gemini become increasingly embedded in everyday life, instructors across disciplines are raising urgent questions about the future of academia (Murphy et al., 2025; Nolan, 2025). As illustrated in Table 1, AI use in higher education extends beyond writing and efficiency-oriented tasks, functioning instead as a mediational tool that reshapes sensemaking, decision-making, accessibility, and academic labor across institutional roles. As companies begin to roll out free or discounted versions of their LLMs for educators and students (Anthropic, 2026; Morrone, 2026), their impact on educational contexts is only growing.
This rapid adoption of GenAI technologies is accompanied by mixed reactions. With recent studies claiming that using ChatGPT significantly reduces brain activity (Kosmyna et al., 2025), many are concerned about the long-term effects of reliance on AI for knowledge construction. However, many scholars argue that AI integration is not simply about adding new technologies but rather calls for a fundamental reimagining of educational systems. As Almufarreh and Arshad (2023) assert, AI has the potential to transform learning environments to be more personalized, efficient, and responsive, provided it is used critically and intentionally (Stewart & Rodgers, 2025). This perspective echoes Mijan et al. (2025), who highlight how AI, when situated within inclusive pedagogical design, can better reflect the diverse needs of today’s learners. I therefore adopt a critical AI media literacy perspective (Stewart & Rodgers, 2025) to frame this study. From this perspective, AI use must be understood as socially situated and value-laden rather than neutral or purely technical. Examining how AI is taken up in higher education foregrounds questions of power, agency, and access, emphasizing the need for intentional, critically informed engagement that centers human judgment and equity in academic contexts.
As a scholar who has been closely involved in studying GenAI and supporting others in its pedagogical adoption, I have increasingly found colleagues approaching me with a mix of excitement and unease with questions ranging from practical to existential. These conversations led me to consider how others outside formal research channels are navigating the shifting academic landscape.
To explore these tensions more broadly, I turned to two of the largest higher education subreddits on Reddit (https://www.reddit.com), which collectively have more than 250,000 members. Analyzing recent posts and discussions, I examine how academics are publicly negotiating the role of AI in their professional and intellectual lives. These informal spaces offer insight into the anxieties, hopes, and imaginative possibilities around AI in higher education. I conclude by offering a reflection on what a more intentional and humanized future for education might look like in light of GenAI’s growing presence. This study contributes to emerging scholarship on AI in higher education by examining how academics collectively construct, contest, and negotiate AI through informal professional discourse, extending critical AI media literacy beyond classroom contexts into communities of practice.
As the academy currently faces mounting structural pressures, we must also understand how, for better or worse, AI is molding the future of educational research and education in general and what role we wish to play in that restructuring.

1.1. Current Discussions of AI in Academia

To situate the Reddit discussion of AI in academia, it is important to understand the current trends of research and news reports. AI is rapidly reshaping the educational landscape, yet patterns of adoption in higher education remain uneven and, at times, contradictory. A 2023 survey conducted by Turnitin, LLC, involving more than 1600 students and 1000 faculty, found that students most commonly use GenAI for academic purposes such as summarizing texts, clarifying complex concepts, supporting writing tasks, and responding to homework questions. Faculty, by contrast, reported using AI primarily to test prompts and outputs, to explicitly teach students how to engage with GenAI tools, and to design instructional activities or assignments that account for AI use (Shaw et al., 2023). These findings suggest that while students are already integrating AI into their academic workflows, faculty engagement remains more cautious and exploratory than embedded.
This pattern is echoed in Marshik et al.’s (2025) study of approximately 1000 students and 500 U.S.-based instructors, which found that instructional use of AI among faculty remains limited. Only 24% of instructors reported using AI sometimes or often, while more than half indicated that they never use it for instructional purposes. When AI was incorporated, it was most commonly used for creating assignments or assessments and for teaching students how to use or critically evaluate AI tools. Notably, 90% of instructors reported that they rarely or never encourage students to use AI in their courses. Student use of AI was similarly constrained, with most reporting infrequent academic use, primarily for information searching and editing writing. Although approximately half of students noted the presence of AI-related course policies, far fewer reported explicit instructional conversations about AI, and most indicated that AI had not yet been meaningfully integrated into coursework.
While higher education remains cautious, AI-driven educational models are advancing more rapidly outside of traditional university settings. For example, Alpha School has adopted a model in which students spend approximately two hours per day working with AI tutors, while the remainder of the day is facilitated by “guides” who focus on mentoring, emotional support, and life skills rather than lesson planning, lecturing, or grading (Miller, 2025). Advocates argue that AI can manage routine instruction efficiently, allowing educators to focus on relational and higher-order cognitive work. This model raises important questions about how instructional labor, expertise, and authority may be redefined across educational contexts.
Within higher education, AI’s role in writing instruction has emerged as a particular point of tension. Some scholars question whether traditional academic essays will remain viable, while others view AI as an opportunity to shift emphasis toward the writing process rather than the final product (H. Hsu, 2025). A growing body of research suggests that AI can support writing development through personalized feedback and virtual coaching. Merino-Campos (2025) identified broad consensus that AI technologies can optimize learning by providing tailored content and feedback, and several studies have demonstrated that AI-generated feedback can improve writing outcomes for college students, in some cases outperforming traditional, human-only feedback models (Colclasure et al., 2025; Shum et al., 2017; Weber et al., 2025).
Despite these affordances, significant disagreement persists regarding what constitutes ethical AI use, particularly between students and instructors (Barrett & Pack, 2023; Marshik et al., 2025). Marshik et al. (2025) found that 86% of instructors viewed using AI to answer exam or assignment questions as definitively unethical, compared to only 55% of students. Conversely, students were more likely than instructors to consider AI use for information searching or self-testing to be cheating. In their study of the writing process, Barrett and Pack (2023) found that students and teachers generally agreed that AI was more acceptable for early stages of the writing process (e.g., brainstorming and research), but there were deviations in ethical uses for writing and evaluation, especially without disclosure. These discrepancies underscore a lack of shared understanding and highlight the need for clearer, more consistent institutional conversations about AI, learning, and academic integrity. There is still no strong consensus over whether using AI is digital literacy or cheating, and in which contexts.
This literature positions AI in higher education as neither wholly disruptive nor inherently beneficial. Instead, it reveals a field in transition, marked by uneven adoption, unresolved ethical tensions, and competing visions of what teaching and learning should become. These tensions point to the necessity of critical, intentional frameworks that move beyond questions of efficiency or cheating to address power, equity, and the evolving purposes of education itself.

1.2. Student Perceptions of AI Use in Higher Education

Understanding how students perceive and engage with AI is imperative, as attitudes toward AI strongly shape adoption, engagement, and learning outcomes. Empirical research consistently demonstrates that student acceptance and perceived benefits of AI are key predictors of willingness to integrate AI into learning routines (Oc et al., 2024; Sova et al., 2024). Conversely, skepticism regarding AI’s role in education (often rooted in concerns about privacy, cybersecurity, ethical implications, and perceived risks) can significantly hinder meaningful use (Runcan et al., 2026). Within two months of its release in 2022, approximately one-fifth to over one-third of students reported using ChatGPT, with the majority perceiving their use as constituting cheating (Katsamakas et al., 2024).
Trust has emerged as a particularly influential factor in student AI adoption. Research indicates that trust directly predicts students’ willingness to use AI and mediates relationships between expectancy beliefs and acceptance outcomes (Oc et al., 2024). Performance expectancy and perceived ease of use tend to increase trust, while concerns related to data security and risk may undermine it (Runcan et al., 2026). Recent studies further emphasize the complexity of student AI acceptance, pointing to ethical tensions, algorithmic transparency, and students’ readiness to navigate hybrid AI-human learning environments (Gudoniene et al., 2025; S. L. Hsu et al., 2025; Rad & Roman, 2025).
Students in Marshik et al.’s (2025) study expressed cautious optimism, noting AI’s potential to enhance critical thinking, deepen understanding, and provide assistive support such as information gathering and data analysis. Some students reported that AI felt more accessible and less judgmental than instructors, making it easier to ask questions or seek clarification (Morrone, 2024). Similarly, Seo et al. (2021) found that university students valued AI’s capacity for personalization and anonymity, which enabled them to ask more questions and engage more freely without fear of evaluation.

1.3. Instructor Perceptions of AI Use in Higher Education

The literature reflects cautious optimism among instructors regarding AI’s role in higher education. In their study of university professors across disciplines, Seo et al. (2021) found that instructors expressed enthusiasm about AI’s ability to provide just-in-time support for routine tasks, while simultaneously expressing concern that such support could diminish opportunities for personal exploration, discovery, and deep learning. Instructors were optimistic about AI’s potential to help anticipate student needs, yet wary of overreliance on AI for interpreting students’ social cues or learning behaviors.
Marshik et al. (2025) similarly found that instructors viewed AI as a tool with dual potential. Many believed that AI could enhance learning if used thoughtfully, particularly by supporting writing processes and fostering critical thinking. At the same time, instructors expressed concern that irresponsible or uncritical use could undermine learning and academic integrity. These findings suggest that instructors are not rejecting AI outright, but rather grappling with how to integrate it responsibly within pedagogical and ethical boundaries.

1.4. Implications for Future Practice

Broader labor trends further complicate these perceptions. According to the World Economic Forum (2025), employers increasingly value practices such as analytic thinking, resilience, leadership, and creative problem-solving. If one of academia’s central roles is to prepare students for future employment and to practice key abilities beyond understanding and memorization, these findings suggest that higher education still has an opportunity to position itself as a space for engendering learning and growth that AI cannot replicate.
A recent Ipsos (2026) poll for Google of over 21,000 adults across 21 countries found that approximately two-thirds of respondents reported using an AI tool or application within the previous twelve months. This represents a substantial increase in adoption, rising 18% since 2024 and 28% since 2023. Perceptions of AI’s impact on work remain divided, with respondents evenly split between those who believe AI will create new jobs and ultimately benefit workers and those who anticipate job loss and harm to the workforce. Despite this uncertainty, a majority of respondents expressed a preference for advancing scientific innovation through AI rather than prioritizing regulatory protections for potentially impacted industries. Additionally, respondents reported broad excitement about AI applications across domains, including personal tutoring, cybersecurity, medical diagnostics, support for small businesses, and pharmaceutical discovery and development.
While AI offers powerful efficiencies, Renkema and Tursunbayeva (2024) caution that AI can limit informal and incidental learning, such as the spontaneous, exploratory moments that often lead to deeper knowledge (see also Evers & van der Heijden, 2016; Kulkarni et al., 2024). Research also suggests that heavy reliance on AI may contribute to deskilling, as learners have fewer opportunities to engage in deliberate practice or develop expertise holistically (Ardichvili, 2022; Faraj et al., 2018; Kosmyna et al., 2025). This kind of cognitive offloading and ability atrophy suggests that unchecked reliance on AI may diminish opportunities for deliberate practice and deep engagement. Therefore, students and educators alike must continue to develop and practice core abilities before leaning on AI as a shortcut. When engaged critically and purposefully, AI can then become a tool that expands rather than replaces human learning, freeing learners to pursue new competencies and deeper thinking (Baskara, 2025; Stewart & Rodgers, 2025).

1.5. Critical AI Media Literacies

Critical digital literacies are socioculturally situated practices that require learners to interrogate how digital texts, tools, and systems are designed, produced, and circulated within relations of power (Vasquez et al., 2019). Grounded in Freirean critical pedagogy (Freire, 1970), they move beyond functional or technical competence to cultivate critical consciousness, enabling learners to examine whose values are encoded in digital media, how bias is produced and normalized, and how meaning is shaped by both digital and physical contexts (Hammer, 2011; Kellner & Share, 2007).
Critical digital AI media literacies therefore require learners to critically engage with AI systems as non-neutral media, examining how algorithms are constructed, how bias is layered and looped through data and design, and how AI both reflects and reinforces dominant ideologies. This literacy positions users not as passive consumers of AI output, but as empowered meaning-makers who can question, revise, and reimagine AI in ways that disrupt inequity and support participatory, democratic futures (Stewart & Rodgers, 2025).
Conceptualizing critical digital literacies in the time of AI is essential to understanding how AI differs from other media. Whereas other media are created by individuals and engaged with by individuals, AI is created by many and engaged with by many, learning and looping information and bias at breakneck speeds (Douglas, 2017; Stewart & Rodgers, 2025). However, GenAI is also influenced by the user, as prompt engineering affects both the outputs users receive and the potential learning patterns of the larger AI model. Therefore, critically approaching our understanding of AI is crucial in questioning how it can iteratively shape the contexts in which it is taken up.
Creating environments to engage digital media and AI meaningfully, however, requires an ideological shift, breaking from the traditional top-down power structures long-established in education and to which both students and educators are accustomed (Hammer, 2011; Stewart et al., 2021). Such a shift creates the conditions for questioning and potentially reshaping the traditional power structures that organize schooling. However, this work requires time, reflexivity, and a willingness to sit with discomfort. Given the institutional constraints and pressures educators face, many teachers may struggle to enact this kind of sustained critical engagement in practice (Share et al., 2019; Stewart et al., 2021).
Understanding how AI affects instructors, students, and other members of higher education through the lens of critical AI media literacies opens spaces to interrogate the institutional and political ideologies that uphold or break down existing systems of power that determine how and to what degree AI is used in learning spaces. Understanding questions of power, agency, and access around technologies, especially transformative technologies like AI, can help to shed light on the current malleable policies and ethics surrounding it. Therefore, in this study, critical AI media literacy is operationalized through attention to: (a) epistemic authority, (b) governance and surveillance, (c) labor redistribution, (d) access and inequity, and (e) authorship and meaning-making in the discussion of AI use across Reddit threads.

2. Materials and Methods

Grounded in Lave and Wenger’s (1991) and Wenger’s (1998) conceptualization of communities of practice (CoP), this study examines how professors’ discussions of AI on Reddit reflect the three core features of a CoP: mutual engagement, joint enterprise, and the development of a shared repertoire. Reddit is an open, publicly accessible platform that reports more than 430 million active monthly users and hosts highly specialized communities, known as subreddits (or subs), where users engage in sustained discussions around shared interests and concerns, or what is referred to as “a mutual engagement in an endeavor” in a CoP (Eckert & McConnell-Ginet, 1992, p. 464). I use these discussions as a lens to highlight the broader conversations that are happening in and across academic spaces as AI increasingly and iteratively shapes our pedagogical and administrative practices.
Increasingly, scholars have analyzed Reddit as a site for examining cultural phenomena and community sense-making, particularly in moments of uncertainty or collective need. For example, to understand the major issues today’s parents face, The Pew Research Center turned to Reddit, noting that parents and users often turn to these communities in times of need or distress to discuss important topics (Lieb et al., 2025; see also Altındağ & Balcıoğlu, 2026; Curto-Sánchez et al., 2026). Through an analysis of posts from the two largest academic subreddits, I examine how faculty publicly negotiate the challenges and possibilities of AI in higher education, highlighting both prevailing concerns and potential opportunities for reframing AI’s role in teaching and learning.
To analyze the data, I employed a qualitative content analysis approach (Gheyle & Jacobs, 2017), which supports the systematic interpretation of textual data through iterative cycles of coding and theme development. This approach enabled me to generate credible and replicable inferences that attend not only to the explicit content of the Reddit threads (or individual posts), but also to the broader sociocultural contexts in which these discussions are situated, including the institutional and university settings they reflect (Krippendorff, 2004).
I conducted a thematic content analysis of user-generated discussions about AI across the two largest Reddit communities dedicated to higher education (see Table 2) guided by the following research questions:
  • How do professors in higher education communities of practice on Reddit publicly discuss the role of AI in teaching, writing, and institutional life?
  • What do these discussions reveal about shifting power relations, surveillance practices, and epistemic authority in contemporary universities and colleges?
To identify relevant data, in January 2026, I searched within these subreddits for the 10 top threads using the keyword “AI” in the Reddit search feature (n = 20 threads, 2617 comments) (see Table 3 and Table 4 for threads). Reddit allows users to sort threads by categories, including “Top,” “Hot,” “Controversial,” “Best,” and “New,” which are all calculated using specific algorithms. “Top” provides the most popular discussions, often with the most comments, and is calculated by the number of upvotes a thread gets minus its downvotes (Reddit.com/answers, 2026). I excluded any threads that were not relevant to the topic (e.g., written by students, not academia-related) or that were over 2 years old, removing four threads. Reddit does not post the date of a thread, but rather the length that it has been live (e.g., 3 h, 3 years). These top threads were then imported into Excel and cleaned. Because many were lengthy (over 300 comments), I began by summarizing each thread as a whole. This initial stage involved identifying the major thoughts and feelings of the users across all 20 threads and examining how those themes reflected the larger consciousness of how AI is changing the landscape of academia and what it means to be a professor or student in this changing time.
Next, I grouped the posts within each thread into stanzas around singular ideas (Gee, 2014) and began coding each stanza, attending to the various arguments and themes within and across the posts. In doing so, I was also careful to retain the original structure of the posts, preserving the additional data (e.g., upvote tally, reply thread, username, length posted). I examined the more nuanced patterns within each theme by using iterative thematic and In Vivo coding based on the topic (e.g., “AI can’t do semantic learning”) (Saldaña, 2016). I then imported the data into Atlas.ti, condensed these themes, and compared them across threads (e.g., “AI positive” and “Embrace AI” became “Instructors Embrace AI”). This process allowed me to examine the discursive patterns in these digital public forums and to consider how AI is being framed, challenged, or normalized within higher education discourse. Grounded in a critical AI media literacy perspective, this approach highlights the contextual and contested nature of AI-related perspectives in online spaces.
From a critical ethical stance, researchers must remain reflexive about their proximity to the communities they study and how that positioning shapes methodological choices (Haywood, 2022). In this case, as a member of the community as both a professor and Reddit user, I critically reflected on the posts from an epistemic position where my understanding informed my analysis. Ethical considerations may be complicated in digital spaces where spatial, social, and temporal boundaries are fluid and distinctions between public and private are increasingly unstable (Haywood, 2022). As experiences of privacy shift, individuals create semi-public selves for both known and unknown audiences, blurring traditional boundaries between private and public identity (Giaxoglou, 2017). Due to Reddit’s privacy policy, highly searchable nature, and its use in popular media (e.g., Buzzfeed articles, TV shows, Pew Research studies), users tend to adapt how they present themselves in such a public forum (Giaxoglou, 2017), purposefully and carefully omitting identifying details such as location, age, and even gender. I therefore chose to include selective quotes from users but not thread titles or usernames, so as not to directly identify specific users; I did not obtain written consent. I also did not comment on any of the posts so as not to interact with users or direct the conversation.

3. Results

Across these communities of practice, users engaged in sustained and often impassioned discussions about AI, debating institutional policies, student use, their own professional practices, writing with AI, and the future of higher education amid rapid technological change. While these findings are not exhaustive, they offer a snapshot of how some academics are collectively grappling with one of the most consequential issues currently facing the academy. Importantly, these conversations reveal not only practical concerns, but deeper tensions around responsibility, expertise, ethics, and the purposes of higher education itself. The threads do not just provide opinions about technology but also insight into sites where power is negotiated and where institutional authority, academic labor, authorship, and surveillance are discussed and debated in real time.

3.1. Students’ Current Use of AI

The appeal of AI is undeniable for many students, as it can offer quick, easy ways to complete assignments. Students can input/engineer a prompt, perhaps even some of their own ideas, and receive a relatively comprehensive essay or response that would have otherwise taken them hours to write (Bailey & Warner, 2025; Barrett & Pack, 2023). As a result, one of the overwhelming concerns over AI use is student cheating, surfacing issues of surveillance, critical engagement, and the power structures inherent to education. For example, situating cheating within broader structural and societal pressures rather than framing it solely as individual moral failure, some professors sympathize with the students. One user wrote:
The problem is that students cheat for a bunch of reasons we cannot even come close to controlling. Students are embedded in a society that rewards people who cheat and are constantly told that college is a path to a high paying job and nothing more. I cannot change societal pressures or structures.
Another questioned the deeper causes of this behavior:
Maybe we should be asking why cheating is so prevalent. Why do so many students default to cheating when a tool that makes it easy to cheat is readily available? Where has our education system (or society in general, maybe parenting?) failed to where so many students don’t care about engaging ethically in their education?
These concerns point to larger top-down power structures that promote grade giving and receiving based on a normative, Western education system rather than critical, humanizing pedagogy that considers the contextual factors around the student (Kahl, 2013; Kincheloe et al., 2011; Mehta & Aguilera, 2020).
Despite these critical reflections, most discussions centered practical strategies for mitigating cheating. Some professors described turning to oral exams, only to note that tools such as Cluely can operate seamlessly and undetected during video calls, feeding answers in real time. Others described returning to more traditional forms of assessment, such as blue books and proctored exams. Another suggested embedding false or misleading information in prompts to misdirect AI outputs, although many acknowledged that students would quickly learn to identify and bypass these strategies. In a related discussion, some users noted students’ use of image overlays on PDFs to evade AI detectors, leading professors to require Word documents instead. These strategies reflect a broader shift toward surveillance-based governance in higher education, in which faculty are positioned as monitors of technological compliance rather than facilitators of inquiry. From a critical AI media literacy perspective, this dynamic reveals how AI reshapes institutional power relations, transforming assessment into a site of technological policing rather than epistemic engagement.
Several users suggested drastic measures such as eliminating writing assignments altogether. One wrote, “Yep next term is no papers, only quizzes and exams for my humanities classes. I wish it weren’t so but it is what it is.” Despite this concern, many users discussed focusing on the writing process in general, using Google Docs with version histories or requiring multiple drafts or reflective journals throughout the writing process (e.g., “…shifting assessment toward process and reflection rather than just the final text. Requiring students to submit drafts, write brief reflections on what they learned, or explain how they used any tools gives you far more context...”). In addition to similar measures, another user noted that they also use VisibleAI, “which tracks how a document is written over time, including edits and AI assistance. Seeing the writing process changed how we handled integrity cases. It gave us evidence instead of suspicion.”
One professor, in a thread proclaiming their reinvigorated love for teaching after figuring out how to remove AI, described requiring students to leave phones and laptops at the door, using handwritten in-class assignments, and providing a rolling library of books and magazines. Harkening back to the days before technology was prevalent in classrooms, others wondered how scalable this would be in larger courses where it would be challenging to collect phones/laptops. The original poster even ironically mused “You could paste my post into ChatGPT (ironically) and ask it how to scale some of these activities reasonably to a class of 200.” Though the comment acknowledged the irony of the statement, it still reflects a larger reliance on the very technology that they were enthusiastically shunning.
While these strategies illustrate a frequently expressed fatigue with surveillance-based approaches to cheating, they also point to a larger issue of how higher education has organized learning around grading, risk, and degrees (Kahl, 2013). As one user wrote, “my contract is to teach them the content… their job is to learn… the older I get, the less I want to play this darn game.” Consistent with Noorbehbahani et al.’s (2022) systematic review, many of these discussions implicitly pointed toward structural changes, ethical framing, formative assessment, and shared responsibility rather than detection alone. Viewed through a critical AI media literacy lens, these conversations suggest a growing recognition that integrity in AI-mediated contexts depends less on restriction and more on cultivating students’ ability to critically engage with AI as a non-neutral, value-laden media form. Ultimately, however, most faculty emphasized the importance of holding all students to the same high standards of academic integrity that uphold the normative system of education, even as the means of doing so remain contested.

3.2. Students “Cannot Function” Without AI

Beyond cheating, many users expressed concern about what they perceived as students’ growing dependence on AI and the erosion of foundational literacy practices. In a thread lamenting students’ inability to read, faculty discussed difficulties engaging students with texts of even modest length. One user described a steady progression from teaching novels, to novellas, to short excerpts of five to ten pages, and eventually to “2–3 pages,” noting that even this reduction had little effect. Although AI was not identified as the sole cause, users frequently cited it alongside social media use, constant phone access, pandemic-related disruptions, “lazy parenting,” and broader educational policy failures. Several users framed it as a form of dependency or addiction. One wrote:
But in my survey classes, students are already addicted to it, and just can. not. do. their. own. work. It was 30%, now it is much higher. (70% 80%? Freshmen college students today simply cannot look at a blank screen and start to write. They can’t do it.)
These concerns echo Seo et al.’s (2021) findings regarding diminished opportunities for exploration and critical engagement. Students who cannot function without AI may also struggle to critically examine bias, inaccuracies, and embedded ideologies within AI-generated outputs (Bailey & Warner, 2025; Stewart & Rodgers, 2025). Such dependency also risks positioning students as passive consumers of algorithmic output rather than critical interpreters and creators of knowledge, thus reducing their agency.
However, while these concerns are genuine, it is important to consider why this overreliance is occurring, and for whom, in light of structural and cognitive shifts in digital literacy practices. We must also ask how and why educational systems have, or have not, adapted to rapidly evolving digital practices and epistemologies.

3.3. AI Policies

Consistent with broader confusion across K-12 and higher education (Ghimire & Edwards, 2024; Morrone, 2025; Stracke et al., 2025), users expressed a strong desire for clear AI policies, coupled with frustration over their absence or inconsistency. Many noted that chairs or deans suggested certain responses (e.g., assigning zeros or using AI detectors) despite widespread agreement across many threads that detection tools are unreliable. An adjunct faculty member, in particular, described feeling unsupported, asking, “Can you share your strategy on detecting AI? I’m an adjunct and my university hasn’t given us any guidance.” Policies function as institutional power structures that can protect or distribute blame. When a university fails to provide one, especially for more vulnerable faculty such as adjuncts, responsibility falls on faculty and students while administrators are insulated from risk.
A recurring theme involved the framing of students as “customers,” and some users argued that institutional pressures to retain students constrained meaningful enforcement. One wrote:
Why don’t courses just remove all scores related to writing on marking criteria? AI is happening and people will use it and there’s nothing a teacher can do because Ed institutes are based on customer service. If you keep failing students, the university will fail their customers and deans cannot have that. It’s time to move on and accept AI.
Framing students as customers positions AI governance as a matter of institutional protection rather than educational facilitation. AI policy therefore functions less as pedagogical support and more as risk mitigation. Critical AI media literacy invites interrogation of whose interests these policies serve and how responsibility is divided among the institution, faculty, and students.
At the same time, some institutions adopted more transparent, critically informed approaches. Users described policies emphasizing disclosure, citation, and responsibility: “…we emphasize transparency and collaboration around AI usage. Instead of outright bans, we encourage students to engage with AI tools critically, integrating them into their learning while adhering to academic integrity standards...”
One user explained their complicated feelings about their university’s policy that AI use must be cited, noting that they were “livid” at citing AI as a source but understood that AI is a tool that is going to be used. Another user noted that their university allows open AI use with citations and responsible use: Students are “taught how to use it properly and responsible [sic]. Students are required to name every used AI and how it was used and take full responsibility if anything is wrong—you are responsible to check results.” In these cases, working with students rather than against them appeared to foster shared accountability and alignment with real-world practices. Policies that emphasized disclosure, citation, and student responsibility aligned more closely with a critical AI media literacy orientation, positioning AI as a non-neutral media system that students must actively interrogate rather than conceal. In this framing, policy functioned not as prohibition but as pedagogy, inviting students to reflect on authorship, bias, and accountability within AI-mediated work.
While some scholars call for consistent AI policies across institutions (Katsamakas et al., 2024) or even across countries (Stracke et al., 2025), these discussions suggest that a one-size-fits-all approach may be neither feasible nor pedagogically appropriate given disciplinary variability, a tension that invites further critical examination.

3.4. Professors’ Current Use of AI

Many users had mixed emotions about their own use of AI. Some framed it as the demise of academia, while others described it as invaluable for offloading administrative labor (e.g., “It’s been very helpful for my administrative responsibilities, summarizing slides, helping me with transition-slide scripts, creating assessments and rubrics. It’s amazingly helpful.”). Many others agreed that it is a tool that can save them considerable time on tasks like data analysis and coding. Others argued that if institutions discourage AI use, they must also reduce administrative burden. As one user put it: “If granting agencies, university administrators, etc. would prefer original ideas and that people eschew AI, they need to be pushing for more admin support and less busy work for faculty (and increasingly for grad students and postdocs).”
Many users agreed that AI has a place in tasks that do not require deep critical engagement, such as summarization, coding assistance, or data analysis, reflecting a boundary between intellectual labor and institutional workload. These discussions suggest that faculty are not unilaterally rejecting AI, but instead actively negotiating where human expertise, judgment, and responsibility must remain central.
The continued fostering of literacy practices is not simply about maintaining traditional academic abilities, but about sustaining the critical capacities necessary for interrogating knowledge and systems. Writing is a site of meaning-making, and reading is a site of interpretation through which power, perspective, and ideology can be examined (Freire, 1970). When these practices are uncritically outsourced to AI, students and instructors alike risk losing opportunities to question, revise, and reshape the ideas as well as bias embedded within algorithmic outputs. From a critical AI media literacy perspective, sustained engagement with literacy is essential in developing agency rather than passively consuming AI-generated media.

3.5. The Current and Future State of Writing

When reflecting on their own scholarly writing, most professors expressed caution and restraint in their use of generative AI. Many described using AI for surface-level support, such as editing or structural feedback, while explicitly rejecting its use for idea generation or substantive intellectual work. One user explained: “I write everything myself, the ideas, the research, the arguments, the voice. Then I use AI to suggest tighter phrasing, catch errors, spot repetition, and improve flow. It’s a tool, like spell-check or a thesaurus, just more sophisticated.” This positioning aligns with recommendations in the extant and emerging literature that frame LLMs as tools to support, rather than replace, scholarly thinking (Katsamakas et al., 2024; Susarla et al., 2023). At the same time, users noted persistent uncertainty about where ethical boundaries around AI-assisted writing should be drawn. In a sustained debate across threads, some participants argued that AI should not be used at any stage of manuscript production, while others suggested that generating text from outlines or data may be ethically comparable to long-standing academic practices involving ghostwriters or graduate student support.
Questions of ethical use became especially salient in discussions of manuscript review. When encountering work perceived to be written largely by AI, users disagreed about whether, and at what point, such use constituted misconduct. One user proposed a threshold grounded in critical oversight: “When it is not a supplement that is critically reviewed like all other writing. If you have generative AI create something and toss it in wholesale without checking for accuracy, it is misconduct.” These concerns implicitly reference well-documented limitations of AI, which include hallucinations, inaccuracies, and bias (Katsamakas et al., 2024; Stewart & Rodgers, 2025). In contrast, other users emphasized AI’s potential to function as a cognitive scaffold when used deliberately. As one participant noted, “as long as you are the one doing the critical thinking, chatgpt is like a sidekick who you can bounce ideas with and discuss things that you usually had to wait months to find an expert to talk to and tell you it was feasible or not. so i do not see why everybody complains about AI.” Susarla et al. (2023) similarly argue that LLMs can productively support ideation and feasibility testing when users retain epistemic control.
For some participants, AI’s perceived acceptability varied by genre. One professor suggested that writing tasks such as grant applications did not require the same level of originality or authorial voice (e.g., “Grant text is generic. It’s usually something like a hundred pages of generic text written over and over with maybe one-three actual, core, novel ideas at the center (which could usually be written concisely in about two sentences…)”).
Despite these pragmatic stances, many users expressed dissatisfaction with AI-generated prose, describing it as generic, voice-less, or misaligned with disciplinary norms. Others reported adjusting their writing to avoid being flagged by AI detectors (e.g., “Seriously, the way most detectors punish clarity and structure is just backwards. Now I’m always tweaking things so my writing doesn’t sound too... human? Wild times.”). More fundamentally, several participants emphasized that writing itself constitutes an essential site of thinking and knowledge production.
Similarly, other users expressed that drafting is also part of the data analysis process for them, as they carefully weigh the data even while cutting. This analysis may be lost or made more difficult when editing what AI produced. One user wrote: “…coming up with ideas, hashing things out—is not just about the end result. It’s also about the process…that’s what’s vital and what AI skips over.” Without the writing process, the user would not be able to make sense of the data. These assertions echo the findings of Reza et al. (2025), who found that writers’ preferred degrees of AI involvement differ across stages of the writing process. Content-oriented writers, such as academics, tend to prioritize authorial ownership during ideation and planning, whereas form-oriented writers, including creatives, place greater emphasis on maintaining control during translation and revision. These preferences are further shaped by writers’ contextual goals, disciplinary values, and underlying conceptions of originality, authorship, and intellectual ownership. These accounts position writing not as a product, but as an epistemic practice through which knowledge is created and refined, a distinction that critical AI media literacy highlights when examining what should remain primarily with humans.
Across threads, some users explicitly distinguished between expert and novice writers. One professor stated: “Critiquing, enhancing, and augmenting options presented by AI (as you suggest) is certainly a starting point. I take a slightly different view on professionals or experts using AI as a tool, compared with students, who are by definition attempting to build new competencies, using it as a crutch.” This distinction positions faculty as expert meaning-makers who can critically evaluate and regulate AI output, while casting student use as potentially developmentally inappropriate. It reveals how epistemic authority is divided within higher education, as faculty are positioned as capable of regulating AI critically, while students are thought of as developmentally vulnerable to misuse. This kind of boundary-policing of expertise reinforces hierarchical models of epistemological authority despite AI offering equal access that can disrupt traditional notions of institutional power (Stewart & Rodgers, 2025).
Concerns were raised about AI’s encroachment on scholarly practices, including suspicions that peer review is now algorithmically generated. Some users were concerned about the impact that AI is having, and will continue to have, on the quality of published manuscripts as well as on access to open data sets. In a thread about journals outright rejecting manuscripts using public data sets, one journal editor reported having “seen a massive increase in trash articles… where it is blatantly a copy/paste job with hundreds of similar articles, and it has wasted a huge amount of my time.” Participants worried that such abuses could lead to restricted access to open datasets and erode trust in scholarly communication. In an already siloed field of unequally distributed resources, limiting access to valuable data sets would perpetuate uneven power through access and capital.
In another thread, a professor expressed concern about their own overreliance on AI, prompting responses that foregrounded both emotional and intellectual loss: “I really profoundly feel that AI is severing harder than ever our social bonds, and moments of sharing with others that lead to intellectual sparing, insight and joy of working and writing on the topics we love.” Others suggested strategies to resist overdependence, including blocking AI tools, writing by hand, seeking human feedback, or using AI to generate questions rather than answers. Across these discussions, many shared a sense of skepticism. While AI was widely acknowledged as useful, it was also described as frequently wrong and fundamentally untrustworthy without sustained critical engagement.
Debates over AI-assisted writing expose not merely questions of technique, but deeper anxieties about authorship, epistemic authority, and the boundaries of scholarly legitimacy. Critical AI media literacy frames these tensions as struggles over who is authorized to access and produce knowledge, and under what conditions.

3.6. The Future of Professors and the Academy

Professors expressed deeply mixed perspectives regarding the role of AI in higher education. Some viewed it as inevitable and potentially transformative, while others were skeptical, resigned, or openly despairing. One user wrote bluntly, “Academia is long long dead. I am so sorry.” Concerns about declining academic standards were recurrent. As one participant lamented:
…we have tried a million times to explain why this is bad. It’s pretty much impossible to NOT graduate from my school. Credit recovery, unlimited re-takes and do-overs, no deadlines, 50 even if you didn’t turn in the assignment… Unless we start valuing real education as a society, we are, as the kids say, cooked.
In a similar vein, others linked these anxieties to grading policies and institutional pressures. Drawing on K-12 experiences where teachers could not assign grades below 50%, one user warned that accepting underprepared students into higher education further undermines academic rigor.
At the same time, some users argued that AI should be embraced as a core component of contemporary digital literacy (see Baskara, 2025). One stated that it was clear that “AI is coming and the only way to be a competitive candidate in the future will be to know how to use it.” From this perspective, attempts to design “AI-proof” assessments were seen as misdirected. Others went further, framing AI as an essential force multiplier that institutions must accept:
The words “unethically using AI” need to be banned, first and foremost. AI is not going away, and its use needs to be encouraged. The sooner educators face that reality, the sooner they can help themselves and their students, whose futures will be enhanced to the degree that they can use and leverage AI as a force multiplier…How do our educational goals, methods, and assessments need to change?
Although a minority dismissed AI as a passing trend, most users agreed that its current forms demand intentional, critical use. Several suggested that an AI-saturated environment could renew the value of liberal arts education and critical thinking. As one contributor noted, “AI will not take your job, people knowing how to use AI will take your job. Therefore we teach how to use AI responsibly and correctly.” Frameworks such as the Critical AI Media Literacy Framework (Stewart & Rodgers, 2025) offer ways to operationalize this stance by foregrounding bias, power, authorship, and agency rather than efficiency alone.
Users also questioned whether higher education should remain the default pathway to employment (e.g., “K-12 needs to be beefed up a bit, and college—while it can be open to all—should not be as required as it is.” And “Many jobs that were once for people that maybe couldn’t or didn’t want to go to school require college degrees now. The worst part being, they actually don’t need a degree to be successful…”). Several argued that college credential inflation has intensified social and financial pressures on students, contributing to disengagement and cheating. Others raised existential concerns about institutional survival amid AI adoption, demographic decline, funding cuts, and shifting labor markets. As Katsamakas et al. (2024) suggest, institutions must adapt rapidly and purposefully to remain viable.
Finally, some users noted an irony in the academic labor market. As one wrote: “…we have academics who can’t get employment being employed by LLM companies training AI to write…It’s poetic, they are rejected from the institution and now use their skills to bring about a fundamental crisis in the institutions that rejected them.” The irony noted by participants, that unemployed academics now train LLMs, underscores the recursive entanglement of academic labor, AI development, and expertise. Critical AI media literacy situates this not as poetic coincidence but as evidence of how epistemic labor and authority are being reallocated across institutional and corporate boundaries.
Taken together, these discussions reveal not only anxiety about AI’s role in higher education, but also deeper questions about expertise, value, and the future purposes of the academy itself. Critically, these tensions underscore the urgency of moving beyond reactive or instrumental responses toward frameworks that center human judgment, equity, and meaning-making in an increasingly automated academic landscape.

3.7. Limitations

Though I explored a small subset of academics discussing AI and its impact on their lives, this study is not an exhaustive or representative account of how all professors feel about AI. As Ferreira et al. (2025) note, “The Internet is a globalised and instantaneous means in which space and time collapse, identity becomes more playful, and ethics become more tenuous; incorporating these aspects is crucial to studying online communities” (p. 110). The users discussed here may not have represented themselves fully truthfully, playing with their identities, and they were often replying to threads that were days, weeks, or months old, which, when discussing technology, can be ages. Furthermore, the data represent those who are present on Reddit, which is a particular culture in itself, as all platforms are (Buck, 2012).

4. Discussion & Implications

These Reddit discussions point to larger questions about what higher education is for, whom it serves, and who holds epistemic authority within it. Although concerns about academic dishonesty dominated many of the threads analyzed in this study, the findings suggest that cheating itself is not the most consequential issue raised by GenAI. Ethical debates surrounding digital technologies are not new (e.g., Haywood, 2022; New London Group, 1996; Spilioti & Tagg, 2017), and they have long centered on questions of access and equity. What feels distinct in this moment is how AI surveillance and detection regimes risk reinscribing existing hierarchies. As participants frequently noted, AI detection tools are notoriously unreliable, particularly for multilingual writers and students whose linguistic patterns fall outside dominant norms. Under such conditions, integrity enforcement may disproportionately target already marginalized students. The more urgent question, then, is not simply whether students are cheating, but who is most likely to be suspected, flagged, or penalized and at what cost.
At the same time, academic dishonesty cannot be dismissed as peripheral. Rather, it reflects deeper tensions around authorship, responsibility, and meaning-making in AI-mediated environments. As many users observed, cheating is not new; what has changed is the ease and scale with which it can occur. This shift invites instructors to reconsider not only enforcement strategies but the design of assignments themselves. While some faculty suggested reverting to traditional formats, less attention was given to multimodal approaches that complicate straightforward AI substitution. Assignments that require students to engage across modes (incorporating voice, image, reflection, or embodied meaning-making) (Stewart, 2023) may create more generative opportunities for learning while also disrupting formulaic AI reproduction.
These concerns ultimately return us to assessment and authority. From a critical pedagogical perspective, assessment is not neutral; it operates within institutional mandates and dominant epistemologies that define learning in objective and prescriptive terms. Traditional grading structures can reinforce what Freire (1970) described as the banking model of education, privileging compliance and reproduction over inquiry and critical engagement (Kahl, 2013; Mehta & Aguilera, 2020). If AI exposes the fragility of assessment systems built primarily around product-based evaluation, it also presents an opportunity to rethink how learning is recognized and valued.
The role of the teacher, therefore, becomes central. When educators shift from positioning themselves as sole arbiters of knowledge to facilitators of inquiry and problem-posing, authority is not relinquished but transformed (Gill & Stewart, 2024; Kincheloe et al., 2011; Stewart et al., 2021). In AI-mediated environments where information is rapidly generated and circulated, this transformation is especially urgent. Students must develop the capacity to interrogate sources, question algorithmic bias, and assert interpretive agency (Stewart & Rodgers, 2025). In this way, teacher authority becomes the condition for cultivating students’ critical consciousness and epistemic responsibility, rather than merely enforcing compliance.
A practical question is what knowledge remains essential for students to learn in a time when information production and retrieval are increasingly automated. As digital literacy practices continue to change, we must ask whether students still need to learn content that can be easily accessed through AI and to continue with processes that can be streamlined or eventually offloaded to these technologies. We must then continue to reflect on the structure and purpose of university curricula and systems. If certain forms of content are now readily accessible through AI, institutions must reconsider whether traditional distribution requirements remain aligned with contemporary educational goals. In an era when higher education comes at considerable cost, more targeted, discipline-specific pathways may better serve students academically, professionally, and financially. While some institutions may worry that fewer course requirements will lead to reduced enrollment or credit hours, such pathways may actually create more accessible and streamlined degree programs that open doors for those who have historically been excluded from higher education. As entry-level roles in coding, research, and paralegal work begin to disappear (VandeHei & Allen, 2025), more learners will need viable and responsive pathways into evolving industries. Colleges and universities must remain responsive and adapt to labor markets being iteratively reshaped by technologies at an accelerated rate. As with the rapid transformations brought about by COVID, AI presents institutions with a choice regarding how deliberately and equitably they adapt.
For faculty, AI may enable a return to core academic pursuits by reducing the burden of repetitive administrative tasks. Many participants described using AI in exactly this way, allowing greater engagement with teaching, research, and mentorship. Tasks such as summarizing large bodies of literature, transforming manuscripts into alternative formats, or organizing data can be completed more efficiently with careful oversight, freeing time for deeper intellectual engagement and more meaningful student interaction. However, from a critical AI media literacy perspective, these shifts signal a deeper reallocation of labor. While repetitive institutional tasks are automated, academic expertise is increasingly extracted, modeled, and monetized by corporate AI systems, often trained on the very intellectual labor produced within universities. The irony noted by one participant, that unemployed academics now train LLMs to replace them, underscores how scholarly labor is being shifted across institutional and corporate boundaries. AI does not simply assist academic work; it participates in reorganizing who performs it, who benefits from it, and who controls the infrastructures through which knowledge circulates. Attending to these labor dynamics is essential if universities are to engage AI in ways that protect intellectual autonomy rather than inadvertently accelerate the erosion of academic authority and working conditions.
Furthermore, as researchers, we must also create novel research and writing that AI has not yet seen and therefore cannot reproduce. However, the ethical ambiguity around faculty writing highlighted the power that accompanies authority within the structure of institutionalized higher education. Though faculty widely lamented student cheating with AI, very few felt that their own use constituted overreliance or was unethical, likely due, implicitly or explicitly, to their epistemological authority and positions of power. As we continue to reflect on our own practices with AI, we must consider how labor, expertise, and epistemological authority drive our developing governance and ethics around AI use in varying contexts.

5. Conclusions

Ultimately, this study suggests that AI in higher education cannot be reduced to questions of cheating, ethics, efficiency, or inevitability. It is a structural force that reshapes authorship, authority, labor, and the conditions under which knowledge is created and afforded legitimacy. Under a critical AI media literacy framework, we do not ask whether AI should be adopted or rejected but rather how its integration redistributes power, whose expertise/voice is amplified or marginalized, and how educational systems might resist reproducing inequity under the guise of innovation. Literacy, inquiry, and disciplinary struggle remain essential not as nostalgic traditions, but as places where critical consciousness and epistemic agency are cultivated. If higher education is to remain a space for intellectual risk-taking, democratic participation, and the creation of new knowledge, we must engage AI deliberately, transparently, and with sustained attention to power. The future of the academy will not be determined by whether AI is present but by how critically and equitably it is governed.

Funding

This research received no external funding.

Institutional Review Board Statement

Although Reddit posts are publicly available and anonymized through usernames, the study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of St. John’s University (IRB-FY2026-29) on 22 July 2025.

Informed Consent Statement

Patient consent was waived due to openly available public data and in accordance with institutional IRB permissions. Per the Reddit Privacy page and user agreement: “Much of the information on the Services is public and accessible to everyone, even without an account. When you submit content (for example, a post, comment, or chat message) to a public part of the Services, any visitors to and users of our Services will be able to see that content, the username associated with that content, and the date and time you originally submitted that content. That content and information may also be available in search results on internet search engines like Google or in responses provided by an AI chatbot like OpenAI’s ChatGPT. You should take the public nature of the Services into consideration before posting. By using the Services, you are directing us to share this information publicly and freely.” (https://www.reddit.com/policies/privacy-policy, 2026; accessed 17 February 2026).

Data Availability Statement

The original data presented in the study are openly available at https://www.reddit.com.

Acknowledgments

In the preparation of this manuscript, AI (ChatGPT 5.2) was used for targeted editing and outlining the abstract. After using these tools/services, the author reviewed and edited the content as needed and takes full responsibility for the content of the published article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Almufarreh, A., & Arshad, M. (2023). Promising emerging technologies for teaching and learning: Recent developments and future challenges. Sustainability, 15(8), 6917. [Google Scholar] [CrossRef]
  2. Altındağ, E., & Balcıoğlu, Y. S. (2026). Gaming disorder in the digital age: A mixed-methods analysis of online communities and patterns of problematic gaming behavior. Journal of Technology in Behavioral Science, 1–24. [Google Scholar] [CrossRef]
  3. Anthropic. (2026, January 20). Anthropic and teach for all launch global AI training initiative for educators. Anthropic.com. Available online: https://www.anthropic.com/news/anthropic-teach-for-all?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top (accessed on 25 January 2026).
  4. Ardichvili, A. (2022). The impact of artificial intelligence on expertise development: Implications for HRD. Advances in Developing Human Resources, 24(2), 78–98. [Google Scholar] [CrossRef]
  5. Bailey, J., & Warner, J. (2025). AI tutors: Hype or hope for education? Education Next, 25(1), 62. [Google Scholar]
  6. Barrett, A., & Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(1), 59. [Google Scholar] [CrossRef]
  7. Baskara, F. R. (2025). Conceptualizing digital literacy for the AI era: A framework for preparing students in an AI-driven world. Data and Metadata, 4, 530. [Google Scholar] [CrossRef]
  8. Buck, A. (2012). Examining digital literacy practices on social network sites. Research in the Teaching of English, 47(1), 9–38. [Google Scholar] [CrossRef]
  9. Colclasure, B. C., Ruth, T. K., Beasley, V., & Granberry, T. (2025). Examining student perceptions of AI-driven learning: User experience and instructor credibility in higher education. Trends in Higher Education, 4(4), 59. [Google Scholar] [CrossRef]
  10. Curto-Sánchez, E., Salazar-Palacios, G., Martín-Varillas, A., Prieto-Maíllo, E. C., Ramos-González, J., Dávila-González, I., Palacios-Ceña, D., & Cuenca-Zaldivar, J. N. (2026). What patients with asthma share when no one listens: Multimethod observational study of patient narratives on Reddit. Journal of Medical Internet Research, 28, e77027. [Google Scholar] [CrossRef]
  11. Douglas, L. (2017, February 2). AI is not just learning our biases; it is amplifying them. Medium. Available online: https://medium.com/@laurahelendouglas/ai-Is-Not-just-learning-Our-biases-It-Is-amplifying-Them-4d0dee75931d (accessed on 10 January 2026).
  12. Eckert, P., & McConnell-Ginet, S. (1992). Think practically and look locally: Language and gender as community-based practice. Annual Review of Anthropology, 21, 461–490. [Google Scholar] [CrossRef]
  13. Evers, A., & van der Heijden, B. I. J. M. (2016). Competence and professional expertise. In M. Mulder (Ed.), Competence-based vocational and professional education: Bridging the worlds of work and education [Technical and vocational education and training: Issues, concerns and prospects (TVET)] (Vol. 23, pp. 83–101). Springer. [Google Scholar] [CrossRef]
  14. Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70. [Google Scholar] [CrossRef]
  15. Ferreira, J. J., Fernandes, C., Veiga, P. M., & Rammal, H. G. (2025). Ethics and the dark side of online communities: Mapping the field and a research agenda. Information Systems and E-Business Management, 23(1), 99–123. [Google Scholar] [CrossRef]
  16. Freire, P. (1970). Pedagogy of the oppressed. Continuum. [Google Scholar]
  17. Gee, J. P. (2014). An introduction to discourse analysis: Theory and method. Routledge. [Google Scholar]
  18. Gheyle, N., & Jacobs, T. (2017). Content analysis: A short overview. Internal Research Note, 1–18. [Google Scholar] [CrossRef]
  19. Ghimire, A., & Edwards, J. (2024). From guidelines to governance: A study of AI policies in education. arXiv, arXiv:2403.15601. [Google Scholar] [CrossRef]
  20. Giaxoglou, K. (2017). Reflections on internet research ethics from language-focused research on web-based mourning: Revisiting the private/public distinction as a language ideology of differentiation. Applied Linguistics Review, 8(2–3), 229–250. [Google Scholar] [CrossRef]
  21. Gill, A., & Stewart, O. G. (2024). The instructional implications of a critical media literacy framework and podcasts in a high school classroom. Journal of Adolescent & Adult Literacy, 68(3), 291–304. [Google Scholar] [CrossRef]
  22. Gudoniene, D., Staneviciene, E., Huet, I., Dickel, J., Dieng, D., Degroote, J., Rocio, V., Butkiene, R., & Casanova, D. (2025). Hybrid teaching and learning in higher education: A systematic literature review. Sustainability, 17(2), 756. [Google Scholar] [CrossRef]
  23. Hammer, R. (2011). Critical media literacy as engaged pedagogy. E-Learning and Digital Media, 8(4), 357–363. [Google Scholar] [CrossRef]
  24. Haywood, C. (2022). Developing a Black feminist research ethic: A methodological approach to research in digital spaces. In V. Del Hierro, & C. VanKooten (Eds.), Methods and methodologies for research in digital writing and rhetoric: Centering positionality in computers and writing scholarship (Vol. 2, Chapter 10, pp. 29–44). The WAC Clearinghouse; University Press of Colorado. [Google Scholar] [CrossRef]
  25. Hsu, H. (2025, June 30). What happens after A.I. destroys college writing? The New Yorker. Available online: https://www.newyorker.com/magazine/2025/07/07/the-end-of-the-english-paper (accessed on 18 December 2025).
  26. Hsu, S. L., Shah, R. S., Senthil, P., Ashktorab, Z., Dugan, C., Geyer, W., & Yang, D. (2025). Helping the helper: Supporting peer counselors via AI-empowered practice and feedback. Proceedings of the ACM on Human-Computer Interaction, 9(2), 1–45. [Google Scholar] [CrossRef]
  27. Ipsos. (2026, January 15). Google/Ipsos multi-country AI survey 2026. Ipsos. Available online: https://www.ipsos.com/en-us/google-ipsos-multi-country-ai-survey-2026#:~:text=The%20third%20annual%20survey%20among,ultimately%20hurt%20workers%20(50%25) (accessed on 20 January 2026).
  28. Kahl, D. H. (2013). Critical communication pedagogy and assessment: Reconciling two seemingly incongruous ideas. International Journal of Communication, 7, 2610–2630. [Google Scholar]
  29. Katsamakas, E., Pavlov, O. V., & Saklad, R. (2024). Artificial intelligence and the transformation of higher education institutions: A systems approach. Sustainability, 16(14), 6118. [Google Scholar] [CrossRef]
  30. Kellner, D., & Share, J. (2007). Critical media literacy: Crucial policy choices for a twenty-first-century democracy. Policy Futures in Education, 5(1), 59–69. [Google Scholar] [CrossRef]
  31. Kincheloe, J. L., McLaren, P., & Steinberg, S. R. (2011). Critical pedagogy and qualitative research: Moving to the bricolage. In N. K. Denzin, & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 163–178). Sage. [Google Scholar]
  32. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv, arXiv:2506.08872. [Google Scholar] [CrossRef]
  33. Krippendorff, K. (2004). Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3), 411–433. [Google Scholar] [CrossRef]
  34. Kulkarni, M., Mantere, S., Vaara, E., van den Broek, E., Pachidi, S., Glaser, V. L., Gehman, J., Petriglieri, G., Lindebaum, D., Cameron, L. D., Rahman, H. A., Islam, G., & Greenwood, M. (2024). The future of research in an artificial intelligence-driven world. Journal of Management Inquiry, 33(3), 207–229. [Google Scholar] [CrossRef]
  35. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press. [Google Scholar]
  36. Lieb, A., Chapekis, A., Shah, S., & Smith, A. (2025, November 12). How parents use online communities. Pew Research Center. Available online: https://www.pewresearch.org/data-labs/2025/11/12/how-parents-use-online-communities/ (accessed on 4 January 2026).
  37. Marshik, T., McCracken, C., Kopp, B., & O’Marrah, M. (2025). Student and instructor perceptions and uses of artificial intelligence in higher education. Teaching of Psychology, 52(3), 339–346. [Google Scholar] [CrossRef]
  38. Mehta, R., & Aguilera, E. (2020). A critical approach to humanizing pedagogies in online teaching and learning. The International Journal of Information and Learning Technology, 37(3), 109–120. [Google Scholar] [CrossRef]
  39. Merino-Campos, C. (2025). The impact of artificial intelligence on personalized learning in higher education: A systematic review. Trends in Higher Education, 4(2), 17. [Google Scholar] [CrossRef]
  40. Mijan, A.-A., Hasan, M. R., & Hasan, M. (2025). AI and academia: Navigating the adoption of artificial intelligence in universities. International Journal of Technology in Education and Science, 9(1), 54–65. [Google Scholar] [CrossRef]
  41. Miller, J. R. (2025, May 14). What happens when teachers are replaced with AI? This school is finding out. Newsweek. Available online: https://www.newsweek.com/alpha-school-brownsville-ai-expanding-2063669 (accessed on 15 December 2025).
  42. Morrone, M. (2024, October 29). AI tutors are already changing higher ed. Axios. Available online: https://www.axios.com/2024/10/29/ai-tutors-college-students-efficiency (accessed on 20 November 2025).
  43. Morrone, M. (2025, August 29). Confusing school AI policies leave families guessing. Axios. Available online: https://www.axios.com/2025/08/29/school-ai-policies-chatgpt (accessed on 30 August 2025).
  44. Morrone, M. (2026, January 15). Microsoft offers free AI tools to teachers and students. Axios. Available online: https://www.axios.com/2026/01/15/microsoft-free-ai-training-educators-students?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top (accessed on 20 January 2026).
  45. Murphy, T., McGill, T., Davidge, T., Montero Acevedo, N., Chkarboul, C., Stocker, C., & Phillips, A. (2025, March 12). California college professors have mixed views on AI in the classroom. EdSource. Available online: https://edsource.org/2025/california-college-professors-have-mixed-views-on-ai-in-the-classroom/728192 (accessed on 8 January 2026).
  46. New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92. [Google Scholar] [CrossRef]
  47. Nolan, B. (2025, July 8). ‘It’s just bots talking to bots’: AI is running rampant on college campuses as students and professors alike lean on the tech. Fortune. Available online: https://fortune.com/2025/07/08/ai-higher-education-college-professors-students-chatgpt/ (accessed on 20 November 2025).
  48. Noorbehbahani, F., Mohammadi, A., & Aminazadeh, M. (2022). A systematic review of research on cheating in online exams from 2010 to 2021. Education and Information Technologies, 27(6), 8413–8460. [Google Scholar] [CrossRef]
  49. Oc, Y., Gonsalves, C., & Quamina, L. (2024). Generative AI in higher education assessments: Examining risk and tech-savviness on student’s adoption. Journal of Marketing Education, 47(2), 138–155. [Google Scholar] [CrossRef]
  50. Rad, D., & Roman, A. (2025). Mapping the motivational architecture of AI literacy: A network analysis of teachers’ multi-dimensional work motivation. Journal of Psychological and Educational Research, 33(2), 203–222. [Google Scholar]
  51. Reddit.com/answers. (2026). Hot vs top ranking on Reddit. Reddit. Available online: https://www.reddit.com/answers/80923537-5c92-42fd-8e02-97d756d9303f/?q=Hot%20vs%20top%20ranking%20on%20Reddit&source=PDP (accessed on 20 November 2025).
  52. Renkema, M., & Tursunbayeva, A. (2024). The future of work of academics in the age of artificial intelligence: State-of-the-art and a research roadmap. Futures, 163, 103453. [Google Scholar] [CrossRef]
  53. Reza, M., Thomas-Mitchell, J., Dushniku, P., Laundry, N., Williams, J. J., & Kuzminykh, A. (2025). Co-writing with AI, on human terms: Aligning research with user demands across the writing process. Proceedings of the ACM on Human-Computer Interaction, 9(7), 1–37. [Google Scholar] [CrossRef]
  54. Runcan, R., Runcan, P. L., Rad, D., & Marina, L. (2026). Exploring students’ attitudes toward the integration of artificial intelligence in education. Societies, 16(1), 21. [Google Scholar] [CrossRef]
  55. Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). Sage. [Google Scholar]
  56. Seo, K., Tang, J., Roll, I., Fels, S., & Yoon, D. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. International Journal of Educational Technology in Higher Education, 18(1), 54. [Google Scholar] [CrossRef]
  57. Share, J., Mamikonyan, T., & Lopez, E. (2019). Critical media literacy in teacher education, theory, and practice. In Oxford research encyclopedia of education. Oxford University Press. [Google Scholar] [CrossRef]
  58. Shaw, C., Yuan, L., Brennan, D., Martin, S., Janson, N., Fox, K., & Bryant, G. (2023). GenAI in higher education: Fall 2023 update of time for class study. Tyton Partners. Available online: https://tytonpartners.com/time-for-class-2023/GenAI-Update (accessed on 20 November 2025).
  59. Shum, S. B., Sándor, Á., Goldsmith, R., Bass, R., & McWilliams, M. (2017). Towards reflective writing analytics: Rationale, methodology and preliminary results. Journal of Learning Analytics, 4(1), 58–84. [Google Scholar] [CrossRef]
  60. Sova, R., Tudor, C., Tartavulea, C., & Dieaconescu, R. I. (2024). Artificial intelligence tool adoption in higher education: A structural equation modeling approach to understanding impact factors among economics students. Electronics, 13(18), 3632. [Google Scholar] [CrossRef]
  61. Spilioti, T., & Tagg, C. (2017). The ethics of online research methods in applied linguistics: Challenges, opportunities, and directions in ethical decision-making. Applied Linguistics Review, 8(2–3), 163–167. [Google Scholar] [CrossRef]
  62. Stewart, O. G. (2023). Using digital media in the classroom as writing platforms for multimodal authoring, publishing, and reflecting. Computers and Composition, 67, 1–21. [Google Scholar] [CrossRef]
  63. Stewart, O. G., Hsieh, B., Smith, A., & Pandya, J. Z. (2021). What more can we do? A scalar approach to examining critical digital literacies in teacher education. Pedagogies: An International Journal, 16(2), 125–137. [Google Scholar] [CrossRef]
  64. Stewart, O. G., & Rodgers, D. J. (2025). Critical media literacy and AI: Understanding layered bias and empowerment in artificial intelligence in education. Learning, Media and Technology, 1–13. [Google Scholar] [CrossRef]
  65. Stracke, M. C., Griffiths, D., Pappa, D., Bećirović, S., Polz, E., Perla, L., Di Grassi, A., Massaro, S., Prifti Skenduli, M., Burgos, D., Punzo, V., Amram, D., Ziouvelou, X., Katsamori, D., Gabriel, S., Nahar, N., Schleiss, J., & Hollins, P. (2025). Analysis of artificial intelligence policies for higher education in Europe. International Journal of Interactive Multimedia and Artificial Intelligence, 9(2), 124–137. [Google Scholar] [CrossRef]
  66. Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. (2023). The Janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Information Systems Research, 34(2), 399–408. [Google Scholar] [CrossRef]
  67. VandeHei, J., & Allen, M. (2025, May 28). Behind the Curtain: A white-collar bloodbath. Axios. Available online: https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic (accessed on 8 January 2026).
  68. Vasquez, V. M., Janks, H., & Comber, B. (2019). Critical literacy as a way of being and doing. Language Arts, 96(5), 300–311. [Google Scholar] [CrossRef]
  69. Weber, F., Wambsganss, T., & Söllner, M. (2025). Enhancing legal writing skills: The impact of formative feedback in a hybrid intelligence learning environment. British Journal of Educational Technology, 56(2), 650–677. [Google Scholar] [CrossRef]
  70. Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker, 9(5), 2–3. [Google Scholar] [CrossRef]
  71. World Economic Forum. (2025, January). Future of jobs report 2025. Available online: https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf (accessed on 10 January 2026).
Table 1. Examples of Types of Professional Usage of AI in Higher Education.

| Students | Faculty | Administrators | Staff |
|---|---|---|---|
| Editing | Editing | Editing | Editing |
| Summarizing | Summarizing | Summarizing | Summarizing |
| Writing Support | Writing Support | Writing Support | Writing Support |
| Information Sourcing | Information Sourcing | Information Sourcing | Information Sourcing |
| Translating/Language Support | Translating/Language Support | Translating/Language Support | Translating/Language Support |
| Transforming Writing into Different Modes/Forms | Transforming Writing into Different Modes/Forms | Transforming Writing into Different Modes/Forms | Transforming Writing into Different Modes/Forms |
| Academic Writing | Academic Writing | Policy Drafting | Institutional Communication |
| Image Creation/Multimodal Composition | Figure and Visual Preparation | Workflow Automation | Workflow Automation |
| Data Analysis Support | Data Analysis Support | Enrollment and Trend Analysis | Student Success Monitoring |
| Metacognitive Scaffolding | Assessment Design and Alignment | Strategic Planning Scenarios | Advising Pathway Mapping |
| Study Strategy Planning | Rubric Development | Accreditation Documentation | Onboarding and Orientation Support |
| Self-Assessment and Practice | Multimodal Assignment Design | Decision-Making Simulations | Training and Professional Development Materials |
| Accessibility Supports (e.g., captions, simplified language) | Accessibility and UDL Adaptations | Equity Impact Forecasting | Accessibility and Accommodation Support |
| Career Exploration and Portfolio Curation | Literature Review Scoping | Collaborator Engagement Planning | Knowledge Base/Help Desk Support |
| Simulation-Based Learning (case studies, role-play) | Classroom Scenario Simulation | Crisis Communication Drafting | Documentation Translation |
Table 2. Subreddits.

| Subreddit | Description | Total Users | Reddit Ranking |
|---|---|---|---|
| r/academia | An online community for discussing issues related to academia, faculty life, research, and institutional structures. This is NOT the place to ask questions about your homework, your particular school or professors, or to get admission advice! Survey posts must be approved by mods in advance, must include contact/IRB info, and must be specific to academia. | 90k | Top 2% |
| r/Professors | This sub is for discussions amongst college & university faculty. Whether you are an adjunct, a lecturer, a grad TA or tenured stream if you teach students at the college level, this space is for you! While we welcome students and non-academics lurking and learning, posts and comments are not allowed. If you’re new here, please familiarize yourself with the sub rules and follow them. If you’re ever unsure, feel free to reach out to the moderators for clarification. | 160k | Top 2% |
Table 3. Description of Threads from r/Academia as of January 2026.

| General Topic | Age as of January 2026 | Number of Comments (n = 658) | Number of Up Votes |
|---|---|---|---|
| Academia Should Not Embrace AI | 2 Months | 87 | 507 |
| Poor Examples of AI Use in Journal Articles | 2 Years | 39 | 278 |
| Why Universities Shouldn’t Use AI Detectors | 2 Months | 80 | 267 |
| Tired of AI in Research | 2 Years | 45 | 255 |
| Wants to Stop Relying on AI | 4 Months | 88 | 233 |
| AI Integrity Violations | 2 Months | 81 | 230 |
| Reviewing a Paper Written by AI | 2 Years | 46 | 184 |
| Journals are Rejecting Papermills with AI Papers | 28 Days | 27 | 174 |
| Humanities are Dead | 8 Months | 100 | 171 |
| Frustrated with Students’ AI-Use | 6 Months | 65 | 144 |
Table 4. Description of Threads from r/Professors as of January 2026.

| General Topic | Age as of January 2026 | Number of Comments (n = 1959) | Number of Up Votes |
|---|---|---|---|
| Accidentally AI Proofing | 5 Months | 142 | 1.8k |
| Taking Small Victories with AI | 1 Year | 37 | 1.5k |
| Not Caring Anymore | 10 Days | 296 | 1.5k |
| Not Here to Police Students | 1 Month | 282 | 1.2k |
| Not Everyone Needs to Go to College | 4 Months | 303 | 1.1k |
| Feeling Jaded by the Sub because of AI Discussions | 8 Months | 201 | 1.1k |
| Students Can’t Read | 7 Months | 335 | 1.1k |
| What AI Policies Do You Like? | 22 Days | 37 | 1k |
| A New AI Cheating Tactic | 1 Year | 135 | 969 |
| Instructor is Enjoying Teaching Again without AI | 1 Month | 191 | 959 |
Note. General topics are given for each thread rather than the exact title to make them less identifiable.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Stewart, O.G. A Critical AI Media Literacy Perspective on the Future of Higher Education with Artificial Intelligence Through Communities of Practice on Reddit. AI Educ. 2026, 2, 5. https://doi.org/10.3390/aieduc2010005

