Article

Distributed Teaching Agency–AI in the University: A Typology Based on Student Voice

by Tomás Fontaines-Ruiz 1,2,*, Antonio Ponce-Rojo 3, Paolo Fabre Merchán 2, Walther Casimiro Urcos 4 and Liliana Cánquiz Rincón 5
1 Accounting and Auditing Program, Faculty of Business Sciences, Universidad Técnica de Machala, Machala CP 070205, Ecuador
2 Faculty of Research, Universidad Estatal de Milagro, Milagro CP 091708, Ecuador
3 Centro Universitario de Los Altos, Universidad de Guadalajara, Guadalajara CP 44160, Mexico
4 Universidad Nacional de Educación, Lurigancho-Chosica, Lima CP 15472, Peru
5 Department of Humanities, Universidad de La Costa, Barranquilla CP 50366, Colombia
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2026, 10(4), 34; https://doi.org/10.3390/mti10040034
Submission received: 6 January 2026 / Revised: 9 March 2026 / Accepted: 19 March 2026 / Published: 27 March 2026

Abstract

Generative AI is reshaping university teaching and creating tension around authority, evidence, and accountability when decisions are made using algorithms. From a student perspective, this study constructed a typology of distributed teacher–AI agency (TAA) and examined the discursive mechanisms that produce the illusion of teacher autonomy. A non-experimental, cross-sectional, explanatory study was conducted: a lexicometric analysis, following the ALCESTE methodology (implemented in IRaMuTeQ), of open-ended questionnaire responses from 3120 students (Mexico, n = 2051; Ecuador, n = 1069), segmented into 1077 units and interpreted using positioning theory. Co-agency was operationalized using Teacher Agency (A), Delegation to AI (D), Governance (G: disclosure, criteria, verification), and the Illusion Index (II = A/(D + G + 1)). Three configurations emerged: the Immediate Personalizer (28.8%), with very high A and minimal D/G (II = 25.4); the Technological Literacy Teacher (27.3%), with visible delegation and safeguards (II ≈ 2.0); and the Operational Optimizer (43.9%), oriented toward accelerating tasks with moderate governance (II ≈ 2.7). The illusion was associated with the agentive erasure of AI and a rhetoric of immediacy/efficiency that replaced verifiable criteria. These findings transform the student voice into a criteria-based diagnostic tool for strengthening traceability, minimal verification, and responsible orchestration of AI in higher education.

1. Introduction

Artificial intelligence (AI) is reshaping educational ecosystems. It fosters the personalization of teaching and learning processes, especially in inclusive contexts [1,2], and promotes teaching practices that renew the educational experience [3,4]. By using AI, students have more pathways to understand what they are studying, thanks to algorithmic feedback. Faculty members can adjust the curriculum and teaching methods based on the learning patterns revealed by AI, and the system is optimized with the data generated by the interaction. In this intra-activity [5], teaching is understood as a framework formed by students, teachers, curricula, and algorithms [6].
Problems emerge when this framework is not explicitly established. In university teaching mediated by AI, the core challenge is not the presence of the algorithm itself, but rather the pedagogical approach that the teacher constructs around it and, above all, the degree to which this approach is traceable and justifiable for the students. The risk arises when AI functions are incorporated without making the criteria, limits, and verification procedures that govern their use visible, which can create the illusion of a greater conceptual understanding than actually exists. The problem is compounded if AI’s echo chambers and informational biases, including hallucinations and user-pleasing (sycophantic) patterns, are not recognized [7,8], weakening accountability and epistemic confidence in learning processes.
When this occurs, educational risks emerge that compromise the relevance and certainty of what students learn. The traceability of the knowledge discussed becomes obscured, and human control over the validity and interpretation of content is weakened. Furthermore, the criteria by which the teacher decides what to delegate to AI, as well as the procedures for recognizing the quality and reliability of the information incorporated into the class, are rendered invisible, resulting in a lack of demonstration of the epistemic value of the ideas shared in the classroom [9,10]. In this vacuum, students’ trust in the mediation, evaluation, and fairness of the learning process tends to shift toward substitute signals that can transform the teacher’s discursive eloquence into a criterion for determining truth.
Faced with this reality, the literature warns of persistent risks: ethical and curricular tensions, cognitive resistance, biases, opacity, and dilemmas between personalization, privacy, and algorithmic fairness [11,12,13,14]. However, the way in which transparent and auditable pedagogical mediation is configured—one that makes human control over decisions, evidence, and the verification process in the classroom visible—is often under-examined. In this sense, the problem is both practical and political, as it forces us to consider what is delegated to the algorithm, what is pedagogically governed, and what is left to operate as a black box.
Based on the above, this research addresses teacher identity through the discursive traces left by the teacher’s approach to working with artificial intelligence, as perceived by students during classroom practice. Without AI, epistemic decisions and operations tend to be concentrated in the teacher (selecting sources, producing explanations, validating, and correcting); when AI is integrated, these operations are reorganized into a phenomenon called distributed agency or teacher–AI co-agency. Within this framework, AI does not replace the teacher’s epistemic responsibility; its role focuses on producing and transforming proposals (drafts, examples, alternatives, reformulations, explanations, and explorations), whereas the teacher governs, defines, demands evidence, verifies, justifies, and evaluates the formative process, making auditable what is accepted, what is discarded, and why.
However, at the intersection of transparency, control, and explainability, a theater of governance can emerge, in which the teacher claims to govern AI while effective dependence grows, and explanations are offered without real control over the data, models, or objectives. In this scenario, teacher authority becomes symbolic because practical agency is displaced to the algorithm, evidenced in expressions such as “that’s how the model works!” [15].
Recent typologies have already described teachers’ positions regarding sociotechnical technologies; for example, the proxy agent (AI supports and the teacher decides) and the scaffolder (critical and socio-emotional scaffolding to discriminate quality). These typologies warn that excessive automation and immediate feedback can transform AI into an authority, weakening the sense of human control and promoting passivity in both students and teachers [16,17,18]. Nevertheless, these descriptions are usually based on normative frameworks or interpretations centered on the teacher’s perspective. What remains to be understood is how these configurations become visible and evaluable in daily practice from the perspective of those who experience and judge them (the students).
This absence leaves a blind spot in understanding the phenomenon under study because teacher–AI agency is distributed between teachers’ decisions and the algorithm’s input and manifests itself in didactic interactions. Therefore, students, in addition to expressing their opinions, participate in the construction of teacher identity as it is exercised because they receive, interpret, and, in practice, validate or question the teacher’s pedagogical authority to set criteria, guide learning, and justify decisions when interacting with the algorithm. In their accounts, students distinguish between teaching uses of AI that structure thought without replacing it, others that function as a shortcut, and still others in which the teacher relinquishes epistemic control or takes refuge in “the system said so.” This experiential record is key to differentiating formative co-agency from covert dependence, especially when the vocabularies of efficiency, control, or transparency are persuasive without real governance, defined limits, verification, or visible accountability [15].
Consequently, examining teacher–AI co-agency from the student perspective enables opportunities for improvement because it makes visible which dimensions of mediation are sustained (or weakened) in daily practice, and thus the conditions for preserving agency and epistemic trust [17]. This justifies the use of student narratives to typify distributed teacher–AI agency configurations, allowing for the identification of configurations associated with greater epistemic control, explicit verifiable criteria, greater traceability, and situated literacy, as opposed to others where a more opaque or uncritical delegation predominates. This shift is strategic because it moves the discussion from the ‘ideal teacher’ to the ‘experienced teacher’ and transforms student perception into an operational input for training and governance of AI use, precisely where the risk of the illusion of autonomy becomes pedagogically costly for the institution.
Accordingly, this study aims to describe a typology of distributed teacher–AI agency configurations from the perspective of university students. In particular, this study seeks to characterize the observable practices that comprise these configurations, explain their contribution to the transition from transmission to orchestration of learning, and examine their implications for student agency and epistemic confidence. To this end, the following research questions were posed:
RQ1: What typologies of distributed teacher–AI agency (TAA) emerge from student voices, and how do they reconfigure teaching work from transmission to the orchestration of learning? This question identifies the typology based on discursive traces and practices reported by students and links it to the reconfiguration of teaching work toward orchestration (design, coordination, and regulation of mediations in interaction with the algorithm). Its contribution is to transform “AI use” into a classifiable and comparable empirical object for analysis.
RQ2: What discursive positions underpin each configuration and how do they manifest in observable practices? Through this question, the researchers linked the positions that organize each configuration with observable behaviors and lexical-discursive markers to operationalize the typology and guide teacher training and institutional improvement in the incorporation of AI.
RQ3: How and under what discursive mechanisms do the identified teacher–AI configurations produce an illusion of teacher autonomy that masks algorithmic dependencies and constraints? This question examines the performative effect of apparent autonomy and the discursive resources that sustain it when teacher control is overrepresented and algorithmic mediation is not auditable. Its relevance is epistemic and ethical; without auditability, trust tends to rely on substitute signals (apparent fluency or accuracy) rather than verifiable evidence.
As can be seen, these questions are articulated as an analytical sequence that supports the study’s objective and reinforces its internal consistency. Taken together, these elements allow us to move from identifying configurations of distributed teacher agency (RQ1) to explaining the discursive positions that underpin them and their translation into observable practices (RQ2), and finally, to examine their epistemic and ethical implications when governance becomes opaque and the illusion of teacher autonomy emerges (RQ3). Thus, the constructed typology not only describes the modes of incorporating distributed agency but also allows us to interpret its consequences for the traceability of criteria, accountability, and epistemic trust in university teaching contexts mediated by algorithms.

2. Materials and Methods

This study is explanatory in nature, as it reveals patterns of lexical co-occurrence in the discourse of university students to create a typology of teachers who use artificial intelligence. A non-experimental, cross-sectional design was used because the analyzed corpus was constructed at a single point in time, and no variables were manipulated. A lexicometric procedure was adopted using the ALCESTE methodology (Contextual Lexical Analysis of a Set of Text Segments; [19,20,21,22]) to segment, classify, and organize textual data into stable lexical classes. These classes were then interpreted using positioning theory [23,24,25] as a reference point to explain how students configure the teacher in different roles and positions in relation to artificial intelligence. This combination of statistical analysis of textual data and interpretive analysis provides internal validity to the research, as the typologies studied arise from a reproducible and transparent procedure, and their subsequent analysis lends hermeneutic depth by showing what these typologies mean in terms of the practices and representations of the phenomenon.

2.1. Actors and Contexts

The study included 3120 university students from Mexico and Ecuador, distributed as follows: 2051 students from the University Center for Economic and Administrative Sciences of the University of Guadalajara (Mexico) enrolled in the following programs: administration, financial administration and systems, government administration and public policy, public accounting, economics, gastronomic business management, environmental management and economics, business engineering, marketing, international business, human resources, public relations and communication, information technology, and tourism. In addition, 1069 students from the Faculty of Business Sciences at the Technical University of Machala (Ecuador) were included, specifically from accounting and auditing, digital finance, economics, business administration, and foreign trade programs.

2.2. Data Collection Instrument

A self-administered digital instrument, delivered via Google Forms, consisted of two open-ended questions designed to elicit student accounts of teacher AI use in pedagogical contexts. The items were: (i) “Think of a real professor who has used AI in your course (for example, to explain, provide feedback, assess, or prepare activities) and answer the following: What qualities or skills do you find extraordinary?” and (ii) “Considering the same professor, what do they do to help you use AI better as a student? Describe specific actions (what they ask of you, how they guide you, and how they teach you to verify or apply criteria).” Given this wording, the instrument was designed to elicit teacher–AI mediation configurations that students recognized as pedagogically noteworthy, rather than to capture the full spectrum of student perceptions of teacher AI use. Accordingly, the resulting typology should be interpreted as a typology derived from student discourse under this elicitation frame. The form was administered in Spanish, and the lexicometric analysis was conducted on the original Spanish corpus. Any English renderings of prototypical segments are provided for reporting purposes only and do not affect the analytical procedures. The relevance of the questions to the study objective was evaluated by a panel of eight specialists using Aiken’s V [26], yielding a value of 0.92. The instrument was distributed to active and formally enrolled students at the participating universities through invitations sent to their institutional email addresses. Participation was voluntary, and all participants provided informed consent. The form was available from 16 to 20 June 2025.
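To make the reported content-validity figure concrete, the following is a minimal Python sketch of Aiken’s V; it assumes a five-point relevance scale and uses hypothetical ratings, since the individual judgments of the eight specialists are not reported here.

# Minimal sketch of Aiken's V for one item (hypothetical ratings).
# Assumes a five-point relevance scale (lowest = 1, c = 5); the panel's
# actual ratings are not reported, only the aggregate V = 0.92.

def aikens_v(ratings, lowest=1, categories=5):
    """V = sum(r_i - lowest) / (n * (categories - 1)); ranges from 0 to 1."""
    n = len(ratings)
    s = sum(r - lowest for r in ratings)
    return s / (n * (categories - 1))

# Eight hypothetical judges rating one item's relevance on a 1-5 scale.
example = [5, 5, 4, 5, 5, 4, 5, 5]
print(round(aikens_v(example), 2))  # -> 0.94 for these illustrative ratings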

2.3. Unit of Analysis and Corpus Composition

An intentional corpus comprising open-ended responses was constructed [27,28]. The text was segmented into 1077 elementary context units (ECUs); of these, 1069 (99.26%) were successfully classified, reflecting a high level of internal stability in processing. In total, the corpus included 3450 forms with 38,951 occurrences, from which 2203 lemmas were identified. Of the set of forms, 2092 were considered suitable for inclusion in the classification process, whereas 109 were categorized as supplementary, as they only provided supporting information without influencing class construction. The mean number of forms per segment was 36.16, indicating a lexical density suitable for homogeneous segmentation of the ECUs. These compositional characteristics of the corpus allow for the recognition of dominant semantic fields and their vocabularies [19], which is key to achieving the objectives of the present study.
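The compositional figures above are simple ratios over the reported counts; a minimal consistency check:

# Consistency check on the reported corpus composition (counts from the text).
ecus_total, ecus_classified = 1077, 1069
occurrences = 38_951

print(f"{ecus_classified / ecus_total:.2%}")  # -> 99.26% of ECUs classified
print(f"{occurrences / ecus_total:.2f}")      # -> 36.17 forms per segment
                                              #    (reported as 36.16; rounding)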

2.4. Corpus Analysis Using the ALCESTE Methodology

The core of this methodology is to identify co-occurring terms in different parts of the corpus and then group the segments into homogeneous lexical categories [19,20]. This process was performed in six steps.
(i)
Corpus normalization: conversion of the corpus to lowercase, preservation of accents, removal of non-alphabetic characters, standardization of spaces, use of **** as a separator, and removal of duplicates and empty records.
(ii)
Segmentation or division of the corpus into blocks of 40 words as the minimum unit of contextual reference (one to two sentences). This size is most commonly used because it balances local semantic coherence and statistical power for calculating χ2 in the segment × form matrix, and it is the standard reported in recent implementations and applications of this method [29,30].
(iii)
Lemmatization, which reduces words to their basic form to generate stable classes without losing contextual meaning [31,32]. This normalization reduces morphological dispersion and improves the statistical consistency of hierarchical classification by operating on lemmas instead of inflections [33].
(iv)
Estimation of the lexical co-occurrence matrix to capture local word associations within each context unit to support lexical grouping into homogeneous categories.
(v)
Iterative class decomposition to maximize lexical differences and promote internal cohesion. In this stage, the χ2 value is considered to determine a term’s class membership and maximize interclass differentiation and internal cohesion.
(vi)
Selection of representative lemmas based on χ2 (df = 1) with α = 0.01 (χ2 ≥ 6.63) and strong associations for χ2 ≥ 10.83 (α = 0.001). The magnitude of χ2 reflects the actual strength of the association: classes with a higher concentration of a term reach higher values. A minimum of pedagogically relevant expressions (e.g., “immediate feedback,” “reliable sources”) were preserved to avoid substantial semantic loss during the interpretation process.
These stages were operationalized using IRaMuTeQ software (version 0.8 alpha 7) [34], which was developed in R with Python support [34]. This software allowed for the distinction between active forms (participating in the partitioning) and supplementary forms (projected without guiding the classification) to retain useful contextual markers without biasing class formation (e.g., adverbials or metadiscourses that contribute color but not thematic content). This usage is consistent with the methodological guidelines established by researchers when employing IRaMuTeQ in their research [30,35].
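As an illustration of steps (i), (ii), and (vi), the following Python sketch renders the normalization, 40-word segmentation, and χ2 thresholds described above; it is a schematic reading of the text, not the IRaMuTeQ/ALCESTE implementation.

import re

def normalize(text):
    """Step (i): lowercase, keep accented letters, standardize spaces."""
    text = text.lower()
    text = "".join(c if c.isalpha() or c.isspace() else " " for c in text)
    return re.sub(r"\s+", " ", text).strip()

def segment(tokens, size=40):
    """Step (ii): cut the token stream into 40-word context units (ECUs)."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

# Step (vi): chi-square thresholds for lemma selection (df = 1).
CHI2_ALPHA_01 = 6.63    # alpha = 0.01, minimum for representativeness
CHI2_ALPHA_001 = 10.83  # alpha = 0.001, strong association

def is_representative(chi2, strong=False):
    return chi2 >= (CHI2_ALPHA_001 if strong else CHI2_ALPHA_01)

tokens = normalize("El docente personaliza la retroalimentación inmediata.").split()
print(segment(tokens))                # one short ECU for this toy input
print(is_representative(12.4, True)) # -> True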

2.5. Detection of Teacher–AI Mediation Typologies Derived from Student Discourse

To support typology detection and the analysis of the illusion of autonomy, we defined four operational parameters a priori and then operationalized them on a corpus built from student responses. The parameters (Teacher agency A, Delegation to AI D, Governance G, and the Illusion Index II) were specified deductively from the study’s conceptual framework and the related literature, whereas the empirical instantiation of these parameters relied on discourse evidence (overrepresented lemmas, co-occurrences, and prototypical segments) within each lexical class. Thus, the model is theory-guided in its definitions and data-driven in the selection of representative lexical markers that make these dimensions visible in student discourse.
Teacher agency (A) refers to the extent to which student discourse attributes pedagogical initiative, decision-making, design, regulation, adaptation, and responsibility for the didactic process to the teacher. Delegation to AI (D) captures the extent to which students make explicit that tasks, outputs, or cognitive–pedagogical functions are transferred to AI, including epistemically relevant delegations (e.g., generating explanations, drafting responses, organizing content, proposing solutions, supporting evaluative judgments). Governance (G) captures the visibility of teacher control over AI use, including disclosure, explicit criteria, limits, sources, verification, traceability, and justification, as well as other markers that make AI management publicly inspectable within the learning process. The Illusion Index (II) functions as a relational measure of the risk of performative autonomy, i.e., the discursive appearance of teacher control without commensurate visibility of delegation and governance. This logic is supported by evidence showing that explanations may be persuasive without being epistemically sufficient when they lack verification and explicit limits [9,10], and that human–AI combinations do not guarantee better outcomes and may degrade them when coordination and decision criteria are weakly governed [36]. Accordingly, II estimates the imbalance between teacher-attributed control (A) and the visibility of delegation and governance (D + G) in student discourse [15,37]. In simple terms, II increases when student discourse attributes high control to the teacher while providing limited signals of what was delegated to AI and how that use was verified, limited, or governed.
The corpus was automatically segmented into three lexical classes. In each class, the lexical core was delimited by selecting overrepresented terms (high χ2; p ≤ 0.001) and retaining terms that co-occurred within a ±5-word window to ensure interpretable semantic patterns (e.g., personalize ↔ needs; adapt ↔ content). The meaning of each constellation was anchored in prototypical segments, defined as the highest-scoring fragments (sum of χ2 contributions of marked words), which function as typical cases of the identified pattern.
From these classes, each typology was characterized through three operational axes derived from student discourse: teacher agency (A), delegation to AI (D), and governance (G). Each axis was computed as a lemma-level lexical density per 1000 words using representative overrepresented terms mapped to A/D/G according to previously established operational definitions (see the typology tables for lemma lists and prototypical evidence). This normalization enables comparisons across classes of different sizes. For example, if a class contains 10,000 words and 690 corresponds to axis A, the operation 690/10,000 × 1000 yields a density of 69 per 1000 words. Based on these axes, the illusion index (II) was computed as II = A/(D + G + 1). II is treated as an operational (heuristic) imbalance indicator rather than a psychometric scale; it captures the extent to which teacher-attributed agency outweighs the visible presence of delegation and governance markers in student discourse. The +1 term functions as a smoothing constant to avoid division by zero and to stabilize the indicator when D + G is very low. Accordingly, II increases when A is high while D and G remain weakly visible, and decreases when delegation and governance become more explicit.
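Expressed in code, the density normalization and the index are straightforward; the following minimal sketch uses the worked example above plus illustrative D and G counts (the actual per-class lemma counts are not reproduced here).

# Minimal sketch of the A/D/G densities and the Illusion Index (II).
# Only the formulas follow the text; the D and G counts are illustrative.

def density_per_1000(lemma_count, class_word_count):
    """Lexical density per 1000 words, e.g., 690 / 10,000 * 1000 = 69.0."""
    return lemma_count / class_word_count * 1000

def illusion_index(a, d, g):
    """II = A / (D + G + 1); the +1 smooths near-zero D + G."""
    return a / (d + g + 1)

a = density_per_1000(690, 10_000)  # worked example from the text -> 69.0
d = density_per_1000(30, 10_000)   # illustrative -> 3.0
g = density_per_1000(15, 10_000)   # illustrative -> 1.5
print(round(illusion_index(a, d, g), 1))  # -> 69.0 / 5.5 = 12.5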
In addition, to ensure that the interpretation of II does not change substantively when the details of its computation are modified, we incorporated a low-burden robustness check to assess the indicator’s interpretive consistency. Specifically, we considered the sensitivity of II to reasonable variations in the smoothing constant (+0.5, +1, and +2), and contrasted it with an alternative unsmoothed formulation (II’ = A/(D + G)). The purpose of this check was not to “validate” II as a psychometric scale but to ensure that the substantive reading it provides—the imbalance between teacher-attributed agency (A) and the visibility of delegation/governance (D + G) in student discourse—does not depend on a specific parameterization. On this basis, II was retained as an operational/heuristic indicator suitable for describing the risk of performative autonomy under the elicitation conditions of the corpus. Typology labels were assigned by converging (i) lexical cores and their co-occurrences, (ii) the highest-scoring prototypical segments, and (iii) the A–D–G–II profile so that each type is justified from statistical, textual, and interpretive standpoints.
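As a sketch of this robustness check, the following recomputes II under each smoothing constant and under the unsmoothed variant, using the A/D/G densities reported in Section 3, and inspects whether the ordering of the classes is preserved.

# Sensitivity of II to the smoothing constant, using the A/D/G densities
# reported in the Results section; the question is whether the class
# ordering survives reparameterization, not the exact values.

profiles = {
    "Immediate Personalizer":         (123.7, 2.92, 1.46),
    "Technological Literacy Teacher": (29.3, 7.55, 6.66),
    "Operational Optimizer":          (68.9, 18.9, 6.30),
}

for smoothing in (0.5, 1.0, 2.0):
    ii = {name: a / (d + g + smoothing) for name, (a, d, g) in profiles.items()}
    ranking = sorted(ii, key=ii.get, reverse=True)
    print(smoothing, [(name, round(ii[name], 2)) for name in ranking])

# Unsmoothed variant II' = A / (D + G), for contrast.
print({name: round(a / (d + g), 2) for name, (a, d, g) in profiles.items()})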

2.6. Detection of Teacher Attitudes

To uncover attitudinal traits, verbs and adverbs of action were identified within the lexical core of each class (terms with a high χ2 score) to specify the behavioral components of attitude (personalize, adapt, detect, immediately), evaluative verbs and adjectives to associate them with the cognitive component of attitude (effective, critical, capable), and relational verbs and adjectives of treatment for the socio-emotional component (motivate, accompany, approachable, empathetic). Co-occurrence graphs were constructed over these lemmas using a window of at least five words. The pattern was then validated using prototypical segments (high score = sum of χ2 scores of the marked forms present), from which brief quotations were extracted.
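The windowed co-occurrence extraction and the prototypical-segment scoring can be sketched as follows; the lemmas and χ2 weights are illustrative placeholders, not values from the study.

from collections import Counter

# (a) Co-occurrence pairs within a five-word window; (b) prototypicality
# score of a segment as the sum of chi-square weights of marked lemmas.
# Lemmas and weights below are illustrative, not taken from the corpus.

def cooccurrences(tokens, window=5):
    pairs = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 1 + window]:
            pairs[tuple(sorted((w, v)))] += 1
    return pairs

def segment_score(tokens, chi2_by_lemma):
    return sum(chi2_by_lemma.get(w, 0.0) for w in tokens)

chi2_by_lemma = {"personalizar": 120.0, "adaptar": 95.0, "retroalimentacion": 80.0}
segment = ("personalizar la ensenanza adaptar el contenido "
           "retroalimentacion inmediata").split()

print(cooccurrences(segment).most_common(3))
print(segment_score(segment, chi2_by_lemma))  # -> 295.0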

3. Results

This research identified three types of teachers: (i) The Immediate Personalizer (28.8% ST). This teacher adjusts the lesson in real time by modulating the pace, content, and examples based on observations and providing immediate feedback. This ability operates as a situated agency. The profile is based on co-occurrence constellations, such as personalize ↔ needs, adapt ↔ content, feedback ↔ immediate, time ↔ real, reinforced by a prototypical segment (score ≈ 1314.76) that anchors responsive personalization (see Table 1).
In metric terms, this profile presented A = 123.7/1000, D = 2.92/1000, and G = 1.46/1000, resulting in an Illusion Index (II) of II = A/(D + G + 1) = 25.4. Asymmetry (A ≫ D, G) explains the representation of “controlling everything,” while algorithmic mediation remains invisible (suggested examples, content reorganization). This performative authority is articulated through four convergent attitudes: (1) real-time individual micro-adjustments (curriculum as a flexible device); (2) continuous monitoring with immediate response; (3) dynamic orchestration of content and resources “on the fly”; and (4) an incipient ethical awareness, which advocates the responsible use of technology without consistent explanatory procedures.
In this typology, the teacher is represented as the driving force of the experience [38,39,40,41] because algorithmic mediation is not made visible. Without the disclosure of delegations (D) and safeguards/guarantees (G), ethical tensions emerge that can delegitimize their authority in microcurricular management [42]. These risks are amplified when automation, rapid feedback, and teacher–student–system hierarchies coexist because the system can be interpreted as the bearer of truth, displacing the human locus of control and normalizing passivity or conformity in knowledge management [17]. However, the path to improvement is direct: make delegation visible (“AI did X”) and enhance governance (verification and explicit limits) to move from a performative agency to a justifiable and auditable agency, preserving the pedagogical power of the profile and resolving the ethical Achilles heel revealed by II.
(ii) The Technological Literacy Teacher (27.3% ST; Table 2). In this type, student discourse frames AI as an object of knowledge: teachers are portrayed as teaching how to use AI tools and prompts and as making governance criteria more explicit (e.g., correct/responsible use, plagiarism checking, verification, and source-related cues). Accordingly, the discursive focus shifts from “using AI to teach” to “teaching how to use AI.” The lexicometric evidence supports this interpretation through an overrepresented lexical core (use, teach, tools, technology, AI, prompts, correct, ethical, responsible, plagiarism), recurrent co-occurrence constellations (use ↔ AI; teach ↔ AI; tools ↔ AI; AI ↔ technology; AI ↔ correct; teach ↔ tools), and high-scoring prototypical segments that explicitly mention instruction on prompts, correct use, and responsible practice.
At the level of lexical densities, this class shows A ≈ 29.3/1000, D ≈ 7.55/1000, and G ≈ 6.66/1000, yielding a low Illusion Index (II = A/(D + G + 1) ≈ 2.0). This comparatively balanced profile indicates that student discourse more often makes delegations explicit (e.g., “AI does X”) and foregrounds governance markers (criteria, verification, limits, disclosure, traceability), thereby reducing the discursive gap between teacher-attributed agency and the visible mediation of AI. Compared with the Immediate Personalizer profile, this typology is less centered on the pedagogical customization of instructional delivery and more centered on the guidance and regulation of AI use. The prominence of action verbs (teach, use) and operational vocabulary (manage, know-how, updated) supports a practical expert stance focused on teaching tool operation. At the same time, the recurrent presence of normative terms (correct/ethical/responsible) indicates that student discourse frames AI use as a practice governed by explicit expectations of legitimacy and appropriate use. In this sense, the discourse associated with this class portrays a form of authority grounded in making AI mediation more explicit, consistent with the low II reported in Table 2.
(iii) The Operational Optimizer (43.9% ST): defined by transforming AI into a workflow accelerator for high-volume, low-ambiguity tasks (Table 3). This profile is supported by the following overrepresented lexical cores: topic, method, information, do, perform, search, explain, presentation, fast, work, extract, simple/easy, effective, grade, correct, indication, prompts, and minute(s). It is organized into the following co-occurrence constellations: do/perform ↔ presentations; search/extract/obtain ↔ information; explain ↔ topic; grade/correct ↔ assignments; prompts ↔ use/perform; quick/seconds/minutes ↔ do/explain—and it is anchored in the prototype (score ≈ 332.1): “instant responses, quotes/summaries, simpler tasks, and streamlined work with more accurate information.” This semantic field explicitly focuses on performance: accelerating, reducing friction, and transforming scattered inputs into usable outputs (summaries, presentations, drafts, and initial corrections).
In metric terms, it presents A ≈ 68.9/1000, D ≈ 18.9/1000, and G ≈ 6.30/1000, resulting in a moderate Illusion Index (II = A/(D + G + 1) ≈ 2.69). Medium-to-high agency organizes the work; highly visible delegation falls on operational subtasks (search and screening, synthesis, layout/presentation, preliminary proofreading); and low-to-medium governance indicates partial criteria (selective verification, unsystematic boundaries). The overall assessment suggests a sense of control without completely obscuring what AI does; however, there is room for improvement in terms of clarifying the rules and traceability.
At the attitudinal level, optimizers demonstrate a pragmatic, solution-oriented approach focused on efficiency: they value time efficiency, avoid unproductive ambiguity, and celebrate what is actionable. They feel pressured to produce quickly and focus on clarity, specificity, and process closure. This urgency is channeled through micro-verification (one-off checks of sources/appropriateness), which transforms haste into a minimum guarantee of quality training: progress is made, products are delivered, and pace is sustained. In the context of large cohorts, tight deadlines, or the need for early scaffolding (e.g., providing initial feedback or consolidating support materials), this type of process acts as a time compressor, making feasible what would otherwise fall outside the scope of instruction.
Its pedagogical value is tangible when efficiency is a prerequisite, and risk arises when speed replaces justification. With low-to-medium governance (G), the likelihood of undetected errors or opaque criteria increases. The path to improvement is straightforward: (i) enhance governance with simple checklists (minimum sources, suitability criteria, usage limits) integrated into the workflow; (ii) maintain visible delegation (“the AI did X under Y condition”); and (iii) reserve teaching agency for design decisions (sequences, quality criteria), preventing it from being diluted by mere acceleration. In this context, a balance between speed and pedagogical utility is expected, with basic traceability sustaining epistemic confidence without losing the operational speed that defines this typology.

4. Discussion

In response to RQ1, three robust typologies of AI-assisted teaching emerged from student discourse: Immediate Personalizer (28.8% ST), Technological Literacy Teacher (27.3% ST), and Operational Optimizer (43.9% ST). The literacy and optimizer profiles show functional proximity, as both foreground technical–procedural mediation (explicit tool references and task orientation). By contrast, the personalizer foregrounds adjustments to pace, content, and perceived difficulties supported by immediate feedback. In lexical-density terms, the profiles differ clearly: the personalizer combines high A with low D/G (high II); the literacy teacher combines moderate A with more visible D and G (low II); and the optimizer combines medium–high A with high D and low–medium G (moderate II). Because the empirical material consists of open-ended student discourse, these patterns should be interpreted as perceived mediation configurations rather than direct observations of enacted classroom practice. Within that scope, student accounts shift from transmission-oriented narratives toward orchestration-oriented narratives, where teachers are portrayed as coordinating mediations and justifying decisions in relation to algorithmic outputs [43].
This distribution can be read as three orchestration functions, as represented in the student accounts. The personalizer frames co-agency in pedagogical responsiveness (micro-adjustments and real-time feedback). The literacy teacher frames co-agency around tool learning and explicit criteria (i.e., when to use AI, how to use it, and under what limits and traceability expectations). The optimizer frames the co-agency around operational efficiency (accelerating workflow and converting dispersed inputs into usable outputs). In this sense, “AI use” becomes empirically classifiable at the level of discourse: each typology is supported by lexical cores and co-occurrence constellations, an A/D/G profile, and an II that captures the relative visibility of delegation and governance markers in student descriptions. This reading connects with prior frameworks for sociotechnical mediation (e.g., proxy agent, scaffolder). This is also consistent with the caution that automation plus rapid feedback can be associated with increased perceived AI authority and reduced human locus of control [17]. Importantly, narrated orchestration does not necessarily imply verifiable control; it may be represented as control even when criteria and guarantees are weakly explicit, sustaining symbolic authority as dependence increases [15].
A central interpretation is the visibility of governance. A, D, and G are indicators of discursive visibility in student accounts. Low G values should not be interpreted as evidence that governance was absent in teaching practice. They indicate that governance cues (criteria, limits, sources, verification, disclosure, and traceability) were less frequently explicit in student descriptions under the present elicitation frame. This is important because the prompts asked students to report pedagogically noteworthy or beneficial uses. Under this frame, students may foreground teacher actions and perceived outcomes rather than the governance language that may accompany them. This motivates future triangulations using practice-based evidence.
In response to RQ2, the three configurations can be interpreted as discursive positions that organize how students attribute initiative, delegation, and control to teachers, students, and algorithms. In the Immediate Personalizer, student accounts represent situated responsiveness through short cycles of tasks, feedback, and readjustment. Lexical markers, such as personalize, adapt, pace/rhythm, need, immediate feedback, real time, and detect difficulties, anchor this representation. When agency markers dominate and delegation/governance markers remain less visible (high II), effectiveness is framed as relying on teacher authority with fewer explicit criteria. A plausible implication, consistent with the scope of discourse-based evidence, is that traceability may be strengthened when delegations are named (“AI did X”) and minimal verification cues are made explicit (e.g., provenance and adequacy) [43].
For the Technological Literacy Teacher, student accounts frame authority around teachable rules for AI use. Responses foreground prompting guidance, tool demonstrations, and governance cues for responsible practice (e.g., correct use, plagiarism checking, verification, and traceability). The defining lexical markers (teach, use, tools, correct, ethical, responsible, verification, traceability, plagiarism detector, and prompts) align with moderate A and more visible D/G (low II). A plausible implication is that governance becomes more educational when enacted in authentic tasks—requiring students to apply criteria, compare outputs, and justify adoption under explicit limits [43,44,45,46]. Such moves can strengthen agency without sacrificing transparency.
In the Operational Optimizer, student discourse foregrounds efficiency. The lexicon emphasizes speed, templates, and task completion (do/perform, search/obtain information, seconds/minutes, presentation, simple/easy, grade/correct, and prompts). The profile combined moderate-to-high A, high D, and low-to-medium G (moderate II). Students value speed and productivity in this configuration while also leaving room for a known vulnerability: apparent accuracy may substitute for justified accuracy when explicit boundaries are scarce, a concern documented in the literature on automation and perceived authority [17]. A plausible implication is that brief governance cues (verification prompts and limits), explicit delegation markers, and a final justified decision by the student may preserve efficiency while improving traceability [15].
Overall, the evidence supports a discourse-level shift toward orchestration-oriented descriptions. Each typology offers different levers for reflective improvement; however, these levers are best framed as hypotheses. The A/D/G–II matrix supports operationalizing such hypotheses: for the Immediate Personalizer, lowering II by making delegation explicit and foregrounding minimal governance cues so responsiveness leaves a trace [43]; for the Technological Literacy Teacher, keeping II low while increasing A through authentic tasks that require applying, comparing, and justifying criteria [45,46]; and for the Operational Optimizer, converting efficiency into auditable quality by increasing governance visibility in decisions with epistemic consequences (e.g., factuality, assessment) while keeping delegation explicit to reduce opaque dependency [15,17].
Methodologically, the transition from “AI use” as a generic label to comparable discourse-based configurations is supported by four operational indicators: teacher-attributed agency (A), delegation to AI (D), visible safeguards/governance (G), and the Illusion Index (II) as a heuristic imbalance indicator capturing the relative visibility of delegation and governance in relation to agency. This scheme is supported by lexical footprints (cores and co-occurrences). It can be strengthened through future triangulation (e.g., classroom artifacts, syllabi, interviews, and observations) and by improving convergent validity between discourse and pedagogical mediation.
In response to RQ3, the illusion of teacher autonomy can be understood as a discourse effect that emerges when student narratives emphasize what “the teacher does” (decides, adapts, detects, provides feedback), while making less visible what AI does and which criteria regulate its use (sources, verification, limits, disclosure). This imbalance yields a performative impression of full teacher control, even when part of the adjustment is co-produced through algorithmic mediation without explicit guarantees. The illusion is sustained through four recurrent resources: (i) a focus on verbs of teacher intervention; (ii) partial erasure of the technical agent through nominalizations or impersonal constructions (e.g., “it is corrected…,” “presentations…”); (iii) emphasis on immediacy (“fast,” “in seconds”) that can imply sufficiency through speed; and (iv) adjectives of accuracy (“accurate,” “effective,” “simple”) without explicit indicators of verification.
Reading across typologies, the mechanism shows nuances. In the Immediate Personalizer, hyper-responsiveness may concentrate authorship on the teacher when delegation and criteria remain implicit. In the Operational Optimizer, delegations are more visible (searches, summaries, presentations); however, fluency may operate as a warrant of quality without explicit sources and limits. In the Technological Literacy profile, traceability markers reduce the illusion; however, if governance remains primarily declarative (checklists stated but not enacted), student discourse may still reproduce “declared transparency” without practiced verification.
Within the scope of discourse-based evidence, a practical implication is that these asymmetries may be addressed by increasing the discursive visibility of AI mediation and adding lightweight verification checkpoints. The two minimal moves are as follows: (1) explicit disclosure when AI intervenes (e.g., “AI generated X; the teacher/student validated Y using Z criteria”); and (2) a brief closing verification focused on provenance and appropriateness (Where does the information come from? Is it suitable for this objective?). In typology-specific terms, the Immediate Personalizer may benefit from stronger verification cues and explicit limits, the Operational Optimizer from clarifying which tasks are delegated and requiring brief justifications after outputs, and the Technological Literacy Teacher from moving governance from declaration to situated application in authentic tasks.
Based on the above, illusion is interpreted as an effect of discourse composition: the high visibility of teacher-attributed agency, the lower visibility of algorithmic mediation, and relatively few explicit rules. The illusion was plausibly reduced when student narratives made the mediation and criteria more explicit: naming what the AI did, indicating how it was validated, and requiring a justified decision. Autonomy remains pedagogically productive within this framework. Teachers are still portrayed as deciding and orchestrating, but now in relation to public criteria. This may support epistemic confidence insofar as adjustments are explained, reviewed, and defended.

5. Conclusions

This study suggests that AI-assisted teaching, as represented in student discourse, is not organized as “styles” but as forms of orchestration coordinating teacher-attributed agency in action (A), explicit delegations (D), and visible public guarantees (G). Three robust typologies were identified: Immediate Personalizer, Technological Literacy Teacher, and Operational Optimizer. Together, these configurations shift the discursive focus from transmission toward the design of application conditions, coordination of mediations, and justification of decisions in relation to algorithmic outputs. The Illusion Index (II = A/(D + G + 1)) synthesizes a central discourse-based finding: when teacher-attributed agency dominates and D/G are weakly visible in student accounts, an apparent agency that is difficult to audit becomes more likely; when delegation and governance are more explicit, agency becomes more publicly justifiable without necessarily losing personalization or efficiency.
Across typologies, students’ accounts foreground distinct orchestration logics. The Personalizer emphasizes short task–feedback–readjustment cycles. The Literacy Teacher emphasizes language, criteria, and limits for AI use. The Optimizer emphasizes time compression and task throughput. These differences are captured through a criterion-referenced scheme (A/D/G densities per 1000 words plus II) that enables comparison while preserving qualitative nuances through lexical evidence and prototypical segments.
These implications should be read as hypotheses that are consistent with the evidentiary scope of discourse-based data. The typology can support institutional reflection on how teacher–AI mediation becomes visible to students and what tends to be foregrounded or backgrounded under salience-oriented elicitation. In applied terms, the framework may inform training activities and governance guidelines that emphasize disclosure, verification, and explicit limits. Future work should triangulate these discourse patterns with practice-based evidence (artifacts, syllabi, interviews, and observations) and test stability under neutral and counterbalanced elicitation prompts. Under these conditions, the proposed model can contribute to responsible AI integration by supporting transparency, auditability, and epistemic trust, as experienced and articulated by students.

6. Limitations and Future Research Directions

This study had limitations related to its design, data source, elicitation frame, and measurement model. First, the findings are based on open-ended student responses; therefore, the identified configurations capture discourse-based representations and evaluations of teacher–AI co-agency and governance visibility, rather than directly verifying enacted teaching practices. Second, the sample comes from two institutions (Mexico and Ecuador) and a specific disciplinary field (economics/business), which limits its transferability to other disciplines, institutional cultures, and instructional modalities. Third, the elicitation prompts were salience-oriented, aiming to evoke teacher uses of AI that students perceived as pedagogically noteworthy. Accordingly, the resulting typology should be interpreted as a typology derived from student discourse under this frame, not as an exhaustive map of the full spectrum of student perceptions of teacher AI use. Fourth, administering the instrument via an online questionnaire may introduce biases associated with self-selection, variability in the degree of AI exposure across courses, and heterogeneity in the type of AI used by students. Fifth, the lexicometric analysis and the estimation of A/D/G and II depend on technical decisions (segmentation, stopwords, lemmatization/dictionaries); therefore, stability may be sensitive to parameterization choices and linguistic variation. The survey was administered in Spanish, and the analysis was conducted on the original Spanish corpus (i.e., no translation effects in the lexicometric procedures). However, regional lexical variation across Mexican and Ecuadorian Spanish may influence co-occurrence patterns and the relative salience of certain lemmas. Finally, data collection was cross-sectional and conducted within a limited timeframe, preventing the examination of developmental trajectories, changes under sustained AI exposure, or causal inferences.
Based on these limitations, we propose a research agenda to strengthen the validity, robustness, and applicability of the model. This includes the following: (a) intervention trials by typology to test, using pre–post and/or controlled designs, which training and governance strategies improve epistemic control, traceability, and epistemic confidence; (b) validity and robustness studies of the instrument and the A–D–G–II scheme (e.g., sensitivity to dictionaries/lemmatization/segmentation, split-sample stability, and interval estimation via resampling), including stability checks across Mexico vs. Ecuador and robustness to regional lexical variation, as well as invariance tests by discipline, level, and country; and (c) longitudinal studies to model typology stability and II trajectories over time. In addition, (d) future work should integrate student co-agency (including bidirectional disclosure) and complementary measures such as critical self-efficacy and epistemic confidence to examine how teacher governance aligns with student verification practices. Moreover, (e) applied tools (e.g., an A–D–G–II dashboard with interpretable alerts) should be prototyped responsibly and evaluated through usability testing, ethical safeguards, and nonpunitive criteria. Finally, the empirical program should be expanded through (f) replications in other disciplines and modalities; (g) triangulation with practice-based evidence (e.g., classroom observation, task artifacts, syllabi, and interviews) to strengthen convergent validity between discourse-based perceptions and publicly enacted pedagogical mediation; and (h) studies using neutral and counterbalanced prompts (including problematic, ambiguous, or negative teacher–AI experiences) to test the stability and scope of the typology across elicitation conditions.

Author Contributions

Conceptualization, T.F.-R., A.P.-R. and P.F.M.; methodology, T.F.-R., A.P.-R. and L.C.R.; software, W.C.U. and L.C.R.; validation, T.F.-R., A.P.-R. and P.F.M.; formal analysis, W.C.U., L.C.R. and T.F.-R.; investigation, T.F.-R., A.P.-R., W.C.U., L.C.R. and P.F.M.; resources, T.F.-R. and A.P.-R.; data curation, W.C.U. and L.C.R.; writing—original draft preparation, W.C.U. and L.C.R.; writing—review and editing, T.F.-R., A.P.-R. and P.F.M.; visualization, W.C.U. and L.C.R.; supervision, T.F.-R. and A.P.-R.; project administration, T.F.-R. and A.P.-R.; funding acquisition, T.F.-R. and A.P.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Universidad Técnica de Machala on 4 June 2025, with the code: UTMACH-DIDI-EFC-2025-010.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data supporting the reported results are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Immediate Personalizer

The excerpts below come from the original Spanish corpus, which was built using student discourse. Spanish segments are presented verbatim (lightly cleaned for spacing) and followed by an English translation for better readability. Bolded terms in Spanish indicate salient lemmas (highly recurrent within this class and aligned with the A/D/G mapping). Translations are provided for reporting purposes only.
Lemma mapping (expanded; illustrative, non-exhaustive)
A-lexicon (Teacher agency lemmas)
Personalizar, personalización, adaptar, adaptarse, ajustar, enseñanza, aprender/aprendizaje, ritmo, estilo, contenido, material, método, explicar, ofrecer, brindar, retroalimentación, detectar, identificar, dificultad/dificultades, duda/dudas, apoyar/apoyo, proponer, fomentar, integrar, mantener(se), actualizado, capacidad, habilidad/habilidades, cualidad/cualidades.
D-lexicon (Delegation/AI-output lemmas)
IA, inteligencia artificial, herramienta(s), tecnología(s), recurso(s), recursos digitales, diapositiva(s), presentación(es), test/preguntas de test, crear, generar, contenido(s), interactivo(s), visual(es), información, investigación, archivo(s), exposición(es).
G-lexicon (Governance visibility lemmas; sparse in this class)
Ética, responsable, crítico/pensamiento crítico, uso correcto, formas correctas (governance cues appear occasionally and mainly as normative markers rather than detailed verification procedures).
Note on polysemy. When a lemma could plausibly function across axes (e.g., correcto as “correct use” vs. “to correct”), assignment is guided by its dominant meaning in the surrounding co-text of prototypical segments; ambiguous cases are not treated as representative axis markers.
Prototypical evidence
Score 1314.76 (ES): “…superpoder reside en la capacidad de adaptar el material y el ritmo de enseñanza a las necesidades individuales de cada estudiante… con la retroalimentación inmediata…”
(EN): “The ‘superpower’ lies in the ability to adapt materials and teaching pace to each student’s individual needs, combined with immediate feedback.”
Score 1239.53 (ES): “…un profesor que usa inteligencia artificial tiene la capacidad de personalizar la enseñanza según las necesidades de cada estudiante, ofrecer retroalimentación inmediata y detectar dificultades…”
(EN): “A teacher who uses artificial intelligence is described as personalizing instruction to students’ needs, providing immediate feedback, and detecting difficulties early.”
Score 1217.48 (ES): “Una de las más valiosas es su capacidad para personalizar el aprendizajeadaptar el contenido, el ritmo y los métodos según las necesidades…”
(EN): “One of the most valuable qualities is the ability to personalize learning by adapting content, pace, and methods to students’ needs.”
Score 1214.12 (ES): “…detecta errores al instante y ofrece sugerencias… adaptar el contenido al estilo, ritmo y nivel… en tiempo real…”
(EN): “They detect errors instantly and offer suggestions, adapting content to each student’s style, pace, and level in real time.”
Score 1201.07 (ES): “Me parecen extraordinarias las habilidades… para personalizar la enseñanza, resolver dudas rápidamente y adaptar recursos a distintos estilos de aprendizaje…”
(EN): “Students describe extraordinary skills: personalizing instruction, resolving doubts quickly, and adapting resources to different learning styles.”
Score 1182.36 (ES): “Usando IA el docente puede adaptar el contenido y el ritmo… a las necesidades individuales… creando itinerarios personalizados…”
(EN): “Using AI, the teacher is described as adapting content and pace to individual needs and creating personalized learning pathways.”
Score 1179.75 (ES): “…IA para generar diapositivascrea las diapositivaspersonalizar el aprendizajeadaptando los contenidosretroalimentación inmediata.”
(EN): “AI is mentioned as generating slides, while the teacher is portrayed as personalizing learning, adapting content, and providing immediate feedback.”
Score 1178.24 (ES): “Más recursos didácticoscapacidad de personalizar el aprendizaje usando IA, adaptando los contenidos a las necesidades…”
(EN): “Students emphasize more didactic resources and the teacher’s ability to personalize learning by adapting content to needs using AI.”
Score 1139.29 (ES): “…usar la IA para adaptar clases a distintos niveles, estilos de aprendizaje y ritmospersonalizar materiales según necesidades…”
(EN): “AI is described as helping adapt classes to different levels, learning styles, and paces, enabling personalized materials aligned with needs.”
Score 1137.68 (ES): “…personalización del aprendizaje: adapta el contenido, ritmo y estilo de enseñanza según las necesidades…”
(EN): “The discourse highlights learning personalization: adapting content, pace, and teaching style to students’ needs.”
Score 1137.20 (ES): “…hacer diapositivas al instante… personalización extrema del aprendizaje… adaptar contenido, ritmo y método de enseñanza a necesidades y estilo…”
(EN): “Students mention instant slide creation and portray ‘extreme’ personalization: adapting content, pace, and methods to needs and learning style.”
Score 1132.18 (ES): “Crear preguntas para test, crear presentaciones… capacidad de personalizar el aprendizaje… proporcionando recursos adaptados…”
(EN): “AI is referenced for generating tests/presentations, while the teacher is portrayed as personalizing learning by providing adapted resources.”
Score 1113.66 (ES): “…personalizar la enseñanza, identificar rápidamente necesidades… proponer recursos adaptados… y fomentar pensamiento crítico… de manera ética y responsable.”
(EN): “Beyond personalization, some accounts include normative cues: fostering critical thinking and teaching ethical/responsible AI use.”
Score 1109.81 (ES): “…un profesor que usa inteligencia artificial de forma efectiva puede personalizar el aprendizaje… adaptando los contenidos al ritmo y estilo…”
(EN): “A teacher is framed as using AI effectively to personalize learning by adapting content to students’ pace and style.”
Score 1090.08 (ES): “…personalizar la enseñanza adaptando explicaciones y ejemplos al ritmo y estilo de aprendizaje…”
(EN): “Students describe personalization through adapting explanations and examples to learners’ pace and style.”
Score 1089.42 (ES): “La capacidad… personalizar el aprendizaje… adaptar al ritmo y el contenido del curso.”
(EN): “The teacher’s capability is framed as personalizing learning by adapting course content and pace.”
Score 1059.88 (ES): “…adaptar el contenido educativo a necesidades… identificar áreas de mejora y brindar apoyo… integrar nuevas tecnologías…”
(EN): “Accounts emphasize adapting educational content, identifying areas for improvement, providing individualized support, and integrating new technologies.”
Score 1037.80 (ES): “…superpoder de personalizar el aprendizaje… adapta contenidos según el ritmo… con ayuda de ejemplos interactivos.”
(EN): “Personalization is framed as a ‘superpower’: adapting content to pace, supported by interactive examples.”
Score 1027.91 (ES): “…capacidad de personalizar el aprendizaje… adaptar clases a necesidades… formas correctas de sacarle provecho a la IA.”
(EN): “Students mention personalization and adaptation to needs, and occasionally refer to ‘correct ways’ of getting the most out of AI.”
Score 1012.24 (ES): “La IA puede identificar patrones de aprendizaje… áreas con dificultades y adaptar contenido, ritmo y estilo de enseñanza…”
(EN): “AI is described as identifying learning patterns and difficulties and supporting adaptation of content, pace, and teaching style.”

Appendix B. The Technological Literacy Teacher

The excerpts below come from the original Spanish corpus, which was built from student discourse. Boldface indicates the salient lemmas highlighted within the prototypical segments. English translations are provided for better readability and do not affect the lexicometric procedures.
Lemma mapping (expanded; illustrative, non-exhaustive)
A-lexicon (Teacher agency lemmas)
Enseñar, explicar, saber, dominar, manejo/manejar, interpretar, identificar, diferenciar, investigar, innovar, integrar, actualizar(se), aprovechar, comunicar(se), transmitir, impulsar, motivar, recomendar, conocer.
D-lexicon (Delegation/AI-output lemmas)
IA, inteligencia artificial, herramienta(s), tecnología(s), prompt/promps, imagen(es), presentación(es), video(s), información, dato(s), aplicación/app, actividad(es), ejercicio(s), simulación(es), evaluación(es) automatizada(s), resumir/resumen (tarea), Excel, Kahoot.
G-lexicon (Governance visibility lemmas; present but not dominant)
Correcto/correctamente, responsable, ética, plagio, detector/detectoras, checar, revisar, limitar, dependiente (no-dependencia), falacias, información falsa, fuente(s).
Note on polysemy. When a lemma could plausibly function across axes (e.g., correcto as “correct use” vs. “to correct”), assignment is guided by its dominant meaning in the surrounding co-text of prototypical segments; ambiguous cases are not treated as representative axis markers.
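To make the mapping concrete, the sketch below shows one way a lemmatized segment could be tagged against the A/D/G axes. It is a minimal illustration, not the study’s pipeline: the lexicons are abridged from the lists above, and the rule for correcto is a toy stand-in for the co-text inspection described in the polysemy note.

```python
# Minimal sketch (not the authors' implementation) of A/D/G axis tagging.
# Lexicons are abridged from the appendix lists; all names are illustrative.

A_LEXICON = {"enseñar", "explicar", "saber", "dominar", "interpretar", "investigar"}
D_LEXICON = {"ia", "herramienta", "tecnología", "prompt", "imagen", "presentación"}
G_LEXICON = {"responsable", "ética", "plagio", "revisar", "limitar", "fuente"}


def axis_of(lemma: str, left_cotext: str = "") -> str | None:
    """Return 'A', 'D', or 'G' for a lemma, or None if it is not an axis marker."""
    lemma = lemma.lower()
    # Polysemous case: "correcto" counts as a governance cue only when the
    # surrounding co-text signals "correct use" rather than "to correct".
    if lemma in {"correcto", "correctamente"}:
        return "G" if "uso" in left_cotext.lower() else None
    if lemma in A_LEXICON:
        return "A"
    if lemma in D_LEXICON:
        return "D"
    if lemma in G_LEXICON:
        return "G"
    return None  # out-of-lexicon or ambiguous lemmas are not counted


segment = ["enseñar", "el", "uso", "correcto", "de", "la", "ia", "y", "revisar", "plagio"]
tags = [(w, axis_of(w, " ".join(segment[max(0, i - 3):i]))) for i, w in enumerate(segment)]
print([t for t in tags if t[1] is not None])
# [('enseñar', 'A'), ('correcto', 'G'), ('ia', 'D'), ('revisar', 'G'), ('plagio', 'G')]
```

In this toy disambiguation, correcto is resolved by the dominant meaning of its immediate co-text, mirroring the rule stated above; a production pipeline would inspect the full prototypical segment.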
Prototypical evidence
Score 177.31 (ES): “realmente por ahora no he visto… profesor que use IA… imagen creada por IA… desarrollar presentaciones… enseñarte cómo se usan correctamente… uso de herramientas… promps.”
(EN): “Students mention learning how to use AI-generated outputs (images/presentations) and being taught correct use of tools and prompts.”
Score 174.96 (ES): “la IA puede proporcionar información actualizada… cualidad de saber interpretar los datos… identificar cuándo se ha usado esa herramienta.”
(EN): “Students highlight knowing how to interpret AI-provided data and recognize when AI tools have been used.”
Score 154.72 (ES): “el uso correcto y manejo… un docente que usa… puede crear actividades interactivas.”
(EN): “The discourse emphasizes correct use/handling of AI and innovation through interactive activities.”
Score 153.65 (ES): “domina de IA y sabe integrarlas… hace la clase más entretenida… investigando información…”
(EN): “Teachers are portrayed as mastering AI, integrating it into class, and using it to search/investigate information.”
Score 153.59 (ES): “nos permite usar la IA… habilidad para interpretar… cuando explica con videos o imágenes de IA… cuando crea imágenes…”
(EN): “Students mention learning to use AI and teachers explaining with AI images/videos and creating AI images.”
Score 152.65 (ES): “ninguna… usar muy bien la tecnología… actualizado… para investigar más a fondo…”
(EN): “Students associate the profile with strong technology use and staying updated to investigate topics more deeply.”
Score 151.12 (ES): “aproveche la tecnología… ayudar a aprender… maestra de tecnologías… nos ha enseñado que las IA pueden… videos, fotos, sacar información…”
(EN): “A technology teacher is described as showing what AI can do (videos/photos/information) to support learning.”
Score 147.36 (ES): “en vez de decirte que no las uses, te impulsa a usarlas… checar si hicimos la tarea con IA… revisar si hay plagio…”
(EN): “Students mention encouragement to use AI alongside checks for plagiarism and whether work involved AI.”
Score 144.80 (ES): “la manera que trabajan con la IA… la clase se ve más moderna…”
(EN): “Students frame AI use as making class feel more modern and efficient.”
Score 143.22 (ES): “maestro de TICs nos enseñe IA… resumir la tarea… saber en segundos si está completa…”
(EN): “AI is linked to teaching in ICT courses and to rapid summarization/checking of assignment completeness.”
Score 138.06 (ES): “me alegra ver que algunos profesores… nos enseñan a usarla de manera correcta y responsable…”
(EN): “Students emphasize being taught to use AI correctly and responsibly.”
Score 137.76 (ES): “crear presentaciones con las tecnologías… usarla como herramientas… a la vanguardia…”
(EN): “Students cite creating presentations with technology and using AI tools to stay up to date.”
Score 137.20 (ES): “desarrollar su clase… con herramientas y tecnologías… un docente que usa… sabe explicar… aprovecha la tecnología…”
(EN): “The profile is described as using tools/technologies to teach dynamically and explain clearly.”
Score 136.98 (ES): “los veo usar la IA… profe de tecnologías…”
(EN): “Students mention seeing teachers—especially technology instructors—use AI.”
Score 136.72 (ES): “aprovechar estas herramientas… apoyo… no le veo ningún lado negativo…”
(EN): “AI tools are framed as supportive and broadly beneficial.”
Score 135.90 (ES): “no he visto algo así… que sepan de herramientas que uno como alumno no conoce.”
(EN): “Students value teachers who know AI tools that students may not already know.”
Score 134.08 (ES): “sepa explicar con apoyo de herramientas nuevas… que no le tenga miedo a usarlas… motivarnos a aprender…”
(EN): “The profile includes teaching with new tools and encouraging students without fear of using them.”
Score 133.66 (ES): “todos tenemos derecho a usar algo de IA… en clases online no podemos ver si usa o no el docente…”
(EN): “Some accounts note visibility constraints (e.g., online classes) affecting what students can observe.”
Score 132.54 (ES): “todavía hay profesores que ven a la IA como algo malo… es una herramienta muy útil…”
(EN): “Students contrast resistance with framing AI as a useful tool.”
Score 132.07 (ES): “habilidades de saber manejar la IA… dar enunciados para aprovecharla lo mejor posible… correctamente estas herramientas.”
(EN): “Students emphasize knowing how to handle AI and formulate inputs to use tools effectively and correctly.”

Appendix C. Operational Optimizing Teacher

The excerpts below come from the original Spanish corpus, which was built using student discourse. Spanish segments are presented verbatim (lightly cleaned for spacing) and followed by an English translation for better readability. Bolded terms in Spanish indicate salient lemmas (highly recurrent within this class and aligned with the A/D/G mapping). Translations are provided for reporting purposes only.
Lemma mapping (expanded; illustrative, non-exhaustive)
A-lexicon (Teacher agency lemmas)
Explicar, organizar, orientar, asesorar, enseñar, facilitar, ayudar, comprender, entender, identificar, corregir, calificar, evaluar, presentar, adaptar, planificar, resolver, actualizar(se), dominar, saber (cómo), elaborar (prompts/indicaciones).
D-lexicon (Delegation/AI-output lemmas)
IA, inteligencia artificial, herramienta, tecnología, aplicación/app, prompt, presentación, diapositiva, imagen, video, información, buscar, obtener, sacar, generar, automatizar, resumir, citar, APA, tarea, trabajo, archivo, segundos, minutos.
G-lexicon (Governance visibility lemmas; present but not dominant)
Plagio/plagiado, alucinación, verdadero/verdadera, uso correcto/correctamente, error(es), fundamento(s)/base (as quality cues). (These appear as episodic cues—e.g., plagiarism checking, “true information,” avoiding hallucinations—rather than as sustained descriptions of criteria, sources, or verification procedures.)
Note on polysemy. When a lemma could plausibly function across axes (e.g., correcto as “correct use” vs. “to correct”), assignment is guided by its dominant meaning in the surrounding context of prototypical segments; ambiguous cases are not treated as representative axis markers.
Prototypical evidence
Score 332.05 (ES): “las respuestas instantaneas que ofrecen información mas exacta… hace más sencillas tareas… como citar en APA… agilizar su trabajo… buscar maneras más fáciles de explicar un tema… automatizar tareas administrativas.”
(EN): “Students emphasize instant, more accurate information that makes tasks easier (e.g., APA citations, summaries), streamlines work, and supports easier explanations; automation is also mentioned.”
Score 317.24 (ES): “con la misma pregunta te aparecerá la información exacta… obtener información más detallada… explicar… delimitar temas para calificar… hacer presentaciones.”
(EN): “AI is framed as returning exact/detailed information, supporting explanation, delimiting topics for grading, and producing presentations.”
Score 297.14 (ES): “el saber qué especificaciones darle a la IA… mejor resultado… hacer videos… investigar… distintas opciones… buscar exactamente la información que desean.”
(EN): “The discourse highlights knowing how to specify prompts/inputs to obtain better results, including videos and research options, and retrieving the exact information needed.”
Score 287.33 (ES): “las presentaciones que realizan… app para evaluar… fácil explicar temas… se le facilita la redacción… se pueden hacer imágenes con IA.”
(EN): “Students mention presentations, evaluation apps, easier topic explanations, writing support, and AI-generated images.”
Score 282.47 (ES): “explicaciones más fáciles de entender… te presentan la información… asesoramiento para saber usarlas de manera correcta… explicar temas complejos de forma simple.”
(EN): “Accounts emphasize making explanations easier to understand, presenting information clearly, and advising correct use to explain complex topics simply.”
Score 278.14 (ES): “encontrar información más precisa… saber si un trabajo está plagiado… corregir trabajos… presentaciones inmediata… en menos de 5 minutos.”
(EN): “Students describe precision, plagiarism detection, correction, rapid presentation creation, and very fast task completion (minutes).”
Score 276.00 (ES): “hacer presentaciones… identificar si la información es verdadera… calificar… citar… la rapidez con la cual pueden hacer distintas actividades.”
(EN): “The discourse highlights fast presentation creation, identifying true information, grading, citing, and overall speed across tasks.”
Score 267.56 (ES): “ayudan a saber utilizarla para no caer en la alucinación de la IA por un mal prompt… resumir para mejor entendimiento… buscar información para explicar.”
(EN): “Students mention learning to avoid AI hallucinations due to poor prompts, using summarization for understanding, and retrieving information for explanations.”
Score 266.32 (ES): “sacar información con palabras claves… adaptar su plan educativo en minutos o incluso segundos… organización de temas de estudio…”
(EN): “AI is framed as enabling keyword-based retrieval, rapid planning (minutes/seconds), and organizing study topics.”
Score 263.43 (ES): “explicar de manera sencilla un tema… usar tecnología actual… presentaciones más rápidas… identificar trabajos hechos con IA.”
(EN): “Students describe simple explanations, faster presentations, and identifying work produced with AI.”
Score 261.21 (ES): “entendamos la información… hacer más fácil… saber usarlas de manera adecuada para hacer un trabajo presentable.”
(EN): “Accounts stress making information easier to understand and using AI appropriately to produce presentable work.”
Score 260.19 (ES): “corregir algún trabajo… explicar temas… análisis más desarrollado en menor tiempo… resolver problemas de forma rápida.”
(EN): “Students mention correction, clearer topic explanations, more developed analysis in less time, and fast problem solving.”
Score 259.29 (ES): “cómo elabora los promps… realizar trabajos con una sola indicación… cuando no entendemos… mostrarlo visualmente… facilitar comprender un tema complicado.”
(EN): “The discourse highlights crafting prompts, completing work from a single instruction, and supporting understanding of complex topics, including visual explanation.”
Score 258.12 (ES): “uso correcto para organizar tiempos y temas… trabajo más eficiente… generar información más enriquecedora dependiendo del prompt.”
(EN): “Students link correct use to organizing time/topics, increasing efficiency, and generating richer information depending on prompts.”
Score 256.16 (ES): “más actualizados… explicar presentaciones… mostrar de manera más sencilla… saber que hay IA para crear… una presentación.”
(EN): “Accounts mention being updated, explaining presentations more simply, and using AI to create various outputs such as presentations.”
Score 251.55 (ES): “no saben el uso correcto… piden a la IA que les haga la presentación… sin fundamentos… obtener información actualizada…”
(EN): “Some narratives contrast correct use with superficial reliance (asking AI to produce a presentation without basis), while also noting access to updated information.”
Score 249.06 (ES): “buscar puntos claves… hacer citas bibliográficas… presentaciones con IA… problema en específico… adaptar contenidos y métodos…”
(EN): “Students mention extracting key points, supporting bibliographic citations, creating presentations, and adapting content/methods for specific problems.”
Score 247.78 (ES): “gran conocimiento… realizar tareas más rápido y eficiente… explicar de manera sencilla… sacar información de varias fuentes… calificar más rápido.”
(EN): “The profile is framed as doing tasks faster and more efficiently, explaining simply, extracting information from multiple sources, and grading faster.”
Score 247.62 (ES): “generar los prompts… usan ilustrativos… trabajo más presentable… hacer las cosas más fáciles.”
(EN): “Students emphasize generating prompts, using illustrative materials, and making work more presentable and easier.”
Score 247.28 (ES): “saber qué prompts usar… de manera rapida… presentaciones interactivas… corregir temas o contar palabras…”
(EN): “The discourse highlights knowing which prompts to use to get desired outputs, rapid interactive presentations, and support for corrections or technical tasks (e.g., word counting).”

Table 1. Teacher Types: Immediate Personalizer.
%ST: 28.81 | A: 123.7 | D: 2.92 | G: 1.46 | II: 25.4
Associated lexical core:
A-lexicon (agency): personalize, adapt, pace, need, content, style, feedback, learning, teaching, resource, pathway, individualize.
D-lexicon (delegation): AI; tool; generate; create; presentation; slide; activity; real-time; detect; suggestion.
G-lexicon (governance): verify; limit; criterion; error; check; responsibility.
Key co-occurrences: personalize ↔ needs; adapt ↔ content; content ↔ pace; pace ↔ style; immediate ↔ feedback; detect ↔ difficulties; real ↔ time.
Prototypical evidence:
Score 1314.76: “The ‘superpower’ lies in adapting materials and teaching pace to individual needs, combined with the immediate feedback enabled by AI.”
Score 1239.53: “A teacher who uses AI can personalize teaching to students’ needs, provide immediate feedback, and detect learning difficulties.”
Score 1217.48: “Personalizing learning means adapting content, pace, and methods to each student’s needs.”

Prototypical segments are summarized here; extended evidence (additional segments and axis-level A/D/G lemma lists) is provided in Appendix A.
Note. %ST = percentage of text segments (ECUs) assigned to the class. A = teacher agency; D = delegation to AI; G = governance (explicit control markers such as criteria, limits, sources, verification, disclosure, and traceability). Lexicons are reported at the lemma level and list illustrative (non-exhaustive) overrepresented terms (high χ2; p ≤ 0.001). Co-occurrences were estimated within a ±5-word window. A, D, and G are normalized densities per 1000 words. II is a heuristic imbalance indicator computed as A/(D + G + 1), with +1 as a smoothing constant. Prototypical evidence corresponds to high-scoring segments used to anchor interpretation.
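For readers who wish to see these definitions in operation, the sketch below computes A, D, and G as lemma densities per 1000 words and II = A/(D + G + 1) for a single segment. It is a minimal illustration assuming pre-lemmatized text and abridged, illustrative lexicons; it is not the study’s implementation.

```python
# Minimal sketch of the axis metrics defined in the table note:
# densities per 1000 words and the heuristic index II = A / (D + G + 1).
# Lexicon contents and the example segment are illustrative assumptions.

def adg_profile(lemmas: list[str], a_lex: set[str], d_lex: set[str],
                g_lex: set[str]) -> dict[str, float]:
    n = max(len(lemmas), 1)  # guard against empty segments
    density = lambda lex: 1000 * sum(w in lex for w in lemmas) / n
    a, d, g = density(a_lex), density(d_lex), density(g_lex)
    return {"A": a, "D": d, "G": g, "II": a / (d + g + 1)}  # +1 smooths D+G near 0


# Toy segment: heavy agency talk, one delegation marker, no governance markers,
# which pushes II upward, as in the Immediate Personalizer profile.
words = "personalizar adaptar el ritmo y el contenido con la ia".split()
print(adg_profile(words,
                  a_lex={"personalizar", "adaptar", "ritmo", "contenido"},
                  d_lex={"ia"},
                  g_lex={"verificar", "criterio"}))
```

The +1 smoothing keeps the index finite for segments with no delegation or governance markers, which is exactly the configuration in which the illusion of autonomous teacher agency is strongest.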
Table 2. The Technological Literacy Teacher.
%ST: 27.32 | A: 29.29 | D: 7.55 | G: 6.66 | II: 2.0
Associated lexical core:
A-lexicon (agency): teach; guide; explain; interpret; identify; know-how; update; evaluate; literacy; skill.
D-lexicon (delegation): AI; tool; prompt; generate; create; image (AI-generated outputs); presentation; video; application; activity.
G-lexicon (governance): responsible; ethical; correct use; plagiarism; source; verify; false information.
Key co-occurrences: use ↔ AI; teach ↔ AI; tools ↔ AI; AI ↔ technology; use ↔ technology; teach ↔ tools; teach ↔ use; AI ↔ correct; AI ↔ presentations; teach ↔ presentations.
Prototypical evidence:
Score 177.31: “They teach us how to use these tools correctly, how to use prompts, and how to create images and presentations with AI.”
Score 151.12: “The instructor knows a lot about AI tools and teaches us what they can do and how to use them.”
Score 147.36: “They encourage us to use AI, but also to check for plagiarism and to reflect on responsible use.”

Prototypical segments are summarized here; extended evidence (additional segments and axis-level A/D/G lemma lists) is provided in Appendix B.
Note. See Table 1 for metric definitions and computational details (A/D/G/II; lemma-level lexicons; ±5-word co-occurrence window; prototypical evidence).
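The key co-occurrences reported in these tables are counted within a ±5-word window. The sketch below shows a generic windowed pair count under that definition; it is illustrative only and does not reproduce the exact lexicometric routine.

```python
# Minimal sketch of a ±5-word co-occurrence count: every unordered pair of
# lemmas at distance <= 5 within a segment is tallied once. The example
# tokens are taken from a prototypical segment above for illustration.

from collections import Counter


def windowed_cooccurrences(lemmas: list[str], window: int = 5) -> Counter:
    pairs: Counter = Counter()
    for i, w in enumerate(lemmas):
        for j in range(i + 1, min(i + window + 1, len(lemmas))):
            pairs[tuple(sorted((w, lemmas[j])))] += 1  # unordered pair
    return pairs


tokens = "personalizar la enseñanza según las necesidades de cada estudiante".split()
counts = windowed_cooccurrences(tokens)
print(counts[("necesidades", "personalizar")])  # 1: 'personalize <-> needs' at distance 5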
Table 3. The Operational Optimizing Teacher.
%ST: 43.87 | A: 68.91 | D: 18.91 | G: 6.3 | II: 2.69
Associated lexical core:
A-lexicon (agency): organize, plan, explain, grade, evaluate, manage, present, adapt, provide feedback, content.
D-lexicon (delegation): AI, generate, summarize, cite, APA, correct, presentation, task, minute, automate, prompt.
G-lexicon (governance): plagiarism, hallucination, verify, source, accuracy, check, correct use.
Key co-occurrences: make/deliver ↔ presentations; find/get/obtain ↔ information; explain ↔ topic (in a simple way).
Prototypical evidence:
Score 332.05: “Instant answers, more ‘accurate’ information, APA citations, summaries, and overall faster work.”
Score 317.24: “It provides accurate and detailed information, helps explain topics, and supports delimiting content for grading.”
Score 297.14: “Knowing what specifications to give AI to obtain better results; making videos; facilitating research by offering options.”

Prototypical segments are summarized here; extended evidence (additional segments and axis-level A/D/G lemma lists) is provided in Appendix C.
Note. See Table 1 for metric definitions and computational details (A/D/G/II; lemma-level lexicons; ±5-word co-occurrence window; prototypical evidence).