Article

AI-Facilitated Lecturers in Higher Education Videos as a Tool for Sustainable Education: Legal Framework, Education Theory and Learning Practice

by Anastasia Atabekova, Atabek Atabekov * and Tatyana Shoustikova *
Law Institute, Peoples’ Friendship University of Russia named after Patrice Lumumba, 117198 Moscow, Russia
* Authors to whom correspondence should be addressed.
Sustainability 2026, 18(1), 40; https://doi.org/10.3390/su18010040
Submission received: 9 November 2025 / Revised: 5 December 2025 / Accepted: 9 December 2025 / Published: 19 December 2025

Abstract

The study aims to establish a comprehensive framework aligning institutional governance, pedagogical theories, and teaching practice for the sustainable adoption of AI-facilitated digital representatives of human instructors in higher education videos within universities. The study employs a systemic qualitative approach and grounded theory principles to analyze administrative/legal documents and academic publications. The methodology includes source searching and screening, automated text analysis using the Lexalytics tool, clustering and thematic interpretation of the findings, and a subsequent discussion of the emerging perspectives. Following the analysis of international/supranational/national regulations, the findings reveal a significant regulatory gap for humans’ digital representatives in educational videos and suggest a governance baseline for tailored institutional guidelines that address data protection, copyright, and ethical compliance. Theoretically, the study synthesizes evidence-informed educational theories and concepts to form a robust theoretical foundation for using humans’ digital representatives in higher education instructional videos and identifies constructivism, student-centered personalized learning, multimodal multimedia-based learning principles, smart and flipped classrooms, and post-digital relations pedagogy as crucial foundational concepts. The findings suggest a thematic taxonomy that outlines diverse digital representative types, their varying efficiency based on knowledge and course type, and university community attitudes highlighting benefits and challenges. The overall contribution of this research lies in an integrated interdisciplinary framework—including the legal context, pedagogical theory, and promising practices—that guides the responsible use of digital human representatives in higher education videos.

1. Introduction

The advancement of artificial intelligence (AI) is reshaping the educational landscape, with new tools altering how teaching is designed, delivered, and scaled. Both policy-makers and researchers acknowledge the intersection of AI and the sustainable development goals and view the AI phenomenon as an “operationalizing technology for a sustainable future” [1], within which digital representatives of human teachers and lecturers have emerged as an emphasized element in sustainability and cross-sector growth [2] and as a salient innovation with potential implications for SDG 4 (quality education, equity, and access), SDG 9 (innovation and sustainable technological progress), and institutional resource efficiency.
With reference to the above, higher education institutions, which perform as foundational systems for global sustainable development, require both empirical investigation and theoretical revision of pedagogical foundations, policies, and practices. Among various innovations, AI-facilitated digital representatives of humans (instructors/teachers/lecturers) have emerged as a significant development within educational practices. This phenomenon encompasses digital avatars of human teachers and humanoid animated teachers [3], AI chatbots for educational support [4], AI-generated pedagogical agents [5], AI as humanized agents [6], pedagogical AI conversational agents [7], synthetic humanlike spokespersons [8], and AI avatars as digital humans produced based on videos of real human behavior [9].
The benefits of integrating AI lecturers are acknowledged across sectors, including both business [8,10,11] and education [12,13,14]. Academia [15] suggests considering digital teachers for economic and social reasons, including the global teacher shortage, the increase in student enrollment, and the global character of learning [16]. Researchers have identified the potential contribution of digital twins to enhance programming instruction and to inform educational planning [17]. Scholars also advocate for a next-generation university learning architecture predicated on the integration of digital twins into the campus environment; their work also tends to emphasize operational and technological issues [18].
However, institutions are approaching this technology in classroom settings with caution [19]. Policy-makers and practitioners note a regulatory gap for digital twins’ cross-sectoral advancement [20,21], which extends to the educational sector and leads institutions to implement digital twins in an ad hoc manner. Normative dimensions of AI-facilitated digital representatives of humans (identity, consent, authorship, and labor rules) invite a shift from purely epistemic considerations to both a tailored, nationally adopted legal framework and tangible societal implications [22]. Educators emphasize that, as of 2025, higher education encompasses a range of instructional approaches incorporating generative AI and digital twins that are often treated in fragmented, isolated ways outside institutional policy [23] and utilized without a comprehensive theoretical background [24]. Scholars call for a comprehensive theoretical scaffold within the theory of education to guide the integration of AI-facilitated representatives of human teachers into institutional architectures [25]. Importantly, empirical classroom evidence indicates that students continue to value authentic human interaction alongside digital tools for learning [25,26,27].
In sum, although AI-facilitated representatives of humans are widely heralded for their potential to advance sustainability goals and enrich educational practice, the field faces critical gaps.
Given the above, the present research aims to provide a comprehensive framework that integrates institutional governance, educational theory, and learning/teaching practices to support the sustainable adoption of digital representations of human lecturers in educational videos. This framework seeks to lay the foundations for a holistic, safe, equitable, and pedagogically coherent implementation of this phenomenon within university settings. To accomplish this objective, the research addresses the following tasks:
(1)
Mapping existing international/supranational/national regulations to derive an institutional governance baseline for using humans’ digital representatives in higher education instructional videos;
(2)
Synthesizing evidence-informed educational theories and concepts to form a robust theoretical foundation for using humans’ digital representatives in higher education instructional videos;
(3)
Systematizing current institutional practices into a taxonomy of themes discussed within current empirical studies on humans’ digital representatives in higher education instructional videos.

2. Materials and Methods

The study integrates theoretical and empirical studies and employs a systematic qualitative approach and grounded theory. This perspective has been chosen because qualitative research emphasizes the understanding and interpretation of individual and collective experiences within lived societal contexts. Adhering to qualitative research traditions, this study utilizes administrative and legal documents as primary sources and academic publications on the theme under study as secondary sources. The analysis also follows grounded theory principles, which involve gathering data on documented societal perspectives and experiences and interpreting them through inductive reasoning within up-to-date, domain-specific societal contexts [28].
The methodology was structured into the following stages:
Search strategy and data collection;
Document screening and selection;
Text pre-processing;
Automated text analysis;
Thematic interpretation.

2.1. Search Strategy and Data Collection

A comprehensive search strategy was developed to collect data along several different pathways. Considering that AI-enhanced educational videos featuring digital twins became widely available to the public alongside GPT technologies in 2021–2022, the timeline for search activities was defined as 2021–2025. Nonetheless, the present paper also references the latest research findings that contain pertinent statements and arguments.
As the first task, a diverse array of international and national sources was analyzed. The databases of the official sites of international and regional organizations that have produced documents on the topic under consideration, namely the UN (https://documents.un.org/), the EU (https://data.europa.eu/en), the OECD (https://data-explorer.oecd.org/), and ASEAN (https://asean.org/legal-instruments-database/ (accessed on 1 November 2025)), were searched. The search queries were adapted to the mentioned databases and used varied combinations of the phrases “AI-facilitated representatives of human teachers/lecturers” and “Digital twins of human teachers/lecturers” using database-specific syntax.
The search was primarily limited to English-language data, as English is the dominant language of international communication. In the case of Russia, due to the authors’ affiliation and data relevance, materials in Russian were also considered. Initially, 86 texts were found with reference to the UN, EU, ASEAN, and OECD policies in the field under study. The national track sought to include elements of national laws from various countries across different continents that are recognized as leaders in the field. Initially, 95 texts related to AI and digital representatives/twins or avatars of humans were found.
For tasks two and three, academic research data from various countries was examined to clarify international academic perspectives on digital representatives of humans in educational videos. The analysis included titles and abstracts from academic sources published between 2021 and 2025. The relevant literature was sourced from Google Scholar. Initially, the search was conducted using the keywords “digital twins/avatars/teachers” with the additional filters “educational/instructional videos”. The search yielded approximately 16,900 results in just 0.012 s. However, an advanced search through filters tailored to the research subject limited the data to 576 items.
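To make the query construction step more transparent, the sketch below (not the authors’ actual scripts) shows how the phrase families described above could be combined into query strings; the phrase lists and the AND pattern are illustrative assumptions, since each database ultimately required its own syntax.

```python
# Illustrative sketch: generating query-string variants from the phrase
# families described in Section 2.1. The lists and operators are assumptions;
# each database required its own adapted syntax in practice.
from itertools import product

subject_phrases = [
    '"AI-facilitated representatives of human teachers"',
    '"AI-facilitated representatives of human lecturers"',
    '"digital twins of human teachers"',
    '"digital twins of human lecturers"',
]
context_filters = ['"educational videos"', '"instructional videos"']

def build_queries(subjects, contexts):
    """Combine every subject phrase with every context filter."""
    return [f"{s} AND {c}" for s, c in product(subjects, contexts)]

if __name__ == "__main__":
    for query in build_queries(subject_phrases, context_filters):
        print(query)
```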

2.2. Document Screening and Selection

The documents were managed using the reference management software Zotero 7 for Windows. Duplicate records were removed using automated tools within Zotero. The authors screened the titles of the documents as well as the titles and abstracts of the research publications. The exclusion criteria covered documents whose provisions did not apply to AI-facilitated representatives of humans in educational settings and academic sources other than empirical and review articles. Texts unrelated to the topic (AI tools with no reference to digital representatives of humans in educational videos, videos with digital twins for training outside higher education settings, or purely technological issues of digital twins’ creation) were also excluded. After removing the duplicate content, 26 texts were selected out of the initial 86 that referred to the UN, EU, ASEAN, and OECD policies. Accordingly, the initial 95 texts introducing pieces of national legal acts from North American, Latin American, European, and Asian countries specifically addressing the topics resulted in a final list of 37 texts. Regarding the regions’ and countries’ data, it should be noted that the EU AI Act is a binding tool for all EU member states and thus represents their consolidated approaches and practices. As far as the academic publications are concerned, 187 full texts were selected for further investigation.
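As an illustration of how such exclusion criteria could be operationalized as a pre-filter before manual review, the following sketch applies a keyword-based check; the actual screening was performed manually in Zotero, and the field names and marker phrases here are purely hypothetical.

```python
# Hypothetical keyword-based pre-filter mirroring the exclusion criteria in
# Section 2.2; the real screening was done manually by the authors in Zotero.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    source_type: str  # e.g., "empirical", "review", "editorial"

EXCLUDE_MARKERS = (
    "outside higher education",
    "industrial digital twin",      # purely technological twin creation
    "manufacturing digital twin",
)

def passes_screening(rec: Record) -> bool:
    """Keep empirical/review items on digital representatives of humans in videos."""
    text = f"{rec.title} {rec.abstract}".lower()
    if rec.source_type not in {"empirical", "review"}:
        return False
    if any(marker in text for marker in EXCLUDE_MARKERS):
        return False
    return "video" in text and any(
        term in text for term in ("avatar", "digital twin", "digital representative")
    )
```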

2.3. Text Pre-Processing

The final set of selected sources underwent a series of steps to optimize them for automated analysis. All texts were converted into plain text (txt) format. All non-textual elements as well as boilerplate text (e.g., the reference sections) were removed. The predetermined coding was applied manually and covered legal concepts and branches of law for the document analysis, as well as educational theories and approaches, digital representatives of human lecturers in videos, types of educational activities, and human attitudes toward the phenomenon under study.
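A minimal sketch of how the cleaning step could be automated is given below, assuming plain-text exports and a simple heuristic for locating the reference section; the predetermined codes themselves were applied manually and are therefore not shown.

```python
# Minimal cleaning sketch for Section 2.3 (assumed heuristics, not the authors'
# exact procedure): strip reference sections, caption lines, and stray whitespace.
import re
from pathlib import Path

def clean_text(raw: str) -> str:
    # Drop everything from a "References" heading onward (boilerplate).
    raw = re.split(r"\n\s*references\s*\n", raw, flags=re.IGNORECASE)[0]
    # Remove figure/table caption lines left over from conversion (assumed pattern).
    raw = re.sub(r"^(figure|table)\s+\d+.*$", "", raw,
                 flags=re.IGNORECASE | re.MULTILINE)
    # Collapse whitespace left over from PDF-to-text conversion.
    return re.sub(r"\s+", " ", raw).strip()

def preprocess_corpus(src_dir: str, dst_dir: str) -> None:
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.txt"):
        cleaned = clean_text(path.read_text(encoding="utf-8", errors="ignore"))
        (Path(dst_dir) / path.name).write_text(cleaned, encoding="utf-8")
```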

2.4. Automated Text Analysis

The texts of each track were organized into a separate text corpus. The Lexalytics text analyzer tool (version 6.5) (https://www.lexalytics.com) was utilized for computer-assisted text analysis, thematic coding, semantic concept extraction, and assessment of concept frequency in the text corpus for further clustering. This methodology is commonly applied to themed data, supporting both its identification and the extraction of meaningful content components [29].
Several Lexalytics modules and configurations were employed. The Core NLP Pipeline was used for tokenization. The concept extraction module was employed to identify overarching themes and concepts. The concept weighting was configured to prioritize noun phrases and multiword expressions with higher frequency across the corpus.
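Since the Lexalytics modules are proprietary, the sketch below offers only a rough open-source approximation of the concept-weighting idea (multiword expressions ranked by corpus frequency); it does not reproduce the tool’s actual pipeline.

```python
# Rough approximation of frequency-based concept weighting (Section 2.4);
# not the Lexalytics algorithm, merely an illustration of the idea.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "in", "for", "to", "on", "with"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z][a-z\-]+", text.lower())

def candidate_phrases(tokens: list[str], n: int = 2):
    """Yield n-grams that do not start or end with a stopword."""
    for i in range(len(tokens) - n + 1):
        gram = tokens[i:i + n]
        if gram[0] not in STOPWORDS and gram[-1] not in STOPWORDS:
            yield " ".join(gram)

def top_concepts(corpus: list[str], n: int = 2, k: int = 20):
    counts = Counter()
    for doc in corpus:
        counts.update(candidate_phrases(tokenize(doc), n=n))
    return counts.most_common(k)

if __name__ == "__main__":
    corpus = ["digital twins of human lecturers appear in instructional videos",
              "students evaluate digital twins in instructional videos"]
    print(top_concepts(corpus, n=2, k=5))
```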

2.5. Thematic Interpretation and Validation

The automated text analysis produced a set of conceptual clusters derived from word frequency, which served as an indicator of the relevance of the respective concepts and themes. The initially predetermined themes were then specified by grouping semantically similar concept entities generated through the automated text analysis. The English-based text corpora were also analyzed with QDA Miner Lite (version 4.0) (https://qda-miner-lite.software.informer.com/4.0/) to assess the robustness of the identified themes. A high degree of thematic overlap (>80% similarity in the top 10 concepts for documents and in the top 20 concepts for academic sources) was observed, indicating the reliability of the core findings. The thematic interpretation aimed to identify common grounds (as opposed to country-specific features) for the first and second research tasks and to encompass evidence-based educational practices regarding the use of humans’ digital representatives in instructional video lecturing across countries.
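The cross-tool robustness check can be expressed as a simple set overlap over the top-N concept lists, as sketched below; the concept labels in the example are invented for illustration and are not the study’s actual outputs.

```python
# Sketch of the cross-tool robustness check described in Section 2.5: share of
# tool A's top-n concepts that also appear in tool B's top n. Labels are invented.
def top_n_overlap(concepts_a, concepts_b, n):
    """Return the fraction of overlapping concepts among the top-n of each tool."""
    set_a = {c.lower() for c in concepts_a[:n]}
    set_b = {c.lower() for c in concepts_b[:n]}
    return len(set_a & set_b) / n if n else 0.0

# Hypothetical outputs from the two tools (purely illustrative concept labels).
lexalytics_top = ["data protection", "copyright", "informed consent", "ai avatar",
                  "personal data", "digital twin", "ethics", "student data",
                  "licensing", "transparency"]
qda_miner_top = ["data protection", "copyright", "consent", "ai avatar",
                 "personal data", "digital twin", "ethics", "student data",
                 "liability", "transparency"]

overlap = top_n_overlap(lexalytics_top, qda_miner_top, n=10)  # 0.8 in this example
print(f"Top-10 concept overlap: {overlap:.0%}")
```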
The above steps enabled the authors to pinpoint key themes in line with the research tasks.

3. Findings and Interpretation

This section provides the data and the respective interpretation of each research task.

3.1. Regulatory Framework (International to National) for Using Humans’ Digital Representatives in Higher Education Instructional Videos

The investigation of sources has resulted in a structure of interrelated clusters that introduces key legal concepts and areas of current legislation relating to the institutional governance of AI-facilitated representatives of human lecturers in educational videos. The structure is introduced in Figure 1. The percentages reflect the frequency of cluster themes and key concepts in the analyzed sources, although these values may vary when additional sources are examined in future studies.
These key topics are traced across international, supranational, and national laws. Table 1 illustrates how these topics are expressed in the respective documents.
The analysis of the materials has revealed that no single, comprehensive legal act or document specifically regulating the production and use of digital representatives of humans in educational settings—whether in general or in relation to videos in particular—exists at the international, supranational (regional), or national levels across continents and countries. The respective rules concerning the production and use of digital representatives of humans in educational videos are mapped within more general regulations. Some data and examples to support this thesis are introduced below.
International regulations include guidelines and principles established by the UN that cover a number of areas related to the research themes, including the following:
Protection and privacy policies for the Secretariat, the UN internal law [71];
Principles of safe and ethical AI use and requirements for the responsible use of AI by developers and operators [72];
Human privacy protection [30];
Issues of quality education and digital literacy [58];
Human rights protection amid digitalization, including issues related to women’s access to technologies and STEM education [50];
Enhanced access to the internet, inclusion and increasing investment in ICT R&D [40];
AI competency frameworks for teachers and students regarding AI capacity awareness and the meaningful use of AI technologies [51].
Another example of an international approach is provided by the OECD through its Recommendation of the Council on Artificial Intelligence [31], which adopts a comprehensive perspective and underscores human rights and privacy protection, the “responsible stewardship of trustworthy AI”, the promotion of “safe, fair, legal sharing of data”, the development of “human capacity”, and the enhancement of “social dialogue”.
The supranational (regional) level also reveals practices of a generalized approach to the phenomenon under study.
The EU is developing a regulatory framework that covers the use of AI and digital twins, with an emphasis on human rights and digital security. Regulation (EU) 2016/679 [32] safeguards individuals’ personal data and applies to data controllers and processors, irrespective of where the processing takes place. The data subject has the right to access, delete (particularly in relation to virtual environments involving avatars or deepfakes), choose profiling options, correct data, restrict processing, and object to processing. The Digital Services Act [33] and the Digital Markets Act [34] require metaverse operators to protect personal data, remove illegal content, disclose online advertising, and ensure technical interoperability and third-party access to data. Data governance frameworks, including the Data Act [73], establish cybersecurity by logging who can access and control the data and by affirming the users’ right to access and manage the information they generate. Directive 2005/29/EC [62] tackles unfair commercial practices, while Directive 2011/83/EU [63] governs anti-competitive behavior and consumer protection within the digital landscape. The Audiovisual Services Directive 2010/13/EU [52] and the Children’s Rights Directive 2011/92/EU [53] apply to the protection of children, parental control, and the prohibition of destructive content. More specifically, the EU Artificial Intelligence Act [41] sets out requirements for AI providers, including disclosure of source code, internal procedures, and administrative liability for violations. Technology Report No. 1 [66] addresses issues related to smart glasses. Guidelines 02/2021 [67] concretize the realm of virtual voice assistants. Guideline 05/2022 [68] focuses on facial recognition in policing. Finally, Guideline 3/2022 [69] addresses dark patterns in the interfaces of social media platforms. Under the integration of the above legal sources, the availability of digital avatars is strictly regulated, but the copyright of avatars requires a focus on national legislation. The above regulations are binding for EU member states; therefore, they provide a solid basis for national activities, including those related to AI-facilitated videos with digital representatives of human lecturers. Nevertheless, it seems critical to mention examples of national legislation in this regard. For instance, Germany’s DSK guidance [74] specifically focuses on human oversight and consistent consent regarding data usage.
Another example of regional legal practice can be found within the activities of the Association of Southeast Asian Nations (ASEAN). The organization adopted the Guide on AI Governance and Ethics [42], which follows generalized approaches (e.g., human centricity, privacy and data governance, integrity) and specifies risk issues, levels of human involvement (human-in/over/out-of-the-loop), a shared responsibility framework, and other relevant issues. These issues do not explicitly refer to digital representatives of humans in education, although they are relevant for the respective theme in terms of a generalized legal framework.
National regulations specify a particular country’s vision. The research has collected extensive country-contextualized data; the following section highlights examples of countries whose legislation demonstrates diverse approaches to the theme under study, albeit without a systematized and comprehensive legal framework.
In the United States, as a major representative of the North American continent, federal laws establish civil and criminal liability for the misuse of avatars in cases of humiliation and harassment through digital copies; intentionally causing physical harm or provoking an armed/diplomatic conflict; fraud using identity; or interference by a foreign state or agent in a domestic policy debate [39,75]. At the state level, the main prohibitive area is the use of avatars for illegal pornographic content [76]. New York regulates contracts for the creation and use of digital copies [61], including cases where the contract is concluded without a professional lawyer or a union. Entertainment industry unions [77] have developed provisions that include performer consent for copying, restrictions on the use of such copies, requirements for their storage, and proper compensation for AI-generated content. The position of the trade unions allows the use of avatars without a complete replacement of a person. However, there is no special law regulating educational activities involving digital avatars at either the federal or state level.
In Argentina, representing the Latin American continent, the use of digital avatars in education is regulated by a combination of general rules on data protection [35]: the Law [54] and guidance [43] on the safe use of AI jointly provide a legal basis for data processing, ensure data minimization, and require that parents and students be informed about the purposes of using avatars, the types of information collected, and the storage periods. Regulators also require compliance with educational programs and ethical standards as well as monitoring the effectiveness and impact on the educational process.
The Asian–Pacific region introduces common practices and specific approaches.
Singapore maintains a sector-specific approach to AI governance and can regulate the use of digital avatars in education through a combination of privacy, data protection, cybersecurity, and education standards. The core is the Personal Data Protection Act [38], which requires explicit consent for the collection and use of students’ educational data, restrictions on processing and storage, and the right to access and delete information. The Guide on Synthetic Data Generation [46] addresses ethical and technical issues of AI training, as complemented by the Guidelines on Securing AI Systems [45]. Several regulations address ethical and technical issues related to AI training, the use of generative AI in judicial and healthcare settings, and protection against fakery and manipulation in the metaverse [60]. However, the above-mentioned acts do not contain a reference to digital twin technology, thus relying on the Model AI Governance Framework for Generative AI [47].
In Japan, the use of digital avatars in education has no dedicated legal framework and is regulated by a combination of general laws on personal information, cybersecurity, and intellectual property, as well as sectoral guidelines [78]. The main legal framework is the Act on the Protection of Personal Information [37]. More specifically, with reference to virtual reality, there are guidelines for the creation and operation of virtual reality environments, including those related to the use of cultural assets for tourism purposes [79], as well as a report by the Japanese Interior Ministry on future opportunities and challenges associated with virtual spaces [64]. Another pool of civil law legislation regulates copyright issues with reference to digital representatives of humans in the metaverse. The examination of sources reveals that Japan’s regulation tends to be permissive [48], enabling educational institutions and teachers to employ this technology within the existing legal standards, as updated in the Act on Promotion of Research and Development, and Utilization of AI-Related Technologies [49].
In China, the Cybersecurity Law [80] provides a general regulatory framework for operators and users of digital technologies, focusing on the requirements to protect and preserve original data. The Interim Measures for the Management of Generative Artificial Intelligence Services [44] require the legality of data used for training AI and prohibit violations of intellectual property rights and the illegal distribution of personal data. The Data Security Act [64] and the Personal Information Protection Act [36] explain the procedure for interacting with individuals when employing digital technologies; these documents can be applied to teachers’ AI-facilitated video representations. The Regulation on the Management of Deep Synthesis of Internet Information Services [81] can be directly applied to govern relations regarding the use of these technologies. The recent Measures for Identifying Artificial Intelligence-Generated Synthetic Content [59] require that synthetic content in the data provided to audiences/clients be explicitly identified, complementing the recent Empowerment Initiative to Promote the Deep Integration of Intelligent Technology with Education Teaching and Scientific Research [56]. In sum, while a body of laws and regulations serves as the background for regulating digital avatar technologies, there are no specialized national norms for educational activities.
Meanwhile, against the backdrop of the above generalized and sector-specific rules on AI in society and education, China’s Empowerment Initiative to Promote the Deep Integration of Intelligent Technology with Education Teaching and Scientific Research [56] and Australia’s Gen AI Strategies for Australian Higher Education [55] demonstrate a tailored approach to a Gen AI framework in education that can regulate diverse AI-facilitated educational content. Such a framework is more thematically focused when it refers to the digital representatives of humans in educational videos.
In Russia, standing at the crossroads of Europe and Asia, the regulation of digital avatars is considered in two areas of civil law [65] (Articles 152.1 and 152.2): realistic images (digital photographs, video recordings, or works of fine art depicting realistic facial portraits) and unrealistic images (including a user’s digital avatar, graphic or holographic network representations, AI-generated characters, or digital twins employed as communication systems for products and their components). The legislation ensures the protection of the rights of digital avatar owners based on the type of user (e.g., an individual or a legal entity). For individuals, civil law protection is based on confidentiality as defined by personal data and information laws. Ownership of a digital avatar is addressed through copyright law [65] (Article 1259) and through the user or license agreements of content-generating services. For legal entities, protection involves the individualization of goods and service marks, regulated by the Civil Code [65] (Articles 1477 and 1481). In the educational sphere, the legislation and by-laws of the Ministry of Education and Science of the Russian Federation apply. Article 16 of the Law on Education [57] does not directly regulate the use of digital twins, but it allows the use of e-learning, which, in turn, permits the use of digital avatars in educational activities but does not provide for a complete replacement of the teacher. Issues of ownership and responsibility for a digital avatar are resolved at the level of general legal norms and local acts of the educational organization.
The above evidence leads us to discuss a list of general provisions for institutional governance regarding the digital representatives of human teachers in educational videos. These representatives operate as teaching tools, virtual teachers, and interactive characters. The institutional governance of their use in education should cover their creation and use and the protection of student data, as well as the commercial and copyright components of the content. Meanwhile, the respective regulations should address a number of aspects, which are elaborated in the Discussion section.

3.2. Evidence-Informed Theoretical Foundations for Using Humans’ Digital Representatives in Higher Education Instructional Videos

The investigation of sources that can form the theoretical background for using digital representatives of human teachers within higher education video settings has resulted in a cluster structure, introduced in Figure 2.
The identified clusters characterize theoretical dimensions and their concepts that academic studies currently employ when considering the issue of AI-facilitated representatives of human lecturers in educational videos. The percentages indicate the frequency of cluster themes and key concepts in the analyzed sources, although values may vary as additional sources are examined in future studies.
First, it should be noted that some researchers focus on the philosophical dimension when considering AI-facilitated tools in education. These researchers adhere to a strong philosophical and ethical foundation that underscores the need to respect students’ identities, mental well-being, equitable learning opportunities, and inclusiveness [82]. Further investigation of the literature reveals that some educators consider the philosophy of constructivism as a theoretical lens through which educational approaches to digital representatives of humans in education are analyzed [83]. This approach allows pedagogical theory to justify the role of digital representations of humans in accordance with the historical paradigm of constructivism, which is rooted in the following:
Rousseau’s seminal philosophy of education articulated in Emile [84], which paved the way to the theory of creative, individualized, and experiential learning;
Contributions of Dewey [85], who championed active engagement in learning;
The heritage of Vygotsky [86], whose sociocultural theory underscored the role of interaction in cognitive development;
Piaget’s school of thought [87], which highlighted the importance of developmental stages in constructing knowledge.
Thus, by aligning the constructivist, experiential, and sociocultural philosophies from Rousseau through Dewey, Vygotsky, and Piaget, contemporary AI-facilitated representatives of humans in instructional videos can be understood as a technologically mediated continuation of human-centered pedagogy within the current digitalized landscape. It also seems relevant to consider the philosophy of artificial intelligence, including its accounts of learning and the exploitation of knowledge [1].
The collected data further identifies student-centered and personalized learning as key concepts of the educational dimension for conceptualizing AI-facilitated representatives of human lecturers within contemporary pedagogy. Current academic practices use them to ensure both academic achievement and personal development [88]. We argue that this angle can serve as a critical element when digital representatives of human lecturers are considered, as contemporary scholarship increasingly posits that the progression of personalized learning is inextricably linked to advancements in artificial intelligence. Kaswan et al. [89] contend that the integration of AI possesses transformative potential for configuring instructional design by facilitating the creation of tailored educational experiences that are meticulously attuned to the cognitive and affective needs of individual learners. Thus, the above-mentioned concepts serve as the theoretical underpinning that underscores the educational relevance of AI-facilitated representatives of human lecturers for enabling finely tailored instructional experiences that respond to learners’ cognitive, emotional, and developmental needs.
In terms of the cognitive dimension, knowledge delivery and processing are key issues, and the integration of AI into education is considered through the lens of its multimodality [90] and multimedia learning principles [91]. Scholars [92] contend that AI-facilitated tools enhance knowledge reception and processing as they integrate several sources (unlike the single-source structure of traditional education, both in conventional classroom and online environments) and ensure tailored cognitive load management, thereby improving learning efficacy [93]. Thus, it is timely to situate video-based AI-facilitated representatives of human lecturers within these theoretical frameworks.
Regarding the organizational dimension (learning process), the empirical investigation highlights that the smart classroom concept offers a productive theoretical frame, as it explicitly integrates physical classrooms’ social and spatial affordances with digitally mediated learning resources and analytics. Studies show that learners appreciate smart classrooms based on factors such as technology readiness, ease of use, social emotional benefits, educational value, and trust [94]. Moreover, smart classrooms possess the capability to store, collect, compute, and analyze students’ data via automated assessment technologies. This functionality enables customized tutoring for students and real-time feedback for teachers. As a result, smart classrooms contribute to enhanced classroom interactions and improved management practices, ultimately fostering a more engaging and effective learning environment [95]. Conceptually, smart classrooms function as socio-technical ecologies.
Another track of the conceptual organization of the learning process with reference to AI integration refers to the stages and formats of knowledge delivery. In this regard, the empirical data reveals a notable value of the flipped classroom learning and the effectiveness of video content therein [96]. Empirical studies explain that the flipped learning system fosters students’ self-regulated learning behaviors, permits learners to engage with introductory content at their own pace, supports personal knowledge reconstruction, and enables students to identify issues requiring additional support from their teachers [97].
Regarding the instructional tools, the analysis has revealed a particular relevance of video content. There is evidence that students increasingly utilize video content for learning purposes, thus prompting educators to meet these evolving learning preferences [98]. The international institutional vision acknowledges that current AI technologies for video generation facilitate the conversion of presentations, websites, and text content into video format [99]. Within this angle, scholars underline the empirically confirmed preference for short videos instead of traditional lecture lengths. Educators acknowledge that it is timely to consider the use of AI-generated synthesized videos, as it saves both time and money and allows the teachers to refine and update materials in line with students’ feedback and needs without constant re-recording [100].
While exploring the theoretical foundations of the use of AI-facilitated videos with digital representatives of humans for instructional purposes, the collected data foregrounds the psychological dimension of this phenomenon in terms of student–teacher interaction. Some academic publications specifically focus on teacher–student interactions as the cornerstone of modern, digitalized pedagogy [101]. Recent data confirms that student audiences value human actors, i.e., the presence of a teacher as an instructor, the collaboration among peers, and interactive relations between students and teachers, over the learning content [102]. In addition, with reference to teacher–student collaborations within digital settings, the growing importance of the teacher’s conversational style is underlined [103]. More specifically, empirical findings reveal that in online blended learning, learners report higher perceptions and satisfaction when the instructor delivering the content demonstrates social and cognitive presence [104]. These data also confirm the relational affordances of AI-facilitated videos featuring digital human representatives for instructional purposes.
The above findings and their interpretation confirm the timeliness of examining digital representatives of human lecturers in synthesized videos within the theoretical landscape of higher education and of integrating fragmented evidence into a comprehensive theoretical background encompassing philosophical, educational, cognitive, organizational, instructional, and psychological dimensions.

3.3. Taxonomy of Themes Discussed Within Current Empirical Studies on Humans’ Digital Representatives in Higher Education Instructional Videos

The analysis of the materials related to the third task—examining the taxonomy of themes discussed in current empirical studies on digital representatives of humans in higher education instructional videos—has revealed several clusters within this domain. The data is introduced in Figure 3. The percentages reflect the frequency of themes in the analyzed sources, although these values may vary when additional sources are examined in future studies.

3.3.1. Technological Diversity of Digital Representatives of Human Lecturers in AI-Facilitated Videos

Current strands of applied educational research and practice [105] employ various technological types of digital representatives of human teachers now used in AI-facilitated videos, including the following:
A digital twin of a real person, replicating the individual’s voice, appearance, and gestures;
An avatar as a virtual image resembling a social character or a member of a particular profession, with AI-generated personal features;
An avatar as a fictional character or an android-like figure, with voice and intonation generated by a neural network or modeled after those of a well-known person;
A cartoon-style avatar.
Such technological diversity explains why students’ perceptions of different avatar types constitute one of the key research domains. Empirical findings indicate students’ consistent preference for realistic avatars over cartoon-like ones. According to researchers, the use of avatars exhibiting genuine behaviors holds significant promise for enhancing teaching and learning experiences within the metaverse [106].
However, critical remarks from students mention the digital lecturers’ lack of responsiveness to students’ needs and inquiries, which impedes truly interactive learning. When evaluating digital lectures, students strongly appreciate naturalness, authenticity, and interactivity, highlighting that these issues should be considered in further development [15]. Scholars confirm that students demonstrated noticeably greater engagement when the digital avatars appeared to be “more human-like”, which was their preferred choice [107].
This situation seems logical to the authors of the present study, as human learners value collaboration and communication. Students’ concerns regarding digital representatives of lecturers stem from their aspiration for personalized interactions and their need for flexibility due to diverse learning styles. Therefore, the current data confirms the importance of the digital personage’s anthropomorphism for its teaching functions. Increased responsiveness and personalized avatars may be considered promising directions for future research.

3.3.2. Digital Representatives of Human Lecturers in AI-Facilitated Videos in Relation to the Type of Knowledge, Course, and Delivery Style

While confirming the potential of interactive avatars for personalized and intelligent learning [108], current studies also indicate an interdependence between the type of avatar and the type of knowledge to be transmitted, as well as the course type [100]. Research by Gao et al. reveals that “digital teachers with a cartoonish appearance are more effective for delivering explicit knowledge, whereas digital teachers with a realistic appearance excel in conveying tacit knowledge” [109]. There is empirical evidence that audiences who watched declarative knowledge videos featuring human avatars exhibited greater visual engagement and attained better learning outcomes. Conversely, procedural videos enhanced participation in lectures as well as increased emotional involvement, satisfaction, and a sense of social presence [3]. Another study [110] shows that students consider AI avatars suitable for tasks such as lecture delivery, where the goal is primarily information transfer and the context relies less on the personal experiences of individual instructors.
The authors of the present study concur with scholars who argue that digital twins are particularly valuable for fields where experimental and hands-on competences are highly valued and less dependent on emotionally sensitive and human-centered perspectives [111], especially for applied and procedural knowledge typically imparted through demonstration and experiential learning [112]. In contrast, such twins may not be suitable for teaching content and activities that are more complex and require the facilitation of a human teacher. When students believe that additional clarification is required for complex topics and assignments, they express skepticism about an AI avatar’s ability to act as effectively as a human would. In this case, teacher facilitation and support are preferable. Although students typically enjoy engaging elements such as the speaking avatar, vibrant background, and interactive aspects of online tools, they encounter a number of problems, especially in areas they need to tackle in order to better master their discipline of study [113]. Researchers analyzing avatar-mediated videos argue that AI-created instructors employing conversational techniques greatly improve socio-emotional engagement and show a similarly positive impact on learning activities and cognitive load compared to human teachers [114].
The above leads the authors to argue that digital representations of lecturers in videos can be useful for lectures on theoretically focused subjects and conceptual courses. However, the requirements for fully operational artificial human lecturers and AI-facilitated educational videos remain a subject for further academic research and technological development.

3.3.3. University Community Attitude to Digital Representatives of Human Lecturers in AI-Facilitated Videos

The collected data provides empirical evidence regarding students’ and teachers’ positive attitude to AI-powered digital twins in educational videos for a number of reasons. These include the following:
Such a digital participant in the educational process can offer feedback, facilitate interactive discussions, customize teaching approaches, and provide contextual responses, thereby enhancing educational management and addressing learners’ most challenging questions [115];
“Hyper-realistic avatars enhance the sense of embodiment…and presence” [116];
AI-powered interactive video avatars improve consultations in university courses and increase the effectiveness of conveying technical information and content [117];
Avatars can enhance cultural diversity in the classroom as students positively evaluate avatars designed to reflect cultural nuances [118];
Digital twins reduce the time and cost of production and support collaboration and interaction, in contrast to learning solely from textbooks [119].
However, while highlighting the benefits of AI delivering knowledge in an immersive way, students reflected on the following challenges:
The less social and personal nature of the AI-generated knowledge presenter [120].
Concerns about socio-technological aspects, including the risk of AI promoting biased perspectives, providing incorrect information, and encouraging overreliance on technology that may diminish the importance of human support and interaction [121].
Concerns regarding classroom administration, developmental support, technical issues, reduced interpersonal collaboration, and limitations in cultivating liberal attainment when instruction is mediated by a virtual teacher [5].
Difficulties in maintaining attention, along with the absence of tone, modulation, and physical or vocal characteristics—elements of emotional authenticity and social capability that remain challenging for avatars to reproduce [122].
Some research reveals no notable differences in academic outcomes between human and humanoid animated pedagogical avatars in video lectures [3]. Meanwhile, other empirical data suggests that students found environments with AI avatars harder to use [123] and reveals learners’ preference for human tutors [124].
In terms of technology, stakeholders involved in incorporating AI-facilitated videos with digital representatives of human teachers point to implementation costs, technological complexity, privacy concerns, partial academic opposition to change, training needs, the digital divide, and scalability challenges [119].
In sum, researchers across geographically diverse academic communities seem to be unanimous in their view that AI-facilitated educational videos featuring digital representatives of human teachers must align with learning design and support educational goals [125]. Moreover, digital twins in education should be approached from a comprehensive perspective that integrates legal, ethical, administrative, and pedagogical considerations—factors that, according to the data, require a reasonable balance in their use within educational materials [126].

4. Discussion: Academic Perspectives of Digital Representatives of Human Teachers in Higher Education Videos

In line with the first research task, the analysis of administrative–legal data and the identification of current gaps allows us to infer a set of general provisions for institutional governance concerning digital representatives of human teachers in educational videos.
The first element of regulation concerns the protection of personal data (including minimization and informed consent from parents and students) as well as mechanisms for storing and processing personal data, rights to access, correct, and delete data, and requirements for information system security, the confidentiality of digital avatars, and the data associated with them.
The second element concerns copyright and related rights, including original works and avatar materials, licenses, restrictions on copying and distribution, compliance with contractual agreements among institutions, and suppliers and users.
The third element concerns licensing agreements (institutions’ contracts with technology providers regulate access to avatar functionality, responsibility for content, and restrictions on commercial use) and user agreements, which may include provisions on training, testing, and monitoring students’ behavior in the digital environment.
The fourth element includes security issues, regulating the requirements for protecting the systems that control avatars, protecting the infrastructure, and countering abuse, manipulation, and discrimination, as well as authentication protocols, data encryption, and activity monitoring.
The fifth element concerns compliance with educational standards and ethics. All the content and interactions with avatars must comply with educational standards, including the principles of inclusivity, non-discrimination, and accessibility. Ethical standards include transparency of sources, explainability of algorithms, and responsibility for the consequences of using avatars in education.
Such an approach enables institutions to develop internal regulations that define responsibility for content, cases of intervention by an authorized person, actions in an educational emergency, and requirements for educational materials transferred to the avatar and considers the ethical and legal framework.
A specific aspect concerns administrative solutions that support the development of a competency framework for teachers and students regarding the use of AI. At the current stage of legislative and administrative practice, this area only implicitly accounts for teachers’ competencies regarding their involvement in producing AI-facilitated videos with digital representatives of human lecturers.
In line with the second research task, the study synthesized evidence-informed educational theories and concepts to provide a comprehensive theoretical vision of AI-facilitated videos featuring digital representatives of human teachers that
Technologically realize the human-centered philosophy of AI use and pedagogy amid the current digitalized landscape within the philosophical dimension;
Provide consistent human-like interfaces that model pedagogical behaviors and discourse patterns, personalize knowledge delivery, foster learner engagement, and offer a controllable source of modeled expertise and instruction within the educational dimension;
Constitute a specific class of operational instructional content within the socio-technical ecosystem of smart classrooms and operate as multimodal scaffolds that align closely with the aims and mechanisms of the flipped classroom within the organizational dimension;
Introduce compact, focused segments of knowledge, guiding metacognitive activities of learners through pauses, summaries, and prompts for reflection within the cognitive dimension;
Serve as entry points for further formative educational activities within the instructional tools dimension;
Contribute to the presence of the instructor and sustain learners’ engagement through naturalistic voice, facial expressions, and interactive cues within the psychological dimension.
In line with the third research task, the study has systematized current institutional practices into a taxonomy of themes discussed within current empirical studies on humans’ digital representatives in higher education instructional videos. While considering academic community practices and attitudes toward AI-facilitated videos featuring digital representatives of human teachers, the following aspects seem to be critical.
First, there are two contrasting visions of this technology: one positions it as a full replacement for the human teacher—based on the assumption that traditional education can no longer meet the digitalized expectations of contemporary learners [127]—while the other views it not as a replacement, but as an intelligent support tool whose interface can be easily used by a non-technical educator through an AI-facilitated avatar [117]. This latter perspective is grounded in learners’ perceptions [128]. The present research shares the standpoint of those scholars who consider that, in the contemporary era of artificial intelligence, it is relevant to explore the phenomenon of AI instructors/teachers/lecturers from the perspective of prioritizing humans in society [129] and argues for substantial human dominance in knowledge transfer through AI-facilitated videos. The empirical academic data confirms that digital twins of human lecturers have demonstrated their potential to complement traditional teaching methods, particularly in scenarios requiring scalability [122]. However, their adoption must be accompanied by careful consideration of ethical, technical, administrative, and pedagogical factors to ensure that they enhance rather than hinder the educational experience. Given the advantages and disadvantages, we consider it relevant to treat AI-facilitated videos featuring digital representatives of human lecturers as a multidimensional phenomenon that extends beyond pure technology. We argue that it is critical to view this educational tool as one that
Rests on the professionalism and competence of the human teacher;
Relies on the expertise and intelligent capacity of the human developer of this technology;
Depends on its own technological capacity for self-monitoring;
Is designed and trained to select and generate appropriate information in accordance with the proposed and existing content.
The second point concerns socio-technological aspects. The present study recommends incorporating the concepts of social presence and interaction into the discussion, as these issues are critical for several reasons. First, we share the views of those who argue that interactions with AI could reshape social behavior [130]. Thus, AI-facilitated videos with digital personages could potentially change patterns regarding not only students’ engagement with their instructors but also learners’ common relations and collaboration with individuals. These alterations may, over time, impact compassion, reliability, and views on authority. Furthermore, we consider it critical to take into account the argument of those scholars who suggest that the complex and varied behaviors of teachers might not be easily reproduced in artificial intelligence systems [131]. Individuals’ personal traits and motivations, which encompass their flexible decision making, body language, and subtle teaching modifications, along with their experiences and preferences, shape their behavior and responses in their surroundings, thus affecting their pathways for knowledge sharing.
Another difficulty is the possible prejudice inherent in AI educator models [132]. In our view, it is essential to anticipate a possible scenario in which an AI system mirrors the views or implicit biases of a real human teacher or is influenced to act this way. Consequently, upcoming systems should be developed in a manner that ensures fair treatment of all learners. We admit that it is not easy to create such systems, as they frequently evolve over time through direct engagements between the teacher and the student. Bearing in mind the above, we strongly support the arguments of those scholars who assert that, in such circumstances, the loss, lack, or insufficiency of social and emotional components in educational processes can potentially impede their progress [133].
The empirical studies confirm that learners interpret educational videos not only as pieces of information but also as situations of social communication and action within societal dimensions. Following this line, in terms of pedagogical theory, we consider it critical to bear in mind the position of those scholars who highlight that diversified cultural, economic, and institutional factors might affect the effectiveness of AI-driven learning in terms of personalized approaches to diverse learning populations [134]. Within this framework, we suggest that AI-facilitated videos with digital representatives of teachers can be efficient when produced as customized resources tailored to specific student communities in terms of the subject domain and institutional policies and practices, and when incorporating a human-in-the-loop option.

5. Conclusions

Teaching in the metaverse presents a dynamic frontier for educational innovation. Amid the many issues surrounding this phenomenon, the presence of human teachers’ digital representatives has been increasing in contemporary education, particularly in blended and flipped learning as well as in the use of educational videos. In higher education, the sustainable incorporation of this phenomenon into educational settings requires an integrated approach that combines institutional governance, academic theory, and practice.
The present study has outlined a legal baseline for developing comprehensive guidelines for the integration of digital representatives of human lecturers in AI-facilitated educational videos; established a comprehensive theoretical framework to support this integration; and systematized current learning experiences related to the use of digital representatives of human teachers in educational videos. The findings may serve as a theoretical foundation for further research, as educational material for training courses on AI use within digital didactics, and as guidance for applied activities related to AI-facilitated course design within university curricula.
The data obtained raise questions to be explored through theoretical and applied lenses, including the extent to which knowledge may be automated and delivered through screen-based video; the range and format of content that can appropriately be provided by digital replicas of human instructors; and the types of courses and knowledge domains that can effectively integrate videos featuring digital personages or, conversely, those that require human-led delivery and interaction.
Given the exponential growth of AI applications in education, integrated administrative, theoretical, and applied solutions are essential to ensure that human and artificial intelligence complement one another. Such integration is crucial for their consistent and coherent inclusion in the educational process and, ultimately, for supporting the sustainable development of contemporary education.

Author Contributions

Conceptualization, A.A. (Anastasia Atabekova) and T.S.; methodology, A.A. (Anastasia Atabekova); software, A.A. (Anastasia Atabekova); validation, A.A. (Anastasia Atabekova), A.A. (Atabek Atabekov) and T.S.; formal analysis, A.A. (Anastasia Atabekova), A.A. (Atabek Atabekov) and T.S.; investigation, A.A. (Anastasia Atabekova), A.A. (Atabek Atabekov) and T.S.; data curation, A.A. (Anastasia Atabekova); writing—original draft preparation, A.A. (Anastasia Atabekova); writing—review and editing, A.A. (Anastasia Atabekova), A.A. (Atabek Atabekov) and T.S.; visualization, A.A. (Anastasia Atabekova); supervision, A.A. (Anastasia Atabekova); project administration, A.A. (Anastasia Atabekova); funding acquisition, A.A. (Anastasia Atabekova) and A.A. (Atabek Atabekov). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wagg, D.J.; Burr, C.; Shepherd, J.; Conti, Z.X.; Enzer, M.; Niederer, S. The philosophical foundations of digital twinning. Data-Centric Eng. 2025, 6, e12. [Google Scholar] [CrossRef]
  2. Ali, Z.A.; Zain, M.; Hasan, R.; Pathan, M.S.; AlSalman, H.; Almisned, F.A. Digital twins: Cornerstone to circular economy and sustainability goals. Environ. Dev. Sustain. 2025, 1–42. [Google Scholar] [CrossRef]
  3. Polat, H.; Taş, N.; Kaban, A.; Kayaduman, H.; Battal, A. Human or Humanoid Animated Pedagogical Avatars in Video Lectures: The Impact of the Knowledge Type on Learning Outcomes. Int. J. Hum.–Comput. Interact. 2024, 41, 8912–8927. [Google Scholar] [CrossRef]
  4. Davar, N.F.; Dewan, M.A.A.; Zhang, X. AI chatbots in education: Challenges and opportunities. Information 2025, 16, 235. [Google Scholar] [CrossRef]
  5. Lin, Y.; Yu, Z. Learner Perceptions of Artificial Intelligence-Generated Pedagogical Agents in Language Learning Videos: Embodiment Effects on Technology Acceptance. Int. J. Hum.–Comput. Interact. 2024, 41, 1606–1627. [Google Scholar] [CrossRef]
  6. Wang, X.; Pang, H.; Wallace, M.P.; Wang, Q.; Chen, W. Learners’ perceived AI presences in AI-supported language learning: A study of AI as a humanized agent from community of inquiry. Comput. Assist. Lang. Learn. 2022, 37, 814–840. [Google Scholar] [CrossRef]
  7. Yusuf, H.; Money, A.; Daylamani-Zad, D. Pedagogical AI conversational agents in higher education: A conceptual framework and survey of the state of the art. Educ. Technol. Res. Dev. 2025, 73, 815–874. [Google Scholar] [CrossRef]
  8. Lind, S. Can AI-powered avatars replace human trainers? An empirical test of synthetic humanlike spokesperson applications. J. Workplace Learn. 2025, 37, 19–40. [Google Scholar] [CrossRef]
  9. Wang, C.; Zou, B. D-ID Studio: Empowering Language Teaching With AI Avatars. TESOL J. 2025, 16, e70034. [Google Scholar] [CrossRef]
  10. Deryuguina, I. MTS Will Train Employees Using Digital Avatars. MTS. 14 November 2024. Available online: https://moskva.mts.ru/about/media-centr/soobshheniya-kompanii/novosti-mts-v-rossii-i-mire/2024-11-14/mts-budet-obuchat-sotrudnikov-s-pomoshhyu-cifrovyh-avatarov (accessed on 7 November 2025).
  11. Lebedev, P. Russian Company Created Digital Avatars of Teachers for Training Employees [In Russian: Лебедев, П. (28.05.2025) В рoссийскoй кoмпании сделали цифрoвые аватары препoдавателей для oбучения сoтрудникoв]. Skillbox. 2025. Available online: https://skillbox.ru/media/education/v-rossiyskoy-kompanii-sdelali-cifrovye-avatary-prepodavateley-dlya-obucheniya-sotrudnikov/ (accessed on 7 November 2025).
  12. Du, Q. University in Hong Kong Develops AI Lecturers Including Albert Einstein to Revolutionize College Classrooms. Global Times. 16 May 2024. Available online: https://www.globaltimes.cn/page/202405/1312466.shtml (accessed on 7 November 2025).
  13. Hall, R. Hologram Lecturers Thrill Students at Trailblazing UK University. The Guardian: London, UK, 21 January 2024. Available online: https://www.theguardian.com/technology/2024/jan/21/hologram-lecturers-thrill-students-at-trailblazing-uk-university (accessed on 7 November 2025).
  14. Yastrebov, O. RUDN Creates Digital Avatars of Teachers [In Russian: Ястребoв, О. (2025). В РУДН сoздали технoлoгию цифрoвых аватарoв препoдавателей]. TASS: 31 July 2025. Available online: https://tass.ru/novosti-partnerov/24678435?utm_source=yxnews&utm_mdium=destop&utm_referer=https%3A%2F%2Fdzen.ru%2Fnews%2Fby%2Fstory%2Fc52853b0-9dd6-5c15-acc4-087cba575fbe (accessed on 7 November 2025).
  15. Pang, C.C.; Zhao, Y.; Yin, Z.; Sun, J.; Hadi Mogavi, R.; Hui, P. Artificial human lecturers: Initial findings from Asia’s first AI lecturers in class to promote innovation in education. In International Conference on Human-Computer Interaction; Springer Nature: Cham, Switzerland, 2025; pp. 105–124. [Google Scholar]
  16. Global Report on Teachers; UNESCO: Paris, France, 2024; Available online: https://www.unesco.org/en/articles/global-report-teachers-what-you-need-know (accessed on 7 November 2025).
  17. Selim, A.; Ali, I.; Saracevic, M.; Ristevski, B. Application of the digital twin model in higher education. Multimed. Tools Appl. 2025, 84, 24255–24272. [Google Scholar] [CrossRef]
  18. Kabashkin, I. AI-based digital twins of students: A new paradigm for competency-oriented learning transformation. Information 2025, 16, 846. [Google Scholar] [CrossRef]
  19. Packer, H. AI-Generated Lecturers Take a Turn at Hong Kong University; Times Higher Education: London, UK, 2024. Available online: https://www.timeshighereducation.com/news/ai-generated-lecturers-take-turn-hong-kong-university (accessed on 7 November 2025).
  20. Evangeline, S.I. Ethical, Privacy, and Security Implications of Digital Twins. In AI-Powered Digital Twins for Predictive Healthcare: Creating Virtual Replicas of Humans; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 397–424. [Google Scholar] [CrossRef]
  21. Pariso, P.; Picariello, M.; Marino, A. Digital Twins in Industry and Healthcare: Policy Regulation and Future Prospects in Europe. In Extended Reality. XR Salento; De Paolis, L.T., Arpaia, P., Sacco, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; Volume 15740. [Google Scholar] [CrossRef]
  22. Paseri, L.; Durante, M. Normative and Ethical Dimensions of Generative AI: From Epistemological Considerations to Societal Implications. In The Cambridge Handbook of Generative AI and the Law; Zou, M., Poncibò, C., Ebers, M., Calo, R., Eds.; Cambridge Law Handbooks; Cambridge University Press: Cambridge, UK, 2025; pp. 27–42. [Google Scholar]
  23. Mexhuani, B. Adopting digital tools in higher education: Opportunities, challenges and theoretical insights. Eur. J. Educ. 2025, 60, e12819. [Google Scholar] [CrossRef]
  24. Pester, A.; Andres, F.; Anantaram, C.; Mason, J. Generative AI and Digital Twins: Fostering Creativity in Learning Environments. In Human-Computer Creativity: Generative AI in Education, Art, and Healthcare; Springer Nature: Cham, Switzerland, 2025; pp. 153–173. [Google Scholar]
  25. Huang, R. Education at Universities. In Proceedings of the 2025 Seminar on Modern Property Management Talent Training Enabling New Productive Forces; Springer Nature: Berlin/Heidelberg, Germany, 2025; Volume 337, p. 439. [Google Scholar]
  26. Ragusa, A.T. Student experiences and preferences for offline interactions with university lecturers. Rural Soc. 2025, 34, 35–61. [Google Scholar] [CrossRef]
  27. Yossel-Eisenbach, Y.; Gerkerova, A.; Davidovitch, N. The Impact of Lecturer Profiles on Digital Learning Habits in Higher Education. Eur. Educ. Res. 2025, 8, 31–58. [Google Scholar] [CrossRef]
  28. Liu, Z.; Zhang, W. A qualitative analysis of Chinese higher education students’ intentions and influencing factors in using ChatGPT: A grounded theory approach. Sci. Rep. 2024, 14, 18100. [Google Scholar] [CrossRef] [PubMed]
  29. Saldaña, J. An Introduction to Themeing the Data. In Expanding Approaches to Thematic Analysis; Wolgemuth, J.R., Guyotte, K.W., Shelton, S.A., Eds.; Routledge: London, UK, 2024; pp. 11–26. [Google Scholar]
  30. A/RES/75/176; The Right to Privacy in the Digital Age. United Nations General Assembly: New York, NY, USA, 2020. Available online: https://digitallibrary.un.org/record/3896430?v=pdf (accessed on 7 November 2025).
  31. Recommendation of the Council on Artificial Intelligence (2019/2025); OECD: Paris, France, 2019; Available online: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed on 7 November 2025).
  32. Regulation (EU) 2016/679; On the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data 2016. Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng (accessed on 7 November 2025).
  33. The Digital Services Act; European Commission: Brussels, Belgium, 2022; Available online: https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en (accessed on 18 September 2025).
  34. The Digital Markets Act; European Commission: Brussels, Belgium, 2022; Available online: https://digital-markets-act.ec.europa.eu/index_en (accessed on 18 September 2025).
  35. Act No. 25.326; Personal Data Protection Act. Argentina. 2020. Available online: https://www.unodc.org/cld/uploads/res/uncac/LegalLibrary/Argentina/Laws/Argentina%20Personal%20Data%20Protection%20Act%202000.pdf (accessed on 7 November 2025).
  36. Personal Information Protection Act. 2021. Available online: https://www.gov.cn/xinwen/2021-08/20/content_5632486.htm (accessed on 7 November 2025).
  37. APPI (2003/2022); Act on the Protection of Personal Information No. 57. Japan. 2003. Available online: https://www.japaneselawtranslation.go.jp/en/laws/view/4241/en (accessed on 7 November 2025).
  38. PDPA (2012/2021). Personal Data Protection Act. Singapore. Available online: https://www.pdpc.gov.sg/overview-of-pdpa/the-legislation/personal-data-protection-act (accessed on 7 November 2025).
  39. H.R.3230 (2019–2020); Deep Fakes Accountability Act. Congress Gov: Washington, DC, USA, 2019. Available online: https://www.congress.gov/bill/116th-congress/house-bill/3230/text (accessed on 7 November 2025).
  40. A/78/L.49; Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development. United Nations General Assembly: New York, NY, USA, 2024. Available online: https://docs.un.org/ru/A/78/L.49 (accessed on 7 November 2025).
  41. The EU Artificial Intelligence Act; European Commission: Brussels, Belgium, 2024; Available online: https://artificialintelligenceact.eu/ (accessed on 7 November 2025).
  42. ASEAN Guide on AI Governance and Ethics. ASEAN. 2024. Available online: https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf (accessed on 7 November 2025).
  43. Anchorena, B.; Carpinacci, L.; Paulero, V. Guide for Public and Private Entities on Transparency and Personal Data Protection for Responsible Artificial Intelligence (in Spanish: Guía para Entidades Públicas y Privadas en Materia de Transparencia y Protección de Datos Personales para una Inteligencia Artificial Responsable). 2024. Available online: https://www.argentina.gob.ar/sites/default/files/aaip-argentina-guia_para_usar_la_ia_de_manera_responsable.pdf (accessed on 7 November 2025).
  44. Interim Measures for Generative AI Services 2023. Available online: https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed on 7 November 2025).
  45. Guidelines on Securing AI Systems (2024). Cyber Security Agency of Singapore. Available online: https://isomer-user-content.by.gov.sg/36/3cfb3cd5-0228-4d27-a596-3860ef751708/Companion%20Guide%20on%20Securing%20AI%20Systems.pdf (accessed on 7 November 2025).
  46. The Guide on Synthetic Data Generation. Privacy Enhancing Technology (Pet): Proposed Guide on Synthetic Data Generation. 2024. Available online: https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/other-guides/proposed-guide-on-synthetic-data-generation.pdf (accessed on 18 September 2025).
  47. Model AI Governance Framework for Generative AI. Singapore. 2024. Available online: https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf (accessed on 7 November 2025).
  48. Report of the Study Group on Utilization of Metaverse; Ministry of Internal Affairs and Communications: Tokyo, Japan, 2023; Available online: https://www.soumu.go.jp/main_content/000892205.pdf (accessed on 7 November 2025).
  49. Act on Promotion of Research and Development, and Utilization of AI-Related Technologies (2025). Policy-Related News. Available online: https://www8.cao.go.jp/cstp/ai/ai_hou_gaiyou_en.pdf (accessed on 7 November 2025).
  50. A/RES/78/213; Promotion and Protection of Human Rights in the Context of Digital Technologies. United Nations General Assembly: New York, NY, USA, 2023. Available online: https://docs.un.org/ru/A/RES/78/213 (accessed on 7 November 2025).
  51. Miao, F.; Shiohira, K.; Lao, N. AI Competency Framework for Students; UNESCO: Paris, France, 2024; 80p. [Google Scholar] [CrossRef]
  52. Directive 2010/13/EU; EUR-Lex: Brussels, Belgium. 2010. Available online: https://eur-lex.europa.eu/eli/dir/2010/13/oj/eng (accessed on 7 November 2025).
  53. Directive 2011/92/EU; EUR-Lex: Brussels, Belgium. 2011. Available online: https://eur-lex.europa.eu/eli/dir/2011/92/oj/eng (accessed on 7 November 2025).
  54. Applicable Legal Framework for the Responsible Use of Artificial Intelligence in the Argentine Republic (in Spanish: Régimen Jurídico Aplicable para el Uso Responsable de la Inteligencia Artificial en la República Argentina) 2024. Available online: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2024/PDF2024/TP2024/3003-D-2024.pdf (accessed on 7 November 2025).
  55. Gen AI Strategies for Australian Higher Education: Emerging Practice. Tertiary Education. Quality and Standards Agency: Melbourne, VC, Australia. 2024. Available online: https://www.teqsa.gov.au/sites/default/files/2024-11/Gen-AI-strategies-emerging-practice-toolkit.pdf (accessed on 7 November 2025).
  56. AI-empowered Education Initiative. Ministry of Education. The Peoples’ Republic of China. 2024. Available online: http://en.moe.gov.cn/features/2025WorldDigitalEducationConference/News/202505/t20250518_1191049.html#:~:text=In%20line%20with%20the%20National,teacher%20development%20through%20AI%20support (accessed on 7 November 2025).
  57. Federal Law N273-FL; About Education in the Russian Federation (2012/2025). Russian Federation. 2012. Available online: https://www.consultant.ru/document/cons_doc_LAW_140174/ (accessed on 7 November 2025).
  58. A/RES/77/150; Information and Communications Technologies for Sustainable Development. United Nations General Assembly: New York, NY, USA, 2022. Available online: https://docs.un.org/ru/A/RES/77/150 (accessed on 7 November 2025).
  59. Measures for Identifying Artificial Intelligence-Generated Synthetic Content Notice on Issuing the Measures for Identifying Synthetic Content Generated by Artificial Intelligence. National Information Office Communication No. 2. 2025. Available online: https://www.chinalawtranslate.com/en/ai-labeling/#gsc.tab=0 (accessed on 7 November 2025).
  60. POFMA. Protection from Online Falsehoods and Manipulation Act 2019. Available online: https://www.pofmaoffice.gov.sg/regulations/protection-from-online-falsehoods-and-manipulation-act/ (accessed on 7 November 2025).
  61. SECTION 5-302; Contracts for the Creation and Use of Digital Replicas. 2025. Available online: https://www.nysenate.gov/legislation/laws/GOB/5-302 (accessed on 7 November 2025).
  62. Directive 2005/29/EC; EUR-Lex: Brussels, Belgium. 2005. Available online: https://eur-lex.europa.eu/eli/dir/2005/29/oj/eng (accessed on 7 November 2025).
  63. Directive 2011/83/EU; EUR-Lex: Brussels, Belgium. 2011. Available online: https://eur-lex.europa.eu/eli/dir/2011/83/oj/eng (accessed on 7 November 2025).
  64. Data Security Act 2021. Available online: https://www.gov.cn/xinwen/2021-06/11/content_5616919.htm (accessed on 7 November 2025).
  65. Report on the Study and Analysis of Future Opportunities and Problems of Virtual Space 2020. Available online: https://www.meti.go.jp/meti_lib/report/2020FY/000692.pdf (accessed on 7 November 2025).
  66. Civil Code. Russian Federation. Parts One, Two, Three and Four. 2024. Available online: https://www.wipo.int/wipolex/en/legislation/details/22547 (accessed on 7 November 2025).
  67. Technology Report No. 1. European Data Protection Supervisor. 2019. Available online: https://www.edps.europa.eu/sites/default/files/publication/19-01-18_edps-tech-report-1-smart_glasses_en.pdf (accessed on 7 November 2025).
  68. Guidelines 02/2021; European Data Protection Board: Brussels, Belgium. 2021. Available online: https://www.edpb.europa.eu/system/files/2021-07/edpb_guidelines_202102_on_vva_v2.0_adopted_en.pdf (accessed on 7 November 2025).
  69. Guidelines 05/2022; European Data Protection Board: Brussels, Belgium. 2023. Available online: https://www.edpb.europa.eu/system/files/2023-05/edpb_guidelines_202304_frtlawenforcement_v2_en.pdf (accessed on 18 September 2025).
  70. Guidelines 3/2022; European Data Protection Board: Brussels, Belgium. 2022. Available online: https://www.edpb.europa.eu/system/files/2022-03/edpb_03-2022_guidelines_on_dark_patterns_in_social_media_platform_interfaces_en.pdf (accessed on 7 November 2025).
  71. ST/SGB/2024/3; Data Protection and Privacy Policy for the Secretariat of the United Nations. Secretary-General’s Bulletin. United Nations: New York, NY, USA, 2024. Available online: https://docs.un.org/ru/ST/SGB/2024/3 (accessed on 7 November 2025).
  72. CEB/2022/2/Add.1; Principles for the Ethical Use of Artificial Intelligence in the United Nations System. Chief Executives Board for Coordination. United Nations System: Manhasset, NY, USA, 2022. Available online: https://unsceb.org/sites/default/files/2023-03/CEB_2022_2_Add.1%20%28AI%20ethics%20principles%29.pdf (accessed on 7 November 2025).
  73. The Data Act; European Commission: Brussels, Belgium, 2024; Available online: https://digital-strategy.ec.europa.eu/en/policies/data-act (accessed on 7 November 2025).
  74. Germany’s DSK Guidance. Data Protection Conference (DSK) Guidelines. 2025. Available online: https://www.datenschutzkonferenz-online.de/media/oh/20250917_DSK_OH_Datenuebermittlungen.pdf (accessed on 7 November 2025).
  75. AB-730; Elections: Deceptive Audio or Visual Media. California Legislative Information: Sacramento, CA, USA, 2019. Available online: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730 (accessed on 7 November 2025).
  76. Code of Virginia (2025 Updates). Available online: https://law.lis.virginia.gov/vacode/title18.2/chapter8/section18.2-386.2/ (accessed on 7 November 2025).
  77. SAG AFTRA TV/Theatrical Contracts, 2023. Available online: https://www.sagaftra.org/files/sa_documents/TV-Theatrical_23_Summary_Agreement_Final.pdf?kh8gqzg4us (accessed on 7 November 2025).
  78. Shimpo, F. Authentication of Cybernetic Avatars and Legal System Challenges; With a View to the Trial Concept of New Dimensional Domain Jurisprudence (AI, Robot, and Avatar Law). Jpn. Soc. Cult. 2024, 6. [Google Scholar] [CrossRef]
  79. Guidelines for the Creation and Operation of Virtual Reality, etc. for the Use of Cultural Assets for Tourism Purposes (n/d). Available online: https://www.bunka.go.jp/tokei_hakusho_shuppan/tokeichosa/vr_kankokatsuyo/pdf/r1402740_01.pdf (accessed on 7 November 2025).
  80. Cybersecurity Law 2016. Available online: https://www.gov.cn/xinwen/2016-11/07/content_5129723.htm (accessed on 7 November 2025).
  81. Regulation on the Management of Deep Synthesis of Internet Information Services 2022. Available online: https://www.gov.cn/zhengce/zhengceku/2022-12/12/content_5731431.htm (accessed on 7 November 2025).
  82. Lam, C.M. Building ethical virtual classrooms: Confucian perspectives on avatars and VR. Comput. Educ. X Real. 2025, 6, 100092. [Google Scholar] [CrossRef]
  83. Fait, D.; Mašek, V.; Čermák, R. A constructivist approach in the process of learning mechatronics. In Proceedings of the 15th annual International Conference of Education, Research and Innovation ICERI2022, Seville, Spain, 7–9 November 2022; pp. 3408–3413. [Google Scholar] [CrossRef]
  84. Rousseau, J.-J. Émile, ou De L’éducation; Néaulme: Amsterdam, The Netherlands, 1762. [Google Scholar]
  85. Dewey, J. Experience and Education; Simon & Schuster: New York, NY, USA, 1938; Available online: https://archive.org/details/ExperienceAndEducation/page/n7 (accessed on 7 November 2025).
  86. Vygotsky, L.S. Mind in Society: Development of Higher Psychological Processes; Harvard University Press: Cambridge, MA, USA, 1978; Available online: http://www.jstor.org/stable/j.ctvjf9vz4 (accessed on 7 November 2025).
  87. Piaget, J. To Understand Is to Invent; The Viking Press: New York, NY, USA, 1972. [Google Scholar]
  88. Ross, M. Philosophy of Education; Publifye AS: Oslo, Norway, 2025. [Google Scholar]
  89. Kaswan, K.S.; Dhatterwal, J.S.; Ojha, R.P. AI in personalized learning. In Advances in Technological Innovations in Higher Education; CRC Press: Boca Raton, FL, USA, 2024; pp. 103–117. [Google Scholar]
  90. Lee, G.; Shi, L.; Latif, E.; Gao, Y.; Bewersdorff, A.; Nyaaba, M.; Guo, S.; Liu, Z.; Mai, G.; Liu, T.; et al. Multimodality of AI for education: Towards artificial general intelligence. IEEE Trans. Learn. Technol. 2025, 18, 666–683. [Google Scholar] [CrossRef]
  91. Twabu, K. Enhancing the cognitive load theory and multimedia learning framework with AI insight. Discov. Educ. 2025, 4, 160. [Google Scholar] [CrossRef]
  92. AlShaikh, R.; Al-Malki, N.; Almasre, M. The implementation of the cognitive theory of multimedia learning in the design and evaluation of an AI educational video assistant utilizing large language models. Heliyon 2024, 10, e25361. [Google Scholar] [CrossRef] [PubMed]
  93. Gkintoni, E.; Antonopoulou, H.; Sortwell, A.; Halkiopoulos, C. Challenging cognitive load theory: The role of educational neuroscience and artificial intelligence in redefining learning efficacy. Brain Sci. 2025, 15, 203. [Google Scholar] [CrossRef]
  94. Kim, L.; Jitpakdee, R.; Praditsilp, W.; Yeo, S.F. Analyzing factors influencing students’ decisions to adopt smart classrooms in higher education. Educ. Inf. Technol. 2025, 30, 14335–14365. [Google Scholar] [CrossRef]
  95. Xu, J.; Li, J.; Yang, J. Self-regulated learning strategies, self-efficacy, and learning engagement of EFL students in smart classrooms: A structural equation modeling analysis. System 2024, 125, 103451. [Google Scholar] [CrossRef]
  96. Shen, Y. Examining the efficacies of instructor-designed instructional videos in flipped classrooms on student engagement and learning outcomes: An empirical study. J. Comput. Assist. Learn. 2024, 40, 1791–1805. [Google Scholar] [CrossRef]
  97. Fisher, R.; Tran, Q.; Verezub, E. Teaching English as a Foreign Language in Higher Education using flipped learning/flipped classrooms: A literature review. Innov. Lang. Learn. Teach. 2024, 18, 332–351. [Google Scholar] [CrossRef]
  98. Cevikbas, M.; Mießeler, D.; Kaiser, G. Pre-service mathematics teachers’ experiences and insights into the benefits and challenges of using explanatory videos in flipped modelling education. ZDM–Math. Educ. 2025, 2, 1–14. [Google Scholar] [CrossRef]
  99. UNESCO 2023. Guidance for Generative AI in Education and Research. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 7 November 2025).
  100. Chen, J.; Mokmin, N.A.M.; Shen, Q. Effects of a Flipped Classroom Learning System Integrated with ChatGPT on Students: A Survey from China. Int. J. Interact. Multimed. Artif. Intell. 2025, 9, 113–123. [Google Scholar] [CrossRef]
  101. Ma, X.; Xie, Y.; Wang, H. Construction and verification study on the hierarchical model of teacher–student interaction evaluation for smart classroom. Asia-Pac. Educ. Res. 2025, 34, 1169–1180. [Google Scholar] [CrossRef]
  102. Martin, F.; Dennen, V.P.; Bonk, C.J. Systematic reviews of research on online learning: An introductory look and review. Online Learn. J. 2023, 27, 1–15. [Google Scholar] [CrossRef]
  103. Chen, J.; Mokmin, N.A.M.; Shen, Q.; Su, H. Leveraging AI in design education: Exploring virtual instructors and conversational techniques in flipped classroom models. Educ. Inf. Technol. 2025, 2, 1–21. [Google Scholar] [CrossRef]
  104. Herbert, C.; Dołżycka, J.D. Teaching online with an artificial pedagogical agent as a teacher and visual avatars for self-other representation of the learners. Effects on the learning performance and the perception and satisfaction of the learners with online learning: Previous and new findings. Front. Educ. 2024, 9, 1416033. [Google Scholar] [CrossRef]
  105. Huang, R.; Tlili, A.; Xu, L.; Chen, Y.; Zheng, L.; Metwally, A.H.S.; Da, T.; Chang, T.; Wang, H.; Mason, J.; et al. Educational futures of intelligent synergies between humans, digital twins, avatars, and robots-the iSTAR framework. J. Appl. Learn. Teach. 2023, 6, 28–43. [Google Scholar] [CrossRef]
  106. Garcia, M.B. Teachers in the metaverse: The influence of avatar appearance and behavioral realism on perceptions of instructor credibility and teaching effectiveness. Interact. Learn. Environ. 2025, 33, 1–17. [Google Scholar] [CrossRef]
  107. Anderson, M.; Manojlovic, J. Visual Design of Avatars as Pedagogical Agents. Master’s Thesis, University West, School of Business, Economics, and IT Division of Informatics, Trollhättan, Sweden, 2025. Available online: https://webmadster.com/MagisterUppsats_MastersInITAndManagement_2025/Final_Thesis_Publishing_Anderson_M_Manojlovic_J_Masters_In_IT_And_Management_2025.pdf (accessed on 7 November 2025).
  108. Mandić, D.; Miscević, G.; Ristić, M. Teachers’ Perspectives on the Use of Interactive Educational Avatars: Insights from Non-Formal Training Contexts. Res. Pedagog. 2025, 15, 115–124. [Google Scholar] [CrossRef]
  109. Gao, B.; Yan, J.; Zhong, R. How Digital Teacher Appearance Anthropomorphism Impacts Digital Learning Satisfaction and Intention to Use: Interaction with Knowledge Type. IEEE Trans. Learn. Technol. 2025, 18, 438–457. [Google Scholar] [CrossRef]
  110. Vallis, C.; Wilson, S.; Gozman, D.; Buchanan, J. Student perceptions of AI-generated avatars in teaching business ethics. Postdigital Sci. Educ. 2024, 6, 537–555. [Google Scholar] [CrossRef]
  111. Habarurema, J.B.; Di Fuccio, R.; Limone, P. Enhancing e-learning with a digital twin for innovative learning. The International J. Inf. Learn. Technol. 2025, 42, 341–351. [Google Scholar] [CrossRef]
  112. Murniarti, E.; Siahaan, G. The Synergy Between Artificial Intelligence (AI) and Experiential Learning in Enhancing Students’ Creativity through Motivation. Front. Educ. 2025, 10, 1606044. [Google Scholar] [CrossRef]
  113. Pârlog, A.C.; Crișan, M.M. New tools for approaching translation studies by simulation environments: EVOLI and ECORE. J. Educ. Sci. 2025, 26, 51. [Google Scholar]
  114. Wang, C.; Zou, B.; Du, Y.; Wang, Z. The Impact of Different Conversational Generative AI Chatbots on EFL learners: An Analysis of Willingness to Communicate, Foreign Language Speaking Anxiety, and Self-Perceived Communicative Competence. System 2024, 127, 103533. [Google Scholar] [CrossRef]
  115. Fiore, M.; Gattullo, M.; Mongiello, M. First Steps in Constructing an AI-Powered Digital Twin Teacher: Harnessing Large Language Models in a Metaverse Classroom. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Orlando, FL, USA, 8–12 March 2024; pp. 939–940. [Google Scholar] [CrossRef]
  116. Gasch, C.; Javanmardi, A.; Khan, A.; Garcia-Palacios, A.; Pagani, A. Exploring Avatar Utilization in Workplace and Educational Environments: A Study on User Acceptance, Preferences, and Technostress. Appl. Sci. 2025, 15, 3290. [Google Scholar] [CrossRef]
  117. Kumor, S.; Maik, M.; Walczak, K. AI-Driven Video Avatar for Academic Support. In Business Information Systems; Węcel, K., Ed.; BIS 2025. Lecture Notes in Business Information Processing; Springer: Cham, Switzerland, 2025; Volume 554. [Google Scholar] [CrossRef]
  118. Della Piana, B.; Carbone, S.; Di Vincenzo, F.; Signore, C. The Role of Avatars in Enhancing Cultural Diversity and Classroom Dynamics in Education. In Global Classroom: Multicultural Approaches and Organizational Strategies in Teaching and Learning Business and Economics; de Gennaro, D., Marino, M., Eds.; Emerald Publishing Limited: Leeds, UK, 2024. [Google Scholar] [CrossRef]
  119. Ezeoguine, P.E.; Kasumu, R.Y. Undergraduate Students’ Perception of Digital Twins Technology in Education: Uses and Challenges. Int. J. Educ. Eval. 2024, 10, 381–396. [Google Scholar]
  120. Xu, T.; Liu, Y.; Jin, Y.; Qu, Y.; Bai, J.; Zhang, W.; Zhou, Y. From recorded to AI-generated instructional videos: A comparison of learning performance and experience. Br. J. Educ. Technol. 2025, 56, 1463–1487. [Google Scholar] [CrossRef]
  121. Rienties, B.; Domingue, J.; Duttaroy, S.; Herodotou, C.; Tessarolo, F.; Whitelock, D. What distance learning students want from an AI Digital Assistant. Distance Educ. 2024, 46, 173–189. [Google Scholar] [CrossRef]
  122. Struger, P.; Brünner, B.; Ebner, M. Synthetic Educators: Analyzing AI-Driven Avatars in Digital Learning Environments. In Learning and Collaboration Technologies; Smith, B.K., Borge, M., Eds.; HCII 2025. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2025; Volume 15807. [Google Scholar] [CrossRef]
  123. Wong, P.P.Y.; Lee, J.; Gonzales, W.D.W.; Choi, S.H.S.; Hwang, H.; Shen, D.J. New Dimensions: The Impact of the Metaverse and AI Avatars on Social Science Education. In Blended Learning. Intelligent Computing in Education; Ma, W.W.K., Li, C., Fan, C.W., Hou U, L., Lu, A., Eds.; ICBL. Lecture Notes in Computer Science; Springer: Singapore, 2024; Volume 14797. [Google Scholar] [CrossRef]
  124. Le, H.; Shen, Y.; Li, Z.; Xia, M.; Tang, L.; Li, X.; Jia, J.; Wang, Q.; Gašević, D.; Fan, Y. Breaking human dominance: Investigating learners’ preferences for learning feedback from generative AI and human tutors. Br. J. Educ. Technol. 2025, 56, 1758–1783. [Google Scholar] [CrossRef]
  125. Gârdan, I.P.; Manu, M.B.; Gârdan, D.A.; Negoiță, L.D.L.; Paștiu, C.A.; Ghiță, E.; Zaharia, A. Adopting AI in education: Optimizing human resource management considering teacher perceptions. Front. Educ. 2025, 10, 1488147. [Google Scholar] [CrossRef]
  126. Gayazova, E.B.; Nikitina, T.N. Digital Technologies for Automating Communication in the Activities of a State University. In Proceedings of the International Scientific Conference; Mantulenko, V.V., Horák, J., Kučera, J., Ayyubov, M., Eds.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2025; Volume 1552. [Google Scholar] [CrossRef]
  127. Zhao, Z.; Yin, Z.; Sun, J.; Hui, P. Embodied AI-guided interactive digital teachers for education. In SIGGRAPH Asia 2024 Educator’s Forum; ACM: Tokyo, Japan, 2024; pp. 1–8. [Google Scholar]
  128. Merlo, O.; Li, N. What Do Students Want from AI-Assisted Teaching? Times Higher Education: London, UK, 19 June 2025. Available online: https://www.timeshighereducation.com/campus/what-do-students-want-aiassisted-teaching (accessed on 7 November 2025).
  129. Awashreh, R.; Ramachandran, B. Can artificial intelligence dominate and control human beings. Int. Res. J. Multidiscip. Scope 2024, 5, 427–438. [Google Scholar] [CrossRef]
  130. Pirjan, A.; Petroşanu, D.M. Artificial Social Intelligence and the Transformation of Human Interaction by Artificial Intelligence Agents. J. Inf. Syst. Oper. Manag. 2025, 19, 259–350. [Google Scholar]
  131. Aperstein, Y.; Cohen, Y.; Apartsin, A. Generative AI-based platform for deliberate teaching practice: A review and a suggested framework. Educ. Sci. 2025, 15, 405. [Google Scholar] [CrossRef]
  132. Bhavana, S.; Jayashree, K.; Rao, T.V.N. Navigating AI biases in education: A foundation for equitable learning. In AI Applications and Strategies in Teacher Education; IGI Global: Hershey, PA, USA, 2025; pp. 135–160. [Google Scholar] [CrossRef]
  133. Sayffaerth, C. Educational twin: The influence of artificial XR expert duplicates on future learning. arXiv 2025, arXiv:2504.13896. [Google Scholar]
  134. Merino-Campos, C. The impact of artificial intelligence on personalized learning in higher education: A systematic review. Trends High. Educ. 2025, 4, 17. [Google Scholar] [CrossRef]
Figure 1. Legal themes and areas forming an institutional governance baseline for the use of digital representatives of human teachers in educational video: key legal areas and topics (authors’ data).
Figure 2. Theoretical dimensions for using video-based digital representatives of human lecturers: key theories and concepts (authors’ data).
Figure 3. Themes discussed within current empirical studies on humans’ digital representatives in higher education instructional videos (authors’ data).
Table 1. Legal themes and areas across international, supranational, and national laws relevant for digital representatives of humans in educational videos: examples (authors’ data). For each legal theme, examples of international and supranational law are listed first, followed by examples of national law (countries in alphabetical order).

Human rights, data protection, privacy.
International and supranational law: UN Resolution A/RES/75/176 [30]; OECD Recommendation (2019/2025) [31]; EU GDPR, Regulation (EU) 2016/679, [2016] OJ L119/1 [32]; Digital Services Act (2022) [33]; Digital Markets Act (2022) [34].
National law: Argentina: Personal Data Protection Act No. 25.326 (2020) [35]. China: Personal Information Protection Act (2021) [36]. Japan: Act on the Protection of Personal Information (APPI 2003/2022) [37]. Singapore: Personal Data Protection Act (PDPA 2012/2021) [38]. USA: H.R.3230, DEEP FAKES Accountability Act, and Assembly Bill No. 730 [39].

Guidelines for AI-facilitated activities.
International and supranational law: UN Resolution A/78/L.49 [40]; OECD Recommendation (2019/2025) [31]; EU AI Act (2024) [41]; ASEAN Guide on AI Governance and Ethics (2024) [42].
National law: Argentina: Guía para Entidades Públicas y Privadas en Materia de Transparencia y Protección de Datos Personales para una Inteligencia Artificial Responsable (2024) [43]. China: Interim Measures for the Management of Generative Artificial Intelligence Services (2023) [44]. Japan: Report of the Study Group on Utilization of Metaverse, etc., for the Web3 Era (2023) [48]; Act on Promotion of Research and Development, and Utilization of AI-Related Technologies (2025) [49]. Singapore: Guidelines on Securing AI Systems (2024) [45]; Guide on Synthetic Data Generation (2024) [46]; Model AI Governance Framework for Generative AI (2024) [47].

Regulations and standards in education.
International and supranational law: UN Resolution A/RES/78/213 [50]; UNESCO AI Competency Framework for Students and Teachers (2024) [51]; Directive 2010/13/EU [52]; Directive 2011/92/EU [53].
National law: Argentina: Régimen Jurídico Aplicable para el Uso Responsable de la Inteligencia Artificial en la República Argentina (2024) [54]. Australia: Gen AI Strategies for Australian Higher Education (2024) [55]. China: (a) Interim Measures for the Management of Generative Artificial Intelligence Services (2023) [44]; (b) Empowerment Initiative to Promote the Deep Integration of Intelligent Technology with Education Teaching and Scientific Research (2024) [56]. Japan: Report of the Study Group on Utilization of Metaverse, etc., for the Web3 Era (2023) [48]. Russia: Art. 16 of the Law on Education (2012/2025) [57]. USA: H.R.3230, DEEP FAKES Accountability Act, and Assembly Bill No. 730 [39].

Cyber law: protecting systems and infrastructure, countering abuse.
International and supranational law: UN Resolution A/RES/77/150 [58]; OECD Recommendation (2019/2025) [31]; EU Digital Services Act (2022) [33]; EU Digital Markets Act (2022) [34].
National law: China: Measures for Identifying Artificial Intelligence-Generated Synthetic Content (2025) [59]. Japan: Act on Promotion of Research and Development, and Utilization of AI-Related Technologies (2025) [49]. Singapore: Guide on Synthetic Data Generation (2024) [46]; Protection from Online Falsehoods and Manipulation Act (POFMA, 2019) [60]. USA: Section 5-302 (2025), New York law, Contracts for the Creation and Use of Digital Replicas [61].

Civil law and ethical issues.
International and supranational law: UN Resolution A/78/L.49 [40]; OECD Recommendation (2019/2025) [31]; Directive 2005/29/EC (2005) [62]; Directive 2011/83/EU (2011) [63]; ASEAN Guide on AI Governance and Ethics (2024) [42].
National law: China: Interim Measures for the Management of Generative Artificial Intelligence Services (2023) [44]; Data Security Act (2021) [64]; Personal Information Protection Act (2021) [36]. Japan: Act on Promotion of Research and Development, and Utilization of AI-Related Technologies (2025) [49]; Report of the Japanese Interior Ministry on the Study and Analysis of Future Opportunities and Problems of Virtual Space (2020) [65]. Russia: Articles 152.1, 152.2, 1259, 1477, and 1481 of the Russian Civil Code [66]. Singapore: Protection from Online Falsehoods and Manipulation Act (POFMA, 2019) [60]. USA: H.R.3230, DEEP FAKES Accountability Act, and Assembly Bill No. 730 [39].

Labor law and responsibility for the use of digital tools.
International and supranational law: OECD Recommendation (2019/2025) [31]; EU Technology Report No. 1 (2019) [67]; EU Guidelines 02/2021 (2021) [68]; EU Guidelines 3/2022 (2022) [70]; EU Guidelines 05/2022 (2023) [69].
National law: China: Interim Measures for the Management of Generative Artificial Intelligence Services (2023) [44]; Measures for Identifying Artificial Intelligence-Generated Synthetic Content (2025) [59]. Japan: Report on the Study and Analysis of Future Opportunities and Problems of Virtual Space (2020) [65]; Act on Promotion of Research and Development, and Utilization of AI-Related Technologies (2025) [49]. USA: Section 5-302 (2025), Contracts for the Creation and Use of Digital Replicas [61].
