Article

Digital Boundaries and Consent in the Metaverse: A Comparative Review of Privacy Risks

by Sofia Sakka 1,2,*, Vasiliki Liagkou 1,2,*, Afonso Ferreira 3 and Chrysostomos Stylios 1,4

1 Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
2 Archimedes, Athena Research Center, 15125 Marousi, Greece
3 CNRS-Institut de Recherches en Informatique de Toulouse, 31058 Toulouse, France
4 Industrial Systems Institute, Athena Research Center, 26504 Patras, Greece
* Authors to whom correspondence should be addressed.
J. Cybersecur. Priv. 2026, 6(1), 24; https://doi.org/10.3390/jcp6010024
Submission received: 20 November 2025 / Revised: 29 December 2025 / Accepted: 4 January 2026 / Published: 2 February 2026
(This article belongs to the Special Issue Current Trends in Data Security and Privacy—2nd Edition)

Abstract

The metaverse presents significant opportunities for educational advancement by facilitating immersive, personalized, and interactive learning experiences through technologies such as virtual reality (VR), augmented reality (AR), extended reality (XR), and artificial intelligence (AI). However, this potential is compromised if digital environments fail to uphold individuals’ privacy, autonomy, and equity. Despite their widespread adoption, the privacy implications of these environments remain inadequately understood, both in terms of technical vulnerabilities and legislative challenges, particularly regarding user consent management. Contemporary metaverse systems collect highly sensitive information, including biometric signals, spatial behavior, motion patterns, and interaction data, often surpassing the granularity captured by traditional social networks. The lack of privacy-by-design solutions, coupled with the complexity of underlying technologies such as VR/AR infrastructures, 3D tracking systems, and AI-driven personalization engines, makes these platforms vulnerable to security breaches, data misuse, and opaque processing practices. This study presents a structured literature review and comparative analysis of privacy risks, consent mechanisms, and digital boundaries in metaverse platforms, with particular attention to educational contexts. We argue that privacy-aware design is essential not only for ethical compliance but also for supporting the long-term sustainability goals of digital education. Our findings aim to inform and support the development of secure, inclusive, and ethically grounded immersive learning environments by providing insights into systemic privacy and policy shortcomings.

1. Introduction

The metaverse is a cutting-edge concept that combines various technologies, including extended reality (XR), which encompasses virtual reality, mixed reality, and augmented reality, together with others such as digital twins and blockchain, allowing for immersive interactions with digital objects, virtual environments, and other individuals [1]. It is a digital realm where individuals can immerse themselves in a virtual world, engaging with others, exploring new environments, and creating unique content, fundamentally rooted in personalization and AI-driven decisions.
XR technologies inherently gather human data to operate effectively; thus, their integration into the metaverse presents a new avenue for data processing. Protecting user privacy is therefore essential: given the large volume of data involved, the metaverse requires careful personal data governance. As individuals place greater emphasis on safeguarding their privacy, it is crucial to address the unique challenges posed by the metaverse. While the industry is actively developing the metaverse, academia is lagging in conducting research, particularly in the field of metaverse security and privacy [2]. One notable exception is the research work in [3], which explicitly addresses this notion by conceptualizing digital boundaries primarily in terms of virtual harm, user safety, and behavioral norms in immersive environments. This work provides an important theoretical foundation by highlighting how blurred physical–digital boundaries can lead to harassment, psychological harm, and safety risks in virtual spaces.
In their daily lives, users utilize a variety of traditional platforms for social interactions, games, education, healthcare applications, and financial transactions [3]. Metaverse platforms are distinct from conventional platforms due to their immersive nature, resulting in unique privacy concerns. In the metaverse, users do not need to directly provide their data, as it is automatically collected through the edge devices required to access the metaverse environment [4]. Accordingly, organizations are able to collect expanded information from individuals’ physical characteristics, behavioral patterns, physiological responses, movements, and even brainwave patterns, resulting in more accurate behavior prediction and user profiling than data mining by traditional platforms has already achieved [5,6].
These challenges are present across metaverse applications such as social interaction, entertainment, healthcare, and education. Educational contexts are particularly illustrative due to the involvement of minors and institutional accountability requirements. Privacy and security are foundational to the sustainable adoption of metaverse technologies in education. Their influence extends beyond technical concerns, shaping trust, equity, and the long-term viability of digital learning environments. The success and sustainability of educational metaverse platforms depend on the trust of students, parents, and educators. If users fear that their personal data, behaviors, or identities are at risk, they are less likely to engage with these platforms, undermining their widespread adoption and long-term use [7,8]. Therefore, realizing the metaverse’s full potential in practice depends heavily on its social acceptance [9,10]. Its adoption in education requires addressing several critical design challenges, including ensuring equity, preventing cyberbullying, improving device usability, promoting inclusivity, and safeguarding user privacy [9,10]. These factors are essential for developing a trusted and inclusive metaverse-based learning environment [9], aligning with Sustainable Development Goal (SDG) 16, which promotes peace, justice, and strong institutions [11]. Moreover, educational institutions must comply with strict data protection regulations (e.g., the GDPR). Inadequate privacy and security measures can lead to legal penalties, reputational damage, and forced shutdowns, threatening the sustainability of metaverse-based education [8,12]. Privacy and security challenges, such as unauthorized data collection, surveillance, or cyber-attacks, can erode user confidence and hinder the integration of metaverse tools into mainstream education. Addressing these issues is crucial for the sustainable technological evolution of educational platforms [6,7].
The metaverse supports the aims of SDG 4 (Quality Education) by providing immersive and accessible learning environments that help overcome sensory, spatial, and cognitive barriers, thereby promoting inclusive and equitable educational opportunities for all learners. By fostering engagement and enabling the integration of diverse learners, the metaverse supports more inclusive and accessible education. Furthermore, it offers personalized learning experiences and facilitates skills training in scenarios that are otherwise difficult or unsafe to replicate in real-world environments [7]. Security breaches or weak privacy protections can affect vulnerable populations, such as minors or students with disabilities. Ensuring robust protection measures is essential for creating inclusive, equitable digital learning spaces that can be sustained over time [8,13]. Institutional transparency, data accountability, and strong digital rights protections are not only a technical necessity but also a requirement for aligning metaverse education with the broader vision of sustainable and just digital societies outlined in SDG 16 [11].
The notion of digital boundaries has recently emerged as a crucial construct in understanding privacy, autonomy, and identity within immersive digital ecosystems. Our work is positioned as a structured literature review and comparative policy analysis. We synthesize existing technical, legal, and policy literature and analyze privacy practices across representative metaverse platforms to identify systemic gaps and governance challenges. As users engage through avatars and XR interfaces, traditional physical or institutional boundaries are replaced by dynamic, data-driven constraints that regulate information exchange, interaction, and self-representation. In metaverse environments, the blurring of physical and virtual spaces necessitates new mechanisms to maintain informational and social boundaries, as continuous data capture threatens personal agency and contextual privacy [12,14,15]. Similarly, the use of decentralized identity management could serve as a means of restoring user control and reinforcing the “identity boundary” in educational metaverses, where learners must securely manage their digital identities across platforms [16].
Building on these insights, our study defines digital boundaries as the explicit and implicit limits that define: (a) what personal and academic information can be shared or accessed, (b) how users interact with each other and with digital content, and (c) the extent to which digital identities and activities are visible, monitored, or regulated. While digital boundaries are relevant across all metaverse applications, their importance is particularly evident in educational contexts, where privacy, consent, and identity management directly affect learner autonomy, trust, and institutional responsibility. They help balance the benefits of immersive, collaborative education with the need to safeguard users’ rights and well-being, particularly given the complex challenges of identity, data protection, and behavioral regulation in the metaverse.
The emergence of the educational metaverse introduces unprecedented challenges regarding user consent and the establishment of digital boundaries. Accordingly, this study addresses the following research questions: RQ1: What types and levels of personal, behavioral, biometric, and environmental data are collected by contemporary metaverse platforms?; RQ2: What privacy and security risks emerge from these data practices, particularly in immersive and educational contexts?; and RQ3: How are consent mechanisms implemented across metaverse platforms, and to what extent do they align with GDPR principles and digital boundary requirements?
The remainder of the paper is organized as follows. Section 2 reviews related work, highlighting key privacy and security vulnerabilities, particularly those related to biometric data and user consent, and identifying the gaps that motivate our focus on user autonomy and trust. Section 3 outlines the categories of data collected by metaverse platforms, while Section 4 analyzes the privacy risks that arise from continuous data collection in immersive environments, indicating their potential impact on education. Section 5 examines existing user-consent strategies implemented across metaverse platforms. Section 6 expands on these findings by identifying persistent challenges surrounding informed consent, biometric data use, and autonomy. Section 7 discusses the broader implications of user consent for managing the disclosure, use, and retention of personal information. Finally, Section 8 concludes the paper by synthesizing the key insights and outlining recommendations for advancing privacy-aware and ethically grounded metaverse-based education.

2. Related Work

As educational institutions increasingly adopt metaverse technologies such as immersive 3D worlds, VR/AR classrooms, and AI-driven learning, digital boundaries have become a critical area of research and practice. These boundaries are essential for privacy, security, ethical conduct, and effective learning. The XR technologies adopted by the metaverse transform educational experiences by offering immersive, interactive, and personalized learning environments. Recent studies, such as [17,18], have explored the diverse applications of the metaverse in education, underscoring its potential to support inclusive and experiential learning. However, alongside its growing popularity and adoption, serious concerns have emerged regarding user security and privacy. As emphasized in [19], AR, a core component of the metaverse, introduces unique privacy challenges. These challenges include informational privacy, volumetric privacy, and physical privacy, given the capacity of AR devices to collect novel forms of personal data such as facial features, reflexes, eye movement, and motion-based biometric identifiers (e.g., kinetic fingerprints). These evolving risks necessitate robust frameworks to ensure user trust and data protection in educational metaverse settings.
Several studies have examined the security and privacy aspects of various metaverse environments, such as those in healthcare [20,21] and education [7,9,18], identifying key vulnerabilities and proposing corresponding mitigation measures. Overall, numerous security and privacy gaps have been identified in various research studies, such as those in [7,22]. These studies have categorized existing vulnerabilities into threats related to access control, edge devices, data management, AI algorithms, and communication links between users and avatars. This classification serves as an initial step in determining the type of data being used and, consequently, in designing appropriate consent mechanisms. Additionally, the work in [9] presents critical privacy issues within the metaverse, highlighting significant concerns such as the vulnerability of biometric data, the continuous collection of personal information through user-worn headsets, and the risks associated with unauthorized access to critical data. Despite the use of secure communication protocols, the integration of metaverse technologies further complicates privacy protection, particularly regarding user profiling and datafication. Insufficient privacy-enhancing mechanisms raise concerns around user autonomy and security. In Ref. [23], the authors discuss privacy concerns in the metaverse and measures to address them; since the metaverse exploits extensive amounts of data, including personal data, to create the virtual world, these concerns are well founded. Conventional websites usually record user consent through Consent Management Platforms (CMPs) [24], e.g., OneTrust [25]. This CMP is also utilized in metaverse-based platforms such as VRChat and SecondLife. Notably, in metaverse spaces, edge devices collect a large amount of critical data, demonstrating that such data collection may occur even without the user’s consent [26,27,28]. Moreover, the work in Ref. [29] explores the significant impacts of public policy and legislation on metaverse security and privacy. In particular, the authors investigate how existing digital policies and the technological requirements of the metaverse intersect and propose a solution that leverages a fully decentralized Self-Sovereign Identity (SSI) architecture to respect data privacy and portability.
The research work in Ref. [30] emphasizes the need for consent mechanisms in social virtual reality, particularly in dating applications, to prevent nonconsensual experiences, underscoring the frequent absence of consent safeguards. In Ref. [31], the authors analyze Meta’s compliance with the European Union’s General Data Protection Regulation (GDPR) provisions [32], including general processing principles, specific rules for the lawful processing of biometric and other special categories of data, and profiling. Additionally, they investigate user consent within metaverse environments, exploring the multiple dimensions that must be considered in its implementation. The work in Ref. [33] also evaluates the consent feature in existing VR/MR devices. Moreover, in Ref. [34], the authors address privacy risks inherent in VR and metaverse platforms, emphasizing their susceptibility to user profiling and deanonymization attacks. Their work presents the first known method for implementing an “incognito mode” in VR, using local differential privacy to obscure user data. While an incognito mode in VR would provide privacy for individual users, it introduces ethical and privacy challenges that must be carefully considered to ensure it does not negatively affect the experience or privacy of others. However, we note that these studies provide a targeted analysis of user privacy only within edge devices. There is a critical gap between the unintentional and intentional data collection in immersive environments and the inherent failure of traditional mechanisms to secure informed user consent [12]. XR devices expose users to significant new security and privacy challenges because they collect data, such as real-world location traceability, user movement, eye tracking, and physiological responses, which can reveal sensitive preferences and interests [9,12].
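The local-differential-privacy idea behind such an “incognito mode” can be illustrated with a minimal sketch, not the actual mechanism of Ref. [34]: a bounded telemetry attribute, here a hypothetical headset-derived height reading in centimeters, is perturbed with calibrated Laplace noise before leaving the device.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exp(1) samples follows Laplace(0, 1).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def ldp_release(value: float, lower: float, upper: float, epsilon: float) -> float:
    """Release a bounded telemetry value under epsilon-local differential
    privacy via the Laplace mechanism; sensitivity is the attribute range."""
    sensitivity = upper - lower
    noisy = value + laplace_noise(sensitivity / epsilon)
    # Clamp back into the valid range so downstream consumers still work.
    return max(lower, min(upper, noisy))

# Example: obscure a height reading (cm) before it is transmitted.
reported = ldp_release(172.0, lower=140.0, upper=210.0, epsilon=1.0)
```

With small epsilon the reported value is heavily randomized; as epsilon grows it approaches the true reading, making the privacy–utility trade-off explicit.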
This continuous and often passive data collection occurs without users being aware of precisely what, when, or why their data is being gathered, leading to a pervasive sense of distrust and a loss of digital autonomy [12,35]. To address this, solutions like [36] propose a continuum of consent mechanisms, ranging from explicit, high-effort inputs, like gestures or voice commands, to semi-implicit systems, using physical sliders or privacy belts to set preferences, and fully automated implicit systems where the system learns and decides, all designed to be quick, low-effort, and contextually sensitive to maintain user control. The authors argue that traditional consent mechanisms like lengthy privacy policies are not suitable for fast, spontaneous AR experiences because users do not have time to read and respond during brief interactions.
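Such a consent continuum could be modeled, in a deliberately simplified form, as follows; the enum values and resolution rule are our illustrative reading, not the implementation from [36]:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ConsentMode(Enum):
    EXPLICIT = "explicit"            # high-effort: gesture or voice command
    SEMI_IMPLICIT = "semi_implicit"  # preset preference, e.g., a privacy slider
    IMPLICIT = "implicit"            # system-learned default

@dataclass
class ConsentRequest:
    data_category: str                        # e.g., "gaze", "surroundings"
    mode: ConsentMode
    user_preference: bool = False             # preset or learned preference
    explicit_response: Optional[bool] = None  # captured gesture/voice answer

def is_permitted(req: ConsentRequest) -> bool:
    """Resolve a data-collection request against the consent continuum:
    an explicit answer is required in EXPLICIT mode; otherwise the preset
    or learned preference applies."""
    if req.mode is ConsentMode.EXPLICIT:
        return req.explicit_response is True  # no answer means no consent
    return req.user_preference
```

The key design point is that silence never grants consent in explicit mode, while lower-effort modes fall back to a preference the user set (or the system learned) in advance.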
Existing research remains fragmented across device-level privacy, general metaverse policy, or sector-specific applications such as healthcare or social VR. Research on the educational sector of the metaverse stresses minors’ data protection, ethical challenges, and pedagogical effects, whereas healthcare studies prioritize biometric data integrity, informed consent, and compliant system architectures [37]. Our research aims to offer a comprehensive understanding of consent management in metaverse environments, which, to the best of our knowledge, is still missing in the literature. Before we proceed further, we first introduce the various types of critical information that are being transmitted in metaverses generally and in education-specific metaverses in particular, which in turn give rise to different threats. Next, we examine differences in privacy policies and consent management across different metaverse platforms, shedding light on the potential challenges that may arise in these virtual environments.

3. Data Collection in Metaverse

This section reports the findings of the literature regarding the types and levels of data collected in metaverse environments, forming the analytical basis for the subsequent privacy and consent analysis, and addressing RQ1. We begin by categorizing the data in alignment with the privacy policies of prominent metaverse platforms, distinguishing between types and levels of information. Types of information refer to the source and method of collection, including data provided directly by users, data generated while using the platform, data collected automatically, and data received from third parties. Levels of information refer to the role or nature of the data, such as authentication data, user-generated content, behavioral data, and derivative data produced through analysis of user interactions [14].
We adopt this dual-dimension approach because it captures both the source and sensitivity of data, as well as unintentional and intentional data collection. We separate the category of biometric data due to its classification as a special category under Article 9 of the GDPR, which imposes stricter consent and processing requirements. Moreover, since we explore the digital boundaries of the metaverse, data about users’ surroundings is also treated separately because it includes the capture of bystanders who are not actively participating in the platform, as well as environmental context, posing unique consent management challenges and raising distinct privacy, ethical, and regulatory concerns. We present both categories as levels of information. Alternative classification criteria might encompass personal and non-personal data, with the distinction between these categories often being blurred, particularly in a metaverse setting where vast amounts of data are generated and analyzed within complex, interconnected environments [38].
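The two-dimensional taxonomy described above can be sketched as a simple data model; the names below are illustrative and not part of any platform’s API:

```python
from enum import Enum

class DataType(Enum):
    """Types of information: the source and method of collection."""
    USER_PROVIDED = "provided by users"
    USAGE = "collected while using the platform"
    AUTOMATIC = "collected automatically"
    THIRD_PARTY = "received from third parties"

class DataLevel(Enum):
    """Levels of information: the role or nature of the data."""
    AUTHENTICATION = "authentication"
    USER_GENERATED = "user-generated content"
    BEHAVIORAL = "behavioral"
    DERIVATIVE = "derivative"
    BIOMETRIC = "biometric"               # GDPR Art. 9 special category
    SURROUNDINGS = "user's surroundings"  # bystander/environment capture

def requires_heightened_protection(level: DataLevel) -> bool:
    """Flag levels demanding stricter consent handling: biometric data
    (GDPR Art. 9) and surroundings data (bystander consent problems)."""
    return level in (DataLevel.BIOMETRIC, DataLevel.SURROUNDINGS)
```

Every collected item can then be tagged with a (type, level) pair, which makes it straightforward to audit whether special-category data is being gathered through an automatic channel the user never explicitly approved.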
Our categorization is important for stakeholders to understand the reasons and methods by which their data will be processed, and why they should pay attention to privacy policies beforehand. Finally, when a privacy policy explicitly prohibits the collection of certain types of Personally Identifiable Information (PII), but taint analysis, a vulnerability-detection technique for tracking sensitive data flows [36], indicates that the code is in fact collecting and transmitting such information, this raises a clear privacy concern.
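To illustrate the kind of policy–code mismatch that taint analysis can surface, consider the following deliberately simplified toy sketch. Real taint analyses operate statically or dynamically on program code rather than on wrapped runtime values, and the field names and policy here are hypothetical:

```python
# Toy taint check: flag flows of tainted (PII) values into a network sink
# when the declared privacy policy prohibits collecting that field.
POLICY_PROHIBITS = {"iris_scan", "heart_rate"}  # hypothetical declared policy

class Tainted:
    """Wraps a value together with the PII field it originated from."""
    def __init__(self, field: str, value):
        self.field, self.value = field, value

violations = []

def network_send(payload):
    """Sink: record a violation if a prohibited tainted field reaches it."""
    for item in payload:
        if isinstance(item, Tainted) and item.field in POLICY_PROHIBITS:
            violations.append(item.field)

# A flow that the policy claims never happens:
network_send([Tainted("heart_rate", 72), Tainted("username", "alice")])
```

Here the transmission of the tainted `heart_rate` value contradicts the declared policy, which is exactly the discrepancy described above.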
An overview of the personal data collected and processed in a metaverse setting is illustrated in Figure 1 below.

3.1. Types of Information in Metaverse Platforms

In general, the types of information required by a metaverse platform can be categorized into four main groups, according to their privacy policies: (i) information provided by users, (ii) information collected while using the platform, (iii) information that is collected automatically, and (iv) information collected by third parties. In the following subsections, we briefly present these data categories.

3.1.1. Information Provided by Users

When accessing a metaverse environment, users are required to provide information similar to that of a typical 2D platform. This information can be categorized into three subcategories:
  • Registration and account information: This information could include username, password, email, birth date, and country of residence.
  • Payment information: When the user buys a service or conducts transactions, the platform collects personal details like ID verification, date of birth, address, phone number, and payment information. Transaction details, such as amounts, parties, methods, and related circumstances (e.g., third-party accounts like PayPal), may also be retained.
  • Uploaded content: Information and content provided in services available within the metaverse, like forums and community environments that do not have a restricted audience, is included.

3.1.2. Information Collected While Using the Platform

Information collected while using the platform can be divided into three subcategories, according to the privacy policies of metaverse-based platforms such as VRChat or SecondLife. These subcategories are the following:
  • Usage Information: This information includes data such as IP address, browser type/version, visited pages, time spent, and unique device identifiers. Additional data may be sent by the user’s browser during visits.
  • Communication Information: This information includes personal information that the user may share through microphone conversations, text chats, or other interactions and communication links between system components.
  • Movement Data: When using a metaverse platform with devices such as a VR headset, it may gather biometric data on the physical movements of fingers, hands, head, and other body parts, depending on the equipment, to animate the avatar.

3.1.3. Data Collected Automatically

While a user navigates the metaverse, multiple types of data are collected automatically. This automated collection includes information from tracking technologies like cookies and web beacons. Some of the tracking technologies are the following:
  • Cookies: Utilized to recognize users across services, improve experience, enhance security, analyze trends, and personalize ads.
  • Location Tracking: Many apps provide settings that allow users to limit or disable explicit location sharing; however, the availability and effectiveness of these controls vary across platforms. Additionally, some forms of location tracking, such as IP geolocation, cannot be disabled by users, meaning complete control over location data may be limited.
  • Web Beacons: Utilized to track how the user interacts with the platform, tailor content, measure usability, and resolve technical issues.
Overall, these technologies help optimize user experiences and improve service functionality.

3.1.4. Data Collected from Third Parties

This type of data includes the following subcategories:
  • Platform Partners: A metaverse platform may receive personal information from third parties, such as analytics services, gaming platforms, social networks, and advertisers.
  • Social Media: The platforms may use personal information from linked profiles (e.g., Facebook).

3.2. Levels of Information in Metaverse Platforms

Data used in a metaverse context goes beyond that of traditional 2D platforms. Specifically, the data within the metaverse consists of four levels of information: the user’s authentication data, content data created by the user, background behavior data that is automatically collected, and derivative data produced through in-depth analysis of the content and behavior data [14]. As stated above, we also include biometric data and user-surroundings data in this category.
Understanding these levels of information is essential, as each one corresponds to distinct privacy risks, regulatory obligations, and consent requirements. The problem is that as more information is collected, it becomes increasingly easier to identify individual users. In the following subsections we briefly present the levels of information.

3.2.1. Authentication Data

Authentication data refers to information used to verify the identity of users and control access to metaverse platforms. This category appears both as a type and a level of information. As a type, it can be user-provided (e.g., passwords, biometric characteristics such as fingerprints) or automatically collected (e.g., behavioral biometrics). As a level, due to their sensitive nature, biometric and behavioral authentication data fall under heightened data protection requirements, including explicit and informed consent (GDPR Art. 7), special-category data protections for biometrics (GDPR Art. 9), and the application of privacy-by-design and data minimization principles (GDPR Art. 25).

3.2.2. User-Generated Content

User-generated content (UGC) encompasses all information created by users within the metaverse, including text, voice messages, avatars, 3D models, forum posts, and multimedia contributions [39]. UGC is central to personal expression, but its collection raises privacy considerations because it may contain PII or other sensitive insights about users [40]. In educational contexts, UGC may include submitted assignments, collaborative documents, or recorded classroom interactions, linking learning behaviors directly to identifiable students.

3.2.3. Behavioral Data

Behavioral data refers to observed patterns of user interaction within the metaverse. This includes movement trajectories, clickstream data, task completion sequences, navigation patterns, and, in VR/AR, gestures and gaze directions [41]. In educational settings, such interaction data represent measurable patterns of behavior that reveal which aspects of user activity are most influential and how they shape overall learning outcomes [42].

3.2.4. Biometric Data

The literature underlines that the metaverse requires real-time and continuous collection of data from users, including sensitive information such as identification and biometric data [2]. Unlike traditional platforms that gather limited identifiers, metaverse systems continuously monitor fine-grained body movements and emotional cues to animate avatars and support interactive experiences. Given that biometric data is classified as a special category under Article 9 of the GDPR, it is noteworthy that the privacy policies of existing metaverse platforms, such as VRChat, OnCyber, Sansar, SecondLife, and Decentraland, which are examined in Section 5, do not explicitly acknowledge the collection of such data. Instead, they refer only to the use of movement data for animation purposes, without recognizing that VR/AR devices are capable of capturing far more sensitive biometric signals, including iris and periocular patterns, facial topology and expressions, heart rate and physiological indicators, gaze direction, and vocal characteristics [19]. For this reason, movement data is also categorized separately under “information collected while using the platform” to reflect this distinction in privacy policies.
Types of biometric data, such as eye movements, heartbeat, and facial expressions, provide valuable insights into user responses and enhance realism and interactivity, allowing for adaptive and personalized experiences based on physiological reactions [33]. This information can be gathered by monitoring the avatar’s behavior, preferences, and interactions in virtual environments [29]. This is beneficial in scenarios such as training, healthcare, and education for a personalized user experience. In educational settings, such data can help teachers assess learners comprehensively and provide them with personalized resources and services [17]. However, if this information is utilized without the user’s explicit consent, it potentially results in the misuse or exploitation of personal information.

3.2.5. Derivative Data

Derivative data is information inferred or generated by analyzing the above categories of data. In the metaverse, where rich interaction and visualization techniques support immersive user experiences and digital identities are tightly coupled with behavioral information, such derived data enable the personalization of learning experiences, optimization of content delivery, and enhancement of virtual interactions [14]. Examples include learning analytics metrics (e.g., performance scores, progress tracking, engagement indicators), predictive models for student success, and derived recommendations [41].

3.2.6. User’s Surroundings Data

The edge devices used in the metaverse also capture information about the user’s surrounding environment [21,43]. This includes the user’s interior or exterior space, whether a workspace or home, as well as information about bystanders who have not even been informed. While this allows platforms to enable spatial mapping, interaction detection, and safety boundaries, it simultaneously results in continuous, sometimes unavoidable, surveillance of private spaces.
A particularly sensitive issue is bystander capture, where individuals who are present in the user’s surroundings, but who have not agreed to take part, are recorded or processed [31]. Environmental data can further reveal sensitive information about the user’s lifestyle, socioeconomic background, or household composition [14]. Despite these implications, platform privacy policies rarely specify what environmental data is collected, whether bystanders are protected, how long such data is retained, or whether it is used to generate derivative insights [31].

3.3. Data in Metaverse-Based Education

The metaverse enables immersive and interactive virtual environments that support forms of experiential learning not achievable in traditional classrooms [7,18]. Metaverse-based educational environments rely on an extensive and multidimensional set of learner data that goes far beyond what is collected in traditional digital learning systems. As described in [17], metaverse learning systems track students’ actions, interactions, choices, and emotional states to provide tailored feedback and facilitate individualized learning pathways. These immersive platforms integrate learning analytics, including task performance, progress rates, errors, reaction patterns, and persistence levels, to identify strengths, weaknesses, and learning preferences, enabling highly personalized instruction. In addition to performance data, metaverse environments can record emotional and behavioral indicators, such as signs of stress, frustration, or confidence, which are used to assess learner readiness, engagement, and attitudes toward learning tasks.
At the same time, personalization mechanisms such as adaptive learning pathways, AI-driven feedback, and customizable avatars require the processing of large volumes of behavioral, preference-based, and biometric data. Such data contributes to personalizing the learning experience and to recommending customized learning activities that suit each student’s needs by exploiting the vast data generated within the metaverse for behavioral analysis and interaction patterns [7]. While this data enhances pedagogical value, it simultaneously raises significant privacy, security, and ethical considerations, particularly regarding transparency, consent, student autonomy, and compliance with regulatory frameworks.

4. Security and Privacy Vulnerabilities of Metaverse’s Implemented Technologies

Based on the data taxonomy established in Section 3, this section presents findings from the literature on security and privacy vulnerabilities in metaverse platforms, with particular emphasis on their implications for educational contexts. Although the analysis is applicable to metaverse platforms broadly, educational use cases are employed throughout the paper as a representative and high-stakes context to illustrate privacy and consent challenges. In this section, we respond to RQ2 by mapping security and privacy threats to specific educational risks and relating them to the data categories identified in Section 3.

4.1. Existing Threats from the Perspective of the Identity Management Lifecycle

Firstly, users have to create a profile to access metaverse environments. Therefore, as a first step, metaverse platforms should provide a trustworthy access control mechanism. However, as we discuss in the following sections, existing metaverse platforms typically rely on standard one-factor authentication, either through usernames and passwords or via third-party services. Moreover, XR devices, which also serve as access points, lack strong access control mechanisms [44]. Hence, metaverse platforms must expand their access control mechanisms to include an identity management scheme that offers various policies to users [7].
Educational data introduces unique privacy and security concerns. For instance, learning analytics, assignment submissions, and interactive collaboration data are inherently sensitive because they reflect students’ academic performance, engagement patterns, and interpersonal interactions [45]. Unauthorized access to such data can compromise academic integrity, enable targeted manipulation, or expose minors’ information [46]. Table 1 presents the identity-related security and privacy threats reported in the literature [47,48,49], organizing them by attack surface, attacker capability, affected data categories, and violated security goals. This synthesis highlights how identity management vulnerabilities in metaverse platforms translate into concrete risks for educational environments, including impersonation, unauthorized access to learning spaces, and exposure of sensitive student data.
Secondly, users are free to create any avatar as a virtual representation of themselves, regardless of their real-world identity, which carries inherent risks [30]. These risks can cause serious problems for the real user behind the avatar in metaverses used for purposes such as education, health, and dating. Users also have the option to create a precise replica of themselves by utilizing measurements obtained from wearable devices. While this enables personalized and adaptive learning experiences [17], it also introduces risks such as identity theft, impersonation, and privacy violations. Behavioral and biometric data (facial expressions, voice, gaze, gestures) may reveal learning engagement patterns or stress levels, which are sensitive in an educational context [50]. Improper handling of this information could lead to unauthorized profiling or psychological manipulation, undermining both student autonomy and trust in educational platforms. Moreover, collaborative behavior data, including group work interactions, chat logs, and shared content, create additional attack surfaces. For example, an insider threat could manipulate collaborative outputs or access peer performance data, introducing ethical and academic risks unique to educational contexts [45,46].
Therefore, the ability of users to freely create and customize avatars introduces several significant concerns:
  • Identity Theft and Impersonation: Serving as virtual representations of a user’s identity, avatars are susceptible to being duplicated or stolen, potentially resulting in impersonation. Furthermore, the impersonator could attempt to gather personal information, opening the door to social engineering or manipulation [44]. In educational environments, impersonation can grant unauthorized individuals access to virtual classrooms, where they may behave inappropriately, distract other students, and create safety concerns. Additionally, such actions can lead to fraudulent submissions of work, ultimately undermining academic integrity. The use of avatars to conceal one’s identity can be exploited for fraudulent purposes, including scams, deepfakes, and deceitful transactions within the metaverse. Further, the anonymity afforded by avatars complicates the attribution of actions to real-world individuals, posing challenges for accountability and legal enforcement [51]. Anonymity becomes especially problematic in educational settings, where verifying the identity of students and educators is crucial for maintaining trust. Ensuring that individuals are who they claim to be is vital for creating a safe and effective learning environment.
  • Psychological Manipulation and Harassment: Avatars can be created or utilized for deviant purposes, allowing for psychological manipulation or harassment of other users [52]. This behavior undermines students’ mental well-being and deters participation, particularly among those who are more vulnerable.
  • Data Privacy Concerns: The creation and use of avatars often involve the collection of critical personal data, including biometric information like facial expressions and voice patterns. If not properly secured, this data can be vulnerable to unauthorized access or misuse, posing significant privacy risks [44]. In education, where students may be minors, the handling of such sensitive data becomes even more critical, necessitating strict compliance with privacy regulations and ethical standards.

4.2. Existing Threats in Data Lifecycle

The volume of data transmitted within the metaverse significantly surpasses that of conventional platforms. Users typically access virtual environments through XR devices that, as stated in the previous section, lack robust security mechanisms. These devices collect movement data for avatar animation purposes, which can also be maliciously exploited to identify users [53]. Moreover, XR devices can track user interactions, collect and analyze behavioral data, and gather environmental data such as dimensions, temperature, surroundings, objects, and even other individuals. XR technology, encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), introduces a wide range of security and privacy challenges. An extended analysis of privacy threats in XR can be found in [54].
Specifically for educational settings, during the collection phase, learning analytics, assessment responses, and real-time engagement data are vulnerable to interception or tampering, such as through false data injection, which can compromise the accuracy of educational records and the reliability of personalized feedback [45]. In the processing phase, behavioral and biometric data are used to generate derivative analytics, including predictive learning models; therefore, if AI models are trained on corrupted or manipulated inputs, recommendations and assessment outcomes may be biased or inaccurate. During the storage and sharing phase, unauthorized access to student-generated content, collaborative work, or avatar data could expose sensitive information and violate GDPR obligations, particularly for minors. Ensuring secure data retention and implementing robust anonymization protocols are therefore essential to mitigate long-term privacy risks [55].
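A common safeguard for the storage and sharing phase described above is to pseudonymize learner identifiers before retention. The following sketch is illustrative rather than any platform’s actual practice; the identifier and record fields are hypothetical:

```python
import hashlib
import hmac
import secrets

def pseudonymize(student_id: str, key: bytes) -> str:
    # Keyed hash (HMAC-SHA256) rather than a bare hash: without the key,
    # pseudonyms cannot be re-derived by enumerating known student IDs.
    return hmac.new(key, student_id.encode(), hashlib.sha256).hexdigest()

key = secrets.token_bytes(32)  # held only by the data controller

# hypothetical learning-analytics record stored under a pseudonym
record = {"learner": pseudonymize("student-123", key), "score": 0.82}

# deterministic per learner, so analytics can still link sessions
assert pseudonymize("student-123", key) == record["learner"]
assert pseudonymize("student-124", key) != record["learner"]
```

Note that pseudonymization is not full anonymization under GDPR: linked behavioral data may still re-identify a learner, which is why retention limits and access controls remain necessary.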
In Table 2, we map privacy and security threats to specific stages of the metaverse data lifecycle. This mapping reveals how risks emerge not only during data collection but persist through processing, storage, sharing, and destruction, amplifying their impact in educational contexts where learning analytics, assessment data, and biometric information are continuously processed.

4.3. Existing Threats from the Perspective of Machine Learning and Artificial Intelligence

AI is the core technology enabling a seamless virtual-reality experience for users [59]. Moreover, a key distinction that sets the metaverse apart from conventional platforms is the concept of personalization. AI enables personalized experiences by employing sophisticated algorithms that analyze user behavior, preferences, and interactions [60]. Moreover, AI enhances educational services by providing advanced simulation, facilitating arbitration and decision-making, and acting as intelligent Non-player Character (NPC) peers, NPC tutors, NPC tutees, and policy-making advisors [7,61,62].
To provide a fully personalized user experience, metaverse data is integrated into machine learning and artificial intelligence algorithms, enabling predictions, identifying patterns, and optimizing processes. Furthermore, artificial intelligence and machine learning play a pivotal role in fostering an enjoyable and trustworthy environment. Just as users’ data is leveraged to train algorithms to deliver personalized experiences based on their behavior, it can also play a critical role in detecting and anticipating security threats. By analyzing behavioral patterns and identifying anomalies, AI-based solutions could also improve the early detection of suspicious activity and enhance the overall effectiveness of threat mitigation strategies.
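The anomaly-based threat detection mentioned above can be sketched with a simple z-score test over behavioral telemetry; the interaction rates and the two-standard-deviation threshold below are illustrative assumptions, not any platform’s actual method:

```python
from statistics import mean, stdev

def flag_anomalies(rates: list[float], threshold: float = 2.0) -> list[int]:
    # Flag sessions whose interaction rate deviates from the mean by
    # more than `threshold` standard deviations (a plain z-score test).
    mu, sigma = mean(rates), stdev(rates)
    return [i for i, r in enumerate(rates) if abs(r - mu) > threshold * sigma]

# hypothetical per-session interaction rates (events/min), one injected spike
rates = [12, 11, 13, 12, 14, 11, 95, 13, 12]
assert flag_anomalies(rates) == [6]  # only the spike at index 6 is flagged
```

A production system would use richer features and models, but the same telemetry that powers such detection is exactly the behavioral data whose collection raises privacy concerns.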
The work in Ref. [63] introduces privacy challenges in metaverse environments that may arise from the utilization of technologies such as artificial intelligence and machine learning, but also presents AI-based solutions to mitigate privacy issues. There is therefore a fine line between leveraging artificial intelligence technologies for their benefits and managing their potential consequences. The main concern is the massive amount of data used, which raises privacy considerations. There are several challenges across the stages needed for data to be transformed into a meaningful outcome. Educational metaverse platforms collect learning analytics and interaction data that reveal patterns of learner engagement and performance, which require specific security protections beyond those for generic activity data [45]. AI and machine learning systems in educational metaverse platforms introduce a range of security and privacy risks that directly impact student data and learning processes. Model integrity threats arise because AI models rely on high-quality inputs from student interactions, learning analytics, and collaborative activities; data poisoning, adversarial attacks, or backdoor manipulations can compromise model performance, leading to biased recommendations, incorrect assessments, or unfair grading [45]. Data leakage and inference attacks occur when ML algorithms unintentionally expose sensitive student information; attribute inference or membership inference attacks may reveal learning patterns, assessment scores, or engagement behaviors, potentially violating GDPR, particularly for minors [55].
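The membership inference risk mentioned above can be illustrated with a minimal confidence-threshold attack; the confidence values below are fabricated stand-ins for the outputs of an overfit model:

```python
def membership_guess(confidence: float, threshold: float = 0.9) -> bool:
    # Overfit models tend to assign systematically higher confidence to
    # records seen during training; the attacker exploits that gap.
    return confidence >= threshold

# illustrative per-record confidences from a hypothetical grading model
train_confs = [0.99, 0.97, 0.95]   # records that were in the training set
unseen_confs = [0.70, 0.62, 0.81]  # records that were not

assert all(membership_guess(c) for c in train_confs)
assert not any(membership_guess(c) for c in unseen_confs)
```

In an educational setting, a correct guess discloses that a particular student’s records were used to train the model, which is itself a privacy violation even if no attribute values leak.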
Behavioral manipulation is another concern, as AI-driven feedback or adaptive systems can inadvertently influence student behavior. For instance, reward shaping or emotional profiling may optimize engagement at the expense of learner autonomy, raising ethical concerns in educational settings [12]. Additionally, ownership and attribution conflicts can arise when student-created content, including collaborative projects, avatar-generated artifacts, or AI-assisted outputs, is misused, re-shared without consent, or incorporated into derivative works, creating intellectual property disputes and privacy violations [64].
In Table 3, we summarize the findings regarding privacy and security vulnerabilities introduced by artificial intelligence and machine learning in metaverse platforms [63,65,66]. By linking specific attack types to lifecycle stages and affected data categories, the table illustrates how AI-driven personalization and analytics can directly compromise learner autonomy, assessment integrity, and data protection in educational metaverse environments.
The findings presented in Section 3 and Section 4 demonstrate that metaverse platforms rely on extensive, continuous, and often opaque data collection practices. These practices generate compounded privacy and security risks that cannot be adequately mitigated through traditional consent mechanisms, particularly in educational environments where user vulnerability and regulatory obligations are heightened.

5. Consent in Selected Metaverse Platforms

This section presents the results of the comparative synthesis of the literature and platform documentation, focusing on how consent mechanisms and data practices are implemented across representative metaverse platforms. The analysis highlights common patterns, divergences, and systemic gaps relevant to privacy governance, particularly in educational contexts. We selected these platforms due to their comprehensive documentation, advanced features, user-friendly interfaces, and substantial user base, which collectively demonstrate their reliability and accessibility. Another key parameter is the widespread recognition of these platforms for their application in educational, social, and collaborative learning contexts. Each one offers features that support user-generated content, immersive interactions, and virtual classrooms, making them relevant case studies for examining privacy practices in metaverse-based education. Their popularity and technical diversity also provide a representative view of how personal data is handled in educational metaverse settings. This section, as well as the following one, addresses RQ3 by comparatively analyzing platform consent mechanisms and their limitations through the lens of GDPR and digital boundaries.

5.1. VRChat

VRChat (2025.1.2) is an innovative online virtual world platform that enables users to engage with others using custom-made 3D avatars and virtual worlds. VRChat is specifically tailored for virtual reality headsets, compatible with Microsoft Windows PCs, and available as a native app for Android-based headsets like Meta Quest, Pico 4, and HTC Vive XR Elite. Additionally, VRChat can be accessed without a VR headset in desktop mode, optimized for mouse and keyboard or gamepad use, as well as through an Android app for touchscreen devices. Regarding access control, VRChat uses the typical one-factor authentication with username and password. However, the platform also enables users to opt into two-factor authentication by installing an authenticator application on their device.
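Authenticator applications of the kind VRChat supports typically implement time-based one-time passwords (TOTP, RFC 6238). A minimal sketch of the code-generation step, verified against the RFC’s published test vector:

```python
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    # HOTP (RFC 4226) applied to the current 30-second time window
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s
assert totp(b"12345678901234567890", 59) == "287082"
```

Because the shared secret never leaves the user’s device and codes expire every 30 seconds, this second factor mitigates the credential-theft risks of the one-factor default.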
VRChat’s privacy policy outlines how the platform collects, uses, and shares user personal information through various platforms, including its website, related subdomains, Desktop Clients, VR Clients, SDKs, and other services. The collection of platform usage data helps VRChat personalize the user experience. The platform may use personal information to assess its effectiveness, analyze usage trends, and personalize the platform based on user preferences and interactions. In addition, it may be used for marketing purposes, such as developing and providing promotional and advertising materials that may be useful, relevant, valuable, or interesting to users. To this end, this information is shared with vendors, service providers, and analytics services that provide knowledge about how the platform is used and about user preferences.
Since VRChat can be accessed via VR devices, it utilizes face tracking technology to detect and analyze movements in real-time, animating the user’s avatar to reflect their actual reactions. The data collected is used solely for avatar animation purposes and is promptly deleted after each use, ensuring that it is not shared with any third parties. Finally, by using the VRChat platform, the user agrees to the collection (see Table 1), use, and disclosure of personal information. Specifically, the type of consent utilized by this platform is explicit consent.

5.2. Oncyber

OnCyber is a metaverse platform that enables users to create their own 3D, immersive experience that can be accessed directly from the browser. Users can showcase digital assets (NFTs) or simply connect virtually in a fully customizable space; OnCyber provides easily accessible tools to bring one’s vision to life, creating a truly inclusive multiverse for creators and explorers alike. It can be accessed by desktop or VR device. The primary login method is through a web3 crypto wallet. Supported wallets include MetaMask, WalletConnect, or other similar Ethereum-compatible wallets. There are also additional log-in methods via third parties (e.g., Google, Twitter), as well as the typical one with an email address.
OnCyber’s privacy policy reports that it collects information provided by the users and information collected automatically through tracking technologies (i.e., cookies, web beacons). The policy claims that the platform collects only the personal information categories listed in the CCR statute, such as name, contact information, education, employment, employment history, and financial information, while it does not collect identifiers such as real name, alias, postal address, telephone or mobile contact number, unique personal identifier, online identifier, Internet Protocol address, email address, and account name. Additionally, it states that it does not collect geolocation data, such as device location. However, this is not clear, since the platform utilizes tracking technologies.
Finally, OnCyber employs an explicit consent mechanism; its privacy policy states that the platform may process user data only if the user has given specific consent.

5.3. Sansar

Sansar is a social virtual reality platform that allows users to create and share interactive 3D environments for activities such as gaming, video viewing, and social interaction. Users are represented by customizable avatars featuring speech-driven facial animations and motion-based body movements. The platform supports both VR headsets (e.g., Oculus Rift and HTC Vive) and Windows computers. Access is free, with optional paid features, and users can register using a username and password or sign in via a mobile application using a VR code. Sansar’s privacy policy emphasizes user consent in various data processing activities. Users must provide explicit consent when sharing personal information, subscribing to marketing communications, or engaging in activities that require data sharing with third parties. Additionally, users have the right to withdraw their consent at any time by adjusting their account settings or contacting Sansar. This ensures transparency and control over personal data while using the platform’s online services.
Sansar’s privacy policy underlines that no data transmission or security system is completely secure, so the platform cannot guarantee the safety of information sent or stored, including personal data. By using the services, users accept the risk of potential data loss or breaches. Users are advised to carefully consider sharing sensitive information and are responsible for maintaining the confidentiality of their personal details.

5.4. SecondLife

SecondLife is a multiplayer virtual world that allows people to create an avatar for themselves and then interact with other users and user-created content within a multi-user online environment. Users worldwide create avatars for socializing, business, or academic pursuits, often without revealing their real identities [67]. It can be accessed by desktop or through a VR device, and the user can create an account simply with a username and password.
The platform collects two types of information, personal information (e.g., name, email, address) and anonymous information (e.g., usage data). Personal information is used to provide and improve services, while anonymous data may be shared with third parties for market research. Information provided by users, such as during registration, transactions, or communications, is stored and may be shared with third parties in specific cases, such as with service providers, business partners, or in the event of a sale.
SecondLife claims that it will not share personal information with third parties except in specific situations, such as when users consent to sharing it with other companies or platforms (like social networks), when companies perform services on its behalf (e.g., hosting, billing, support), with business partners for joint campaigns, in the event of a sale or business transfer, to enforce terms of service or legal obligations, or with affiliates for service performance. These third parties are restricted from using users’ data for purposes beyond what is necessary. The platform does not actively collect sensitive data but may inadvertently receive it through user communications, such as chat functions. If users share sensitive information like political views or sexual orientation, it will be collected along with other data. Users are encouraged not to disclose such information if they wish to keep it private.

5.5. Decentraland

Decentraland is a 3D virtual world browser-based platform where users may buy virtual plots of land as NFTs via the MANA cryptocurrency, which uses the Ethereum blockchain. The primary way to log in is by connecting a crypto wallet that supports Decentraland, such as MetaMask or WalletConnect, although a user can also sign in with a Google account.
By using the Decentraland platform, the user consents to the collection and processing of personal data. Its privacy policy covers the collection of personal data like email, Ether address, device details, and plugin information, as well as how this data is used, including for improving services, communication, and analytics. Third parties may have access to this information for specific functions like providing support or analytics. The policy also discusses how information may be shared with these third parties and under what circumstances it could be disclosed, such as during legal proceedings or business changes.

6. Conceptualizing User Consent in Metaverse Environments

Building on the comparative findings presented in Section 5, this section synthesizes the literature to identify structural limitations of existing consent models and conceptualizes informed consent as a core digital boundary in immersive metaverse environments. The review of metaverse platform policies, examining how they address, or overlook, the multitude of privacy risks in such environments, reveals a deeply interconnected landscape of privacy and security vulnerabilities. Many of these threats are rooted in or amplified by inadequate consent mechanisms and the opaque handling of user data. From behavioral tracking and identity fraud to misuse of educational content and biometric inference, these issues raise fundamental questions about user autonomy and transparency, and most importantly, whether users are fully aware of how their personal data is being collected and utilized.
User consent is permission granted by users to a website or organization to proceed with data collection. Requesting user consent before storing any identifiable information is a good step toward respecting user privacy, but according to [68], user choices are not always respected. Next, we present the existing types of consent:
  • Explicit consent: Explicit consent is a form of user consent that is clearly and unambiguously given, often through written statements. These statements include clicking the traditional “I Accept” button upon reviewing the terms and conditions, checking an empty box indicating agreement to receive marketing emails, or signing a document authorizing the use of personal health data for a designated research project.
  • Implicit consent: With implicit consent, a user has provided information (e.g., email, name) for various purposes, such as buying a product, but has not explicitly agreed to the use of their data for something like marketing. In this case, the user’s consent is assumed.
  • Informed consent: As its name implies, the user is provided with the necessary information in order to decide whether to give their consent or not. Specifically, the user is informed about the type of data the company will process, how it will be used, and the purpose of each processing operation for which consent is required [31].
In educational contexts, informed consent is not merely a procedural formality but a manifestation of learner autonomy and ethical participation. Autonomy entails the user’s right to make free and informed choices about how personal data, identity, and behavioral information are used in learning environments [69]. In the metaverse, where participation inherently involves biometric and behavioral data collection, informed consent must ensure that users maintain meaningful control [9]. When consent is limited to a one-time click-through agreement, it undermines the pedagogical value of agency and self-determination.
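The properties discussed above (specific, informed, and revocable consent) can be made concrete as a per-purpose consent record; the sketch below is illustrative, with hypothetical field and purpose names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                       # one record per processing purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # GDPR Art. 7(3): withdrawing must be as easy as granting consent
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

rec = ConsentRecord("learner-42", "biometric-avatar-animation",
                    granted_at=datetime.now(timezone.utc))
assert rec.active
rec.withdraw()
assert not rec.active  # processing for this purpose must now stop
```

Recording consent per purpose, rather than as a single click-through agreement, is what allows a platform to honor partial withdrawal without terminating the whole account.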

6.1. Tracking Technologies

Consent requests for tracking technologies are usually presented as web forms, allowing users to specify their preferences and indicate which cookies they wish to accept. The following consent options are usually given [68]:
  • Accept All: grant consent for all data processing purposes to all third parties residing on the visited website.
  • Reject All: deny consent for all data processing purposes to all third parties residing in the visited website.
  • No Action: avoid interacting with the form in any way.
However, refusing to consent to tracking technologies (like cookies or location services) might limit users’ access to certain features, such as personalized experiences, location-based interactions, or targeted advertisements. This creates a trade-off where full functionality is contingent upon consent. Furthermore, considering the vast amount of data that is being collected via edge devices, such as VR and AR headsets, for accessing the metaverse world, even the “Reject All” option cannot entirely prevent tracking. Tracking, however, is essential in metaverse environments because many immersive and interactive features, such as adaptive avatars, context-aware virtual environments, personalized content recommendations, and real-time social interactions, rely on understanding user behavior, preferences, and location. Without such data, the system cannot dynamically adjust the experience to the user, resulting in a limited or generic interaction that undermines the core value of the metaverse.
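The trade-off described above can be expressed as a mapping from the user’s banner choice to the processing purposes a platform may enable; the purpose names below are illustrative, not drawn from any platform’s policy:

```python
ESSENTIAL = {"session-management", "safety-boundaries"}
OPTIONAL = {"personalization", "analytics", "targeted-advertising"}

def allowed_purposes(choice: str) -> set[str]:
    if choice == "accept_all":
        return ESSENTIAL | OPTIONAL
    # "reject_all" and "no_action" both fall back to essential-only
    # processing: under GDPR, silence or inaction is not valid consent.
    return set(ESSENTIAL)

assert "analytics" in allowed_purposes("accept_all")
assert allowed_purposes("no_action") == ESSENTIAL
assert "targeted-advertising" not in allowed_purposes("reject_all")
```

Even the essential set here involves continuous sensor processing (e.g., safety boundaries), which is exactly the residual tracking that the “Reject All” option cannot eliminate.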

6.2. Bystander’s Consent

While this type of consent is beginning to be considered in the research community, there are currently no practical implementations within metaverse environments. In such environments, objects and even people are captured through edge devices, such as cameras. Bystanders, in particular, must be fully informed that their presence is captured and about any data processing concerning them. In addition, they should have the ability to withdraw implicit or assumed consent [31].

7. Discussion

The results of the literature synthesis and comparative platform analysis reveal three consistent findings: (i) current metaverse platforms rely predominantly on static, one-time consent mechanisms that are poorly suited to continuous data collection; (ii) biometric, behavioral, and bystander data are insufficiently addressed in existing privacy policies despite their central role in immersive interaction; and (iii) the lack of transparent, revocable, and context-aware consent undermines user autonomy and trust, particularly in educational metaverse environments.
In general, metaverse-based platforms collect personal information (e.g., email, username, payment info) and anonymous data (e.g., IP addresses, usage statistics, device data). Personal information is generally required to create and maintain user accounts, make purchases, and facilitate interactions on the platforms. All platforms use collected data to manage accounts, process payments, personalize experiences, and communicate with users (e.g., system updates, security alerts, promotions). Platforms also use data for improving services (through feedback, analytics, and trends), detecting illegal activities or policy violations, and sharing it with third-party service providers (e.g., payment processors, analytics tools, hosting services). Information may also be shared in the event of legal obligations (e.g., responding to law enforcement requests) or business transactions (e.g., mergers, acquisitions). In Table 4 we present the core findings of the comparative analysis of selected metaverse platforms, summarizing the types of data collected, access control mechanisms, and consent models implemented in practice. The comparison reveals substantial inconsistencies and omissions in how platforms document biometric, behavioral, and environmental data collection, underscoring systemic transparency and consent gaps relevant to educational use cases.
VRChat and Sansar offer a clear opt-in for marketing and communication preferences. OnCyber users must give explicit consent to the platform to access third-party data (e.g., social media logins) and are provided with the option to opt into marketing communications. There is a strong focus on ensuring transparency for data collection practices related to external services.
In Decentraland, user consent is obtained during account creation, and transparency is based on the use of a public blockchain. In addition, the Decentraland platform does not currently use cookies. However, as we already stated, the use of tracking technologies in metaverse environments is necessary for an enhanced, personalized user experience. In SecondLife, user consent is required during account registration, with more flexibility for users to manage data sharing preferences (e.g., opting out of marketing emails). User-generated content may be made public, but users are notified of this risk, and there are options to control the visibility of content.
Regarding biometric data, while Table 4 groups it under “collected by platform” to reflect platforms’ privacy policies, we treated it as a separate category to emphasize its distinct privacy implications. This distinction highlights the need for stronger consent mechanisms and privacy safeguards in metaverse environments. VRChat tracks facial features using video input of user facial movements to animate the facial features of the avatar. This feature is disabled by default, but the user can enable it in the settings, and it remains active until manually turned off. The feature works by analyzing facial movements in real-time to animate the avatar’s face. However, the platform does not use video or facial analysis to identify users. All data is processed and retained temporarily on the user’s device only for the purpose of animating the avatar. Once used, the data is deleted in real-time and cannot be saved. Additionally, the facial tracking data is not shared with VRChat or any third parties. However, phrases like “as long as necessary” or “shorter durations” are vague and may leave users uncertain about the exact duration their data is stored. Furthermore, VRChat’s documentation mentions that it can also provide full-body tracking, but this is not referenced in its privacy policy.
It is worth noting that Decentraland, unlike the other platforms, does not currently provide access through VR devices. Despite this difference, all platforms, including SecondLife, Sansar, and OnCyber, share almost the same privacy policies. We observe that the privacy policies of metaverse-based platforms, such as those presented in the previous section, do not provide sufficient information regarding biometric data. For example, VRChat mentions that it utilizes facial tracking but does not mention any other biometric data that could be collected through the use of VR devices. This omission may arise from the lack of services that require full-body tracking. It is important to note that VR technology enables extensive tracking and can therefore give rise to privacy concerns. We thus identify a serious privacy gap, as the data collection capabilities of these devices appear to be overlooked. This conclusion follows from the fact that our research has documented the full range of data that can be obtained from devices such as VR headsets.
Furthermore, the policies state that anonymous information is gathered to analyze user behavior, improve services, and ensure platform functionality. Personalization in the metaverse can be achieved through AI algorithms. However, the privacy policies refer to the use of data for personalization purposes without explaining how this is achieved. Only VRChat’s policy mentions a potential use of artificial intelligence, and only for determining the appropriateness of content. Ensuring data privacy requires that AI systems collect, store, and process personal and sensitive data in accordance with privacy regulations. This includes obtaining proper consent, utilizing data anonymization techniques, and implementing secure data-handling practices to safeguard individuals’ privacy rights [68]. Finally, although there is substantial research on security and privacy risks in machine learning and artificial intelligence, such concerns are not clarified in the above-mentioned privacy policies. It is therefore imperative for companies to have privacy safeguards in place for the data used in training or testing AI models.
Overall, it is clear that conventional consent types such as explicit consent cannot be straightforwardly applied in the metaverse, given its continuous and immersive nature, which demands stronger and clearer data protection [14]. Since implicit consent operates under the same framework, we assess that the same limitation applies to it as well. The most adequate type of consent would be informed consent. However, a more stringent approach to consent in the metaverse would pose a technical problem, for instance for the seamless immersion that the metaverse intends to provide, since moving from platform to platform would ideally require consent to data usage and processing every single time. Finally, the existing privacy policies make no mention of the tracking of bystanders and, consequently, of their consent.
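Informed consent that goes beyond a one-time, click-through agreement implies per-purpose records that can be granted, checked at each processing step, and revoked at any time. A minimal sketch of such a consent ledger is given below; the `ConsentManager` class and its method names are illustrative assumptions, not an API of any of the platforms discussed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class ConsentRecord:
    purpose: str                         # e.g., "facial_tracking", "marketing"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None

class ConsentManager:
    """Per-purpose, revocable consent ledger for a single user."""

    def __init__(self) -> None:
        self._records: Dict[str, ConsentRecord] = {}

    def grant(self, purpose: str) -> None:
        self._records[purpose] = ConsentRecord(purpose, datetime.now(timezone.utc))

    def revoke(self, purpose: str) -> None:
        rec = self._records.get(purpose)
        if rec and rec.active:
            rec.revoked_at = datetime.now(timezone.utc)

    def allows(self, purpose: str) -> bool:
        """Checked before each processing step, not only at sign-up."""
        rec = self._records.get(purpose)
        return bool(rec and rec.active)

cm = ConsentManager()
cm.grant("facial_tracking")
assert cm.allows("facial_tracking")
cm.revoke("facial_tracking")
assert not cm.allows("facial_tracking")
```

Because each record carries grant and revocation timestamps, the same structure doubles as an audit trail, which supports the transparency obligations discussed later in relation to the GDPR.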
Integrating privacy-aware practices into immersive metaverse education is not only a matter of ethical digital design but also a critical pillar for achieving sustainable development goals (SDGs). Specifically, SDG 4 (Quality Education), SDG 9 (Resilient Infrastructure), and SDG 16 (Peace, Justice, and Strong Institutions) are directly impacted. Embedding transparency, accountability, and user consent into metaverse learning environments contributes to building enjoyable and trustworthy digital infrastructures that reflect institutional integrity. Privacy-aware systems uphold the rights of learners and educators, reduce digital surveillance risks, and ensure that emerging educational technologies serve inclusive, just, and equitable purposes.
Incorporating advanced technologies to enhance user privacy and secure information in metaverse educational environments presents both opportunities and challenges. While these environments are inherently attractive due to their immersive nature and ability to simulate real-world experiences, the integration of privacy-preserving technologies must be carefully balanced to maintain user engagement.
Blockchain-based access control systems can address decentralization challenges by enabling distributed governance and user-controlled data ownership. However, such implementations require users to authenticate repeatedly to verify attributes including age verification and course enrollment status. This frequent authentication process, particularly the repeated use of digital wallets for consent management, may discourage student participation and compromise the seamless immersive experience that defines effective metaverse environments [70]. Moreover, the use of non-fungible tokens (NFTs) for validating academic credentials, such as diplomas and transcripts, introduces additional friction through required user interactions and one-time password (OTP) verifications. These security measures, while necessary for credential integrity, can further diminish immersive experience and create barriers to metaverse adoption among educational institutions and learners [7,70,71,72,73].
Decentralization also underpins Federated Learning as a privacy-aware alternative to centralized AI training, enabling local data processing while supporting personalization and analytics [7]. Employing Federated Learning to preserve data privacy during model training processes imposes substantial computational and memory demands on client devices. These resource requirements can degrade the performance and responsiveness of virtual reality applications, particularly affecting real-time rendering quality and frame rates that are critical for maintaining presence and preventing simulator sickness. In general, federated learning introduces its own vulnerabilities, such as inference attacks, data poisoning, and information leakage through model updates, necessitating complementary safeguards such as differential privacy [21,74,75,76]. If implemented appropriately, these technologies could enable decentralized governance, secure avatar authentication, and privacy-preserving data training via federated learning in the metaverse, as illustrated in Figure 2.
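The combination just described, local training with clipped, noise-protected updates, can be sketched in a few lines. This is a deliberately scalar toy under stated assumptions (a one-parameter "model", hypothetical function names, Gaussian noise standing in for a calibrated differential-privacy mechanism), intended only to show the structure of a federated round in which raw data never leave the clients.

```python
import random

def local_update(weights: float, data: list, lr: float = 0.1) -> float:
    """Toy local step: nudge a scalar model toward the client's data mean."""
    grad = weights - sum(data) / len(data)
    return weights - lr * grad

def dp_federated_round(global_w: float, clients: list,
                       clip: float = 1.0, noise_std: float = 0.05) -> float:
    """One FedAvg-style round with per-client clipping and Gaussian noise."""
    updates = []
    for data in clients:
        delta = local_update(global_w, data) - global_w  # only the delta is shared
        delta = max(-clip, min(clip, delta))             # bound each contribution
        updates.append(delta + random.gauss(0.0, noise_std))  # mask the update
    return global_w + sum(updates) / len(updates)

random.seed(0)
w = 0.0
clients = [[1.0, 1.2], [0.8, 1.1], [0.9, 1.0]]  # raw data stay on each client
for _ in range(50):
    w = dp_federated_round(w, clients)
# w converges toward the clients' global mean without any raw data being shared
```

Even in this toy, the privacy cost is visible: the injected noise perturbs convergence, mirroring the accuracy and resource trade-offs noted above, and the shared deltas themselves remain a leakage channel unless the noise is calibrated as in formal differential privacy [74,75,76].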
Homomorphic encryption also enables privacy-preserving computation by allowing data to be processed in encrypted form without compromising model utility or exposing sensitive user information. However, its practical adoption is constrained by high computational complexity, significant encryption and decryption overhead, and limited efficiency for large-scale, real-time metaverse operations [77]. Furthermore, the integration of Intrusion Detection Systems (IDS), including Network Intrusion Detection Systems (NIDS) and Host-based Intrusion Detection Systems (HIDS), can adversely impact system performance through increased network packet processing overhead and latency, thereby affecting real-time responsiveness essential for immersive interactions [78].
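The core idea of computing on encrypted data can be shown with a textbook additively homomorphic scheme. The sketch below is a toy Paillier cryptosystem with tiny, insecure parameters, chosen only to make the homomorphic property visible; it is not an implementation used by any platform, and real deployments rely on vetted libraries with large keys.

```python
import math
import random

# Toy Paillier keypair with tiny primes (illustration only, NOT secure).
p, q = 13, 17
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # private key component

def L(x: int) -> int:           # Paillier's L function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # second private key component

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
a, b = 15, 27
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == (a + b) % n  # 42, computed without decrypting a or b
```

The overhead mentioned in the text is already apparent here: a single encrypted addition costs several modular exponentiations over a squared modulus, which is why large-scale, real-time metaverse workloads remain out of reach for current schemes [77].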
Consequently, a delicate equilibrium must be established between security requirements and user experience. Students and educators expect seamless metaverse experiences comparable to high-quality commercial virtual reality gaming, without persistent interruptions for consent verification, OTP authentication, credential validation, or the allocation of device resources to cryptographic operations. In the absence of a unified protection framework provided by organizations and platform operators, the burden of managing security and privacy risks is effectively transferred to end users. In most domains, however, such as education, where participants include minors and educators without advanced technical knowledge, this expectation is unrealistic. The path forward requires training users to value and demand privacy-preserving VR environments while simultaneously encouraging technology providers to develop efficient, privacy-by-design metaverse platforms that integrate robust security mechanisms without compromising the high-quality, immersive experiences users expect [77].
Digital boundaries are not mechanisms for eliminating data flows but rather controls that regulate how, where, and at what level data are processed. Privacy-enhancing technologies such as federated learning, homomorphic encryption, and differential privacy enable core metaverse functionalities, including real-time rendering, avatar synchronization, and AI-driven personalization, by allowing sensitive data to be processed locally, transiently, or in encrypted form [77]. In this sense, digital boundaries function as enablers of system utility, embedding privacy into metaverse architecture rather than constraining its data-driven foundation.
These boundaries are operationalized through consent management mechanisms. To assess how consent mechanisms are implemented in practice, Table 5 maps platform practices against GDPR principles concerning essential user rights and data protection safeguards. This mapping demonstrates that, across platforms, compliance is often partial or unclear, particularly for biometric data processing, informed consent, and privacy-by-design obligations, revealing sector-wide governance weaknesses in educational metaverse environments. Notably, the absence of explicit statements regarding certain GDPR principles does not necessarily indicate non-compliance, but it does highlight a lack of transparency. Beyond individual platform practices, a cross-platform comparison reveals systemic weaknesses in biometric data handling and informed consent implementation, as summarized in the following table.
The comparison of these metaverse platforms, despite their similar consent-management functionalities, highlights that they share the same weaknesses in privacy governance. Our analysis of consent management found that consent mechanisms are mostly limited to one-time, click-through agreements, providing users with insufficient understanding of, or control over, how their personal and behavioral data are collected, processed, and shared. We also observed that the privacy policies across all platforms remain unclear regarding biometric and behavioral tracking, omit details on how AI-driven personalization operates, and lack specifics concerning the collection, use, storage, and sharing of data. Moreover, regardless of whether they rely on centralized systems (e.g., VRChat, Sansar, Second Life) or decentralized infrastructures (e.g., Decentraland), the platforms are not yet aligned with GDPR principles.
Metaverse platforms operate across multiple jurisdictions, creating challenges for privacy protection and consent management. The GDPR imposes strict requirements on consent, data minimization, and special categories of personal data, whereas other jurisdictions, such as the United States, adopt more fragmented, sector-specific approaches [31,79]. These differences complicate the implementation of consistent consent mechanisms, particularly in immersive environments where data collection is continuous and often cross-border. As a result, platform privacy policies tend to adopt generalized, minimum-compliance strategies that may not fully reflect jurisdiction-specific obligations. In this study, we adopted a GDPR-informed perspective, while acknowledging that future research should explore how consent and privacy practices adapt to diverse regulatory frameworks. Under the GDPR, data processing within virtual learning environments must respect the principles of lawfulness, transparency, purpose limitation, and data minimization (Articles 5–7). Incorporating privacy-by-design principles (Article 25) and Data Protection Impact Assessments (DPIAs, Article 35) into platform development would ensure that educational institutions uphold both legal and ethical standards. Furthermore, these practices directly contribute to achieving SDGs by promoting equitable and secure digital learning, fostering trustworthy digital infrastructures, and reinforcing accountability and digital justice. The consistency of the shortcomings across platforms indicates that they are not isolated to individual systems but reflect sector-wide challenges, underscoring the urgent necessity for regulatory guidance, the adoption of standardized privacy-by-design frameworks, and the implementation of stronger consent mechanisms. Consequently, integrating privacy-aware consent models into metaverse governance is not only a compliance measure but a strategy for sustainable and ethical digital education.

8. Conclusions

This study has examined privacy and consent mechanisms within educational metaverse platforms, revealing a persistent gap between technological innovation and data protection accountability. Through a comparative analysis of five widely used metaverse environments (VRChat, OnCyber, Sansar, SecondLife, and Decentraland), our work makes three key contributions to the existing literature. First, it provides a structured taxonomy of data types and levels collected in metaverse environments, explicitly highlighting biometric and environmental data that are often overlooked in platform privacy policies. Second, it offers a comparative analysis of consent mechanisms across widely used metaverse platforms, mapping their practices against GDPR principles and identifying systemic governance gaps. Third, it conceptualizes digital boundaries as a unifying framework for understanding privacy, consent, and user autonomy in immersive environments, with specific implications for the design and sustainability of educational metaverse platforms.
While most platforms claim compliance through explicit or implicit consent, their implementations often lack transparency, granularity, and user agency. In contexts where data collection is continuous, consent must evolve beyond one-time agreements toward informed, contextual, and revocable participation. This shift is essential not only for compliance but also for fostering trust, autonomy, and ethical sustainability in virtual learning ecosystems. Finally, privacy must be treated not as a regulatory constraint but as a cornerstone of trustworthy digital education. Ensuring privacy-aware immersive learning will determine whether the metaverse evolves into a safe, equitable, and inclusive space or reinforces existing vulnerabilities within digital societies.

Author Contributions

Conceptualization, S.S. and V.L.; methodology, S.S. and V.L.; validation, S.S., V.L. and C.S.; formal analysis, S.S. and A.F.; investigation, S.S.; resources, C.S.; writing—original draft preparation, S.S.; writing—review and editing, V.L. and A.F.; visualization, S.S.; supervision, C.S. and V.L.; project administration, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Recovery and Resilience Plan Greece 2.0 (project MIS 5154714), funded by the European Union under NextGenerationEU, and by the European Union’s Horizon Europe Research and Innovation Programme under grant agreements No. 101168144 (MIRANDA), No. 101086308 (DUCA), ARN TrustInClouds, and CNRS IRN EU-CHECK.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mourtzis, D.; Panopoulos, N.; Angelopoulos, J.; Wang, B.; Wang, L. Human Centric Platforms for Personalized Value Creation in Metaverse. J. Manuf. Syst. 2022, 65, 653–659. [Google Scholar] [CrossRef]
  2. Choi, M.; Azzaoui, A.E.; Singh, S.K.; Salim, M.M.; Jeremiah, S.R.; Park, J.H. The Future of Metaverse: Security Issues, Requirements, and Solutions. Hum.-Centric Comput. Inf. Sci. 2022, 12, 837–850. [Google Scholar] [CrossRef]
  3. Chawki, M.; Basu, S.; Choi, K.-S. Redefining Boundaries in the Metaverse: Navigating the Challenges of Virtual Harm and User Safety. Laws 2024, 13, 33. [Google Scholar] [CrossRef]
  4. Tricomi, P.P.; Nenna, F.; Pajola, L.; Conti, M.; Gamberini, L. You Can’t Hide Behind Your Headset: User Profiling in Augmented and Virtual Reality. IEEE Access 2023, 11, 9859–9875. [Google Scholar] [CrossRef]
  5. Falchuk, B.; Loeb, S.; Neff, R. The Social Metaverse: Battle for Privacy. IEEE Technol. Soc. Mag. 2018, 37, 52–61. [Google Scholar] [CrossRef]
  6. Di Pietro, R.; Cresci, S. Metaverse: Security and Privacy Issues. In Proceedings of the 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), Atlanta, GA, USA, 13 December 2021; pp. 281–288. [Google Scholar]
  7. Sakka, S.; Liagkou, V.; Stylios, C.; Ferreira, A. On the Privacy and Security for E-Education Metaverse. In Proceedings of the 2024 IEEE Global Engineering Education Conference (EDUCON), Kos Island, Greece, 8 May 2024; pp. 1–10. [Google Scholar]
  8. Hartina, S.; Nurcholis, M.; Dewi, A. Metaverse in Education: Exploring the Potential of Learning in Virtual Worlds. J. Pedagog. 2024, 1, 73–81. [Google Scholar] [CrossRef]
  9. Kaddoura, S.; Al Husseiny, F. The Rising Trend of Metaverse in Education: Challenges, Opportunities, and Ethical Considerations. PeerJ Comput. Sci. 2023, 9, 1252. [Google Scholar] [CrossRef]
  10. Lee, H.; Hwang, Y. Technology-Enhanced Education through VR-Making and Metaverse-Linking to Foster Teacher Readiness and Sustainable Learning. Sustainability 2022, 14, 4786. [Google Scholar] [CrossRef]
  11. Lai, Y.-H.; Lin, Y.-S.; Chang, Y.-C.; Chen, S.-Y. Cyber-Physical Metaverse Learning in Cultural Sustainable Education. Libr. Hi Tech 2024. ahead-of-print. [Google Scholar] [CrossRef]
  12. Tukur, M.; Schneider, J.; Househ, M.; Dokoro, A.H.; Ismail, U.I.; Dawaki, M.; Agus, M. The Metaverse Digital Environments: A Scoping Review of the Challenges, Privacy and Security Issues. Front. Big Data 2023, 6, 1301812. [Google Scholar] [CrossRef]
  13. Meena, S.D.; Mithesh, G.S.S.; Panyam, R.; Chowdary, M.S.; Sadhu, V.S.; Sheela, J. Advancing Education through Metaverse: Components, Applications, Challenges, Case Studies and Open Issues. In Proceedings of the 2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS), Coimbatore, India, 14–16 June 2023; pp. 880–889. [Google Scholar]
  14. Wu, H.; Zhang, W. Digital Identity, Privacy Security, and Their Legal Safeguards in the Metaverse. Secur. Saf. 2023, 2, 2023011. [Google Scholar] [CrossRef]
  15. Zhao, R.; Zhang, Y.; Zhu, Y.; Lan, R.; Hua, Z. Metaverse: Security and Privacy Concerns. J. Metaverse 2023, 3, 93–99. [Google Scholar] [CrossRef]
  16. Polychronaki, M.; Xevgenis, M.G.; Kogias, D.G.; Leligou, H.C. Decentralized Identity Management for Metaverse-Enhanced Education: A Literature Review. Electronics 2024, 13, 3887. [Google Scholar] [CrossRef]
  17. Zhang, X.; Chen, Y.; Hu, L.; Wang, Y. The Metaverse in Education: Definition, Framework, Features, Potential Applications, Challenges, and Future Research Topics. Front. Psychol. 2022, 13, 1016300. [Google Scholar] [CrossRef] [PubMed]
  18. Lin, H.; Wan, S.; Gan, W.; Chen, J.; Chao, H.-C. Metaverse in Education: Vision, Opportunities, and Challenges. In Proceedings of the 2022 IEEE International Conference on Big Data (Big Data), Osaka, Japan, 17–20 December 2022; pp. 2857–2866. [Google Scholar]
  19. Christopoulos, A.; Mystakidis, S.; Pellas, N.; Laakso, M.-J. ARLEAN: An Augmented Reality Learning Analytics Ethical Framework. Computers 2021, 10, 92. [Google Scholar] [CrossRef]
  20. Han, B.; Wang, H.; Qiao, D.; Xu, J.; Yan, T. Application of Zero-Watermarking Scheme Based on Swin Transformer for Securing the Metaverse Healthcare Data. IEEE J. Biomed. Health Inform. 2023, 1–10. [Google Scholar] [CrossRef]
  21. Letafati, M.; Otoum, S. On the Privacy and Security for E-Health Services in the Metaverse: An Overview. Ad Hoc Netw. 2023, 150, 103262. [Google Scholar] [CrossRef]
  22. Wang, Y.; Su, Z.; Zhang, N.; Xing, R.; Liu, D.; Luan, T.H.; Shen, X. A Survey on Metaverse: Fundamentals, Security, and Privacy. IEEE Commun. Surv. Tutor. 2023, 25, 319–352. [Google Scholar] [CrossRef]
  23. Canbay, Y.; Utku, A.; Canbay, P. Privacy Concerns and Measures in Metaverse: A Review. In Proceedings of the 2022 15th International Conference on Information Security and Cryptography (ISCTURKEY), Ankara, Turkey, 19–20 October 2022; pp. 80–85. [Google Scholar]
  24. Liu, Z.; Iqbal, U.; Saxena, N. Opted Out, Yet Tracked: Are Regulations Enough to Protect Your Privacy? Proc. Priv. Enhancing Technol. 2024, 2024, 280–299. [Google Scholar] [CrossRef]
  25. Onetrust. Available online: https://www.onetrust.com/ (accessed on 20 September 2025).
  26. Ruiu, P.; Nitti, M.; Pilloni, V.; Cadoni, M.; Grosso, E.; Fadda, M. Metaverse & Human Digital Twin: Digital Identity, Biometrics, and Privacy in the Future Virtual Worlds. Multimodal Technol. Interact. 2024, 8, 48. [Google Scholar] [CrossRef]
  27. Nair, V.; Munilla Garrido, G.; Song, D.; O’Brien, J. Exploring the Privacy Risks of Adversarial VR Game Design. Proc. Priv. Enhancing Technol. 2023, 2023, 238–256. [Google Scholar] [CrossRef]
  28. Bye, K.; Hosfelt, D.; Chase, S.; Miesnieks, M.; Beck, T. The Ethical and Privacy Implications of Mixed Reality. In Proceedings of the ACM SIGGRAPH 2019 Panels, Los Angeles, CA, USA, 28 July 2019; pp. 1–2. [Google Scholar]
  29. Laborde, R.; Ferreira, A.; Lepore, C.; Kandi, M.-A.; Sibilla, M.; Benzekri, A. The Interplay Between Policy and Technology in Metaverses: Towards Seamless Avatar Interoperability Using Self-Sovereign Identity. In Proceedings of the 2023 IEEE International Conference on Metaverse Computing, Networking and Applications (MetaCom), Kyoto, Japan, 26–28 June 2023; pp. 418–422. [Google Scholar]
  30. Zytko, D.; Chan, J. The Dating Metaverse: Why We Need to Design for Consent in Social VR. IEEE Trans. Vis. Comput. Graph. 2023, 29, 2489–2498. [Google Scholar] [CrossRef] [PubMed]
  31. Xynogalas, V.; Leiser (Mark), M.R. The Metaverse: Searching for Compliance with the General Data Protection Regulation. Int. Data Priv. Law 2024, 14, 89–105. [Google Scholar] [CrossRef]
  32. General Data Protection Regulation (GDPR)—Legal Text. Available online: https://gdpr-info.eu/ (accessed on 9 December 2025).
  33. Basyoni, L.; Tabassum, A.; Shaban, K.; Elmahjub, E.; Halabi, O.; Qadir, J. Navigating Privacy Challenges in the Metaverse: A Comprehensive Examination of Current Technologies and Platforms. IEEE Internet Things Mag. 2024, 7, 144–152. [Google Scholar] [CrossRef]
  34. Nair, V.C.; Munilla-Garrido, G.; Song, D. Going Incognito in the Metaverse: Achieving Theoretically Optimal Privacy-Usability Tradeoffs in VR. In Proceedings of the Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, San Francisco, CA, USA, 29 October–1 November 2023; pp. 1–16. [Google Scholar]
  35. Rahartomo, A.; Merino, L.; Ghafari, M. Metaverse Security and Privacy Research: A Systematic Review. Comput. Secur. 2025, 157, 104602. [Google Scholar] [CrossRef]
  36. Windl, M.; Laboda, P.Z.; Mayer, S. Designing Effective Consent Mechanisms for Spontaneous Interactions in Augmented Reality. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; pp. 1–18. [Google Scholar]
  37. Singh, J.; Singh, P.; Kaur, R.; Kaur, A.; Hedabou, M. Privacy and Security in the Metaverse: Trends, Challenges, and Future Directions. IEEE Access 2025, 13, 120209–120243. [Google Scholar] [CrossRef]
  38. Frosio, G.; Obafemi, F. Augmented Accountability: Data Access in the Metaverse. Comput. Law Secur. Rev. 2025, 59, 106196. [Google Scholar] [CrossRef]
  39. Hussain, Z.; Khan, A.; Ali, A. The Impact of User-Generated Content, Social Interactions and Virtual Economies on Metaverse Environments. J. Sustain. Econ. 2023, 1, 34–44. [Google Scholar] [CrossRef]
  40. YemenïCï, A.D. Entrepreneurship in The World of Metaverse: Virtual or Real? J. Metaverse 2022, 2, 71–82. [Google Scholar] [CrossRef]
  41. Tao, L.; Cukurova, M.; Song, Y. Learning Analytics in Immersive Virtual Learning Environments: A Systematic Literature Review. Smart Learn. Environ. 2025, 12, 43. [Google Scholar] [CrossRef]
  42. Xu, W.; Zhang, N.; Wang, M. The Impact of Interaction on Continuous Use in Online Learning Platforms: A Metaverse Perspective. Internet Res. 2024, 34, 79–106. [Google Scholar] [CrossRef]
  43. Garrido, G.M.; Nair, V.; Song, D. SoK: Data Privacy in Virtual Reality. Proc. Priv. Enhancing Technol. 2024, 2024, 21–40. [Google Scholar] [CrossRef]
  44. Alcántara, J.C.; Tasic, I.; Cano, M.-D. Enhancing Digital Identity: Evaluating Avatar Creation Tools and Privacy Challenges for the Metaverse. Information 2024, 15, 624. [Google Scholar] [CrossRef]
  45. Lampropoulos, G.; Evangelidis, G. Learning Analytics and Educational Data Mining in Augmented Reality, Virtual Reality, and the Metaverse: A Systematic Literature Review, Content Analysis, and Bibliometric Analysis. Appl. Sci. 2025, 15, 971. [Google Scholar] [CrossRef]
  46. Mousa, A.; Omar, A. The Ethical Dilemma of Educational Metaverse. Recent Adv. Evol. Educ. Outreach 2024, 1, 006–016. [Google Scholar] [CrossRef]
  47. Thakur, G.; Gautam, D.; Kumar, P.; Das, A.K.; K., V.B.; Rodrigues, J.J.P.C. Blockchain-Assisted Cross-Platform Authentication Protocol With Conditional Traceability for Metaverse Environment in Web 3.0. IEEE Open J. Commun. Soc. 2024, 5, 7244–7261. [Google Scholar] [CrossRef]
  48. Ryu, J.; Son, S.; Lee, J.; Park, Y.; Park, Y. Design of Secure Mutual Authentication Scheme for Metaverse Environments Using Blockchain. IEEE Access 2022, 10, 98944–98958. [Google Scholar] [CrossRef]
  49. Thakur, G.; Kumar, P.; Chen, C.-M.; Vasilakos, A.V.; Anchna; Prajapat, S. A Robust Privacy-Preserving ECC-Based Three-Factor Authentication Scheme for Metaverse Environment. Comput. Commun. 2023, 211, 271–285. [Google Scholar] [CrossRef]
  50. Bakk, Á.K.; Bényei, J.; Ballack, P.; Parente, F. Current Possibilities and Challenges of Using Metaverse-like Environments and Technologies in Education. Front. Virtual Real. 2025, 6, 1521334. [Google Scholar] [CrossRef]
  51. Cheong, B.C. Avatars in the Metaverse: Potential Legal Issues and Remedies. Int. Cybersecur. Law Rev. 2022, 3, 467–494. [Google Scholar] [CrossRef]
  52. Dwivedi, Y.K.; Hughes, L.; Baabdullah, A.M.; Ribeiro-Navarrete, S.; Giannakis, M.; Al-Debei, M.M.; Dennehy, D.; Metri, B.; Buhalis, D.; Cheung, C.M.K.; et al. Metaverse beyond the Hype: Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. Int. J. Inf. Manag. 2022, 66, 102542. [Google Scholar] [CrossRef]
  53. Mustafa, T.; Matovu, R.; Serwadda, A.; Muirhead, N. Unsure How to Authenticate on Your VR Headset?: Come on, Use Your Head! In Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics, Tempe, AZ, USA, 21 March 2018; pp. 23–30. [Google Scholar]
  54. Qamar, S.; Anwar, Z.; Afzal, M. A Systematic Threat Analysis and Defense Strategies for the Metaverse and Extended Reality Systems. Comput. Secur. 2023, 128, 103127. [Google Scholar] [CrossRef]
  55. Christodoulou, P.; Limniotis, K. Data Protection Issues in Automated Decision-Making Systems Based on Machine Learning: Research Challenges. Network 2024, 4, 91–113. [Google Scholar] [CrossRef]
  56. Mosharraf, M. Data Governance in Metaverse: Addressing Security Threats and Countermeasures across the Data Lifecycle. Technol. Soc. 2025, 82, 102910. [Google Scholar] [CrossRef]
  57. Fiaz, F.; Sajjad, S.M.; Iqbal, Z.; Yousaf, M.; Muhammad, Z. MetaSSI: A Framework for Personal Data Protection, Enhanced Cybersecurity and Privacy in Metaverse Virtual Reality Platforms. Future Internet 2024, 16, 176. [Google Scholar] [CrossRef]
  58. Koo, J.; Kang, G.; Kim, Y.-G. Security and Privacy in Big Data Life Cycle: A Survey and Open Challenges. Sustainability 2020, 12, 10571. [Google Scholar] [CrossRef]
  59. Huynh-The, T.; Pham, Q.-V.; Pham, X.-Q.; Nguyen, T.T.; Han, Z.; Kim, D.-S. Artificial Intelligence for the Metaverse: A Survey. Eng. Appl. Artif. Intell. 2023, 117, 105581. [Google Scholar] [CrossRef]
  60. Awadallah, A.; Eledlebi, K.; Zemerly, M.J.; Puthal, D.; Damiani, E.; Taha, K.; Kim, T.-Y.; Yoo, P.D.; Raymond Choo, K.-K.; Yim, M.-S.; et al. Artificial Intelligence-Based Cybersecurity for the Metaverse: Research Challenges and Opportunities. IEEE Commun. Surv. Tutor. 2025, 27, 1008–1052. [Google Scholar] [CrossRef]
  61. Hwang, G.-J.; Xie, H.; Wah, B.W.; Gašević, D. Vision, Challenges, Roles and Research Issues of Artificial Intelligence in Education. Comput. Educ. Artif. Intell. 2020, 1, 100001. [Google Scholar] [CrossRef]
  62. Hwang, G.-J.; Chien, S.-Y. Definition, Roles, and Potential Research Issues of the Metaverse in Education: An Artificial Intelligence Perspective. Comput. Educ. Artif. Intell. 2022, 3, 100082. [Google Scholar] [CrossRef]
  63. Alkaeed, M.; Qayyum, A.; Qadir, J. Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey 2023. J. Netw. Comput. Appl. 2024, 231, 103989. [Google Scholar] [CrossRef]
  64. Al-Busaidi, A.S.; Raman, R.; Hughes, L.; Albashrawi, M.A.; Malik, T.; Dwivedi, Y.K.; Al- Alawi, T.; AlRizeiqi, M.; Davies, G.; Fenwick, M.; et al. Redefining Boundaries in Innovation and Knowledge Domains: Investigating the Impact of Generative Artificial Intelligence on Copyright and Intellectual Property Rights. J. Innov. Knowl. 2024, 9, 100630. [Google Scholar] [CrossRef]
  65. Habbal, A.; Ali, M.K.; Abuzaraida, M.A. Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, Applications, Challenges and Future Research Directions. Expert Syst. Appl. 2024, 240, 122442. [Google Scholar] [CrossRef]
  66. Qayyum, A.; Butt, M.A.; Ali, H.; Usman, M.; Halabi, O.; Al-Fuqaha, A.; Abbasi, Q.H.; Imran, M.A.; Qadir, J. Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses. ACM Comput Surv 2024, 56, 170. [Google Scholar] [CrossRef]
  67. Demirci, B.; Yaşa Özeltürkay, E.; Gülmez, M. Metaverse Users’ Purchase Intention in Second Life. J. Metaverse 2024, 4, 84–93.
  68. Papadogiannakis, E.; Papadopoulos, P.; Kourtellis, N.; Markatos, E.P. User Tracking in the Post-Cookie Era: How Websites Bypass GDPR Consent to Track Users. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 2130–2141.
  69. Beauchamp, T.; Childress, J. Principles of Biomedical Ethics: Marking Its Fortieth Anniversary. Am. J. Bioeth. 2019, 19, 9–12.
  70. Elsadig, M.; Alohali, M.A.; Ibrahim, A.O.; Abulfaraj, A.W. Roles of Blockchain in the Metaverse: Concepts, Taxonomy, Recent Advances, Enabling Technologies, and Open Research Issues. IEEE Access 2024, 12, 38410–38435.
  71. Huynh-The, T.; Gadekallu, T.R.; Wang, W.; Yenduri, G.; Ranaweera, P.; Pham, Q.-V.; Da Costa, D.B.; Liyanage, M. Blockchain for the Metaverse: A Review. Future Gener. Comput. Syst. 2023, 143, 401–419.
  72. Aisyahrani, A. Non-Fungible Tokens, Decentralized Autonomous Organizations, Web 3.0, and the Metaverse in Education: From University to Metaversity. J. Educ. Learn. (EduLearn) 2023, 17, 1–15.
  73. Banaeian Far, S.; Hosseini Bamakan, S.M. NFT-Based Identity Management in Metaverses: Challenges and Opportunities. SN Appl. Sci. 2023, 5, 260.
  74. Lyu, L.; Yu, H.; Yang, Q. Threats to Federated Learning: A Survey. arXiv 2020, arXiv:2003.02133.
  75. Geiping, J.; Bauermeister, H.; Dröge, H.; Moeller, M. Inverting Gradients—How Easy Is It to Break Privacy in Federated Learning? arXiv 2020, arXiv:2003.14053.
  76. Wei, K.; Li, J.; Ding, M.; Ma, C.; Yang, H.H.; Farokhi, F.; Jin, S.; Quek, T.Q.S.; Vincent Poor, H. Federated Learning with Differential Privacy: Algorithms and Performance Analysis. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3454–3469.
  77. Chen, C.; Li, Y.; Wu, Z.; Mai, C.; Liu, Y.; Hu, Y.; Zheng, Z.; Kang, J. Privacy Computing Meets Metaverse: Necessity, Taxonomy and Challenges. arXiv 2023, arXiv:2304.11643.
  78. Ali, M.; Naeem, F.; Kaddoum, G.; Hossain, E. Metaverse Communications, Networking, Security, and Applications: Research Issues, State-of-the-Art, and Future Directions. IEEE Commun. Surv. Tutor. 2022, 26, 1238–1278.
  79. Sorrentino, G.; López-Guzmán, J. Rethinking Privacy for Avatars: Biometric and Inferred Data in the Metaverse. Front. Virtual Real. 2025, 6, 1520655.
Figure 1. An overview of the data collected in the Metaverse. Solid arrows indicate direct collection/creation relationships; dashed arrows indicate indirect/analysis relationships.
Figure 2. Overview of blockchain-based access control and privacy-preserving model training via federated learning in the educational Metaverse.
Table 1. Categorization of identity-related threats by attack surface, attacker privilege, and security goal violated.
| Attack Type | Privilege Level | Affected Data Category | Attack Surface | Violated Security Goal(s) | Attacker’s Target | Potential Impact in Education |
|---|---|---|---|---|---|---|
| Stolen Smart Device | External | Authentication Data | Device | Confidentiality, Authentication | Extract credentials-related parameters | Unauthorized access to student profiles or virtual classrooms |
| Offline Password Guessing | Passive Eavesdropper | Authentication Data | Device & Network | Confidentiality, Authentication | Derive user password from intercepted and stored values | Account compromise, impersonation of educators/students |
| Impersonation | External | Authentication Data | Network | Authentication | Masquerade as a legitimate user | Infiltration of classes, manipulation of grades or discussions |
| Platform Server Spoofing | External | Authentication Data, UGC | Server | Integrity, Authentication | Deceive users with a fake platform to steal data or credentials | Misdirection, phishing, or theft of sensitive learning data |
| Replay & MITM Attack | Passive/External | Authentication Data, Behavioral Data | Network | Integrity, Authentication, Non-Repudiation | Reuse or intercept messages to bypass verification | Hijacked sessions, false submissions, altered attendance logs |
| Insider Attack | Internal | Behavioral Data, UGC | System/Avatar | Authentication, Confidentiality | Abuse platform knowledge to impersonate others | Violation of academic integrity, avatar misuse |
| Privileged Insider Attack | Privileged Insider | Authentication Data, Derivative Data | System | Confidentiality, Authentication | Use elevated access to steal or forge authentication data | Avatar impersonation, identity leakage |
| Ephemeral Secret Leakage | Advanced External | Authentication Data, Behavioral Data | Device/Network | Confidentiality, Forward Secrecy | Use short- and long-term secrets to reconstruct secure sessions | Breach of session privacy, access to sensitive discussions |
| Perfect Forward Secrecy Breach | Advanced External | Behavioral Data, UGC | Session/Protocol | Forward Secrecy | Use long-term keys to decrypt past sessions | Retrospective access to private data, long-term privacy loss |
| User Anonymity Violation | Passive Eavesdropper | Behavioral Data, Derivative Data | Device & Network, Protocol | Anonymity, Privacy | Reveal true identity behind pseudonym | Student tracking, profiling, or targeting |
| Mutual Authentication Bypass | External/Internal | Authentication Data | Protocol | Authentication, Integrity | Disrupt mutual trust between user and server | Unauthorized access, impersonation, unverified interactions |
Table 2. Threat categorization in each data lifecycle phase and potential implications for education.
| Threat Category | Data Lifecycle Phase | Affected Data Category | Description | Example Scenarios/Attack Methods | Implications in Education | Reference(s) |
|---|---|---|---|---|---|---|
| Data Tampering | Storage, Sharing | Behavioral Data, Derivative Data, UGC | Unauthorized modification, deletion, or replacement of stored or in-transit data | Altered session logs, corrupted sensor data, fake class records | Undermines trust, misrepresents student behavior and participation | [6,22,54,56] |
| False Data Injection | Collection, Processing | Behavioral Data, Biometric Data, User’s Surroundings Data | Insertion of malicious data into XR systems or AI training models | Poisoned AI models, harmful haptic feedback (e.g., fake physical pain), falsified motion or environmental inputs | Harms physical safety, corrupts AI-driven recommendations and feedback systems | |
| Biometric & Sensor Data Leakage | Collection, Storage | Biometric Data, Behavioral Data | Challenges in handling sensitive, biometric, or behavioral data captured via XR sensors | Eavesdropping, MITM, man-in-the-room, packet sniffing, side-channel attacks | High privacy risk, exposure of minors, biometric data theft | [22,54,56,57] |
| Low-Quality UGC & Sensor Input | Collection, Processing | UGC, Behavioral Data, Biometric Data | Poorly calibrated devices or unverified UGC lead to degraded experiences or flawed data models | Misaligned visuals, non-IID (non-independent and identically distributed) data in recommendation systems, erroneous inputs | Low quality of service and user experience, reduced immersion, poor recommendation quality | [22,56] |
| UGC Ownership & Provenance Issues | Storage, Sharing | UGC, Derivative Data | Difficulty verifying authorship and originality of user-generated content across distributed platforms | Removal of embedded signatures or invisible watermarks that track original ownership; copied educational content; replicated avatar-generated projects | Academic dishonesty, IP disputes, unfair attribution in collaborative environments | [22,54,57] |
| Intellectual Property Violations | Sharing, Destruction | UGC, Derivative Data | Inadequate legal and technical frameworks to enforce digital content rights | Use of recognizable individuals in avatar form, unauthorized remixing of lecture content, reused AI-generated content | Legal ambiguity in digital content ownership and licensing in education | [22,54,56,57] |
| Insecure Data Retention/Destruction | Destruction | Biometric Data, UGC, Behavioral Data | Lack of proper mechanisms for data expiration, secure deletion, or revocation | Persistent biometric records post-course, inability to revoke user-generated data and content | Violates user consent, risk of future exposure, loss of the “right to be forgotten” | [58] |
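The last row of Table 2 concerns the absence of data-expiration mechanisms. A basic mitigation is a category-aware retention sweep that purges records once their window elapses. The Python sketch below illustrates the pattern; the categories match the table, but the retention periods are assumptions chosen for the example, not values drawn from any reviewed platform or regulation.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows per data category; real values would come
# from institutional policy and the user's consent, not from this sketch.
RETENTION = {
    "biometric": timedelta(days=0),    # purge immediately after the session
    "behavioral": timedelta(days=90),
    "ugc": timedelta(days=365),
}

def expired(record: dict, now: datetime) -> bool:
    """A record expires once its category's retention window has passed."""
    return now - record["created"] > RETENTION[record["category"]]

def purge(records: list, now: datetime) -> list:
    """Keep only records still inside their retention window."""
    return [r for r in records if not expired(r, now)]

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "category": "biometric", "created": now - timedelta(days=1)},
    {"id": 2, "category": "behavioral", "created": now - timedelta(days=30)},
    {"id": 3, "category": "ugc", "created": now - timedelta(days=400)},
]
kept = purge(records, now)
assert [r["id"] for r in kept] == [2]  # only the in-window behavioral record survives
```

Filtering alone is not secure deletion: purged records must also be erased from backups and replicas for the “right to be forgotten” to hold in practice.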
Table 3. Categorization of vulnerabilities in AI models and their educational impact.
| Threat Category | Attacks | Description | Lifecycle Stage | Affected Data Category | Educational Impact |
|---|---|---|---|---|---|
| Model Integrity Attacks | Data Poisoning; Backdoor Injection; Adversarial Examples | Attackers corrupt training data or manipulate inputs to degrade model performance. | Training, Inference | Behavioral Data, Biometric Data, Derivative Data | AI tutors give incorrect feedback, facial recognition login fails, learning manipulation |
| Federated Learning Exploits | Model Poisoning in FL; Gradient Inversion; Free-rider Attacks | Federated updates are manipulated to corrupt or extract private information. | Collaborative Training | Behavioral Data, Biometric Data, Derivative Data | Bias in shared education models, reduced personalization, data leakage |
| Behavioral Data Leakage | Attribute Inference; Membership Inference; Model Inversion | ML models leak user traits or behaviors from outputs or embeddings. | Data Collection, Inference | Behavioral Data, Derivative Data | Inference of student mood, stress, or engagement; violation of privacy |
| Avatar and Identity Spoofing | Deepfakes; Avatar Cloning; Voice Synthesis | AI-generated media impersonates users or teachers in XR. | Inference, Interaction | Authentication Data, Biometric Data, UGC | Unauthorized exam participation, fake instructors spreading misinformation |
| Consent & Surveillance Risks | Covert Data Logging; Sensor Overreach; Persistent Tracking | User data is collected without clear consent or beyond initial intent. | Data Collection | Biometric Data, Behavioral Data, User’s Surroundings Data | Trust erosion, collection of biometric data (gaze, voice) without student awareness |
| Behavioral Manipulation | Emotional Profiling; Reward Shaping; Cognitive Nudging | AI agents influence users based on psychological patterns or reward signals. | Inference, Interaction | Behavioral Data, Derivative Data | Student autonomy reduced, learning outcomes optimized for engagement, not understanding |
| Explainability Gaps | Black-box Decisions; Lack of Justification Logging | Users cannot understand why the AI behaves in certain ways. | Inference | Derivative Data | AI systems make opaque decisions (e.g., marking disengagement) without explanation |
| Ownership & Attribution Conflict | Unlicensed AIGC Reuse; Content Provenance Loss; IP Ambiguity | AI-generated or user-created educational assets are reused without attribution or consent. | Post-processing, Deployment | UGC, Derivative Data | Students’ XR projects used in public apps, institutional IP disputes |
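The federated-learning exploits in Table 3, gradient inversion in particular, motivate differentially private aggregation of client updates, as analyzed in [76]. The pure-Python sketch below shows only the clip-average-noise pattern at the heart of such schemes; the parameter values are illustrative and the sketch carries no formal (ε, δ) privacy accounting.

```python
import math
import random

def clip(update, max_norm):
    """Scale a client's update so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_aggregate(updates, max_norm=1.0, noise_std=0.5, seed=0):
    """Clip every client update, average them, then add Gaussian noise,
    so no single client's contribution dominates or is exactly recoverable."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    clipped = [clip(u, max_norm) for u in updates]
    dim = len(updates[0])
    avg = [sum(u[i] for u in clipped) / len(clipped) for i in range(dim)]
    return [a + rng.gauss(0.0, noise_std / len(clipped)) for a in avg]

# Three clients submit 2-dimensional updates; one is anomalously large.
updates = [[3.0, 4.0], [0.1, -0.2], [-1.0, 1.0]]
noisy = dp_aggregate(updates)
assert len(noisy) == 2
# Clipping bounds the large update to unit norm before aggregation.
assert math.isclose(math.sqrt(sum(x * x for x in clip([3.0, 4.0], 1.0))), 1.0)
```

Clipping caps each client’s influence (limiting model poisoning by a single participant), while the noise blurs individual contributions (limiting gradient-inversion style reconstruction), at some cost in model accuracy.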
Table 4. Collected data across metaverse platforms and consent mechanisms. The symbol ✓ indicates that the platform collects this type of data, while the symbol × signifies that it does not. The symbol * denotes that the type of data is not explicitly mentioned in the platform’s privacy policy.
Platforms compared: VRChat, OnCyber, Sansar, SecondLife, Decentraland.

Data collected by users:
- Registration and account: ✓ on all five platforms.
- Payment: ✓ on four platforms; * on one.
- Uploaded content: ✓ on two platforms; * on three.

Data collected by the platform:
- Usage: ✓ on three platforms; * on two.
- Biometric: × on two platforms; * on two; ✓ on one.
- Movement: × on two platforms; * on two; ✓ on one.
- Surroundings: * on VRChat, OnCyber, Sansar, and SecondLife; × on Decentraland.
- Communication: ✓ on four platforms; * on one.

Data collected automatically:
- Location data: ✓ on four platforms; × on one.
- Device data: ✓ on all five platforms.
- User preferences: ✓ on four platforms; * on one.

Access control: VRChat: one-factor, optional two-factor. OnCyber: one-factor, wallet, via third parties. Sansar: one-factor, mobile application. SecondLife: one-factor. Decentraland: wallet, third parties.
Compatibility: desktop and VR on VRChat, OnCyber, Sansar, and SecondLife; desktop only on Decentraland.
User consent: explicit on VRChat, OnCyber, Sansar, and SecondLife; implicit on Decentraland.
Bystander’s consent: × on all five platforms.
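Table 4 shows consent handled as a single explicit/implicit toggle per platform, with bystander consent absent everywhere. One way platforms could instead record purpose-scoped, revocable consent, closer to what GDPR Art. 7 expects, is sketched below in Python; the class and field names are illustrative, not a design taken from any reviewed platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One purpose-specific consent decision."""
    subject: str
    purpose: str        # e.g. "analytics", "biometric-tracking"
    granted: bool
    timestamp: datetime
    revoked: bool = False

class ConsentLedger:
    """Records explicit, purpose-scoped, revocable consent decisions."""
    def __init__(self):
        self.records = []

    def grant(self, subject: str, purpose: str) -> None:
        self.records.append(
            ConsentRecord(subject, purpose, True, datetime.now(timezone.utc)))

    def revoke(self, subject: str, purpose: str) -> None:
        for r in self.records:
            if r.subject == subject and r.purpose == purpose and r.granted:
                r.revoked = True   # keep the record itself as an audit trail

    def allowed(self, subject: str, purpose: str) -> bool:
        """Processing is allowed only under an explicit, unrevoked grant."""
        return any(r.subject == subject and r.purpose == purpose
                   and r.granted and not r.revoked for r in self.records)

ledger = ConsentLedger()
ledger.grant("student-42", "analytics")
assert ledger.allowed("student-42", "analytics")
assert not ledger.allowed("student-42", "biometric-tracking")  # nothing is implicit
ledger.revoke("student-42", "analytics")
assert not ledger.allowed("student-42", "analytics")
```

The key design choice is that absence of a grant means no processing: there is no default-on path, which is exactly what the implicit-consent model in the last column of Table 4 fails to provide. Bystander consent would require an additional subject class for people who never registered, which none of the surveyed platforms model.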
Table 5. GDPR mapping. Legend: Compliant = clear and compliant; Partial = mentioned but vague or incomplete; Lacking = the principle is missing, contradicted, or insufficiently supported; Not Applicable = the activity does not exist on the platform.
| GDPR Principle/Article | Description | VRChat | Decentraland | SecondLife | Sansar | OnCyber | Sustainability Relevance |
|---|---|---|---|---|---|---|---|
| Art. 5(1)(a) Lawfulness, Fairness, Transparency | Data collection must be lawful, fair, and transparent. | Partial | Partial | Partial | Partial | Partial | Builds long-term user trust, reduces misinformation and system misuse. |
| Art. 5(1)(e) Storage Limitation | Data must not be stored longer than necessary. | Lacking | Lacking | Lacking | Lacking | Partial | Avoids unnecessary energy use in data storage, supports responsible digital resource use. |
| Art. 6 Lawfulness of Processing | Data processing must rely on a lawful basis, such as consent or contract. | Partial | Partial | Partial | Partial | Partial | Encourages ethical and lawful platform design, reducing legal/operational waste. |
| Art. 7 Conditions for Consent | Consent must be freely given, informed, specific, and unambiguous. | Lacking | Partial | Partial | Lacking | Partial | Empowers users, prevents misuse of personal data, supports ethical data ecosystems. |
| Art. 9 Special Category Data (e.g., biometrics) | Explicit consent is needed for biometric data processing. | Lacking | Not Applicable | Lacking | Lacking | Lacking | Prevents exploitation of sensitive data, reduces AI overreach, safeguards human dignity. |
| Art. 13/14 Right to Be Informed | Users must be told how and why their data is processed, including AI usage. | Partial | Lacking | Partial | Lacking | Lacking | Ensures transparency, encourages responsible data collection and usage. |
| Art. 15–17 Access, Rectification, and Erasure Rights | Users must be able to access, correct, or delete their data. | Partial | Partial | Partial | Lacking | Lacking | Supports data minimization and reduces unnecessary storage, empowers digital self-agency. |
| Art. 21 Right to Object | Users must be able to opt out of certain data processing (e.g., marketing). | Compliant | Compliant | Compliant | Compliant | Compliant | Respects digital autonomy, limits manipulative personalization, supports sustainable user experience. |
| Art. 25 Data Protection by Design & Default | Platforms must embed privacy by design, minimize data usage. | Lacking | Partial | Lacking | Lacking | Partial | Promotes efficient, future-ready, and resilient infrastructure design. |
| Art. 35 Data Protection Impact Assessments (DPIA) | High-risk processing (e.g., biometrics, profiling) requires DPIA. | Lacking | Lacking | Lacking | Lacking | Lacking | Encourages careful risk planning, minimizing harm and costly technical debt. |
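One concrete privacy-by-design measure behind Art. 25, where Table 5 finds most platforms Lacking, is pseudonymization: storing a keyed-hash pseudonym instead of the raw identifier, so analytics and logs never carry the direct identity. A minimal Python sketch of the idea follows; the key handling shown is an illustrative assumption, since in practice the key must live apart from the pseudonymized data.

```python
import hmac
import hashlib
import os

# Illustrative secret; in a real system this key is managed and stored
# separately from the data it pseudonymizes, otherwise re-identification is trivial.
PSEUDONYM_KEY = os.urandom(32)

def pseudonymize(user_id: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym.

    Unlike a plain hash, an HMAC cannot be reversed by brute-forcing a list
    of known identifiers without the key, and rotating the key unlinks old
    records from new ones.
    """
    return hmac.new(key, user_id.encode(), hashlib.sha256).hexdigest()

p1 = pseudonymize("student-42")
p2 = pseudonymize("student-42")
p3 = pseudonymize("student-43")
assert p1 == p2                 # stable: the same user maps to the same pseudonym
assert p1 != p3                 # distinct users stay distinguishable for analytics
assert "student" not in p1      # the raw identifier never appears in stored data
```

Under GDPR, pseudonymized data remains personal data; the technique reduces exposure from breaches and supports data minimization, but it does not replace the consent and DPIA obligations mapped in Table 5.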