Article

Off-Cloud Anchor Sharing Framework for Multi-User and Multi-Platform Mixed Reality Applications

by
Aida Vidal-Balea
1,2,
Oscar Blanco-Novoa
1,2,
Paula Fraga-Lamas
1,2,* and
Tiago M. Fernández-Caramés
1,2
1
Department of Computer Engineering, Faculty of Computer Science, Universidade da Coruña, 15071 A Coruña, Spain
2
Centro de Investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 6959; https://doi.org/10.3390/app15136959
Submission received: 15 May 2025 / Revised: 18 June 2025 / Accepted: 18 June 2025 / Published: 20 June 2025
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)

Abstract

This article presents a novel off-cloud anchor sharing framework designed to enable seamless device interoperability for Mixed Reality (MR) multi-user and multi-platform applications. The proposed framework enables local storage and synchronization of spatial anchors, offering a robust and autonomous alternative for real-time collaborative experiences. Such anchors are digital reference points tied to specific positions in the physical world that allow virtual content in MR applications to remain accurately aligned with the real environment, thus being an essential tool for building collaborative MR experiences. This anchor synchronization system takes advantage of local anchor storage to optimize the sharing process and to exchange anchors only when necessary. The framework integrates Unity, Mirror and the Mixed Reality Toolkit (MRTK) to support seamless interoperability between Microsoft HoloLens 2 devices and desktop computers, with the addition of external IoT interaction. As a proof of concept, a collaborative multiplayer game was developed to illustrate the multi-platform and anchor sharing capabilities of the proposed system. The experiments were performed in Local Area Network (LAN) and Wide Area Network (WAN) environments; they highlight the importance of efficient anchor management in large-scale MR environments and demonstrate the effectiveness of the system in handling anchor transmission across varying levels of spatial complexity. Specifically, the obtained results show that the developed framework achieves anchor transmission times that start at around 12.7 s for the tested LAN/WAN networks and small anchor setups, rising to roughly 86.02–87.18 s for complex physical scenarios where room-sized anchors are required.

1. Introduction

Nowadays, a variety of commercial applications that use Extended Reality (XR) technologies are being developed across industries such as healthcare [1,2], marketing [3], entertainment [4], construction [5], and robotics [6]. Recently, the release of new XR devices, including Augmented Reality (AR), Mixed Reality (MR) and Virtual Reality (VR) technologies, has expanded the variety of platforms on which an application can operate. Moreover, the adoption of multi-user XR elevates the immersion of the developed applications to a higher level, greatly enhancing User Experience (UX) and enabling smooth communications among participants [7].
The development of XR applications has been the focus of numerous studies [8]. However, ensuring that a single XR application can be executed on devices with different specifications remains a significant challenge [9]. In addition, enabling multiple devices on different platforms to share a collaborative experience, especially when one of those is an XR platform, becomes even more challenging [10]. This is mainly due to the unique characteristics of each visualization technology and the development frameworks associated with each device. Fortunately, initiatives such as the OpenXR framework [11] are making significant progress towards unifying and simplifying the development process, promoting interoperability among XR platforms.
Regarding multi-user and collaborative experiences, the market is still emerging and offers limited solutions. One of the reasons is the need for tracking the environment, which is fundamental in MR applications that need to position digital content accurately within the physical environment [12,13,14]. A key technology in MR systems for tracking the environment is the use of spatial anchors. These anchors represent a specific point in the real world that the system tracks over time, providing a stable coordinate system to ensure that virtual content remains aligned with the physical space [15]. This allows users to move freely throughout the room while keeping virtual objects in place, avoiding dragging or flickering of the 3D models [16,17]. Moreover, spatial anchors are fundamental for creating persistent and consistent MR experiences across sessions and shared experiences among multiple devices.
For example, until recently, the only option provided by Microsoft for sharing, transferring and synchronizing anchors between XR devices was the use of its proprietary tool Azure Spatial Anchors. This service is no longer available: Microsoft announced that the product would be deprecated and retired from the market by the end of 2024 [18]. Furthermore, to the knowledge of the authors of this article, Microsoft does not offer any official substitute with similar capabilities. This leaves developers without alternatives for a highly useful functionality in their applications, as there is no simple and straightforward approach for sharing and synchronizing spatial anchors within Microsoft’s official documentation.
This article, which is an extension of the preliminary work developed in [19], introduces an innovative framework for anchor sharing and for building multi-user and multi-platform XR applications. The proposed solution uses the latest standards, open-source tools and libraries to simplify development. Moreover, this article provides guidelines for future application development. Furthermore, to illustrate the potential of the proposed solution, the development and evaluation of a proof-of-concept application is provided. Such an application allows multiple users to collaborate in real-time to solve the challenges presented in a collaborative game. The application utilizes Unity [20] and Mirror [21] for building the game and for handling all the network communications among all the participating XR devices (both clients and servers). In addition, the proposed system also uses the Mixed Reality Toolkit (MRTK) [22], an open-source set of tools provided by Microsoft, designed to simplify the development of MR applications by offering a collection of pre-built components. This is Microsoft’s recommended development tool for creating MR applications, particularly optimized for HoloLens 2 smart glasses. Thus, the multi-user cross-platform application can be executed both from desktop computers and MR devices.
The developed game consists of a collaborative experience in which users work together in a fast-food restaurant in a similar way to games like Overcooked [23]. Players in multiple remote locations need to collaborate in real-time to face challenges such as preparing salads or hamburgers, making sure these are delivered on time while not burning the kitchen in the process. Moreover, in order to achieve more dynamic experiences, an Internet of Things (IoT) module has also been designed to allow the application to interact with real-world objects [24]. In this particular use case scenario, an IoT button is used to activate a fire extinguisher in the event that a frying pan bursts into flames.
Specifically, this article includes the following main contributions:
  • It details the design and development of a novel spatial anchor sharing system: a system that locally stores and synchronizes spatial anchors between devices, removing reliance on third-party or cloud services.
  • It provides cross-platform compatibility. The proposed solution seamlessly integrates diverse devices, including Microsoft HoloLens 2, desktop computers and IoT devices, enabling real-time multi-user collaboration.
  • To illustrate the use of the proposed solution, a practical implementation is described (a multiplayer collaborative game showcasing the framework capabilities), which serves as a reference for future applications in XR development. In order to facilitate the reproducibility of the experiments and the extension of the proposed framework, the development is available as open-source code [25].
The rest of this paper is structured as follows. Section 2 overviews the state of the art on MR multi-user or multi-platform studies. Section 3 details the design and implementation of the proposed multi-user multi-platform collaborative MR framework. Section 4 illustrates with a practical use case how the proposed framework can be used for developing multi-platform MR applications for multi-user experiences, while Section 5 details the conducted experiments and analyzes the obtained results. Finally, Section 7 is dedicated to the conclusions.

2. State of the Art

The development of XR applications is a topic that has been explored for years [8]. However, enabling the same XR application to be executed seamlessly by multiple users and on devices with different characteristics remains a significant challenge [9,10].
On one hand, designing and implementing applications in which several users can interact collaboratively in real-time is a challenging task [26]. In relation to UX, especially in XR environments, one of the most important aspects in multi-user applications is the communications speed among the participating devices [27,28]. This is required to be fast and fluid enough so that the user does not perceive it and has a pleasant, seamless and uninterrupted experience. This is a topic covered by [29], where an edge-assisted framework was designed for multi-user mobile AR environments, enhancing real-time collaboration among users.
On the other hand, when it comes to multi-platform environments, multi-user co-operation and real-time interaction among users is considered to be very relevant. Several works emphasize the need for cross-platform compatibility to enable broader user participation in XR environments [30,31,32]. For example, in [30], the authors introduce a system for cross-platform immersive visualization, using sensors and image markers for real-time collaboration within smart buildings. Similarly, in [31], the authors explored privacy-preserving Application Programming Interfaces (APIs) for XR applications to improve interoperability and security between devices, demonstrating the importance of flexible, platform-independent solutions for multi-user interactions. In [32], a cross-platform VR system designed for real-time construction safety training is proposed. Such a system ensures accessibility across devices without requiring specialized VR hardware or software, by simplifying the development of VR training scenarios and improving user engagement.
Proprietary or cloud-dependent solutions can lead to discontinuity or loss of functionality due to service deprecations or occasional system failures. For example, as previously mentioned, Azure Spatial Anchors (Microsoft’s technology for creating shared experiences and exchanging MR anchors between users) was announced to be deprecated [18], effective 20 November 2024. Situations like these result in a lack of documentation or tools for developers and highlight the need for independent solutions that ensure continuity, adaptability and control over anchor exchange processes. This dependency has also been detected by other authors [33], who advocate for self-contained systems that do not depend on third parties. Specifically, in [33], the authors describe a blockchain-assisted framework for secure data sharing in MR applications within military contexts. The proposed decentralized approach leverages blockchain and edge computing to secure multi-user data exchanges without relying on third-party services.
In addition to the academic works mentioned above, Hubs Foundation [34], a commercial application based on WebXR technology, is also worth mentioning. This solution is aimed at hosting virtual rooms for meetings or social gatherings where other users can join, similar to the popular VRChat application [35]. This project was originally known as Mozilla Hubs, whose support ceased when its maintenance was passed to the Hubs Foundation in mid-2024 [36]. While Hubs supports shared WebXR environments and is accessible via VR headsets and browsers (e.g., Oculus Quest, Vive, Pico Neo), it is primarily focused on virtual-only spaces and does not provide native support for MR scenarios involving the integration of virtual content with the physical environment. Moreover, Hubs appears to be oriented towards social interaction in static virtual rooms rather than supporting complex application logic or spatially anchored digital content in real-world environments. In contrast, the framework proposed in this article is built with Unity and enables full interaction with both digital and physical elements, offering greater flexibility for industrial and custom MR applications. Furthermore, the proposed framework also provides the possibility to integrate IoT devices into the experience, thus allowing for a much richer range of interactions.
Thus, by developing an open-source, off-cloud anchor sharing mechanism, the solution presented in this article ensures continuity, security and autonomy, providing a robust alternative for developers of multi-user and multi-platform MR applications. In addition, thanks to providing the proposed system as open-source [25], other developers will be able to collaborate and adapt the project to their needs without having to start from scratch.

3. Design and Implementation of the System

3.1. Main Features of the Proposed System

The solution described in this article has been designed to provide the following main features:
  • It provides smooth user interaction within the virtual environment both for desktop and MR devices.
  • The system allows clients to either connect to an external server or to host the server themselves (i.e., the server can be executed on the device of a specific client).
  • It allows multiple users to connect to a server and to experience the same scenario in real time.
  • The system is able to keep the status of all the virtual elements synchronized across all the connected devices.
  • Specifically for MR devices:
    ◦ Spatial anchors are used to align virtual objects between the different devices and to keep them synchronized.
    ◦ The system has been designed to share spatial anchors as fast as possible among MR devices.
    ◦ An efficient local storage system is utilized for saving spatial anchors in local memory.
    ◦ To reduce communications overhead, the anchor exchange protocol has been designed to minimize the number of times a spatial anchor needs to be exchanged between devices via the LAN. Thus, loading times at scene launch can also be decreased.

3.2. Communication Architecture

The architecture of the proposed system is depicted in Figure 1. As it can be observed, the system consists of the following modules:
  • Network Manager, which uses the Mirror library, is in charge of exchanging messages among the clients and the server, while it is also responsible for keeping all the objects synchronized.
  • Network Discovery, which also relies on the Mirror networking library, is the subsystem in charge of managing the automatic connection of users to local servers.
  • Shared Anchor Manager: this module handles all aspects related to the sharing and synchronization of anchors between MR devices.
  • Network Transmitter: combined with the Shared Anchor Manager, it is used to divide the anchors into smaller segments and to send them through the network.
  • Multiplatform Manager, which is responsible for managing the platform-dependent components, enables and disables the components specific to each runtime environment. Specifically, this is implemented using conditional compilation directives (e.g., #if UNITY_WSA, #if UNITY_STANDALONE) and runtime logic, ensuring that only the necessary elements are loaded and active on each platform. This approach allows for a clean separation of platform-dependent resources while maintaining a unified codebase.
  • Lastly, the Interaction Manager, which uses MRTK on MR devices, is in charge of managing the different user inputs depending on the platform on which the application is running.
Furthermore, the system server is responsible for managing all the communications exchanged among the clients, as well as for transmitting the information necessary to keep the position of all the objects in the scene up to date. It should be noted that the different devices can act as servers or clients, so that, if necessary, two or more devices can be connected directly through a local network, without the need for an external Internet connection.

3.3. System Design

To illustrate the complexity and inner workings of the developed solution, Figure 2 shows its classes. It is worth mentioning that some of these classes, names and internal behaviors are strictly tied to the practical use case utilized to exemplify the proposed framework, which is later detailed in Section 4. Therefore, although the naming of some classes is specific to the use case, their internal behavior can be adapted and extrapolated to other use cases and scenarios.
  • OverCookedNetworkManager: When using Mirror, it is required to implement a NetworkManager. This is in charge of managing and delegating the methods and events that are triggered when a server is started, a client is connected, etc.
  • Player: This class is responsible for handling everything related to the connection, interactions with the environment and movement of the players. It is also used to exchange information with the server through Mirror commands.
  • ObjectRef: This class represents an element shared by the network to reference an object (in this case, an ingredient or dish). It stores a reference to the represented GameObject (Prefab).
  • ARInteractionManager: This class captures and manages all the MRTK events that are triggered when an MR user makes hand gestures or interacts with elements of the virtual environment. Some examples of these interactions can be clicks, pressing buttons or using the cutting board, among others.
  • Slot: It symbolizes the point at which users can drop objects (in this particular case, countertops). This class could also be extended to create special Slots such as the trash can or the counter where finished dishes are placed for delivery.
  • WorldAnchorSharedManager: This class manages all aspects related to anchor sharing and synchronization. It offers methods that are used, for example, from the Player, allowing both local synchronization in each device and global synchronization between server and clients. It uses the “ARAnchorManager” system from the Unity XR ARFoundation library and also OpenXR “XRAnchorStore”.
  • NetworkTransmitter: This class is used when sending an anchor between devices. It is an adaptation of the NetworkTransmitter provided by Unity in one of its example projects [37], which was taken as a reference and adapted to the needs of the system developed in this project. This class is employed when the size of the serialized anchor exceeds the transmission limit imposed by the Mirror networking library (298,449 bytes); the data must then be split into smaller segments. Mirror internally uses a KCP-based transport layer, whose default window size is 4096 bytes; however, the maximum payload size per message is determined by Mirror and calculated to be 298,449 bytes. Thus, the anchor is divided into appropriately sized chunks to fit within this limit, ensuring reliable and ordered delivery across the network.
  • NetworkDiscoveryUI: This class is responsible for the automatic connection of users to the network, thus facilitating the connection between devices and avoiding the need to enter the server IP manually. Its methods are executed when the application connection panel is used, that is, when a user opens the application for the first time and connects to the network as a host or client. This is where Mirror’s “NetworkDiscovery” component is encapsulated and used transparently to the user and developer, regardless of the platform on which the application is running.
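The chunked transmission performed by the NetworkTransmitter can be sketched as follows. This is an illustrative Python sketch, not the actual C#/Unity implementation; the function names are hypothetical, and only the payload limit (298,449 bytes) comes from the description above:

```python
# Illustrative sketch of splitting a serialized anchor into chunks that fit
# Mirror's maximum payload size, and reassembling them on the receiving side.
MAX_PAYLOAD = 298_449  # maximum payload per Mirror message, as stated above


def split_anchor(anchor_bytes: bytes, chunk_size: int = MAX_PAYLOAD):
    """Yield (index, chunk) pairs so the receiver can reassemble in order."""
    for offset in range(0, len(anchor_bytes), chunk_size):
        yield offset // chunk_size, anchor_bytes[offset:offset + chunk_size]


def reassemble(chunks):
    """Rebuild the anchor from (index, chunk) pairs, tolerating out-of-order arrival."""
    return b"".join(chunk for _, chunk in sorted(chunks))
```

In the real system, each chunk would be sent as a separate Mirror message; indexing the chunks makes the reassembly robust even if the transport reorders them.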
The following subsections describe in more detail some of the technical aspects of the design and implementation details of the main functionalities of the proposed framework: automatic user connection, anchor synchronization and the IoT interaction modules.

3.3.1. Automatic User Connection Module

In order to improve UX, a functionality was developed to enable the automatic connection of multiple devices, in such a way that each device searches for any other device in the local network to connect to. In this way, the participating users simply need to start the application by selecting the “host” mode (if there are no previously connected devices in that network) or “client” mode if there is already an active “host” available.
For this purpose, the NetworkDiscoveryUI class was created, which manages the interactions of the interface buttons for new server or client connections. This new script must be included in each scene inside the GameObject that acts as the NetworkManager. This component requires the GameObject to also have Mirror’s NetworkDiscovery attached. In addition, if desired, a TextMeshPro component can be added to show the user connecting as a client how many servers are available on the local network and their IP addresses.
In addition, it was necessary to modify the OverCookedNetworkManager class: first, to manage the interaction with the new buttons; and second, so that, whenever a user connects as a server, a message indicating that a new server is available on the network is sent by executing the “networkDiscovery.AdvertiseServer()” method. In contrast, if the user connects as a client, the device is able to find the server by using the “networkDiscovery.StartDiscovery()” method provided by Mirror.
As it was previously indicated, when the users open the application, they will be able to connect as a client or host (acting as server and client simultaneously). For this purpose, when the application is started, a panel is displayed in which the user can select the connection mode: host or client. Users can also perform a previous search to find out whether there are servers in the local network (find servers) or directly enter an IP in case they want to connect to a remote server (server IP).
Below is a detailed explanation of the process of creating a new connection in case the user starts as a client, which is illustrated in Figure 3:
1. A person (Player 2) starts the application and selects the menu button to connect as a client. This is the only action performed by Player 2; the rest of the process is executed automatically by the system.
2. The system executes the “networkDiscovery.StartDiscovery()” method to search for any server available in the network. The active server (in this case, Player 1 acting as host) responds by sending its IP.
3. The system configures the received IP as a connection point and connects to that server.
4. The system calls the “networkManager.StartClient()” method, and configures and synchronizes the scene with the server.
5. Then, the system invokes the “OnStartClient()” and “Start()” methods, where all the internal variables for the correct operation of Player 2 and the system are configured.
6. From this moment on, Player 2 can play freely, and the system will synchronize its movements and actions with the server, which is in charge of broadcasting these actions to the rest of the clients on the network, if there are any.
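The discovery handshake above can be modeled in a few lines. The following Python sketch is a conceptual, in-memory analogue of Mirror’s NetworkDiscovery (the class and method names are illustrative, mirroring “AdvertiseServer()” and “StartDiscovery()”; it does not perform real UDP broadcasts):

```python
# Conceptual model of the host/client discovery handshake.
class Host:
    def __init__(self, ip: str):
        self.ip = ip
        self.advertising = False
        self.clients = []

    def advertise_server(self):
        # Analogous to networkDiscovery.AdvertiseServer(): mark this host
        # as discoverable on the local network.
        self.advertising = True

    def handle_probe(self):
        # Reply to a discovery probe with this host's IP, if advertising.
        return self.ip if self.advertising else None


class Client:
    def __init__(self):
        self.server_ip = None

    def start_discovery(self, lan_hosts):
        # Analogous to networkDiscovery.StartDiscovery(): probe the LAN and
        # connect to the first advertising host found.
        for host in lan_hosts:
            ip = host.handle_probe()
            if ip is not None:
                self.server_ip = ip
                host.clients.append(self)
                return ip
        return None
```

In the real system, the probe is a UDP broadcast and the reply carries the server’s address, which the client then passes to “networkManager.StartClient()”.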

3.3.2. Anchor Synchronization Module

A brief description of the algorithm designed to perform the synchronization and sharing of anchors between HoloLens 2 devices is presented in the following paragraphs and is depicted in the flow diagram in Figure 4.
First, when a new device connects to the system, if it is merely acting as a server, no action is taken, since it has no User Interface (UI) and no anchor (virtual attachment to the real world) is needed. When an MR client or MR host (i.e., a device acting as client and server at the same time) is connected, the following procedure is carried out:
1. Load the local anchor (either from the HoloLens anchor store or from a file in local memory). If this anchor exists and is loaded successfully, the process ends.
2. If there is no local anchor, two situations may occur:
   (a) If the device is acting as a host (server + client), a new anchor is created and stored locally (either in the anchor store or in a file in memory), and the process ends.
   (b) If the device acts as a client, a request is made to the server asking it to send an anchor.
3. Once the request is received by the server, it prepares the anchor to be sent (if the anchor is too large, the NetworkTransmitter component is used to divide it into smaller pieces and transmit them one at a time).
4. Finally, when the client receives the whole anchor, it is saved in an in-memory file and loaded into the scene (it is imported into Unity’s internal anchor management system and stored in the local anchor store). The format in which the anchor is stored in local memory is a plain text file containing the serialized byte array generated after exporting the anchor using the MRTK libraries. After the anchor is loaded, the 3D model is moved to the anchor position in order to synchronize the scenes of the two devices (server and client), so they both see the 3D objects in the same physical position.
To optimize performance and avoid unnecessary synchronization overhead, the system was intentionally designed to prioritize the use of local anchors stored on the HoloLens device. If a valid local anchor is found on the device, it is automatically loaded on the client, avoiding the need to request another version from the server. In cases where spatial inconsistencies are observed, users can manually trigger the synchronization process through an in-app menu option. This action initiates the full synchronization protocol previously described, allowing for realignment across MR devices.
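The startup decision logic of the anchor synchronization module can be summarized in the following Python sketch. It is illustrative only: the dictionary stands in for the HoloLens anchor store, and the callback stands in for the network request to the server over Mirror:

```python
# Sketch of the local-first anchor resolution performed at startup.
def resolve_anchor(is_server_only, is_host, local_store, request_from_server):
    if is_server_only:
        return None                      # headless server: no UI, no anchor needed
    anchor = local_store.get("anchor")   # step 1: try the local anchor store first
    if anchor is not None:
        return anchor                    # valid local anchor: no network traffic
    if is_host:
        anchor = b"new-anchor"           # step 2a: host creates a new anchor
        local_store["anchor"] = anchor   # ...and persists it locally
        return anchor
    anchor = request_from_server()       # step 2b: client requests it from the server
    local_store["anchor"] = anchor       # step 4: cache it for future sessions
    return anchor
```

Because the received anchor is cached locally, subsequent launches of the same client skip the network request entirely, which is exactly the overhead reduction the design aims for.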

3.3.3. IoT Interaction Module

An additional layer of immersion was added by implementing a Bluetooth-based system that allows the connection of external devices to the HoloLens 2 application. Specifically, an ESP32 board was used [38]. This board acts as a controller that serves as a physical switch interacting with a virtual element in the scene. This approach combines the real and the virtual world, providing a more immersive UX. For example, in the use case proposed in this article, which is detailed later in Section 4, a button attached to the controller acts as a physical “fire extinguisher” that, when activated, extinguishes the fire in the virtual world when a frying pan burns in the game.
The ESP32 was configured to establish a Bluetooth connection in serial mode with the HoloLens 2. This controller is connected to a physical button that, when pressed, sends a signal to the HoloLens 2 device, indicating that the fire should be turned off. The application on the HoloLens 2 is prepared to receive the message sent by the ESP32 and trigger the corresponding action.
In addition, the HoloLens 2 is configured to automatically scan for and find Bluetooth devices in its environment. Using “auto-discovery” capabilities, whose protocol can be seen in Figure 5, the HoloLens 2 smart glasses detect the devices designed for this application and connect to them without any manual intervention. This simplifies the UX, as users do not need to perform any additional configuration to establish the connection with the IoT device.
To facilitate the integration with heterogeneous IoT devices while minimizing manual configuration, an allowlist mechanism was used during the Bluetooth auto-discovery process. This allows us to apply a flexible naming convention in which compatible devices are recognized by their broadcast device name, following the pattern “APP_NAME_<device-id>”. This approach allows the application to dynamically detect devices intended for this MR system without hardcoding any Universally Unique Identifier (UUID) or device identifiers, thereby improving scalability. In addition, a locally stored configurable list of trusted devices could be added. This list would contain accepted name prefixes, and only devices whose device name includes one of these prefixes in their Bluetooth advertisement beacons are considered eligible for connection. This system would provide an extra layer of filtering to ensure that only validated and expected devices could interact with the application.
Specifically, the method used to detect nearby Bluetooth devices is based on the following system call: “DeviceInformation.FindAllAsync(BluetoothDevice.GetDeviceSelectorFromPairingState())”. This function retrieves all discoverable devices that are not yet paired with the HoloLens 2. Once a compatible device is identified based on its name pattern, the application establishes the connection using “BluetoothDevice.FromIdAsync(device.Id)”, which initiates the pairing and communication process.
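The allowlist filtering applied to the scan results can be sketched as follows. This is an illustrative Python sketch of the name-prefix matching described above, not the C#/UWP implementation; the prefix value “APP_NAME_” follows the naming pattern stated in the text, and the helper names are hypothetical:

```python
# Sketch of the allowlist-based filtering of discovered Bluetooth devices.
APP_PREFIX = "APP_NAME_"          # naming pattern "APP_NAME_<device-id>"
TRUSTED_PREFIXES = [APP_PREFIX]   # locally stored, configurable list of accepted prefixes


def is_eligible(advertised_name: str, trusted=TRUSTED_PREFIXES) -> bool:
    """Return True if a discovered device's broadcast name matches the allowlist."""
    return any(advertised_name.startswith(prefix) for prefix in trusted)


def select_devices(discovered_names):
    """Filter a scan result down to the devices intended for this application."""
    return [name for name in discovered_names if is_eligible(name)]
```

Filtering on the broadcast name rather than on hardcoded UUIDs is what lets new compatible IoT devices be added without modifying the application, as noted above.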
The interaction process that occurs when running the “auto-discovery” of IoT devices is detailed below:
1. Initial Connection: When the application is first started on the HoloLens 2, it automatically scans the environment for compatible Bluetooth devices (i.e., devices included in the previously defined allowlist of trusted devices). Once detected, the connection is established without the need for manual intervention.
2. Physical Action: Pressing the physical button connected to the ESP32 sends a message to the HoloLens 2 device.
3. Virtual Action: The HoloLens 2 receives the message and executes the assigned function, ensuring a fast and consistent response to the user’s physical interaction.

4. Practical Use Case

A multiplayer and cross-platform collaborative application has been developed to show the capabilities of the proposed framework and to illustrate its potential. In this application, users are required to collaborate with each other to successfully cook hamburgers and salads, and to deliver the food on time in a virtual restaurant. The source code of this project is available in the following Git repository [25].
The developed application can run simultaneously and synchronously across different devices, so that all users see the same scene adapted to the platform of their device. Figure 6 depicts a scenario where several users are connected to the same server and play from different platforms. Specifically, Figure 6a shows a scene captured from a desktop computer in which four players are collaboratively interacting (the blue and green players are using a PC, while the red and yellow players are using Microsoft HoloLens 2 smart glasses). Both types of players can be recognized because each platform has its own representation, considering both the local user’s platform and the remote user’s platform. In this case, if the local user is running on a desktop platform, all the other players are presented as large 3D models, with the addition of a small minifigure above the heads of the HoloLens 2 players. The aim of this minifigure, which is the same 3D model used to represent normal players, is to identify the HoloLens users. These tiny figures are also used when executing the app on the MR device and, as shown in Figure 6b, they are placed above the HoloLens device and move along with the user.
Figure 6b shows a user’s perspective when using an MR device. As can be seen, when the application is played from the HoloLens 2, desktop players are seen as large figures, while other MR players are identified by a miniature figurine placed above the real user’s head.

5. Experiments

After completing all the developments described in the previous sections, a series of experiments were performed to evaluate the proposed system. The objective was to assess the impact of spatial anchor size on performance during transmission and loading processes in a multi-user MR application.
Specifically, the aim of the experiments was to measure the time required for creating, exporting, transmitting and loading spatial anchors of different sizes between devices. For this, two experimental cases were designed to comprehensively evaluate the system under different network configurations and hardware roles. In the first case, two HoloLens 2 devices were connected to a local network, with one device acting as the host and the other as a client. The second case involved a remote setup where a PC acted as the host and the HoloLens device was connected as a client. This setup introduced potentially less stable or higher-latency network conditions, thereby allowing for the assessment of the robustness and scalability of the proposed solution.
Thus, the proposed evaluation focused on how anchor complexity, which is strongly tied to the volume of the HoloLens’ spatial mapping data, affects synchronization performance during the application initialization process.
Four scenarios were defined to simulate different anchor sizes and spatial complexity levels: small, medium, big and room-sized anchors. The different spatial mappings are compared in Figure 7 and are detailed below:
1. Small anchor: Standing, the user slowly rotates the head and Head-Mounted Display (HMD) 180° on the vertical and horizontal axes multiple times to capture a small, focused area.
2. Medium anchor: Similar to the small-anchor setup, but standing in place the user performs full 360° head rotations, capturing a slightly larger spatial environment.
3. Big anchor: Standing at each corner of a 2 m × 2 m square, the user performs slow 360° rotations on the vertical and horizontal axes multiple times to scan a broader area.
4. Room-sized anchor: Walking around the room, the user scans walls and surrounding objects by rotating the head and HMD, generating a large and detailed spatial map.

5.1. First Experiment: Local HoloLens–HoloLens Communication

In this experiment, the performance of the proposed framework is evaluated using two HoloLens devices in a Local Area Network (LAN) environment. The goal was to measure the time required for creating, transmitting and loading spatial anchors of various sizes between the two devices in a total of four scenarios. For such a purpose, one device acted as the host, while the other one was connected as a client. Specifically, the experiments were conducted using two Microsoft HoloLens 2 HMDs and a TP-LINK router with the following configuration:
  • Microsoft HoloLens 2 (Qualcomm Snapdragon 850, Wi-Fi (IEEE 802.11 ac (2x2)), 4 GB of LPDDR DRAM).
  • Router: TP-LINK Archer C5400 (IEEE 802.11 ac), 2.4 GHz connection.
Each of the four evaluated scenarios used a different anchor size: small, medium, big and room-sized. For each scenario, the experiment was repeated 20 times to ensure data consistency, obtaining a total of 80 recordings. The following key metrics were collected: the exported spatial anchor size (in MB), the time in seconds required to create and export the anchor on the host device (i.e., acting as server), the transmission time from server to client, and the time taken by the client to load the received anchor into the scene.
The obtained results are depicted in Figure 8 and detailed in Table 1: as the spatial complexity of the anchor grows, time and memory usage increase consistently. In the small-anchor scenario, the anchor size remains minimal (oscillating between 3.0 and 3.6 MB), resulting in very low processing times across all stages. For medium-sized anchors, the size ranges from 6.0 to 6.7 MB, leading to significantly longer transmission times, with a minimum of 25.92 s and up to 26.1 s in some executions. Client loading times also rise, almost doubling those of small anchors, with values between 1.4 and 2.2 s. In the big-anchor scenario, anchor sizes keep growing, ranging from 10.1 to 10.8 MB. As expected, transmission times increased to values between 43.3 and 43.6 s, while client loading times range from 2.4 to 5.2 s, nearly tripling those of the small-anchor scenario, although the server export time remained relatively stable. Finally, the room-sized anchor scenario presents the largest data volumes, with anchor sizes of up to 20.8 MB. In this case, transmission times start at 85.01 s and peak at 86.02 s, while loading times extend from 4.83 up to 7.3 s.
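As a back-of-the-envelope check, dividing the average anchor sizes by the average transmission times reported in Table 1 yields a nearly constant effective throughput of roughly 0.24–0.27 MB/s, which indicates that transmission time grows almost linearly with anchor size:

```python
# Effective throughput (MB/s) per scenario, computed from the average
# anchor sizes and transmission times reported in Table 1.
table1 = {
    "small":  (3.40, 12.8068),   # (size in MB, transmission time in s)
    "medium": (6.40, 26.0065),
    "big":    (10.5, 43.4579),
    "room":   (20.4, 85.7326),
}

throughput = {name: round(size / t, 3) for name, (size, t) in table1.items()}
print(throughput)
# {'small': 0.265, 'medium': 0.246, 'big': 0.242, 'room': 0.238}
# The effective rate stays within ~0.24-0.27 MB/s across all scenarios,
# i.e., transmission time scales roughly linearly with anchor size.
```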

5.2. Second Experiment: Remote HoloLens–PC Communication

To further evaluate the robustness and scalability of the proposed framework, a second experiment using a Wide Area Network (WAN) was designed to test the device interactions under less ideal network conditions than those of a controlled LAN. The aim of this experiment was, similarly to the first experiment, to measure the time required for transmitting and loading spatial anchors of multiple sizes. In order to achieve this, the experiments were conducted using a Microsoft HoloLens 2 HMD, a Windows 10 desktop computer and TP-LINK and ZTE H3600P routers with the following configuration:
  • Client-side:
    Microsoft HoloLens 2 (Qualcomm Snapdragon 850, Wi-Fi (IEEE 802.11 ac (2x2)), 4 GB of LPDDR DRAM).
    Router: TP-LINK Archer C5400 (IEEE 802.11 ac), 2.4 GHz connection.
  • Host-side:
    Desktop computer: Windows 10 (Intel Core i7-960 3.20 GHz (4 cores) CPU, 12 GB of RAM and an NVIDIA GeForce GTX 660 graphics card).
    Router: ZTE H3600P (IEEE 802.11 a/x), Ethernet connection.
Following the same procedure as in the first experiment, four scenarios were created, each using a different anchor size: small, medium, big and room-sized. For each scenario, the experiment was repeated 20 times, recording the following key metrics: the transmission time from server to client and the time taken by the client to load the received anchor into the scene. In this set of experiments, there was no need to create and export the anchor, nor to track these times, on the server side, because the application was executed on a desktop computer without MRTK and spatial anchor capabilities. Thus, the task performed by the server was to store an anchor in local memory and to transmit it whenever the client requested a new anchor. For each of the four scenarios, a random anchor created in the first experiment was selected and stored in the server prior to the initiation of each set of tests.
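The server-side behavior described above, storing a previously exported anchor in local memory and streaming it on request, can be sketched as follows. Class and constant names are illustrative assumptions; the actual framework transmits anchors as chunked network messages from within Unity:

```python
# Sketch of the second experiment's server role: the desktop host keeps an
# exported anchor blob in memory and streams it in fixed-size chunks whenever
# a client requests it. CHUNK_SIZE is an assumed per-message payload size.
CHUNK_SIZE = 1380


class AnchorServer:
    def __init__(self, anchor_blob: bytes):
        self.anchor_blob = anchor_blob  # anchor kept in local memory

    def chunks(self):
        """Yield the cached anchor as ordered chunks for transmission."""
        for offset in range(0, len(self.anchor_blob), CHUNK_SIZE):
            yield self.anchor_blob[offset:offset + CHUNK_SIZE]


def reassemble(chunks) -> bytes:
    """Client side: concatenate received chunks back into the anchor blob."""
    return b"".join(chunks)


if __name__ == "__main__":
    blob = bytes(3_000_000)  # ~3 MB, comparable to the small-anchor scenario
    server = AnchorServer(blob)
    assert reassemble(server.chunks()) == blob  # round trip preserves the data
```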
The obtained results were analyzed and are summarized in Table 2, which details the average times and standard deviations for each anchor size scenario. For the smallest anchor size (3.0 MB), transmission times ranged between 12.69 and 12.84 s, while the anchor loading time remained below 1 s in most cases. In the subsequent scenarios, as anchor complexity increased, both the transmission and loading times also rose. Medium-sized anchors (6.0 MB) required transmission times from 25.62 to 25.74 s and loading times between 1.39 and 2.13 s. For an anchor of 10.1 MB, transmission times ranged between 43.36 and 43.63 s, with loading times from 2.36 to 3.38 s. Finally, room-sized anchors (20.1 MB) imposed the highest performance demands, with transmission times ranging between 85.57 and 87.18 s and loading times exceeding 4.74 s, even reaching 6.75 s.
These data can be compared with those obtained during the LAN experiments (Table 1): for the same anchor sizes, the response times are very similar, so little difference is perceived in the framework’s performance between LAN and WAN execution. Nonetheless, future researchers and developers should note that the obtained results may differ notably depending on the selected scenario and on the characteristics of the communications network used.

6. Key Findings

The results presented in Section 5 highlight the growing complexity of anchor sharing in large environments, where the significant increase in anchor size directly impacts transmission and loading times. In particular, the serialization, segmentation and transfer of large anchors require more processing and communication overhead, resulting in higher latency during initial synchronization. However, this does not present a long-term issue, as the system proposed in this article was specifically designed with these challenges in mind. Anchors are stored locally on each device after the initial synchronization, meaning that anchor sharing only needs to be performed once per location and user. This approach significantly reduces communications overhead and ensures that spatial alignment remains consistent across sessions, making the system a robust and scalable solution for multi-user, multi-platform MR applications.
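The caching policy described above can be summarized in a short sketch. The names below are illustrative, not the framework's actual API: an anchor is fetched over the network only on a device's first visit to a location and is served from local storage afterwards:

```python
# Sketch of the local-caching policy: anchor sharing is performed once per
# location and user; subsequent sessions load the anchor from local storage.
class AnchorCache:
    def __init__(self):
        self._store = {}          # location_id -> serialized anchor blob
        self.network_fetches = 0  # counts expensive network transfers

    def get_anchor(self, location_id: str, fetch_remote) -> bytes:
        """Return the anchor for a location, fetching remotely only once."""
        if location_id not in self._store:        # first visit only
            self._store[location_id] = fetch_remote(location_id)
            self.network_fetches += 1
        return self._store[location_id]           # later visits: local load


if __name__ == "__main__":
    cache = AnchorCache()
    fetch = lambda loc: b"serialized-anchor-for-" + loc.encode()
    cache.get_anchor("lab-101", fetch)
    cache.get_anchor("lab-101", fetch)  # served from local storage
    print(cache.network_fetches)        # 1
```

This is why the high initial transmission times of room-sized anchors are a one-off cost rather than a per-session overhead.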
The proposed system was designed with cross-platform compatibility in mind, with the current implementation focusing specifically on Microsoft HoloLens 2 and desktop environments. As such, mobile platforms (i.e., iOS, Android) were not included in the experimental evaluation, since no specific development was conducted for these devices in the present version of the framework. The implementation was centered on the MRTK for HoloLens 2, as this platform remains relatively less explored in comparison to mobile AR devices or mainstream VR frameworks. Supporting cross-platform MR requires non-trivial development effort due to platform-specific differences in APIs and, especially, in spatial mapping capabilities. For example, anchor sharing with ARKit (iOS) and ARCore (Android) mobile devices differs significantly from anchor sharing with the MRTK, requiring a new implementation or adaptation of key components such as anchor compatibility. Unity AR Foundation provides a common layer that supports various plugins (i.e., OpenXR for HoloLens, ARKit for iOS and ARCore for Android devices) and, according to Unity’s official documentation [39], core features such as anchors are supported across these platforms, which suggests that the proposed framework could be adapted to modern MR headset devices. Cross-platform transmission and synchronization of the exported serialized anchor data would still rely on the same modules of the proposed framework. However, the compatibility of the anchor data structure among these platforms (e.g., whether an anchor exported from HoloLens 2 can be imported directly on Apple Vision Pro) remains uncertain and would require further investigation, as no specific documentation regarding such suitability was found at the time of writing. In such cases, platform-specific anchor conversion mechanisms or alternative synchronization methods may be needed.
Thus, cross-platform compatibility could be feasibly integrated into the proposed framework through the use of fiducial markers (e.g., ArUco codes or QR-based visual markers). These markers could serve as a common reference system, enabling spatial alignment and synchronization across heterogeneous devices. This approach would allow mobile devices and other modern MR headsets such as Apple Vision Pro to participate in the shared experience in a more immersive way. Nevertheless, future research will have to explore the introduction of reusable abstract components as well as extending compatibility to mobile platforms (e.g., iOS and Android) and other modern MR HMDs. This would allow broader device interoperability and also enable testing under different hardware constraints. Future research will also consider the identification of comparable systems to allow for quantitative benchmarking and comparison of the proposed framework, such as Hubs. In addition, assessing UX remains a crucial aspect in the design of interactive MR applications and, for this reason, future work will also be focused on conducting structured usability studies such as the XR NASA-TLX (Task Load Index) methodology proposed in [40].
Finally, regarding the IoT interaction module, no reconnection strategy has been implemented. This is due to the fact that the application is intended to be used in controlled environments and short-duration sessions, where connection drops are rare. However, integrating a reconnection mechanism could improve the system robustness in more demanding conditions, so it is considered as future work.

7. Conclusions

This article introduced a novel off-cloud anchor sharing framework for multi-user and multi-platform MR applications. The proposed framework addresses key challenges in cross-platform XR development, such as ensuring real-time synchronization, facilitating multi-user collaboration and eliminating reliance on third-party cloud services. By employing a local storage system and efficient communication protocols, the system achieves fast synchronization of spatial anchors and enhances the overall UX in collaborative MR scenarios.
The development of a practical use case demonstrates the practicality and versatility of the system, highlighting its ability to operate seamlessly across heterogeneous devices like Microsoft HoloLens 2 smart glasses and desktop computers.
The results obtained from the performed experiments demonstrate a clear correlation between anchor size and system performance. As spatial anchors increase in complexity and memory size, all measured durations also increase, particularly client-side loading and data transmission, both for the tested LAN and WAN. While larger anchors offer better spatial mapping and alignment accuracy, they introduce significant performance compromises. The results indicate that anchor sizes should be carefully managed depending on the application’s real-time requirements, especially in collaborative MR settings where synchronization speed and responsiveness are critical. However, this does not represent a major drawback, since the system was designed with this in mind and stores the anchors in local memory. Thus, the anchor sharing for the initial synchronization of XR experiences only has to be performed once whenever an MR user arrives at a new location.
In conclusion, a practical and adaptable system for the creation of multiplayer and multi-platform applications has been proposed in this article, providing a set of valuable guidelines for the future development of collaborative XR experiences.

Author Contributions

Design, A.V.-B., O.B.-N. and T.M.F.-C.; software, A.V.-B. and O.B.-N.; experiments, A.V.-B.; writing—original draft preparation, A.V.-B.; writing—review and editing, A.V.-B. and P.F.-L.; supervision, T.M.F.-C.; funding acquisition, P.F.-L. and T.M.F.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grants PID2020-118857RA-100 (ORBALLO) and TED2021-129433A-C22 (HELENE), funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chengoden, R.; Victor, N.; Huynh-The, T.; Yenduri, G.; Jhaveri, R.H.; Alazab, M.; Bhattacharya, S.; Hegde, P.; Maddikunta, P.K.R.; Gadekallu, T.R. Metaverse for healthcare: A survey on potential applications, challenges and future directions. IEEE Access 2023, 11, 12765–12795. [Google Scholar] [CrossRef]
  2. Vidal-Balea, A.; Blanco-Novoa, Ó.; Fraga-Lamas, P.; Fernández-Caramés, T.M. Developing the Next Generation of Augmented Reality Games for Pediatric Healthcare: An Open-Source Collaborative Framework Based on ARCore for Implementing Teaching, Training and Monitoring Applications. Sensors 2021, 21, 1865. [Google Scholar] [CrossRef]
  3. Wedel, M.; Bigné, E.; Zhang, J. Virtual and augmented reality: Advancing research in consumer marketing. Int. J. Res. Mark. 2020, 37, 443–465. [Google Scholar] [CrossRef]
  4. Von Itzstein, G.S.; Billinghurst, M.; Smith, R.T.; Thomas, B.H. Augmented reality entertainment: Taking gaming out of the box. In Encyclopedia of Computer Graphics and Games; Springer: Berlin/Heidelberg, Germany, 2024; pp. 162–170. [Google Scholar]
  5. Li, X.; Yi, W.; Chi, H.L.; Wang, X.; Chan, A.P. A critical review of virtual and augmented reality (VR/AR) applications in construction safety. Autom. Constr. 2018, 86, 150–162. [Google Scholar] [CrossRef]
  6. Yu, J.; Wang, T.; Shi, Y.; Yang, L. MR meets robotics: A review of mixed reality technology in robotics. In Proceedings of the 2022 6th International Conference on Robotics, Control and Automation (ICRCA), Xiamen, China, 26–28 February 2022; pp. 11–17. [Google Scholar]
  7. Nguyen, H.; Bednarz, T. User experience in collaborative extended reality: Overview study. In Proceedings of the International Conference on Virtual Reality and Augmented Reality; Springer: Berlin/Heidelberg, Germany, 2020; pp. 41–70. [Google Scholar]
  8. Doolani, S.; Wessels, C.; Kanal, V.; Sevastopoulos, C.; Jaiswal, A.; Nambiappan, H.; Makedon, F. A review of extended reality (XR) technologies for manufacturing training. Technologies 2020, 8, 77. [Google Scholar] [CrossRef]
  9. Speicher, M.; Hall, B.D.; Yu, A.; Zhang, B.; Zhang, H.; Nebeling, J.; Nebeling, M. XD-AR: Challenges and opportunities in cross-device augmented reality application development. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–24. [Google Scholar] [CrossRef]
  10. Tümler, J.; Toprak, A.; Yan, B. Multi-user Multi-platform xR collaboration: System and evaluation. In Proceedings of the International Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2022; pp. 74–93. [Google Scholar]
  11. The Khronos Group Inc. OpenXR Overview. Available online: https://www.khronos.org/openxr/ (accessed on 17 June 2025).
  12. Soon, T.J. QR code. Synth. J. 2008, 2008, 59–78. [Google Scholar]
  13. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar] [CrossRef]
  14. Zhou, F.; Duh, H.B.L.; Billinghurst, M. Trends in Augmented Reality tracking, interaction and display: A review of ten years of ISMAR. In Proceedings of the 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, Cambridge, UK, 15–18 September 2008; pp. 193–202. [Google Scholar]
  15. Microsoft. Spatial Anchors—Mixed Reality|Microsoft Learn. Available online: https://learn.microsoft.com/en-us/windows/mixed-reality/design/spatial-anchors (accessed on 17 June 2025).
  16. Hübner, P.; Clintworth, K.; Liu, Q.; Weinmann, M.; Wursthorn, S. Evaluation of HoloLens tracking and depth sensing for indoor mapping applications. Sensors 2020, 20, 1021. [Google Scholar] [CrossRef] [PubMed]
  17. Vassallo, R.; Rankin, A.; Chen, E.C.; Peters, T.M. Hologram stability evaluation for Microsoft HoloLens. In Proceedings of the Medical Imaging 2017: Image Perception, Observer Performance, and Technology Assessment; SPIE: Orlando, FL, USA, 2017; Volume 10136, pp. 295–300. [Google Scholar]
  18. Microsoft Azure. Azure Spatial Anchors Retirement. Available online: https://azure.microsoft.com/es-es/updates/azure-spatial-anchors-retirement/ (accessed on 17 June 2025).
  19. Vidal-Balea, A.; Blanco-Novoa, O.; Fraga-Lamas, P.; Fernández-Caramés, T.M. A Multi-Platform Collaborative Architecture for Multi-User eXtended Reality Applications. In Proceedings of the 5th XoveTIC Conference, A Coruña, Spain, 5–6 October 2023; pp. 148–151. [Google Scholar]
  20. Unity. Unity Real-Time Development Platform|3D, 2D, VR & AR Engine. Available online: https://unity.com/ (accessed on 17 June 2025).
  21. Mirror. Mirror Networking Documentation. Available online: https://mirror-networking.gitbook.io/docs (accessed on 17 June 2025).
  22. Microsoft Learn. MRTK2-Unity Developer Documentation—MRTK 2. Available online: https://learn.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/mrtk2/ (accessed on 17 June 2025).
  23. Ghost Town Games. Overcooked|Cooking Video Game|Team17. Available online: https://www.team17.com/games/overcooked (accessed on 17 June 2025).
  24. Hernández-Rojas, D.L.; Fernández-Caramés, T.M.; Fraga-Lamas, P.; Escudero, C.J. A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things. Sensors 2018, 18, 2052. [Google Scholar] [CrossRef]
  25. ORBALLO Project. UNDERCOOKED—ORBALLO Extended Reality Framework. Available online: https://gitlab.com/Orballo-project/orballo-extended-reality (accessed on 17 June 2025).
  26. Dong, T.; Churchill, E.F.; Nichols, J. Understanding the challenges of designing and developing multi-device experiences. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems, Brisbane, Australia, 4–8 June 2016; pp. 62–72. [Google Scholar]
  27. Akyildiz, I.F.; Guo, H. Wireless communication research challenges for extended reality (XR). ITU J. Future Evol. Technol. 2022, 3, 1–15. [Google Scholar] [CrossRef]
  28. Van Damme, S.; Sameri, J.; Schwarzmann, S.; Wei, Q.; Trivisonno, R.; De Turck, F.; Torres Vega, M. Impact of latency on QoE, performance, and collaboration in interactive Multi-User virtual reality. Appl. Sci. 2024, 14, 2290. [Google Scholar] [CrossRef]
  29. Ren, P.; Qiao, X.; Huang, Y.; Liu, L.; Pu, C.; Dustdar, S.; Chen, J.L. Edge AR X5: An edge-assisted multi-user collaborative framework for mobile web augmented reality in 5G and beyond. IEEE Trans. Cloud Comput. 2020, 10, 2521–2537. [Google Scholar] [CrossRef]
  30. Ayyanchira, A.; Mahfoud, E.; Wang, W.; Lu, A. Toward cross-platform immersive visualization for indoor navigation and collaboration with augmented reality. J. Vis. 2022, 25, 1249–1266. [Google Scholar] [CrossRef]
  31. Warin, C.; Seeger, D.; Shams, S.; Reinhardt, D. PrivXR: A Cross-Platform Privacy-Preserving API and Privacy Panel for Extended Reality. In Proceedings of the 2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Biarritz, France, 11–15 March 2024; pp. 417–420. [Google Scholar]
  32. Bao, L.; Tran, S.V.T.; Nguyen, T.L.; Pham, H.C.; Lee, D.; Park, C. Cross-platform virtual reality for real-time construction safety training using immersive web and industry foundation classes. Autom. Constr. 2022, 143, 104565. [Google Scholar] [CrossRef]
  33. Islam, A.; Masuduzzaman, M.; Akter, A.; Shin, S.Y. Mr-block: A blockchain-assisted secure content sharing scheme for multi-user mixed-reality applications in internet of military things. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 21–23 October 2020; pp. 407–411. [Google Scholar]
  34. Hubs Foundation. Hubs Foundation—We’ll Take It from Here. Available online: https://hubsfoundation.org/ (accessed on 17 June 2025).
  35. VRChat Inc. VRChat. Available online: https://hello.vrchat.com/ (accessed on 17 June 2025).
  36. Mozilla. End of Support for Mozilla Hubs|Hubs Help. Available online: https://support.mozilla.org/en-US/kb/end-support-mozilla-hubs (accessed on 17 June 2025).
  37. Unity-Technologies. NetworkTransmitter.cs. Available online: https://github.com/Unity-Technologies/SharedSpheres/blob/f054cdd832b1f0d575cab5469a7dd6454fd4dcc2/Assets/CaptainsMess/Example/NetworkTransmitter.cs (accessed on 17 June 2025).
  38. Espressif Systems. ESP32 Wi-Fi & Bluetooth SoC. Available online: https://www.espressif.com/en/products/socs/esp32 (accessed on 17 June 2025).
  39. Unity Technologies. AR Foundation|AR Foundation. Available online: https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@5.0/manual/index.html (accessed on 17 June 2025).
  40. Vidal-Balea, A.; Fraga-Lamas, P.; Fernández-Caramés, T.M. Advancing NASA-TLX: Automatic User Interaction Analysis for Workload Evaluation in XR Scenarios. In Proceedings of the 2024 IEEE Gaming, Entertainment, and Media Conference (GEM), Turin, Italy, 5–7 June 2024; pp. 1–6. [Google Scholar]
Figure 1. Proposed communication architecture.
Figure 2. System design: simplified class diagram showing Mirror-related classes.
Figure 3. Sequence diagram representing system behavior when a new user starts the application.
Figure 4. Flow diagram showing the designed algorithm applied when a new device connects to the system.
Figure 5. Sequence diagram describing the “auto-discovery” protocol.
Figure 6. Screenshots of the developed game: (a) View from a desktop computer. (b) View from a HoloLens device.
Figure 7. Three-dimensional meshes of the spatial mappings captured by the HoloLens for each experiment: (a) Small anchor. (b) Medium anchor. (c) Big anchor. (d) Room-sized anchor.
Figure 8. Transmission and client loading times with respect to anchor size for each experiment execution.
Table 1. First experiment (local): average values and standard deviation for anchor size and operation times for each experiment scenario.

| # Exp | Anchor Size (MB) | Server Exporting Time (s) | Anchor Transmission Time (s) | Client Loading Time (s) |
|---|---|---|---|---|
| 1—Small Anchors | 3.40 (±0.21) | 0.4479 (±0.27) | 12.8068 (±0.04) | 0.8019 (±0.10) |
| 2—Medium Anchors | 6.40 (±0.22) | 0.3765 (±0.20) | 26.0065 (±0.05) | 1.5944 (±0.18) |
| 3—Big Anchors | 10.5 (±0.20) | 0.9910 (±0.53) | 43.4579 (±0.07) | 2.7862 (±0.70) |
| 4—Room-Sized Anchors | 20.4 (±0.25) | 1.2923 (±0.31) | 85.7326 (±0.19) | 5.3937 (±0.64) |
Table 2. Second experiment (remote): average values and standard deviation for anchor size and operation times for each experiment scenario.

| # Exp | Anchor Size (MB) | Transmission Time (s) | Client Loading Time (s) |
|---|---|---|---|
| 1—Small Anchors | 3.00 | 12.969 (±0.55) | 0.890 (±0.22) |
| 2—Medium Anchors | 6.00 | 25.682 (±0.03) | 1.472 (±0.18) |
| 3—Big Anchors | 10.10 | 43.477 (±0.09) | 2.639 (±0.38) |
| 4—Room-Size Anchors | 20.10 | 85.845 (±0.38) | 5.265 (±0.58) |
