Article

A Real-Time Immersive Augmented Reality Interface for Large-Scale USD-Based Digital Twins

by Khang Quang Tran 1, Ernst L. Leiss 2, Nikolaos V. Tsekos 1 and Jose Daniel Velazco-Garcia 3,*
1 Medical Robotics and Imaging Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA
2 Department of Computer Science, University of Houston, Houston, TX 77204, USA
3 Tietronix Software, Inc., Houston, TX 77058, USA
* Author to whom correspondence should be addressed.
Virtual Worlds 2025, 4(4), 50; https://doi.org/10.3390/virtualworlds4040050
Submission received: 16 September 2025 / Revised: 15 October 2025 / Accepted: 20 October 2025 / Published: 1 November 2025

Abstract

Digital twins are increasingly utilized across all lifecycle stages of physical entities. Augmented reality (AR) offers real-time immersion into three-dimensional (3D) data, enabling an immersive experience with dynamic, high-quality, multi-dimensional digital twins. A robust and customizable data platform is essential for creating scalable 3D digital twins; Universal Scene Description (USD) provides these necessary qualities. Given the potential for integrating immersive AR and 3D digital twins, we developed a software application to bridge the gap between multi-modal AR immersion and USD-based digital twins. Our application provides real-time, multi-user AR immersion into USD-based digital twins, making it suitable for time-critical tasks and workflows. The AR digital twin software is currently being tested and evaluated in an astronaut-training application we are developing. Our work demonstrates the feasibility of integrating immersive AR with dynamic 3D digital twins. AR-enabled digital twins have the potential to be adopted in various real-time, time-critical, multi-user, and multi-modal workflows.

1. Introduction

Augmented reality (AR) is a transformative technology that offers an enhanced and immersive user experience. Its applications span diverse domains, such as education, medicine, and various industries [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. By superimposing digital information onto the real world, AR holds considerable potential to improve the user experience in both learning and professional settings, especially within training and instructional environments [2,3,4,5,11,12,16,17].
AR is used to deliver information and provide an immersive method for knowledge assessment in education and cultural training [11,18], and it can significantly boost learners’ engagement by gamifying lessons and exercises [18,19,20,21]. AR also finds extensive applications and research in medicine, where it assists in preoperative and intraoperative processes during medical treatments and surgeries [1,2,3,5,16,22,23,24,25]. AR helps physicians navigate procedures more effectively and safely by superimposing medical information and guidance onto patients’ bodies [26,27,28,29]. Furthermore, AR is proving useful in rehabilitation by providing patients with the necessary guidance for at-home exercises and enabling physicians to monitor patients remotely [30,31,32,33,34,35,36]. AR is also gaining traction in the manufacturing, logistics, and automotive industries [15,37,38]; AR-supported manufacturing, logistics activities, and vehicle design have been explored intensively [8,15,37,38,39,40,41,42].
Virtual worlds are becoming a prevalent technology for human immersion into high-quality and interactive 3D spaces [2,17,23,25,43,44,45,46]. Within this context, digital twins emerge as the virtual counterparts of physical entities in real environments [47]. The digital twin concept, initially proposed by Michael Grieves in 2002, comprises a three-part architecture: a physical twin, a digital twin, and a means of communication between them [47,48]. The digital twin serves as a digital equivalent of a physical entity, mirroring the physical twin’s behaviors and states throughout its lifetime in the virtual world [47,49,50]. Verdouw et al. note that digital twins can be applied at any phase of the product lifecycle: design, realization, use, and retirement [49]. Figure 1 shows a basic digital twin system in which there is two-way data integration and interaction between the twins.
The applications of digital twins are extensive, ranging from the design and development to the management of physical entities [6,49,51,52,53,54,55,56]. For these applications to function effectively, digital twins require real-time access to object data and synchronization, capabilities supported by platforms like NVIDIA Omniverse [49,57,58].
A digital twin requires a continuous and intensive data inflow from the physical counterpart’s sensors and/or internal state trackers [49]. NVIDIA (Santa Clara, CA, USA) bases its digital twins on Universal Scene Description (USD) [57]. The BMW Group (Munich, Germany) exemplifies this approach, having built its digital twin using USD to perform simulations within NVIDIA Omniverse applications [6,54]. Similarly, Bentley Systems (Exton, PA, USA) has developed applications on the NVIDIA Omniverse platform for photorealistic, real-time visualization and simulation of digital twins of massive-scale industrial and civil infrastructure projects [59].
High-quality and dynamic AR scenarios demand advanced visualization capabilities, which involve creating, loading, processing and projecting three-dimensional (3D) models in specialized software applications. To fully leverage the strengths of multiple digital content creation tools, 3D models must be easily transferable between applications. The ability to process and alter these models in real time is a crucial factor for an effective pipeline.
However, different 3D creation applications often support only a limited set of 3D file formats [60,61,62]. Consequently, real-time data processing, sharing, and collaboration on the same data across multiple applications require a file format that is flexible, robust, extensible, and universally accepted within the pipeline. While open-source file formats like STL and OBJ are widely used, they lack key features necessary for real-time, cross-application collaboration and non-destructive authoring [63,64].
Fortunately, Universal Scene Description (USD or OpenUSD) has the necessary qualities to address these challenges. Originally developed by Pixar Animation Studios (Pixar) (Emeryville, CA, USA) for the animation industry, it was open-sourced in 2016. USD facilitates real-time, high-fidelity 3D content creation in multi-user workflows [64,65]. USD’s non-destructive editing uses a layer-based system [64] that is crucial for real-time scalability and allows for an asset’s data to be altered by adding a new layer on top of existing ones without modifying the original data. USD enables seamless, back-and-forth data transfer and authoring among USD-compatible software applications without the risk of losing or altering data from previous stages of the workflow.
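To make the layer-based editing model concrete, the following minimal C++ sketch (using the open-source pxr USD API; the file names and prim path are placeholders) shows how edits can be redirected to a separate layer so the original asset data remain untouched:

```cpp
#include <pxr/base/gf/vec3d.h>
#include <pxr/usd/sdf/path.h>
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usdGeom/xformCommonAPI.h>

int main() {
    // Open an existing asset (file name and prim path are placeholders).
    pxr::UsdStageRefPtr stage = pxr::UsdStage::Open("asset.usd");
    if (!stage)
        return 1;

    // Redirect subsequent edits to the in-memory session layer; the layers
    // that define asset.usd are composed underneath and are never modified.
    stage->SetEditTarget(stage->GetSessionLayer());

    // This translation is recorded as an override in the session layer.
    pxr::UsdPrim prim = stage->GetPrimAtPath(pxr::SdfPath("/World/HAB"));
    pxr::UsdGeomXformCommonAPI(prim).SetTranslate(pxr::GfVec3d(1.0, 0.0, 0.0));

    // Export only the overrides, leaving the original asset untouched.
    stage->GetSessionLayer()->Export("overrides.usda");
    return 0;
}
```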
Moreover, USD is highly extensible and customizable thanks to its core scene-graph and composition engine being domain-agnostic [66,67]. This flexibility allows for the encoding and composition of data across various domains. For example, NVIDIA Omniverse extends USD’s built-in asset resolver with a custom asset path resolver [68]. This allows data to be translated from and to any file format supported by USD-compatible applications [69]. Thanks to these features, USD provides rich and varied ways to combine assets into larger assemblies and enables collaborative workflows. A comparison of some of the features of USD, STL, and OBJ is provided in Table 1.
USD’s robust and extensible capabilities have positioned it as a potential industry standard. Early adoption of USD was primarily in animation films [70,71,72]. Since then, its use has expanded to a range of industries, including architecture, design, robotics, and manufacturing [54,58,73,74]. The Alliance for OpenUSD (AOUSD), founded by key industry players like Pixar (Emeryville, CA, USA), Adobe (San Jose, CA, USA), Apple (Cupertino, CA, USA), Autodesk (San Francisco, CA, USA), and NVIDIA (Santa Clara, CA, USA), has further promoted and standardized USD [73].
Recognizing USD’s potential in virtual worlds, NVIDIA has adopted it as the core of its Omniverse platform [57]. As an early adopter, NVIDIA introduced Omniverse in 2019 as a modular development platform for building a USD-based ecosystem of 3D workflows, tools, applications, and services [57]. Leveraging Pixar’s USD along with NVIDIA RTX and other proprietary technologies, developers use Omniverse to create real-time 3D simulation solutions for industrial digitization and perception AI applications. For example, USD is used to store drone configurations for simulation in the Omniverse Isaac Sim application [74], while the BMW Group uses Omniverse for factory design [6,54].
Omniverse, in conjunction with NVIDIA CloudXR, supports a wide range of common 3D content creation applications, facilitating a broader ecosystem around it [69]. Omniverse and CloudXR enable the projection of USD-generated scenes into virtual reality (VR) and augmented reality (AR) interfaces to enhance the immersion experience [75]. While the platform supports VR and AR, it lacks official support for various AR head-mounted displays (HMDs), such as the Microsoft HoloLens (Microsoft Corporation, Redmond, WA, USA) [76].
AR on HMDs provides a natural and realistic immersive experience by superimposing computer-generated images directly onto the users’ field of view in the real world. Unlike AR on Android and iOS devices, HMDs leave the users’ hands free, which is crucial for tasks like surgeries [15]. While the ability to experience AR in multiple modalities can be impactful [77], this potential is limited by the lack of official support from platforms like NVIDIA Omniverse.
The Framework for Interactive Immersion into Imaging Data (FI3D) (University of Houston, Houston, TX, USA) provides 3D data visualization through an interactive and immersive interface using AR-capable head-mounted and hand-held displays [78]. The framework significantly enhances the effective processing power of these devices by offloading the majority of the computational load, including data generation and application logic, from the display devices to a host computer. FI3D also facilitates multi-user shared scenes across multiple AR devices at multiple sites [11,46]. The framework’s modular and adaptable design allows for the integration of images, 3D models, and data processing with existing or custom-made C++/C-based libraries [1,2,3,11,45,46]. FI3D uses an established messaging protocol to enable communication between an FI3D server and multiple AR clients on devices such as the Microsoft HoloLens and AR-supported Android and iOS devices. This framework has been used as the foundation for the development of various AR/XR applications in fields like medicine and architecture [1,2,3,11,45,46].
The primary contribution of our work is the design and implementation of Omniverse Operator (OmniOp), a novel application developed on the FI3D framework. OmniOp’s core function is to facilitate real-time, multi-user, and multi-modal augmented reality (AR) immersion into USD-based digital twins. It achieves this by loading and projecting large-scale USD data onto the Microsoft HoloLens AR headset, serving as a testbed to connect high-fidelity digital twins in Omniverse with AR-enabled HMDs.
A key innovation of OmniOp directly addresses a significant challenge of using AR for large-scale digital twins: the extensive number of digital visuals often overwhelms the user’s real-world field of view, creating a cluttered experience. Our application resolves this issue by introducing the subscene extraction feature, which reduces visual clutter by limiting the displayed information to only the data relevant to a specific use case and user. The core distinction of OmniOp over existing AR visualization solutions for digital twins lies in its integrated, three-part architecture: it combines the multi-user communication backbone of FI3D, the non-destructive data framework of USD/Omniverse, and a resource-management feature (subscene extraction) that dynamically manages visual fidelity on the client-side HMD. Prior solutions typically provide only a subset of these features and often rely on fixed, pre-processed models or lack native, real-time collaboration with an ecosystem like Omniverse [12,79,80,81].
The application provides two key functionalities: (1) seamless AR visualization for high-fidelity USD models, and (2) real-time, multi-user, multi-modal AR immersion into USD-based digital twins via the HoloLens. By integrating these capabilities, OmniOp directly addresses the significant computational and resource limitations commonly associated with AR devices when visualizing complex scenes.
To validate this proof-of-concept, we operate under the assumption that the primary performance barriers are data volume and sustained computational load. Success is defined by three goals: achieving rendering performance with a stable visual update rate on the HoloLens when loading a large-scale USD scene; demonstrating interaction latency with a minimal data request latency (server round-trip time); and confirming scalability through successful bidirectional, real-time data exchange between the physical twin, the Omniverse Nucleus server, and multiple AR client sessions.
The remainder of this manuscript is structured as follows. Section 2 details the methodology, design, and components of the proposed application. Section 3 presents the preliminary evaluation tasks and corresponding results. Lastly, Section 4 discusses the application’s limitations, possible use cases, and future research directions.

2. Materials and Methods

The application was developed in C++17 using Microsoft Visual Studio 2019 on a Dell Alienware Aurora R6 workstation with an Intel Core i7-7700K CPU and an NVIDIA RTX 4070 Ti SUPER GPU. The workstation runs the Windows 11 Professional operating system; its network bandwidth is 250 Mbps for download and 90 Mbps for upload. The application renders content both for the host computer’s interface and for the AR client interface on a Microsoft HoloLens 1 device. As FI3D, Omniverse, and MQTT are all installed on the same computer, network latencies are not considered. These specifications are the minimum requirements for the target system where the application will be deployed.
The Omniverse Operator (OmniOp) application is designed as a module integrated within the FI3D framework, leveraging FI3D’s internal data processing and XR client communication capabilities. OmniOp comprises three main components: the FI3D-USD adaptor, subscene extraction, and the FI3D-NVIDIA Omniverse Connector. The FI3D-USD adaptor serves as the core component, handling the data processing and format conversion required for visualization within FI3D. Working in conjunction with the FI3D-NVIDIA Omniverse Connector, OmniOp can exchange data and real-time updates with an NVIDIA Omniverse Nucleus server, version 2023.2.9, facilitating multi-user, multi-application collaboration. Finally, the subscene extraction feature, an extension of the FI3D-USD adaptor, generates a structural representation of the input USD file to provide users with an easier means of scene navigation. The overall pipeline connecting physical-digital twins and our software application is illustrated in Figure 2.

2.1. FI3D-USD Adaptor

Figure 3 [45] illustrates the pipeline from a 3D model authored in any combination of digital content creation applications to its AR visualization on a device such as a Microsoft HoloLens 1 HMD. To complete the FI3D side of the pipeline shown in Figure 2 and Figure 3, two key tasks must be accomplished: (1) adapting USD, version 0.23.11, for visualization using VTK, version 9.4.1, in FI3D and (2) establishing a connection between FI3D and Omniverse to leverage Omniverse services.
Figure 4 illustrates the FI3D-USD adaptor. The FI3D framework uses VTK (Kitware Inc., Clifton Park, NY, USA) for visualization [78], necessitating the implementation of a data-loading capability for the USD file format. This capability enables objects defined in a USD file to be rendered, including their geometry, transformations, and textures. Given USD’s extensibility, other elements can also be loaded, depending on the specific projects.
Both VTK and USD share many computer graphics terminologies and features, as detailed in Table 2 and Table 3. In USD, a primitive, or prim, is the primary container object [82,83]. USD geometry data are stored in UsdGeomMesh prims, among many other UsdGeom-family prims. These UsdGeomMesh prims are usually the leaf descendants of the USD prim tree. Data stored in a UsdGeomMesh prim include points, face vertex indices, face vertex counts, display color, texture coordinates, transformations, and transformation order. Both VTK and USD support polygonal cells (referred to as “faces” in USD). In our implementation, points and vertices are converted from USD to VTK, and USD faces are converted to corresponding VTK cells with the same number of sides, which are then triangulated for efficient transmission to an AR device.
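A minimal sketch of such a conversion is shown below (not the adaptor’s actual implementation; the helper name is ours, and error handling is omitted). It transfers a UsdGeomMesh’s points and faces into a triangulated vtkPolyData:

```cpp
#include <pxr/base/gf/vec3f.h>
#include <pxr/usd/usdGeom/mesh.h>
#include <vtkCellArray.h>
#include <vtkNew.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <vtkTriangleFilter.h>

// Convert the geometry of one UsdGeomMesh prim to a triangulated vtkPolyData.
vtkSmartPointer<vtkPolyData> MeshToPolyData(const pxr::UsdGeomMesh& mesh) {
    pxr::VtVec3fArray points;
    pxr::VtIntArray counts, indices;
    mesh.GetPointsAttr().Get(&points);
    mesh.GetFaceVertexCountsAttr().Get(&counts);
    mesh.GetFaceVertexIndicesAttr().Get(&indices);

    // USD points become VTK points.
    vtkNew<vtkPoints> vtkPts;
    for (const pxr::GfVec3f& p : points)
        vtkPts->InsertNextPoint(p[0], p[1], p[2]);

    // Each USD face (a run of n vertex indices) becomes a VTK polygonal cell.
    vtkNew<vtkCellArray> cells;
    size_t offset = 0;
    for (int n : counts) {
        cells->InsertNextCell(n);
        for (int i = 0; i < n; ++i)
            cells->InsertCellPoint(indices[offset + i]);
        offset += n;
    }

    vtkNew<vtkPolyData> polyData;
    polyData->SetPoints(vtkPts);
    polyData->SetPolys(cells);

    // Triangulate the polygonal cells before transmission to the AR device.
    vtkNew<vtkTriangleFilter> triangulate;
    triangulate->SetInputData(polyData);
    triangulate->Update();
    return triangulate->GetOutput();
}
```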
USD transformations, which consist of translation, scaling, and rotation, support only pre-multiplication [87,88,89]. In contrast, VTK supports both pre- and post-multiplication [90]. USD offers nine types of rotation and supports rotation about a translated pivot [87], whereas VTK does not have as many types of rotations (refer to Table 4). Despite these differences in transformation handling, both USD and VTK accept the 4 × 4 matrix representation for transformation [87,88,90]. Visualization of a prim’s transformation is accomplished by exporting its transformation matrix and importing it directly into VTK. In addition to being stored in the UsdGeomMesh prims, USD transformations can also be inherited from parent prims, applying to all child prims. Therefore, parent transformations must be carefully tracked to ensure the correct rendering of child visuals.
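A minimal C++ sketch of this matrix hand-off is given below; the helper name is ours, and the transpose reflects our assumption that USD matrices act on row vectors while vtkTransform applies matrices to column vectors:

```cpp
#include <pxr/usd/usdGeom/xformCache.h>
#include <vtkMatrix4x4.h>
#include <vtkNew.h>
#include <vtkTransform.h>

// Export a prim's composed local-to-world transform into a vtkTransform.
// UsdGeomXformCache accumulates the inherited parent transformations.
void ApplyUsdTransform(const pxr::UsdPrim& prim, vtkTransform* transform) {
    pxr::UsdGeomXformCache cache;
    pxr::GfMatrix4d world = cache.GetLocalToWorldTransform(prim);

    vtkNew<vtkMatrix4x4> m;
    // USD matrices act on row vectors; vtkTransform expects column-vector
    // matrices, so the matrix is transposed during the copy (our assumption).
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            m->SetElement(r, c, world[c][r]);

    transform->SetMatrix(m);
}
```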
USD has its own set of textures and allows users to author custom ones [91]. Textures are stored as shader prims, which are children of material prims. In this work, we primarily use five texture types: diffuse color, metallic, normal, opacity, and roughness. For proper application of textures to 3D object surfaces, USD provides texture coordinates. These textures can be either constant values or 2D texture images, with the latter requiring texture coordinates for correct mapping. Additionally, texture data can be stored as vertex colors in the primvars:displayColor attribute. Since VTK also requires texture coordinates for 2D texture images, the USD data are compatible.
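The sketch below illustrates how texture coordinates and vertex colors could be read and attached to the VTK mesh; it assumes the common convention of authoring texture coordinates as the “st” primvar, and the helper name is ours:

```cpp
#include <pxr/usd/usdGeom/mesh.h>
#include <pxr/usd/usdGeom/primvarsAPI.h>
#include <vtkFloatArray.h>
#include <vtkNew.h>
#include <vtkPointData.h>
#include <vtkPolyData.h>
#include <vtkUnsignedCharArray.h>

// Attach texture coordinates and vertex colors from a UsdGeomMesh to a vtkPolyData.
void CopyTexturePrimvars(const pxr::UsdGeomMesh& mesh, vtkPolyData* polyData) {
    pxr::UsdGeomPrimvarsAPI primvars(mesh.GetPrim());

    // Texture coordinates, commonly authored as the "st" primvar.
    pxr::VtVec2fArray uvs;
    pxr::UsdGeomPrimvar st = primvars.GetPrimvar(pxr::TfToken("st"));
    if (st && st.Get(&uvs)) {
        vtkNew<vtkFloatArray> tcoords;
        tcoords->SetNumberOfComponents(2);
        for (const pxr::GfVec2f& uv : uvs)
            tcoords->InsertNextTuple2(uv[0], uv[1]);
        polyData->GetPointData()->SetTCoords(tcoords);
    }

    // Per-vertex colors stored in the primvars:displayColor attribute.
    pxr::VtVec3fArray colors;
    if (mesh.GetDisplayColorAttr().Get(&colors)) {
        vtkNew<vtkUnsignedCharArray> rgb;
        rgb->SetNumberOfComponents(3);
        for (const pxr::GfVec3f& c : colors)
            rgb->InsertNextTuple3(255 * c[0], 255 * c[1], 255 * c[2]);
        polyData->GetPointData()->SetScalars(rgb);
    }
}
```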
Once loaded, the geometry, transformations, and textures are transmitted to the connected AR-capable display devices via the FI3D framework. Rendering performance on the devices is improved by offloading the computationally intensive task of processing the USD data from the resource-limited display devices to a more powerful computer.

2.2. Subscene Extraction

A new feature to extract subscenes from a larger scene and generate a list of available subscenes is implemented as an extension to the FI3D-USD adaptor. This capability is made possible by USD’s customizability and extensibility. Users can define and mark specific sets of prims to be part of these subscenes by adding custom metadata as prim attributes. Any prim with this custom metadata, along with its child prims, will be included as part of the subscene. The processing logic for extracting these subscenes is integrated into the standard traversal of the UsdStage during the initial loading of the USD file for visualization.
Figure 5 illustrates the logical structure of a USD file. The “World” element represents the full scene, while the “Component” and “Subcomponent” elements represent various parts of the scene. The USD file user can add metadata to mark which parts of this structure are included in specific subscenes. For example, the “World” itself is the “World” subscene, and the “Component 1” can define the “Component 1” subscene. Each subscene is assigned to a tier which facilitates tracking and navigation between ancestor and descendant subscenes. This hierarchy allows the application to efficiently manage the display state and computational resources when the user moves between high-level and detailed views. Importantly, not all structural parts must have individual subscenes. Some can be viewed within a larger subscene, while others can be defined as separate subscenes for individual interaction and exploration.
Figure 6 presents an example of metadata added to a prim to denote the root of a subscene named HAB. An attribute named “dt_subscene” (short for “digital twin sub-scene”) of type “USD token,” which is equivalent to type “standard string” in C++, was added to the definition of the HAB prim to mark it as the root of the “HAB” subscene. All its descendant prims are included in the “HAB” subscene. During the traversal of the UsdStage, USD’s GetAttribute function was used to fetch that attribute within the visited prim.
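A simplified sketch of this lookup during stage traversal is given below; the attribute name follows Figure 6, while the function name and the container used to track subscenes are illustrative:

```cpp
#include <pxr/base/tf/token.h>
#include <pxr/usd/usd/attribute.h>
#include <pxr/usd/usd/prim.h>
#include <pxr/usd/usd/stage.h>
#include <map>
#include <string>

// Collect subscene roots marked with the custom "dt_subscene" attribute
// while traversing the stage during the initial load.
std::map<std::string, pxr::SdfPath> CollectSubscenes(const pxr::UsdStageRefPtr& stage) {
    std::map<std::string, pxr::SdfPath> subscenes;
    static const pxr::TfToken kSubsceneAttr("dt_subscene");

    for (const pxr::UsdPrim& prim : stage->Traverse()) {
        pxr::UsdAttribute attr = prim.GetAttribute(kSubsceneAttr);
        pxr::TfToken name;
        if (attr && attr.Get(&name)) {
            // The prim and all of its descendants form the named subscene.
            subscenes[name.GetString()] = prim.GetPath();
        }
    }
    return subscenes;
}
```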

2.3. FI3D-NVIDIA Omniverse Connector

With the FI3D-USD adaptor component in place, the next step is to develop a connector for FI3D and NVIDIA Omniverse. The connector provides two primary functionalities: (1) enabling FI3D to access data stored on an Omniverse Nucleus server for both read and write actions and (2) allowing FI3D to receive and broadcast updates from and to Omniverse. The overall pipeline of the FI3D-Omniverse Connector is shown in Figure 7.
NVIDIA provides several dependencies to support this connection, as listed in Table 5. The Omniverse Client Library facilitates data access on an Omniverse Nucleus server. The Omniverse USD Resolver Library wraps the USD file paths, allowing a client to access USD files as if they were stored locally, regardless of their actual storage location. The Omniverse Connector SDK handles the transmission and reception of updates from and to the Omniverse.
The FI3D-NVIDIA Omniverse Connector has two main functions: (1) accessing USD files on an Omniverse Nucleus server and (2) handling bidirectional updates between Omniverse and FI3D. Table 5 lists the key components and dependencies used in the development of this connector.

2.3.1. Accessing USD Files on Omniverse Nucleus Server

Since NVIDIA Omniverse is built on USD [57], USD files and their related files (e.g., JPG and PNG texture files) can be stored on an Omniverse Nucleus server to facilitate real-time, cross-platform collaboration [96]. There are two primary methods for accessing these USD files. The first uses the Omniverse Drive application to map an Omniverse Nucleus server to the local machine [94], treating the server’s file system as if it were local. Drawbacks of this method include the need for an additional application and local disk space, and the mapping of potentially unneeded files. Additionally, NVIDIA plans to deprecate this application in the future [98].
The second, more efficient method uses the Omniverse USD Resolver library [97]. This library wraps an Omniverse file path (as shown in Figure 8), allowing it to be treated as a local file path without requiring the entire Nucleus file system to be mapped to the local machine. This approach also ensures that all supporting files, such as textures, can be accessed as needed. With this functionality, two-way read and write actions can be performed on any USD files stored on an Omniverse Nucleus server.
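Assuming the Omniverse Client Library and USD Resolver have been initialized as in NVIDIA’s connector samples, opening a Nucleus-hosted stage reduces to passing the omniverse:// path to the standard USD API; the server and file names below are placeholders:

```cpp
#include <pxr/usd/usd/stage.h>
#include <string>

// With the Omniverse USD Resolver registered, an omniverse:// URL behaves
// like a local path for stage and layer operations (read and write).
int main() {
    const std::string url =
        "omniverse://localhost/Projects/LunarScene/UIA.usd";  // placeholder path

    pxr::UsdStageRefPtr stage = pxr::UsdStage::Open(url);
    if (!stage)
        return 1;

    // ... author changes through the usual USD API ...

    // Saving writes the edits back to the Nucleus server.
    stage->GetRootLayer()->Save();
    return 0;
}
```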

2.3.2. Omniverse Live

Omniverse Live is an NVIDIA service that enables real-time, collaborative sessions, allowing multiple users across different software applications and physical locations to collaborate on a USD file and exchange changes in real time with all participating users [95]. In this service, a client initiates a live session, which other clients can then join. When a user makes a change to any part of the scene, Omniverse Live computes the change difference and instantly broadcasts the updates to all participating clients of the live session. The OmniOp module listens to these broadcasts and updates the affected visuals immediately.
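One way such broadcasts can be consumed on the FI3D side is through USD’s standard change notification; the sketch below is illustrative (the class and helper names are ours), with the Omniverse-specific live-session setup omitted:

```cpp
#include <pxr/base/tf/notice.h>
#include <pxr/base/tf/weakBase.h>
#include <pxr/base/tf/weakPtr.h>
#include <pxr/usd/sdf/path.h>
#include <pxr/usd/usd/notice.h>
#include <pxr/usd/usd/stage.h>

// Listens for changes on a live stage and marks the affected visuals dirty,
// so the module can push updated geometry to all connected AR clients.
class LiveUpdateListener : public pxr::TfWeakBase {
public:
    explicit LiveUpdateListener(const pxr::UsdStageRefPtr& stage) {
        _key = pxr::TfNotice::Register(pxr::TfCreateWeakPtr(this),
                                       &LiveUpdateListener::OnObjectsChanged,
                                       pxr::UsdStagePtr(stage));
    }

    void OnObjectsChanged(const pxr::UsdNotice::ObjectsChanged& notice,
                          const pxr::UsdStageWeakPtr& /*sender*/) {
        for (const pxr::SdfPath& path : notice.GetChangedInfoOnlyPaths())
            MarkVisualDirty(path);   // attribute values changed (e.g., a switch state)
        for (const pxr::SdfPath& path : notice.GetResyncedPaths())
            MarkVisualDirty(path);   // structural changes (prims added, removed, or moved)
    }

private:
    void MarkVisualDirty(const pxr::SdfPath& path) { /* queue a re-render of this prim */ }
    pxr::TfNotice::Key _key;
};
```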

3. Results

3.1. AR Visualization

A variety of 3D models were used to test the newly developed FI3D-USD adaptor. Some were obtained from publicly available online sources, while others were provided by Tietronix Software, Inc. (Houston, TX, USA). All of the model files were native USD files. USD models can be authored and saved natively in the USD file format in a wide variety of digital content creation applications, such as Blender, version 4.5 (Blender Foundation, Amsterdam, The Netherlands) and Autodesk Maya, version 2023 (Autodesk Inc., San Francisco, CA, USA). Early in its adoption of USD, NVIDIA provided support for a selected group of applications through Omniverse Connectors. For example, Blender’s Omniverse Connector supports importing and exporting USD data.
The FI3D-USD adaptor was tested with native USD models from NVIDIA (free access) [99] and Tietronix Software, Inc. (restricted access). These include large-scale models of a residential lobby (NVIDIA) and a lunar exploration scene (Tietronix). These models are extensive in terms of prim, mesh, point, and face counts. Figure 9 provides a structural overview of the lunar exploration scene. Within this larger scene, the Lunar Habitat Complex (HAB) and the Umbilical Interface Assembly (UIA) were selected as the specific models used for visualization. These 3D visuals are rendered on the framework’s GUI and can be transmitted to a Microsoft HoloLens 1 HMD. This allows the same visuals to be experienced in different modalities: on the computer’s display and through the HoloLens 1’s immersive AR interface. Table 6 shows some key information on the models used.
Figure 10, Figure 11 and Figure 12 illustrate the visualization results of the residential lobby, HAB and UIA on both the framework’s GUI and a Microsoft HoloLens 1. A qualitative comparison of the results shows that the HoloLens 1 offers a more intuitive experience with enhanced depth perception and easier visual inspection. Users can navigate the spatially fixed visuals to examine details, eliminating the need for a mouse and keyboard to zoom and rotate objects and providing a hands-free experience. Since the HoloLens 1 operates via a Wi-Fi connection, users can have an immersive experience without physical constraints. Users can perform actions on the rendered visuals, such as rotating, zooming, grabbing, and dragging using bare hands or gloved hands, which is particularly beneficial for professionals like surgeons.
Figure 13 illustrates the average load time for each USD model over 10 runs with respect to its element counts, as listed in Table 6. Figure 14 provides the Pearson correlation coefficients of the average load time and the element counts. While no apparent correlation exists between the load time and the number of prims and meshes, the results suggest a dependency on point and face counts, with load times increasing for larger counts of the two elements. A detailed list of all load times used to generate Figure 13 can be found in Table S1 of the Supplementary Materials.

3.2. Subscene Navigation

The full lunar exploration scene includes various objects, such as the HAB, the Lunar Electric Rover (LER), and the UIA, whose structure is shown in Figure 9. Metadata for subscene extraction was added as a custom attribute to the highest parent prim of each object and the root prim for the entire scene. The FI3D-USD adaptor’s subscene extraction extension traversed the USD file to identify and list these marked prims, allowing the user to select specific subscenes for visualization.
Figure 15 demonstrates a use case for navigating the lunar scene. Initially, the full scene was rendered. Then, the user selected the HAB subscene, and the visual was updated accordingly. Next, the user navigated to the UIA subscene, as shown in the FI3D GUI and on the Microsoft HoloLens 1 in Figure 15c,d. This feature makes the AR experience less cluttered and more comprehensible by allowing the user to narrow the information presented. The updated subscenes can be shared among all FI3D clients, and changes can be made by any of the participating clients. Crucially, while a user focuses on a specific subscene, the system maintains constant tracking of the state for both the visible subscene and all hidden ones. Updates regarding these non-visible subscenes are processed silently in the background without user intervention.

3.3. Physical-Digital Twin Pipeline

After implementing the FI3D-USD adaptor, the next objective was to access USD files stored on an Omniverse Nucleus server as if they were local. The Omniverse Connector handles the reading and writing of the USD files on the Nucleus server, allowing the USD adaptor to process them identically to local USD files. The Omniverse Connector SDK manages live updates and extends the standard functionality of Pixar’s USD used by the adaptor [93].
The time required to execute a request on the FI3D module, generate the response, transmit it to the client’s Framework Interface (FI), and apply the update to the affected visual element is 60 ms [78]. Furthermore, the FI implemented on the HoloLens device can execute one visual update every 16 ms, excluding time spent on processing raw data, e.g., loading polygon meshes and associated textures [78].
The system was then tested with USD files provided by Tietronix Software, Inc. for their digital twin project focused on lunar exploration. A key experiment was the two-way data integration and interaction between a physical UIA and its virtual counterpart in AR. Two approaches were used to verify this bidirectional data flow.
In the first approach, FI3D was connected directly to an MQTT server. When a physical UIA’s switch was toggled (e.g., ON or OFF), a corresponding message was sent to the MQTT server. The virtual UIA on the FI3D side updated its visuals to reflect this change. Conversely, any updates on the virtual UIA were reflected on the physical UIA.
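The first approach can be illustrated with a short sketch using a generic MQTT client (here the Eclipse Paho C++ API; the broker address, topic names, and payloads are placeholders rather than the actual UIA protocol):

```cpp
#include <mqtt/async_client.h>
#include <iostream>
#include <string>

// Subscribe to switch-state messages from the physical UIA and publish
// virtual-UIA changes back, mirroring the bidirectional test in Figure 16.
int main() {
    mqtt::async_client client("tcp://localhost:1883", "fi3d-omniop");  // placeholder broker

    client.set_message_callback([](mqtt::const_message_ptr msg) {
        // e.g., topic "uia/switch/power", payload "ON" or "OFF"
        std::cout << msg->get_topic() << " -> " << msg->to_string() << std::endl;
        // ... update the corresponding virtual switch visual here ...
    });

    client.connect()->wait();
    client.subscribe("uia/switch/#", 1)->wait();

    // Reflect a change made on the virtual UIA back to the physical twin.
    client.publish(mqtt::make_message("uia/switch/power", "OFF"))->wait();

    // ... keep running; disconnect on shutdown ...
    client.disconnect()->wait();
    return 0;
}
```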
The second approach introduced Omniverse as an intermediary between the MQTT server and the FI3D system. Updates from the physical UIA were first received by the MQTT server and transmitted to the Omniverse. Omniverse then transferred the appropriate data to the FI3D system and other clients for further processing. Conversely, when an update was made on the virtual UIA, the FI3D system broadcast the message to Omniverse via the MQTT server. This ensured that all other AR clients, along with the physical UIA’s switches, were updated instantly. The pipelines for both testing methods are illustrated in Figure 16. As all operations are performed locally within the host computer (FI3D, Omniverse, and MQTT) and in the proximity of the host computer (UIA-MQTT messaging), the latencies were not considered.

4. Discussion and Conclusions

In this work, we present the Omniverse Operator (OmniOp), an application developed on the FI3D framework that provides AR visualization for USD models and offers an immersive AR experience for USD-based digital twins. The application effectively renders USD model data, enabling users to view the 3D models in augmented reality. A key feature is its subscene visualization capability, which allows users to limit the scene to models of interest while maintaining the state of the entire scene, thereby reducing visual clutter and alleviating information overload from large-scale digital twins. Another key feature is its seamless integration with the NVIDIA Omniverse ecosystem, which allows it to access data from the Omniverse Nucleus server and update AR visuals in real-time via Omniverse Live. This demonstrates its capability to serve as a bidirectional AR interface for a digital twin system, enabling multi-user, multi-modal interaction with both the physical and digital counterparts across multiple sites.
Our work offers a solution for the AR interface of large-scale USD-based digital twins, giving the user full control of the immersive experience. It also demonstrates the feasibility of integrating a new data format and its ecosystem, USD and Omniverse, into an existing platform like FI3D. The USD-integrated FI3D system can be extended further with specialized functionalities, opening up a promising path for the integration of immersive AR and 3D digital twins. Since FI3D, Omniverse, and MQTT are installed on the same computer, almost all computational processing is local. The only potential source of latency is the Framework Interface (FI) communication. Additional latencies can emerge depending on how FI3D, Omniverse and MQTT are deployed.
Despite the application’s achievements, it has several limitations that need to be addressed in future research. First, its texture rendering capability is currently limited. As our primary focus was on rendering the geometry and transformations, a more robust texture rendering implementation is required for proper visual fidelity. Second, the long loading times need to be optimized, aiming to reduce the process from minutes to seconds. One potential approach involves implementing multi-resolution rendering to reduce the level of detail for out-of-focus elements, a technique that would necessitate restructuring the underlying USD content. Lastly, the system was tested by trained individuals with prior knowledge of the technology and setup. More extensive user studies with larger groups of both experienced and inexperienced participants are necessary to thoroughly investigate and evaluate the usability of the AR interface for specific digital twin tasks.
Furthermore, the potential complications and numerical errors associated with triangulation prior to visualization and transmission to HMD devices need careful investigation, as they may lead to a loss of detail. The computational costs incurred by data processing and format conversions also warrant elaborate future research. Moreover, the performance impact resulting from traversing a large number of prims to establish the subscene structure requires detailed analysis. Future work will also involve performance testing and comparative analysis of our solution using HoloLens 2. It is noted, however, that Microsoft has discontinued the HoloLens line of products; therefore, we plan to assess and focus on a wider range of XR hardware, such as Meta Quest, VIVE, Varjo, Vision Pro, VITURE, and DigiLens, to further demonstrate the scalability of the OmniOp pipeline.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/virtualworlds4040050/s1, Table S1: Load times (in milliseconds) of 10 runs for the three USD models in Table 6.

Author Contributions

Conceptualization, K.Q.T., J.D.V.-G. and N.V.T.; methodology, K.Q.T., E.L.L., J.D.V.-G. and N.V.T.; software, K.Q.T. and J.D.V.-G.; validation, K.Q.T. and J.D.V.-G.; formal analysis, K.Q.T., E.L.L., J.D.V.-G. and N.V.T.; investigation, K.Q.T., J.D.V.-G. and N.V.T.; resources, J.D.V.-G. and N.V.T.; data curation, K.Q.T. and J.D.V.-G.; writing—original draft preparation, K.Q.T.; writing—review and editing, K.Q.T., E.L.L., J.D.V.-G. and N.V.T.; visualization, K.Q.T. and J.D.V.-G.; supervision, E.L.L., J.D.V.-G. and N.V.T.; project administration, E.L.L., J.D.V.-G. and N.V.T.; funding acquisition, J.D.V.-G. and N.V.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Aeronautics and Space Administration (NASA) STTR contract # 80NSSC22PB226. In addition, for the early version of the FI3D, author Jose Daniel Velazco-Garcia acknowledges support from the National Science Foundation (NSF) award DGE-1746046 and author Nikolaos V. Tsekos from the NSF award CNS-1646566.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request due to restrictions or in the Supplementary Materials.

Acknowledgments

Special thanks to Theodore Griffin, Department of Technical Communication, University of North Texas, Denton, TX 76203, USA for invaluable feedback on language usage and proofreading.

Conflicts of Interest

Author Jose Daniel Velazco-Garcia was employed by the company Tietronix Software, Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Velazco-Garcia, J.D.; Navkar, N.V.; Balakrishnan, S.; Younes, G.; Abi-Nahed, J.; Al-Rumaihi, K.; Darweesh, A.; Elakkad, M.S.M.; Al-Ansari, A.; Christoforou, E.G.; et al. Evaluation of How Users Interface with Holographic Augmented Reality Surgical Scenes: Interactive Planning MR-Guided Prostate Biopsies. Robot. Comput. Surg. 2021, 17, e2290. [Google Scholar] [CrossRef] [PubMed]
  2. Morales Mojica, C.M.; Velazco-Garcia, J.D.; Pappas, E.P.; Birbilis, T.A.; Becker, A.; Leiss, E.L.; Webb, A.; Seimenis, I.; Tsekos, N.V. A Holographic Augmented Reality Interface for Visualizing of MRI Data and Planning of Neurosurgical Procedures. J. Digit. Imaging 2021, 34, 1014–1025. [Google Scholar] [CrossRef] [PubMed]
  3. Mojica, C.M.M.; Garcia, J.D.V.; Navkar, N.V.; Balakrishnan, S.; Abinahed, J.; Ansari, W.E.; Al-Rumaihi, K.; Darweesh, A.; Al-Ansari, A.; Gharib, M.; et al. A Prototype Holographic Augmented Reality Interface for Image-Guided Prostate Cancer Interventions. In Proceedings of the Eurographics Workshop on Visual Computing for Biology and Medicine, Granada, Spain, 20–21 September 2018; Eurographics Association: Goslar, Germany, 2018; pp. 17–21. [Google Scholar] [CrossRef]
  4. Capecchi, I.; Bernetti, I.; Borghini, T.; Caporali, A. CaldanAugmenty—Augmented Reality and Serious Game App for Urban Cultural Heritage Learning. In Proceedings of the Extended Reality: International Conference, XR Salento 2023, Lecce, Italy, 6–9 September 2023; pp. 339–349. [Google Scholar] [CrossRef]
  5. Ayoub, A.; Pulijala, Y. The Application of Virtual Reality and Augmented Reality in Oral & Maxillofacial Surgery. BMC Oral Health 2019, 19, 238. [Google Scholar] [CrossRef]
  6. Karkaria, U. BMW to build factories faster virtually: Nvidia’s Omniverse cuts costs, increases flexibility. Automotive News 2023, 98, 3. Available online: https://www.proquest.com/docview/2864615448/ABF598FC6D82418DPQ/1 (accessed on 13 September 2025).
  7. Garg, S.; Sinha, P.; Singh, A. Overview of Augmented Reality and Its Trends in Agriculture Industry. In IOT with Smart Systems; Senjyu, T., Mahalle, P., Perumal, T., Joshi, A., Eds.; Smart Innovation, Systems and Technologies; Springer Nature Singapore: Singapore, 2022; Volume 251, pp. 627–636. ISBN 978-981-16-3944-9. [Google Scholar] [CrossRef]
  8. Boroushaki, T.; Lam, M.; Chen, W.; Dodds, L.; Eid, A.; Adib, F. Exploiting Synergies between Augmented Reality and RFIDs for Item Localization and Retrieval. In Proceedings of the 2023 IEEE International Conference on RFID (RFID), Seattle, WA, USA, 13–15 June 2023; IEEE: New York, NY, USA, 2023; pp. 30–35. [Google Scholar] [CrossRef]
  9. Siegele, D.; Di Staso, U.; Piovano, M.; Marcher, C.; Matt, D.T. State of the Art of Non-vision-Based Localization Technologies for AR in Facility Management. In Augmented Reality, Virtual Reality, and Computer Graphics; De Paolis, L.T., Bourdot, P., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2020; Volume 12242, pp. 255–272. ISBN 978-3-030-58464-1. [Google Scholar] [CrossRef]
  10. Escribà-Gelonch, M.; Liang, S.; Van Schalkwyk, P.; Fisk, I.; Long, N.V.D.; Hessel, V. Digital Twins in Agriculture: Orchestration and Applications. J. Agric. Food Chem. 2024, 72, 10737–10752. [Google Scholar] [CrossRef]
  11. González, J.J.C.; Mishra, A.; Xu, G.; Garcia, J.D.V.; Tsekos, N.V. Mixed Reality Holographic Models for the Interactive and Synergetic Exploration of Space Structures in Architectural Design and Education. In Shell and Spatial Structures; Gabriele, S., Manuello Bertetto, A., Marmo, F., Micheletti, A., Eds.; Lecture Notes in Civil Engineering; Springer Nature Switzerland: Cham, Switzerland, 2024; Volume 437, pp. 540–548. ISBN 978-3-031-44327-5. [Google Scholar] [CrossRef]
  12. Ricci, M.; Mosca, N.; Di Summa, M. Augmented and Virtual Reality for Improving Safety in Railway Infrastructure Monitoring and Maintenance. Sensors 2025, 25, 3772. [Google Scholar] [CrossRef]
  13. Papagiannis, H. How AR Is Redefining Retail in the Pandemic. Available online: https://hbr.org/2020/10/how-ar-is-redefining-retail-in-the-pandemic (accessed on 16 March 2025).
  14. Boland, M. Does AR Really Reduce eCommerce Returns? Available online: https://arinsider.co/2021/09/28/does-ar-really-reduce-ecommerce-returns-2/ (accessed on 16 March 2025).
  15. Maio, R.; Santos, A.; Marques, B.; Ferreira, C.; Almeida, D.; Ramalho, P.; Batista, J.; Dias, P.; Santos, B.S. Pervasive Augmented Reality to Support Logistics Operators in Industrial Scenarios: A Shop Floor User Study on Kit Assembly. Int. J. Adv. Manuf. Technol. 2023, 127, 1631–1649. [Google Scholar] [CrossRef]
  16. Aranda-García, S.; Otero-Agra, M.; Fernández-Méndez, F.; Herrera-Pedroviejo, E.; Darné, M.; Barcala-Furelos, R.; Rodríguez-Núñez, A. Augmented Reality Training in Basic Life Support with the Help of Smart Glasses. A pilot study. Resusc. Plus 2023, 14, 100391. [Google Scholar] [CrossRef]
  17. He, S.; Ma, C.; Yuan, Z.-Y.; Xu, T.; Xie, Q.; Wang, Y.; Huang, X. Feasibility of Augmented Reality Using Dental Arch-Based Registration Applied to Navigation in Mandibular Distraction Osteogenesis: A Phantom Experiment. BMC Oral Health 2024, 24, 1321. [Google Scholar] [CrossRef]
  18. Liamruk, P.; Onwong, N.; Amornrat, K.; Arayapipatkul, A.; Sipiyaruk, K. Development and Evaluation of an Augmented Reality Serious Game to Enhance 21st Century Skills in Cultural Tourism. Sci. Rep. 2025, 15, 13492. [Google Scholar] [CrossRef]
  19. Carulli, M.; Generosi, A.; Bordegoni, M.; Mengoni, M. Design of XR Applications for Museums, Including Technology Maximising Visitors’ Experience. In Advances on Mechanics, Design Engineering and Manufacturing IV; Gerbino, S., Lanzotti, A., Martorelli, M., Mirálbes Buil, R., Rizzi, C., Roucoules, L., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2023; pp. 1460–1470. ISBN 978-3-031-15927-5. [Google Scholar] [CrossRef]
  20. Tokuno, K.; Kusunoki, F.; Inagaki, S.; Mizoguchi, H. Talkative Museum: Augmented Reality Interactive Museum Guide System Towards Collaborative Child-Parent-Specimen Interaction. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference, Delft, The Netherlands, 17–20 June 2024; ACM: New York, NY, USA, 2024; pp. 754–758. [Google Scholar] [CrossRef]
  21. Zucchi, V.; Guidazzoli, A.; Bellavia, G.; De Luca, D.; Liguori, M.C.; Delli Ponti, F.; Farroni, F.; Chiavarini, B. Il Piccolo Masaccio e le Terre Nuove. Creativity and Computer Graphics for Museum Edutainment. In Electronic Imaging & the Visual Arts. EVA 2018 Florence; Cappellini, V., Ed.; Firenze University Press: Florence, Italy, 2018; pp. 166–172. ISBN 978-88-6453-706-1. [Google Scholar] [CrossRef]
  22. Mangano, F.G.; Admakin, O.; Lerner, H.; Mangano, C. Artificial Intelligence and Augmented Reality for Guided Implant Surgery Planning: A Proof of Concept. J. Dent. 2023, 133, 104485. [Google Scholar] [CrossRef]
  23. Van Gestel, F.; Van Aerschot, F.; Frantz, T.; Verhellen, A.; Barbé, K.; Jansen, B.; Vandemeulebroucke, J.; Duerinck, J.; Scheerlinck, T. Augmented Reality Guidance Improves Accuracy of Orthopedic Drilling Procedures. Sci. Rep. 2024, 14, 25269. [Google Scholar] [CrossRef] [PubMed]
  24. McCloskey, K.; Turlip, R.; Ahmad, H.S.; Ghenbot, Y.G.; Chauhan, D.; Yoon, J.W. Virtual and Augmented Reality in Spine Surgery: A Systematic Review. World Neurosurg. 2023, 173, 96–107. [Google Scholar] [CrossRef] [PubMed]
  25. Wu, J.-R.; Wang, M.-L.; Liu, K.-C.; Hu, M.-H.; Lee, P.-Y. Real-time Advanced Spinal Surgery via Visible Patient Model and Augmented Reality System. Comput. Methods Programs Biomed. 2014, 113, 869–881. [Google Scholar] [CrossRef] [PubMed]
  26. Velazco-Garcia, J.D.; Navkar, N.V.; Balakrishnan, S.; Abinahed, J.; Al-Ansari, A.; Younes, G.; Darweesh, A.; Al-Rumaihi, K.; Christoforou, E.G.; Leiss, E.L.; et al. Preliminary Evaluation of Robotic Transrectal Biopsy System on an Interventional Planning Software. In Proceedings of the 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 28–30 October 2019; IEEE: New York, NY, USA, 2019; pp. 357–362. [Google Scholar] [CrossRef]
  27. Arpaia, P.; De Benedetto, E.; De Paolis, L.; D’Errico, G.; Donato, N.; Duraccio, L. Performance and Usability Evaluation of an Extended Reality Platform to Monitor Patient’s Health during Surgical Procedures. Sensors 2022, 22, 3908. [Google Scholar] [CrossRef]
  28. Daher, M.; Ghanimeh, J.; Otayek, J.; Ghoul, A.; Bizdikian, A.-J.; El Abiad, R. Augmented Reality and Shoulder Replacement: A State-of-the-Art Review Article. JSES Rev. Rep. Tech. 2023, 3, 274–278. [Google Scholar] [CrossRef]
  29. Sugimoto, M.; Sueyoshi, T. Development of Holoeyes Holographic Image-Guided Surgery and Telemedicine System: Clinical Benefits of Extended Reality (Virtual Reality, Augmented Reality, Mixed Reality), The Metaverse, and Artificial Intelligence in Surgery with a Systematic Review. MRAJ 2023, 11. [Google Scholar] [CrossRef]
  30. Liu, Y.; Zhang, Q.; Li, W. Enhancing Lower-Limb Rehabilitation: A Scoping Review of Augmented Reality Environment. J Neuroeng. Rehabil. 2025, 22, 114. [Google Scholar] [CrossRef]
  31. Shen, Y.; Ong, S.K.; Nee, A.Y.C. Hand Rehabilitation Based on Augmented Reality. In Proceedings of the 3rd International Convention on Rehabilitation Engineering & Assistive Technology—ICREATE ’09, Singapore, 22–26 April 2019; ACM Press: New York, NY, USA, 2009; p. 1. [Google Scholar] [CrossRef]
  32. Lamichhane, P.; Sukralia, S.; Alam, B.; Shaikh, S.; Farrukh, S.; Ali, S.; Ojha, R. Augmented Reality-based Training versus Standard Training in Improvement of Balance, Mobility and Fall Risk: A systematic review and meta-analysis. Ann. Med. Surg. 2023, 85, 4026–4032. [Google Scholar] [CrossRef]
  33. Da Gama, A.E.F.; Chaves, T.M.; Figueiredo, L.S.; Baltar, A.; Meng, M.; Navab, N.; Teichrieb, V.; Fallavollita, P. MirrARbilitation: A Clinically-related Gesture Recognition Interactive Tool for an AR Rehabilitation System. Comput. Methods Programs Biomed. 2016, 135, 105–114. [Google Scholar] [CrossRef]
  34. Phan, H.L.; Le, T.H.; Lim, J.M.; Hwang, C.H.; Koo, K. Effectiveness of Augmented Reality in Stroke Rehabilitation: A Meta-Analysis. Appl. Sci. 2022, 12, 1848. [Google Scholar] [CrossRef]
  35. Yang, Z.-Q.; Du, D.; Wei, X.-Y.; Tong, R.K.-Y. Augmented Reality for Stroke Rehabilitation during COVID-19. J. Neuroeng. Rehabil. 2022, 19, 136. [Google Scholar] [CrossRef] [PubMed]
  36. Khan, H.U.; Ali, Y.; Khan, F.; Al-antari, M.A. A Comprehensive Study on Unraveling the Advances of Immersive Technologies (VR/AR/MR/XR) in the Healthcare Sector during the COVID-19: Challenges and Solutions. Heliyon 2024, 10, e35037. [Google Scholar] [CrossRef] [PubMed]
  37. Novak-Marcincin, J.; Barna, J.; Janak, M.; Novakova-Marcincinova, L. Augmented Reality Aided Manufacturing. Procedia Comput. Sci. 2013, 25, 23–31. [Google Scholar] [CrossRef]
  38. Yap, H.J.; Pai, Y.S.; Chang, S.-W.; Yap, K.S. Development of an Augmented Reality-Based G-Code Generator in a Virtual CNC Milling Simulation. Int. J. Comput. Sci. Eng. (IJCSE) 2016, 5, 63–72. [Google Scholar]
  39. Angelino, A.; Martorelli, M.; Tarallo, A.; Cosenza, C.; Papa, S.; Monteleone, A.; Lanzotti, A. An Augmented Reality Framework for Remote Factory Acceptance Test: An Industrial Case Study. In Advances on Mechanics, Design Engineering and Manufacturing IV; Gerbino, S., Lanzotti, A., Martorelli, M., Mirálbes Buil, R., Rizzi, C., Roucoules, L., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2023; pp. 768–779. ISBN 978-3-031-15927-5. [Google Scholar] [CrossRef]
  40. Seeliger, A.; Cheng, L.; Netland, T. Augmented Reality for Industrial Quality Inspection: An Experiment Assessing Task Performance and Human Factors. Comput. Ind. 2023, 151, 103985. [Google Scholar] [CrossRef]
  41. Menk, C.; Jundt, E.; Koch, R. Visualisation Techniques for Using Spatial Augmented Reality in the Design Process of a Car. Comput. Graph. Forum 2011, 30, 2354–2366. [Google Scholar] [CrossRef]
  42. Frizziero, L.; Santi, G.; Donnici, G.; Leon-Cardenas, C.; Ferretti, P.; Liverani, A.; Neri, M. An Innovative Ford Sedan with Enhanced Stylistic Design Engineering (SDE) via Augmented Reality and Additive Manufacturing. Designs 2021, 5, 46. [Google Scholar] [CrossRef]
  43. Ho, P.T.; Albajez, J.A.; Santolaria, J.; Yagüe-Fabra, J.A. Study of Augmented Reality Based Manufacturing for Further Integration of Quality Control 4.0: A Systematic Literature Review. Appl. Sci. 2022, 12, 1961. [Google Scholar] [CrossRef]
  44. Gao, L.; Wang, C.; Wu, G. Wearable Biosensor Smart Glasses Based on Augmented Reality and Eye Tracking. Sensors 2024, 24, 6740. [Google Scholar] [CrossRef]
  45. Tran, K.Q.; Neeli, H.; Tsekos, N.V.; Velazco-Garcia, J.D. Immersion into 3D Biomedical Data Via Holographic AR Interfaces Based on the Universal Scene Description (USD) Standard. In Proceedings of the 2023 IEEE 23rd International Conference on Bioinformatics and Bioengineering (BIBE), Dayton, OH, USA, 4–6 December 2023; IEEE: New York, NY, USA, 2023; pp. 354–358. [Google Scholar] [CrossRef]
  46. Neeli, H.; Tran, K.Q.; Velazco-Garcia, J.D.; Tsekos, N.V. A Multiuser, Multisite, and Platform-Independent On-the-Cloud Framework for Interactive Immersion in Holographic XR. Appl. Sci. 2024, 14, 2070. [Google Scholar] [CrossRef]
  47. Grieves, M.; Vickers, J. Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems. In Transdisciplinary Perspectives on Complex Systems; Springer International Publishing: Cham, Switzerland, 2017; pp. 85–113. ISBN 978-3-319-38754-3. [Google Scholar] [CrossRef]
  48. Campos-Ferreira, A.E.; Lozoya-Santos, J.d.J.; Vargas-Martınez, A.; Mendoza, R.R.; Morales-Menendez, R. Digital Twin Applications: A Review. In Proceedings of the Memorias del Congreso Nacional de Control Automático, Puebla, México, 23–25 October 2019; Volume 2, pp. 606–611. [Google Scholar]
  49. Verdouw, C.; Tekinerdogan, B.; Beulens, A.; Wolfert, S. Digital Twins in Smart Farming. Agric. Syst. 2021, 189, 103046. [Google Scholar] [CrossRef]
  50. Che, Z.; Peng, C.; Zhang, Z. Digital Twin Based Definition (DTBD) Modeling Technology for Product Life Cycle Management and Optimization. In Flexible Automation and Intelligent Manufacturing: The Human-Data-Technology Nexus; Kim, K.-Y., Monplaisir, L., Rickli, J., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2023; pp. 573–583. ISBN 978-3-031-17628-9. [Google Scholar] [CrossRef]
  51. Botín-Sanabria, D.M.; Mihaita, A.-S.; Peimbert-García, R.E.; Ramírez-Moreno, M.A.; Ramírez-Mendoza, R.A.; Lozoya-Santos, J.D.J. Digital Twin Technology Challenges and Applications: A Comprehensive Review. Remote Sens. 2022, 14, 1335. [Google Scholar] [CrossRef]
  52. Pallagst, K.; Pauly, J.; Stumpf, M. Conceptualising Smart Cities in the Japanese Planning Culture. In KEEP ON PLANNING FOR THE REAL WORLD. Climate Change Calls for Nature-Based Solutions and Smart Technologies. Proceedings of the REAL CORP 2024, 29th International Conference on Urban Development, Regional Planning and Information Society, Mannheim, Germany, 15–17 April 2024; CORP–Competence Center of Urban and Regional Planning: Graz, Austria, 2024; pp. 137–148. [Google Scholar] [CrossRef]
  53. Evans, S.; Savian, C.; Burns, A.; Cooper, C. Digital Twins for the Built Environment: An Introduction to the Opportunities, Benefits, Challenges and Risks; Institution of Engineering and Technology: Hertfordshire, UK, 2019; Available online: https://www.theiet.org/media/8762/digital-twins-for-the-built-environment.pdf (accessed on 30 January 2025).
  54. Geyer, M. BMW Group Starts Global Rollout of NVIDIA Omniverse; NVIDIA Blog: Santa Clara, CA, USA, 2023; Available online: https://blogs.nvidia.com/blog/bmw-group-nvidia-omniverse/ (accessed on 14 August 2025).
  55. Kazuhiko, I.; Atsush, Y. Building a Common Smart City Platform Utilizing FIWARE (Case Study of Takamatsu City). NEC Tech. J. 2018, 13, 28–31. [Google Scholar]
  56. Goldenits, G.; Mallinger, K.; Raubitzek, S.; Neubauer, T. Current Applications and Potential Future Directions of Reinforcement Learning-Based Digital Twins in Agriculture. Smart Agric. Technol. 2024, 8, 100512. [Google Scholar] [CrossRef]
  57. Kass, M.; DeLise, F.; Kim, T.-Y. SIGGRAPH 2019: NVIDIA Omniverse: An Open, USD Based Collaboration Platform for Constructing and Simulating Virtual Worlds. Available online: https://developer.nvidia.com/siggraph/2019/video/sig921 (accessed on 21 September 2023).
  58. Johnston, K. NVIDIA Omniverse Opens Portals to Vast Worlds of OpenUSD. Available online: https://nvidianews.nvidia.com/news/nvidia-omniverse-opens-portals-to-vast-worlds-of-openusd (accessed on 17 September 2023).
  59. Byrne, C. Bentley Systems Brings Infrastructure Digital Twins to NVIDIA Omniverse | Bentley Systems, Incorporated. Available online: https://investors.bentley.com/news-releases/news-release-details/bentley-systems-brings-infrastructure-digital-twins-nvidia (accessed on 13 September 2025).
  60. Autodesk Inc. Maya Creative Help | Supported File Formats | Autodesk. Available online: https://help.autodesk.com/view/MAYACRE/ENU/?guid=GUID-3A6190CA-E296-4A10-9287-5AEE156DBA9D (accessed on 15 August 2025).
  61. Unity Technologies. Unity—Manual: Model File Formats. Available online: https://docs.unity3d.com/6000.3/Documentation/Manual/3D-formats.html (accessed on 15 August 2025).
  62. Blender Documentation Team. Importing & Exporting Files—Blender Manual. Available online: https://docs.blender.org/manual/en/2.93/files/import_export.html (accessed on 15 August 2025).
  63. Library of Congress. STL (STereoLithography) File Format, Binary. Available online: https://www.loc.gov/preservation/digital/formats/fdd/fdd000505.shtml (accessed on 24 September 2023).
  64. Pixar Animation Studios. USD Frequently Asked Questions—Universal Scene Description 25.08 Documentation. Available online: https://openusd.org/release/usdfaq.html (accessed on 14 August 2025).
  65. Van Gelder, D. Real-time Graphics in Pixar Film Production. In Proceedings of the ACM SIGGRAPH 2016 Real-Time Live! Anaheim, CA, USA, 24–28 July 2016; ACM: New York, NY, USA, 2016; p. 27. [Google Scholar] [CrossRef]
  66. Pixar Animation Studios. Introduction to USD. Available online: https://openusd.org/docs/ (accessed on 14 August 2025).
  67. Autodesk Inc. Universal Scene Description | OpenUSD | Autodesk. Available online: https://www.autodesk.com/solutions/universal-scene-description (accessed on 14 August 2025).
  68. NVIDIA Corporation. OmniUsdResolver Details—Omniverse USD Resolver 1.42.3 Documentation. Available online: https://docs.omniverse.nvidia.com/kit/docs/usd_resolver/latest/docs/resolver-details.html (accessed on 10 April 2025).
  69. NVIDIA Corporation. USD Connections Overview—Omniverse Connect. Available online: https://docs.omniverse.nvidia.com/connect/latest/index.html (accessed on 14 August 2025).
  70. Blevins, A.; Murray, M. Zero to USD in 80 Days. In Proceedings of the ACM SIGGRAPH 2018 Talks, Vancouver, BC, Canada, 12–16 August 2018; ACM: New York, NY, USA, 2018; pp. 1–2. [Google Scholar] [CrossRef]
  71. Walt Disney Animation Studios. Walt Disney Animation Studios—Moana Island Scene. Available online: https://disneyanimation.com/resources/moana-island-scene/ (accessed on 10 August 2025).
  72. Vavilala, V. Light pruning on Toy Story 4. In Proceedings of the ACM SIGGRAPH 2019 Talks, Los Angeles, CA, USA, 28 July–1 August 2019; ACM: New York, NY, USA, 2019; pp. 1–2. [Google Scholar] [CrossRef]
  73. Lehman, N.; Johnston, K. Pixar, Adobe, Apple, Autodesk, and NVIDIA Form Alliance for OpenUSD to Drive Open Standards for 3D Content. Available online: https://nvidianews.nvidia.com/news/aousd-to-drive-open-standards-for-3d-content (accessed on 17 September 2023).
  74. Xu, B.; Gao, F.; Yu, C.; Zhang, R.; Wu, Y.; Wang, Y. OmniDrones: An Efficient and Flexible Platform for Reinforcement Learning in Drone Control. IEEE Robot. Autom. Lett. 2024, 9, 2838–2844. [Google Scholar] [CrossRef]
  75. Jones, G. NVIDIA Blog: CloudXR Platform on AWS; NVIDIA Blog: Santa Clara, CA, USA, 2020. [Google Scholar]
  76. NVIDIA Corporation. FAQ—NVIDIA CloudXR SDK Documentation. Available online: https://docs.nvidia.com/cloudxr-sdk/support/faq.html (accessed on 10 August 2025).
  77. Oun, A.; Hagerdorn, N.; Scheideger, C.; Cheng, X. Mobile Devices or Head-Mounted Displays: A Comparative Review and Analysis of Augmented Reality in Healthcare. IEEE Access 2024, 12, 21825–21839. [Google Scholar] [CrossRef]
  78. Velazco-Garcia, J.D.; Shah, D.J.; Leiss, E.L.; Tsekos, N.V. A Modular and Scalable Computational Framework for Interactive Immersion into Imaging Data with a Holographic Augmented Reality Interface. Comput. Methods Programs Biomed. 2021, 198, 105779. [Google Scholar] [CrossRef]
  79. Ma, T.; Xiao, F.; Zhang, C.; Zhang, J.; Zhang, H.; Xu, K.; Luo, X. Digital Twin for 3D Interactive Building Operations: Integrating BIM, IoT-enabled Building Automation Systems, AI, and Mixed Reality. Autom. Constr. 2025, 176, 106277. [Google Scholar] [CrossRef]
  80. Schenk, V.K.; Küper, M.A.; Menger, M.M.; Herath, S.C.; Histing, T.; Audretsch, C.K. Augmented Reality in Pelvic Surgery: Using an AR-headset as Intraoperative Radiation-free Navigation Tool. Int. J. CARS 2025. [Google Scholar] [CrossRef]
  81. Qiao, X.; Xie, W.; Peng, X.; Li, G.; Li, D.; Guo, Y.; Ren, J. Large-Scale Spatial Data Visualization Method Based on Augmented Reality. Virtual Real. Intell. Hardw. 2024, 6, 132–147. [Google Scholar] [CrossRef]
  82. Pixar Animation Studios. USD Terms and Concepts—Universal Scene Description 25.02 Documentation. Available online: https://openusd.org/release/glossary.html (accessed on 5 March 2025).
  83. NVIDIA Corporation. Prim—Omniverse USD. Available online: https://docs.omniverse.nvidia.com/usd/latest/learn-openusd/terms/prim.html (accessed on 5 March 2025).
  84. Pixar Animation Studios. Universal Scene Description: UsdPrim Class Reference. Available online: https://openusd.org/release/api/class_usd_prim.html (accessed on 7 April 2025).
  85. Pixar Animation Studios. Universal Scene Description: UsdGeomMesh Class Reference. Available online: https://openusd.org/release/api/class_usd_geom_mesh.html (accessed on 7 April 2025).
  86. Pixar Animation Studios. Universal Scene Description: UsdStage Class Reference. Available online: https://openusd.org/release/api/class_usd_stage.html (accessed on 7 April 2025).
  87. Pixar Animation Studios. Universal Scene Description: UsdGeomXformable Class Reference. Available online: https://openusd.org/release/api/class_usd_geom_xformable.html (accessed on 17 April 2025).
  88. Pixar Animation Studios. Universal Scene Description: GfMatrix4d Class Reference. Available online: https://openusd.org/release/api/class_gf_matrix4d.html (accessed on 20 April 2025).
  89. Pixar Animation Studios. Universal Scene Description: UsdGeom: USD Geometry Schema. Available online: https://openusd.org/release/api/usd_geom_page_front.html (accessed on 17 April 2025).
  90. VTK. VTK: vtkTransform Class Reference. Available online: https://vtk.org/doc/nightly/html/classvtkTransform.html (accessed on 20 April 2025).
  91. Pixar Animation Studios. Universal Scene Description: UsdShadeMaterial Class Reference. Available online: https://openusd.org/release/api/class_usd_shade_material.html (accessed on 18 August 2025).
  92. NVIDIA Corporation. Omniverse Client Library—Omniverse Client Library 2.47.1 Documentation. Available online: https://docs.omniverse.nvidia.com/kit/docs/client_library/latest/index.html (accessed on 25 March 2025).
  93. NVIDIA Corporation. Omniverse Connect SDK—Omniverse Connect SDK 1.0.0 Documentation. Available online: https://docs.omniverse.nvidia.com/kit/docs/connect-sdk/latest/index.html (accessed on 25 March 2025).
  94. NVIDIA Corporation. Omniverse Drive (Beta)—Omniverse Utilities. Available online: https://catalog.ngc.nvidia.com/ (accessed on 19 October 2025).
  95. NVIDIA Corporation. NVIDIA Omniverse Live Overview|Omniverse 2020|NVIDIA On-Demand. Available online: https://www.nvidia.com/en-us/on-demand/session/omniverse2020-om1572/ (accessed on 20 April 2025).
  96. NVIDIA Corporation. Nucleus Overview—Omniverse Nucleus. Available online: https://docs.omniverse.nvidia.com/nucleus/latest/index.html (accessed on 20 April 2025).
  97. NVIDIA Corporation. Omniverse USD Resolver—Omniverse USD Resolver 1.42.3 Documentation. Available online: https://docs.omniverse.nvidia.com/kit/docs/usd_resolver/latest/index.html (accessed on 25 March 2025).
  98. NVIDIA Corporation. Legacy Tools for Omniverse Launcher | NVIDIA Developer. Available online: https://developer.nvidia.com/omniverse/legacy-tools (accessed on 18 August 2025).
  99. NVIDIA Corporation. Residential Lobby—Omniverse USD. Available online: https://docs.omniverse.nvidia.com/usd/latest/usd_content_samples/res_lobby.html (accessed on 19 August 2025).
Figure 1. Physical Twin and Digital Twin, with two-way data integration and interaction.
Figure 2. The Physical Twin and Digital Twin pipeline, with FI3D and its Omniverse Operator module as an AR interface.
Figure 3. The pipeline from a 3D model created with digital content creation tools (e.g., Autodesk Maya, Blender) to its AR visualization on an AR device (e.g., HoloLens) [45].
Figure 4. FI3D-USD adaptor.
Figure 5. Example of a USD file’s logical structure.
Figure 6. Example of custom metadata added to the definition of a prim to mark the root of a subscene.
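To illustrate the kind of tagging shown in Figure 6, the following minimal sketch uses the USD Python API to attach custom metadata to a prim; the metadata key, file name, and prim path are hypothetical placeholders rather than the identifiers actually used by FI3D.

from pxr import Usd

# Minimal sketch: mark a prim as the root of a subscene via custom metadata.
# The key "fi3d:subsceneRoot", the file name, and the prim path are hypothetical.
stage = Usd.Stage.Open("lunar_exploration.usd")
prim = stage.GetPrimAtPath("/World/HAB")
prim.SetCustomDataByKey("fi3d:subsceneRoot", True)  # written into the prim's customData metadata
stage.GetRootLayer().Save()                          # persist the change to the authored layer

A consuming application can later query the same key with GetCustomDataByKey() to decide where a subscene begins.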
Figure 7. FI3D-NVIDIA Omniverse Connector.
Figure 8. Example of a URL for a USD file stored on an Omniverse Nucleus server.
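As a complement to Figure 8, the sketch below shows how a USD file hosted on an Omniverse Nucleus server can be opened directly through an omniverse:// URL once the Omniverse USD Resolver plugin [97] is installed; the server address and project path are illustrative placeholders.

from pxr import Usd

# Sketch only: the omniverse:// scheme is resolved by the Omniverse USD Resolver,
# not the local filesystem. Server address and path below are placeholders.
url = "omniverse://nucleus.example.org/Projects/LunarExploration/scene.usd"
stage = Usd.Stage.Open(url)
print(stage.GetRootLayer().identifier)  # confirms which layer was actually resolved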
Figure 9. Lunar exploration scene’s structure. Abbreviations: HAB: Lunar Habitat Complex; LER: Lunar Electric Rover; UIA: Umbilical Interface Assembly.
Figure 10. Rendering of the 3D model provided by NVIDIA: Residential Lobby (without the surrounding building).
Figure 11. The HAB, as rendered in FI3D GUI. Note the UIA visual within the Airlock component of the HAB's subscene, marked with a red circle.
Figure 12. A UIA, as rendered in (a) FI3D GUI and (b) FI3D's Framework Interface implementation on Microsoft HoloLens 1.
Figure 13. Average load time with respect to (a) prim count, (b) mesh count, (c) point count and (d) face count of the three USD models in Table 6.
Figure 14. Pearson correlation coefficients of average load time and element counts.
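For reference, the coefficients in Figure 14 follow the standard Pearson definition, where x_i is an element count (prims, meshes, points, or faces), y_i is the corresponding average load time, and n is the number of tested models:

r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}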
Figure 15. (a) Full scene (in an Omniverse application) and selected subscenes of the lunar exploration scene: (b) HAB (in FI3D GUI), (c) UIA (in FI3D GUI) and (d) UIA (in FI3D's Framework Interface on Microsoft HoloLens 1). In the foreground of the full scene, from left to right: LER, HAB's Airlock, HAB's Main Housing, and HAB's Hygiene Module.
Figure 16. Two pipelines for two-way data integration and interaction: (a) with MQTT, and (b) with MQTT and Omniverse.
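As a rough illustration of the MQTT leg of Figure 16, the following Python sketch (using the paho-mqtt 1.x client API) publishes a pose update for a digital-twin component; the broker address, topic name, and payload fields are assumptions made for this example and not part of the actual pipeline.

import json
import paho.mqtt.client as mqtt

# Sketch only: one direction of the two-way data flow, publishing a pose update.
# Broker address, topic name, and payload schema are illustrative assumptions.
client = mqtt.Client()                      # paho-mqtt 1.x constructor
client.connect("broker.example.org", 1883)
update = {
    "prim": "/World/LER",                   # hypothetical prim path of the rover
    "translate": [12.4, 0.0, -3.1],
    "rotateXYZ": [0.0, 90.0, 0.0],
}
client.publish("digitaltwin/lunar/updates", json.dumps(update), qos=1)
client.disconnect()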
Table 1. Comparison of features: USD, STL and OBJ.
Feature | USD | STL | OBJ
Open source | Yes | Yes | Yes
Texture, material properties | Supported | Not supported | Supported
Non-destructive authoring | Supported | Not supported | Not supported
Real-time collaboration | Supported | Not supported | Not supported
Table 2. Comparison of USD terms vs. VTK terms. C++ data types are shown in parentheses.
USD | VTK
Points (VtVec3fArray) | Points (vtkFloatArray)
Faces | Cells
Face vertices | Cell vertices
Face vertex counts (VtIntArray) | Cell vertex offsets (vtkTypeInt32Array)
Face vertex index array (VtIntArray) | Cell connectivity array (vtkTypeInt32Array)
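A minimal Python sketch of the mapping in Table 2 is given below; it converts a UsdGeomMesh into a vtkPolyData and is intended only as an illustration of the term correspondence (the file name and prim path are placeholders), not as the FI3D adaptor implementation.

from pxr import Usd, UsdGeom
import vtk

# Illustration of the Table 2 mapping: USD points / face vertex counts / face
# vertex indices become VTK points / cells / cell connectivity.
stage = Usd.Stage.Open("uia.usd")                       # placeholder file name
mesh = UsdGeom.Mesh(stage.GetPrimAtPath("/World/UIA"))  # placeholder prim path

points = mesh.GetPointsAttr().Get()                 # VtVec3fArray
counts = mesh.GetFaceVertexCountsAttr().Get()       # VtIntArray: vertices per face
indices = mesh.GetFaceVertexIndicesAttr().Get()     # VtIntArray: flattened indices

vtk_points = vtk.vtkPoints()
for p in points:
    vtk_points.InsertNextPoint(p[0], p[1], p[2])

vtk_cells = vtk.vtkCellArray()
offset = 0
for count in counts:
    vtk_cells.InsertNextCell(count)                 # one VTK cell per USD face
    for i in range(count):
        vtk_cells.InsertCellPoint(indices[offset + i])
    offset += count

poly_data = vtk.vtkPolyData()
poly_data.SetPoints(vtk_points)
poly_data.SetPolys(vtk_cells)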
Table 3. Some USD terms encountered in this work.
Term | Explanation
UsdPrim | Principal container of other types of scene description; providing API for accessing and creating all of the contained kinds of scene description [84]
UsdGeomMesh | Encodes a mesh with optional subdivision properties and features [85]
UsdStage | Outermost container for scene description [86]
UsdGeomXformable | Base class for all transformable prims [87]
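The short sketch below relates the Table 3 terms to one another: a UsdStage holds a hierarchy of UsdPrims, mesh prims are read through UsdGeomMesh, and transformable prims through UsdGeomXformable (the file name is a placeholder).

from pxr import Usd, UsdGeom

# Sketch relating the Table 3 terms; "scene.usd" is a placeholder file name.
stage = Usd.Stage.Open("scene.usd")                     # UsdStage: outermost container
for prim in stage.Traverse():                           # depth-first traversal of UsdPrims
    if prim.IsA(UsdGeom.Mesh):
        mesh = UsdGeom.Mesh(prim)                       # mesh geometry of this prim
        print(prim.GetPath(), len(mesh.GetPointsAttr().Get() or []))
    xformable = UsdGeom.Xformable(prim)
    if xformable:                                       # prim can carry transform ops
        matrix = xformable.ComputeLocalToWorldTransform(Usd.TimeCode.Default())  # GfMatrix4d [88]
        print(prim.GetPath(), matrix.ExtractTranslation())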
Table 4. Comparison of rotation types, USD vs. VTK. Angle units are shown in parentheses.
Rotation Type | USD | VTK
X | Supported (degree) | Supported (radian)
Y | Supported (degree) | Supported (radian)
Z | Supported (degree) | Supported (radian)
XYZ | Supported (degree) | Not supported
XZY | Supported (degree) | Not supported
YZX | Supported (degree) | Not supported
YXZ | Supported (degree) | Not supported
ZXY | Supported (degree) | Not supported
ZYX | Supported (degree) | Not supported
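Because Table 4 indicates that composite Euler orders are available only on the USD side (in degrees) while the VTK side accepts single-axis rotations (in radians), a converter has to split a composite value into per-axis terms and convert the units. The sketch below shows that split for an XYZ value; the angles are illustrative, and composing the terms back into one transform must follow the rotation-order convention of the target API.

import math

# Illustrative split of a USD rotateXYZ value (degrees) into per-axis
# rotations in radians; the angles are placeholders.
usd_rotate_xyz_deg = (30.0, 45.0, 60.0)

def split_rotate_xyz(rx_deg, ry_deg, rz_deg):
    """Return the per-axis components of a rotateXYZ value, converted to radians."""
    return [("X", math.radians(rx_deg)),
            ("Y", math.radians(ry_deg)),
            ("Z", math.radians(rz_deg))]

for axis, angle_rad in split_rotate_xyz(*usd_rotate_xyz_deg):
    print(f"rotate about {axis}: {angle_rad:.4f} rad")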
Table 5. Components and dependencies for FI3D-NVIDIA Omniverse Connector development.
Component/Dependency | Description
NVIDIA Omniverse Client Library | Library for Omniverse Clients to communicate with Omniverse servers [92]
NVIDIA Omniverse Connector SDK | Development kit for building an Omniverse Connector [93]
NVIDIA Omniverse Drive | Application mapping folders from Nucleus Servers to local folders [94]
NVIDIA Omniverse Live | Service for real-time, non-destructive collaboration on the same content from various applications [95]
NVIDIA Omniverse Nucleus server | Database and collaboration engine of Omniverse [96]
NVIDIA Omniverse USD Resolver Library | Plugin for working with files on Omniverse servers [97]
Table 6. Models used for testing the FI3D-USD adaptor.
Model | Prim Count | Mesh Count | Point Count | Face Count | Native USD/Converted USD
Umbilical Interface Assembly (UIA) | 38 | 23 | 5306 | 5708 | Native USD
Lunar Habitat Complex (HAB) | 1674 | 878 | 4,821,107 | 3,918,260 | Native USD
Residential Lobby | 1613 | 832 | 5,889,594 | 6,656,299 | Native USD
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
