ISPRS International Journal of Geo-Information
  • Article
  • Open Access

10 March 2025

The Metaverse Is Geospatial: A System Model Architecture Integrating Spatial Computing, Digital Twins, and Virtual Worlds

1 Department of Applied Informatics, School of Information Sciences, University of Macedonia, 54636 Thessaloniki, Greece
2 Department of Surveying & Geoinformatics Engineering, International Hellenic University, 57001 Thessaloniki, Greece
* Author to whom correspondence should be addressed.

Abstract

Virtual geographic environments have long simulated real-world scenarios in urban planning, monument preservation, city infrastructure management, education, and entertainment. Their web-based visualisation and distribution have made these environments widely accessible. However, many systems remain static, lacking real-time data integration and multi-user collaboration, while virtual worlds designed for the Metaverse emphasise dynamic interaction yet often omit essential geospatial context. Bridging this gap is critical for advancing virtual geographic environments into the next generation. In this paper, we present a modular system architecture for applications demonstrating geospatial virtual worlds over the web. Our goal is to provide a generic, well-structured framework that exposes the essential classes and interfaces needed for building 3D virtual worlds with geospatial data at their core. Our work focuses on defining specific geospatial components, methods, classes, and interfaces that form the foundation of a modern geospatial virtual environment in the Metaverse era. The proposed architecture is organised into three layers: access, world, and integration, which together enable accurate mapping and integration of real-time sensor data, digital twin synchronisation, and support for location-based services. Our analysis reveals that while most current solutions excel in either multi-user interaction or geospatial data management, they rarely combine both. In contrast, our model delivers enhanced geospatial focus, real-time collaboration, and interoperability between physical and digital realms. Overall, this work lays a solid foundation for future innovations in creating immersive, interactive, and geospatially grounded virtual experiences over the web, marking an important step in the evolution of virtual geographic environments for the Metaverse era.

1. Introduction

Virtual geographic environments are virtual representations of the physical environment that have existed since the 1990s, notably pioneered by Microsoft and NASA and popularised by Google Earth in the mid-2000s [1,2,3]. Their benefits in visualisation, remote observation, navigation, and holistic representation significantly increased their popularity [4]. Since then, numerous applications have been developed to cover various user needs, such as navigation [5] and representation of descriptive data, including population and climate change information [6]. However, as the volume and diversity of information escalated, interoperability and ease of distribution challenges emerged [7].
During the same period, GIS software was heavily utilised in the natural resources and defence domains, particularly by government agencies. Data interoperability was also a challenge, which led to the foundation of the Open Geospatial Consortium (OGC) in the 1990s. Since then, more than 150 standards [8] have been proposed to address various information distribution needs over the web. For instance, the 3D Tiles standard was proposed for streaming massive 3D geospatial content, and the CityGML standard was proposed to define a conceptual model for representing, storing, and exchanging virtual 3D city models [9]. Today, virtual geographic environments (VGE), sometimes referred to as geospatial virtual environments (GVE), are commonly available online. They can be used to create and explore world-class 3D globes through open-source libraries such as CesiumJS [10], which recently joined Bentley Systems, or the gmp-map-3d by Google [11]. The question was philosophically posed in 2016: “Is there life in Virtual Globes?” [12]. This inquiry explored whether a geospatial environment could transcend its role as a mere visualisation tool to become something more dynamic and interactive. While the answer remained uncertain at the time, the vision of transforming geospatial environments into spaces capable of inhabitance and meaningful interactions was already emerging.
Meanwhile, virtual worlds have also evolved significantly since their inception (first-generation virtual worlds) in the late 1970s [13]. Initially, they were primarily text-based environments created for entertainment and socialising purposes. In the 1980s, the development of the 2D game “Habitat” marked a significant advancement, as the term “avatar” was first used to describe the digital residents of these worlds, symbolising the transfer of human consciousness into a digital entity. The 1990s brought further innovation by introducing 2.5D and 3D environments. The Metaverse represents the ultimate goal of virtual worlds, where they are fully interoperable and interchangeable across servers and clients [14]. Today, several virtual world platforms exist that allow users to create and share their virtual environments, such as Second Life [15] and OpenSim [16].
In June 2022, the Metaverse Standards Forum was launched to unite stakeholders and chart a path towards a flourishing Metaverse based on open standards [17]. The link between geospatial technologies and the Metaverse has become increasingly apparent, with experts from geospatial technologies, game engines, network infrastructure, and web standards collaborating to usher in a viable Metaverse era.
Two major characteristics of a virtual world are its support for multiple users and data persistence [18]. Thus, we could upgrade virtual geographic environments into geospatial virtual worlds by enabling them to support multiple concurrent users who can interact with each other and by making them persistent (meaning they would operate even if a specific user does not participate).
Geospatial virtual worlds (GVWs) offer capabilities that go beyond merely visualising static 3D geospatial data. Unlike virtual geographic environments (VGEs), which primarily provide a one-way visual or analytical window into real-world conditions, GVWs are inherently multi-user, persistent, and interactive. For example, consider a remote archaeological excavation site where harsh weather or security concerns limit field access. In a VGE, researchers might only view a 3D model of the site, updated sporadically. In a GVW, however, multiple archaeologists from around the globe can interact simultaneously in real-time, each controlling an avatar, annotating discoveries, and dynamically integrating sensor data, thereby collectively exploring, documenting, and even reconstructing artefacts or terrains. This level of continuous, shared engagement is unique to GVWs and can significantly enhance our understanding of, and ability to collaborate around, remote or inaccessible environments. With the increasing availability of 5G infrastructure, offering these environments over the web via location-based services is becoming easier, making them accessible to a broader audience. However, ensuring that GVWs remain interoperable and distributable over the web requires a well-defined system model. This paper analyses the core components of GVWs to propose a robust, modular, and expandable system model that modern applications can leverage. By projecting VGEs into the Metaverse era envisioned by researchers and experts, this work positions GVWs as the natural evolution of static VGEs. The findings of this research aim to contribute to the development of a more interconnected and resilient Metaverse, with far-reaching implications across diverse applications and industries.
The purpose of this study is to create a modular system model that bridges the gap between traditional VGEs and the dynamic virtual worlds emerging in the Metaverse. Our work introduces a three-layered architecture consisting of access, world, and integration layers that enhance real-time data integration, multi-user collaboration, and web interoperability. We specify detailed classes, interfaces, and methods to support geospatial components, transforming static virtual environments into interactive, living digital spaces. This framework advances the state of the art in geoinformatics and lays the groundwork for practical applications in urban planning, cultural heritage preservation, and disaster management, among other fields.
This paper is structured as follows: Section 2 provides the background and related work, establishing the foundational concepts and prior research in the fields of geospatial virtual worlds and the Metaverse. Section 3 presents the proposed GVW system model, detailing its modular architecture, design principles, and use case scenarios. Section 4 discusses noteworthy aspects, including the system’s strengths, weaknesses, and potential future developments. Finally, Section 5 concludes the paper, summarising the contributions and outlining directions for future research.

3. Proposed Approach of the GVW System Model

3.1. Methodology

In this study, we began with an extensive literature review to collect and analyse existing frameworks, concepts, and system models in VGEs and the Metaverse. We synthesised common components and patterns from this review along with established design principles, such as the emergent structure design principles, narrative composability, social assortativity, and path discoverability. Based on these insights, we developed a draft system model architecture that outlines the essential classes and interfaces needed for geospatial virtual worlds. To validate and refine our model, we constructed a detailed real-world use case scenario involving a virtual tour of an archaeological site, where both remote and on-site users interact in a hybrid environment. This iterative process allowed us to identify missing connections and subsequently incorporate additional components, such as a Time Spectrum interface and an Entitlement System, ensuring robust support for real-time data integration, multi-user collaboration, and accurate geospatial mapping. In doing so, our work advances the state of the art by providing a modular and comprehensive framework that is well positioned to meet the evolving demands of the Metaverse.

3.2. Design Principles and Use Case Scenario

Building on the models, frameworks, and architectures discussed earlier, we developed a scalable and robust system model for geospatial virtual worlds (GVW system model). This model defines the core components of a GVW, establishes their relationships, and organises them into a scalable structure, forming the foundation of the business logic for applications incorporating GVWs. The design process was guided by principles derived from established research on digital environments and human-centred system design.
For our proposed approach, the key guiding principles include the emergent structure design principles (ES1–ES5) proposed by Chaturvedi et al. [64], emphasising flexibility, adaptability, and user-driven evolution. Given the dynamic nature of GVWs, where user interactions [52], real-world geospatial data, and evolving requirements continuously shape the system, these principles provide a solid framework to guide the design of our architecture. They ensure that our system accommodates diverse users, prioritises user-centric experiences, supports persistent user-generated content, allows for multiple levels of computational experimentation, and seamlessly integrates real and virtual worlds.
Furthermore, we drew on Metaverse-centric principles such as narrative composability, social assortativity, and path discoverability [65], which ensure smooth transitions between experiences, foster meaningful social interactions, and help users navigate the virtual environment based on their interests. We also considered the balance of spatial, temporal, artifactual, actor, and transition tensions, as discussed by Seidel et al. [66], to maintain coherence and dynamism in world design, pacing, object fidelity, user roles, and transitions. The nine system requirements derived from the principles mentioned above are described in the table below (Table 1):
Table 1. Design principles for crafting a use case scenario.
Based on these principles, we developed the following real-world use case scenario to validate the model. This scenario demonstrates how the system must successfully perform and execute tasks informed by the guiding design principles.
In this scenario, a user first accesses the virtual environment of a remote archaeological site via their smartphone. Using the smartphone interface, the user explores the digital representation of the site, navigating and discovering a virtual kiosk, which mirrors a physical kiosk located on-site. At the physical kiosk, a customer service representative equipped with an optical see-through MR headset is connected to the geospatial virtual world (GVW). Through this connection, the representative sees the user’s virtual avatar approach the kiosk and initiates a conversation, providing information about upcoming events and assisting with inquiries.
Through the kiosk, the user discovers a virtual tour scheduled for 9 p.m. that day, offering a unique feature: a remote exploration vehicle equipped with a live camera feed for accessing areas of the site. Additionally, the tour includes a digitally reconstructed version of the monument, created by a cultural heritage organisation using historical records and advanced modelling techniques. This user-generated content offers participants an opportunity to experience the monument as it may have originally appeared, enriching their understanding of its historical significance. Interactive annotations within the reconstruction provide insights into the restoration process and invite collaborative discussions among attendees.
After purchasing a ticket through the system, the user receives a notification reminder of the tour’s start time. Shortly before the event, the user reenters the virtual environment using a VR headset, this time to enhance their experience.
The tour is a hybrid event, integrating remote users accessing the GVW through avatars and field users physically present at the site. A guide, connected to the GVW via an AR headset, interacts with both groups: remote users operate the exploration vehicle through virtual controls, while on-site users drive it manually. Seeing both on-site participants and virtual avatars through their AR headset, the guide supports inclusivity and creates a cohesive experience for all attendees.
As the tour begins, the guide introduces key attractions and facilitates group discussions. The exploration is enhanced by a digital twin of the remote-controlled vehicle, streaming live footage directly to the virtual environment. Remote users navigate challenging terrain and uncover hidden areas of the site using virtual controls, while on-site users have direct physical control of the vehicle. Real-time synchronisation ensures that all users, whether remote or on-site, experience the same interactions and responses from the environment.
Throughout the tour, participants communicate via a group chat interface, sharing observations and discussing the site’s significance. The system leverages AI-driven features, including real-time translation and speech-to-text functionality, ensuring inclusivity for users speaking different languages or with hearing impairments. Meanwhile, the virtual assistant provides enriched commentary, utilising live sensor data from the exploration vehicle. The experience concludes with all participants collaboratively exploring a highlighted area, guided by the on-site guide, while the live camera feed offers real-world insights into the site’s most remarkable features.

3.3. System Overview and Scope

The GVW system model bridges the gap between geospatial technologies and virtual worlds, enabling immersive, persistent, and multi-user environments that integrate seamlessly with real-world data. This modular architecture combines the strengths of virtual geographic environments and virtual worlds, transforming static geospatial data into dynamic, collaborative experiences.
The model is structured into three layers: the access layer, which facilitates secure and efficient user interactions; the world layer, responsible for managing and rendering the virtual environment; and the integration layer, which bridges the virtual world with the physical realm through IoT devices, APIs, and adaptive AI (Figure 2). Together, these layers form a scalable and flexible framework, enabling diverse applications such as urban planning, cultural heritage preservation, and industrial monitoring.
Figure 2. Layer diagram of the GVW system model.
The scope of the GVW system model encompasses real-time synchronisation of virtual and physical worlds, multi-user interaction, and integration of external systems and data streams to create hybrid experiences. This enables applications that require geospatial precision, interoperability, and meaningful collaboration, positioning GVWs as a cornerstone in the evolution of the Metaverse. By leveraging this system model, developers can create environments that are immersive, contextually rich, and grounded in geospatial accuracy, addressing challenges such as data persistence, scalability, and user engagement in virtual ecosystems.

3.4. Class Diagram of a Geospatial Virtual World

The use case scenario presented above highlights the complex interactions and functionalities required for a seamless user experience in GVWs. From user access and real-time synchronisation to dynamic interactions and the integration of IoT devices, the scenario demonstrates the diverse challenges the system must address. To meet these requirements, we propose a modular system model divided into three distinct layers: the access layer, the world layer, and the integration layer (Figure 3).
Figure 3. Class diagram of a geospatial virtual world.
Each layer is designed to encapsulate specific functionalities, ensuring scalability, interoperability, and maintainability. The access layer facilitates user login, interaction, and real-time connectivity. The world layer manages the core components of the virtual environment, including 3D scenes, avatars, and physics-based interactions. Finally, the integration layer bridges the virtual environment with external systems, including IoT devices, AI-driven components, and real-world data streams. Together, these layers provide a robust framework to support the dynamic and immersive experiences described in the use case.

3.4.1. Access Layer

The access layer is responsible for enabling users and their devices to securely and efficiently connect to and interact with the system (Figure 4). This layer is designed to provide a seamless and engaging user experience across various devices while maintaining high standards of accessibility and performance.
Figure 4. Class diagram of the access layer.
At the core of this layer is the devices class, which facilitates connections to the GVW (Table 2). Each device operates as hardware that supports distinct human–computer interaction methods and integrates with the Interactable interface. This integration ensures interoperability and consistency in how devices interact with the virtual environment. Regardless of the device being used, all users instantiate the users class. These users can either be field users, who access the GVW using augmented and mixed reality, or remote users, who engage with it through virtual reality. These users are authenticated through the Authentication Process interface, which ensures that all authorisation processes meet system requirements for securely identifying participants within the GVW.
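The relationships above can be sketched in code. The following TypeScript is a minimal illustration of the access-layer classifiers, not a normative API: the identifiers (`Interactable`, `AuthenticationProcess`, `MRHeadset`, `FieldUser`, `RemoteUser`) mirror the class diagram's names, but their members are assumptions made for the example.

```typescript
// Illustrative sketch of the access layer. Devices implement the
// Interactable interface; users authenticate through an injected
// AuthenticationProcess, as described in the text.

interface Interactable {
  /** Human–computer interaction methods the device exposes. */
  supportedInputs(): string[];
}

interface AuthenticationProcess {
  authenticate(credentials: string): boolean;
}

abstract class Device implements Interactable {
  constructor(public readonly name: string) {}
  abstract supportedInputs(): string[];
}

// Example device: an optical see-through MR headset, as used at the kiosk.
class MRHeadset extends Device {
  supportedInputs(): string[] {
    return ["gaze", "gesture", "voice"];
  }
}

abstract class User {
  constructor(public readonly id: string, private auth: AuthenticationProcess) {}
  login(credentials: string): boolean {
    return this.auth.authenticate(credentials);
  }
}

/** Field users access the GVW via AR/MR; remote users via VR. */
class FieldUser extends User {}
class RemoteUser extends User {}
```

Because authentication is an interface rather than a concrete class, an application can swap in any mechanism (OAuth, tokens, SSO) without touching the device or user classes.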
Table 2. Classifiers overview of the access layer.
The system model uses the web browsers’ interface as the primary software component for accessing and experiencing the GVW. Modern web browsers support several essential Web APIs that enable this functionality. The WebXR Device API allows extended reality applications to run directly within web browsers, providing immersive experiences. The WebSocket API facilitates real-time communication between the client and server, supporting features such as environmental updates and user-to-user text-based communication. The WebRTC API supports audio and video communication among users, enhancing interactivity. The Notifications API sends notifications to users even when they are not actively logged into the GVW.
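A client built on this model would typically verify that the browser exposes these APIs before entering the GVW. The sketch below shows one way to do that; note that in a real browser `xr` lives on `navigator` while the other three are globals, so for testability the check here runs against an injected flat feature map rather than the real `window` object.

```typescript
// Illustrative capability check for the Web APIs listed in the text.
// The keys map to real browser globals; the labels are the API names.

const REQUIRED_FEATURES: Record<string, string> = {
  xr: "WebXR Device API",            // immersive AR/VR sessions
  WebSocket: "WebSocket API",        // real-time environment updates and chat
  RTCPeerConnection: "WebRTC API",   // audio/video communication among users
  Notification: "Notifications API", // reminders outside the GVW
};

/** Returns the human-readable names of the APIs missing from `env`. */
function missingFeatures(env: Record<string, unknown>): string[] {
  return Object.entries(REQUIRED_FEATURES)
    .filter(([key]) => env[key] === undefined)
    .map(([, label]) => label);
}
```

In a browser, the client could call this once at startup and fall back to a non-immersive 2D view when, for example, the WebXR Device API is absent.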
By integrating these components, the access layer creates a robust and user-centric gateway to the geospatial virtual world. It ensures seamless connections between devices, users, and the virtual environment.

3.4.2. World Layer

The world layer is responsible for defining, managing, and rendering the virtual environment within the system (Figure 5). It serves as the foundation of the GVW, incorporating spatial, visual, auditory, and interactive elements. This layer ensures the virtual environment remains dynamic, coherent, and immersive, providing users with responsive experiences that reflect real-world conditions while enabling meaningful interactions.
Figure 5. Class diagram of the world layer.
At the core of the world layer is the 3D Scenes class (Table 3). To define space in a geospatial context, 3D scenes utilise the Coordinate System interface. This interface establishes a unified coordinate system for placing objects within the scene, offering methods for measuring, positioning, and reprojecting objects between different coordinate systems. These capabilities ensure interoperability and usability across diverse geospatial datasets.
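As an illustration of the measuring capability, the sketch below pairs a hypothetical `CoordinateSystem` interface shape with a great-circle distance on a spherical Earth model. Real implementations would delegate reprojection to a dedicated library (e.g. proj4js); the method signatures here are assumptions for the example.

```typescript
// Sketch of the Coordinate System interface: measurement plus a
// placeholder for reprojection between reference systems.

interface CoordinateSystem {
  readonly epsgCode: number;
  /** Distance in metres between two [lon, lat] positions. */
  measure(a: [number, number], b: [number, number]): number;
  /** Reproject a position into another coordinate system. */
  reproject(pos: [number, number], target: CoordinateSystem): [number, number];
}

const EARTH_RADIUS_M = 6_371_000; // mean Earth radius, spherical model

/** Haversine great-circle distance in metres between [lon, lat] pairs. */
function haversine(a: [number, number], b: [number, number]): number {
  const rad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = rad(b[1] - a[1]);
  const dLon = rad(b[0] - a[0]);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a[1])) * Math.cos(rad(b[1])) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}
```

A `measure` implementation for a geographic CRS such as EPSG:4326 could simply wrap `haversine`, while projected systems would use planar distance in their native units.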
Table 3. Classifiers overview of the world layer.
The temporal aspect of the 3D scenes is managed by the Time Spectrum interface. This interface defines how time operates in the virtual environment, incorporating properties such as time zones, daylight saving adjustments, and methods for time conversions or retrieving the current world time. It ensures temporal consistency and synchronisation, even when hardware, software, or network discrepancies cause delays. For example, if two users walk in the same direction at the same speed but one experiences lag due to a slower device clock, the Time Spectrum interface can restrict out-of-sync actions. Similarly, it filters delayed external updates, such as positional data arriving out of sequence, to maintain the system’s temporal accuracy.
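The filtering of out-of-sequence updates can be sketched as a per-entity timestamp gate: any update older than the last accepted one for that entity is dropped. The class and field names below are illustrative, not part of the model's normative specification.

```typescript
// Sketch of the Time Spectrum's out-of-sequence filter. Each entity
// (avatar, vehicle, sensor) keeps the world-time of its last accepted
// update; stale updates are rejected to preserve temporal consistency.

interface TimedUpdate {
  entityId: string;
  timestamp: number; // world-time in milliseconds
  position: [number, number, number];
}

class TimeSpectrum {
  private lastAccepted = new Map<string, number>();

  /** Accepts an update only if it is newer than the last one seen. */
  accept(u: TimedUpdate): boolean {
    const last = this.lastAccepted.get(u.entityId) ?? -Infinity;
    if (u.timestamp <= last) return false; // arrived out of order: drop it
    this.lastAccepted.set(u.entityId, u.timestamp);
    return true;
  }
}
```

A production system would layer interpolation or dead reckoning on top of this gate so that dropped updates do not cause visible stutter.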
The world layer also incorporates a Physics Engine class to simulate physical interactions. This engine defines fundamental rules such as gravity, friction, inertia, and object collision detection. While the system includes a predefined physics engine, advanced engines can be integrated for specialised simulations, such as airflow visualisation.
To enrich the auditory experience, the Spatial Audio class is included. This class provides two types of Spatial Audio. The first type is area-based audio, which delivers environmental sounds, such as wind or background music, that activate with specific volume levels when users enter defined areas. The second type is point-based audio, where the sound source is located at a specific point. The audio perception varies based on the user’s distance and position relative to the source, enabling features such as directional audio in conversations or localised sound effects.
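The two audio modes can be sketched as a single gain computation. The linear roll-off used for point sources is an assumption for this example; audio engines typically offer linear, inverse, and exponential distance models.

```typescript
// Sketch of the two Spatial Audio source types: area-based sources play
// at a fixed volume inside a region; point-based sources attenuate with
// the listener's distance from the source position.

type AudioSource =
  | { kind: "area"; volume: number; contains(p: [number, number]): boolean }
  | { kind: "point"; volume: number; at: [number, number]; maxDistance: number };

/** Playback gain (0..volume) for a listener at the given position. */
function gainFor(src: AudioSource, listener: [number, number]): number {
  if (src.kind === "area") {
    return src.contains(listener) ? src.volume : 0;
  }
  const d = Math.hypot(listener[0] - src.at[0], listener[1] - src.at[1]);
  // Linear roll-off: full volume at the source, silent at maxDistance.
  return d >= src.maxDistance ? 0 : src.volume * (1 - d / src.maxDistance);
}
```

In a browser client, the resulting gain would feed a Web Audio `GainNode` (or a `PannerNode` for full directional audio).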
Once space and time are established, the system incorporates Scene Objects. These objects include all elements within the scene, such as lighting and cameras, and are categorised into digital objects, digital shadows, and digital twins. This categorisation helps describe the relationship between the digital and physical realms. For instance, a terrain modelled using a digital elevation model would be categorised as a digital shadow, as it reflects real-world data but does not dynamically update in real time. The use case scenario described in Section 3.2 involves a digital twin: the virtual vehicle used by a remote user for navigation. This digital reconstruction replicates the physical vehicle’s properties and updates its state in real time. If the user turns the steering wheel, the digital twin reflects the movement, updating the vehicle’s position and transformations within the scene.
Users interact with the GVW through avatars, which are instantiated within the Avatars class. Avatars enable navigation and interaction and represent non-human entities such as Virtual Agents or Non-Player Characters (NPCs). These entities enhance user experiences by enabling more interactive and engaging environments. Users and NPCs can initiate conversations using the ChatBubble class, which supports text, audio, and video communication in private or public communication spaces.
The system supports interactions through the Actions interface, which defines available actions for each Scene Object. Actions can trigger Behaviours, which are predefined responses that may involve Animations. For example, pressing the gas pedal on the virtual vehicle triggers Behaviours such as the vehicle moving forward and the wheels spinning.
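The action-to-behaviour chain can be sketched as a per-object registry: invoking a named action runs each behaviour bound to it, in order. The identifiers below (`SceneObject`, `defineAction`, `perform`, the gas-pedal example) are illustrative, mirroring the vehicle scenario in the text.

```typescript
// Sketch of the Actions interface triggering Behaviours on a Scene
// Object. A behaviour is modelled here as a callback that records what
// it did; real behaviours would mutate scene state or play Animations.

type Behaviour = (log: string[]) => void;

class SceneObject {
  private actions = new Map<string, Behaviour[]>();

  /** Binds one or more behaviours to a named action. */
  defineAction(name: string, ...behaviours: Behaviour[]): void {
    this.actions.set(name, behaviours);
  }

  /** Triggers every behaviour bound to the action; returns the log. */
  perform(name: string): string[] {
    const log: string[] = [];
    for (const b of this.actions.get(name) ?? []) b(log);
    return log;
  }
}
```

For the vehicle, `defineAction("pressGasPedal", moveForward, spinWheels)` would bind the two behaviours described in the text, and each press of the pedal would invoke `perform("pressGasPedal")`.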
Finally, the Entitlement System class ensures authorised access and validates user actions. For example, in our use case, a remote user cannot enter a virtual tour area or operate a vehicle without first purchasing a ticket, which the system validates.
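The ticket check can be sketched as a grant-then-authorise flow. The entitlement strings below (`"tour:evening"`, `"vehicle:drive"`) are hypothetical examples of how permissions from the use case might be encoded.

```typescript
// Minimal sketch of the Entitlement System: a payment API (via the
// integration layer) would call grant() after a ticket purchase, and the
// world layer would call authorise() before permitting an action.

class EntitlementSystem {
  private grants = new Map<string, Set<string>>(); // userId -> entitlements

  grant(userId: string, entitlement: string): void {
    if (!this.grants.has(userId)) this.grants.set(userId, new Set());
    this.grants.get(userId)!.add(entitlement);
  }

  /** Validates a user action against previously granted entitlements. */
  authorise(userId: string, entitlement: string): boolean {
    return this.grants.get(userId)?.has(entitlement) ?? false;
  }
}
```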
This interconnected framework demonstrates how the world layer integrates its components to deliver a dynamic, immersive, and user-driven virtual environment. By incorporating user-generated content such as reconstructed monuments, digital twins, and elevation models, the world layer supports educational, interactive, and engaging virtual experiences. It achieves a balance between complexity and performance, fostering rich interactions and compelling user engagement.

3.4.3. Integration Layer

The integration layer is essential for keeping the geospatial virtual world (GVW) connected to, and adaptive with respect to, the physical world (Figure 6). While the other layers structure and render the virtual environment, the integration layer completes the system by linking it to real-world data, transactions, and interactions. Without this layer, the GVW would lose its connection to the physical realm and become isolated.
Figure 6. Class diagram of the integration layer.
The IoT Devices class connects physical IoT devices to the GVW, streaming real-time data into the virtual environment (Table 4). This class ensures that physical states and sensor data are accurately represented and updated within the GVW. Through this class, behaviours are triggered, enabling synchronised actions between the physical and virtual realms. For instance, a physical smart lamp installed at a kiosk with IoT capabilities can report its state to the GVW, ensuring that the virtual lamp mirrors its behaviour by turning on the lights at nighttime.
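The state-mirroring behaviour can be sketched as a registry that merges incoming device reports into the corresponding virtual twin's state. The transport is abstracted away here; in practice reports would arrive over MQTT or the WebSocket connection mentioned in the access layer. All names are illustrative.

```typescript
// Sketch of the IoT Devices class: physical devices report state
// changes, and the registry applies them to the digital twin so the
// virtual lamp mirrors the physical one (e.g. lights on at nighttime).

interface DeviceReport {
  deviceId: string;
  state: Record<string, unknown>; // e.g. { lightsOn: true }
}

class IoTDeviceRegistry {
  private twins = new Map<string, Record<string, unknown>>();

  /** Merges a physical-device report into the corresponding twin. */
  onReport(report: DeviceReport): void {
    const twin = this.twins.get(report.deviceId) ?? {};
    this.twins.set(report.deviceId, { ...twin, ...report.state });
  }

  twinState(deviceId: string): Record<string, unknown> | undefined {
    return this.twins.get(deviceId);
  }
}
```

A fuller implementation would also emit events on state change so that Behaviours in the world layer (e.g. switching the virtual lamp's light source) are triggered automatically.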
Table 4. Classifiers overview of the integration layer.
The APIs interface integrates external data, services, and functionalities into the GVW. This interface allows the system to incorporate essential features, such as payments, weather updates, and other external resources, making the virtual experience more relevant and aligned with real-world activities. The APIs interface can also interact with the Entitlement System to update ownership or permissions, enabling users to perform specific actions within the GVW.
The Location-Based Services interface provides geographic context and supports location-aware interactions. It ensures that users and objects in the GVW are accurately positioned relative to their real-world counterparts. For example, a field user can use this interface to discover and receive notifications about a virtual tour happening near their physical location, bridging the gap between real-world proximity and virtual experiences.
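A basic location-aware query from this interface can be sketched as a radius search: given the user's position and a distance function (e.g. the great-circle measure provided by the world's Coordinate System), return the points of interest within range. The names below are illustrative.

```typescript
// Sketch of a Location-Based Services radius query, as in the example
// of notifying a field user about a virtual tour near their location.

interface PointOfInterest {
  name: string;
  position: [number, number]; // [lon, lat] or local planar coordinates
}

type DistanceFn = (a: [number, number], b: [number, number]) => number;

/** Points of interest within `radius` of the user's position. */
function findNearby(
  user: [number, number],
  pois: PointOfInterest[],
  radius: number,
  distance: DistanceFn,
): PointOfInterest[] {
  return pois.filter((p) => distance(user, p.position) <= radius);
}
```

Injecting the distance function keeps the query independent of the coordinate system in use: geographic coordinates would pass a great-circle measure, projected ones a planar measure.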
The Artificial Intelligence interface enables adaptive behaviour and decision making for automated components within the GVW. This interface supports advanced features such as real-time translation, speech-to-text, and intelligent interactions, ensuring that the system is both inclusive and responsive to user needs. By facilitating intelligent automation, AI enhances the GVW’s ability to adapt to a wide range of scenarios and user requirements.
This layer ensures that the GVW remains dynamic, interconnected, and reflective of the physical world. As more classes and interfaces are added to this layer, the GVW will become increasingly adaptive and immersive, providing richer, more engaging experiences for its users.

4. Discussion

4.1. Connection to Prior Work

This work situates geospatial virtual worlds within the broader context of virtual world platforms and Metaverse-centric frameworks, many of which lack the robust geospatial anchoring that our system model provides. As illustrated in the comparison table (Table 5), most existing solutions either excel at multi-user interaction (e.g., Second Life, OpenSimulator) or at geospatial data management (e.g., Mergin’Mode) but rarely integrate both in a unified architecture. Our categories, “Real-time Multi-user”, “Persistence”, “Geospatial Focus”, and “License”, highlight the dual requirements of concurrency and realistic geographical representation, underscoring why these features must coexist for open and advanced Metaverse applications. In this context, we rate the “geospatial focus” of each platform or framework based on how closely it aligns with real-world geography and spatial data. Specifically, we look at whether it (a) implements a coordinate reference system that anchors virtual objects or scenes to an actual Earth-based coordinate grid, (b) supports the creation of virtual worlds using geospatial data such as digital twins and digital elevation/surface models, (c) integrates sensor data to reflect current geospatial conditions in the virtual world, and (d) supports location-based interactions (e.g., searching for objects within a certain radius or along a route). Platforms that robustly incorporate these elements receive a higher geospatial focus rating, while those that treat space in purely fictional or abstract terms rank lower.
Table 5. Comparison to prior work.

4.2. Strengths and Weaknesses

Comparisons with earlier platforms reveal that geospatial aspects have typically been peripheral or absent, focusing instead on social or gaming interactions. Concurrently, many geospatial standards (e.g., CityGML, 3D Tiles) address data interoperability but do not provide a model that covers all the components mentioned earlier.
A key strength of our proposed model lies in its extensible layering. The access, world, and integration layers isolate concerns such as user devices, 3D scene logic, and external data feeds, respectively. This modularity promotes scaling across diverse domains, from cultural heritage to infrastructure monitoring. Moreover, explicit geospatial interfaces, such as the Coordinate System and Location-Based Services, ensure that every virtual object, user, or sensor feed is placed coherently in a world coordinate reference.
This system model extends potential use cases beyond conventional visualisation tools. In urban planning, interactive digital twins underpinned by real-time sensor data could enhance decision making on infrastructure projects. In cultural heritage and tourism, remote participants can experience historically significant sites with geospatial precision, enabling collaborative exploration and education. Industrial domains, such as resource management or smart factories, also benefit from persistent, sensor-driven 3D replicas that reflect ongoing physical processes.
Although many of the components in this model are designed for interoperability, certain aspects, such as the Physics Engine and Spatial Audio, require additional research and validation through application prototypes. For instance, integrating physics engines for advanced simulations, such as airflow or structural mechanics, with consistent interoperability remains challenging. Similarly, Spatial Audio implementations must account for diverse hardware capabilities and network constraints to ensure accurate and synchronised audio experiences across devices. These areas highlight the need for continued development and testing to achieve fully interoperable and reliable system components.

4.3. Spatial Computing in the GVW System Model

The spatial computing layer of the Metaverse is not a standalone component but a concept realised through multiple classes and interfaces within the GVW system. It is integral to the system, enabling seamless interaction between physical and virtual realms.
This layer is evident in the human–computer interaction interfaces, where user gestures and movements are captured in real time, represented in the virtual world via avatars, and translated into actions such as picking up a virtual object. Similarly, the Coordinate System interface ensures geospatial accuracy, aligning virtual objects and interactions with real-world references. The digital twin class reflects another critical aspect, synchronising physical entities with their virtual counterparts using sensor data. For example, a virtual vehicle mirrors the movements and functionality of its real-world equivalent. Location-Based Services (LBS) add contextual interactions by integrating real-world location data, while Spatial Audio enhances immersion with precise sound placement. Temporal synchronisation is managed by the Time Spectrum interface, ensuring consistency across all interactions. Meanwhile, Scene Objects and the Physics Engine simulate realistic behaviours, grounding virtual interactions in real-world physical principles.
These components collectively demonstrate that spatial computing is embedded throughout the system, driving the geospatial and interactive features essential for creating immersive, geospatially anchored virtual worlds.
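As one concrete instance of the Location-Based Services behaviour mentioned above (searching for objects within a certain radius), a minimal sketch might look as follows. The type and function names are hypothetical, and WGS84 coordinates are assumed; the haversine great-circle distance itself is standard.

```typescript
// Sketch of an LBS radius query over geospatially anchored scene objects.
// Type/function names are illustrative; the haversine formula is standard.
interface GeoAnchored {
  id: string;
  lon: number; // degrees, WGS84 assumed for illustration
  lat: number;
}

const EARTH_RADIUS_M = 6371000; // mean Earth radius, metres

function haversineMeters(a: GeoAnchored, b: GeoAnchored): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// "Find objects within radiusM metres of centre" -- the LBS interaction
// described in the text.
function withinRadius(
  objects: GeoAnchored[],
  centre: GeoAnchored,
  radiusM: number
): GeoAnchored[] {
  return objects.filter(o => haversineMeters(centre, o) <= radiusM);
}

// Hypothetical scene objects near Thessaloniki's waterfront.
const sceneObjects: GeoAnchored[] = [
  { id: "white-tower", lon: 22.9484, lat: 40.6264 },
  { id: "rotunda", lon: 22.9529, lat: 40.6333 },
];
const userPos: GeoAnchored = { id: "user", lon: 22.9480, lat: 40.6260 };
console.log(withinRadius(sceneObjects, userPos, 200).map(o => o.id));
// only "white-tower" lies within 200 m of the user
```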

4.4. Future Developments: Interoperability, Blockchain, and POC

The interoperability of the proposed system model is closely tied to open standards. While we do not propose new geospatial standards or extend existing ones, the system could benefit from their application and orchestration to achieve interoperability within the model. Future research should focus on evaluating current standards, such as 3D Tiles, CityGML, digital twin standards, IoT protocols, and avatar specifications, to ensure alignment with the model’s components. By mapping these standards to specific elements of the system, researchers can establish a cohesive orchestration of open standards that efficiently enables the distribution of GVWs over the web. The ultimate goal is to create a chain of interoperable standards that ensures seamless data exchange and robust functionality.
An important next step is the creation of a proof-of-concept application to validate the system model in a real-world scenario. This implementation will likely lead to minor modifications and extensions of the model. Specific challenges are anticipated, particularly in managing the Time Spectrum and synchronising spatiotemporal data within a unified virtual world. Additionally, data storage and management will pose significant hurdles, as the system must address issues such as exchanging large datasets, handling multiple versions of geospatial models, and minimising storage redundancy. Techniques that enable versioning of specific parts of larger models must be explored to avoid duplicating massive digital objects with minor changes.
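One plausible approach to the versioning problem noted above is content-addressed part storage, where a new model version records references to its parts and only changed parts are stored anew. The sketch below is a toy illustration of that idea under our own assumptions; the hash function is a trivial stand-in for a real cryptographic digest, and the string "parts" stand in for large geometry payloads.

```typescript
// Toy illustration of part-level versioning: a model version is a list of
// part hashes, and unchanged parts are stored exactly once. The hash is a
// trivial stand-in; a real system would use a cryptographic digest.
function toyHash(data: string): string {
  let h = 0;
  for (let i = 0; i < data.length; i++) {
    h = (h * 31 + data.charCodeAt(i)) >>> 0; // 32-bit rolling hash
  }
  return h.toString(16);
}

class PartStore {
  private parts = new Map<string, string>(); // hash -> part payload
  put(data: string): string {
    const key = toyHash(data);
    if (!this.parts.has(key)) this.parts.set(key, data); // dedupe on hash
    return key;
  }
  storedCount(): number {
    return this.parts.size;
  }
}

const store = new PartStore();
// Version 1 of a city model: three parts.
const v1 = ["terrain-tile-A", "building-047-geometry", "texture-roof"]
  .map(p => store.put(p));
// Version 2 changes only one building; unchanged parts are reused, not copied.
const v2 = ["terrain-tile-A", "building-047-geometry-v2", "texture-roof"]
  .map(p => store.put(p));
console.log(store.storedCount()); // 4 stored parts, not 6
```

Under this scheme, two versions of a massive digital object that differ in one building share all their other parts, directly addressing the storage-redundancy concern raised above.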
Another promising avenue for future development is the incorporation of blockchain technology into the system. Blockchain could enhance the Entitlement System, ensuring secure validation of ownership and maintaining system integrity. This integration would further strengthen the GVW framework, particularly in scenarios involving sensitive transactions or digital asset management.
By addressing these future directions, this research aims to evolve the proposed system model into a robust and interoperable framework that supports geospatially anchored, dynamic virtual environments across a wide array of applications.

5. Conclusions

This research advances the concept of geospatial virtual worlds (GVWs) by introducing a modular system model that bridges the gap between geospatial technologies and the Metaverse. The proposed model, structured across the access, world, and integration layers, addresses key challenges of multi-user persistence, real-time interactivity, and geospatial interoperability. By defining core components such as Coordinate Systems, Location-Based Services, and digital twins, the model ensures precise alignment between virtual objects and their real-world counterparts.
We present a system model architecture designed for what is commonly referred to as Metaverse applications. This system can be developed using modern technologies that already operate on the web, making it a practical approach to realising the vision of Metaverse virtual worlds interconnected with the physical world. Unlike other Metaverse proposals that rely heavily on speculative technologies such as blockchain or cryptocurrency, our system emphasises feasibility and immediate applicability. By leveraging existing web technologies, it offers a grounded and implementable framework for creating interconnected virtual worlds.
The framework’s adaptability allows applications to range from lightweight cultural heritage visualisations to complex urban planning simulations enriched with live IoT data. These capabilities highlight the transformative potential of GVWs in delivering immersive, persistent, and dynamic digital experiences. Additionally, the proposed system aligns with existing open standards, such as CityGML and 3D Tiles, presenting opportunities for seamless integration and enhanced functionality across platforms.
Future developments will include evaluating the model’s interoperability with current standards, implementing a proof-of-concept application, and addressing challenges such as spatiotemporal synchronisation and scalable data storage. Techniques for versioning geospatial models and incorporating emerging technologies such as blockchain can also be explored to enhance system efficiency and integrity.
We believe this research will contribute to a better understanding of how the Metaverse is inherently geospatial, demonstrate how geospatial technologies can enhance virtual worlds and connect them with our physical world, and provide a robust framework and software architecture base for future applied research. As geospatial technologies continue to converge with the Metaverse, GVWs represent a pivotal step toward integrating digital and physical spaces. This work lays the foundation for creating geospatially anchored virtual worlds, poised to redefine user interaction and engagement across industries.

Author Contributions

Conceptualization: T.P. and K.E.; Methodology: T.P. and T.H.K.; Validation: T.P., T.H.K., and G.E.; Formal analysis: T.P.; Investigation: T.P.; Resources: T.P. and K.E.; Data curation: G.E.; Writing—original draft preparation: T.P.; Writing—review and editing: T.P., K.E., T.H.K., and G.E.; Visualization: T.P. and G.E.; Supervision: T.H.K. and T.P.; Project administration: T.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data supporting the reported results in this article are provided within the article. Additional raw data can be obtained from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pirotti, F.; Brovelli, M.A.; Prestifilippo, G.; Zamboni, G.; Kilsedar, C.E.; Piragnolo, M.; Hogan, P. An open source virtual globe rendering engine for 3D applications: NASA World Wind. Open Geospat. Data Softw. Stand. 2017, 2, 4.
  2. Neves, J.N.; Câmara, A. Virtual environments and GIS. Geogr. Inf. Syst. 1999, 1, 557–565.
  3. Ellis, S.R. What are virtual environments? IEEE Comput. Graph. Appl. 1994, 14, 17–22.
  4. Tuttle, B.T.; Anderson, S.; Huff, R. Virtual globes: An overview of their history, uses, and future challenges. Geogr. Compass 2008, 2, 1478–1505.
  5. Stone, R.J. Applications of virtual environments: An overview. In Handbook of Virtual Environments; CRC Press: Boca Raton, FL, USA, 2002; pp. 867–896.
  6. Fauville, G.; Queiroz, A.C.M.; Bailenson, J.N. Virtual reality as a promising tool to promote climate change awareness. In Technology and Health; Elsevier: Amsterdam, The Netherlands, 2020; pp. 91–108.
  7. Lee, J.G.; Kang, M. Geospatial big data: Challenges and opportunities. Big Data Res. 2015, 2, 74–81.
  8. Open Geospatial Consortium. Our History. Available online: https://www.ogc.org/our-history/ (accessed on 10 January 2025).
  9. Breunig, M.; Bradley, P.E.; Jahn, M.; Kuper, P.; Mazroob, N.; Rösch, N.; Al-Doori, M.; Stefanakis, E.; Jadidi, M. Geospatial data management research: Progress and future directions. ISPRS Int. J. Geo-Inf. 2020, 9, 95.
  10. Cesium. CesiumJS. Available online: https://cesium.com/platform/cesiumjs/ (accessed on 10 January 2025).
  11. Google. 3D Maps JavaScript Demo. Available online: https://mapsplatform.google.com/demos/3d-maps-javascript/ (accessed on 10 January 2025).
  12. Evangelidis, K.; Papadopoulos, T. Is there life in Virtual Globes? Free Open Source Softw. Geospat. (FOSS4G) Conf. Proc. 2016, 16, 7.
  13. Downey, S. History of the (virtual) worlds. J. Technol. Stud. 2014, 40, 54–66.
  14. Dionisio, J.D.N.; Iii, W.G.B.; Gilbert, R. 3D virtual worlds and the metaverse: Current status and future possibilities. ACM Comput. Surv. (CSUR) 2013, 45, 1–38.
  15. Linden Lab. Second Life. Available online: https://secondlife.com/ (accessed on 10 January 2025).
  16. OpenSimulator. Main Page. Available online: http://opensimulator.org/wiki/Main_Page (accessed on 10 January 2025).
  17. Metaverse Standards Forum. Available online: https://metaverse-standards.org/ (accessed on 10 January 2025).
  18. Gilbert, R.L. The prose (psychological research on synthetic environments) project: Conducting in-world psychological research on 3d virtual worlds. J. Virtual Worlds Res. 2011, 4.
  19. Worlds Inc. Worlds. Available online: https://www.worlds.com/ (accessed on 10 January 2025).
  20. Frey, D.; Royan, J.; Piegay, R.; Kermarrec, A.M.; Anceaume, E.; Le Fessant, F. Solipsis: A decentralized architecture for virtual environments. In Proceedings of the 1st International Workshop on Massively Multiuser Virtual Environments, Reno, NV, USA, 8 March 2008.
  21. Kaplan, J.; Yankelovich, N. Open wonderland: An extensible virtual world architecture. IEEE Internet Comput. 2011, 15, 38–45.
  22. Alatalo, T. An entity-component model for extensible virtual worlds. IEEE Internet Comput. 2011, 15, 30–37.
  23. Lopes, C. Hypergrid: Architecture and protocol for virtual world interoperability. IEEE Internet Comput. 2011, 15, 22–29.
  24. VRChat. VRChat. Available online: https://hello.vrchat.com/ (accessed on 10 January 2025).
  25. Meta. AltspaceVR. Available online: https://www.meta.com/experiences/altspacevr/2133027990157329/ (accessed on 10 January 2025).
  26. Active Worlds. Active Worlds. Available online: https://www.activeworlds.com/ (accessed on 10 January 2025).
  27. Mozilla. End Support for Mozilla Hubs. Available online: https://support.mozilla.org/en-US/kb/end-support-mozilla-hubs (accessed on 10 January 2025).
  28. Thompson, C.W. Virtual world architectures. IEEE Internet Comput. 2011, 15, 11–14.
  29. Ricci, A.; Piunti, M.; Tummolini, L.; Castelfranchi, C. The mirror world: Preparing for mixed-reality living. IEEE Pervasive Comput. 2015, 14, 60–63.
  30. Dorri, A.; Kanhere, S.S.; Jurdak, R. Multi-agent systems: A survey. IEEE Access 2018, 6, 28573–28593.
  31. Modahl, M.; Agarwalla, B.; Saponas, T.S.; Abowd, G.; Ramachandran, U. UbiqStack: A taxonomy for a ubiquitous computing software stack. Pers. Ubiquitous Comput. 2006, 10, 21–27.
  32. Evangelidis, K.; Papadopoulos, T.; Sylaiou, S. Mixed Reality: A reconsideration based on mixed objects and geospatial modalities. Appl. Sci. 2021, 11, 2417.
  33. Evangelidis, K.; Sylaiou, S.; Papadopoulos, T. Mergin’mode: Mixed reality and geoinformatics for monument demonstration. Appl. Sci. 2020, 10, 3826.
  34. George, A.H.; Fernando, M.; George, A.S.; Baskar, T.; Pandey, D. Metaverse: The next stage of human culture and the internet. Int. J. Adv. Res. Trends Eng. Technol. (IJARTET) 2021, 8, 1–10.
  35. Wang, D.; Yan, X.; Zhou, Y. Research on Metaverse: Concept, development and standard system. In Proceedings of the 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), Sanya, China, 27–29 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 983–991.
  36. Duan, H.; Li, J.; Fan, S.; Lin, Z.; Wu, X.; Cai, W. Metaverse for social good: A university campus prototype. In Proceedings of the 29th ACM International Conference on Multimedia, Chengdu, China, 20–24 October 2021; pp. 153–161.
  37. Holonext. Metaverse 101: Understanding the Seven Layers. Available online: https://holonext.com/metaverse-101-understanding-the-seven-layers/ (accessed on 10 January 2025).
  38. Setiawan, K.D.; Anthony, A. The essential factor of metaverse for business based on 7 layers of metaverse–systematic literature review. In Proceedings of the 2022 International Conference on Information Management and Technology (ICIMTech), Semarang, Indonesia, 11–12 August 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 687–692.
  39. Kuru, K. Metaomnicity: Toward immersive urban metaverse cyberspaces using smart city digital twins. IEEE Access 2023, 11, 43844–43868.
  40. Venugopal, J.P.; Subramanian AA, V.; Peatchimuthu, J. The realm of metaverse: A survey. Comput. Animat. Virtual Worlds 2023, 34, e2150.
  41. Rawat, D.B.; El Alami, H. Metaverse: Requirements, architecture, standards, status, challenges, and perspectives. IEEE Internet Things Mag. 2023, 6, 14–18.
  42. Wang, Y.; Su, Z.; Zhang, N.; Xing, R.; Liu, D.; Luan, T.H.; Shen, X. A survey on metaverse: Fundamentals, security, and privacy. IEEE Commun. Surv. Tutor. 2022, 25, 319–352.
  43. Fu, Y.; Li, C.; Yu, F.R.; Luan, T.H.; Zhao, P.; Liu, S. A survey of blockchain and intelligent networking for the metaverse. IEEE Internet Things J. 2022, 10, 3587–3610.
  44. Yang, L. Recommendations for metaverse governance based on technical standards. Humanit. Soc. Sci. Commun. 2023, 10, 1–10.
  45. Sami, H.; Hammoud, A.; Arafeh, M.; Wazzeh, M.; Arisdakessian, S.; Chahoud, M.; Wehbi, O.; Ajaj, M.; Mourad, A.; Otrok, H.; et al. The metaverse: Survey, trends, novel pipeline ecosystem & future directions. IEEE Commun. Surv. Tutor. 2024, 26, 2914–2960.
  46. Trunfio, M.; Rossi, S. Advances in metaverse investigation: Streams of research and future agenda. Virtual Worlds 2022, 1, 103–129.
  47. Stubbs, A.; Hughes, J.J.; Eisikovits, N.; Burley, J. The Democratic Metaverse: Building an Extended Reality Safe for Citizens, Workers and Consumers; Institute for Ethics and Emerging Technologies: Boston, MA, USA, 2023.
  48. Ball, M. The Metaverse: And How It Will Revolutionize Everything; Liveright Publishing: New York, NY, USA, 2022.
  49. Open Geospatial Consortium. The Metaverse Is Geospatial. Available online: https://www.ogc.org/blog-article/the-metaverse-is-geospatial/ (accessed on 10 January 2025).
  50. Ritterbusch, G.D.; Teichmann, M.R. Defining the metaverse: A systematic literature review. IEEE Access 2023, 11, 12368–12377.
  51. Türk, T. The concept of metaverse, its future and its relationship with spatial information. Adv. Geomat. 2022, 2, 17–22.
  52. Papadopoulos, T.; Evangelidis, K.; Kaskalis, T.H.; Evangelidis, G.; Sylaiou, S. Interactions in augmented and mixed reality: An overview. Appl. Sci. 2021, 11, 8752.
  53. Epic Games. Unreal Engine 5.4 Is Now Available. Available online: https://www.unrealengine.com/en-US/blog/unreal-engine-5-4-is-now-available (accessed on 10 January 2025).
  54. Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and LiDAR data registration using linear features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707.
  55. Evangelidis, K.; Papadopoulos, T.; Papatheodorou, K.; Mastorokostas, P.; Hilas, C. 3D geospatial visualizations: Animation and motion effects on spatial objects. Comput. Geosci. 2018, 111, 200–212.
  56. Naderi, H.; Shojaei, A. Digital twinning of civil infrastructures: Current state of model architectures, interoperability solutions, and future prospects. Autom. Constr. 2023, 149, 104785.
  57. Papadopoulos, T.; Evangelidis, K.; Evangelidis, G.; Kaskalis, T. Immersive mixed reality experience empowered by the internet of things and geospatial technologies. In Proceedings of the Fourteenth International Conference on Computational Structures Technology, Montpellier, France, 23–25 August 2022.
  58. Gantz, J.; Reinsel, D. The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east. IDC Iview IDC Anal. Future 2012, 2007, 1–16.
  59. Sepasgozar, S.M. Differentiating digital twin from digital shadow: Elucidating a paradigm shift to expedite a smart, sustainable built environment. Buildings 2021, 11, 151.
  60. Lv, Z.; Shang, W.L.; Guizani, M. Impact of digital twins and metaverse on cities: History, current situation, and application perspectives. Appl. Sci. 2022, 12, 12820.
  61. Lee, L.H.; Braud, T.; Zhou, P.Y.; Wang, L.; Xu, D.; Lin, Z.; Kumar, A.; Bermejo, C.; Hui, P. All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda. Found. Trends Hum. Comput. Interact. 2024, 18, 100–337.
  62. Cortex2. Available online: https://cortex2.eu/ (accessed on 10 January 2025).
  63. XR4ED. Available online: https://xr4ed.eu/ (accessed on 10 January 2025).
  64. Chaturvedi, A.R.; Dolk, D.R.; Drnevich, P.L. Design principles for virtual worlds. MIS Q. 2011, 35, 673–684.
  65. Nickerson, J.V.; Seidel, S.; Yepes, G.; Berente, N. Design principles for coordination in the metaverse. Acad. Manag. Annu. Meet. 2022, 2022, 15178.
  66. Seidel, S.; Berente, N.; Nickerson, J.; Yepes, G. Designing the metaverse. In Proceedings of the 55th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2022.