Article

Exploring AI-Integrated VR Systems: A Methodological Approach to Inclusive Digital Urban Design

1 School of Design and Creative Arts, Loughborough University, Loughborough LE11 3TU, Leicestershire, UK
2 School of Architecture, Building and Civil Engineering, Loughborough University, Loughborough LE11 3TU, Leicestershire, UK
* Author to whom correspondence should be addressed.
Urban Sci. 2025, 9(6), 196; https://doi.org/10.3390/urbansci9060196
Submission received: 31 March 2025 / Revised: 16 May 2025 / Accepted: 23 May 2025 / Published: 30 May 2025

Abstract

The integration of artificial intelligence (AI) and virtual reality (VR) is reshaping urban design by offering advanced tools that foster experiential engagement, real-time collaboration, and inclusive design strategies. This study explores AI-enhanced VR platforms through the development and implementation of a digital model of Loughborough University across five environments: Twinmotion, Unreal Engine, Hubs, FrameVR, and ShapesXR. As a methodological and technical evaluation, the research assesses each platform based on four core dimensions: compatibility, design and VR features, collaboration and accessibility, and AI capabilities. The results highlight the comparative strengths and limitations of each system, providing insights into their suitability for various urban design contexts. By establishing a structured evaluation framework, this study contributes to the discourse on digital urbanism and offers practical guidance for selecting and optimizing VR tools in architectural workflows. It concludes by underscoring the potential of AI–VR integration in bridging digital and physical environments within future Metaverse applications.

1. Introduction

The rapid advancement of AI and VR technologies has introduced transformative possibilities for architecture and urban design. These innovative tools enable immersive, interactive, and intelligent urban planning processes, effectively bridging the gap between digital simulations and real-world environments [1,2,3]. Increasingly, AI-driven VR applications are reshaping how architects, urban planners, and stakeholders conceptualize, test, and implement urban design solutions by providing dynamic modeling capabilities, real-time data integration, and enhanced opportunities for user participation [4,5]. However, despite these advancements, comprehensive comparative analyses evaluating multiple VR platforms in architectural and urban planning contexts remain scarce.
A critical challenge in contemporary urban design is ensuring inclusive and participatory development that genuinely addresses diverse user needs [6,7,8]. Traditional urban planning methodologies predominantly rely on static models and predefined decision-making frameworks, inherently limiting public engagement and responsiveness to user feedback [9,10]. Conversely, AI-enhanced VR introduces intelligent automation, adaptive real-time simulations, and immersive co-design experiences, empowering users to meaningfully interact with and influence proposed urban spaces prior to their physical realization [11,12,13]. Yet, there remains ongoing debate regarding the feasibility, accessibility, and practical integration of VR technologies in mainstream urban planning practice, necessitating further targeted research.
The emergence of Metaverse platforms and Extended Reality (XR) technologies—which encompass VR, augmented reality (AR), and mixed reality (MR)—has further expanded the possibilities for AI-driven urbanism [14,15]. These multi-user virtual environments facilitate real-time collaboration among architects, urban planners, and communities, significantly enhancing data-driven urban simulations and interactive public participation [16,17,18]. By leveraging AI-integrated VR technologies, designers can evaluate accessibility features, simulate realistic crowd behaviors, and optimize spatial planning based on dynamic user feedback [19,20]. Nevertheless, systematic investigations into the comparative strengths and limitations of different VR platforms in the context of inclusive urban planning remain limited, highlighting a clear research gap.
This study addresses the outlined gaps by presenting a methodological approach for integrating AI-enhanced VR systems into urban planning workflows. Specifically, the research involves developing a digital model of Loughborough University and implementing it across multiple prominent VR platforms: Twinmotion, Unreal Engine, Hubs, FrameVR, and ShapesXR. The aim is to rigorously evaluate each platform’s capabilities and limitations by assessing critical factors such as rendering quality, AI-driven interactivity, multi-user collaboration, and accessibility. This paper is positioned as a technical and methodological investigation, grounded in expert-led technical testing, and does not include user-based empirical data. By systematically comparing these platforms, the research offers a structured framework that guides architects, urban planners, and policymakers in selecting and optimizing VR tools, thereby contributing significantly to creating more inclusive, adaptive, and responsive urban environments.

2. Literature Review and Background

2.1. Co-Design Methods and Technological Innovations in Urban Design and Public Spaces

Co-design has become a fundamental approach in urban design, ensuring that public spaces reflect community needs while fostering transparency, accountability, and participatory governance [9,10,21]. This framework enables architects, planners, and end-users to engage in collaborative decision-making, resulting in urban environments that are more inclusive, functional, and contextually responsive [22,23]. Co-design methodologies can be broadly categorized into participatory design activities and consultation processes, both of which emphasize active user involvement to align the built environment with social, cultural, and functional priorities (Figure 1) [24,25].
Participatory design activities play a critical role in fostering meaningful engagement, employing interactive techniques to incorporate user input throughout the design process [7,26]. Community workshops serve as a foundational method, convening diverse stakeholders to discuss spatial planning ideas, conduct structured brainstorming sessions, and engage in prototyping exercises [27,28]. These workshops generate direct contributions to design development, ensuring that user perspectives shape spatial interventions [29]. Additionally, participatory exercises such as mock-up creation, prototype testing, and interactive simulations allow users to engage with design concepts firsthand, providing real-time feedback that enhances decision-making [13,30]. Co-design games further enrich participatory processes by incorporating play and exploration to engage a broad demographic in creative spatial design [7,31]. These approaches not only improve users’ understanding of urban environments but also foster inclusive decision-making [1,3].
Consultation processes, beyond participatory design, offer structured methods for collecting empirical data from users and stakeholders [9,10]. Surveys provide quantitative insights into user preferences and accessibility needs, supporting data-driven design [8,32]. Focus groups enable in-depth discussions, ensuring spatial interventions align with community expectations [21,22]. Stakeholder consultations broaden input by including perspectives from businesses, government, and civic organizations [23,25]. Structured interviews further explore user experiences, providing qualitative data that enhances human-centered design [7,26].
Despite its benefits, co-design presents several challenges, particularly in terms of facilitation, resource allocation, and stakeholder coordination [9,23]. Effective participatory design requires significant time, funding, and institutional support, while the involvement of diverse stakeholders can lead to conflicting priorities and logistical complexities [25,27]. Additionally, communication barriers—particularly in multidisciplinary projects—can hinder collaboration, reducing the efficiency of co-design processes [21,23]. Addressing these limitations requires new technological innovations that enhance accessibility, efficiency, and stakeholder participation [13,28].
The introduction of XR technologies, including VR and MR, has transformed participatory urban design by enabling immersive, real-time digital simulations [4,33,34]. These technologies offer an intuitive and interactive means for users to visualize, explore, and modify spatial configurations before implementation [35,36]. The integration of digital twin technology further enhances this process by creating real-time virtual replicas of physical urban environments, allowing for dynamic simulations, predictive modeling, and interactive user engagement [16,37]. When coupled with XR, digital twins facilitate stakeholder-driven urban design modifications, significantly improving decision-making processes [11,13,30].
The emergence of the Metaverse introduces further opportunities for collaborative co-design in digital urban spaces [2,28,29]. By merging physical and virtual environments, the Metaverse enables real-time stakeholder engagement through shared, immersive virtual platforms [36,38]. This ecosystem facilitates geographically dispersed collaboration, allowing multiple stakeholders—including designers, policymakers, and local communities—to contribute to urban design projects in real-time, regardless of their physical location [17,39]. AI-driven tools embedded within Metaverse-based co-design platforms enhance this process by adapting design elements dynamically based on real-time feedback [1,39]. These innovations have the potential to democratize urban design, making it more inclusive, flexible, and accessible to a wider range of users [3,40]. Emerging research highlights that AI-integrated immersive platforms can enhance inclusive urban design by enabling multi-user collaboration, real-time participation, and spatial adaptability in co-design environments [39,40].
While XR-enabled co-design frameworks offer promising opportunities, further research is needed to assess their long-term impact on stakeholder engagement and design quality [3,41]. Future studies should explore how AI-driven immersive environments can enhance participatory planning, usability, and real-world integration within urban design workflows [1,2,39]. Addressing technical, accessibility, and regulatory challenges will be critical in ensuring these digital tools are scalable, inclusive, and effective [21,42]. By leveraging AI-enhanced VR and digital co-design methodologies, urban design can transition toward more interactive, data-informed, and user-centered models, fostering the development of more adaptive and sustainable public spaces [25,32,40].

2.2. Virtual Reality in Urban Design: Immersion, Digital Twins, and Empathic Design

The evolution of XR, comprising VR and MR, has significantly enhanced urban design workflows [16,33,34]. VR has become an essential tool in urban design, providing an immersive and interactive medium for spatial visualization, simulation, and analysis [12,43]. Unlike traditional urban design methodologies that rely on static two-dimensional representations and physical models, VR enables designers, planners, and stakeholders to experience and interact with dynamic virtual environments in real-time [11,36]. This shift from passive visualization to active engagement has redefined how urban spaces are conceptualized, tested, and communicated [29,41]. Through VR, designers can explore spatial relationships, assess environmental conditions, and refine design proposals based on direct experiential feedback [16,28]. This capacity for real-time interaction enhances decision-making processes, allowing for more adaptive and responsive urban planning solutions [30,36].
The integration of VR in urban design has significantly advanced co-design methodologies by facilitating real-time collaboration within shared virtual environments [13,17]. Unlike conventional engagement tools, which often lack accessibility for non-experts, VR offers an intuitive platform for participants to interact directly with digital models [3,34]. This facilitates more inclusive decision-making, as community members and policymakers can experience proposals from a first-person perspective, identify design flaws, and contribute to iterative improvements [7,21]. By bridging the gap between professional expertise and public input, VR fosters a democratized approach to urban design that integrates diverse perspectives into the development of public spaces [40].
A key technological advancement that has further enhanced the role of VR in urban design is the integration of digital twin technology [16,37]. A digital twin is a real-time virtual replica of a physical urban environment that continuously updates with live data streams from sensors, geographic information systems (GIS), and environmental analytics [11,35]. When coupled with VR, digital twins provide a powerful tool for dynamic urban simulation, enabling designers to model and predict how different spatial configurations will perform under varying conditions [17,36]. These simulations allow planners to assess pedestrian movement patterns, traffic flow, environmental factors, and infrastructure resilience in a controlled virtual setting before implementing design interventions in the physical world [5,32]. This capability significantly reduces the risks associated with urban development by allowing for data-driven decision-making, where potential issues can be identified and resolved in advance [1,2]. Moreover, AI-enhanced digital twins further extend the analytical power of VR by incorporating predictive modeling algorithms that forecast urban growth patterns, climate impacts, and infrastructure performance, allowing for more proactive and sustainable urban planning strategies [39,44].
Beyond its application in spatial analysis and simulation, VR is increasingly recognized as an empathy-driven tool that enhances human-centered urban design [3,40]. One of the fundamental limitations of traditional urban planning is its reliance on objective data and generalizations that do not fully capture the lived experiences of diverse user groups [7,26]. VR addresses this challenge by enabling designers to immerse themselves in alternative perspectives, experiencing urban spaces as individuals with varying physical abilities, sensory impairments, and cognitive conditions [45,46]. This application of VR as an “empathy machine” allows for the simulation of real-world challenges faced by marginalized and differently abled individuals, providing critical insights into how design decisions affect accessibility, mobility, and social inclusion [6,27,47]. For example, VR can be used to simulate how wheelchair users navigate public spaces, how visually impaired individuals experience wayfinding in complex environments, or how environmental stressors such as noise pollution impact mental well-being [21,33]. By engaging with these simulations, designers and policymakers can make more informed decisions that prioritize inclusivity and accessibility in urban development [23,26].
The emergence of the Metaverse introduces new dimensions of AI-driven urban design, integrating AI-enhanced VR environments with cloud-based multi-user interactions [2,28,38]. Within these digital ecosystems, designers and policymakers can collaborate in real-time, engaging with stakeholders through immersive design simulations [13,17]. The Metaverse enables multi-user urban design sessions, where AI-powered tools dynamically adjust urban layouts based on real-time feedback, behavioral simulations, and generative design algorithms [1,39,44]. This approach fosters participatory urban design, allowing communities to co-create digital urban spaces before real-world implementation [29,40].

2.3. Artificial Intelligence in Urban Design

The integration of AI in urban design has introduced a paradigm shift, moving beyond traditional rule-based planning toward data-driven, predictive, and generative methodologies [1,2,44]. AI enhances computational urban design through automated spatial optimization, real-time environmental simulations, and intelligent urban engagement models, bridging the gap between theoretical planning and real-world urban complexity [28,35,39]. With the increasing demand for sustainable, adaptive, and user-responsive cities, AI has become an indispensable tool in urban design, offering novel ways to analyze, predict, and generate urban spaces dynamically [29,38]. The synergy between AI and VR has led to more immersive, responsive, and interactive urban environments, fostering enhanced decision-making, participatory design, and real-time simulation capabilities [17,36].
A key area of AI’s influence in urban design is generative AI, which facilitates the automated production of spatial configurations, optimizing urban layouts based on pre-defined constraints such as land use, transportation accessibility, and environmental factors [1,39,44]. By leveraging deep learning models such as Generative Adversarial Networks (GANs) and diffusion models, AI can propose multiple urban design iterations within seconds, each conforming to predefined sustainability and efficiency criteria [2,48]. The integration of AI-generated urban prototypes into VR environments allows urban planners to interactively test, refine, and optimize spatial compositions, ensuring a seamless transition between digital modeling and real-world implementation [33,35]. Furthermore, advancements in text-to-3D and text-to-image AI technologies have accelerated the creation of digital urban landscapes [39]. AI-driven tools such as Google’s DreamFusion and OpenAI’s Shap-E allow designers to generate complex 3D urban models directly from textual descriptions, streamlining early-stage conceptualization and reducing the manual workload associated with 3D modeling [1,29]. The integration of AI-generated skyboxes in VR environments further enhances realism, dynamically adapting lighting conditions and atmospheric effects based on geospatial and environmental data [37,41].
Beyond spatial generation, AI significantly contributes to urban spatial analysis and predictive modeling, providing data-driven insights into infrastructure resilience, mobility patterns, and future urban growth trends [2,49,50]. AI-driven algorithms analyze vast datasets, including satellite imagery, geospatial data, and sensor-based environmental metrics, to predict population density fluctuations, transportation efficiency, and climate resilience [51,52,53]. The application of machine learning models in urban planning allows designers to simulate various developmental scenarios, optimizing urban infrastructure for long-term sustainability [1,50]. Real-time AI-driven simulations, integrated within VR environments, enable planners to explore the potential consequences of urban interventions before physical implementation, ensuring proactive rather than reactive planning strategies [16,33,34].
One of the most transformative applications of AI in urban VR environments is the emergence of AI-driven avatars, designed to simulate human behavior, urban engagement, and social interactions within digital urban spaces [39,54,55]. AI-powered avatars equipped with natural language processing (NLP) and behavioral modeling interact with urban environments, responding dynamically to spatial configurations, environmental changes, and user interactions [39,41]. These avatars provide an advanced means of evaluating urban accessibility, allowing planners to simulate diverse user experiences, including those of elderly individuals, people with disabilities, and marginalized communities [27,45,46]. AI-driven pedestrian simulations enhance urban inclusivity assessments, ensuring that spatial interventions accommodate a broad spectrum of user needs [3,21].
Another critical advancement in AI-driven urban modeling is the integration of photogrammetry and automated 3D reconstruction [52,55,56]. AI-enhanced photogrammetry techniques significantly improve the efficiency and accuracy of digital urban modeling by reconstructing real-world urban spaces from aerial imagery, LiDAR scans, and satellite data [51,57]. AI-driven software such as RealityCapture 1.5, Meshroom V2023.3.0, and Agisoft Metashape 2.2.1 automates the conversion of raw photographic data into high-fidelity 3D point clouds and textured cityscapes, reducing the time required for manual model reconstruction [49,50]. The incorporation of neural radiance fields (NeRFs) and splat-based rendering techniques further refines photogrammetric outputs, enhancing the visual and spatial accuracy of urban VR environments [55,58]. These advancements facilitate the creation of hyper-realistic digital twins, which serve as interactive platforms for real-time urban analysis, infrastructure monitoring, and predictive urban simulations [11,16].
The role of AI-powered sensors in urban digital twins has also expanded significantly, providing real-time environmental data that informs dynamic urban simulations within VR platforms [1,39,44]. AI-enhanced IoT sensors continuously update digital twin models with live environmental conditions, air quality metrics, pedestrian traffic patterns, and infrastructure performance analytics, allowing urban designers to adaptively refine spatial configurations [33,37]. The integration of AI-driven real-time data processing within game engines, such as Unreal Engine, Unity, and NVIDIA Omniverse, enables designers to experience urban spaces under dynamically shifting conditions, assessing the impact of factors such as day-night cycles, seasonal variations, and extreme weather conditions [54,59,60]. The adaptive rendering capabilities of AI-enhanced urban visualization tools allow for instantaneous recalibration of textures, lighting, and spatial elements, ensuring that virtual urban experiences remain both visually realistic and analytically rigorous [32,56].
Furthermore, AI is playing an increasing role in climate-responsive urban design, enabling designers to create adaptive environments that respond dynamically to temperature fluctuations, solar exposure, and wind flow patterns [39,49,50]. AI-driven climate simulations embedded in urban design software provide real-time feedback on urban sustainability metrics, allowing planners to optimize spatial layouts for energy efficiency, urban heat island mitigation, and environmental resilience [51,53,56]. These AI-enhanced analytical tools ensure that urban environments are designed not only for aesthetic and functional excellence but also for long-term environmental sustainability [1,2,44].

2.4. Virtual Reality Software and Game Engines for Urban Design

This section examines the diverse spectrum of VR software and applications utilized in architecture and urban design, highlighting their compatibility and key functionalities [33,35,54]. These solutions can be categorized into four primary types. First, standalone VR applications function independently, offering immersive design experiences without requiring additional software integration [19,37]. Second, game engines and development platforms provide robust toolsets for generating interactive, high-fidelity virtual environments, often integrated with architectural modeling software to enhance realism and interactivity [54,59,60]. Third, VR plugins for architectural modeling software extend the capabilities of existing design tools by incorporating VR functionalities, enabling real-time visualization, spatial analysis, and collaborative design processes [14,44,61]. Finally, WebXR-based platforms facilitate browser-accessible virtual environments, eliminating the need for dedicated hardware and software while enabling multi-user interaction and remote collaboration in urban design workflows [17,28,55].
The following Table 1 provides an updated comparative overview of widely used VR software and plugins relevant to urban and architectural design workflows. It categorizes each tool based on software type, platform compatibility, and core functionalities. This comparative insight helps practitioners and researchers evaluate which tools align best with varying project requirements, supporting informed decisions for selecting VR applications suited to immersive design, collaboration, and real-time visualization [39].

2.5. AI-Enhanced Urban Design Software and Tools

The rapid evolution of artificial intelligence (AI) in urban design software has fundamentally transformed architectural visualization, participatory planning, and real-time urban simulations [39,44]. Advances in computational power, generative AI, and automated simulation techniques have given rise to sophisticated tools that allow urban designers to model, analyze, and interact with digital cityscapes with unprecedented accuracy and efficiency [1]. These platforms integrate AI-driven automation, predictive analytics, and parametric modeling, making them essential for intelligent urban planning, spatial optimization, and data-informed decision-making (Table 2) [2,50]. By enhancing real-time visualization, adaptive environmental modeling, and procedural city generation, AI-driven tools have become indispensable in contemporary urban design workflows [49,54]. This section critically examines the key software and tools that are redefining urban design methodologies, highlighting their capabilities in automation, real-time simulation, and data-driven decision-making [33].

2.5.1. AI-Enhanced Tools for Urban Modeling, Simulation, and Procedural Generation

AI-driven urban modeling platforms have revolutionized the process of city planning by enabling automated generative modeling, predictive spatial analysis, and procedural urban design [2,39]. Among the most significant innovations in this field is NVIDIA Omniverse, a collaborative simulation platform that facilitates real-time digital twin development and multi-disciplinary urban workflows [44,59]. By leveraging AI-powered automation, Omniverse enhances procedural city modeling, traffic flow analysis, and material optimization, supporting a data-driven approach to urban planning [33]. Similarly, CityEngine, developed by Esri, employs rule-based procedural modeling to generate complex urban landscapes, allowing architects and planners to explore various development scenarios dynamically [28,62].
These tools enable rapid experimentation with different spatial configurations, optimizing city layouts based on environmental, social, and infrastructural parameters [1,56]. Beyond procedural modeling, AI-based generative design tools, including Generative Adversarial Networks (GANs) and diffusion models, have introduced novel approaches to architectural visualization [50,54]. AI frameworks such as Stable Diffusion and DALL·E contribute to the automated generation of urban textures, skyboxes, and contextual elements, enhancing the realism and efficiency of digital urban environments [39,54]. These advancements allow for the rapid synthesis of architectural features and landscape elements, significantly reducing the manual effort required for high-fidelity urban simulations [58].

2.5.2. AI in Real-Time Rendering and Automated Environmental Simulation

AI has also played a transformative role in real-time rendering, optimizing computational efficiency, material realism, and environmental responsiveness in architectural visualization [39,59]. AI-powered light baking algorithms enhance rendering performance by dynamically optimizing lightmaps, allowing for more efficient global illumination in real-time visualization software [33]. These techniques, integrated into platforms like Unreal Engine, facilitate high-fidelity urban representations by automatically adjusting reflections, shadows, and ambient occlusion [44,60]. Similarly, AI-driven dynamic skybox generation employs machine learning models to synthesize realistic weather conditions, time-of-day transitions, and atmospheric effects, providing immersive and context-aware visualizations of urban spaces [54].
Another critical application of AI in urban design is automated texture upscaling, which significantly improves the visual fidelity of low-resolution textures through deep learning-based super-resolution techniques [39]. AI upscaling technologies, such as NVIDIA DLSS (Deep Learning Super Sampling) and Topaz Gigapixel AI, refine material textures in architectural models, ensuring high-resolution outputs without increasing computational load [56,59]. These advancements enhance the efficiency and realism of urban design visualizations, making them more accessible for large-scale simulations and digital twin development [33].
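As a concrete illustration of this class of technique, the Python sketch below applies a pre-trained super-resolution network to a façade texture using OpenCV's dnn_superres module (opencv-contrib-python). The model and texture file names are hypothetical, and the example stands in for proprietary upscalers such as DLSS rather than reproducing them.

```python
# Illustrative deep-learning texture upscaling with OpenCV's dnn_superres module.
# Assumes a pre-trained ESPCN model file has been downloaded; file names are placeholders.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")   # pre-trained super-resolution weights (assumed available)
sr.setModel("espcn", 4)       # algorithm name and 4x upscale factor

low_res = cv2.imread("facade_texture_512.png")   # hypothetical low-resolution texture
high_res = sr.upsample(low_res)                  # super-resolved output at 4x resolution
cv2.imwrite("facade_texture_2048.png", high_res)
```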

2.5.3. AI-Driven Photogrammetry and 3D Reconstruction

Photogrammetry and 3D reconstruction have undergone significant advancements with the integration of AI-driven processing techniques [52,56]. AI-enhanced photogrammetry software, including RealityCapture, Meshroom, and Agisoft Metashape, enables the automated conversion of aerial imagery, LiDAR scans, and photographic datasets into highly detailed 3D models of urban environments [51,57]. These tools employ machine learning algorithms to refine point clouds, reconstruct complex geometries, and enhance texture mapping, improving the accuracy and realism of digital urban replicas [50,54].
Recent developments in neural radiance fields (NeRFs) and splat-based rendering have further enhanced 3D reconstruction methodologies, allowing for the generation of hyper-realistic city models with minimal computational overhead [55,58]. These AI-driven techniques enable the reconstruction of urban environments with accurate material properties, lighting conditions, and spatial relationships, significantly expanding the potential of photorealistic urban simulations [52,53]. By integrating these tools into urban planning workflows, architects and designers can create highly detailed and interactive models that accurately represent existing urban conditions while enabling predictive analysis of future developments [33,54].
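As an illustration of the post-processing step between photogrammetric capture and VR deployment, the sketch below uses the open-source Open3D library to downsample and clean a dense point cloud; the file names are hypothetical, and the capture is assumed to have been exported as a standard PLY file from a tool such as RealityCapture or Metashape.

```python
# Illustrative point-cloud preparation for real-time VR use with Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("campus_scan.ply")   # dense photogrammetric scan (placeholder)
print(f"Original points: {len(pcd.points)}")

# Voxel downsampling reduces point density to a level a VR engine can render smoothly
downsampled = pcd.voxel_down_sample(voxel_size=0.05)  # 5 cm voxel grid
downsampled, _ = downsampled.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("campus_scan_vr.ply", downsampled)
print(f"Optimized points: {len(downsampled.points)}")
```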

2.5.4. AI-Based Climate and Environmental Simulation for Sustainable Urban Design

AI-driven environmental simulation tools are playing an increasingly vital role in climate-adaptive urban design by providing real-time analyses of environmental conditions, energy efficiency, and microclimate interactions [44,49]. Parametric modeling platforms such as Grasshopper AI (integrated within Rhino) leverage machine learning algorithms to optimize building forms, passive cooling strategies, and wind flow analysis, ensuring climate-responsive urban layouts [1,50].
Similarly, Ladybug Tools provide AI-assisted simulations for solar radiation analysis, wind studies, and daylight optimization, enabling architects to evaluate environmental performance during early-stage design iterations [51]. By integrating real-time climatic data and AI-generated predictive models, Ladybug Tools allow urban planners to enhance sustainability strategies, improving the energy efficiency and livability of future urban developments [2,32].
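By way of example, the minimal sketch below, assuming the open-source ladybug-core Python library and a local EPW weather file for the site (file name hypothetical), shows the kind of early-stage climate query that Ladybug Tools automate within parametric workflows.

```python
# Illustrative early-stage climate query with ladybug-core; the EPW file name is a placeholder.
from ladybug.epw import EPW

epw = EPW("GBR_Leicestershire_TMY.epw")          # hypothetical weather file for the site
dry_bulb = epw.dry_bulb_temperature              # hourly dry-bulb temperature collection
radiation = epw.global_horizontal_radiation      # hourly global horizontal radiation

print(f"Annual mean temperature: {dry_bulb.average:.1f} °C")
print(f"Peak global horizontal radiation: {max(radiation.values)} Wh/m2")
```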
Autodesk Forma (formerly Spacemaker AI) further advances AI-based environmental modeling by incorporating automated site analysis, generative urban layout optimization, and geospatial data-driven climate simulations [39]. By employing AI to analyze terrain morphology, noise pollution, and heat island effects, Autodesk Forma assists in the design of resilient and adaptable urban environments [53]. These tools support evidence-based urban planning by enabling designers to test multiple environmental scenarios and optimize urban layouts for sustainability and energy efficiency [44,50].

3. Methodology: Developing a Multi-Platform VR Model

The methodological framework for this study is designed to explore the capabilities of AI-integrated virtual reality (VR) environments in urban and architectural visualization. By developing a high-fidelity digital model of Loughborough University, this research examines the technical affordances, interoperability, and AI-enhanced functionalities of multiple VR platforms. The study aims to evaluate rendering fidelity, multi-user collaboration features, AI-driven optimizations, and real-time interactivity across five selected platforms: Twinmotion, Unreal Engine, Hubs, FrameVR, and ShapesXR.

3.1. Case Study: Loughborough University Digital Model

Loughborough University was selected as the case study due to its architectural diversity, urban complexity, and potential for AI-enhanced digital modeling in urban design education, inclusive design, and co-creation. The campus features a mix of historic, modernist, and contemporary buildings, providing a robust dataset for AI-driven procedural modeling and real-time environmental simulations. Its large-scale urban layout, encompassing academic buildings, residential zones, and open spaces, allows for spatial analysis, pedestrian flow modeling, and accessibility assessments within a controlled digital twin environment.
Beyond its architectural significance, the university serves as an ideal testbed for interactive, multi-user VR applications, fostering collaborative design workflows and participatory urban planning. AI-powered analytics enable real-time simulations of human-environment interactions, advancing co-creation methodologies for inclusive urban design education. The integration of AI-enhanced VR tools within this digital model supports stakeholder engagement, interactive spatial modifications, and climate-responsive urban strategies, positioning it as a dynamic platform for both research and pedagogical innovation.

3.2. Workflow for Implementing the Model in Different VR Platforms

Following the development and refinement of the Loughborough University digital model, the next phase involved systematically deploying it across five selected VR platforms. These platforms were chosen based on their distinct capabilities in AI-enhanced rendering, real-time collaboration, and immersive urban simulation, ensuring a comprehensive evaluation of digital urban design workflows.

3.2.1. Digital Model Development and Optimization

The Loughborough University digital model was developed using Autodesk Revit 2025, leveraging its parametric modeling and BIM integration for accurate spatial representation. The model was then exported to Twinmotion and Unreal Engine for real-time visualization and interactive VR simulation. The Cesium Ion plugin in Unreal Engine facilitated the integration of Google API data, providing real-world site context. Additionally, Polycam and RealityCapture were used for AI-assisted photogrammetry, capturing fine architectural details and complex spatial elements.
For WebXR platforms such as Mozilla Hubs, FrameVR, and ShapesXR, additional optimization was required due to the constraints of browser-based rendering. The model was first exported from Revit to Blender, where materials were baked, textures were optimized, and polygon count was reduced to ensure real-time performance. The final optimized model was exported in GLB format, which is compatible with WebXR platforms, allowing for efficient loading, real-time interaction, and accessibility across multiple devices.
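A minimal Blender Python (bpy) sketch of this optimization step is shown below. The decimation ratio and file path are illustrative rather than the exact settings used for the campus model, and texture baking is assumed to have been completed separately in the Blender interface.

```python
# Illustrative bpy sketch: reduce polygon counts and export a single GLB for
# WebXR platforms (Hubs, FrameVR, ShapesXR). Values are placeholders.
import bpy

for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
        decimate.ratio = 0.4  # keep roughly 40% of faces; tuned per building in practice

# Export the whole scene as binary glTF (GLB); modifiers are applied on export
bpy.ops.export_scene.gltf(
    filepath="//loughborough_campus_web.glb",
    export_format='GLB',
    export_apply=True,
)
```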

3.2.2. Platform Selection Justification

The selection of Twinmotion, Unreal Engine, Mozilla Hubs, FrameVR, and ShapesXR was informed by the comparative analysis presented in the literature review, highlighting their prominence within architectural visualization, real-time urban modeling, and AI-driven spatial interactions [33,39,54]. Each platform was chosen for its unique strengths in facilitating VR-based architectural co-creation, participatory urban planning, and AI-integrated simulations [17,44].
Twinmotion was selected as a real-time rendering engine optimized for architectural and urban design visualization [63,64]. Its direct integration with Building Information Modeling (BIM) software, particularly Autodesk Revit 2024, enhances seamless data flow, making it ideal for large-scale city modeling and AI-enhanced procedural landscapes (Figure 2). The platform’s ability to generate dynamic environmental conditions and automate landscape features using AI makes it a valuable tool for testing real-time urban scenarios [44,53].
Unreal Engine was chosen for its high-fidelity rendering capabilities, real-time physics simulations, and AI-enhanced interactivity (Figure 3) [59,60]. Recognized for its robust blueprints system, Unreal enables designers to create highly interactive and immersive VR environments, incorporating AI-driven lighting adjustments, pedestrian behavior simulations, and automated spatial interactions [33,39]. Its sophisticated ray-tracing technology further enhances visual realism, making it an industry leader in digital twin development [54].
Mozilla Hubs was selected as a WebXR-based platform, offering multi-user interaction, real-time accessibility testing, and AI-powered avatar behavior modeling [17,65]. Hubs enables stakeholders to collaborate in virtual urban spaces, facilitating co-creation and participatory design through immersive web-based environments (Figure 4) [66].
FrameVR was integrated due to its lightweight, browser-based architecture, allowing multi-user interaction, AI-driven spatial analytics, and interactive 3D environments [55,66]. The platform is well-suited for urban engagement projects that require broad accessibility without high-performance hardware, making it a scalable option for participatory urbanism (Figure 5) [28].
ShapesXR was chosen for its AI-assisted spatial prototyping and immersive collaborative design workflows [67,68]. As a VR-native application, ShapesXR supports intuitive co-creation processes, enabling architects and urban designers to engage in interactive ideation, manipulate spatial elements in real time, and integrate AI-powered generative design features (Figure 6) [39,54].
The diversity of these platforms ensures that the study captures a wide spectrum of AI-integrated VR capabilities, from high-fidelity simulation to web-based participatory urban planning and AI-driven procedural modeling.

3.3. Evaluation Framework

The evaluation framework is structured around four core dimensions: compatibility, design and VR features, collaboration and accessibility, and AI capabilities. Compatibility focuses on the ease of importing and exporting digital models, cross-platform functionality, and hardware requirements, ensuring seamless integration with architectural software like Revit and Blender while maintaining performance across devices. The design and VR features dimension assesses rendering quality, including textures, lighting, and material fidelity, alongside AI-driven functionalities such as generative design, skybox creation, and Gaussian splatting. Real-time environmental simulations are also evaluated, as they support dynamic exploration and iterative design adjustments within urban contexts.
Collaboration and accessibility are assessed based on multi-user interaction capabilities, remote access, and user interface intuitiveness, with particular emphasis on facilitating co-creation in urban design. This includes evaluating how effectively each platform enables collaborative workflows among stakeholders, ensuring inclusive participation in the design process. The final dimension, AI capabilities, examines the extent to which platforms support AI-generated 3D objects, avatars, and real-time adaptation, alongside predictive design features and intelligent automation. These elements collectively enhance decision-making, foster participatory urbanism, and ensure that urban design outcomes are user-centered and contextually responsive.
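For illustration only, the framework can be written down as a simple data structure; the criteria below paraphrase the four dimensions described above and are not a quantitative scoring instrument used in the study.

```python
# Illustrative representation of the evaluation framework (paraphrased criteria).
EVALUATION_FRAMEWORK = {
    "compatibility": [
        "model import/export formats",
        "cross-platform functionality",
        "hardware requirements",
    ],
    "design_and_vr_features": [
        "rendering quality (textures, lighting, materials)",
        "AI-driven design functions (generative design, skyboxes, Gaussian splatting)",
        "real-time environmental simulation",
    ],
    "collaboration_and_accessibility": [
        "multi-user interaction",
        "remote access",
        "interface intuitiveness for co-creation",
    ],
    "ai_capabilities": [
        "AI-generated 3D objects and avatars",
        "real-time adaptation and predictive features",
        "intelligent automation",
    ],
}

PLATFORMS = ["Twinmotion", "Unreal Engine", "Mozilla Hubs", "FrameVR", "ShapesXR"]
# Qualitative notes can then be collected per platform and dimension.
notes = {p: {dim: "" for dim in EVALUATION_FRAMEWORK} for p in PLATFORMS}
```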

4. Comparative Technical Analysis

This section presents a qualitative evaluation of the five selected VR platforms (Twinmotion, Unreal Engine, Mozilla Hubs, FrameVR, and ShapesXR), focusing on their technical performance and their suitability for real-time AI and VR design features in urban design. While the study does not involve participant-based data collection, the analysis follows a structured evaluation framework, offering a comprehensive comparative assessment of each platform’s strengths and limitations in supporting digital co-creation workflows.

4.1. Twinmotion

Twinmotion, developed by Epic Games, is a real-time visualization and immersive design tool widely utilized in architecture and urban planning. In this study, Twinmotion was employed to develop the 3D model of the Loughborough University virtual campus, serving as a comprehensive case study for evaluating platform capabilities in AI-enhanced urban co-design. Its seamless integration with Building Information Modeling (BIM) platforms, such as Autodesk Revit, facilitated a direct link between design environments, enabling continuous synchronization without the need for extensive model optimization (Figure 7). This integration extends to various file formats, including FBX, GLB, and OBJ, allowing for efficient importation and exportation of complex models. Moreover, Twinmotion supports point cloud data and photogrammetry-based GLB files, enhancing contextual realism through the incorporation of scanned site elements (Figure 8). In this case study, elements of the Loughborough University campus were refined using photogrammetry, and animated GLB models were added to simulate human activity based on real-world motion capture using Move AI.
From a design and visualization perspective, Twinmotion offers advanced rendering capabilities supported by Lumen global illumination and path tracing technologies. These features ensure photorealistic outputs with accurate lighting, shadows, and material textures (Figure 9). The extensive asset library, comprising materials, vegetation, objects, HDRI environments, characters, and vehicles, further enriches design realism. Within the VR environment, users can engage in real-time design workflows, including changing materials, adding assets and vegetation, and adjusting lighting conditions. Additionally, Twinmotion supports dynamic environmental simulations, allowing designers to visualize changes in weather, seasonal conditions, and time-of-day settings, thereby enhancing contextual understanding within immersive virtual environments.
In terms of collaboration and accessibility, Twinmotion facilitates multi-user interaction through Twinmotion Cloud, a cloud-based service that allows projects to be shared with stakeholders via web browsers. This platform supports virtual reality viewing for Panorama Sets and Local Presentations, enabling users to navigate immersive scenes, interact with objects, and make real-time selections using VR controllers. The teleportation-based navigation minimizes motion sickness, enhancing the user experience during collaborative design reviews. Moreover, Twinmotion allows the integration of site context through its linkage with OpenStreetMap, enabling the importation of real-world geographic data for urban modeling. Another notable interactive feature is the ability to build 3D configurators within the Twinmotion VR environment, empowering stakeholders to explore design variations and customization options intuitively.
While Twinmotion incorporates certain AI-driven functionalities, such as real-time environmental adaptation and automated material optimization, its AI capabilities are largely focused on enhancing rendering and visualization rather than generative design. For example, the platform supports the integration of AI-generated point clouds and photogrammetry-based assets but does not natively provide AI-powered generative design or Gaussian splatting. However, it does offer path-tracing simulations for dynamic elements such as people, bicycles, and vehicles, enhancing the realism of urban scenes. The inclusion of animated GLB models, representing human activities captured through motion capture technology, further enriches the simulation of urban environments. These animations rely on external motion capture data rather than real-time AI-driven behavior generation.
Overall, Twinmotion proves to be a powerful tool for urban co-design, particularly for real-time visualization, immersive collaboration, and environmental simulations. Its robust VR functionalities, including material changes, asset integration, light and weather simulations, and site context importing, support iterative design workflows. However, while Twinmotion enhances visualization and interactivity, its AI functionalities remain focused on rendering optimization rather than advanced generative design (Figure 10). For projects requiring more sophisticated AI-driven automation, integration with platforms such as Unreal Engine or external AI tools remains necessary.

4.2. Unreal Engine

Unreal Engine, developed by Epic Games, stands as a leading real-time visualization and simulation platform renowned for its high-fidelity rendering, advanced interactivity, and customizable workflows. In this study, Unreal Engine was employed to further develop the Loughborough University virtual campus, enhancing the digital model with AI-driven functionalities, real-time co-creation tools, and immersive urban design simulations. The platform’s compatibility with industry-standard file formats, including FBX, GLB, and OBJ, ensured seamless model importation from Autodesk Revit without extensive optimization. The integration of the Datasmith plugin further streamlined this workflow, allowing direct synchronization between Unreal Engine and modeling software such as Revit, SketchUp, and Rhino, preserving materials, textures, and geometry with minimal manual adjustments. Additionally, the Cesium plugin enriched the virtual environment by enabling the incorporation of geospatial data, providing accurate site context and real-world topography for the virtual campus (Figure 11).
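For teams that automate this step, the Datasmith import can also be scripted through the Unreal Editor's Python API. The sketch below is an editor-only example with hypothetical file and content-browser paths; it is not presented as the exact procedure followed in this study.

```python
# Hedged sketch: importing a Datasmith export of the campus model via the
# Unreal Editor Python API (requires the Datasmith Importer plugin).
import unreal

DATASMITH_FILE = "C:/Projects/LboroCampus/campus.udatasmith"  # hypothetical path
DESTINATION = "/Game/Campus"                                   # hypothetical content folder

scene = unreal.DatasmithSceneElement.construct_datasmith_scene_from_file(DATASMITH_FILE)
if scene is None:
    raise RuntimeError("Failed to load the Datasmith scene file.")

scene.import_scene(DESTINATION)  # creates meshes, materials, and level actors
scene.destroy_scene()            # release the in-memory Datasmith scene
```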
From a design and VR feature perspective, Unreal Engine delivers superior rendering quality through its Lumen global illumination and path tracing technologies, ensuring photorealistic lighting, shadows, and material textures (Figure 12). The platform’s advanced material editor, powered by Blueprint visual scripting, allows for real-time design modifications, including material changes, environmental adjustments, and object manipulation within the virtual environment. This dynamic interactivity extends to cameras for capturing user-defined viewpoints and generating high-resolution imagery for documentation and presentation purposes. Moreover, the platform supports annotation tools, facilitating collaborative feedback and iterative design refinement.
Unreal Engine’s collaboration and accessibility features are further enhanced through multi-user VR sessions, enabling real-time co-creation among geographically dispersed stakeholders. The platform’s compatibility with VR headsets facilitates immersive walkthroughs, while the teleportation-based navigation system reduces user discomfort during exploration. In addition, the integration of the MetaHuman framework allows for the creation of highly detailed, photorealistic avatars, enhancing user representation and engagement within the virtual environment. These avatars can perform realistic animations, simulate human-scale interactions, and facilitate user-centric evaluations of urban spaces.
The AI capabilities of Unreal Engine are exemplified by the integration of Convai AI avatars, which provide real-time conversational interactions within the virtual environment (Figure 13). These avatars, powered by natural language processing (NLP), enable users to engage with virtual agents for informational queries, design walkthroughs, and scenario-based simulations. Additionally, Unreal’s Blueprint system facilitates AI-driven automation, supporting real-time environmental simulations, pedestrian flow analysis, and object behaviors based on user interactions (Figure 14). This functionality empowers designers to test various urban scenarios, including crowd dynamics, traffic patterns, and accessibility assessments, within an immersive digital twin environment (Figure 15).
Another significant advantage of Unreal Engine is its capability to support complex urban simulations through the Cesium plugin. This integration allows for the importation of high-fidelity geospatial data, including satellite imagery, terrain models, and 3D building footprints, thereby enhancing contextual accuracy for urban design evaluations. The platform also supports AI-enhanced photogrammetry workflows, enabling the importation of reality-captured assets to further enrich the virtual campus environment.
Despite its advanced capabilities, Unreal Engine presents certain limitations when compared to WebXR platforms. While multi-user collaboration is supported, it is less seamless than web-based solutions, requiring dedicated software installations and robust hardware. Additionally, project packaging is limited to executable applications, restricting browser-based accessibility and increasing file size, which can impede collaborative workflows across devices with limited storage or processing power.
Overall, Unreal Engine stands out as a comprehensive platform for AI-driven urban co-design, offering advanced rendering, real-time interactivity, and multi-user collaboration. Its integration with Datasmith, MetaHuman, Convai AI avatars, and the Cesium plugin enhances both the realism and functionality of digital urban environments. While the platform demands significant computational resources and technical expertise, its robust capabilities make it particularly well-suited for complex urban design projects requiring high-fidelity visualization, AI-driven simulations, and interactive co-creation workflows.

4.3. Mozilla Hubs Instance

Mozilla Hubs is a browser-based WebXR platform that supports multi-user virtual co-creation, making it highly accessible for collaborative urban design, education, and stakeholder engagement. As Mozilla has ended official support for Hubs, this study launched its own self-hosted Hubs instance to facilitate interactive navigation and spatial analysis of the Loughborough University virtual campus. This custom deployment enabled real-time co-design sessions, enhancing accessibility for remote students and professionals engaging in urban planning.
Mozilla Hubs supports GLB, FBX, and OBJ file formats, allowing integration with Autodesk Revit and Blender. However, pre-optimization in Blender is essential to maintain performance and visual fidelity, particularly for complex environments such as the Loughborough University virtual campus. This process includes polygon reduction, texture baking, and material optimization before exporting the model to Hubs via the Blender Spoke plugin. These steps ensure efficient rendering and smooth navigation within the WebXR environment (Figure 16).
Mozilla Hubs provides interactive spatial tools that enhance virtual urban planning and co-creation. It supports the integration of 360-degree images and videos, enabling panoramic environmental elements within the digital model. Spatial audio placement further enhances immersion, simulating realistic soundscapes for enhanced user engagement. Additionally, real-time object importing from Sketchfab allows users to dynamically modify the virtual environment and adapt spatial compositions based on evolving design needs. Drawing and annotation tools enable users to provide real-time feedback, facilitating iterative design discussions. Furthermore, documents such as PDFs and PowerPoint presentations can be shared within the virtual space, making Hubs a valuable tool for architectural education and collaborative design workshops. A significant VR navigation feature is the ability to fly through environments, offering an alternative exploration mode for large-scale models such as the Loughborough University campus. This function aids in assessing urban spatial arrangements, supporting both educational and professional design evaluations.
Mozilla Hubs excels in multi-user collaboration, allowing real-time engagement among geographically dispersed participants. Within Loughborough University, the platform has been deployed in architectural courses, enabling students to engage in immersive learning and collaborative urban analysis (Figure 17). Additionally, Hubs supports virtual campus open days, providing prospective students with an interactive 3D experience of campus design, facilities, and accommodation options.
A key strength of Hubs in inclusive urban design is the customizable avatar system, which allows users to experience spatial configurations through diverse perspectives. Avatars designed in Blender can be rigged and imported into Hubs, including representations of wheelchair users, visually impaired individuals, and people with other protected characteristics (Figure 18). This functionality ensures that urban spaces are evaluated from multiple user perspectives, fostering a human-centered approach to accessibility.
Beyond static representations, interactive object behaviors can be implemented through Blender’s behavior graph system, enabling object grabbing, movement, and interactivity. This capability enhances spatial testing by allowing users to simulate real-world interactions with urban spaces, supporting co-design and accessibility assessments.
Although Hubs does not natively support AI-driven real-time behavior generation, it accommodates external AI integrations for interactive engagement. AI-powered text-based avatars and conversational agents can be embedded, enabling real-time information queries and guided walkthroughs. However, these capabilities rely on third-party plugins rather than native platform functionalities.
Despite its advantages, Hubs has notable limitations. The platform’s performance is constrained by WebXR, requiring extensive Blender-based optimization to ensure smooth navigation. While multi-user collaboration is a defining feature, scalability remains limited compared to dedicated VR engines. Furthermore, since Mozilla has discontinued support for Hubs, launching an independent instance requires manual server deployment and coding expertise, increasing the technical demands for long-term usability.

4.4. Frame VR

FrameVR is a web-based multi-user virtual environment designed to facilitate real-time collaboration and interactive urban design. This platform was employed in the development and exploration of the Loughborough University virtual campus, providing an accessible and immersive experience for educational and co-design purposes. FrameVR offers automated optimization of polygon counts and file sizes; however, for optimal performance, it is preferable to construct models in a low-poly format within Blender, bake textures, and then export them as a GLB file for seamless integration. The platform’s support for various file formats ensures its adaptability to different workflows, making it a flexible solution for digital urban design applications.
FrameVR supports multiple file formats, including GLB, PLY, SPZ, and SPLAT, ensuring broad compatibility with 3D modeling workflows. The inclusion of SPZ and SPLAT files reflects FrameVR’s capability to support Gaussian Splatting, an advanced neural rendering technique that optimizes real-time visualization of high-fidelity photogrammetric scans while reducing computational overhead. This feature enables efficient rendering of complex environments without excessive polygonal complexity, improving performance in large-scale digital twins. While the platform does not support direct avatar integration from Blender, it offers alternative avatar systems such as Ready Player Me avatars, robotic android avatars, and standard human avatars, providing flexibility in user representation.
FrameVR provides a robust set of design and VR functionalities tailored for real-time co-creation and interactive urban design. The platform enables dynamic object placement, modification, and removal, allowing users to iteratively refine virtual environments. Additionally, FrameVR incorporates annotation tools, spatial audio, and interactive features, supporting intuitive collaboration in design discussions. The ability to fly within the virtual environment further enhances spatial exploration, particularly useful in urban-scale digital twins where large areas need to be navigated efficiently. The platform’s integration of Google Street View 360 in VR enhances urban contextualization by enabling users to interact with real-world geospatial environments.
FrameVR is inherently designed to support multi-user collaboration, enabling real-time co-creation and engagement among geographically dispersed stakeholders. Its intuitive user interface allows seamless remote participation, making it particularly valuable for virtual open days, student engagement, and collaborative urban design projects. The platform is widely utilized at Loughborough University, where the virtual campus model facilitates educational activities, architectural walkthroughs, and participatory design discussions. This level of interactivity enhances accessibility by allowing users to experience and interact with architectural spaces remotely, ensuring broader engagement in urban design discourse.
FrameVR integrates a suite of AI-driven functionalities, extending its capabilities beyond conventional VR platforms. It features AI-generated robot avatars that support real-time, text-based chat interactions, facilitating enhanced engagement within the virtual space (Figure 19). Additionally, the platform supports photogrammetric workflows that enable the importing of Gaussian Splats, improving the realism of scanned environments while maintaining optimized performance. AI-powered meeting transcripts further enhance collaborative discussions by providing automated documentation of user interactions, ensuring that design iterations and stakeholder feedback are recorded in real time. The platform also integrates AI-driven generative tools, including AI-generated skyboxes from text prompts, text-to-3D object generation, and AI-enhanced image-to-3D conversion, which collectively enhance its potential for real-time spatial modifications and interactive co-creation processes (Figure 20).
FrameVR serves as a powerful web-based VR platform that seamlessly integrates real-time object manipulation, AI-enhanced generative design, and collaborative multi-user engagement. Its support for Gaussian Splats, AI-driven automation, and immersive co-creation features positions it as a highly effective tool for interactive urban design, participatory planning, and digital twin applications (Figure 21). However, while FrameVR provides extensive multi-user support, its reliance on cloud-based infrastructure may present limitations in terms of scalability, file size constraints, and optimization requirements. Nonetheless, its intuitive interface, AI-assisted functionalities, and dynamic co-design tools make it an essential platform for architectural visualization, spatial prototyping, and urban research.

4.5. ShapesXR

ShapesXR is a VR-native collaborative design platform tailored for spatial prototyping, co-creation, and immersive communication in architecture and urban design. It allows designers to create three-dimensional environments and interactive prototypes without prior experience in game engines or programming. This section critically evaluates ShapesXR based on its compatibility, VR features, collaborative capacity, and integration within AI-enhanced workflows, aligning with the methodological framework of this study.
ShapesXR facilitates the importation of a variety of asset types, including OBJ, GLB, glTF (ZIP), PNG, JPG, MP3, WAV, and Figma files, ensuring smooth interoperability with common architectural modeling and visualization pipelines. To maintain optimal performance within VR environments, imported assets are recommended to comply with specified guidelines, including a limit of approximately 300,000 polygons per scene and a maximum of 10,000 vertices per individual object. Asset management is streamlined through the ShapesXR Dashboard, with further export capabilities supporting glTF and USDZ formats, enabling interoperability with platforms such as Unity and Unreal Engine. As part of this study, the digital model of Shirley Pearce Square, which is a key urban plaza on the Loughborough University campus, was exported from Blender to ShapesXR. This model was utilized in participatory placemaking workshops, where students collaboratively explored mixed reality design strategies to enhance the square by introducing new public art installations, flexible seating arrangements, and activity zones (Figure 22).
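A pre-export budget check of this kind can be scripted in Blender. The sketch below counts scene polygons and per-object vertices against the guideline figures quoted above; the reporting format is an assumption for illustration and is not part of the ShapesXR toolchain.

```python
import bpy

SCENE_POLYGON_BUDGET = 300_000   # approximate per-scene polygon guideline
OBJECT_VERTEX_BUDGET = 10_000    # per-object vertex guideline

total_polygons = 0
oversized_objects = []

for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    mesh = obj.data
    total_polygons += len(mesh.polygons)
    if len(mesh.vertices) > OBJECT_VERTEX_BUDGET:
        oversized_objects.append((obj.name, len(mesh.vertices)))

print(f"Scene polygons: {total_polygons} / {SCENE_POLYGON_BUDGET}")
for name, vertex_count in oversized_objects:
    print(f"  Over budget: {name} ({vertex_count} vertices)")
if total_polygons > SCENE_POLYGON_BUDGET:
    print("Scene exceeds the recommended polygon budget; consider decimation before export.")
```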
ShapesXR offers a comprehensive suite of design tools, including procedural shape generation, freeform drawing, text creation, and material customization via the Paint tool. The platform enables iterative prototyping through its multi-scene system, where users can create, modify, and present spatial design variants. A distinctive feature is the “Background Scene”, which allows for persistent environmental elements across all scenes, enhancing the flexibility of storyboard creation and spatial design workflows.
Prototyping interactivity is integral to ShapesXR’s offering. Designers can assign interactive behaviors to objects, employing a variety of triggers such as on-click, on-hover, and scene-enter functions, alongside corresponding actions such as navigation to viewpoints, playback of sound, or activation of haptics. These interactions can be tested and refined in “Play mode”, facilitating user-centric exploration of interaction flows and spatial narratives.
Additionally, Holonotes provides functionality for recording voice annotations and avatar movements, supporting asynchronous communication within collaborative workflows. In the context of this study, Holonotes were employed during the participatory placemaking sessions of the Shirley Pearce Square model, providing spatial guidance and design commentary from tutors to enhance student engagement during independent design iterations.
ShapesXR prioritizes real-time, multi-user collaboration, supporting up to eight co-creators per session for optimal performance. Spaces can be shared via unique codes, facilitating both remote and co-located teamwork. Scene synchronization and presentation modes enable moderated sessions, where one user can lead design reviews while restricting participant navigation or editing permissions. Mixed Reality (MR) functionality is also supported, allowing for room-scale persistent content and simulation of augmented reality (AR) headset fields of view, further expanding the platform’s applicability for MR and AR prototyping.
The platform integrates avatar-based presence indicators. Notably, the Meta avatar was identified as the sole avatar system available and operational within the ShapesXR environment at the time of this research, providing a consistent representation for users within collaborative VR sessions. Additional communication features include integrated voice chat with proximity-based audio propagation and manual muting options to facilitate user comfort.
Accessibility is further extended through a web-based editor, enabling non-VR participants to engage with the design process via standard browsers, thus broadening the platform’s inclusivity across diverse user groups.
While ShapesXR does not natively support AI-driven generative design or behavioral simulation within its VR environment, it serves as a pre-visualization and prototyping tool that complements AI-enhanced workflows through external integrations. The export functionality to Unity enables the incorporation of AI-driven elements such as procedural modeling, intelligent agent simulations, and dynamic visual effects, while USDZ export compatibility facilitates further development within Unreal Engine for advanced urban simulations.
ShapesXR demonstrates significant utility as a collaborative and intuitive platform for early-stage spatial ideation, participatory design, and immersive prototyping. Its direct application in the co-design of the Loughborough University Shirley Pearce Square illustrates its potential to foster inclusive and engaging placemaking processes. While ShapesXR’s scope remains oriented towards rapid prototyping and real-time collaboration rather than high-fidelity simulation or AI-driven urban analytics, its interoperability with advanced platforms such as Unreal Engine reinforces its role as a valuable intermediary within contemporary digital urban design workflows.

5. Discussion

The integration of AI within VR environments presents a paradigm shift in digital urban planning, enabling designers to transcend traditional static modeling techniques [29,54]. The AI–VR workflows (Table 3) examined in this study, distributed across platforms such as Twinmotion, Unreal Engine, Mozilla Hubs, FrameVR, and ShapesXR, collectively introduce dynamic, participatory, and intelligent frameworks for urban simulation and design engagement [16,17,34]. These workflows contribute to a new frontier in urbanism, fostering real-time collaboration, generative design, and predictive environmental modeling within immersive environments [2,39]. However, the heterogeneous nature of these platforms also reveals critical insights regarding their respective affordances and limitations in the context of digital urban design (Table 4).
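To make the structure of this comparison explicit, the sketch below encodes a subset of the evaluation dimensions as a simple data structure that can be queried when shortlisting platforms for a given project. The boolean flags are illustrative simplifications distilled from Tables 3 and 4 rather than additional results.

```python
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    name: str
    multi_user_vr: bool          # collaboration dimension
    runs_in_browser: bool        # accessibility dimension (WebXR or web editor)
    text_to_3d_generation: bool  # AI capabilities dimension

# Illustrative flags distilled from the comparative evaluation; Twinmotion Cloud
# offers browser-based viewing only, so it is marked here as not browser-run.
PLATFORMS = [
    PlatformProfile("Twinmotion", multi_user_vr=False, runs_in_browser=False, text_to_3d_generation=False),
    PlatformProfile("Unreal Engine", multi_user_vr=True, runs_in_browser=False, text_to_3d_generation=False),
    PlatformProfile("Mozilla Hubs", multi_user_vr=True, runs_in_browser=True, text_to_3d_generation=False),
    PlatformProfile("FrameVR", multi_user_vr=True, runs_in_browser=True, text_to_3d_generation=True),
    PlatformProfile("ShapesXR", multi_user_vr=True, runs_in_browser=True, text_to_3d_generation=False),
]

# Example query: browser-accessible platforms suited to multi-user participatory sessions.
accessible = [p.name for p in PLATFORMS if p.runs_in_browser and p.multi_user_vr]
print(accessible)  # ['Mozilla Hubs', 'FrameVR', 'ShapesXR']
```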
Twinmotion demonstrates exceptional capabilities in real-time visualization and environmental simulations, making it particularly suited for early-stage urban scenario testing and large-scale masterplanning workflows. Its robust interoperability with BIM software such as Revit and Archicad facilitates seamless data exchange, enabling the rapid integration of architectural models into broader urban contexts. Twinmotion’s AI-powered environmental tools—including procedural vegetation generation, real-time weather simulation, and dynamic lighting adjustment—further enhance its value as a simulation tool for early design ideation. Moreover, Twinmotion affords designers the ability to modify materials and inject assets in real-time, import high-fidelity photogrammetry models, and support animated GLB files, significantly enriching the realism and narrative potential of urban visualizations. The inclusion of a “flying mode” feature expands user navigation flexibility, allowing stakeholders to explore vast urban environments from aerial perspectives, which is particularly advantageous for city-scale planning.
Nevertheless, while Twinmotion excels in creating immersive environmental simulations, its AI functions remain largely focused on automating visual outputs and lack the depth of generative design capabilities and complex agent-based behaviors found in more advanced AI-driven platforms. Furthermore, Twinmotion’s VR deployment presents limitations, particularly regarding multi-user collaboration. Unlike dedicated multi-user platforms such as Mozilla Hubs or FrameVR, Twinmotion does not inherently facilitate real-time co-creation in VR environments with multiple participants, which constrains its utility for collaborative urban design sessions. Additionally, the platform lacks built-in embodiment features such as customizable avatars or direct interaction metaphors (e.g., hand tracking, haptics), reducing the sense of social presence and engagement typically desired in participatory urban design workflows.
Conversely, Unreal Engine offers a high degree of fidelity in rendering and AI-driven interactivity. The integration of Convai AI avatars introduces conversational agents that augment the immersive narrative of urban spaces, facilitating human-centered design assessments through real-time interaction. Furthermore, Unreal’s blueprint system and compatibility with geospatial plugins like Cesium enhance procedural automation, enabling real-world context modeling and pedestrian behavior simulations. Nonetheless, Unreal’s reliance on high-performance hardware and standalone applications restricts its accessibility for broader participatory design initiatives, contrasting with the inclusive nature of web-based platforms.
Mozilla Hubs and FrameVR emerge as accessible alternatives, particularly for fostering remote collaboration and participatory urbanism. Mozilla Hubs excels in supporting inclusive design assessments, enabling customized avatars to simulate spatial interactions from diverse user perspectives. However, the discontinuation of official support for Hubs imposes infrastructural constraints, necessitating technical expertise for independent deployment. FrameVR, in contrast, leverages cloud-based infrastructure and integrates AI-driven features such as generative skyboxes and Gaussian Splatting, facilitating interactive spatial modifications and photorealistic reconstructions. Despite its strengths, FrameVR’s dependency on browser-based rendering imposes optimization constraints, potentially limiting the platform’s scalability in high-fidelity urban modeling.
ShapesXR offers a distinctive value proposition through its VR-native collaborative prototyping and AI-assisted interaction workflows. Its intuitive interface and capacity for immersive spatial ideation empower interdisciplinary teams to engage in participatory co-design processes. However, its limited integration with complex AI-driven simulations, such as pedestrian flow modeling and environmental analytics, restricts its applicability in data-intensive urban planning workflows.
The synthesis of findings across these platforms underscores a critical gap in existing AI–VR ecosystems: no single platform holistically integrates advanced AI-driven generative design, real-time environmental simulations, and inclusive, browser-accessible participatory features [42,68]. Consequently, there is a pronounced need for a unified system that amalgamates the rendering and simulation fidelity of Unreal Engine, the accessibility and real-time engagement features of FrameVR, and the intuitive prototyping tools of ShapesXR [41,65,69]. Such an integrated platform could democratize access to high-fidelity urban simulations, fostering deeper engagement among architects, planners, and non-expert stakeholders [2,29,39].

6. Conclusions

This study systematically explored the integration of AI within VR workflows, specifically applied to digital urban design through the creation and evaluation of a detailed digital model of Loughborough University. The primary research objective was to assess the capabilities of various VR platforms—Twinmotion, Unreal Engine, Mozilla Hubs, FrameVR, and ShapesXR—in supporting AI-driven interactive simulations, multi-user collaboration, and inclusive design workflows. The findings clearly addressed these objectives, highlighting platform-specific strengths and limitations in facilitating dynamic user interactions and accessibility evaluations within virtual urban environments.
A notable contribution of this research is its methodological innovation, representing the first systematic study to develop and evaluate a unified digital university model across multiple VR platforms. This comparative approach offers practical insights into platform interoperability and serves as a methodological reference for integrating AI-enhanced urban design into VR environments. The research advances scholarly discourse by critically analyzing platform-specific strengths and limitations, thereby providing architects, planners, educators, and policymakers with concrete guidelines to optimize VR usage in urban design practices.
The paper’s implications extend broadly across academia, industry, and local authorities. For academia, it introduces innovative methodological frameworks and pedagogical strategies for teaching digital urbanism and inclusive design principles through immersive and interactive AI-enhanced VR environments. Industry professionals benefit from actionable insights into the application of AI-driven VR tools in real-world urban design projects, enhancing both efficiency and user engagement. For local authorities, the findings underscore the potential of AI-integrated VR tools to foster more participatory and transparent urban planning processes, thereby improving public engagement and community satisfaction with urban developments.
While this study offers a comprehensive and technically grounded comparative evaluation, it is important to acknowledge its scope as a methodological investigation based on a single case study. The findings are drawn from a structured platform analysis using the digital model of Loughborough University, and although they provide valuable insights into the capabilities and limitations of AI-integrated VR systems, they are not based on empirical user testing. As such, the conclusions reflect platform-specific performance within a defined context.
Future research directions include expanding the evaluation of additional VR platforms and incorporating comprehensive user-based studies to assess practical implications and user experiences more thoroughly. Further work will explore deeper immersive technologies, including haptic feedback and sense gloves, to enhance realism and real-time evaluation of AI-driven interactions within VR environments. In addition, social and cultural dimensions—such as user identity, avatar embodiment, and perceptions of social presence—were not empirically examined in this study. These aspects warrant further investigation through user-centered design methods to ensure that AI–VR platforms support inclusivity across diverse populations and contexts. By incorporating user testing and experiential feedback, subsequent studies aim to validate and refine the proposed methodologies, further bridging the gap between theoretical capabilities and real-world usability.
In conclusion, this study demonstrates that the integration of AI within VR workflows presents significant opportunities for enhancing inclusive, interactive, and collaborative urban design. It sets a robust foundation for future methodological innovations and practical applications, bridging academic research with industry practices and public policy strategies to support more dynamic, responsive, and participatory urban environments.

Author Contributions

Conceptualization, A.E. and G.B.; methodology, A.E.; software, A.E.; validation, A.E., G.B. and A.A.; formal analysis, A.E.; investigation, A.E.; resources, A.E. and G.B.; data curation, A.E.; writing—original draft preparation, A.E.; writing—review and editing, G.B. and A.E.; visualization, A.E.; supervision, G.B.; project administration, A.E.; funding acquisition, not applicable. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
VR: Virtual Reality
AR: Augmented Reality
MR: Mixed Reality
XR: Extended Reality
BIM: Building Information Modeling
NeRF: Neural Radiance Field
SPLAT: Spatial Point Light Approximation Technique (used in Gaussian Splatting)
NLP: Natural Language Processing
GANs: Generative Adversarial Networks

References

  1. Pena, M.L.C.; Carballal, A.; Rodríguez-Fernández, N.; Santos, I.; Romero, J. Artificial intelligence applied to conceptual design: A review of its use in architecture. Autom. Constr. 2021, 124, 103550. [Google Scholar] [CrossRef]
  2. Son, T.H.; Weedon, Z.; Yigitcanlar, T.; Sanchez, T.; Corchado, J.M.; Mehmood, R. Algorithmic urban planning for smart and sustainable development: Systematic review of the literature. Sustain. Cities Soc. 2023, 94, 104562. [Google Scholar] [CrossRef]
  3. Young, G.W.; O’Dwyer, N.; Smolic, A. Exploring virtual reality for quality immersive empathy-building experiences. Behav. Inf. Technol. 2021, 41, 3415–3431. [Google Scholar] [CrossRef]
  4. Jamei, E.; Mortimer, M.; Seyedmahmoudian, M.; Horan, B.; Stojcevski, A. Investigating the role of virtual reality in planning for sustainable smart cities. Sustainability 2017, 9, 2006. [Google Scholar] [CrossRef]
  5. Portman, M.E.; Natapov, A.; Fisher-Gewirtzman, D. To go where no man has gone before: Virtual reality in architecture, landscape architecture and environmental planning. Comput. Environ. Urban Syst. 2015, 54, 376–384. [Google Scholar] [CrossRef]
  6. Disability Unit. UK Disability Survey Research Report, June 2021; Cabinet Office: London, UK, 2021. Available online: https://www.gov.uk/government/publications/uk-disability-survey-research-report-june-2021 (accessed on 21 March 2025).
  7. Gill, T. Urban Playground: How Child-Friendly Planning and Design Can Save Cities; RIBA Publishing: London, UK, 2021. [Google Scholar]
  8. Jian, I.Y.; Luo, J.; Chan, E.H. Spatial justice in public open space planning: Accessibility and inclusivity. Habitat Int. 2020, 97, 102122. [Google Scholar] [CrossRef]
  9. Nabatchi, T.; Ertinger, E.; Leighninger, M. The future of public participation: Better design, better laws, better systems. Confl. Resolut. Q. 2015, 33, S35–S44. [Google Scholar] [CrossRef]
  10. Wates, N. The Community Planning Handbook: How People Can Shape Their Cities, Towns and Villages in Any Part of the World; Routledge: Abingdon, UK, 2014. [Google Scholar]
  11. Schrom-Feiertag, H.; Stubenschrott, M.; Regal, G.; Matyus, T.; Seer, S. An interactive and responsive virtual reality environment for participatory urban planning. In Proceedings of the 11th Annual Symposium on Simulation for Architecture and Urban Design, Online, 25–28 May 2020; pp. 1–7. [Google Scholar]
  12. Rubio-Tamayo, J.L.; Gertrudix Barrio, M.; García García, F. Immersive environments and virtual reality: Systematic review and advances in communication, interaction and simulation. Multimodal Technol. Interact. 2017, 1, 21. [Google Scholar] [CrossRef]
  13. Meenar, M.; Kitson, J. Using multi-sensory and multi-dimensional immersive virtual reality in participatory planning. Urban Sci. 2020, 4, 34. [Google Scholar] [CrossRef]
  14. Alizadehsalehi, S.; Hadavi, A.; Huang, J.C. From BIM to extended reality in the AEC industry. Autom. Constr. 2020, 116, 103254. [Google Scholar] [CrossRef]
  15. Noghabaei, M.; Heydarian, A.; Balali, V.; Han, K. Trend analysis on adoption of virtual and augmented reality in the architecture, engineering, and construction industry. Data 2020, 5, 26. [Google Scholar] [CrossRef]
  16. Zhang, C.; Zeng, W.; Liu, L. UrbanVR: An immersive analytics system for context-aware urban design. Comput. Graph. 2021, 99, 128–138. [Google Scholar] [CrossRef]
  17. Ververidis, D.; Nikolopoulos, S.; Kompatsiaris, I. A review of collaborative virtual reality systems for the architecture, engineering, and construction industry. Architecture 2022, 2, 476–496. [Google Scholar] [CrossRef]
  18. Huang, Y.; Shakya, S.; Odeleye, T. Comparing the functionality between virtual reality and mixed reality for architecture and construction uses. J. Civ. Eng. Archit. 2019, 13, 409–414. [Google Scholar]
  19. Schiavi, B.; Havard, V.; Beddiar, K.; Baudry, D. BIM data flow architecture with AR/VR technologies: Use cases in architecture, engineering and construction. Autom. Constr. 2022, 134, 104054. [Google Scholar] [CrossRef]
  20. Davidson, J.; Fowler, J.; Pantazis, C.; Sannino, M.; Walker, J.; Sheikhkhoshkar, M.; Rahimian, F.P. Integration of VR with BIM to facilitate real-time creation of bill of quantities during the design phase. Front. Eng. Manag. 2020, 7, 396–403. [Google Scholar] [CrossRef]
  21. Watchorn, V.; Tucker, R.; Hitch, D.; Frawley, P. Co-design in the context of universal design: An Australian case study exploring the role of people with disabilities in the design of public buildings. Des. J. 2023, 27, 68–88. [Google Scholar] [CrossRef]
  22. Pesch, A.; Ochoa, K.D.; Fletcher, K.K.; Bermudez, V.N.; Todaro, R.D.; Salazar, J.; Hirsh-Pasek, K. Reinventing the public square and early educational settings through culturally informed, community co-design: Playful Learning Landscapes. Front. Psychol. 2022, 13, 933320. [Google Scholar] [CrossRef] [PubMed]
  23. Oetken, K.J. Unravelling the why: Exploring the increasing recognition and adoption of co-creation in contemporary urban design. Sustain. Communities 2025, 2, 1. [Google Scholar] [CrossRef]
  24. Oetken, K.J.; Hennig, K.; Henkel, S.; Merfeld, K. A psychoanalytical approach in urban design: Exploring dynamics of co-creation through theme-centred interaction. J. Urban Des. 2024, 1–28. [Google Scholar] [CrossRef]
  25. Reuter, T.K. Human rights and the city: Including marginalized communities in urban development and smart cities. J. Hum. Rights 2019, 18, 382–402. [Google Scholar] [CrossRef]
  26. Lynch, H.; Moore, A.; Edwards, C.; Horgan, L. Advancing play participation for all: The challenge of addressing play diversity and inclusion in community parks and playgrounds. Br. J. Occup. Ther. 2020, 83, 107–117. [Google Scholar] [CrossRef]
  27. Pineda, V.S.; Corburn, J. Disability, urban health equity, and the coronavirus pandemic: Promoting cities for all. J. Urban Health 2020, 97, 336–341. [Google Scholar] [CrossRef]
  28. Sánchez-Sepúlveda, M.; Fonseca, D.; Franquesa, J.; Redondo, E. Virtual interactive innovations applied for digital urban transformations: Mixed approach. Future Gener. Comput. Syst. 2019, 91, 371–381. [Google Scholar] [CrossRef]
  29. Dane, G.; Evers, S.; van den Berg, P.; Klippel, A.; Verduijn, T.; Wallgrün, J.O.; Arentze, T. Experiencing the future: Evaluating a new framework for the participatory co-design of healthy public spaces using immersive virtual reality. Comput. Environ. Urban Syst. 2024, 114, 102194. [Google Scholar] [CrossRef]
  30. Van Leeuwen, J.P.; Hermans, K.; Jylhä, A.; Quanjer, A.J.; Nijman, H. Effectiveness of virtual reality in participatory urban planning: A case study. In Proceedings of the 4th Media Architecture Biennale Conference, Beijing, China, 13–16 November 2018; pp. 128–136. [Google Scholar] [CrossRef]
  31. Parker, R.; Al-Maiyah, S. Developing an integrated approach to the evaluation of outdoor play settings: Rethinking the position of play value. Child. Geogr. 2021, 20, 1–23. [Google Scholar] [CrossRef]
  32. Pineo, H. Towards healthy urbanism: Inclusive, equitable and sustainable (THRIVES)—An urban design and planning framework from theory to praxis. Cities Health 2022, 6, 974–992. [Google Scholar] [CrossRef] [PubMed]
  33. Safikhani, S.; Keller, S.; Schweiger, G.; Pirker, J. Immersive virtual reality for extending the potential of building information modeling in architecture, engineering, and construction sector: Systematic review. Int. J. Digit. Earth 2022, 15, 503–526. [Google Scholar] [CrossRef]
  34. Ehab, A.; Burnett, G.; Heath, T. Enhancing Public Engagement in Architectural Design: A Comparative Analysis of Advanced Virtual Reality Approaches in Building Information Modeling and Gamification Techniques. Buildings 2023, 13, 1262. [Google Scholar] [CrossRef]
  35. Zaker, R.; Coloma, E. Virtual reality-integrated workflow in BIM-enabled projects collaboration and design review: A case study. Vis. Eng. 2018, 6, 4. [Google Scholar] [CrossRef]
  36. Yu, R.; Gu, N.; Lee, G.; Khan, A. A systematic review of architectural design collaboration in immersive virtual environments. Designs 2022, 6, 93. [Google Scholar] [CrossRef]
  37. Panya, D.S.; Kim, T.; Choo, S. An interactive design change methodology using BIM-based virtual reality and augmented reality. J. Build. Eng. 2023, 68, 106030. [Google Scholar] [CrossRef]
  38. Makanadar, A. Neuro-adaptive architecture: Buildings and city design that respond to human emotions, cognitive states. Res. Glob. 2024, 8, 100222. [Google Scholar] [CrossRef]
  39. Chen, X.; Zhang, Y.; Huang, X.; Liu, Z.; Wu, W. Enhancing interaction in virtual-real architectural environments: A comparative analysis of generative AI-driven reality approaches. Build. Environ. 2024, 266, 112113. [Google Scholar] [CrossRef]
  40. Abdelsalam, A.E. Resilient Design for London’s Elevated Social Spaces: Exploring Challenges, Opportunities, and Harnessing Interactive Virtual Reality Co-Design Approaches for Community Engagement. Ph.D. Thesis, University of Nottingham, Nottingham, UK, 2023. [Google Scholar]
  41. Ehab, A.; Heath, T. Exploring Immersive Co-Design: Comparing Human Interaction in Real and Virtual Elevated Urban Spaces in London. Sustainability 2023, 15, 9184. [Google Scholar] [CrossRef]
  42. Prabhakaran, A.; Mahamadu, A.M.; Mahdjoubi, L. Understanding the challenges of immersive technology use in the architecture and construction industry: A systematic review. Autom. Constr. 2022, 137, 104228. [Google Scholar] [CrossRef]
  43. Ehab, A.; Heath, T.; Burnett, G. Virtual Reality and the Interactive Design of Elevated Public Spaces: Cognitive Experience vs. VR Experience. In HCI International 2023 Posters. HCII 2023. Communications in Computer and Information Science; Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G., Eds.; Springer: Cham, Switzerland, 2023; Volume 1836. [Google Scholar] [CrossRef]
  44. Pan, Y.; Zhang, L. Integrating BIM and AI for smart construction management: Current status and future directions. Arch. Comput. Methods Eng. 2022, 30, 1081–1110. [Google Scholar] [CrossRef]
  45. Kapsalis, E.; Jaeger, N.; Hale, J. Disabled-by-design: Effects of inaccessible urban public spaces on users of mobility assistive devices—A systematic review. Disabil. Rehabil. Assist. Technol. 2022, 19, 604–622. [Google Scholar] [CrossRef]
  46. Rueda, J.; Lara, F. Virtual reality and empathy enhancement: Ethical aspects. Front. Robot. AI 2020, 7, 506984. [Google Scholar] [CrossRef]
  47. Yao, T.; Yoo, S.; Parker, C. Evaluating Virtual Reality as a Tool for Empathic Modelling of Vision Impairment: Insights from a Simulated Public Interactive Display Experience. In Proceedings of the 33rd Australian Conference on Human-Computer Interaction (OzCHI ‘21), Melbourne, Australia, 30 November–3 December 2021; Association for Computing Machinery: New York, NY, USA, 2022; pp. 190–197. [Google Scholar] [CrossRef]
  48. Liu, Z.; Lu, Y.; Peh, L.C. A review and scientometric analysis of global building information modeling (BIM) research in the architecture, engineering and construction (AEC) industry. Buildings 2019, 9, 210. [Google Scholar] [CrossRef]
  49. Chaturvedi, V.; de Vries, W.T. Machine Learning Algorithms for Urban Land Use Planning: A Review. Urban Sci. 2021, 5, 68. [Google Scholar] [CrossRef]
  50. Koutra, S.; Ioakimidis, C.S. Unveiling the Potential of Machine Learning Applications in Urban Planning Challenges. Land 2023, 12, 83. [Google Scholar] [CrossRef]
  51. Casali, Y.; Aydin, N.Y.; Comes, T. Machine learning for spatial analyses in urban areas: A scoping review. Sustain. Cities Soc. 2022, 85, 104050. [Google Scholar] [CrossRef]
  52. Qin, R.; Gruen, A. The role of machine intelligence in photogrammetric 3D modeling—An overview and perspectives. Int. J. Digit. Earth 2020, 14, 15–31. [Google Scholar] [CrossRef]
  53. Zhang, M.; Tan, S.; Liang, J.; Zhang, C.; Chen, E. Predicting the impacts of urban development on urban thermal environment using machine learning algorithms in Nanjing, China. J. Environ. Manag. 2024, 356, 120560. [Google Scholar] [CrossRef]
  54. Xu, H.; Omitaomu, F.; Sabri, S.; Zlatanova, S.; Li, X.; Song, Y. Leveraging Generative AI for Urban Digital Twins: A Scoping Review on the Autonomous Generation of Urban Data, Scenarios, Designs, and 3D City Models for Smart City Advancement. Urban Inform. 2024, 3, 29. [Google Scholar] [CrossRef]
  55. Abramov, N.; Lankegowda, H.; Liu, S.; Barazzetti, L.; Beltracchi, C.; Ruttico, P. Implementing Immersive Worlds for Metaverse-Based Participatory Design through Photogrammetry and Blockchain. ISPRS Int. J. Geo-Inf. 2024, 13, 211. [Google Scholar] [CrossRef]
  56. Zhang, K.; Fassi, F. Transforming Architectural Digitisation: Advancements in AI-Driven 3D Reality-Based Modelling. Heritage 2025, 8, 81. [Google Scholar] [CrossRef]
  57. Sestras, P.; Badea, G.; Badea, A.C.; Salagean, T.; Roșca, S.; Kader, S.; Remondino, F. Land surveying with UAV photogrammetry and LiDAR for optimal building planning. Autom. Constr. 2025, 173, 106092. [Google Scholar] [CrossRef]
  58. Jamil, O.; Brennan, A. Immersive heritage through Gaussian Splatting: A new visual aesthetic for reality capture. Front. Comput. Sci. 2025, 7, 1515609. [Google Scholar] [CrossRef]
  59. Morse, C. Gaming Engines: Unity, Unreal, and Interactive 3D Spaces. Technol. Archit. Des. 2021, 5, 246–249. [Google Scholar] [CrossRef]
  60. Qiu, W.; Yuille, A. UnrealCV: Connecting Computer Vision to Unreal Engine. In Computer Vision—ECCV 2016 Workshops. ECCV 2016; Hua, G., Jégou, H., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9915. [Google Scholar] [CrossRef]
  61. Sidani, A.; Dinis, F.M.; Sanhudo, L.; Duarte, J.; Santos Baptista, J.; Pocas Martins, J.; Soeiro, A. Recent tools and techniques of BIM-based virtual reality: A systematic review. Arch. Comput. Methods Eng. 2021, 28, 449–462. [Google Scholar] [CrossRef]
  62. Badwi, I.M.; Ellaithy, H.M.; Youssef, H.E. 3D-GIS Parametric Modelling for Virtual Urban Simulation Using CityEngine. Ann. GIS 2022, 28, 325–341. [Google Scholar] [CrossRef]
  63. Belaroussi, R.; Dai, H.; González, E.D.; Gutiérrez, J.M. Designing a Large-Scale Immersive Visit in Architecture, Engineering, and Construction. Appl. Sci. 2023, 13, 3044. [Google Scholar] [CrossRef]
  64. Belaroussi, R.; Pazzini, M.; Issa, I.; Dionisio, C.; Lantieri, C.; González, E.D.; Vignali, V.; Adelé, S. Assessing the Future Streetscape of Rimini Harbor Docks with Virtual Reality. Sustainability 2023, 15, 5547. [Google Scholar] [CrossRef]
  65. Lee, H.; Hwang, Y. Technology-Enhanced Education through VR-Making and Metaverse-Linking to Foster Teacher Readiness and Sustainable Learning. Sustainability 2022, 14, 4786. [Google Scholar] [CrossRef]
  66. Kao, H.-W.; Chen, Y.-C.; Wu, E.H.-K.; Yeh, S.-C.; Kao, S.-C. Loka: A Cross-Platform Virtual Reality Streaming Framework for the Metaverse. Sensors 2025, 25, 1066. [Google Scholar] [CrossRef]
  67. Lo, T.T.S.; Chen, Y.; Lai, T.Y.; Goodman, A. Phygital workspace: A systematic review in developing a new typological work environment using XR technology to reduce the carbon footprint. Front. Built Environ. 2024, 10, 1370423. [Google Scholar] [CrossRef]
  68. Zhang, Y.; Zhang, X.; Wang, J.; Lin, Y.; Li, Y. Application of Artificial Intelligence Technology in Urban Planning and Design: Opportunities, Challenges, and Prospects. Buildings 2024, 14, 835. [Google Scholar] [CrossRef]
  69. Bauerová, R.; Halaška, M.; Kopřivová, V. User experience with virtual reality in team-based prototyping and brainstorming. Int. J. Hum.–Comput. Interact. 2025, 1–19. [Google Scholar] [CrossRef]
Figure 1. Overview of co-design methodologies in urban design. The diagram illustrates participatory design activities and consultation processes, highlighting their integration with XR technology, digital twins, and ultimately the Metaverse for enhanced collaborative urban design.
Figure 2. BIM (VR+AI) System Workflow Using Twinmotion. The diagram illustrates the integration between Revit, Twinmotion, and VR environments with AI-enhanced interactive functions including real-time editing, environmental simulations, and participant-based scene customization.
Figure 3. Gamification VR Workflow Using Unreal Engine. The figure shows how the Revit BIM model is enhanced with AI Metahuman avatars and plugins to create interactive, immersive VR environments via the Unreal Engine.
Figure 4. Mozilla Hubs: WebXR Multi-User Collaboration System. The workflow illustrates the use of open-source tools, Blender, and cloud deployment to enable multi-user AI-enhanced collaboration in browser-based VR.
Figure 5. FrameVR: WebXR Spatial Design and Collaboration System. The diagram details the use of no-code tools, avatars, and AI-powered visual and spatial generation features to support collaborative urban design.
Figure 6. ShapesXR: Multi-User Collaboration Application. The figure outlines the integration of BIM, Web Editor tools, and Meta avatars for immersive prototyping and cross-platform collaboration in real-time VR environments.
Figure 7. Loughborough University Shirley Pearce Square Model developed in Autodesk Revit. The model served as the base for real-time VR simulation across various platforms.
Figure 8. Interactive real-time VR visualization of Shirley Pearce Square in Twinmotion. The scene demonstrates photorealistic rendering, contextual vegetation, and asset integration.
Figure 9. West Park Teaching Hub Model in Twinmotion, part of the Loughborough University Virtual Campus. The model showcases enhanced material textures, realistic lighting, and environmental context.
Figure 10. Interactive VR functionalities demonstrated within the Loughborough University Virtual Campus using Twinmotion. The image showcases material editing, weather simulation, animated avatars, annotation tools, and pedestrian movement simulation, supporting real-time design interaction.
Figure 11. Integration of the Cesium plugin within Unreal Engine, enabling real-world geospatial context for the Loughborough University campus through the use of Google Maps API.
Figure 12. Immersive VR view of the Loughborough University virtual campus, showcasing Shirley Pearce Square within Unreal Engine’s virtual environment.
Figure 13. Implementation of a MetaHuman avatar linked with Convai AI, enabling real-time conversational interaction between users and AI-driven virtual agents.
Figure 14. Blueprint visual scripting used to activate AI functionalities and build interactive VR scenarios, including avatar behaviors and environmental responses.
Figure 15. User testing the interactive VR system within Unreal Engine 5, engaging with AI avatars and navigating the digital twin environment of the university campus.
Figure 16. Virtual campus model of Loughborough University hosted within the self-deployed Mozilla Hubs instance, showcasing various campus spaces optimized for WebXR-based exploration.
Figure 17. Students and staff interacting with the virtual campus environment during open days and teaching sessions, using Mozilla Hubs for collaborative education and immersive urban analysis.
Figure 18. Inclusive avatar design in Mozilla Hubs, created using Blender, representing diverse user experiences such as a robot avatar, a wheelchair user, a visually impaired user (with glaucoma or blurred vision filters), and pet avatars for spatial accessibility testing.
Figure 19. AI-generated avatars within the FrameVR virtual campus environment, showcasing the configuration panel where users can customize character type, tone, and knowledge settings.
Figure 20. AI generative tools within FrameVR, including text-to-image, text-to-video, image-to-3D, and skybox creation, along with the creation of AI-powered agent characters directly inside the virtual space.
Figure 21. Photogrammetry integration in FrameVR using Gaussian Splatting to visualize realistic environments in the Loughborough University virtual campus, enhancing immersion with optimized performance.
Figure 22. Real-time collaborative prototyping of Shirley Pearce Square in ShapesXR, with users adding elements, modeling features, and sketching designs in passthrough VR mode.
Table 1. Comparison of VR Software, Plugins, and Game Engines Used in Urban Design Workflows. The table outlines the software type, compatible platforms, and core features supporting immersive architectural and urban visualization.
Name | Type | Compatible Software | Features | Source
Arkio | VR Standalone Application | Revit, Rhino | Immersive VR environment, real-time design modifications, absence of texture support, multi-user collaboration, 3D modeling, and presentation capabilities in virtual reality | https://www.arkio.is/ (accessed on 20 February 2025)
Fuzor | VR Standalone Application | Revit, Rhino | Synchronized live updates, integration of various disciplines within a virtual reality environment, clash detection, 4D simulations, and BIM data visualization | https://www.kalloctech.com/ (accessed on 25 February 2025)
Gravity Sketch | VR Standalone Application | Rhino | Virtual reality support, real-time editing, absence of texture support, multi-user functionality, 3D sketching and modeling in immersive environments, and export capabilities in OBJ, IGES, and FBX formats | https://gravitysketch.com/ (accessed on 20 February 2025)
Holodeck Nvidia | VR Standalone Application | 3Ds Max, Maya | NVIDIA Iray rendering technology, compatibility with standard NVIDIA vMaterials, high-quality visualization in virtual reality, limited connectivity with Omniverse, and AI-enhanced graphics | https://www.nvidia.com/en-gb/design-visualization/technologies/holodeck/ (accessed on 20 February 2025)
TwinMotion | VR Standalone Application | Revit, Rhino, 3Ds Max, SketchUp, ArchiCAD, Cinema 4D | Compatibility with virtual reality, real-time visualization, dynamic weather system, landscape and vegetation tools, import and export capabilities for 3D models, and efficient design exploration | https://www.twinmotion.com/en-US (accessed on 25 February 2025)
VU.CITY | VR Standalone Application | Rhino, Revit, SketchUp, AutoCAD | 3D city modeling, urban planning and analysis tools, interactive visualization, virtual reality support, integration with BIM data, and scenario-based planning | https://www.vu.city/news/vu-city-virtual-reality-model (accessed on 28 February 2025)
IrisVR—The Wild | VR Standalone Application | Rhino, Revit, Navisworks, SketchUp | Support for multiple users, 3D model viewing and annotation within virtual reality, real-time collaboration, and visualization of BIM data | https://irisvr.com/ (accessed on 28 February 2025)
Enscape | VR Plugin | Revit, SketchUp, Rhino, ArchiCAD, Vectorworks | Photorealistic rendering, interactive virtual reality environment, real-time walkthroughs, material and lighting adjustments, and efficient communication among stakeholders | https://www.chaos.com/enscape (accessed on 10 February 2025)
Mindesk | VR Plugin | Rhino, Revit, Solidworks, Unreal Engine | Absence of web interface and database support, real-time virtual reality modeling, seamless CAD integration, and streamlined design workflows | https://mindeskvr.com/ (accessed on 28 February 2025)
Tridify | VR Plugin | Revit, ArchiCAD, Tekla Structures | BIM data visualization, virtual reality support, web-based platform, interactive 3D models, and collaboration tools | https://apps.autodesk.com/en/Publisher/PublisherHomepage?ID=ZQ8PQN75GY7D (accessed on 28 February 2025)
SENTIO VR | VR Plugin | SketchUp, Revit, Rhino | Virtual reality support, immersive presentations, 360-degree rendering, real-time collaboration, and integration with various design software | https://www.sentiovr.com/ (accessed on 20 February 2025)
Autodesk Revit Live | VR Plugin | Revit | Interactive visualization, virtual reality support, real-time design modifications, integration with BIM data, and streamlined collaboration among stakeholders | https://www.autodesk.com/products/revit-live (accessed on 20 February 2025)
VR Sketch | VR Plugin | SketchUp | Virtual reality support, real-time design modifications, integration with SketchUp models, navigation and presentation tools, and compatibility with various virtual reality headsets | https://vrsketch.eu/ (accessed on 25 February 2025)
Unity | Game Engine | FBX, OBJ, 3ds Max, Maya, Blender | Real-time rendering, support for virtual and augmented reality, 2D and 3D visualization, comprehensive asset library, scripting capabilities, integration with BIM tools, and customizable design workflows | https://unity.com/ (accessed on 10 February 2025)
Unreal Engine | Game Engine | FBX, OBJ, 3ds Max, Maya, Blender, SketchUp, Revit | Real-time rendering, virtual and augmented reality support, high-quality visualization, integration with BIM tools, Datasmith import, interactive experiences, and advanced material and lighting adjustments | https://www.unrealengine.com/en-US (accessed on 28 February 2025)
CryEngine | Game Engine | FBX, OBJ, 3ds Max, Maya, Blender | High-quality rendering, support for virtual reality, real-time lighting and reflections, large-scale terrain tools, and integration with architectural visualization tools | https://www.cryengine.com/ (accessed on 28 February 2025)
Godot Engine | Game Engine | FBX, OBJ, Blender, Collada | 2D and 3D visualization, virtual and augmented reality support, scripting capabilities, customizable workflows, and integration with 3D modeling software | https://godotengine.org/ (accessed on 28 February 2025)
Mozilla Hubs | VR Standalone Chat Platform | GlTF, FBX, OBJ | Browser-based virtual reality platform, real-time collaboration, 3D model importing, avatars, customizable spaces, and cross-platform compatibility | https://hubsfoundation.org/ (accessed on 28 February 2025)
Any Land | VR Standalone Chat Platform | N/A | Virtual reality chat platform, in-world creation tools, user-generated content, customization, and interactive environments | https://anyland.com/ (accessed on 28 February 2025)
VRChat | VR Standalone Chat Platform | Unity, Blender, FBX, OBJ | Virtual reality chat platform, user-generated content, avatars, interactive worlds, and integration with Unity for custom content creation | https://hello.vrchat.com/ (accessed on 20 February 2025)
FrameVR | VR Standalone Chat Platform | Blender, ply, spz | Multi-user collaboration, 3D model import, voice chat, browser-based immersive spatial design | https://learn.framevr.io/ (accessed on 10 May 2025)
ShapesXR | VR Standalone Application | OBJ, GLB, glTF | Immersive design collaboration, real-time prototyping, spatial sketching in VR, multi-user interaction | https://www.shapesxr.com/ (accessed on 10 May 2025)
Table 2. Overview of AI-Enhanced Urban Design Software and Tools. The table summarizes key platforms used in AI-driven urban workflows, covering generative modeling, real-time rendering, photogrammetry, environmental simulation, and digital twin development, along with their core features and primary use cases.
Software | Key Features | AI Integration | Primary Use Case | Source
NVIDIA Omniverse | AI-driven real-time digital twins, collaborative workflows, and automated texture enhancement | AI-powered texturing, predictive environmental adaptation, and multi-user collaboration | Interactive digital twins, real-time urban model interaction and AI-based optimizations | https://www.nvidia.com/en-gb/omniverse/ (accessed on 20 February 2025)
CityEngine | Procedural urban modeling and rule-based AI for generative city layouts | AI-assisted generative city modeling and automated urban landscapes | Automated procedural city generation, optimizing complex urban layouts | https://www.esri.com/en-us/arcgis/products/arcgis-cityengine/overview (accessed on 15 February 2025)
RealityCapture | AI-powered photogrammetry and high-accuracy 3D model reconstruction | AI-enhanced photogrammetry and real-time 3D urban reconstruction | 3D scanning of real-world urban environments for digital twins | https://www.capturingreality.com/ (accessed on 20 February 2025)
Meshroom | Open-source photogrammetry and AI-assisted 3D reconstruction from images | Neural network-based texture generation and point cloud reconstruction | Photogrammetry and high-accuracy 3D reconstruction from photos, high-quality visualization in virtual reality, limited connectivity with Omniverse, and AI-enhanced graphics | https://meshroom.en.softonic.com/ (accessed on 10 February 2025)
Agisoft Metashape | AI-enhanced photogrammetry, texture mapping, and digital twin creation | Automated 3D model creation from aerial/satellite imagery | Urban digitization and integration into AI-based analysis workflows | https://www.agisoft.com/ (accessed on 18 February 2025)
Grasshopper AI | AI-assisted parametric modeling, adaptive urban forms, and passive cooling strategies | AI-driven optimization for climate-adaptive architectural designs | Climate-responsive building design and energy-efficient urban form | https://simplyrhino.co.uk/3d-modelling-software/grasshopper (accessed on 20 February 2025)
Ladybug Tools | AI-driven climate simulations, solar radiation, and daylight optimization | Machine learning for real-time climate modeling and energy efficiency calculations | AI-driven urban microclimate simulations and sustainability planning | https://www.ladybug.tools/ (accessed on 25 February 2025)
Polycam | AI-powered 3D scanning, LiDAR, and photogrammetry-based model creation | AI-enhanced 3D reconstruction and real-time processing of scan data | High-fidelity 3D scanning of urban environments and objects for design workflows | https://poly.cam/ (accessed on 10 February 2025)
Convai AI | AI-driven conversational agents and interactive avatars | AI-powered NPCs and digital assistants for spatial interaction | AI-driven engagement in urban simulations, interactive NPCs for digital urban spaces | https://www.convai.com/ (accessed on 10 February 2025)
Roden AI | AI-powered generative architecture, automated 3D modeling, and urban planning optimization | AI-driven parametric urban design, automated massing studies, and generative urban scenarios | AI-based architectural design automation, optimizing urban form and spatial efficiency | https://www.rundown.ai/tools/rodin (accessed on 15 February 2025)
Cesium | 3D geospatial visualization, digital twins, and real-time rendering of massive urban datasets | AI-assisted geospatial analytics and real-time rendering of global-scale urban models | Large-scale city modeling, geospatial analysis, and real-time 3D visualization | https://cesium.com/ (accessed on 20 February 2025)
Autodesk Forma | Cloud-based generative design, environmental analysis, and scenario simulation | AI-powered site analysis, energy usage prediction, and generative massing tools | Early-stage urban planning, real-time performance feedback, sustainable design | https://www.autodesk.com/company/autodesk-platform/aec (accessed on 20 February 2025)
Table 3. Comparative Technical Evaluation of Five AI-Enhanced VR Platforms for Inclusive Urban Design.
Platform | Compatibility | Design and VR Features | Collaboration and Accessibility | AI Capabilities
Twinmotion | Revit, Rhino, 3Ds Max, SketchUp, ArchiCAD, Cinema 4D, FBX, GLB, and OBJ | Lumen rendering, path tracing, material editing, real-time weather/light simulation, and asset library | Twinmotion Cloud, VR headset support, and browser-based viewing | Photogrammetry integration, AI-enhanced assets (external), and no generative design
Unreal Engine | Revit, Rhino, SketchUp, 3Ds Max, FBX, GLB, and Datasmith | High-fidelity rendering, blueprint scripting, real-time design, MetaHuman, and annotations | Multi-user VR sessions and real-time walkthroughs | Convai avatars, Cesium for geodata, and AI scripting for creating characters and environments
Hubs Instance | GLB, FBX, and OBJ (via Blender/Spoke) | 360° media integration, spatial audio, fly navigation, annotation tools, virtual camera views, real-time object importing, and sharing PDFs and PowerPoint presentations within the metaverse | Browser-based, WebXR, multi-user, and custom avatars | Supports AI chatbots and no native AI behavior
FrameVR | GLB, PLY, SPZ, and SPLAT (Blender) | Object interaction, AI avatar design, Google Street View, Skybox editing, Gaussian splatting, and spatial audio | Browser-based, WebXR, multi-user, and flying navigation | Text-to-3D Gen, Image-to-3D Gen, AI chatbot avatars, AI-generated skyboxes, AI meeting transcripts, and Gaussian splatting
ShapesXR | OBJ, GLB, glTF (ZIP), PNG, JPG, MP3, WAV, and Figma | Sketching, procedural design, prototyping, Holonotes, multi-user interaction and real-time collaboration, passthrough support, asset library, changing materials, and cross-platform collaboration | Up to 8 users, web access, voice chat, Meta avatars, and browser editor | Export to Unity/Unreal for AI integration
Table 4. Comparative Summary of Use Cases, Strengths, and Limitations of Five AI–VR Platforms in Inclusive Urban Design.
Platform | Use Cases in Inclusive Urban Design | Strengths | Weaknesses and Limitations
Twinmotion
Use Cases in Inclusive Urban Design:
  • Real-time design iteration in urban design studios
  • Interactive stakeholder consultations
  • Visual accessibility testing for public spaces
  • Teaching spatial planning and environmental empathy
  • Small-group workshops on material/lighting decisions
Strengths:
  • Seamless Revit ↔ VR synchronization
  • High-fidelity rendering (Lumen, path tracing)
  • Rich asset library
  • Weather/time simulations for context empathy
  • Cloud sharing via Twinmotion Cloud
  • Intuitive 3D configurator tools for stakeholder engagement
  • No coding or scripting required for building immersive VR environments
  • Natively supports photogrammetry-based AI assets and animated GLB/FBX files
  • Free to use for students, educators, and small businesses (earning under 1M USD annually)
Weaknesses and Limitations:
  • No sense of embodiment in VR; lacks avatar representation or hand tracking interaction
  • Does not support object manipulation or physics-based interactions such as picking up, moving, or dropping items
  • Designed primarily as a single-user experience with limited multi-user engagement; lacks real-time voice communication or collaborative editing in VR
  • Limited native AI functionality; lacks built-in support for real-time generative design
  • Requires a high-performance workstation tethered to a VR headset to enable full interactive design capabilities
  • Twinmotion Cloud is only available with a paid license and supports basic browser-based viewing with limited interactivity
  • The commercial license costs approximately £426 per seat per year for large businesses
Unreal Engine
Use Cases in Inclusive Urban Design:
  • Advanced co-creation sessions with AI avatars
  • Community co-design games simulating user roles
  • Scenario-based planning simulations
  • Collaborative critique workshops
  • Public VR installations for participatory visioning
  • Postgraduate teaching in digital twin urbanism
  • Research on social dynamics in urban environments
Strengths:
  • Enables photorealistic rendering through advanced lighting systems, including Lumen global illumination and path tracing
  • Supports real-time design modifications and interactive spatial simulations via Blueprint scripting
  • Integrates Convai AI to enable conversational avatars, facilitating scenario-based walkthroughs and empathic design evaluations.
  • Supports Cesium plugin for geospatial and terrain accuracy
  • Enables multi-user VR sessions for collaborative design
  • Advanced material editing and environment simulation tools.
  • MetaHuman avatars for enhanced embodiment and realism.
  • Capable of integrating AI-driven behavior simulations, such as pedestrian dynamics and environmental responsiveness.
  • Datasmith compatibility for seamless BIM/CAD imports (a minimal editor-scripting sketch follows this platform's entry)
  • Extensible with external AI tools and photogrammetry workflows, enabling hybrid integration for smart city modeling
  • Free to use for students, educators, and small businesses (earning under 1M USD annually)
Weaknesses and Limitations:
  • Requires high-performance hardware for smooth real-time rendering and VR operation
  • Lacks browser-based deployment, limiting accessibility for remote or non-technical users
  • Project files are large and need standalone packaging, restricting real-time web-based collaboration
  • Steep learning curve due to advanced coding requirements and technical setup
  • No native tools for inclusive avatar diversity (e.g., disability representation) without external creation
  • High maintenance overhead for updates, plugin compatibility, and server-side integration
  • AI features require separate configuration and may incur additional costs.
  • Not ideal for lightweight participatory workshops or rapid prototyping due to technical complexity
  • The commercial license costs approximately £1770 per seat per year for large businesses
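To make the scripted import route mentioned for Unreal Engine concrete, the sketch below uses the Unreal Editor's built-in Python scripting plugin to batch-import an exported model; the file path and destination folder are illustrative assumptions, and a production workflow would typically use the Datasmith pipeline noted in the table rather than a generic FBX task.

# Minimal sketch: importing exported FBX geometry into an Unreal project with
# the Editor's Python scripting plugin. Paths and names are illustrative only.
import unreal

def import_model(fbx_path: str, destination: str = "/Game/CampusModel") -> None:
    """Create and run an automated asset-import task for one FBX file."""
    task = unreal.AssetImportTask()
    task.set_editor_property("filename", fbx_path)        # source file on disk
    task.set_editor_property("destination_path", destination)
    task.set_editor_property("automated", True)           # suppress import dialogs
    task.set_editor_property("save", True)                # save imported assets
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

# Example: import one building volume exported from the urban design model.
import_model("C:/exports/loughborough_library.fbx")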
Hubs Instance
Use Cases in Inclusive Urban Design:
  • Remote teaching and critiques
  • Immersive focus groups and interviews
  • Urban storytelling and walk-alongs for memory mapping
  • Open-access VR spaces for community workshops
  • Inclusive stakeholder meetings with custom avatars
Strengths:
  • Browser-based and accessible via WebXR, requiring no software installation, which enhances remote accessibility and inclusivity
  • Supports multi-user collaboration in real-time, enabling synchronous engagement among dispersed stakeholders.
  • Facilitates inclusive design evaluation through customizable avatars representing diverse user perspectives (e.g., disability, mobility)
  • Allows real-time object import and spatial annotation, supporting iterative design discussions and prototyping
  • Integrates 360° imagery, spatial audio, and media sharing, enriching environmental context and stakeholder engagement.
  • Used effectively in educational settings, such as open days and virtual urban design studios, enhancing participatory learning
  • Enables fly-through navigation and virtual cameras, supporting spatial understanding in large-scale urban models
  • Compatible with PDF and presentation uploads, allowing design documentation to be shared and discussed within the VR space
Weaknesses and Limitations:
  • Official support discontinued by Mozilla, requiring technical expertise to self-host and maintain custom instances
  • Limited graphical fidelity and rendering quality, especially for complex urban models, due to WebXR performance constraints
  • AI features rely on third-party plugins or external embedding
  • Limited environmental realism, with basic lighting and material options compared to game engines
  • Requires extensive optimization via Blender for large models, increasing setup time and technical complexity (a minimal Blender scripting sketch follows this entry)
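As flagged in the final limitation above, heavy models generally need decimation before they run smoothly in a self-hosted Hubs instance; the fragment below is a minimal Blender Python (bpy) sketch of that optimization step, with the file names and the 30% decimation ratio chosen purely for illustration.

# Minimal sketch: decimating a heavy campus model in Blender and exporting it
# as GLB for a self-hosted Hubs instance. File names and ratio are illustrative.
import bpy

bpy.ops.import_scene.fbx(filepath="campus_model.fbx")  # assumed source export

for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
        mod.ratio = 0.3  # keep roughly 30% of the faces
        bpy.context.view_layer.objects.active = obj
        bpy.ops.object.modifier_apply(modifier=mod.name)

# GLB keeps meshes, materials, and textures in one WebXR-friendly file.
bpy.ops.export_scene.gltf(filepath="campus_model_web.glb", export_format='GLB')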
FrameVR
Use Cases in Inclusive Urban Design:
  • Public consultation labs during planning phases
  • Urban design symposiums with interactive scenes
  • Educational co-design studios
  • Participatory events during local council planning
  • Focus groups exploring accessibility narratives
  • Remote VR community planning charrettes
  • Large-scale planning workshops
Strengths:
  • WebXR-based and browser-accessible, allowing multi-user collaboration without the need for software installation or powerful hardware
  • Integrates AI-driven tools, including text-to-3D generation, image-to-3D conversion, AI chatbot avatars, and AI-generated skyboxes
  • Includes Gaussian splatting support (SPZ and SPLAT files) for high-fidelity photogrammetric visualization with optimized performance (a minimal point-cloud inspection sketch follows this platform's entry)
  • Facilitates live meetings with AI-generated transcripts, enhancing documentation of stakeholder discussions
  • Features embedded Google Street View, enabling contextual urban analysis directly within the VR environment
  • Fly mode and spatial audio support intuitive navigation and immersive communication in large-scale digital twins
  • Free tier includes up to 3 frames with essential features, supporting a maximum of 8 users per frame
  • The Pro version supports up to 100 concurrent users in a single frame, enabling large-scale collaborative sessions
Weaknesses and Limitations:
  • Limited visual fidelity due to browser-based rendering, which restricts the realism achievable compared to engine-based platforms like Unreal
  • Does not support real-time 3D object manipulation with physics (e.g., gravity, grabbing) like more advanced VR platforms
  • Avatar systems are limited to predefined types (e.g., Ready Player Me, robot, human) and lack full embodiment features such as expressive animations and hand tracking
  • Limited AI behavior modeling
  • Access to full features, higher user capacity, and branding options requires higher-tier plans (e.g., Pro at 200 USD/month or Enterprise starting at 5000 USD), making advanced use costly for small organizations
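Because point-cloud and splat assets (PLY, SPZ, SPLAT) are usually checked for size before being uploaded to a browser-based scene, the following minimal sketch uses the plyfile library to inspect a PLY export; the file name and point budget are assumptions and do not reflect FrameVR's actual limits.

# Minimal sketch: inspecting a PLY point cloud / splat export before uploading
# it to a browser-based scene. File name and point budget are illustrative.
from plyfile import PlyData

POINT_BUDGET = 1_500_000  # assumed budget for comfortable browser rendering

ply = PlyData.read("campus_courtyard_scan.ply")
vertices = ply["vertex"]

print(f"Points in scan: {vertices.count:,}")
print(f"Per-point attributes: {vertices.data.dtype.names}")

if vertices.count > POINT_BUDGET:
    print("Scan exceeds the assumed budget; consider downsampling before upload.")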
ShapesXR
Use Cases in Inclusive Urban Design:
  • Sketch-based urban concept development
  • Participatory prototyping in inclusive design studios
  • Design thinking workshops in VR
  • Feedback loops via Holonotes in teaching contexts
  • Co-design activities simulating multi-user decision-making
  • Remote collaboration on urban interventions
Strengths:
  • VR-native platform designed for real-time collaborative spatial design and prototyping
  • Intuitive sketching, procedural shape creation, and material editing tools support early-stage ideation
  • Multi-scene storyboard system enables iteration and presentation of sequential design proposals
  • Holonotes feature allows recording of voice annotations and avatar movement for asynchronous feedback
  • Browser-based editor allows participation from non-VR users, enhancing accessibility in collaborative workflows
  • Supports export to Unity and Unreal Engine via glTF and USDZ for further AI-enhanced development (a minimal export-inspection sketch follows this platform's entry)
Weaknesses and Limitations:
  • Lacks built-in AI functionalities such as generative design, crowd simulation, or AI-driven interaction
  • No support for real-time behavioral simulations or dynamic data visualization within the VR environment
  • Limited avatar options and absence of expressive embodiment features like facial animation or hand tracking
  • Supports a maximum of 8 simultaneous users per session, which may constrain larger-scale co-design workshops
  • High-fidelity models require pre-optimization, and complex environments may exceed import size limits
  • Exported content must be transferred to Unity or Unreal for advanced simulation, adding workflow complexity
  • Business and Enterprise plans are priced via quotation only; Enterprise access can exceed 1000 USD per user annually, limiting affordability for small teams or academic users
  • The free version allows only one editable space and restricted import/export capabilities, which limits design scope for extended projects
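To illustrate the hand-off step noted in the strengths above, this final minimal sketch uses the trimesh library to sanity-check a GLB exported from a VR design session before it is imported into Unity or Unreal for AI-enhanced development; the file name is an assumed placeholder.

# Minimal sketch: sanity-checking a GLB exported from a VR design session
# before importing it into Unity/Unreal. The file name is an assumed placeholder.
import trimesh

scene = trimesh.load("shapesxr_streetscape_export.glb", force="scene")

total_faces = sum(len(geom.faces) for geom in scene.geometry.values())
print(f"Meshes: {len(scene.geometry)}")
print(f"Total triangles: {total_faces:,}")
print(f"Bounding-box extents (scene units): {scene.extents}")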