Search Results (98)

Search Parameters:
Keywords = spatial reality display

20 pages, 8484 KB  
Article
Differentiable Automated Design of Automotive Freeform AR-HUD Optical Systems
by Chengxiang Fan, Jihong Zheng, Xinjun Wan, Xiaoxiao Wei and Yunfeng Nie
Photonics 2026, 13(4), 337; https://doi.org/10.3390/photonics13040337 - 30 Mar 2026
Abstract
The automotive augmented reality head-up display (AR-HUD) system projects critical driving information directly into the driver’s line of sight, enhancing driving safety, user experience, and navigation efficiency. However, due to the intrinsic asymmetry of vehicle windshields, existing optical configurations are difficult to use as effective design starting points. The asymmetric transmission region of the windshield causes the AR-HUD optical system to deviate significantly from the YOZ plane, increasing the complexity of system design and optimization. To address these challenges, this paper proposes an automated design method for automotive AR-HUD optical systems. Given the windshield geometry and system design specifications, a normal-guided iterative construction method is first employed to generate a high-performance initial optical structure with low distortion. Subsequently, differentiable ray tracing combined with optimization algorithms is employed to further improve system performance. Based on the proposed method, an AR-HUD optical system with a 130 mm × 50 mm eye-box and a 13° × 4° field of view was designed. The design results indicate that the maximum optical distortion is 0.51%. At five sampled eye positions within the eye-box, the MTF exceeds 0.5 at a spatial frequency of 6 lp/mm, and the dynamic distortion remains below 5.36′. Finally, a complete experimental prototype was established, and the experimental results verified the feasibility and effectiveness of the proposed automated design method. Full article
(This article belongs to the Special Issue Emerging Topics in Freeform Optics)
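The key idea in the abstract, optimizing an optical system by differentiating through the ray trace, can be illustrated with a toy 2D sketch. This is not the authors' method (their system involves freeform surfaces and a windshield model); it merely shows the optimize-through-the-tracer pattern: trace rays off a parabolic mirror z = c·x², measure the spot size at a target plane z = f, and descend a finite-difference gradient of that merit function. For a parabola the analytic optimum is c = 1/(4f), so the loop's result can be checked.

```python
import numpy as np

def spot_size(c, f=1.0, xs=np.linspace(-0.8, 0.8, 9)):
    """Trace vertical rays down onto the mirror z = c*x**2, reflect them,
    and return the mean squared lateral miss at the plane z = f."""
    s2 = 1.0 + 4.0 * c**2 * xs**2          # |(-2cx, 1)|^2, squared normal length
    rx = -4.0 * c * xs / s2                # reflected direction, x component
    rz = (1.0 - 4.0 * c**2 * xs**2) / s2   # reflected direction, z component
    t = (f - c * xs**2) / rz               # parameter to reach the plane z = f
    return float(np.mean((xs + t * rx) ** 2))

# Gradient descent on the curvature parameter via central finite differences.
c, lr, h = 0.1, 0.02, 1e-5
for _ in range(2000):
    grad = (spot_size(c + h) - spot_size(c - h)) / (2 * h)
    c -= lr * grad

print(round(c, 3))  # converges toward the analytic focus condition c = 1/(4f) = 0.25
```

Real differentiable ray tracers compute exact gradients by automatic differentiation rather than finite differences, but the optimization structure is the same.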
21 pages, 3768 KB  
Article
Spatial Plane Positioning of AR-HUD Graphics: Implications for Driver Inattentional Blindness in Navigation and Collision Warning Scenarios
by Menlong Ye and Jun Yin
Electronics 2025, 14(23), 4768; https://doi.org/10.3390/electronics14234768 - 4 Dec 2025
Viewed by 779
Abstract
In-vehicle Augmented Reality Head-Up Displays (AR-HUDs) enhance driving performance and experience by presenting critical information such as navigation cues and collision warnings. Although many studies have investigated the efficacy of AR-HUD navigation and collision warning interface designs, existing research has overlooked the critical interplay between graphic spatial positioning and safety risks arising from inattentional blindness. This study employed a single-factor within-subjects design, with Experiment 1 and Experiment 2 separately examining the impact of the spatial planar position (horizontal planar position, vertical planar position, mixed planar position) of AR-HUD navigation graphics and collision warning graphics on drivers’ inattentional blindness. The results revealed that the spatial planar position of AR-HUD navigation graphics has no significant effect on inattentional blindness behavior or reaction time. However, the horizontal planar position yielded the best user experience with low workload, followed by the mixed planar position. For AR-HUD collision warning graphics, their spatial planar position does not significantly influence the frequency of inattentional blindness. From the perspectives of workload and user experience, the vertical planar position of collision warning graphics provides the best experience with the lowest workload, while the mixed planar position demonstrates superior hedonic qualities. Overall, this study offers design guidelines for in-vehicle AR-HUD interfaces. Full article
29 pages, 11221 KB  
Article
A Spatio-Temporal Overlap Narrative Experience Model for Archaeological Site Museums: A Case Study of the Panlongcheng Archaeological Site Museum
by Qi Hu, Xiao He, Tianyu Wei and Yi Yuan
Buildings 2025, 15(21), 3956; https://doi.org/10.3390/buildings15213956 - 2 Nov 2025
Cited by 1 | Viewed by 2167
Abstract
In the global trend of museums transitioning from static displays to digital, narrative, and experiential forms, heritage museums face challenges such as weakened cultural identity, insufficient emotional resonance, and the separation of reality and fiction. To address these issues, this study, based on the theory of spatial narrative, introduces the tripartite theory of spatial production to jointly construct a narrative experience model with overlapping time and space. By expanding the dimensions of time and space, it achieves a deep correspondence of virtual experiences, providing guidance for the virtual-real integration experience design of heritage museums. Methodologically, a combined approach of FAHP1-spatial syntax-FAHP2-FCE is adopted. Taking the Panlongcheng Heritage Museum as an example, with user experience needs as the starting point and the analysis results of the physical exhibition space as the basis, the heritage culture theme serves as the narrative thread, integrating into an experiential model with contextual virtual-real fusion. Finally, the design practice is verified through FCE. The results show that this model can optimize the virtual-real integration experience, enhance users’ cultural identity and emotional resonance, and provide beneficial insights for the digital and experiential transformation of heritage museums. Full article
15 pages, 8493 KB  
Article
Phase-Retrieval Algorithm for Hololens Resolution Analysis in a Sustainable Photopolymer
by Tomás Lloret, Víctor Navarro-Fuster, Marta Morales-Vidal and Inmaculada Pascual
Polymers 2025, 17(20), 2732; https://doi.org/10.3390/polym17202732 - 11 Oct 2025
Cited by 1 | Viewed by 894
Abstract
In this paper, the iterative Gerchberg–Saxton (GS) phase-retrieval algorithm is employed to reconstruct the amplitude spread function (ASF) of hololenses (HLs) recorded on a sustainable PVA/acrylate-based photopolymer, Biophotopol, when working with a CCD sensor. The main objective of this work is to characterize the spatial resolution of HLs, which are key components in a wide range of optical systems, including augmented reality (AR) glasses, combined information displays, and holographic solar concentrators. The GS algorithm, known for its efficiency in phase retrieval without prior knowledge of the phase of the optical system, is used to reconstruct the ASF, which is critical for mitigating information loss during imaging. Spatial resolution is quantified by convolving the ASFs obtained with two resolution tests (objective and subjective) and analyzing the resulting image using a CCD sensor. The convolution process allows an accurate assessment of lens performance, highlighting the resolution limits of manufactured lenses. The results show that the iterative GS algorithm provides a reliable method to improve image quality by recovering phase and amplitude information that might otherwise be lost, especially when using CCD or CMOS sensors. In addition, the recorded hololenses exhibit a spatial resolution of 8.9 lp/mm when evaluated with the objective Siemens star chart, and 30 cycles/degree when evaluated with the subjective Random E visual acuity test, underscoring the ability of Biophotopol-based HLs to meet the performance requirements of advanced optical applications. This work contributes to the development of sustainable high-resolution holographic lenses for modern imaging technologies, offering a promising alternative for future optical systems. Full article
(This article belongs to the Special Issue Advances in Photopolymer Materials: Holographic Applications)
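The Gerchberg–Saxton loop at the heart of this paper is compact: alternate between the near plane and the far (Fourier) plane, imposing the measured amplitude in each while carrying the running phase estimate forward. A minimal 1D NumPy sketch of the classic algorithm (illustrative only; the paper applies it to CCD images of hololenses, and the Gaussian aperture and quadratic test phase below are our own stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.linspace(-3, 3, n)

# Ground truth: Gaussian aperture amplitude with an unknown smooth phase.
amp_near = np.exp(-x**2)
phase_true = 0.5 * x**2
amp_far = np.abs(np.fft.fft(amp_near * np.exp(1j * phase_true)))  # "measured" far field

def far_error(field):
    """Relative mismatch between the field's far-field amplitude and the measurement."""
    return np.linalg.norm(np.abs(np.fft.fft(field)) - amp_far) / np.linalg.norm(amp_far)

# Gerchberg-Saxton: enforce the measured amplitude in both planes, keep the phases.
field = amp_near * np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # random initial phase
err0 = far_error(field)
for _ in range(300):
    far = np.fft.fft(field)
    far = amp_far * np.exp(1j * np.angle(far))       # impose far-field amplitude
    near = np.fft.ifft(far)
    field = amp_near * np.exp(1j * np.angle(near))   # impose near-field amplitude

assert far_error(field) < err0  # error-reduction property: the mismatch shrinks
```

The recovered phase can then be used, as in the paper, to build the amplitude spread function and convolve it with a resolution target.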
25 pages, 1278 KB  
Review
Eye-Tracking Advancements in Architecture: A Review of Recent Studies
by Mário Bruno Cruz, Francisco Rebelo and Jorge Cruz Pinto
Buildings 2025, 15(19), 3496; https://doi.org/10.3390/buildings15193496 - 28 Sep 2025
Cited by 1 | Viewed by 2938
Abstract
This Scoping Review (ScR) synthesizes advances in architectural eye-tracking (ET) research published between 2010 and 2024. Drawing on 75 peer-reviewed studies that met clear inclusion criteria, it charts the field’s rapid expansion, from only 20 experiments before 2018 to more than 45 new investigations in the three years thereafter, situating these developments within the longer historical evolution of ET hardware and analytical paradigms. The review maps 13 recurrent areas of application, focusing on design evaluation, wayfinding and spatial navigation, end-user experience, and architectural education. Across these domains, ET reliably reveals where occupants focus, for how long, and in what sequence, providing objective evidence that complements designer intuition and conventional post-occupancy surveys. Experts and novices may display distinct gaze signatures; for example, architects spend longer fixating on contextual and structural cues, whereas lay users dwell on decorative details, highlighting possible pedagogical opportunities. Despite these benefits, persistent challenges include data loss in dynamic or outdoor settings, calibration drift, single-user hardware constraints, and the need to triangulate gaze metrics with cognitive or affective measures. Future research directions emphasize integrating ET with virtual or augmented reality (VR/AR) to validate designs interactively, improving mobile tracking accuracy, and establishing shared datasets to enable replication and meta-analysis. Overall, the study demonstrates that ET is maturing into an indispensable, evidence-based lens for creating more intuitive, legible, and human-centered architecture. Full article
(This article belongs to the Special Issue Emerging Trends in Architecture, Urbanization, and Design)
21 pages, 4655 KB  
Article
A Geometric Distortion Correction Method for UAV Projection in Non-Planar Scenarios
by Hao Yi, Sichen Li, Feifan Yu, Mao Xu and Xinmin Chen
Aerospace 2025, 12(10), 870; https://doi.org/10.3390/aerospace12100870 - 27 Sep 2025
Viewed by 909
Abstract
Conventional projection systems typically require a fixed spatial configuration relative to the projection surface, with strict control over distance and angle. In contrast, UAV-mounted projectors overcome these constraints, enabling dynamic, large-scale projections onto non-planar and complex environments. However, such flexible scenarios introduce a key challenge: severe geometric distortions caused by intricate surface geometry and continuous camera–projector motion. To address this, we propose a novel image registration method based on global dense matching, which estimates the real-time optical flow field between the input projection image and the target surface. The estimated flow is used to pre-warp the image, ensuring that the projected content appears geometrically consistent across arbitrary, deformable surfaces. The core idea of our method lies in reformulating the geometric distortion correction task as a global feature matching problem, effectively reducing 3D spatial deformation into a 2D dense correspondence learning process. To support learning and evaluation, we construct a hybrid dataset that covers a wide range of projection scenarios, including diverse lighting conditions, object geometries, and projection contents. Extensive simulation and real-world experiments show that our method achieves superior accuracy and robustness in correcting geometric distortions in dynamic UAV projection, significantly enhancing visual fidelity in complex environments. This approach provides a practical solution for real-time, high-quality projection in UAV-based augmented reality, outdoor display, and aerial information delivery systems. Full article
(This article belongs to the Section Aeronautics)
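The reformulation the abstract describes, collapsing 3D geometric correction into a 2D dense correspondence, can be shown with a toy backward warp (the per-row shift "flow" below is hypothetical, standing in for the paper's learned optical flow field): if the surface displaces each projected row by a known offset, pre-warping the image with the inverse offset makes the observed projection match the intended content.

```python
import numpy as np

def project(img, shift):
    """Toy 'surface distortion': row y of the projection appears shifted
    right by shift[y] pixels (with wraparound, for simplicity)."""
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        out[y] = np.roll(img[y], shift[y])
    return out

def prewarp(img, shift):
    """Apply the inverse of the estimated flow so projection cancels it."""
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        out[y] = np.roll(img[y], -shift[y])
    return out

rng = np.random.default_rng(1)
target = rng.integers(0, 256, size=(8, 16), dtype=np.uint8)  # intended image
flow = rng.integers(-3, 4, size=8)      # stand-in for the estimated flow field

# Pre-warped content, after distortion by the surface, equals the target.
assert np.array_equal(project(prewarp(target, flow), flow), target)
```

In the paper the flow is a full dense 2D field estimated in real time by global feature matching, and the warp is a sub-pixel resampling rather than an integer roll, but the pre-warp-then-project composition is the same.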
13 pages, 2058 KB  
Article
Development of a Spatial Alignment System for Interacting with BIM Objects in Mixed Reality
by Jaehong Cho, Sungpyo Kim and Sanghyeok Kang
Appl. Sci. 2025, 15(17), 9713; https://doi.org/10.3390/app15179713 - 4 Sep 2025
Cited by 2 | Viewed by 1051
Abstract
This study proposes a Two-points Spatial Alignment System (TSAS) for accurate positioning of Building Information Modeling (BIM) objects in Mixed Reality (MR) environments at construction sites. Conventional spatial alignment methods present limitations: marker-based approaches require precise marker installation and setup in predefined locations, while drag-based methods rely considerably on user manipulation skills. TSAS utilizes Y-axis rotation and vector-based scaling mechanisms to facilitate alignment processes. Through usability evaluation with 30 participants in MR environments, TSAS demonstrated a performance with a 50.3 mm alignment error, compared to marker-based (64.0 mm) and drag methods (199.7 mm). A one-way Analysis of Variance (ANOVA) confirmed that these differences in accuracy were statistically significant (p < 0.001). Notably, TSAS meets the Korean building regulation’s tolerance while maintaining consistent accuracy in indoor environments. Although the marker method showed better efficiency in operation time, this evaluation excluded initial installation time requirements. The usability evaluation suggests this approach could be beneficial for BIM visualization and review processes in construction settings. Future research will focus on validating the system’s performance in diverse construction environments, including larger buildings and complex sites. Full article
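The two-point idea can be sketched concretely (a hedged reconstruction, since the paper's exact formulas are not given here): two picked point pairs determine a yaw rotation, a uniform scale, and a translation in the ground plane, read off from the vectors between the two points. Working in 2D ground-plane coordinates (x, z), where the plane rotation corresponds to the Y-axis rotation the system uses:

```python
import numpy as np

def two_point_align(model, world):
    """Similarity transform in the ground plane from two corresponding
    point pairs. Returns (theta, scale, R, t) with world ~ s * R @ model + t."""
    vm = model[1] - model[0]
    vw = world[1] - world[0]
    theta = np.arctan2(vw[1], vw[0]) - np.arctan2(vm[1], vm[0])  # yaw angle
    scale = np.linalg.norm(vw) / np.linalg.norm(vm)              # uniform scale
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = world[0] - scale * R @ model[0]                          # translation
    return theta, scale, R, t

model = np.array([[0.0, 0.0], [4.0, 0.0]])   # two anchor points in the BIM model
world = np.array([[1.0, 2.0], [1.0, 10.0]])  # the same points picked on site

theta, s, R, t = two_point_align(model, world)
aligned = (s * (R @ model.T)).T + t
assert np.allclose(aligned, world)  # both anchors land exactly on the picks
```

With exactly two pairs the fit is exact at the picked points; residual error at other BIM vertices then reflects pick accuracy, which is what the study's 50.3 mm figure measures.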
13 pages, 1145 KB  
Communication
Fighting STEM Stereotypes in Adolescence: The Role of Spatial Skills, Identity, and Digital Interventions
by Victoria D. Chamizo
Virtual Worlds 2025, 4(3), 36; https://doi.org/10.3390/virtualworlds4030036 - 8 Aug 2025
Viewed by 1395
Abstract
Traditionally, formal education favored boys, while girls were relegated to the domestic sphere. This was the case for centuries, and girls’ specific cognitive needs were ignored. In Western countries, this has generated significant educational problems, especially in the more technical subjects, with which girls often do not identify and from which they frequently exclude themselves on the grounds that “it is not for them” (i.e., they tend to hold a strong stereotype, a false belief, about these disciplines). The consequences are now plain: in many Western countries, the low percentage of women in technical careers (such as Physics, Engineering, and Computer Science) is alarming. Is it possible to change stereotypes? This article addresses this complex issue, placing particular emphasis on the learning of spatial abilities, which are important across all STEM careers (science, technology, engineering, and mathematics). The study concludes with examples of other stereotypes (mainly cultural) that have been eliminated or significantly reduced with the help of virtual reality (VR) and artificial intelligence (AI). Could the same be achieved in the spatial domain? Full article
13 pages, 1329 KB  
Article
The Complex Interaction Between the Sense of Presence, Movement Features, and Performance in a Virtual Reality Spatial Task: A Preliminary Study
by Tommaso Palombi, Andrea Chirico, Laura Mandolesi, Maurizio Mancini, Noemi Passarello, Erica Volta, Fabio Alivernini and Fabio Lucidi
Electronics 2025, 14(15), 3143; https://doi.org/10.3390/electronics14153143 - 7 Aug 2025
Cited by 1 | Viewed by 1159
Abstract
The present study explores the innovative application of virtual reality (VR) in conducting the Radial Arm Maze (RAM) task, a performance-based test traditionally utilized for assessing spatial memory. This study aimed to develop a gamified version of the RAM implemented in immersive VR and investigate the interaction between the sense of presence, movement features, and performance within the RAM. We developed software supporting a head-mounted display (HMD), addressing prior limitations in the scientific literature concerning user interaction, data collection accuracy, operational flexibility, and immersion level. This study involved a sample of healthy young adults who engaged with the immersive VR version of the RAM, examining the influence of VR experience variables (sense of presence, motion sickness, and usability) on RAM performance. Notably, it also introduced the collection and analysis of movement features within the VR environment to ascertain their impact on performance outcomes and their relationship with VR experience variables. The VR application developed is notable for its user-friendliness, adaptability, and integration capability with physiological monitoring devices, marking a significant advance in utilizing VR for cognitive assessments. Findings from our study underscore the importance of VR experience factors in RAM performance, highlighting how a heightened sense of presence can predict better performance, thereby emphasizing engagement and immersion as crucial for task success in VR settings. Additionally, this study revealed how movement parameters within the VR environment, specifically speed and directness, significantly influence RAM performance, offering new insights into optimizing VR experiences for enhanced task performance. Full article
(This article belongs to the Special Issue Augmented Reality, Virtual Reality, and 3D Reconstruction)
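The movement features highlighted above, speed and directness, are easy to define over a sampled position trajectory. A plausible formulation (our assumption; the paper may define them differently): speed as path length over elapsed time, and directness as straight-line displacement divided by path length, so 1.0 means a perfectly direct path.

```python
import numpy as np

def movement_features(points, dt):
    """points: (n, 3) positions sampled every dt seconds.
    Returns (mean speed, directness in [0, 1])."""
    steps = np.diff(points, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()      # total distance travelled
    straight = np.linalg.norm(points[-1] - points[0])   # start-to-end displacement
    speed = path_len / (dt * (len(points) - 1))
    directness = straight / path_len if path_len > 0 else 1.0
    return speed, directness

line = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float)
speed, directness = movement_features(line, dt=0.5)
assert directness == 1.0 and abs(speed - 2.0) < 1e-9

detour = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 0]], float)
_, d2 = movement_features(detour, dt=0.5)
assert d2 < 1.0  # wandering paths score lower directness
```

Features like these, computed per maze arm visit, are the kind of movement parameters the study relates to RAM performance.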
12 pages, 8520 KB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Cited by 4 | Viewed by 4277
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device and identify their possible improvements for a fundamental gesture like pinching and fine moving of objects using Microsoft HoloLens 2. Because pinching is an essential fine motor skill, haptic feedback integrated with augmented reality can notify the user that a gesture has been recognized and compensate for the misalignment between the tracked fingertip and the virtual object, improving spatial precision. In our experiments, the participants’ median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). While both haptic devices improved participants’ precision with respect to the bare-hands case, participants achieved with the full glove median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s), and with the haptic rings they achieved even better performance, with median errors of 2.0 mm (IQR = 2.0 mm) in a median time of only 6.0 s (IQR = 5.0 s).
Our outcomes suggest that simple devices like the described haptic rings can outperform glove-like devices in accuracy, execution time, and wearability. The haptic glove likely interferes with hand and finger tracking on the Microsoft HoloLens 2. Full article
18 pages, 5112 KB  
Article
Gaze–Hand Steering for Travel and Multitasking in Virtual Environments
by Mona Zavichi, André Santos, Catarina Moreira, Anderson Maciel and Joaquim Jorge
Multimodal Technol. Interact. 2025, 9(6), 61; https://doi.org/10.3390/mti9060061 - 13 Jun 2025
Cited by 1 | Viewed by 1497
Abstract
As head-mounted displays (HMDs) with eye tracking become increasingly accessible, the need for effective gaze-based interfaces in virtual reality (VR) grows. Traditional gaze- or hand-based navigation often limits user precision or impairs free viewing, making multitasking difficult. We present a gaze–hand steering technique that combines eye tracking with hand pointing: users steer only when gaze aligns with a hand-defined target, reducing unintended actions and enabling free look. Speed is controlled via either a joystick or a waist-level speed circle. We evaluated our method in a user study (n = 20) across multitasking and single-task scenarios, comparing it to a similar technique. Results show that gaze–hand steering maintains performance and enhances user comfort and spatial awareness during multitasking. Our findings support using gaze–hand steering in gaze-dominant VR applications requiring precision and simultaneous interaction. Our method significantly improves VR navigation in gaze–dominant, multitasking-intensive applications, supporting immersion and efficient control. Full article
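The trigger condition described above, steer only when gaze aligns with the hand-defined target, amounts to a cone test between two direction vectors. A minimal sketch (the threshold angle is our assumption, not a value from the paper):

```python
import numpy as np

def should_steer(gaze_dir, hand_target_dir, max_angle_deg=10.0):
    """True when the gaze ray lies within a cone around the hand-pointed
    target direction; otherwise the user is free-looking and no steering occurs."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    h = hand_target_dir / np.linalg.norm(hand_target_dir)
    cos_angle = float(np.clip(np.dot(g, h), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg

# Gaze nearly on the hand-defined target: steering engages.
assert should_steer(np.array([0, 0, 1.0]), np.array([0.05, 0, 1.0]))
# Gaze far off the target (~45 degrees): free look, no unintended steering.
assert not should_steer(np.array([0, 0, 1.0]), np.array([1.0, 0, 1.0]))
```

Gating locomotion on this agreement is what lets the technique suppress unintended actions while leaving the eyes free for the secondary task.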
26 pages, 1812 KB  
Article
Evaluating Virtual Game Design for Cultural Heritage Interpretation: An Exploratory Study on arkeOyun
by Sevde Güner and Leman Figen Gül
Heritage 2025, 8(6), 208; https://doi.org/10.3390/heritage8060208 - 4 Jun 2025
Cited by 3 | Viewed by 4523
Abstract
The interpretation of archaeological heritage encounters inherent challenges due to the fragmentation and contextual loss of the physical site. Virtual reality has emerged as an innovative medium for enhancing user engagement and promoting meaningful dissemination of culture. This exploratory study investigates the design and preliminary expert-based evaluation of arkeOyun, a virtual reality game created to better understand archaeological sites’ spatial and cultural significance, by sampling the Kültepe Archaeological Site. The aim of this study is to evaluate the usefulness of virtual game-based approaches in the dissemination of cultural heritage and user interaction, emphasising spatial clarity, narrative integration, and immersive engagement. Our study incorporates qualitative and quantitative methods, utilising concurrent think-aloud and heuristic evaluation with participants who were selected due to their expertise in heritage, design, and human–computer interaction domains. Participants engaged with arkeOyun via a head-mounted display, and their real-time comments and post-experience evaluations were systematically evaluated. Results indicate that although participants responded positively to the game’s immersive design, interface simplicity, and spatial organisation, notable deficiencies were seen in narrative coherence, emotional resonance, and multimodal feedback. Navigation and the presentation of informative content were seen as critical areas requiring improvement. The data triangulation revealed both consistent and varying assessments, highlighting the need for context-specific support, varied task structures, and emotionally compelling narratives for enhanced interpretation of cultural significance. The findings of our study illustrate the potential of virtual reality games as a medium for cultural heritage interpretation via arkeOyun. 
For experiences to evolve from immersive simulations to major interpretative platforms, it is vital to integrate narrative frameworks, multimodal scaffolding, and user-centred interaction tactics more deeply. The results of this exploratory pilot study present preliminary findings on integrating virtual reality games in archaeological heritage interpretation and contribute to further projects. Full article
(This article belongs to the Special Issue Heritage as a Design Resource for Virtual Reality)
30 pages, 7559 KB  
Article
Deciphering Socio-Spatial Integration Governance of Community Regeneration: A Multi-Dimensional Evaluation Using GBDT and MGWR to Address Non-Linear Dynamics and Spatial Heterogeneity in Life Satisfaction and Spatial Quality
by Hong Ni, Jiana Liu, Haoran Li, Jinliu Chen, Pengcheng Li and Nan Li
Buildings 2025, 15(10), 1740; https://doi.org/10.3390/buildings15101740 - 20 May 2025
Cited by 2 | Viewed by 1678
Abstract
Urban regeneration is pivotal to sustainable development, requiring innovative strategies that align social dynamics with spatial configurations. Traditional paradigms increasingly fail to tackle systemic challenges—neighborhood alienation, social fragmentation, and resource inequality—due to their inability to integrate human-centered spatial governance. This study addresses these shortcomings with a novel multidimensional framework that merges social perception (life satisfaction) analytics with spatial quality (GIS-based) assessment. At its core, we utilize geospatial and machine learning models, deploying an ensemble of Gradient Boosted Decision Trees (GBDT), Random Forest (RF), and multiscale geographically weighted regression (MGWR) to decode nonlinear socio-spatial interactions within Suzhou’s community environmental matrix. Our findings reveal critical intersections where residential density thresholds interact with commercial accessibility patterns and transport network configurations. Notably, we highlight the scale-dependent influence of educational proximity and healthcare distribution on community satisfaction, challenging conventional planning doctrines that rely on static buffer-zone models. Through rigorous spatial econometric modeling, this research uncovers three transformative insights: (1) Urban environment exerts a dominant influence on life satisfaction, accounting for 52.61% of the variance. Air quality emerges as a critical determinant, while factors such as proximity to educational institutions, healthcare facilities, and public landmarks exhibit nonlinear effects across spatial scales. (2) Housing price growth in Suzhou displays significant spatial clustering, with a Moran’s I of 0.130. Green space coverage positively correlates with price appreciation (β = 21.6919 ***), whereas floor area ratio exerts a negative impact (β = −4.1197 ***), highlighting the trade-offs between density and property value. 
(3) The MGWR model outperforms OLS in explaining housing price dynamics, achieving an R² of 0.5564 and an AICc of 11,601.1674. This suggests that MGWR captures 55.64% of pre- and post-pandemic price variations while better reflecting spatial heterogeneity. By merging community-expressed sentiment mapping with morphometric urban analysis, this interdisciplinary research pioneers a protocol for socio-spatial integrated urban transitions, one where algorithmic urbanism meets human-scale needs, not technological determinism. These findings recalibrate urban regeneration paradigms, demonstrating that data-driven socio-spatial integration is not a theoretical aspiration but an achievable governance reality. Full article
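Moran's I, the statistic used above to report spatial clustering of housing-price growth (I = 0.130), has a compact closed form: I = (n / ΣΣ w_ij) · ΣΣ w_ij z_i z_j / Σ z_i², where z are the mean-centered values and w is a spatial-weights matrix. A NumPy sketch on a toy four-area adjacency (not the Suzhou data):

```python
import numpy as np

def morans_i(values, w):
    """Global Moran's I. w: symmetric spatial-weights matrix, zero diagonal."""
    z = values - values.mean()          # mean-centered observations
    num = z @ w @ z                     # sum_ij w_ij * z_i * z_j
    return len(values) / w.sum() * num / (z @ z)

# Four areas in a row; rook adjacency chain 0-1-2-3.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)

clustered = np.array([2.0, 2.0, 0.0, 0.0])    # high values sit together
alternating = np.array([2.0, 0.0, 2.0, 0.0])  # high/low checkerboard

assert morans_i(clustered, w) > 0    # positive spatial autocorrelation
assert morans_i(alternating, w) < 0  # negative (dispersed) pattern
```

Values near zero indicate spatial randomness; the study's 0.130 is a modest but significant positive clustering of price growth.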
20 pages, 76650 KB  
Article
Enhancing Cultural Heritage Engagement with Novel Interactive Extended-Reality Multisensory System
by Adolfo Muñoz, Juan José Climent-Ferrer, Ana Martí-Testón, J. Ernesto Solanes and Luis Gracia
Electronics 2025, 14(10), 2039; https://doi.org/10.3390/electronics14102039 - 16 May 2025
Cited by 11 | Viewed by 6692
Abstract
Extended-reality (XR) tools are increasingly used to revitalise museum experiences, but typical head-mounted or smartphone solutions tend to fragment audiences and suppress the social dialogue that makes cultural heritage memorable. This article addresses that gap on two fronts. First, it proposes a four-phase design methodology—spanning artifact selection, narrative framing, tangible-interface fabrication, spatial installation, software integration, validation, and deployment—that helps curators, designers, and technologists co-create XR exhibitions in which co-presence, embodied action, and multisensory cues are treated as primary design goals rather than afterthoughts. Second, the paper reports LanternXR, a proof-of-concept built with the methodology: visitors share a 3D-printed replica of the fourteenth-century Virgin of Boixadors while wielding a tracked “camera” and a candle-like lantern that lets them illuminate, photograph, and annotate the sculpture inside a life-sized Gothic nave rendered on large 4K displays with spatial audio and responsive lighting. To validate the approach, the article presents an analytical synthesis of feedback from curators, museologists, and XR technologists, underscoring the system’s capacity to foster collaboration, deepen engagement, and broaden accessibility. The findings show how XR can move museum audiences from isolated immersion to collective, multisensory exploration.

23 pages, 4826 KB  
Article
Visualization of High-Intensity Laser–Matter Interactions in Virtual Reality and Web Browser
by Martin Matys, James P. Thistlewood, Mariana Kecová, Petr Valenta, Martina Greplová Žáková, Martin Jirka, Prokopis Hadjisolomou, Alžběta Špádová, Marcel Lamač and Sergei V. Bulanov
Photonics 2025, 12(5), 436; https://doi.org/10.3390/photonics12050436 - 30 Apr 2025
Viewed by 3926
Abstract
We present the Virtual Beamline (VBL) application, an interactive web-based platform for visualizing high-intensity laser–matter interactions using particle-in-cell (PIC) simulations, with future potential for experimental data visualization. These interactions include ion acceleration, electron acceleration, γ-flash generation, electron–positron pair production, and attosecond and spiral pulse generation. Developed at the ELI Beamlines facility, VBL integrates a custom-built WebGL engine with WebXR-based Virtual Reality (VR) support, allowing users to explore complex plasma dynamics in non-VR mode on a computer screen or in fully immersive VR mode using a head-mounted display. The application runs directly in a standard web browser, ensuring broad accessibility. VBL enhances the visualization of PIC simulations by efficiently processing and rendering four main data types: point particles, 1D lines, 2D textures, and 3D volumes. By utilizing interactive 3D visualization, it overcomes the limitations of traditional 2D representations, offering enhanced spatial understanding and real-time manipulation of visualization parameters such as time steps, data layers, and colormaps. Users can interactively explore the visualized data by moving their body or using a controller for navigation, zooming, and rotation. These interactive capabilities improve data exploration and interpretation, making VBL a valuable tool for both scientific analysis and educational outreach. The visualizations are hosted online and freely accessible on our server, providing researchers, the general public, and broader audiences with an interactive tool to explore complex plasma physics simulations. By offering an intuitive and dynamic approach to large-scale datasets, VBL enhances both scientific research and knowledge dissemination in high-intensity laser–matter physics.
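The 2D-texture path described above renders scalar PIC fields through user-selectable colormaps. The core of such a lookup, normalization followed by linear interpolation between colour stops, can be sketched in NumPy as follows; the stop colours, the min–max normalization, and the sample field are illustrative assumptions, and the actual VBL engine performs this step in a WebGL shader:

```python
import numpy as np

def apply_colormap(field, stops):
    """Map a 2D scalar field to RGB: min-max normalise to [0, 1], then
    interpolate linearly between colour stops (a CPU analogue of sampling
    a 1D colormap texture in a fragment shader)."""
    f = np.asarray(field, dtype=float)
    lo, hi = f.min(), f.max()
    t = np.zeros_like(f) if hi == lo else (f - lo) / (hi - lo)
    stops = np.asarray(stops, dtype=float)        # (k+1, 3) RGB rows in [0, 1]
    k = len(stops) - 1
    idx = np.minimum((t * k).astype(int), k - 1)  # segment index per pixel
    frac = t * k - idx                            # position within the segment
    return stops[idx] * (1 - frac[..., None]) + stops[idx + 1] * frac[..., None]

# Black -> red -> white ramp applied to a tiny hypothetical field.
rgb = apply_colormap(np.array([[0.0, 0.5], [1.0, 0.25]]),
                     [[0, 0, 0], [1, 0, 0], [1, 1, 1]])
```

Swapping the `stops` array is all that changing the colormap requires, which is why interactive viewers can switch colormaps without re-reading the underlying simulation data.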