Journal Description
Virtual Worlds is an international, peer-reviewed, open access journal on virtual, augmented, and mixed reality, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 36.6 days after submission; acceptance to publication takes 7.4 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Virtual Worlds is a companion journal of Applied Sciences.
Latest Articles
Virtual Versus Reality: A Systematic Review of Real-World Built Environment Tasks Performed in CAVEs and a Framework for Performance and Experience Evaluation
Virtual Worlds 2024, 3(4), 536-571; https://doi.org/10.3390/virtualworlds3040028 - 20 Nov 2024
Abstract
With operations in the built environment becoming increasingly data-rich (via Building Information Models and Internet of Things devices) and the rapid development of highly immersive environments, there are new opportunities for components of traditional “real-world” tasks to be undertaken in a “virtual” environment. However, an approach that compares both subjective (psychological) and objective (task-based) performance in real and virtual environments is rarely used in this context. This paper begins by introducing the industrial, technological, and psychological context of real-world and virtual tasks. A systematic review of the application of CAVE Automatic Virtual Environments (CAVEs) for “virtual” built environment tasks is conducted, and research gaps regarding the development of systems and the comparison of task environments (CAVE and real-world conditions) are identified. A theoretical framework to assess task performance is developed, and a novel practical experiment to compare participants’ psychological and decision-making performance for an identical task in the real world and in a CAVE is proposed.
Open Access Article
Mitigating Cybersickness in Virtual Reality: Impact of Eye–Hand Coordination Tasks, Immersion, and Gaming Skills
by Sokratis Papaefthymiou, Anastasios Giannakopoulos, Petros Roussos and Panagiotis Kourtesis
Virtual Worlds 2024, 3(4), 506-535; https://doi.org/10.3390/virtualworlds3040027 - 15 Nov 2024
Abstract
Cybersickness remains a significant challenge for virtual reality (VR) applications, particularly in highly immersive environments. This study examined the effects of immersion, task performance, and individual differences on cybersickness symptoms across multiple stages of VR exposure. Forty-seven participants aged 18–45 completed a within-subjects design that involved the Cybersickness in Virtual Reality Questionnaire (CSQ-VR) and the Deary–Liewald Reaction Time (DLRT) task. Cybersickness symptoms were assessed across four stages: before and after VR immersion, and before and after a 12 min rollercoaster ride designed to induce cybersickness. The results showed significant increases in symptoms following the rollercoaster ride, with partial recovery during the post-ride tasks. Eye–hand coordination tasks, performed after the ride and VR immersion, mitigated nausea as well as vestibular and oculomotor symptoms, suggesting that task engagement plays a key role in alleviating cybersickness. The key predictors of symptom severity included susceptibility to motion sickness and gaming experience, particularly proficiency in first-person shooter (FPS) games, which was associated with reduced cybersickness intensity. While task engagement reduced symptoms in the later stages, particularly nausea and vestibular discomfort, overall cybersickness levels remained elevated post-immersion. These findings underscore the importance of task timing, individual differences, and immersive experience design in developing strategies to mitigate cybersickness and enhance user experiences in VR environments.
Open Access Systematic Review
Immersive Learning: A Systematic Literature Review on Transforming Engineering Education Through Virtual Reality
by Artwell Regis Muzata, Ghanshyam Singh, Mikhail Sergeevich Stepanov and Innocent Musonda
Virtual Worlds 2024, 3(4), 480-505; https://doi.org/10.3390/virtualworlds3040026 - 5 Nov 2024
Abstract
Integrating Virtual Reality (VR) with developing technology has become crucial in today’s schools to transform in-the-moment instruction. A change in perspective has occurred because of VR, enabling teachers to create immersive learning experiences in addition to conventional classes. This paper presents a systematic literature review with an in-depth analysis of the changing environment of immersive learning. It discusses advantages and challenges, noting results from previous researchers. VR facilitates more profound knowledge and memory of complex subjects by allowing students to collaborate with digital structures, explore virtual landscapes, and participate in simulated experiments. Developing VR gear, like thin headsets and tactile feedback mechanisms, has democratised immersive engineering learning by making it more approachable and natural for a broader range of students. This study sheds light on the revolutionary potential of immersive learning via VR integration with new technologies in real-time education by examining current trends, discussing obstacles, and outlining future directions, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The study used four databases: Scopus, IEEE, Springer, and Google Scholar. During the selection process, 66 studies were selected, and 24 articles were added during the review. The study clarifies best practices for adopting VR-enhanced learning environments through empirical analysis and case studies, and it also points out directions for future innovation and growth in the field of immersive pedagogy.
(This article belongs to the Special Issue Contemporary Developments in Mixed, Augmented, and Virtual Reality: Implications for Teaching and Learning)
Open Access Feature Paper Article
Avatar Detection in Metaverse Recordings
by Felix Becker, Patrick Steinert, Stefan Wagenpfeil and Matthias L. Hemmje
Virtual Worlds 2024, 3(4), 459-479; https://doi.org/10.3390/virtualworlds3040025 - 30 Oct 2024
Abstract
The metaverse is gradually expanding. There is a growing number of photo and video recordings of metaverse virtual worlds being used in multiple domains, and the collection of these recordings is a rapidly growing field. An essential element of the metaverse and its recordings is the concept of avatars. In this paper, we present the novel task of avatar detection in metaverse recordings, supporting semantic retrieval in collections of metaverse recordings and other use cases. Our work addresses the characterizations and definitions of avatars and presents a new model that supports avatar detection. The latest object detection algorithms are trained and tested on a variety of avatar types in metaverse recordings. Our work achieves a significantly higher level of accuracy than existing models, which encourages further research in this field.
Open Access Technical Note
Development of a Modular Adjustable Wearable Haptic Device for XR Applications
by Ali Najm, Domna Banakou and Despina Michael-Grigoriou
Virtual Worlds 2024, 3(4), 436-458; https://doi.org/10.3390/virtualworlds3040024 - 16 Oct 2024
Cited by 1
Abstract
Current XR applications move beyond audiovisual information, with haptic feedback rapidly gaining ground. However, current haptic devices are still evolving and often struggle to combine key desired features in a balanced way. In this paper, we propose the development of a high-resolution haptic (HRH) system for perception enhancement, a wearable technology designed to augment extended reality (XR) experiences through precise and localized tactile feedback. The HRH system features a modular design with 58 individually addressable actuators, enabling intricate haptic interactions within a compact wearable form. Dual ESP32-S3 microcontrollers and a custom-designed system ensure robust processing and low-latency performance, crucial for real-time applications. Integration with the Unity game engine provides developers with a user-friendly and dynamic environment for accurate, simple control and customization. The modular design, utilizing a flexible PCB, supports a wide range of actuators, enhancing its versatility for various applications. A comparison of our proposed system with existing solutions indicates that the HRH system outperforms other devices by encapsulating several key features, including adjustability, affordability, modularity, and high-resolution feedback. The HRH system not only aims to advance the field of haptic feedback but also introduces an intuitive tool for exploring new methods of human–computer and XR interactions. Future work will focus on refining and exploring the haptic feedback communication methods used to convey information and expand the system’s applications.
(This article belongs to the Special Issue New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality)
Open Access Article
Virtual Reality Pursuit: Using Individual Predispositions towards VR to Understand Perceptions of a Virtualized Workplace Team Experience
by Diana R. Sanchez, Joshua McVeigh-Schultz, Katherine Isbister, Monica Tran, Kassidy Martinez, Marjan Dost, Anya Osborne, Daniel Diaz, Philip Farillas, Timothy Lang, Alexandra Leeds, George Butler and Monique Ferronatto
Virtual Worlds 2024, 3(4), 418-435; https://doi.org/10.3390/virtualworlds3040023 - 10 Oct 2024
Abstract
This study investigates how individual predispositions toward Virtual Reality (VR) affect user experiences in collaborative VR environments, particularly in workplace settings. By adapting the Video Game Pursuit Scale to measure VR predisposition, we aim to establish the reliability and validity of this adapted measure in assessing how personal characteristics influence engagement and interaction in VR. Two studies, the first correlational and the second quasi-experimental, were conducted to examine the impact of environmental features, specifically the differences between static and mobile VR platforms, on participants’ perceptions of time, presence, and task motivation. The findings indicate that individual differences in VR predisposition significantly influence user experiences in virtual environments with important implications for enhancing VR applications in training and team collaboration. This research contributes to the understanding of human–computer interaction in VR and offers valuable insights for organizations aiming to implement VR technologies effectively. The results highlight the importance of considering psychological factors in the design and deployment of VR systems, paving the way for future research in this rapidly evolving field.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Article
XR MUSE: An Open-Source Unity Framework for Extended Reality-Based Networked Multi-User Studies
by Stéven Picard, Ningyuan Sun and Jean Botev
Virtual Worlds 2024, 3(4), 404-417; https://doi.org/10.3390/virtualworlds3040022 - 2 Oct 2024
Abstract
In recent years, extended reality (XR) technologies have been increasingly used as a research tool in behavioral studies. They allow experimenters to conduct user studies in simulated environments that are both controllable and reproducible across participants. However, creating XR experiences for such studies remains challenging, particularly in networked, multi-user setups that investigate collaborative or competitive scenarios. Numerous aspects need to be implemented and coherently integrated, e.g., in terms of user interaction, environment configuration, and data synchronization. To reduce this complexity and facilitate development, we present the open-source Unity framework XR MUSE for devising user studies in shared virtual environments. The framework provides various ready-to-use components and sample scenes that researchers can easily customize and adapt to their specific needs.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Review
Advancing Medical Education Using Virtual and Augmented Reality in Low- and Middle-Income Countries: A Systematic and Critical Review
by Xi Li, Dalia Elnagar, Ge Song and Rami Ghannam
Virtual Worlds 2024, 3(3), 384-403; https://doi.org/10.3390/virtualworlds3030021 - 18 Sep 2024
Abstract
This review critically examines the integration of Virtual Reality (VR) and Augmented Reality (AR) in medical training across Low- and Middle-Income Countries (LMICs), offering a novel perspective by combining quantitative analysis with qualitative insights from medical students in Egypt and Ghana. Through a systematic review process, 17 peer-reviewed studies published between 2010 and 2023 were analysed. Altogether, these studies involved a total of 887 participants. The analysis reveals a growing interest in VR and AR applications for medical training in LMICs, with a peak in published articles in 2023, indicating an expanding research landscape. A unique contribution of this review is the integration of feedback from 35 medical students assessed through questionnaires, which demonstrates the perceived effectiveness of immersive technologies over traditional 2D illustrations in understanding complex medical concepts. Key findings highlight that VR and AR applications in medical training within LMICs predominantly focus on surgical skills, with the majority of studies addressing surgical training, particularly general surgery. This emphasis reflects the technology’s strong alignment with the needs of LMICs, where surgical skills training is often a priority. Despite the promising applications and expanding interest in VR and AR, significant challenges such as accessibility and device limitations remain, demonstrating the need for ongoing research and integration with traditional methods to fully leverage these technologies for effective medical education. Therefore, this review provides a comprehensive analysis of existing VR and AR applications, their evaluation methodologies, and student perspectives to address educational challenges and enhance healthcare outcomes in LMICs.
Open Access Perspective
Navigating the Healthcare Metaverse: Immersive Technologies and Future Perspectives
by Kevin Yi-Lwern Yap
Virtual Worlds 2024, 3(3), 368-383; https://doi.org/10.3390/virtualworlds3030020 - 11 Sep 2024
Abstract
The year is 2030. The internet has evolved into the metaverse. People navigate through advanced avatars, shop in digital marketplaces, and connect with others through extended reality social media platforms. Three-dimensional patient scans, multidisciplinary tele-collaborations, digital twins and metaverse health records are part of clinical practices. Younger generations regularly immerse themselves in virtual worlds, playing games and attending social events in the metaverse. This sounds like a sci-fi movie, but as the world embraces immersive technologies post-COVID-19, this future is not too far off. This article aims to provide a foundational background to immersive technologies and their applications and discuss their potential for transforming healthcare and education. Moreover, this article will introduce the metaverse ecosystem and characteristics, and its potential for health prevention, treatment, education, and research. Finally, this article will explore the synergy between generative artificial intelligence and the metaverse. As younger generations of healthcare professionals embrace this digital frontier, the metaverse’s potential in healthcare is definitely attractive. Mainstream adoption may take time, but it is imperative that healthcare professionals be equipped with interdisciplinary skills to navigate the plethora of immersive technologies in the future of healthcare.
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
Open Access Article
“Case By Case”: Investigating the Use of a VR-Based Allegorical Serious Game for Consent Education
by Autumn May Aindow, Alexander Baines, Toby Mccaffery, Sterling O’Neill, Frolynne Rose Martinez Salido, Gail Collyer-Hoar, George Limbert, Elisa Rubegni and Abhijit Karnik
Virtual Worlds 2024, 3(3), 354-367; https://doi.org/10.3390/virtualworlds3030019 - 6 Sep 2024
Abstract
The topic of consent within interpersonal relationships is sensitive and complex. A serious game can provide a safe medium for the exploration of the topic of consent. In this paper, we aim to alleviate the challenges of designing a serious game artefact with the implicit goal of exploring the topic of consent. The resulting artefact, “Case By Case”, is a VR-based serious game targeting university students, which uses an allegory-based approach to achieve its goal. Participants play the role of a detective who is tasked with determining whether individuals have committed theft, which serves as an allegory for breach of consent. “Case By Case” provides users with an opportunity to reflect on their decisions within the game and apply them to complex situations of consent such as victim-blaming and bystander awareness. To evaluate the effectiveness of the game in achieving its implicit goal, we ran a user study (n = 24). The results show that “Case By Case” provided a safe environment for users to reflect on the concept of consent and further increase their understanding of the topic.
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
Open Access Article
Enhancing Language Learning and Intergroup Empathy through Multi-User Interactions and Simulations in a Virtual World
by Elaine Hoter, Manal Yazbak Abu Ahmad and Hannah Azulay
Virtual Worlds 2024, 3(3), 333-353; https://doi.org/10.3390/virtualworlds3030018 - 13 Aug 2024
Abstract
In an increasingly globalized world, the development of language skills and intercultural empathy has become crucial for effective communication and collaboration across diverse societies. Virtual worlds offer a unique and immersive environment to address these needs through innovative educational approaches. This study explores the impact of multi-user interactions, group work, and simulations within virtual worlds on language learning and the development of intergroup empathy. Two distinct research projects were conducted, involving 241 participants aged 19–45. The language learning study engaged 116 participants in diverse interactive experiences, while the intercultural study had 125 participants collaborating in multicultural groups and participating in perspective-taking simulations. Both studies employed qualitative data collection methods, including surveys, interviews, and observations. The findings suggest that the combination of networking strategies, collaborative learning, and simulations within virtual worlds contributes to improvements in learners’ language proficiency, confidence, and empathy towards diverse social groups. Participants reported increased motivation and engagement, which was attributed to the immersive and interactive nature of the virtual environments. These studies highlight the importance of collaboration and reflection in facilitating language acquisition and intercultural understanding. Technical challenges were identified as potential barriers to implementation. The results demonstrate the potential of virtual worlds to enhance language education and foster empathy in diverse societies, offering valuable insights for educators and researchers. However, the findings may be limited by the specific contexts and sample sizes of these studies, warranting further research to explore the generalizability and long-term impact of virtual world interventions without overstating the main conclusions.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Systematic Review
Mixed Reality in Building Construction Inspection and Monitoring: A Systematic Review
by Rana Muhammad Irfan Anwar and Salman Azhar
Virtual Worlds 2024, 3(3), 319-332; https://doi.org/10.3390/virtualworlds3030017 - 13 Aug 2024
Abstract
Mixed reality (MR) technology has the potential to enhance building construction inspection and monitoring processes, improving efficiency, accuracy, and safety. This systematic review investigates the present research status on MR in building construction inspection and monitoring. The review covers existing literature and practical case studies that scrutinize current technologies, their applications, challenges, and future trends in this rapidly evolving field. This article follows the methodology known as Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) to enhance the credibility and reliability of the research. The study includes articles published between 2018 and 2023, identified through a comprehensive search of the Scopus and Google Scholar databases. Findings indicate that MR technology has the potential to enhance visualization, communication, and collaboration between stakeholders, as well as increase efficiency and accuracy in inspection and monitoring tasks by providing real-time, interactive data and supporting quick decision-making among project team members. The adoption of MR technology in the construction industry will not only boost its effectiveness but also improve its productivity. However, limitations such as high costs, technical issues, and user acceptance pose challenges to the widespread adoption of MR in building construction. Future research should address these limitations and investigate MR’s long-term impact on building construction inspection and monitoring.
Open Access Article
Leveraging Virtual Reality for the Visualization of Non-Observable Electrical Circuit Principles in Engineering Education
by Elliott Wolbach, Michael Hempel and Hamid Sharif
Virtual Worlds 2024, 3(3), 303-318; https://doi.org/10.3390/virtualworlds3030016 - 2 Aug 2024
Abstract
As technology advances, the field of electrical and computer engineering continuously demands the introduction of innovative new tools and methodologies to facilitate the effective learning and comprehension of fundamental concepts. This research addresses an identified gap in technology-augmented education capabilities and researches the integration of virtual reality (VR) technology with real-time electronic circuit simulation to enable and enhance the visualization of non-observable concepts such as voltage distribution and current flow within these circuits. In this paper, we describe the development of our immersive educational platform, which makes understanding these abstract concepts intuitive and engaging. This research also involves the design and development of a VR-based circuit simulation environment. By leveraging VR’s immersive capabilities, our system enables users to physically interact with electronic components, observe the flow of electrical signals, and manipulate circuit parameters in real-time. Through this immersive experience, learners can gain a deeper understanding of fundamental electronic principles, transcending the limitations of traditional two-dimensional diagrams and equations. Furthermore, this research focuses on the implementation of advanced and novel visualization techniques within the VR environment for non-observable electrical and electromagnetic properties, providing users with a clearer and more intuitive understanding of electrical circuit concepts. Examples include color-coded pathways for current flow and dynamic voltage gradient visualization. Additionally, real-time data representation and graphical overlays are researched and integrated to offer users insights into the dynamic behavior of circuits, allowing for better analysis and troubleshooting.
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality, 2nd Edition)
Open Access Article
Challenges and Opportunities of Using Metaverse Tools for Participatory Architectural Design Processes
by Provides Ng, Sara Eloy, Micaela Raposo, Alberto Fernández González, Nuno Pereira da Silva, Marcos Figueiredo and Hira Zuberi
Virtual Worlds 2024, 3(3), 283-302; https://doi.org/10.3390/virtualworlds3030015 - 10 Jul 2024
Abstract
Participatory design emerges as a proactive approach involving different stakeholders in design and decision-making processes, addressing diverse values and ensuring outcomes align with users’ needs. However, failing to engage stakeholders through a spatial experience can result in uninformed and, consequently, unsuccessful design solutions in the built environment. This paper explores how metaverse tools can help enhance participatory design by providing new collaborative opportunities via networked 3D environments. A hybrid-format (online and in situ) co-creation process was documented and analysed, targeting public space design in London, Hong Kong, and Lisbon. The participants collaborated to address a set of design requirements via a tailored metaverse space, following a six-step methodology (Tour, Discuss, Rate, Define, Action, and Show and Tell). The preliminary results indicated that non-immersive metaverse tools help strengthen spatial collaboration through user perspective simulations, introducing novel interaction possibilities within design processes. The technology’s remaining technical limitations may be tackled with careful engagement design, iterative reviews, and participant feedback. The experience documented prompts a reflection on the role of architects in process design and in mediating multi-stakeholder collaboration, contributing to more inclusive, intuitive, and informed co-creation.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Article
Geometric Fidelity Requirements for Meshes in Automotive Lidar Simulation
by Christopher Goodin, Marc N. Moore, Daniel W. Carruth, Zachary Aspin and John Kaniarz
Virtual Worlds 2024, 3(3), 270-282; https://doi.org/10.3390/virtualworlds3030014 - 3 Jul 2024
Abstract
The perception of vegetation is a critical aspect of off-road autonomous navigation and, consequently, of the simulation of autonomous ground vehicles (AGVs). Representing vegetation with triangular meshes requires detailed geometric modeling that captures the intricacies of small branches and leaves. This work asks, “What degree of geometric fidelity is required to realistically simulate lidar in AGV simulations?” To answer this question, we present an analysis that determines the required geometric fidelity of the digital scenes and assets used in AGV simulation. Focusing on vegetation, we compare the real and simulated perceived distributions of leaf orientation angles in lidar point clouds to determine the number of triangles required to reliably reproduce realistic results. By comparing real lidar scans of vegetation to simulated scans at a variety of geometric fidelities, we find that digital tree models (meshes) need a minimum triangle density of >1600 triangles per cubic meter to accurately reproduce the geometric properties of lidar scans of real vegetation, with a recommended density of 11,000 triangles per cubic meter for best performance. Furthermore, by comparing these experiments to past work investigating the same question for cameras, we develop a general “rule of thumb” for vegetation mesh fidelity in AGV sensor simulation.
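The reported thresholds lend themselves to a simple screening check. A minimal sketch (not the authors' code; the bounding-volume measure and example numbers are illustrative) that classifies a tree mesh against the >1600 and 11,000 triangles-per-cubic-meter figures from the abstract:

```python
# Classify a mesh's geometric fidelity by triangle density, using the
# thresholds reported in the abstract. Values other than the two
# published densities are hypothetical.

MIN_DENSITY = 1600.0          # triangles per m^3 (reported minimum)
RECOMMENDED_DENSITY = 11000.0  # triangles per m^3 (reported best performance)

def triangle_density(num_triangles: int, bounding_volume_m3: float) -> float:
    """Triangles per cubic meter of the mesh's bounding volume."""
    return num_triangles / bounding_volume_m3

def fidelity_class(num_triangles: int, bounding_volume_m3: float) -> str:
    """Bucket a mesh as insufficient, minimum, or recommended fidelity."""
    density = triangle_density(num_triangles, bounding_volume_m3)
    if density >= RECOMMENDED_DENSITY:
        return "recommended"
    if density >= MIN_DENSITY:
        return "minimum"
    return "insufficient"

# A hypothetical 2 m^3 tree crown modeled with 30,000 triangles:
print(fidelity_class(30000, 2.0))  # -> "recommended" (15,000 tris/m^3)
```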
Open Access Article
A Virtual Reality Game-Based Intervention to Enhance Stress Mindset and Performance among Firefighting Trainees from the Singapore Civil Defence Force (SCDF)
by
Muhammad Akid Durrani Bin Imran, Cherie Shu Yun Goh, Nisha V, Meyammai Shanmugham, Hasan Kuddoos, Chen Huei Leo and Bina Rai
Virtual Worlds 2024, 3(3), 256-269; https://doi.org/10.3390/virtualworlds3030013 - 1 Jul 2024
Abstract
This research paper investigates the effectiveness of a virtual reality (VR) game-based intervention using real-time biofeedback for stress management and performance among firefighting trainees from the Singapore Civil Defence Force (SCDF). Forty-seven trainees were enrolled in this study and randomly assigned to three groups: control, placebo, and intervention. The participants’ physiological responses, psychological responses, and training performance were evaluated at specific times over the standard 22-week training regimen. Participants in the control and placebo groups showed a similar overall perceived stress profile, with an initial increase in the early stages that was subsequently maintained over the remaining training period. Participants in the intervention group had significantly lower perceived stress than the control and placebo groups, and their stress-is-enhancing mindset was significantly stronger before the game in week 12 than in week 3. Cortisol levels remained comparable between pre-game and post-game for the placebo group at week 12, but the intervention group showed a significant post-game reduction in cortisol relative to pre-game. Biofeedback data, measured as the root mean square of successive differences (RMSSD) during gameplay, also increased significantly at week 12 compared to week 3. Notably, the intervention group showed a significant improvement over the control in the final exercise assessment based on the participants’ role as duty officers. In conclusion, a VR game-based intervention with real-time biofeedback shows promise as an engaging and effective way to train firefighting trainees to enhance their stress mindset and reduce their perceived stress, which may enable them to perform better in the daily emergencies they respond to.
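RMSSD, the heart-rate-variability metric used as biofeedback here, has a standard definition: the square root of the mean of squared differences between successive inter-beat (RR) intervals. A minimal sketch (not the study's pipeline; the RR values below are hypothetical):

```python
# RMSSD from a series of RR intervals in milliseconds. Higher RMSSD
# generally indicates greater parasympathetic (vagal) activity, i.e.
# a more relaxed physiological state.
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between RR intervals."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR series (ms):
print(round(rmssd([800.0, 810.0, 790.0, 805.0]), 2))
```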
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
Open Access Article
Exploring Dynamic Difficulty Adjustment Methods for Video Games
by
Nicholas Fisher and Arun K. Kulshreshth
Virtual Worlds 2024, 3(2), 230-255; https://doi.org/10.3390/virtualworlds3020012 - 7 Jun 2024
Abstract
Maintaining player engagement is pivotal for video game success, yet achieving an optimal difficulty level that adapts to diverse player skills remains a significant challenge. Initial difficulty settings in games often fail to accommodate players’ evolving abilities, necessitating adaptive difficulty mechanisms to keep the gaming experience engaging. This study introduces a custom first-person-shooter (FPS) game to explore Dynamic Difficulty Adjustment (DDA) techniques, leveraging both performance metrics and emotional responses gathered from physiological sensors. Through a within-subjects experiment involving casual and experienced gamers, we scrutinized the effects of various DDA methods on player performance and self-reported game perceptions. Contrary to expectations, our research did not identify a single most effective DDA strategy. Instead, the findings suggest a complex landscape in which no one approach—be it performance-based, emotion-based, or hybrid—demonstrably surpasses static difficulty settings in enhancing player engagement or game experience. Notably, the data align with Flow Theory, suggesting that the Emotion DDA technique has potential to foster engagement by matching challenges to player skill levels. However, the overall modest impact of DDA on performance metrics and emotional responses highlights the intricate challenge of designing adaptive difficulty that resonates with both the mechanical and emotional facets of gameplay. Our investigation contributes to the broader dialogue on adaptive game design, emphasizing the need for further research to refine DDA approaches. By advancing our understanding and methodologies, especially in emotion recognition, we aim to develop more sophisticated DDA strategies that dynamically align game challenges with individual player states, making games more accessible, engaging, and enjoyable for a wider audience.
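As background, the performance-based baseline that such studies compare against can be sketched as a simple feedback rule (an illustrative sketch, not the paper's implementation; the target success rate and step size are assumptions):

```python
# Minimal performance-based DDA: nudge difficulty toward a target
# success rate after each round, clamped to a normalized [0, 1] scale.

def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.5, step: float = 0.05) -> float:
    """Raise difficulty when players succeed too often, lower it otherwise."""
    if success_rate > target:
        difficulty += step
    elif success_rate < target:
        difficulty -= step
    return min(max(difficulty, 0.0), 1.0)  # clamp to [0, 1]

# Hypothetical per-round success rates for one player:
d = 0.5
for rate in [0.8, 0.7, 0.3]:
    d = adjust_difficulty(d, rate)
print(round(d, 2))  # -> 0.55 (two steps up, one step down)
```

Emotion-based DDA replaces the `success_rate` signal with a physiological estimate of arousal or frustration; the hybrid condition combines both inputs.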
Open Access Article
An Augmented Reality Application for Wound Management: Enhancing Nurses’ Autonomy, Competence and Connectedness
by
Carina Albrecht-Gansohr, Lara Timm, Sabrina C. Eimler and Stefan Geisler
Virtual Worlds 2024, 3(2), 208-229; https://doi.org/10.3390/virtualworlds3020011 - 3 Jun 2024
Abstract
The use of Augmented Reality glasses opens up many possibilities in hospital care, as they facilitate treatments and their documentation. In this paper, we present a prototype for the HoloLens 2 that supports wound care and documentation. It was developed in a participatory process with nurses using the positive computing paradigm, with a focus on improving the working conditions of nursing staff. In a qualitative study with 14 participants, the factors of autonomy, competence, and connectedness were examined in particular. Good individual adaptability and flexibility of the system with respect to the work task and personal preferences led to a high degree of autonomy, and the availability of the right information at the right time strengthened the feeling of competence. Connection to patients was increased by the additional information in the glasses on the one hand, but hindered by the unusual appearance of the device and the lack of eye contact on the other. In summary, the potential of Augmented Reality glasses in care was confirmed and approaches for a well-being-centered system design were identified; at the same time, a number of future research questions emerged, including the effects on patients.
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Article
Tactile Speech Communication: Reception of Words and Two-Way Messages through a Phoneme-Based Display
by
Jaehong Jung, Charlotte M. Reed, Juan S. Martinez and Hong Z. Tan
Virtual Worlds 2024, 3(2), 184-207; https://doi.org/10.3390/virtualworlds3020010 - 7 May 2024
Abstract
The long-term goal of this research is the development of a stand-alone tactile device for the communication of speech, both for persons with profound sensory deficits and for applications involving persons with intact hearing and vision. Studies were conducted with a phoneme-based tactile display of speech consisting of a 4-by-6 array of tactors worn on the dorsal and ventral surfaces of the forearm. Unique tactile signals were assigned to the 39 English phonemes. Study I consisted of training and testing on the identification of 4-phoneme words. Performance on a trained set of 100 words averaged 87% across the three participants and generalized well to a novel set of words (77%). Study II consisted of two-way messaging between two users of TAPS (TActile Phonemic Sleeve) for 13 h over 45 days. The participants conversed with each other by inputting text that was translated into tactile phonemes and sent over the device. Messages were identified with 73% accuracy, and 82% of the words within them were received correctly. Although communication rates were slow (roughly 1 message per minute), the results obtained with this ecologically valid procedure represent progress toward the goal of a stand-alone tactile device for speech communication.
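The text-to-tactile pipeline the abstract describes (typed text → phonemes → tactor signals) can be sketched as a lookup-based encoder. The phoneme codes below are hypothetical placeholders, not the actual TAPS encoding:

```python
# Translate a phoneme sequence into tactor activation commands.
# Each phoneme maps to a (tactor index, duration in ms) pair; the
# specific indices and durations here are invented for illustration.

PHONEME_CODES: dict[str, tuple[int, int]] = {
    "K": (3, 120),   # tactor 3 on the 4-by-6 forearm array, 120 ms
    "AE": (14, 200),
    "T": (5, 120),
}

def word_to_tactile(phonemes: list[str]) -> list[tuple[int, int]]:
    """Encode a phoneme sequence as an ordered list of tactor commands."""
    return [PHONEME_CODES[p] for p in phonemes]

print(word_to_tactile(["K", "AE", "T"]))  # "cat" -> [(3, 120), (14, 200), (5, 120)]
```

In the full system, the table would cover all 39 English phonemes, and the commands would drive the sleeve's tactors in sequence.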
(This article belongs to the Special Issue New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality)
Open Access Article
Story Starter: A Tool for Controlling Multiple Virtual Reality Headsets with No Active Internet Connection
by
Andy T. Woods, Laryssa Whittaker, Neil Smith, Robert Ispas, Jackson Moore, Roderick D. Morgan and James Bennett
Virtual Worlds 2024, 3(2), 171-183; https://doi.org/10.3390/virtualworlds3020009 - 8 Apr 2024
Abstract
Immersive events are becoming increasingly popular, allowing multiple people to experience a range of VR content simultaneously. At such events, onboarders help attendees get into VR experiences. Being able to control VR headsets for others without physically having to put them on first is an important requirement here, as it streamlines the onboarding process and maximizes the number of viewers. Current off-the-shelf solutions require headsets to be connected to a cloud-based app via an active internet connection, which can be problematic in some locations. To address this challenge, we present Story Starter, a solution that enables the control of VR headsets without an active internet connection. Story Starter can start, stop, and install VR experiences, adjust device volume, and display information such as remaining battery life. We developed Story Starter in response to the UK-wide StoryTrails tour in the summer of 2022, which was held across 15 locations and attracted thousands of attendees who experienced a range of immersive content, including six VR experiences. Story Starter helped streamline onboarding by allowing onboarders to complete routine tasks, such as selecting and starting experiences, without putting the headset on themselves, thereby also minimizing COVID risks. Another benefit of not needing an active internet connection was that our headsets did not automatically update at inconvenient times, which we have found can break experiences. Converging evidence suggests that Story Starter was well received and reliable. However, we also acknowledge some limitations of the solution and discuss several next steps we are considering.
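A local-network control scheme of the kind Story Starter implies could be sketched as follows. This is entirely hypothetical: the abstract does not describe the actual protocol, and the port, message format, and experience name are invented for illustration:

```python
# Send a one-shot JSON command to a headset's local-network listener,
# with no cloud service or internet connection involved.
import json
import socket

def build_message(command: str, payload: dict) -> bytes:
    """Serialize a control command as a UTF-8 JSON message."""
    return json.dumps({"command": command, **payload}).encode("utf-8")

def send_command(headset_ip: str, command: str, payload: dict,
                 port: int = 9000) -> None:
    """Deliver one command to one headset over the local network."""
    with socket.create_connection((headset_ip, port), timeout=2.0) as sock:
        sock.sendall(build_message(command, payload))

# e.g. send_command("192.168.0.42", "start", {"experience": "forest-vr"})
```

Keeping the transport on the venue's own LAN is what removes the dependency on an internet connection, and with it the risk of mid-event automatic updates.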
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Topics
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Special Issues
Special Issue in
Virtual Worlds
New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality
Guest Editors: Panagiotis Kourtesis, Domna Banakou
Deadline: 31 March 2025
Special Issue in
Virtual Worlds
Contemporary Developments in Mixed, Augmented, and Virtual Reality: Implications for Teaching and Learning
Guest Editors: Kenneth Y T Lim, Michael Vallance
Deadline: 30 April 2025
Special Issue in
Virtual Worlds
Empowering Health Education: Digital Transformation Frontiers for All
Guest Editors: Stathis Konstantinidis, Panagiotis Bamidis, Eleni Dafli, Panagiotis Antoniou
Deadline: 30 June 2025