Journal Description
Virtual Worlds is an international, peer-reviewed, open access journal on virtual reality, augmented reality, and mixed reality, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 36.6 days after submission; acceptance to publication takes 7.4 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Virtual Worlds is a companion journal of Applied Sciences.
Latest Articles
Virtual Reality Pursuit: Using Individual Predispositions towards VR to Understand Perceptions of a Virtualized Workplace Team Experience
Virtual Worlds 2024, 3(4), 418-435; https://doi.org/10.3390/virtualworlds3040023 - 10 Oct 2024
Abstract
This study investigates how individual predispositions toward Virtual Reality (VR) affect user experiences in collaborative VR environments, particularly in workplace settings. By adapting the Video Game Pursuit Scale to measure VR predisposition, we aim to establish the reliability and validity of this adapted measure in assessing how personal characteristics influence engagement and interaction in VR. Two studies, the first correlational and the second quasi-experimental, were conducted to examine the impact of environmental features, specifically the differences between static and mobile VR platforms, on participants’ perceptions of time, presence, and task motivation. The findings indicate that individual differences in VR predisposition significantly influence user experiences in virtual environments with important implications for enhancing VR applications in training and team collaboration. This research contributes to the understanding of human–computer interaction in VR and offers valuable insights for organizations aiming to implement VR technologies effectively. The results highlight the importance of considering psychological factors in the design and deployment of VR systems, paving the way for future research in this rapidly evolving field.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Article
XR MUSE: An Open-Source Unity Framework for Extended Reality-Based Networked Multi-User Studies
by Stéven Picard, Ningyuan Sun and Jean Botev
Virtual Worlds 2024, 3(4), 404-417; https://doi.org/10.3390/virtualworlds3040022 - 2 Oct 2024
Abstract
In recent years, extended reality (XR) technologies have been increasingly used as a research tool in behavioral studies. They allow experimenters to conduct user studies in simulated environments that are both controllable and reproducible across participants. However, creating XR experiences for such studies remains challenging, particularly in networked, multi-user setups that investigate collaborative or competitive scenarios. Numerous aspects need to be implemented and coherently integrated, e.g., in terms of user interaction, environment configuration, and data synchronization. To reduce this complexity and facilitate development, we present the open-source Unity framework XR MUSE for devising user studies in shared virtual environments. The framework provides various ready-to-use components and sample scenes that researchers can easily customize and adapt to their specific needs.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Review
Advancing Medical Education Using Virtual and Augmented Reality in Low- and Middle-Income Countries: A Systematic and Critical Review
by Xi Li, Dalia Elnagar, Ge Song and Rami Ghannam
Virtual Worlds 2024, 3(3), 384-403; https://doi.org/10.3390/virtualworlds3030021 - 18 Sep 2024
Abstract
This review critically examines the integration of Virtual Reality (VR) and Augmented Reality (AR) in medical training across Low- and Middle-Income Countries (LMICs), offering a novel perspective by combining quantitative analysis with qualitative insights from medical students in Egypt and Ghana. Through a systematic review process, 17 peer-reviewed studies published between 2010 and 2023 were analysed, involving a total of 887 participants. The analysis reveals a growing interest in VR and AR applications for medical training in LMICs, with a peak in published articles in 2023, indicating an expanding research landscape. A unique contribution of this review is the integration of feedback from 35 medical students assessed through questionnaires, which demonstrates the perceived effectiveness of immersive technologies over traditional 2D illustrations in understanding complex medical concepts. Key findings highlight that VR and AR applications in medical training within LMICs predominantly focus on surgical skills, with most studies targeting surgical training, particularly general surgery. This emphasis reflects the technology’s strong alignment with the needs of LMICs, where surgical skills training is often a priority. Despite the promising applications and expanding interest in VR and AR, significant challenges such as accessibility and device limitations remain, demonstrating the need for ongoing research and integration with traditional methods to fully leverage these technologies for effective medical education. Therefore, this review provides a comprehensive analysis of existing VR and AR applications, their evaluation methodologies, and student perspectives to address educational challenges and enhance healthcare outcomes in LMICs.
Open Access Perspective
Navigating the Healthcare Metaverse: Immersive Technologies and Future Perspectives
by Kevin Yi-Lwern Yap
Virtual Worlds 2024, 3(3), 368-383; https://doi.org/10.3390/virtualworlds3030020 - 11 Sep 2024
Abstract
The year is 2030. The internet has evolved into the metaverse. People navigate through advanced avatars, shop in digital marketplaces, and connect with others through extended reality social media platforms. Three-dimensional patient scans, multidisciplinary tele-collaborations, digital twins and metaverse health records are part of clinical practices. Younger generations regularly immerse themselves in virtual worlds, playing games and attending social events in the metaverse. This sounds like a sci-fi movie, but as the world embraces immersive technologies post-COVID-19, this future is not too far off. This article aims to provide a foundational background to immersive technologies and their applications and discuss their potential for transforming healthcare and education. Moreover, this article will introduce the metaverse ecosystem and characteristics, and its potential for health prevention, treatment, education, and research. Finally, this article will explore the synergy between generative artificial intelligence and the metaverse. As younger generations of healthcare professionals embrace this digital frontier, the metaverse’s potential in healthcare is definitely attractive. Mainstream adoption may take time, but it is imperative that healthcare professionals be equipped with interdisciplinary skills to navigate the plethora of immersive technologies in the future of healthcare.
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
Open Access Article
“Case By Case”: Investigating the Use of a VR-Based Allegorical Serious Game for Consent Education
by Autumn May Aindow, Alexander Baines, Toby Mccaffery, Sterling O’Neill, Frolynne Rose Martinez Salido, Gail Collyer-Hoar, George Limbert, Elisa Rubegni and Abhijit Karnik
Virtual Worlds 2024, 3(3), 354-367; https://doi.org/10.3390/virtualworlds3030019 - 6 Sep 2024
Abstract
The topic of consent within interpersonal relationships is sensitive and complex. A serious game can provide a safe medium for exploring the topic of consent. In this paper, we aim to alleviate the challenges of designing a serious game artefact with the implicit goal of exploring the topic of consent. The resulting artefact, “Case By Case”, is a VR-based serious game targeting university students, which uses an allegory-based approach to achieve its goal. The participants play the role of a detective who is tasked with determining whether individuals have committed theft, which serves as an allegory for breach of consent. “Case By Case” provides users with an opportunity to reflect on their decisions within the game and apply them to complex situations of consent such as victim-blaming and bystander awareness. To evaluate the effectiveness of the game in achieving its implicit goal, we ran a user study (n = 24). The results show that “Case By Case” provided a safe environment for users to reflect on the concept of consent and further increase their understanding of the topic.
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
Open Access Article
Enhancing Language Learning and Intergroup Empathy through Multi-User Interactions and Simulations in a Virtual World
by Elaine Hoter, Manal Yazbak Abu Ahmad and Hannah Azulay
Virtual Worlds 2024, 3(3), 333-353; https://doi.org/10.3390/virtualworlds3030018 - 13 Aug 2024
Abstract
In an increasingly globalized world, the development of language skills and intercultural empathy has become crucial for effective communication and collaboration across diverse societies. Virtual worlds offer a unique and immersive environment to address these needs through innovative educational approaches. This study explores the impact of multi-user interactions, group work, and simulations within virtual worlds on language learning and the development of intergroup empathy. Two distinct research projects were conducted, involving 241 participants aged 19–45. The language learning study engaged 116 participants in diverse interactive experiences, while the intercultural study had 125 participants collaborating in multicultural groups and participating in perspective-taking simulations. Both studies employed qualitative data collection methods, including surveys, interviews, and observations. The findings suggest that the combination of networking strategies, collaborative learning, and simulations within virtual worlds contributes to improvements in learners’ language proficiency, confidence, and empathy towards diverse social groups. Participants reported increased motivation and engagement, which was attributed to the immersive and interactive nature of the virtual environments. These studies highlight the importance of collaboration and reflection in facilitating language acquisition and intercultural understanding. Technical challenges were identified as potential barriers to implementation. The results demonstrate the potential of virtual worlds to enhance language education and foster empathy in diverse societies, offering valuable insights for educators and researchers. However, the findings may be limited by the specific contexts and sample sizes of these studies, warranting further research to explore the generalizability and long-term impact of virtual world interventions; the main conclusions should therefore not be overstated.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Systematic Review
Mixed Reality in Building Construction Inspection and Monitoring: A Systematic Review
by Rana Muhammad Irfan Anwar and Salman Azhar
Virtual Worlds 2024, 3(3), 319-332; https://doi.org/10.3390/virtualworlds3030017 - 13 Aug 2024
Abstract
Mixed reality (MR) technology has the potential to enhance building construction inspection and monitoring processes, improving efficiency, accuracy, and safety. This systematic review intends to investigate the present research status on MR in building construction inspection and monitoring. The review covers existing literature and practical case studies that scrutinize current technologies, their applications, challenges, and future trends in this rapidly evolving field. This article follows a methodology known as Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) to enhance the credibility and reliability of research. The study includes articles published between 2018 and 2023, identified through a comprehensive search of Scopus and Google Scholar databases. Findings indicate that MR technology has the potential to enhance visualization, communication, and collaboration between stakeholders, as well as increase efficiency and accuracy in inspection and monitoring tasks by providing real-time interactable data and quick decision-making among the project team members. The adoption of MR technology in the construction industry will not only boost its effectiveness but also improve its productivity. However, limitations such as high costs, technical issues, and user acceptance pose challenges to the widespread adoption of MR in building construction. Future research should address these limitations and investigate MR’s long-term impact on building construction inspection and monitoring.
Open Access Article
Leveraging Virtual Reality for the Visualization of Non-Observable Electrical Circuit Principles in Engineering Education
by Elliott Wolbach, Michael Hempel and Hamid Sharif
Virtual Worlds 2024, 3(3), 303-318; https://doi.org/10.3390/virtualworlds3030016 - 2 Aug 2024
Abstract
As technology advances, the field of electrical and computer engineering continuously demands the introduction of innovative new tools and methodologies to facilitate the effective learning and comprehension of fundamental concepts. This research addresses an identified gap in technology-augmented education capabilities and investigates the integration of virtual reality (VR) technology with real-time electronic circuit simulation to enable and enhance the visualization of non-observable concepts such as voltage distribution and current flow within these circuits. In this paper, we describe the development of our immersive educational platform, which makes understanding these abstract concepts intuitive and engaging. This research also involves the design and development of a VR-based circuit simulation environment. By leveraging VR’s immersive capabilities, our system enables users to physically interact with electronic components, observe the flow of electrical signals, and manipulate circuit parameters in real time. Through this immersive experience, learners can gain a deeper understanding of fundamental electronic principles, transcending the limitations of traditional two-dimensional diagrams and equations. Furthermore, this research focuses on the implementation of advanced and novel visualization techniques within the VR environment for non-observable electrical and electromagnetic properties, providing users with a clearer and more intuitive understanding of electrical circuit concepts. Examples include color-coded pathways for current flow and dynamic voltage gradient visualization. Additionally, real-time data representation and graphical overlays are investigated and integrated to offer users insights into the dynamic behavior of circuits, allowing for better analysis and troubleshooting.
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality, 2nd Edition)
Open Access Article
Challenges and Opportunities of Using Metaverse Tools for Participatory Architectural Design Processes
by Provides Ng, Sara Eloy, Micaela Raposo, Alberto Fernández González, Nuno Pereira da Silva, Marcos Figueiredo and Hira Zuberi
Virtual Worlds 2024, 3(3), 283-302; https://doi.org/10.3390/virtualworlds3030015 - 10 Jul 2024
Abstract
Participatory design emerges as a proactive approach involving different stakeholders in design and decision-making processes, addressing diverse values and ensuring outcomes align with users’ needs. However, the inadequacy of engaging stakeholders with a spatial experience can result in uninformed and, consequently, unsuccessful design solutions in a built environment. This paper explores how metaverse tools can help enhance participatory design by providing new collaborative opportunities via networked 3D environments. A hybrid format (online and in situ) co-creation process was documented and analysed, targeting public space design in London, Hong Kong, and Lisbon. The participants collaborated to address a set of design requirements via a tailored metaverse space, following a six-step methodology (Tour, Discuss, Rate, Define, Action, and Show and Tell). The preliminary results indicated that non-immersive metaverse tools help strengthen spatial collaboration through user perspective simulations, introducing novel interaction possibilities within design processes. The technology’s still-existing technical limitations may be tackled with careful engagement design, iterative reviews, and participants’ feedback. The experience documented prompts a reflection on the role of architects in process design and mediating multi-stakeholder collaboration, contributing to more inclusive, intuitive, and informed co-creation.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Article
Geometric Fidelity Requirements for Meshes in Automotive Lidar Simulation
by Christopher Goodin, Marc N. Moore, Daniel W. Carruth, Zachary Aspin and John Kaniarz
Virtual Worlds 2024, 3(3), 270-282; https://doi.org/10.3390/virtualworlds3030014 - 3 Jul 2024
Abstract
The perception of vegetation is a critical aspect of off-road autonomous navigation, and consequently a critical aspect of the simulation of autonomous ground vehicles (AGVs). Representing vegetation with triangular meshes requires detailed geometric modeling that captures the intricacies of small branches and leaves. In this work, we ask, “What degree of geometric fidelity is required to realistically simulate lidar in AGV simulations?” To answer this question, we present an analysis that determines the required geometric fidelity of digital scenes and assets used in the simulation of AGVs. Focusing on vegetation, we compare the real and simulated perceived distribution of leaf orientation angles in lidar point clouds to determine the number of triangles required to reliably reproduce realistic results. By comparing real lidar scans of vegetation to simulated lidar scans of vegetation with a variety of geometric fidelities, we find that digital tree models (meshes) need a minimum triangle density of >1600 triangles per cubic meter in order to accurately reproduce the geometric properties of lidar scans of real vegetation, with a recommended triangle density of 11,000 triangles per cubic meter for best performance. Furthermore, by comparing these experiments to past work investigating the same question for cameras, we develop a general “rule-of-thumb” for vegetation mesh fidelity in AGV sensor simulation.
Open Access Article
A Virtual Reality Game-Based Intervention to Enhance Stress Mindset and Performance among Firefighting Trainees from the Singapore Civil Defence Force (SCDF)
by Muhammad Akid Durrani Bin Imran, Cherie Shu Yun Goh, Nisha V, Meyammai Shanmugham, Hasan Kuddoos, Chen Huei Leo and Bina Rai
Virtual Worlds 2024, 3(3), 256-269; https://doi.org/10.3390/virtualworlds3030013 - 1 Jul 2024
Abstract
This research paper investigates the effectiveness of a virtual reality (VR) game-based intervention using real-time biofeedback for stress management and performance among fire-fighting trainees from the Singapore Civil Defence Force (SCDF). Forty-seven trainees were enrolled in this study and randomly assigned into three groups: control, placebo, and intervention. The participants’ physiological responses, psychological responses, and training performances were evaluated during specific times over the standard 22-week training regimen. Participants from the control and placebo groups showed a similar overall perceived stress profile, with an initial increase in the early stages that was subsequently maintained over the remaining training period. Participants from the intervention group had a significantly lower level of perceived stress compared to the control and placebo groups, and their stress-is-enhancing mindset was significantly increased before the game in week 12 compared to week 3. Cortisol levels remained comparable between pre-game and post-game for the placebo group at week 12, but there was a significant reduction in cortisol levels post-game in comparison to pre-game for the intervention group. The biofeedback data as a measurement of root mean square of successive differences (RMSSD) during the gameplay were also significantly increased at week 12 when compared to week 3. Notably, the intervention group had a significant improvement in the final exercise assessment when compared to the control based on the participants’ role as duty officers. In conclusion, a VR game-based intervention with real-time biofeedback shows promise as an engaging and effective way of training firefighting trainees to enhance their stress mindset and reduce their perceived stress, which may enable them to perform better in the daily emergencies that they respond to.
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
Open Access Article
Exploring Dynamic Difficulty Adjustment Methods for Video Games
by Nicholas Fisher and Arun K. Kulshreshth
Virtual Worlds 2024, 3(2), 230-255; https://doi.org/10.3390/virtualworlds3020012 - 7 Jun 2024
Abstract
Maintaining player engagement is pivotal for video game success, yet achieving the optimal difficulty level that adapts to diverse player skills remains a significant challenge. Initial difficulty settings in games often fail to accommodate the evolving abilities of players, necessitating adaptive difficulty mechanisms to keep the gaming experience engaging. This study introduces a custom first-person-shooter (FPS) game to explore Dynamic Difficulty Adjustment (DDA) techniques, leveraging both performance metrics and emotional responses gathered from physiological sensors. Through a within-subjects experiment involving casual and experienced gamers, we scrutinized the effects of various DDA methods on player performance and self-reported game perceptions. Contrary to expectations, our research did not identify a singular, most effective DDA strategy. Instead, findings suggest a complex landscape where no one approach—be it performance-based, emotion-based, or a hybrid—demonstrably surpasses static difficulty settings in enhancing player engagement or game experience. Noteworthy is the data’s alignment with Flow Theory, suggesting potential for the Emotion DDA technique to foster engagement by matching challenges to player skill levels. However, the overall modest impact of DDA on performance metrics and emotional responses highlights the intricate challenge of designing adaptive difficulty that resonates with both the mechanical and emotional facets of gameplay. Our investigation contributes to the broader dialogue on adaptive game design, emphasizing the need for further research to refine DDA approaches. By advancing our understanding and methodologies, especially in emotion recognition, we aim to develop more sophisticated DDA strategies. These strategies aspire to dynamically align game challenges with individual player states, making games more accessible, engaging, and enjoyable for a wider audience.
Open Access Article
An Augmented Reality Application for Wound Management: Enhancing Nurses’ Autonomy, Competence and Connectedness
by Carina Albrecht-Gansohr, Lara Timm, Sabrina C. Eimler and Stefan Geisler
Virtual Worlds 2024, 3(2), 208-229; https://doi.org/10.3390/virtualworlds3020011 - 3 Jun 2024
Abstract
The use of Augmented Reality glasses opens up many possibilities in hospital care, as they facilitate treatments and their documentation. In this paper, we present a prototype for the HoloLens 2 supporting wound care and documentation. It was developed in a participatory process with nurses using the positive computing paradigm, with a focus on the improvement of the working conditions of nursing staff. In a qualitative study with 14 participants, the factors of autonomy, competence and connectedness were examined in particular. It was shown that good individual adaptability and flexibility of the system with respect to the work task and personal preferences lead to a high degree of autonomy. The availability of the right information at the right time strengthens the feeling of competence. On the one hand, the connection to patients is increased by the additional information in the glasses, but on the other hand, it is hindered by the unusual appearance of the device and the lack of eye contact. In summary, the potential of Augmented Reality glasses in care was confirmed and approaches for a well-being-centered system design were identified; at the same time, a number of future research questions emerged, including the effects on patients.
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Article
Tactile Speech Communication: Reception of Words and Two-Way Messages through a Phoneme-Based Display
by Jaehong Jung, Charlotte M. Reed, Juan S. Martinez and Hong Z. Tan
Virtual Worlds 2024, 3(2), 184-207; https://doi.org/10.3390/virtualworlds3020010 - 7 May 2024
Abstract
The long-term goal of this research is the development of a stand-alone tactile device for the communication of speech for persons with profound sensory deficits as well as for applications for persons with intact hearing and vision. Studies were conducted with a phoneme-based tactile display of speech consisting of a 4-by-6 array of tactors worn on the dorsal and ventral surfaces of the forearm. Unique tactile signals were assigned to the 39 English phonemes. Study I consisted of training and testing on the identification of 4-phoneme words. Performance on a trained set of 100 words averaged 87% across the three participants and generalized well to a novel set of words (77%). Study II consisted of two-way messaging between two users of TAPS (TActile Phonemic Sleeve) for 13 h over 45 days. The participants conversed with each other by inputting text that was translated into tactile phonemes sent over the device. Messages were identified with an accuracy of 73%, and 82% of the words were received correctly. Although rates of communication were slow (roughly 1 message per minute), the results obtained with this ecologically valid procedure represent progress toward the goal of a stand-alone tactile device for speech communication.
(This article belongs to the Special Issue New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality)
Open Access Article
Story Starter: A Tool for Controlling Multiple Virtual Reality Headsets with No Active Internet Connection
by
Andy T. Woods, Laryssa Whittaker, Neil Smith, Robert Ispas, Jackson Moore, Roderick D. Morgan and James Bennett
Virtual Worlds 2024, 3(2), 171-183; https://doi.org/10.3390/virtualworlds3020009 - 8 Apr 2024
Abstract
Immersive events are becoming increasingly popular, allowing multiple people to experience a range of VR content simultaneously. In these settings, onboarders help attendees get into VR experiences. Controlling VR headsets for others without physically having to put them on first is an important requirement here, as it streamlines the onboarding process and maximizes the number of viewers. Current off-the-shelf solutions require headsets to be connected to a cloud-based app via an active internet connection, which can be problematic in some locations. To address this challenge, we present Story Starter, a solution that enables the control of VR headsets without an active internet connection. Story Starter can start, stop, and install VR experiences, adjust device volume, and display information such as remaining battery life. We developed Story Starter in response to the UK-wide StoryTrails tour in the summer of 2022, which was held across 15 locations and attracted thousands of attendees who experienced a range of immersive content, including six VR experiences. Story Starter helped streamline the onboarding process by allowing onboarders to avoid putting the headset on themselves to complete routine tasks such as selecting and starting experiences, thereby minimizing COVID risks. Another benefit of not needing an active internet connection was that our headsets did not automatically update at inconvenient times, which we have found can sometimes break experiences. Converging evidence suggests that Story Starter was well-received and reliable. However, we also acknowledge some limitations of the solution and discuss several next steps we are considering.
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Article
APIs in the Metaverse—A Systematic Evaluation
by
Marius Traub and Markus Weinberger
Virtual Worlds 2024, 3(2), 157-170; https://doi.org/10.3390/virtualworlds3020008 - 8 Apr 2024
Abstract
One of the most critical challenges for the success of the Metaverse is interoperability amongst its virtual platforms and worlds. In this context, application programming interfaces (APIs) are essential. This study analyzes a sample of 15 Metaverse platforms. In the first step, the availability of publicly accessible APIs was examined. For those platforms offering an API, i.e., Decentraland, Second Life, Voxels, Roblox, Axie Infinity, Upland, and VRChat, the available API contents were collected, analyzed, and presented in the paper. The results show that only a few Metaverse platforms offer APIs at all. In addition, the available APIs are very diverse and heterogeneous. Information is somewhat fragmented, requiring access to several APIs to compile a comprehensive data set. Thus, standardized APIs will enable better interoperability and foster a more seamless and immersive user experience in the Metaverse.
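The fragmentation the authors describe can be illustrated with a small aggregation sketch: compiling one record set from platforms whose APIs return differently shaped responses. The endpoint schemas and field names below are hypothetical, which is precisely the interoperability problem the study identifies.

```python
# Sketch of compiling a common data set from heterogeneous platform APIs.
# The response schemas ("platform_a", "platform_b") are hypothetical;
# real Metaverse platform APIs each expose a different structure, so an
# adapter per platform is needed to normalize the data.
import json

def normalize(platform, raw):
    """Map one platform's response schema onto a common record."""
    if platform == "platform_a":      # nested schema, e.g. {"parcel": {...}}
        return {"platform": platform, "owner": raw["parcel"]["owner"]}
    if platform == "platform_b":      # flat schema, e.g. {"land_owner": ...}
        return {"platform": platform, "owner": raw["land_owner"]}
    raise ValueError(f"no adapter for {platform}")

# Simulated JSON responses standing in for live HTTP calls.
responses = {
    "platform_a": json.loads('{"parcel": {"owner": "0xabc"}}'),
    "platform_b": json.loads('{"land_owner": "0xdef"}'),
}

dataset = [normalize(p, r) for p, r in responses.items()]
print(dataset)
```

A standardized API, as the paper argues for, would make the per-platform adapters unnecessary.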
Open Access Article
Motion Capture in Mixed-Reality Applications: A Deep Denoising Approach
by
André Correia Gonçalves, Rui Jesus and Pedro Mendes Jorge
Virtual Worlds 2024, 3(1), 135-156; https://doi.org/10.3390/virtualworlds3010007 - 11 Mar 2024
Abstract
Motion capture is a fundamental technique in the development of video games and in film production to animate a virtual character based on the movements of an actor, creating more realistic animations in a short amount of time. One way to obtain this movement is to capture the motion of the player through an optical sensor that allows interaction with the virtual world. However, during movement some parts of the human body can be occluded by others, and there can be noise caused by difficulties in sensor capture, reducing the user experience. This work presents a solution to correct the motion capture errors of the Microsoft Kinect sensor or similar devices through a deep neural network (DNN) trained on a pre-processed dataset of poses offered by the Carnegie Mellon University (CMU) Graphics Lab. A temporal filter is implemented to smooth the movement given by the set of poses returned by the deep neural network. The system is implemented in Python with the TensorFlow application programming interface (API), which supports the machine learning techniques, and the Unity game engine, used to visualize and interact with the obtained skeletons. The results are evaluated using the mean absolute error (MAE) metric where ground truth is available and, for the Kinect data, with feedback from 12 participants collected through a questionnaire.
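Two of the ingredients mentioned in the abstract — a temporal filter over the pose sequence and the MAE metric — can be sketched in a few lines. The exponential moving average below is an illustrative choice of temporal filter, not necessarily the one used in the paper, and the toy data is hypothetical.

```python
# Sketch of a temporal filter smoothing a sequence of joint coordinates,
# plus the mean absolute error (MAE) against ground truth. The
# exponential moving average is an illustrative filter choice; the
# paper's exact filter may differ.

def smooth(frames, alpha=0.5):
    """Exponentially smooth a list of frames (each a list of coordinates)."""
    out = [list(frames[0])]
    for frame in frames[1:]:
        prev = out[-1]
        out.append([alpha * x + (1 - alpha) * p for x, p in zip(frame, prev)])
    return out

def mae(predicted, truth):
    """Mean absolute error over all frames and coordinates."""
    diffs = [abs(x - t) for f, g in zip(predicted, truth) for x, t in zip(f, g)]
    return sum(diffs) / len(diffs)

# Toy example: one joint coordinate with a capture-noise spike in frame 2.
noisy = [[0.0], [4.0], [0.0]]
truth = [[0.0], [0.0], [0.0]]
print(mae(noisy, truth))           # error of the raw, noisy track
print(mae(smooth(noisy), truth))   # smoothing reduces the error
```

In the full pipeline the DNN would first correct occlusion errors, and the temporal filter would then be applied to the corrected pose sequence.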
Open Access Article
Real-Time Diminished Reality Application Specifying Target Based on 3D Region
by
Kaito Kobayashi and Masanobu Takahashi
Virtual Worlds 2024, 3(1), 115-134; https://doi.org/10.3390/virtualworlds3010006 - 4 Mar 2024
Abstract
Diminished reality (DR) is a technology in which a background image is overwritten on a real object to make it appear as if the object has been removed from real space. This paper presents a real-time DR application that employs deep learning. A DR application can remove objects inside a 3D region defined by a user in images captured using a smartphone. By specifying the 3D region containing the target object to be removed, DR can be realized for targets with various shapes and sizes, and the specified target can be removed even if the viewpoint changes. To achieve fast and accurate DR, a suitable network was employed based on the experimental results. Additionally, the loss function during the training process was improved to enhance completion accuracy. Then, the operation of the DR application at 10 fps was verified using a smartphone and a laptop computer.
Open Access Article
Comparing and Contrasting Near-Field, Object Space, and a Novel Hybrid Interaction Technique for Distant Object Manipulation in VR
by
Wei-An Hsieh, Hsin-Yi Chien, David Brickler, Sabarish V. Babu and Jung-Hong Chuang
Virtual Worlds 2024, 3(1), 94-114; https://doi.org/10.3390/virtualworlds3010005 - 21 Feb 2024
Abstract
In this contribution, we propose a hybrid interaction technique that integrates near-field and object-space interaction techniques for manipulating objects at a distance in virtual reality (VR). The objective of the hybrid interaction technique was to seamlessly leverage the strengths of both the near-field and object-space manipulation techniques. We employed bimanual near-field metaphor with scaled replica (BMSR) as our near-field interaction technique, which enabled us to perform multilevel degrees-of-freedom (DoF) separation transformations, such as 1~3DoF translation, 1~3DoF uniform and anchored scaling, 1DoF and 3DoF rotation, and 6DoF simultaneous translation and rotation, with enhanced depth perception and fine motor control provided by near-field manipulation techniques. The object-space interaction technique we utilized was the classic Scaled HOMER, which is known to be effective and appropriate for coarse transformations in distant object manipulation. In a repeated measures within-subjects evaluation, we empirically evaluated the three interaction techniques for their accuracy, efficiency, and economy of movement in pick-and-place, docking, and tunneling tasks in VR. Our findings revealed that the near-field BMSR technique outperformed the object space Scaled HOMER technique in terms of accuracy and economy of movement, but the participants performed more slowly overall with BMSR. Additionally, our results revealed that the participants preferred to use the hybrid interaction technique, as it allowed them to switch and transition seamlessly between the constituent BMSR and Scaled HOMER interaction techniques, depending on the level of accuracy, precision and efficiency required.
Open Access Article
Cybersickness in Virtual Reality: The Role of Individual Differences, Its Effects on Cognitive Functions and Motor Skills, and Intensity Differences during and after Immersion
by
Panagiotis Kourtesis, Agapi Papadopoulou and Petros Roussos
Virtual Worlds 2024, 3(1), 62-93; https://doi.org/10.3390/virtualworlds3010004 - 2 Feb 2024
Cited by 10
Abstract
Background: Given that VR is used in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, and the factors contributing to cybersickness, is becoming increasingly important. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: 30 participants, 20–45 years old, completed the MSSQ and the CSQ-VR, and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. After the VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience with videogaming was a significant predictor of cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. Overall, the intensity of cybersickness-related nausea and vestibular symptoms significantly decreased after removing the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a cybersickness biomarker. Cybersickness affects visuospatial working memory and psychomotor skills. Concerning user experience, cybersickness and its effects on performance should be examined during, and not after, immersion.
Topics
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Conferences
Special Issues
Special Issue in
Virtual Worlds
Networked Virtual Reality, Mixed Reality and Augmented Reality Systems
Guest Editors: Thiago Malheiros Porcino, Jorge Cardoso
Deadline: 30 November 2024
Special Issue in
Virtual Worlds
Serious Games and Extended Reality in Healthcare and/or Education
Guest Editors: Kang Hao Cheong, Bina Rai, Chen Huei Leo
Deadline: 30 November 2024
Special Issue in
Virtual Worlds
Empowering Health Education: Digital Transformation Frontiers for All
Guest Editors: Stathis Konstantinidis, Panagiotis Bamidis, Eleni Dafli, Panagiotis Antoniou
Deadline: 31 December 2024
Special Issue in
Virtual Worlds
New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality
Guest Editors: Domna Banakou, Panagiotis Kourtesis
Deadline: 31 March 2025