Journal Description
Virtual Worlds is an international, peer-reviewed, open access journal on virtual reality, augmented reality, and mixed reality, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: first decisions in 16 days; acceptance to publication in 5.8 days (median values for MDPI journals in the first half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Virtual Worlds is a companion journal of Applied Sciences.
Latest Articles
Challenges and Opportunities of Using Metaverse Tools for Participatory Architectural Design Processes
Virtual Worlds 2024, 3(3), 283-302; https://doi.org/10.3390/virtualworlds3030015 - 10 Jul 2024
Abstract
Participatory design emerges as a proactive approach involving different stakeholders in design and decision-making processes, addressing diverse values and ensuring outcomes align with users’ needs. However, failing to engage stakeholders through a spatial experience can result in uninformed and, consequently, unsuccessful design solutions in the built environment. This paper explores how metaverse tools can help enhance participatory design by providing new collaborative opportunities via networked 3D environments. A hybrid-format (online and in situ) co-creation process targeting public space design in London, Hong Kong, and Lisbon was documented and analysed. The participants collaborated to address a set of design requirements via a tailored metaverse space, following a six-step methodology (Tour, Discuss, Rate, Define, Action, and Show and Tell). The preliminary results indicated that non-immersive metaverse tools help strengthen spatial collaboration through user perspective simulations, introducing novel interaction possibilities within design processes. The technology’s remaining technical limitations may be tackled with careful engagement design, iterative reviews, and participants’ feedback. The experience documented prompts a reflection on the role of architects in designing such processes and mediating multi-stakeholder collaboration, contributing to more inclusive, intuitive, and informed co-creation.
Full article
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
Open Access Article
Geometric Fidelity Requirements for Meshes in Automotive Lidar Simulation
by Christopher Goodin, Marc N. Moore, Daniel W. Carruth, Zachary Aspin and John Kaniarz
Virtual Worlds 2024, 3(3), 270-282; https://doi.org/10.3390/virtualworlds3030014 - 3 Jul 2024
Abstract
The perception of vegetation is a critical aspect of off-road autonomous navigation and, consequently, a critical aspect of simulating autonomous ground vehicles (AGVs). Representing vegetation with triangular meshes requires detailed geometric modeling that captures the intricacies of small branches and leaves. In this work, we ask, “What degree of geometric fidelity is required to realistically simulate lidar in AGV simulations?” To answer this question, we present an analysis that determines the required geometric fidelity of digital scenes and assets used in AGV simulation. Focusing on vegetation, we compare the real and simulated perceived distributions of leaf orientation angles in lidar point clouds to determine the number of triangles required to reliably reproduce realistic results. By comparing real lidar scans of vegetation to simulated lidar scans of vegetation at a variety of geometric fidelities, we find that digital tree models (meshes) need a minimum triangle density of >1600 triangles per cubic meter to accurately reproduce the geometric properties of lidar scans of real vegetation, with a recommended triangle density of 11,000 triangles per cubic meter for best performance. Furthermore, by comparing these experiments to past work investigating the same question for cameras, we develop a general “rule of thumb” for vegetation mesh fidelity in AGV sensor simulation.
Full article
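The triangle-density threshold reported in this abstract can be checked for a given mesh with a short calculation. A minimal sketch, assuming the mesh's axis-aligned bounding box as a rough volume estimate (our simplification for illustration, not necessarily the authors' exact method):

```python
def triangle_density(num_triangles, bbox_min, bbox_max):
    """Triangles per cubic meter, using the mesh's axis-aligned bounding box
    as a rough volume estimate (an illustrative simplification)."""
    dx, dy, dz = (hi - lo for hi, lo in zip(bbox_max, bbox_min))
    return num_triangles / (dx * dy * dz)

# A 2 m x 2 m x 4 m tree crown modeled with 20,000 triangles:
density = triangle_density(20_000, (0.0, 0.0, 0.0), (2.0, 2.0, 4.0))
print(density)  # 1250.0 triangles/m^3, below the reported 1600/m^3 minimum
```

A mesh in this example would thus need roughly 26,000 additional triangles in the same volume to reach the paper's minimum of 1600 triangles per cubic meter.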
Open Access Article
A Virtual Reality Game-Based Intervention to Enhance Stress Mindset and Performance among Firefighting Trainees from the Singapore Civil Defence Force (SCDF)
by Muhammad Akid Durrani Bin Imran, Cherie Shu Yun Goh, Nisha V, Meyammai Shanmugham, Hasan Kuddoos, Chen Huei Leo and Bina Rai
Virtual Worlds 2024, 3(3), 256-269; https://doi.org/10.3390/virtualworlds3030013 - 1 Jul 2024
Abstract
This research paper investigates the effectiveness of a virtual reality (VR) game-based intervention using real-time biofeedback for stress management and performance among firefighting trainees from the Singapore Civil Defence Force (SCDF). Forty-seven trainees were enrolled in this study and randomly assigned to three groups: control, placebo, and intervention. The participants’ physiological responses, psychological responses, and training performances were evaluated at specific times over the standard 22-week training regimen. Participants from the control and placebo groups showed a similar overall perceived stress profile, with an initial increase in the early stages that was subsequently maintained over the remaining training period. Participants from the intervention group had a significantly lower level of perceived stress compared to the control and placebo groups, and their stress-is-enhancing mindset was significantly increased before the game in week 12 compared to week 3. Cortisol levels remained comparable between pre-game and post-game for the placebo group at week 12, but there was a significant reduction in cortisol levels post-game in comparison to pre-game for the intervention group. Biofeedback data, measured as the root mean square of successive differences (RMSSD) during gameplay, also increased significantly at week 12 compared to week 3. Notably, the intervention group had a significant improvement in the final exercise assessment when compared to the control based on the participants’ role as duty officers. In conclusion, a VR game-based intervention with real-time biofeedback shows promise as an engaging and effective way of training firefighting trainees to enhance their stress mindset and reduce their perceived stress, which may enable them to perform better in the daily emergencies that they respond to.
Full article
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
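RMSSD, the heart-rate-variability measure used for the biofeedback above, is the root mean square of successive differences between inter-beat (RR) intervals. A minimal sketch of the standard formula; the interval values below are illustrative, not study data:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms).
    Higher values generally indicate greater parasympathetic activity."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Four consecutive inter-beat intervals in milliseconds (made-up numbers):
print(round(rmssd([800, 810, 790, 805]), 2))  # 15.55
```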
Open Access Article
Exploring Dynamic Difficulty Adjustment Methods for Video Games
by Nicholas Fisher and Arun K. Kulshreshth
Virtual Worlds 2024, 3(2), 230-255; https://doi.org/10.3390/virtualworlds3020012 - 7 Jun 2024
Abstract
Maintaining player engagement is pivotal for video game success, yet achieving the optimal difficulty level that adapts to diverse player skills remains a significant challenge. Initial difficulty settings in games often fail to accommodate the evolving abilities of players, necessitating adaptive difficulty mechanisms to keep the gaming experience engaging. This study introduces a custom first-person-shooter (FPS) game to explore Dynamic Difficulty Adjustment (DDA) techniques, leveraging both performance metrics and emotional responses gathered from physiological sensors. Through a within-subjects experiment involving casual and experienced gamers, we scrutinized the effects of various DDA methods on player performance and self-reported game perceptions. Contrary to expectations, our research did not identify a singular, most effective DDA strategy. Instead, findings suggest a complex landscape where no one approach—be it performance-based, emotion-based, or a hybrid—demonstrably surpasses static difficulty settings in enhancing player engagement or game experience. Noteworthy is the data’s alignment with Flow Theory, suggesting potential for the Emotion DDA technique to foster engagement by matching challenges to player skill levels. However, the overall modest impact of DDA on performance metrics and emotional responses highlights the intricate challenge of designing adaptive difficulty that resonates with both the mechanical and emotional facets of gameplay. Our investigation contributes to the broader dialogue on adaptive game design, emphasizing the need for further research to refine DDA approaches. By advancing our understanding and methodologies, especially in emotion recognition, we aim to develop more sophisticated DDA strategies. These strategies aspire to dynamically align game challenges with individual player states, making games more accessible, engaging, and enjoyable for a wider audience.
Full article
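As a concrete illustration of the performance-based family of DDA techniques this abstract discusses, a simple rule nudges difficulty toward a target success rate. This is a generic sketch, not the study's actual adjustment logic; the function name, constants, and clamping range are illustrative assumptions:

```python
def adjust_difficulty(difficulty, success_rate, target=0.5, step=0.1,
                      lo=0.1, hi=2.0):
    """Raise difficulty when the player succeeds more often than the target
    rate, lower it when they succeed less often, and clamp to [lo, hi]."""
    if success_rate > target:
        difficulty += step
    elif success_rate < target:
        difficulty -= step
    return min(hi, max(lo, difficulty))

# A player winning 80% of recent encounters gets a slightly harder game:
print(adjust_difficulty(1.0, 0.8))
```

An emotion-based variant would replace `success_rate` with an arousal or stress estimate from physiological sensors, which is exactly where the study found recognition accuracy to be the limiting factor.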
Open Access Article
An Augmented Reality Application for Wound Management: Enhancing Nurses’ Autonomy, Competence and Connectedness
by Carina Albrecht-Gansohr, Lara Timm, Sabrina C. Eimler and Stefan Geisler
Virtual Worlds 2024, 3(2), 208-229; https://doi.org/10.3390/virtualworlds3020011 - 3 Jun 2024
Abstract
The use of Augmented Reality glasses opens up many possibilities in hospital care, as they facilitate treatments and their documentation. In this paper, we present a prototype for the HoloLens 2 supporting wound care and documentation. It was developed in a participatory process with nurses using the positive computing paradigm, with a focus on the improvement of the working conditions of nursing staff. In a qualitative study with 14 participants, the factors of autonomy, competence and connectedness were examined in particular. It was shown that good individual adaptability and flexibility of the system with respect to the work task and personal preferences lead to a high degree of autonomy. The availability of the right information at the right time strengthens the feeling of competence. On the one hand, the connection to patients is increased by the additional information in the glasses, but on the other hand, it is hindered by the unusual appearance of the device and the lack of eye contact. In summary, the potential of Augmented Reality glasses in care was confirmed, and approaches for a well-being-centered system design were identified but, at the same time, a number of future research questions, including the effects on patients, were also identified.
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Article
Tactile Speech Communication: Reception of Words and Two-Way Messages through a Phoneme-Based Display
by Jaehong Jung, Charlotte M. Reed, Juan S. Martinez and Hong Z. Tan
Virtual Worlds 2024, 3(2), 184-207; https://doi.org/10.3390/virtualworlds3020010 - 7 May 2024
Abstract
The long-term goal of this research is the development of a stand-alone tactile device for the communication of speech for persons with profound sensory deficits as well as for applications for persons with intact hearing and vision. Studies were conducted with a phoneme-based tactile display of speech consisting of a 4-by-6 array of tactors worn on the dorsal and ventral surfaces of the forearm. Unique tactile signals were assigned to the 39 English phonemes. Study I consisted of training and testing on the identification of 4-phoneme words. Performance on a trained set of 100 words averaged 87% across the three participants and generalized well to a novel set of words (77%). Study II consisted of two-way messaging between two users of TAPS (TActile Phonemic Sleeve) for 13 h over 45 days. The participants conversed with each other by inputting text that was translated into tactile phonemes sent over the device. Messages were identified with an accuracy of 73% correct in conjunction with 82% of the words. Although rates of communication were slow (roughly 1 message per minute), the results obtained with this ecologically valid procedure represent progress toward the goal of a stand-alone tactile device for speech communication.
Full article
(This article belongs to the Special Issue New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality)
Open Access Article
Story Starter: A Tool for Controlling Multiple Virtual Reality Headsets with No Active Internet Connection
by Andy T. Woods, Laryssa Whittaker, Neil Smith, Robert Ispas, Jackson Moore, Roderick D. Morgan and James Bennett
Virtual Worlds 2024, 3(2), 171-183; https://doi.org/10.3390/virtualworlds3020009 - 8 Apr 2024
Abstract
Immersive events are becoming increasingly popular, allowing multiple people to experience a range of VR content simultaneously. In these situations, onboarders help people get into VR experiences. Controlling VR headsets for others without physically having to put them on first is an important requirement here, as it streamlines the onboarding process and maximizes the number of viewers. Current off-the-shelf solutions require headsets to be connected to a cloud-based app via an active internet connection, which can be problematic in some locations. To address this challenge, we present Story Starter, a solution that enables the control of VR headsets without an active internet connection. Story Starter can start, stop, and install VR experiences, adjust device volume, and display information such as remaining battery life. We developed Story Starter in response to the UK-wide StoryTrails tour in the summer of 2022, which was held across 15 locations and attracted thousands of attendees who experienced a range of immersive content, including six VR experiences. Story Starter helped streamline the onboarding process by allowing onboarders to avoid putting the headset on themselves to complete routine tasks such as selecting and starting experiences, thereby minimizing COVID risks. Another benefit of not needing an active internet connection was that our headsets did not automatically update at inconvenient times, which we have found can sometimes break experiences. Converging evidence suggests that Story Starter was well received and reliable. However, we also acknowledge some limitations of the solution and discuss several next steps we are considering.
Full article
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
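One way to control headsets without an internet connection, as described above, is to send commands directly over the local network. The sketch below illustrates the general idea only; it is not Story Starter's actual protocol, and the JSON message shape, port number, and use of UDP are all our assumptions:

```python
import json
import socket

def send_command(action, host, port=9999, **params):
    """Send a one-shot JSON control message (e.g. start/stop an experience,
    set volume) to a headset listener on the local network over UDP."""
    message = json.dumps({"action": action, **params}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))

# e.g. send_command("start", "192.168.0.42", experience="forest-demo")
```

A companion listener on each headset would parse the message and launch the named experience, with no cloud round-trip required.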
Open Access Article
APIs in the Metaverse—A Systematic Evaluation
by Marius Traub and Markus Weinberger
Virtual Worlds 2024, 3(2), 157-170; https://doi.org/10.3390/virtualworlds3020008 - 8 Apr 2024
Abstract
One of the most critical challenges for the success of the Metaverse is interoperability amongst its virtual platforms and worlds. In this context, application programming interfaces (APIs) are essential. This study analyzes a sample of 15 Metaverse platforms. In the first step, the availability of publicly accessible APIs was examined. For those platforms offering an API, i.e., Decentraland, Second Life, Voxels, Roblox, Axie Infinity, Upland, and VRChat, the available API contents were collected, analyzed, and presented in the paper. The results show that only a few Metaverse platforms offer APIs at all. In addition, the available APIs are very diverse and heterogeneous. Information is somewhat fragmented, requiring access to several APIs to compile a comprehensive data set. Thus, standardized APIs will enable better interoperability and foster a more seamless and immersive user experience in the Metaverse.
Full article
Open Access Article
Motion Capture in Mixed-Reality Applications: A Deep Denoising Approach
by André Correia Gonçalves, Rui Jesus and Pedro Mendes Jorge
Virtual Worlds 2024, 3(1), 135-156; https://doi.org/10.3390/virtualworlds3010007 - 11 Mar 2024
Abstract
Motion capture is a fundamental technique in the development of video games and in film production to animate a virtual character based on the movements of an actor, creating more realistic animations in a short amount of time. One way to obtain this movement is to capture the player's motion through an optical sensor as they interact with the virtual world. However, during movement some parts of the human body can be occluded by others, and there can be noise caused by difficulties in sensor capture, reducing the user experience. This work presents a solution to correct motion capture errors from the Microsoft Kinect sensor or similar devices through a deep neural network (DNN) trained with a pre-processed dataset of poses offered by the Carnegie Mellon University (CMU) Graphics Lab. A temporal filter is implemented to smooth the movement given by the set of poses returned by the deep neural network. The system is implemented in Python with the TensorFlow application programming interface (API), which supports the machine learning techniques, and uses the Unity game engine to visualize and interact with the obtained skeletons. The results are evaluated using the mean absolute error (MAE) metric where ground truth is available and, for the Kinect data, with the feedback of 12 participants through a questionnaire.
Full article
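A temporal filter of the kind described above can be as simple as an exponential moving average over successive poses. A minimal sketch; the pose layout (a list of (x, y, z) joint positions per frame) and the smoothing constant are our assumptions for illustration, not the paper's exact filter:

```python
def smooth_poses(poses, alpha=0.3):
    """Exponentially smooth a sequence of poses, where each pose is a list of
    (x, y, z) joint positions; smaller alpha gives stronger smoothing."""
    smoothed = [list(poses[0])]
    for pose in poses[1:]:
        prev = smoothed[-1]
        smoothed.append([
            tuple(alpha * c + (1 - alpha) * p for c, p in zip(joint, prev_joint))
            for joint, prev_joint in zip(pose, prev)
        ])
    return smoothed

# A noisy jump from the origin to (1, 1, 1) is damped toward (0.3, 0.3, 0.3):
frames = smooth_poses([[(0.0, 0.0, 0.0)], [(1.0, 1.0, 1.0)]], alpha=0.3)
print(frames[1][0])  # (0.3, 0.3, 0.3)
```

The trade-off is latency: stronger smoothing (smaller `alpha`) suppresses sensor jitter but makes the virtual character lag behind fast movements.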
Open Access Article
Real-Time Diminished Reality Application Specifying Target Based on 3D Region
by Kaito Kobayashi and Masanobu Takahashi
Virtual Worlds 2024, 3(1), 115-134; https://doi.org/10.3390/virtualworlds3010006 - 4 Mar 2024
Abstract
Diminished reality (DR) is a technology in which a background image is overwritten on a real object to make it appear as if the object has been removed from real space. This paper presents a real-time DR application that employs deep learning. A DR application can remove objects inside a 3D region defined by a user in images captured using a smartphone. By specifying the 3D region containing the target object to be removed, DR can be realized for targets with various shapes and sizes, and the specified target can be removed even if the viewpoint changes. To achieve fast and accurate DR, a suitable network was employed based on the experimental results. Additionally, the loss function during the training process was improved to enhance completion accuracy. Then, the operation of the DR application at 10 fps was verified using a smartphone and a laptop computer.
Full article
Open Access Article
Comparing and Contrasting Near-Field, Object Space, and a Novel Hybrid Interaction Technique for Distant Object Manipulation in VR
by Wei-An Hsieh, Hsin-Yi Chien, David Brickler, Sabarish V. Babu and Jung-Hong Chuang
Virtual Worlds 2024, 3(1), 94-114; https://doi.org/10.3390/virtualworlds3010005 - 21 Feb 2024
Abstract
In this contribution, we propose a hybrid interaction technique that integrates near-field and object-space interaction techniques for manipulating objects at a distance in virtual reality (VR). The objective of the hybrid interaction technique was to seamlessly leverage the strengths of both the near-field and object-space manipulation techniques. We employed bimanual near-field metaphor with scaled replica (BMSR) as our near-field interaction technique, which enabled us to perform multilevel degrees-of-freedom (DoF) separation transformations, such as 1~3DoF translation, 1~3DoF uniform and anchored scaling, 1DoF and 3DoF rotation, and 6DoF simultaneous translation and rotation, with enhanced depth perception and fine motor control provided by near-field manipulation techniques. The object-space interaction technique we utilized was the classic Scaled HOMER, which is known to be effective and appropriate for coarse transformations in distant object manipulation. In a repeated measures within-subjects evaluation, we empirically evaluated the three interaction techniques for their accuracy, efficiency, and economy of movement in pick-and-place, docking, and tunneling tasks in VR. Our findings revealed that the near-field BMSR technique outperformed the object space Scaled HOMER technique in terms of accuracy and economy of movement, but the participants performed more slowly overall with BMSR. Additionally, our results revealed that the participants preferred to use the hybrid interaction technique, as it allowed them to switch and transition seamlessly between the constituent BMSR and Scaled HOMER interaction techniques, depending on the level of accuracy, precision and efficiency required.
Full article
Open Access Article
Cybersickness in Virtual Reality: The Role of Individual Differences, Its Effects on Cognitive Functions and Motor Skills, and Intensity Differences during and after Immersion
by Panagiotis Kourtesis, Agapi Papadopoulou and Petros Roussos
Virtual Worlds 2024, 3(1), 62-93; https://doi.org/10.3390/virtualworlds3010004 - 2 Feb 2024
Cited by 3
Abstract
Background: Given that VR is used in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, and the factors contributing to cybersickness, is becoming increasingly important. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: 30 participants, 20–45 years old, completed the MSSQ and the CSQ-VR, and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. After the VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience with videogaming was a significant predictor of both cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. Overall, the intensity of cybersickness’s nausea and vestibular symptoms significantly decreased after removing the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a cybersickness biomarker. Cybersickness affects visuospatial working memory and psychomotor skills. Concerning user experience, cybersickness and its effects on performance should be examined during, not after, immersion.
Full article
Open Access Article
Speech Intelligibility versus Congruency: User Preferences of the Acoustics of Virtual Reality Game Spaces
by Constantin Popp and Damian T. Murphy
Virtual Worlds 2024, 3(1), 40-61; https://doi.org/10.3390/virtualworlds3010003 - 19 Jan 2024
Abstract
3D audio spatializers for Virtual Reality (VR) can use the acoustic properties of the surfaces of a visualised game space to calculate a matching reverb. However, this approach could lead to reverbs that impair the tasks performed in such a space, such as listening to speech-based audio. Sound designers would then have to alter the room’s acoustic properties independently of its visualisation to improve speech intelligibility, causing audio-visual incongruency. As user expectation of simulated room acoustics regarding speech intelligibility in VR has not been studied, this study asked participants to rate the congruency of reverbs and their visualisations in 6-DoF VR while listening to speech-based audio. The participants compared unaltered, matching reverbs with sound-designed, mismatching reverbs. The latter featured improved D50s and reduced RT60s at the cost of lower audio-visual congruency. The results suggest participants preferred the improved reverbs only when the unaltered reverbs had comparatively low D50s or excessive ringing; otherwise, reverbs that were too dry or too reverberant were disliked. The range of expected RT60s depended on the surface visualisation. Differences in timbre between the reverbs may not affect preferences as strongly as shorter RT60s. Therefore, sound designers can intervene and prioritise speech intelligibility over audio-visual congruency in acoustically challenging game spaces.
Full article
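For context on the RT60 values discussed above: a room's reverberation time can be estimated with the classic Sabine formula, RT60 = 0.161 V / A, where V is the room volume and A the total absorption. A minimal sketch; the surface values are illustrative, and the study's mismatching reverbs were sound-designed rather than derived this way:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine estimate of reverberation time in seconds.
    `surfaces` is a list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 100 m^3 room with 100 m^2 of surfaces averaging 10% absorption:
print(round(rt60_sabine(100.0, [(50.0, 0.1), (50.0, 0.1)]), 2))  # 1.61
```

Increasing the absorption coefficients (e.g. by swapping visualised materials from concrete to carpet) shortens the estimated RT60, which is the kind of change a sound designer might make to favour speech intelligibility.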
Open Access Feature Paper Article
Evaluating the Effect of Outfit on Personality Perception in Virtual Characters
by Yanbo Cheng and Yingying Wang
Virtual Worlds 2024, 3(1), 21-39; https://doi.org/10.3390/virtualworlds3010002 - 4 Jan 2024
Cited by 2
Abstract
Designing virtual characters that are capable of reflecting a sense of personality is a key goal in research and applications in virtual reality and computer graphics. More and more research efforts are dedicated to investigating approaches to construct a diverse, equitable, and inclusive metaverse by infusing expressive personalities and styles into virtual avatars. While most previous work focused on exploring variations in virtual characters’ dynamic behaviors, characters’ visual appearance plays a crucial role in affecting their perceived personalities. This paper presents a series of experiments evaluating the effect of virtual characters’ outfits on their perceived personality. Based on the related psychology research conducted in the real world, we determined a set of outfit factors likely to reflect personality in virtual characters: color, design, and type. As a framework for our study, we used the “Big Five” personality model for evaluating personality traits. To test our hypothesis, we conducted three perceptual experiments to evaluate the outfit parameters’ contributions to the characters’ personality. In our first experiment, we studied the color factor by varying color hue, saturation, and value; in the second experiment, we evaluated the impact of different neckline, waistline, and sleeve designs; and in our third experiment, we examined the personality perception of five outfit types: professional, casual, fashionable, outdoor, and indoor. Significant results offer guidance to avatar designers on how to create virtual characters with specific personality profiles. We further conducted a verification test to extend the application of our findings to animated virtual characters in augmented reality (AR) and virtual reality (VR) settings. Results confirmed that our findings can be broadly applied to both static and animated virtual characters in VR and AR environments that are commonly used in games, entertainment, and social networking scenarios.
Full article
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-03-00002/article_deploy/html/images/virtualworlds-03-00002-g001-550.jpg?1704359840)
Figure 1
Open Access Article
Design and Evaluation of an Asynchronous VR Exploration System for Architectural Design Discussion Content
by
Hsuan-Ming Chang, Ting-Wei Hsu, Ming-Han Tsai, Sabarish V. Babu and Jung-Hong Chuang
Virtual Worlds 2024, 3(1), 1-20; https://doi.org/10.3390/virtualworlds3010001 - 27 Dec 2023
Abstract
Design discussion is crucial in the architectural design process. To enhance spatial understanding of 3D space and discussion effectiveness, systems have recently been proposed to support interactive design discussion in an immersive virtual environment. The entire design discussion can be archived and potentially become course material for future learners. In this paper, we propose an asynchronous VR exploration system that helps learners explore such content effectively and efficiently, anywhere and at any time. To improve effectiveness and efficiency, we also propose a summarization-to-detail approach in which students observe a visualization of the spatial summarization of actions and participants’ dwell time, or the temporal distribution of dialogues, and then locate an important or interesting region or dialogue for further exploration. To explore the discussion content further, students can invoke a preview, a time-lapse animation of object operations showing how the models changed, or a playback to view the discussion details. We conducted an exploratory user study with 10 participants to evaluate user experience, user impressions, and the effectiveness of learning design discussion course content with our asynchronous VR exploration system. The results indicate that the presented interactive VR exploration system can help learners study design discussion content effectively. Participants also provided positive feedback and confirmed the usefulness and value of the system. Our application and lessons learned have implications for future asynchronous VR exploration systems, not only for architectural design discussion content but also for other applications such as industrial visual inspection and educational visualization of design discussions.
Full article
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-03-00001/article_deploy/html/images/virtualworlds-03-00001-g001-550.jpg?1703677188)
Figure 1
Open Access Article
Physics-Based Watercraft Simulator in Virtual Reality
by
Kelly Ervin, Jonathan Boone, Karl Smink, Gaurav Savant, Keith Martin, Spicer Bak and Shyla Clark
Virtual Worlds 2023, 2(4), 422-438; https://doi.org/10.3390/virtualworlds2040024 - 14 Dec 2023
Abstract
In this paper, watercraft and ship simulation is summarized, and the way it can be extended through realistic physics is explored. A hydrodynamic, data-driven, immersive watercraft simulation experience is also introduced, using the Unreal Engine to visualize a Landing Craft Utility (LCU) operating and interacting with near-shore waves in virtual reality (VR). The VR application gives navigation scientists a better understanding of how coastal waves impact landing operations and channel design. FUNWAVE data generated on the supercomputing resources at the U.S. Army Corps of Engineers (USACE) Engineer Research and Development Center (ERDC) are employed; from these data, a graphical representation of the domain is created, including the vessel model and a customizable VR bridge for controlling the vessel within the virtual environment. Several dimension-reduction methods are devised to ensure that the FUNWAVE data can inform the model while keeping the application running in real time at a frame rate acceptable for the VR headset. By importing millions of data points output by FUNWAVE version 3.4 into the Unreal Engine, the simulator lets virtual vessels respond to physics-driven data.
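The kind of dimension reduction the abstract alludes to can be illustrated with a minimal sketch. The function name, array layout, and stride values below are illustrative assumptions for exposition, not the authors’ actual pipeline or FUNWAVE’s real output format:

```python
import numpy as np

# Hypothetical sketch: treat the solver output as a dense time series of
# wave-surface elevation grids, far too large to feed to a VR renderer
# frame-by-frame at full resolution.
def downsample_wave_field(eta, spatial_step=4, temporal_step=2):
    """Strided subsampling of a (time, y, x) elevation array -- one simple
    form of dimension reduction for real-time rendering."""
    return eta[::temporal_step, ::spatial_step, ::spatial_step]

# 100 frames of a 256x256 grid -> 50 frames of a 64x64 grid
eta = np.zeros((100, 256, 256), dtype=np.float32)
small = downsample_wave_field(eta)
print(small.shape)  # (50, 64, 64)
```

In practice a production pipeline would likely combine subsampling with interpolation or a reduced-order model so that wave detail near the vessel is preserved.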
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-02-00024/article_deploy/html/images/virtualworlds-02-00024-g001-550.jpg?1702635849)
Figure 1
Open Access Review
360-Degree Virtual Reality Utilising Head-Mounted Devices in Undergraduate Nursing and Midwifery Education: A Scoping Review
by
Maram A. Alammary, Lesley Halliday and Stathis Th. Konstantinidis
Virtual Worlds 2023, 2(4), 396-421; https://doi.org/10.3390/virtualworlds2040023 - 7 Dec 2023
Cited by 1
Abstract
Immersive Virtual Reality (IVR) is a promising tool for improving the teaching and learning of nursing and midwifery students. However, the preexisting literature does not comprehensively examine scenario development, theoretical underpinnings, duration, and debriefing techniques. The aim of this review was to assess the available evidence on how 360-degree Virtual Reality (VR) utilising head-mounted devices has been used in undergraduate nursing and midwifery education programmes and to explore its potential pedagogical value based on Kirkpatrick’s evaluation model. The review followed the Joanna Briggs Institute (JBI) methodology. A comprehensive electronic search was conducted across five databases. All studies published in English between 2007 and 2022 were included, regardless of design, if they focused on undergraduate nursing and midwifery programmes and utilised fully immersive 360-degree VR scenarios. Of an initial pool of 1700 articles, 26 were selected for final inclusion. The findings indicated limited diversity in scenario design, with only one study employing a participatory approach. Within the Kirkpatrick model, the most measurable outcomes were found at level 2. The main drawback observed in interventional studies was the absence of a theoretical framework and debriefing. The review concludes that the increased use of fully immersive VR in nursing education has improved student learning outcomes; however, published literature on midwifery education is scarce.
Full article
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-02-00023/article_deploy/html/images/virtualworlds-02-00023-g001-550.jpg?1701941699)
Figure 1
Open Access Review
Integration of Immersive Approaches for Collaborative Processes with Building Information Modeling (BIM) Methodology for the AEC Industry: An Analysis of the Current State and Future Challenges
by
Simone Balin, Cecilia M. Bolognesi and Paolo Borin
Virtual Worlds 2023, 2(4), 374-395; https://doi.org/10.3390/virtualworlds2040022 - 15 Nov 2023
Cited by 1
Abstract
This study aims to identify and analyze existing gaps in the integration of immersive approaches for collaborative processes with Building Information Modeling (BIM) in the Architecture, Engineering, and Construction (AEC) sector. Using a systematic approach that includes metadata analysis and review procedures, we have formulated specific research questions aimed at guiding future investigations into these gaps. Additionally, the analysis generates insights that could guide future research directions and improvements in the field. The methodology involves a comprehensive review of the literature, focusing on the interaction between immersiveness, BIM methodology, and collaborative processes. Data from 2010 to 2023 have been analyzed to ensure relevance and completeness. Our findings reveal current limitations in the field, such as the need for fully integrated prototypes and the execution of empirical studies to clarify operational processes. These limitations serve as the basis for our research questions. The study offers actionable insights that could guide future research and improvements in the AEC sector, particularly in the adoption of immersive technologies. The research underscores the urgency of addressing these challenges to facilitate ongoing development and greater adoption of immersive technologies in the AEC sector.
Full article
(This article belongs to the Special Issue Networked Virtual Reality, Mixed Reality and Augmented Reality Systems)
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-02-00022/article_deploy/html/images/virtualworlds-02-00022-g001-550.jpg?1700013421)
Figure 1
Open Access Systematic Review
The Integration and Application of Extended Reality (XR) Technologies within the General Practice Primary Medical Care Setting: A Systematic Review
by
Donovan Jones, Roberto Galvez, Darrell Evans, Michael Hazelton, Rachel Rossiter, Pauletta Irwin, Peter S. Micalos, Patricia Logan, Lorraine Rose and Shanna Fealy
Virtual Worlds 2023, 2(4), 359-373; https://doi.org/10.3390/virtualworlds2040021 - 2 Nov 2023
Cited by 1
Abstract
The COVID-19 pandemic instigated a paradigm shift in healthcare delivery with a rapid adoption of technology-enabled models of care, particularly within the general practice primary care setting. The emergence of the Metaverse and its associated technology mediums, specifically extended reality (XR) technology, presents a promising opportunity for further industry transformation. Therefore, the objective of this study was to explore the current application and utilisation of XR technologies within the general practice primary care setting to establish a baseline for tracking its evolution and integration. A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) was conducted and registered with the international database of prospectively registered systematic reviews as PROSPERO-CRD42022339905. Eleven articles met the inclusion criteria and were quality appraised and included for review. All databases searched, inclusive of search terms, are supplied to enhance the transparency and reproducibility of the findings. All study interventions used virtual reality technology exclusively. The application of virtual reality within the primary care setting was grouped under three domains: (1) childhood vaccinations, (2) mental health, and (3) health promotion. There is immense potential for the future application of XR technologies within the general practice primary care setting. As technology evolves, healthcare practitioners, XR technology specialists, and researchers should collaborate to harness the full potential of implementing XR mediums.
Full article
(This article belongs to the Special Issue Serious Games and Extended Reality in Healthcare and/or Education)
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-02-00021/article_deploy/html/images/virtualworlds-02-00021-g001-550.jpg?1701849348)
Figure 1
Open Access Article
Enhancing Self-Learning in Higher Education with Virtual and Augmented Reality Role Games: Students’ Perceptions
by
Luis Valladares Ríos, Ricardo Acosta-Diaz and Pedro C. Santana-Mancilla
Virtual Worlds 2023, 2(4), 343-358; https://doi.org/10.3390/virtualworlds2040020 - 30 Oct 2023
Cited by 6
Abstract
This study investigates how virtual and augmented reality role games impact self-learning in higher education settings. A qualitative action-research approach was used, involving the creation of augmented reality micro-stories to encourage creativity and critical thinking. Through role-playing, students collaborated and gained a deeper understanding of the course, improving their self-learning abilities. The findings indicate that incorporating virtual and augmented reality into higher education positively affects self-learning, promoting active student engagement and meaningful learning experiences. Additionally, students perceive these immersive educational methods as bridging the gap between virtual and in-person learning environments, ultimately leading to enhanced educational results.
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
![](https://pub.mdpi-res.com/virtualworlds/virtualworlds-02-00020/article_deploy/html/images/virtualworlds-02-00020-g001-550.jpg?1698648371)
Figure 1
Topics
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Special Issues
Special Issue in
Virtual Worlds
Networked Virtual Reality, Mixed Reality and Augmented Reality Systems
Guest Editors: Jorge Cardoso, Thiago Malheiros Porcino
Deadline: 31 July 2024
Special Issue in
Virtual Worlds
New Insights on Haptics and Human–Computer Interaction Systems in Virtual Reality
Guest Editors: Panagiotis Kourtesis, Domna Banakou
Deadline: 30 September 2024
Special Issue in
Virtual Worlds
Serious Games and Extended Reality in Healthcare and/or Education
Guest Editors: Chen Huei Leo, Kang Hao Cheong, Bina Rai
Deadline: 30 November 2024
Special Issue in
Virtual Worlds
Empowering Health Education: Digital Transformation Frontiers for All
Guest Editors: Stathis Konstantinidis, Panagiotis Bamidis, Eleni Dafli, Panagiotis Antoniou
Deadline: 31 December 2024