Journal Description
Virtual Worlds
Virtual Worlds is an international, peer-reviewed, open access journal on virtual reality, augmented and mixed reality, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: first decisions in 16 days; acceptance to publication in 5.8 days (median values for MDPI journals in the second half of 2022).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Virtual Worlds is a companion journal of Applied Sciences.
Latest Articles
Can Brain–Computer Interfaces Replace Virtual Reality Controllers? A Machine Learning Movement Prediction Model during Virtual Reality Simulation Using EEG Recordings
Virtual Worlds 2023, 2(2), 182-202; https://doi.org/10.3390/virtualworlds2020011 - 09 Jun 2023
Abstract
Brain–Machine Interfaces (BMIs) have made significant progress in recent years; however, there are still several application areas in which improvement is needed, including the accurate prediction of body movement during Virtual Reality (VR) simulations. To achieve a high level of immersion in VR sessions, it is important to have bidirectional interaction, which is typically achieved through the use of movement-tracking devices, such as controllers and body sensors. However, it may be possible to eliminate the need for these external tracking devices by directly acquiring movement information from the motor cortex via electroencephalography (EEG) recordings. This could potentially lead to more seamless and immersive VR experiences. Numerous studies have investigated EEG recordings during movement. While the majority of these studies have focused on movement prediction based on brain signals, a smaller number have focused on how to utilize such signals during VR simulations. This suggests that further research is still needed to fully understand the potential for using EEG to predict movement in VR simulations. In this research, we propose two neural network decoders designed to predict pre-arm-movement and during-arm-movement behavior from brain activity recorded during the execution of VR simulation tasks. For both decoders, we employ a Long Short-Term Memory model. The study's findings are highly encouraging, lending credence to the premise that this technology has the ability to replace external tracking devices.
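The decoders described above map windowed, multichannel EEG to a movement prediction using a Long Short-Term Memory network. As a hedged illustration of that kind of model (not the authors' architecture; the channel count, window length, and two-class output are assumptions), a minimal PyTorch sketch might look like this:

```python
import torch
import torch.nn as nn

class EEGMovementDecoder(nn.Module):
    """Minimal LSTM decoder sketch: EEG window -> movement class.

    Assumed for illustration (not from the paper): 32 EEG channels,
    2 output classes (e.g., pre-movement vs. rest), one LSTM layer.
    """
    def __init__(self, n_channels=32, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_channels)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the window
        return self.head(h_n[-1])    # class logits

# Toy batch: 8 one-second windows sampled at 250 Hz from 32 channels
model = EEGMovementDecoder()
logits = model(torch.randn(8, 250, 32))
print(logits.shape)  # torch.Size([8, 2])
```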
Full article
Open Access Article
Developing an Interactive VR CAVE for Immersive Shared Gaming Experiences
Virtual Worlds 2023, 2(2), 162-181; https://doi.org/10.3390/virtualworlds2020010 - 19 May 2023
Abstract
The popularity of VR technology has led to the development of public VR setups in entertainment venues, museums, and exhibitions. Interactive VR CAVEs can create compelling gaming experiences for both players and spectators, with a strong sense of presence and emotional engagement. This paper presents the design and development of MobiCave, a room-scale interactive VR environment that uses motion-tracking systems for an immersive experience. A user study was conducted in the MobiCave to gather feedback on users' experience with a demo game. The study examined factors such as immersion, presence, flow, perceived usability, and motivation for both players and bystanders. Results showed promising findings for both fun and learning purposes, and the experience was found to be highly immersive. This study suggests that interactive VR setups for public use could be a motivating opportunity for creating new forms of social interaction and collaboration in gaming.
Full article

Open Access Article
Piecewise: A Non-Isomorphic 3D Manipulation Technique That Factors Upper-Limb Ergonomics
Virtual Worlds 2023, 2(2), 144-161; https://doi.org/10.3390/virtualworlds2020009 - 17 May 2023
Abstract
Virtual reality (VR) is gaining popularity as an educational, training, and healthcare tool due to its decreasing cost. Because of the high user variability in terms of ergonomics, 3D manipulation techniques (3DMTs) for 3D user interfaces (3DUIs) must be adjustable for comfort and usability, hence avoiding interactions that only function for the typical user. Given the role of the upper limb (i.e., arm, forearm, and hands) in interacting with virtual objects, research has led to the development of 3DMTs for facilitating isomorphic (i.e., an equal translation of controller movement) and non-isomorphic (i.e., adjusted controller visuals in VR) interactions. Although advances in 3DMTs have been proven to facilitate VR interactions, user variability has not been addressed in terms of ergonomics. This work introduces Piecewise, an upper-limb-customized non-isomorphic 3DMT for 3DUIs that accounts for user variability by incorporating upper-limb ergonomics and comfort range of motion. Our research investigates the effects of upper-limb ergonomics on completion time, skipped objects, percentage of reach, upper-body lean, engagement, and presence levels in comparison to common 3DMTs, such as normal (physical reach), object translation, and reach-bounded non-linear input amplification (RBNLIA). A 20-person within-subjects study revealed that upper-limb ergonomics influence the execution and perception of tasks in virtual reality. The proposed Piecewise approach ranked second behind the RBNLIA method, although all 3DMTs were evaluated as usable, engaging, and favorable in general. The implications of our research are significant because upper-limb ergonomics can affect VR performance for a broader range of users as the technology becomes widely available and adopted for accessibility and inclusive design, offering opportunities to provide additional customizations that can affect the VR user experience.
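As a hedged illustration of non-isomorphic manipulation (this is not the paper's Piecewise formulation; the comfort threshold, maximum gain, and linear ramp are assumptions), a control-display gain that stays 1:1 inside a comfortable reach range and amplifies hand motion beyond it could be sketched as follows:

```python
def nonisomorphic_gain(reach_fraction, comfort_limit=0.6, max_gain=3.0):
    """Illustrative piecewise control-display gain (not the paper's formula).

    reach_fraction: physical hand displacement as a fraction of full arm reach.
    Inside the comfortable range the mapping is isomorphic (gain 1.0); beyond
    it, movement is amplified so distant targets stay reachable without strain.
    """
    if reach_fraction <= comfort_limit:
        return 1.0
    # Linearly ramp the gain from 1.0 up to max_gain outside the comfort range
    t = (reach_fraction - comfort_limit) / (1.0 - comfort_limit)
    return 1.0 + t * (max_gain - 1.0)

def virtual_displacement(physical_delta, reach_fraction):
    """Scale a physical hand movement by the current gain."""
    return physical_delta * nonisomorphic_gain(reach_fraction)

if __name__ == "__main__":
    for r in (0.3, 0.6, 0.8, 1.0):
        print(r, round(nonisomorphic_gain(r), 2))  # 1.0, 1.0, 2.0, 3.0
```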
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Article
Inter-Subject EEG Synchronization during a Cooperative Motor Task in a Shared Mixed-Reality Environment
Virtual Worlds 2023, 2(2), 129-143; https://doi.org/10.3390/virtualworlds2020008 - 20 Apr 2023
Abstract
Mixed-reality (MR) environments, in which virtual objects are overlaid on the real environment and shared with peers by wearing a transparent optical head-mounted display, are considered to be well suited for collaborative work. However, no studies have been conducted to provide neuroscientific evidence of their effectiveness. In contrast, inter-brain synchronization has been repeatedly observed in cooperative tasks and can be used as an index of the quality of cooperation. In this study, we used electroencephalography (EEG) to simultaneously measure the brain activity of pairs of participants, a technique known as hyperscanning, during a cooperative motor task to investigate whether inter-brain synchronization would also be observed in a shared MR environment. The participants were presented with virtual building blocks to grasp and build up an object cooperatively with a partner or individually. We found that inter-brain synchronization in the cooperative condition was stronger than that in the individual condition (F(1, 15) = 4.70, p < 0.05). In addition, there was a significant correlation between task performance and inter-brain synchronization in the cooperative condition (rs = 0.523, p < 0.05). Therefore, the shared MR environment was sufficiently effective to evoke inter-brain synchronization, which reflects the quality of cooperation. This study offers a promising neuroscientific method to objectively measure the effectiveness of MR technology.
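The abstract does not name the synchronization index used, but a common choice in EEG hyperscanning is the phase-locking value (PLV) between the two participants' band-passed signals. A minimal sketch, assuming pre-filtered single-channel data and using SciPy's Hilbert transform (the toy signals below are synthetic):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(sig_a, sig_b):
    """Phase-locking value between two band-passed EEG signals (0..1).

    Illustrative inter-brain synchronization index; the paper's exact
    metric is not specified in the abstract, and real data would first
    be band-pass filtered and artifact-corrected.
    """
    phase_a = np.angle(hilbert(sig_a))
    phase_b = np.angle(hilbert(sig_b))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Toy example: two noisy 10 Hz signals standing in for two participants
fs = 256
t = np.arange(0, 10, 1 / fs)
a = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(round(float(phase_locking_value(a, b)), 3))  # close to 1 when synchronized
```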
Full article

Open Access Communication
Current Perspective of Metaverse Application in Medical Education, Research and Patient Care
Virtual Worlds 2023, 2(2), 115-128; https://doi.org/10.3390/virtualworlds2020007 - 18 Apr 2023
Cited by 2
Abstract
As virtual and augmented reality simulation technologies advance, the use of such technologies in medicine is widespread. The advanced virtual and augmented systems coupled with a complex interactive, immersive environment create a metaverse. The metaverse enables us to connect with others in a virtual world free of spatial restrictions and time constraints. In the educational aspect, it allows collaboration among peers and educators in an immersive 3D environment that can imitate the actual classroom setting with learning tools. Metaverse technology enables visualization of virtual 3D structures, facilitates collaboration and small group activities, improves mentor–mentee interactions, provides opportunities for self-directed learning experiences, and helps develop teamwork skills. The metaverse will be adopted rapidly in healthcare, boost digitalization, and grow in use in surgical procedures and medical education. The potential advantages of using the metaverse in diagnosing and treating patients are tremendous. This perspective paper provides the current state of technology in the medical field and proposes potential research directions to harness the benefits of the metaverse in medical education, research, and patient care. It aims to spark interest and discussion in the application of metaverse technology in healthcare and inspire further research in this area.
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Review
Static Terrestrial Laser Scanning (TLS) for Heritage Building Information Modeling (HBIM): A Systematic Review
Virtual Worlds 2023, 2(2), 90-114; https://doi.org/10.3390/virtualworlds2020006 - 14 Apr 2023
Abstract
Heritage Building Information Modeling (HBIM) is an essential technology for heritage documentation, conservation, and management. It enables people to understand, archive, advertise, and virtually reconstruct their built heritage. Creating highly accurate HBIM models requires the use of several reality capture tools, such as terrestrial laser scanning (TLS), photogrammetry, unmanned aerial vehicles (UAVs), etc. However, the existing literature has not explicitly reviewed the applications and impacts of TLS in implementing HBIM. This paper uses the PRISMA protocol to present a systematic review of TLS utilization in capturing reality data in order to recognize the status of applications of TLS for HBIM and identify the knowledge gaps on the topic. A thorough examination of the 58 selected articles revealed the state-of-the-art practices when utilizing static TLS technology for surveying and processing captured TLS data for developing HBIM models. Moreover, the absence of guidelines for using static TLS surveys for HBIM data acquisition, the lack of robust automated frameworks for producing/transferring 3D geometries and their attributes from TLS data to BIM entities, and the under-utilized application of TLS for long-term monitoring and change detection were identified as gaps in knowledge. The findings of this research provide stakeholders with a good grasp of static TLS for HBIM and therefore lay the foundation for further research, strategies, and scientific solutions for improving the utilization of TLS when documenting heritage structures and developing HBIM.
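To make the "processing captured TLS data" step concrete, a typical scan-to-BIM preprocessing pass downsamples the registered point cloud and extracts dominant planar surfaces before BIM entities are modeled from them. A hedged sketch using the Open3D library (the file name, voxel size, and RANSAC thresholds are illustrative assumptions, not details from the review):

```python
import open3d as o3d  # assumes the open3d package is installed

# Hypothetical registered TLS scan; a real HBIM workflow would first
# register and clean multiple scan stations.
pcd = o3d.io.read_point_cloud("facade_scan.ply")

# Downsample to a manageable density for modeling (2 cm voxels here)
pcd_down = pcd.voxel_down_sample(voxel_size=0.02)

# Extract the dominant plane (e.g., a wall face) with RANSAC as a starting
# point for creating the corresponding BIM wall entity
plane_model, inliers = pcd_down.segment_plane(distance_threshold=0.01,
                                              ransac_n=3,
                                              num_iterations=1000)
wall_points = pcd_down.select_by_index(inliers)
print("Plane ax+by+cz+d=0 coefficients:", plane_model)
print("Points on the extracted plane:", len(wall_points.points))
```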
Full article
(This article belongs to the Special Issue Digital Twins in Cultural Heritage)
Open Access Concept Paper
Eliciting Co-Creation Best Practices of Virtual Reality Reusable e-Resources
Virtual Worlds 2023, 2(1), 75-89; https://doi.org/10.3390/virtualworlds2010005 - 20 Mar 2023
Abstract
Immersive experiential technologies find fertile grounds to grow and support healthcare education. Virtual, augmented, and mixed reality (VR/AR/MR) have proven to be impactful on both the educational and the affective state of healthcare students, increasing engagement. However, there is a lack of guidance for healthcare stakeholders on developing and integrating virtual reality resources into healthcare training. Thus, the authors applied Bardach's Eightfold Policy Analysis Framework to critically evaluate existing protocols to determine if they are inconsistent, ineffective, or result in uncertain outcomes, following systematic pathways from concepts to decision-making. Co-creative VR resource development emerged as the preferred method. Best practices for co-creating VR Reusable e-Resources identified co-creation as an effective pathway to the prolific use of immersive media in healthcare education. Co-creation should be considered in conjunction with a training framework to enhance educational quality. Iterative cycles engaging all stakeholders enhance educational quality, while co-creation is central to the quality assurance process, both for technical and topical fidelity and for tailoring resources to learners' needs. Co-creation itself is seen as a bespoke learning modality. This paper provides the first body of evidence for co-creative VR resource development as a valid and strengthening method for healthcare immersive content development. Despite prior research supporting co-creation in immersive resource development, there were no established guidelines for best practices.
Full article
Open Access Article
Exploring an Affective and Responsive Virtual Environment to Improve Remote Learning
Virtual Worlds 2023, 2(1), 53-74; https://doi.org/10.3390/virtualworlds2010004 - 02 Mar 2023
Abstract
Online classes are typically conducted by using video conferencing software such as Zoom, Microsoft Teams, and Google Meet. Research has identified drawbacks of online learning, such as “Zoom fatigue”, characterized by distractions and lack of engagement. This study presents the CUNY Affective and Responsive Virtual Environment (CARVE) Hub, a novel virtual reality hub that uses a facial emotion classification model to generate emojis for affective and informal responsive interaction in a 3D virtual classroom setting. A web-based machine learning model is employed for facial emotion classification, enabling students to communicate four basic emotions live, via automated web camera capture, in a virtual classroom without having to display their camera feeds. The experiment is conducted in undergraduate classes on both Zoom and CARVE, and the results of a survey indicate that students have a positive perception of interactions in the proposed virtual classroom compared with Zoom. Correlations between automated emojis and interactions are also observed. This study discusses potential explanations for the improved interactions, including a decrease in pressure on students when they are not showing their faces. In addition, video panels in traditional remote classrooms may be useful for communication but not for interaction. Students favor features in virtual reality, such as spatial audio and the ability to move around, with collaboration being identified as the most helpful feature.
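As a hedged sketch of the classifier-output-to-emoji step (the four emotion labels, the emoji choices, and the smoothing window are assumptions; the CARVE Hub's actual model and labels are not given in the abstract), the display side could look like this:

```python
# Map per-frame facial-emotion probabilities to a debounced emoji for display
# in a virtual classroom. Labels and window size are illustrative assumptions.
from collections import Counter, deque

EMOJI = {"happy": "😀", "sad": "😢", "surprised": "😮", "neutral": "😐"}

class EmojiSmoother:
    """Debounce per-frame predictions so the displayed emoji does not flicker."""
    def __init__(self, window=15):
        self.recent = deque(maxlen=window)

    def update(self, probs):
        # probs: dict mapping emotion label -> probability from the classifier
        self.recent.append(max(probs, key=probs.get))
        majority = Counter(self.recent).most_common(1)[0][0]
        return EMOJI[majority]

smoother = EmojiSmoother()
frame_probs = {"happy": 0.7, "sad": 0.1, "surprised": 0.1, "neutral": 0.1}
print(smoother.update(frame_probs))  # 😀
```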
Full article

Open Access Article
A Comparison Study on the Learning Effectiveness of Construction Training Scenarios in a Virtual Reality Environment
Virtual Worlds 2023, 2(1), 36-52; https://doi.org/10.3390/virtualworlds2010003 - 02 Feb 2023
Cited by 1
Abstract
While VR-based training has been proven to improve learning effectiveness over conventional methods, there is a lack of research on how the choice of training mode affects its learning effectiveness. This study aims to investigate the learning effectiveness of engineering students under different training modes in VR-based construction design training. Three VR scenarios with varying degrees of immersiveness were developed based on Dale's cone of learning experience, including (1) audio-visual-based training, (2) interactive-based training, and (3) contrived hands-on experience training. Sixteen students with varying backgrounds participated in this study. The results posit a positive correlation between learning effectiveness and the degree of immersiveness, with mean scores of 77.33%, 81.33%, and 82.67% in the three training scenarios, respectively. Participants with lower academic performance tended to perform significantly better in audio-visual and interactive-based training. Meanwhile, participants with experience in gaming tended to outperform the latter group. Results also showed that participants with less experience in gaming benefited the most from hands-on VR training. The findings suggest that the general audience retained the most information via hands-on VR training; however, training scenarios should be contextualized toward the targeted group to maximize learning effectiveness.
Full article

Open Access Article
Cybersickness in Virtual Reality Questionnaire (CSQ-VR): A Validation and Comparison against SSQ and VRSQ
Virtual Worlds 2023, 2(1), 16-35; https://doi.org/10.3390/virtualworlds2010002 - 29 Jan 2023
Cited by 3
Abstract
Cybersickness is a drawback of virtual reality (VR), which also affects the cognitive and motor skills of users. The Simulator Sickness Questionnaire (SSQ) and its variant, the Virtual Reality Sickness Questionnaire (VRSQ), are two tools that measure cybersickness. However, both tools suffer from important limitations which raise concerns about their suitability. Two versions of the Cybersickness in VR Questionnaire (CSQ-VR), a paper-and-pencil and a 3D–VR version, were developed. The validation of the CSQ-VR and a comparison against the SSQ and the VRSQ were performed. Thirty-nine participants were exposed to three rides with linear and angular accelerations in VR. Assessments of cognitive and psychomotor skills were performed at baseline and after each ride. The validity of both versions of the CSQ-VR was confirmed. Notably, CSQ-VR demonstrated substantially better internal consistency than both SSQ and VRSQ. Additionally, CSQ-VR scores had significantly better psychometric properties in detecting a temporary decline in performance due to cybersickness. Pupil size was a significant predictor of cybersickness intensity. In conclusion, the CSQ-VR is a valid assessment of cybersickness with superior psychometric properties to SSQ and VRSQ. The CSQ-VR enables the assessment of cybersickness during VR exposure, and it benefits from examining pupil size, a biomarker of cybersickness.
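Internal consistency of questionnaires such as the CSQ-VR is conventionally quantified with Cronbach's alpha; the abstract does not name the exact statistic, so treat that choice as an assumption. A minimal NumPy sketch of the computation on toy data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    Standard internal-consistency statistic; the CSQ-VR paper's exact
    analysis pipeline is not specified in the abstract.
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_vars / total_var)

# Toy example: 5 respondents rating 4 symptom items on a 1-7 scale
toy = [[4, 5, 4, 5],
       [2, 2, 3, 2],
       [6, 6, 5, 6],
       [3, 4, 3, 3],
       [5, 5, 6, 5]]
print(round(cronbach_alpha(toy), 3))
```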
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Article
Characterization of Functional Connectivity in Chronic Stroke Subjects after Augmented Reality Training
Virtual Worlds 2023, 2(1), 1-15; https://doi.org/10.3390/virtualworlds2010001 - 16 Jan 2023
Abstract
Augmented reality (AR) tools have been investigated with promising outcomes in rehabilitation. Recently, some studies have addressed the neuroplasticity effects induced by this type of therapy using functional connectivity obtained from resting-state functional magnetic resonance imaging (rs-fMRI). This work aims to perform an initial assessment of possible changes in brain functional connectivity associated with the use of NeuroR, an AR system for upper limb motor rehabilitation of poststroke participants. An experimental study with a case series is presented. Three chronic stroke participants with left hemiparesis were enrolled in the study. They received eight sessions with NeuroR to provide shoulder rehabilitation exercises. Measurements of range of motion (ROM) were obtained at the beginning and end of each session, and rs-fMRI data were acquired at baseline (pretest) and after the last training session (post-test). Functional connectivity analyses of the rs-fMRI data were performed using a seed placed at the noninjured motor cortex. ROM increased in two patients who presented spastic hemiparesis in the left upper limb, with a change in muscle tone, and stayed the same (at zero angles) in one of the patients, who had the highest degree of impairment, showing flaccid hemiplegia. All participants had higher mean connectivity values in the ipsilesional brain regions associated with motor function at post-test than at pretest. Our findings show the potential of the NeuroR system to promote neuroplasticity related to AR-based therapy for motor rehabilitation in stroke participants.
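Seed-based functional connectivity of the kind described above is, at its core, a correlation between the seed region's BOLD time series and every other region's time series, often Fisher z-transformed before comparison. A hedged sketch (the array shapes, the z-transform step, and the synthetic data are assumptions; real rs-fMRI preprocessing is omitted):

```python
import numpy as np

def seed_connectivity(seed_ts, region_ts):
    """Seed-based functional connectivity sketch.

    seed_ts:   (n_timepoints,) BOLD time series from the seed region
    region_ts: (n_timepoints, n_regions) time series of the other regions
    Returns Fisher z-transformed Pearson correlations, one per region.
    """
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    regions = (region_ts - region_ts.mean(axis=0)) / region_ts.std(axis=0)
    r = regions.T @ seed / len(seed)                      # Pearson correlations
    return np.arctanh(np.clip(r, -0.999999, 0.999999))    # Fisher z

rng = np.random.default_rng(0)
seed = rng.standard_normal(200)
regions = rng.standard_normal((200, 5))
regions[:, 0] += 0.8 * seed     # one region co-fluctuates with the seed
print(np.round(seed_connectivity(seed, regions), 2))
```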
Full article
(This article belongs to the Topic Simulations and Applications of Augmented and Virtual Reality)
Open Access Article
Moirai: A No-Code Virtual Serious Game Authoring Platform
Virtual Worlds 2022, 1(2), 147-171; https://doi.org/10.3390/virtualworlds1020009 - 19 Dec 2022
Cited by 1
Abstract
Serious games, that is, games whose primary purpose is education and training, are gaining widespread popularity in higher education contexts and have been associated with increased learner memory retention, engagement, and motivation even among learners with special needs. Despite these benefits, serious games have fixed scenarios that cannot be easily modified, leading to predictable and dull experiences that can reduce user engagement. Therefore, there is a demand for tools that allow educators to create new modifications and customize serious game scenarios, thereby avoiding the fixed-scenario problem and a one-size-fits-all approach. Here, we present and detail our novel virtual serious games authoring platform called Moirai, which uses a no-code approach to allow educators who may have limited (or no) prior programming experience to use a diagram-based interface to author and customize serious games focused on decision and communication skills development. We describe two case studies, each of which involved creating serious games for nursing education (one for mental health education and the other for internationally educated nurses). The usability of both games was qualitatively evaluated using the system usability scale (SUS) questionnaire, and both games achieved above-average usability scores.
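For context on the SUS result mentioned above, the questionnaire's standard scoring maps ten 1-5 item ratings onto a 0-100 scale; this is the conventional SUS procedure, not a detail taken from the paper. A small sketch:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring (0-100).

    responses: list of 10 item ratings on a 1-5 scale, in questionnaire order.
    Odd-numbered items are positively worded (contribute rating - 1);
    even-numbered items are negatively worded (contribute 5 - rating).
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```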
Full article

Open Access Review
Virtual Reality Induced Symptoms and Effects: Concerns, Causes, Assessment & Mitigation
Virtual Worlds 2022, 1(2), 130-146; https://doi.org/10.3390/virtualworlds1020008 - 01 Nov 2022
Cited by 2
Abstract
The utilization of commercially available virtual reality (VR) environments has increased over the last decade. Motion sickness, which is commonly reported while using VR devices, is still prevalent and reported at a higher than acceptable rate. Virtual reality induced symptoms and effects (VRISE) are considered the largest barrier to widespread usage. Current measurement methods have uniform use across studies but are subjective and are not designed for VR. VRISE and other motion sickness symptom profiles are similar but not exactly the same. Common objective physiological and biomechanical as well as subjective perception measures correlated with VRISE should be used instead. Many physiological, biomechanical, and subjective changes evoked by VRISE have been identified. There is great difficulty in claiming that these changes are directly caused by VRISE due to numerous other factors that are known to alter these variables' resting states. Several theories exist regarding the causation of VRISE. Among these is the sensory conflict theory, in which VRISE results from differences between expected and actual sensory input. Reducing these conflicts has been shown to decrease VRISE. User characteristics contributing to VRISE severity have shown inconsistent results. Guidelines on field of view (FOV), resolution, and frame rate have been developed to prevent VRISE. Motion-to-photon latency also contributes to these symptoms and effects. Intensity of content is positively correlated with VRISE, as are the speed of navigation and oscillatory displays. Longer durations of immersion show greater VRISE, though adaptation has been shown to occur over multiple immersions. The duration of post-immersion VRISE is related to user history of motion sickness and speed of onset. Cognitive changes from VRISE include decrements in reaction time and eye-hand coordination. Methods to lower VRISE have shown some success. Postural control presents a potential objective variable for predicting and monitoring VRISE intensity. Further research is needed to lower the rate of VRISE symptom occurrence, which remains a limitation on widespread use.
Full article
Open Access Article
Advances in Metaverse Investigation: Streams of Research and Future Agenda
Virtual Worlds 2022, 1(2), 103-129; https://doi.org/10.3390/virtualworlds1020007 - 29 Oct 2022
Cited by 7
Abstract
The metaverse has increasingly attracted the attention of academics and practitioners, who attempt to better understand its theoretical foundations and business application areas. This paper provides an overarching picture of what has already been studied and investigated in academic research on the metaverse. It adopts a systematic literature review and a bibliometric analysis. The study designs a thematic map of the metaverse research. It proposes four streams of research (metaverse technologies, metaverse areas of application, marketing and consumer behaviour, and sustainability) for future investigation, which academics and practitioners should explore. It also contributes towards a systematic advancement of knowledge in the field, provides some preliminary theoretical contributions by shedding light on future research avenues, and offers insights for business.
Full article

Open Access Review
Cultural Heritage in Fully Immersive Virtual Reality
Virtual Worlds 2022, 1(1), 82-102; https://doi.org/10.3390/virtualworlds1010006 - 14 Sep 2022
Cited by 5
Abstract
Fully immersive virtual reality (VR) applications have modified the way people access cultural heritage, from visiting virtual museums containing large collections of paintings to visiting ancient buildings. In this paper, we review the software currently available that deals with cultural heritage in fully immersive virtual reality. The review goes beyond technologies that were available prior to virtual reality headsets, at a time when “virtual” was simply a synonym for the application of digital technologies to cultural heritage. We group these applications depending on their content, from generic art galleries and museums to applications that focus on a single artwork or a single artist. Furthermore, we review different ways to assess the performance of such applications with workload, usability, flow, and potential VR symptoms surveys. This paper highlights the progress in the implementation of applications that provide immersive learning experiences related to cultural heritage, from 360° images to photogrammetry and 3D models. The paper shows the discrepancy between the software available to the general audience on various VR headsets and scholarly activities dealing with cultural heritage in VR.
Full article

Open Access Article
Evaluating the Effect of Multi-Sensory Stimulation on Startle Response Using the Virtual Reality Locomotion Interface MS.TPAWT
Virtual Worlds 2022, 1(1), 62-81; https://doi.org/10.3390/virtualworlds1010005 - 09 Sep 2022
Abstract
The purpose of the study was to understand how various aspects of virtual reality and extended reality, specifically environmental displays (e.g., wind, heat, smell, and moisture), audio, and graphics, can be exploited to cause a good startle, or to prevent one. The TreadPort Active Wind Tunnel (TPAWT) was modified to include several haptic environmental displays: heat, wind, olfactory, and mist, resulting in the Multi-Sensory TreadPort Active Wind Tunnel (MS.TPAWT). In total, 120 participants played a VR game that contained three startling situations. Audio and environmental effects were varied in a two-way analysis of variance (ANOVA) study. Muscle activity levels of the participants' orbicularis oculi, sternocleidomastoid, and trapezius were measured using electromyography (EMG). Participants then answered surveys on their perceived levels of startle for each situation. We show that adjusting audio and environmental levels can alter participants' physiological and psychological responses to the virtual world. Notably, audio is key for eliciting stronger responses and perceptions of the startling experiences, but environmental displays can be used to either amplify those responses or diminish them. The results also highlight that traditional eye muscle response measurements of startles may not be valid for measuring startle responses to strong environmental displays, suggesting that alternate muscle groups should be used. The study's implications, in practice, will allow designers to control participants' responses by adjusting these settings.
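The two-way ANOVA described above crosses audio level with environmental-effect level and tests their main effects and interaction on the EMG response. A hedged sketch with statsmodels (the column names, factor levels, and synthetic data are placeholders, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in data: 120 trials crossing audio x environmental effects
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "audio": np.repeat(["low", "high"], 60),
    "environment": np.tile(np.repeat(["off", "on"], 30), 2),
    "emg": rng.normal(1.0, 0.2, 120),
})
# Pretend audio raises the response, echoing the abstract's finding
df.loc[df["audio"] == "high", "emg"] += 0.3

model = ols("emg ~ C(audio) * C(environment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```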
Full article

Open Access Article
User Identification Utilizing Minimal Eye-Gaze Features in Virtual Reality Applications
Virtual Worlds 2022, 1(1), 42-61; https://doi.org/10.3390/virtualworlds1010004 - 06 Sep 2022
Cited by 1
Abstract
Emerging Virtual Reality (VR) displays with embedded eye trackers are currently becoming commodity hardware (e.g., HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system to identify users utilizing minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models: convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy with only six simple eye-gaze features. We discuss our results, their implications on security and privacy, and the limitations of our work.
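As a hedged sketch of the classical-ML side of such a pipeline (the gaze-feature values, number of users, and model settings below are illustrative assumptions, not the paper's setup), identifying users from a six-dimensional gaze-feature vector with scikit-learn could look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data: 5 users, 40 samples each, 6 gaze features per sample
rng = np.random.default_rng(42)
n_users, samples_per_user, n_features = 5, 40, 6
X = np.vstack([rng.normal(loc=u, scale=0.5, size=(samples_per_user, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)

for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name} identification accuracy: {acc:.3f}")
```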
Full article

Open Access Article
Applications of Digital Twins in the Healthcare Industry: Case Review of an IoT-Enabled Remote Technology in Dentistry
Virtual Worlds 2022, 1(1), 20-41; https://doi.org/10.3390/virtualworlds1010003 - 02 Sep 2022
Cited by 1
Abstract
Industries are increasing their adoption of digital twins for their unprecedented ability to control physical entities and help manage complex systems by integrating multiple technologies. Recently, the dental industry has seen several technological advancements, but it is uncertain if dental institutions are making an effort to adopt digital twins in their education. In this work, we employ a mixed-method approach to investigate the added value of digital twins for remote learning in the dental industry. We examine the extent of digital twin adoption by dental institutions for remote education, shed light on the concepts and benefits it brings, and provide an application-based roadmap for more extended adoption. We report a review of digital twins in the healthcare industry, followed by identifying use cases and comparing them with use cases in other disciplines. We compare reported benefits, the extent of research, and the level of digital twin adoption by industries. We distill the digital twin characteristics that can add value to the dental industry from the examined digital twin applications in remote learning and other disciplines. Then, inspired by digital twin applications in different fields, we propose a roadmap for digital twins in remote education for dental institutes, consisting of examples of growing complexity. We conclude this paper by identifying the distinctive characteristics of dental digital twins for remote learning.
Full article

Open Access Editorial
Virtual Worlds: A New Open Access Journal of Virtual Reality, Augmented and Mixed Reality Technologies, and Their Uses
Virtual Worlds 2022, 1(1), 18-19; https://doi.org/10.3390/virtualworlds1010002 - 10 Aug 2022
Abstract
Books, movies, and performances create virtual worlds [...]
Full article
Open Access Feature Paper Article
Exploring the Perception of Additional Information Content in 360° 3D VR Video for Teaching and Learning
Virtual Worlds 2022, 1(1), 1-17; https://doi.org/10.3390/virtualworlds1010001 - 13 May 2022
Cited by 4
Abstract
360° 3D virtual reality (VR) video is used in education to bring immersive environments into a teaching space for learners to experience in a safe and controlled way. Within 360° 3D VR video, informational elements such as additional text, labelling and directions can be easily incorporated to augment such content. Despite this, the usefulness of this information for learners has not yet been determined. This article presents a study which aims to explore the usefulness of labelling and text within 360° stereoscopic 3D VR video content and how this contributes to the user experience. Postgraduate students from a university in the UK (n = 30) were invited to take part in the study to evaluate VR video content augmented with labels and summary text or neither of these elements. Interconnected themes associated with the user experience were identified from semi-structured interviews. From this, it was established that the incorporation of informational elements resulted in the expansion of the field of view experienced by participants. This “augmented signposting” may facilitate a greater spatial awareness of the virtual environment. Four recommendations for educators developing 360° stereoscopic 3D VR video content are presented.
Full article

Topics
Topic in Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds: Simulations and Applications of Augmented and Virtual Reality. Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang. Deadline: 20 December 2023

Special Issues
Special Issue in Virtual Worlds: Digital Twins in Cultural Heritage. Guest Editor: Franco Niccolucci. Deadline: 31 August 2023