Search Results (98)

Search Parameters:
Keywords = 360° virtual reality

24 pages, 3559 KiB  
Article
Advancing Online Road Safety Education: A Gamified Approach for Secondary School Students in Belgium
by Imran Nawaz, Ariane Cuenen, Geert Wets, Roeland Paul and Davy Janssens
Appl. Sci. 2025, 15(15), 8557; https://doi.org/10.3390/app15158557 - 1 Aug 2025
Viewed by 194
Abstract
Road traffic accidents are a leading cause of injury and death among adolescents, making road safety education crucial. This study assesses the performance of and users’ opinions on the Route 2 School (R2S) traffic safety education program, designed for secondary school students (13–17 years) in Belgium. The program incorporates gamified e-learning modules containing, among other elements, podcasts, interactive 360° visuals, and virtual reality (VR), to enhance traffic knowledge, situation awareness, risk detection, and risk management. This study was conducted across several cities and municipalities within Belgium. More than 600 students from school years 3 to 6 completed the platform, and more than 200 of them filled in a comprehensive questionnaire providing detailed feedback on platform usability, preferences, and behavioral risk assessments. The results revealed shortcomings in traffic knowledge and skills, particularly among older students. Gender-based analysis indicated no significant performance differences overall, though females performed better in risk management and males in risk detection. Furthermore, students from cities outperformed those from municipalities. Feedback on the R2S platform indicated high usability and engagement, with VR-based simulations receiving the most positive reception. In addition, the results highlighted that secondary school students are a high-risk group for distraction and red-light violations as cyclists and pedestrians. This study demonstrates the importance of gamified, technology-enhanced road safety education while underscoring the need for module-specific improvements and regional customization. The findings support the broader application of e-learning methodologies for sustainable, behavior-oriented traffic safety education targeting adolescents. Full article
(This article belongs to the Special Issue Technology Enhanced and Mobile Learning: Innovations and Applications)

12 pages, 3315 KiB  
Article
NeRF-RE: An Improved Neural Radiance Field Model Based on Object Removal and Efficient Reconstruction
by Ziyang Li, Yongjian Huai, Qingkuo Meng and Shiquan Dong
Information 2025, 16(8), 654; https://doi.org/10.3390/info16080654 - 31 Jul 2025
Viewed by 145
Abstract
High-quality green gardens can markedly enhance the quality of life and mental well-being of their users. However, health and lifestyle constraints make it difficult for people to enjoy urban gardens, and traditional methods struggle to offer the high-fidelity experiences they need. This study introduces a 3D scene reconstruction and rendering strategy based on implicit neural representation through the efficient and removable neural radiance field model (NeRF-RE). Leveraging neural radiance fields (NeRF), the model incorporates a multi-resolution hash grid and proposal network to improve training efficiency and modeling accuracy, while integrating a segment-anything model to safeguard public privacy. As a test case, we take the crabapple tree, which is extensively utilized in urban garden design across temperate regions of the Northern Hemisphere. A dataset comprising 660 images of crabapple trees exhibiting three distinct geometric forms is collected to assess the NeRF-RE model’s performance. The results demonstrated that the ‘harvest gold’ crabapple scene had the highest reconstruction accuracy, with PSNR, LPIPS and SSIM of 24.80 dB, 0.34 and 0.74, respectively. Compared to the Mip-NeRF 360 model, the NeRF-RE model not only showed an up to 21-fold increase in training efficiency for three types of crabapple trees, but also exhibited a less pronounced impact of dataset size on reconstruction accuracy. This study reconstructs real scenes with high fidelity using virtual reality technology. It not only facilitates people’s personal enjoyment of the beauty of natural gardens at home, but also contributes to the publicity and promotion of urban landscapes. Full article
(This article belongs to the Special Issue Extended Reality and Its Applications)
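The PSNR and SSIM values quoted above are standard image-similarity metrics computed between a rendered view and the corresponding ground-truth photograph. The sketch below shows one common way to compute them with scikit-image; the file names are placeholders rather than files from the paper, and LPIPS is omitted because it requires a pretrained perceptual network.

```python
# Hedged sketch: compute PSNR and SSIM between a NeRF-rendered view and the
# ground-truth photograph, as in the evaluation reported above.
# File names are illustrative placeholders, not assets from the paper.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = io.imread("ground_truth_view.png").astype(np.float64) / 255.0
render = io.imread("nerf_rendered_view.png").astype(np.float64) / 255.0

psnr = peak_signal_noise_ratio(gt, render, data_range=1.0)
# channel_axis=-1 tells SSIM that the last axis holds the RGB channels.
ssim = structural_similarity(gt, render, data_range=1.0, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.2f}")
```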

16 pages, 9522 KiB  
Article
Tabonuco and Plantation Forests at Higher Elevations Are More Vulnerable to Hurricane Damage and Slower to Recover in Southeastern Puerto Rico
by Michael W. Caslin, Madhusudan Katti, Stacy A. C. Nelson and Thrity Vakil
Land 2025, 14(7), 1324; https://doi.org/10.3390/land14071324 - 21 Jun 2025
Viewed by 1412
Abstract
Hurricanes are major drivers of forest structure in the Caribbean. In 2017, Hurricane Maria caused substantial damage to Puerto Rico’s forests. We studied forest structure variation across 75 sites at Las Casas de la Selva, a sustainable forest plantation in Patillas, Puerto Rico, seven years after Hurricane Maria hit the property. At each site we analyzed 360° photos in a 3D VR headset to quantify the vertical structure and transformed them into hemispherical images to quantify canopy closure and ground cover. We also computed the Vertical Habitat Diversity Index (VHDI) from the amount of foliage in four strata: herbaceous, shrub, understory, and canopy. Using the Local Bivariate Relationship tool in ArcGIS Pro, we analyzed the relationship between forest recovery (vertical structure, canopy closure, and ground cover) and damage. Likewise, we analyzed the effects of elevation, slope, and aspect on damage, canopy closure, and vertical forest structure. We found that canopy closure decreases with increasing elevation and increases with the amount of damage. Higher elevations show a greater amount of damage even seven years post hurricane. We conclude that trees in the mixed tabonuco/plantation forest are more susceptible to hurricanes at higher elevations. The results have implications for plantation forest management under climate-change-driven higher intensity hurricane regimes. Full article
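The abstract does not give the VHDI formula. A common way to express vertical habitat (foliage height) diversity is a Shannon-style index over the proportion of foliage in each stratum; the sketch below makes that assumption and is illustrative only, with arbitrary foliage values.

```python
# Hedged sketch: a Shannon-style vertical habitat diversity index over the four
# strata named above. The Shannon formulation and the example values are
# assumptions; the abstract does not state the exact VHDI formula.
import math

def vhdi(foliage_by_stratum: dict[str, float]) -> float:
    """H' = -sum(p_i * ln(p_i)) over strata with non-zero foliage."""
    total = sum(foliage_by_stratum.values())
    if total == 0:
        return 0.0
    props = [v / total for v in foliage_by_stratum.values() if v > 0]
    return -sum(p * math.log(p) for p in props)

# Illustrative foliage scores for one site (arbitrary units).
site = {"herbaceous": 0.2, "shrub": 0.3, "understory": 0.25, "canopy": 0.25}
print(f"VHDI: {vhdi(site):.3f}")  # maximum is ln(4) ≈ 1.386 when strata are equal
```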

27 pages, 1880 KiB  
Article
UAV-Enabled Video Streaming Architecture for Urban Air Mobility: A 6G-Based Approach Toward Low-Altitude 3D Transportation
by Liang-Chun Chen, Chenn-Jung Huang, Yu-Sen Cheng, Ken-Wen Hu and Mei-En Jian
Drones 2025, 9(6), 448; https://doi.org/10.3390/drones9060448 - 18 Jun 2025
Viewed by 687
Abstract
As urban populations expand and congestion intensifies, traditional ground transportation struggles to satisfy escalating mobility demands. Unmanned Electric Vertical Take-Off and Landing (eVTOL) aircraft, as a key enabler of Urban Air Mobility (UAM), leverage low-altitude airspace to alleviate ground traffic while offering environmentally sustainable solutions. However, supporting high bandwidth, real-time video applications, such as Virtual Reality (VR), Augmented Reality (AR), and 360° streaming, remains a major challenge, particularly within bandwidth-constrained metropolitan regions. This study proposes a novel Unmanned Aerial Vehicle (UAV)-enabled video streaming architecture that integrates 6G wireless technologies with intelligent routing strategies across cooperative airborne nodes, including unmanned eVTOLs and High-Altitude Platform Systems (HAPS). By relaying video data from low-congestion ground base stations to high-demand urban zones via autonomous aerial relays, the proposed system enhances spectrum utilization and improves streaming stability. Simulation results validate the framework’s capability to support immersive media applications in next-generation autonomous air mobility systems, aligning with the vision of scalable, resilient 3D transportation infrastructure. Full article

22 pages, 376 KiB  
Article
Impact of a Single Virtual Reality Relaxation Session on Mental-Health Outcomes in Frontline Workers on Duty During the COVID-19 Pandemic: A Preliminary Study
by Sara Faria, Sílvia Monteiro Fonseca, António Marques and Cristina Queirós
Healthcare 2025, 13(12), 1434; https://doi.org/10.3390/healthcare13121434 - 16 Jun 2025
Viewed by 918
Abstract
Background/Objectives: The COVID-19 pandemic affected frontline workers’ mental health, including healthcare workers, firefighters, and police officers, increasing the need for effective interventions. This study focuses on the pandemic’s psychological impact, perceived stress, depression/anxiety symptoms, and resilience, examining if a brief virtual reality (VR)–based relaxation session could reduce psychological symptoms. Methods: In this preliminary study with data collected in 2025 from frontline workers who had served during the acute phase of the COVID-19 pandemic, 54 frontline workers completed a baseline assessment of the perceived psychological impact of the COVID-19 pandemic, general perceived well-being, perceived stress (PSS-4), anxiety/depression (PHQ-4) and resilience (RS-25). Each participant then engaged in a 10-min immersive VR relaxation session featuring a calming 360° nature environment with audio guidance, after which questionnaires were re-administered. Paired samples t-tests and repeated-measures ANOVA evaluated pre-/post-session differences, and a hierarchical multiple linear regression model tested predictors of the change in stress. Results: Pre-session results showed moderate perceived stress and resilience and low depression/anxiety. Occupation groups varied in baseline stress, mostly reporting negative pandemic psychological effects. After VR, perceived well-being increased significantly and stress decreased, whereas depression/anxiety changes were nonsignificant. Repeated-measures ANOVA revealed a main effect of time on stress (p = 0.003) without occupation-by-time interaction (p = 0.246), indicating all occupational groups benefited similarly from the VR session. Hierarchical regression indicated baseline depression and higher perceived pandemic-related harm independently predicted greater stress reduction, whereas resilience and baseline anxiety showed no statistically significant effects. Conclusions: A single VR relaxation session lowered perceived stress among frontline workers, particularly those reporting higher baseline depression or pandemic-related burden. Limitations include the absence of a control group. Results support VR-based interventions as feasible, rapidly deployable tools for high-stress settings. Future research should assess longer-term outcomes, compare VR to alternative interventions, and consider multi-session protocols. Full article
(This article belongs to the Special Issue Depression, Anxiety and Emotional Problems Among Healthcare Workers)
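The pre-/post-session comparisons described above are paired-samples tests. A minimal sketch of such a comparison on pre- and post-session stress scores is shown below using scipy; the arrays are illustrative values, not data from the study.

```python
# Hedged sketch: paired-samples t-test on pre- vs post-session perceived-stress
# scores, mirroring the analysis described above. Values are illustrative only.
import numpy as np
from scipy import stats

pre = np.array([8, 7, 9, 6, 8, 7, 10, 9, 6, 7])   # stress scores before the VR session
post = np.array([6, 6, 8, 5, 7, 6, 8, 8, 5, 6])   # stress scores after the VR session

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean change: {np.mean(post - pre):.2f}")
```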
28 pages, 2542 KiB  
Article
Evaluating the Use of 360-Degree Video in Education
by Sam Kavanagh, Andrew Luxton-Reilly, Burkhard C. Wünsche, Beryl Plimmer and Sebastian Dunn
Electronics 2025, 14(9), 1830; https://doi.org/10.3390/electronics14091830 - 29 Apr 2025
Cited by 2 | Viewed by 705
Abstract
Virtual reality (VR) has existed in the realm of education for over half a century; however, it has never achieved widespread adoption. This was traditionally attributed to costs and usability problems associated with these technologies, but a new generation of consumer VR headsets has helped mitigate these issues to a large degree. Arguably, the greater barrier is now the overhead involved in creating educational VR content, the process of which has remained largely unchanged. In this paper, we investigate the use of 360 video as an alternative way of producing educational VR content with a much lower barrier to entry. We report on the differences in user experience between 360 and standard desktop video. We also compare the short- and long-term learning retention of tertiary students who viewed the same video recordings but watched them in either 360 or standard video formats. Our results indicate that students retain an equal amount of information from either video format but perceive 360 video to be more enjoyable and engaging, and would prefer to use it as an additional learning resource in their coursework. Full article
(This article belongs to the Special Issue Augmented Reality, Virtual Reality, and 3D Reconstruction)

13 pages, 1078 KiB  
Article
Understanding Cybersickness and Presence in Seated VR: A Foundation for Exploring Therapeutic Applications of Immersive Virtual Environments
by Witold Pawełczyk, Dorota Olejarz, Zofia Gaweł, Magdalena Merta, Aleksandra Nowakowska, Magdalena Nowak, Anna Rutkowska, Ladislav Batalik and Sebastian Rutkowski
J. Clin. Med. 2025, 14(8), 2718; https://doi.org/10.3390/jcm14082718 - 15 Apr 2025
Cited by 1 | Viewed by 1194
Abstract
Background/Objectives: To assess the spatial presence and impact of an immersive virtual reality (VR) walk on symptoms of cybersickness, emotions, and participant engagement, with the aim of providing insights applicable to future therapeutic VR interventions for individuals with limited mobility. Methods: The experiment involved 30 healthy individuals who used VR headsets while seated on chairs to experience a 360° virtual tour of the Venice Canals in Los Angeles. The effect of immersion was evaluated using the Virtual Reality Sickness Questionnaire (VRSQ) to measure cybersickness symptoms, the International Positive and Negative Affect Schedule-Short Form (I-PANAS-SF) to assess emotions, the Spatial Presence Experience Scale (SPES) to evaluate spatial presence, and the Flow State Scale (FSS) to quantify the flow state. Results: The results indicated that the virtual walk elicited both positive and negative reactions. Increases in eye strain (+0.66), general discomfort (+0.6), and headache (+0.43) were observed on the VRSQ scale. Despite experiencing nausea and oculomotor symptoms, participants reported a high level of flow (range of scale items from 3.47 to 3.70), suggesting a beneficial impact of immersion on their well-being. Furthermore, the analysis of the I-PANAS-SF results revealed a predominance of positive emotions, indicating a favorable perception of the experience. However, the SPES scores exhibited variability in the perception of spatial presence (mean spatial presence score 3.74, SD 2.06), likely influenced by the characteristics of the visual material used. Conclusions: Overall, despite the potential risk of cybersickness symptoms, the immersive VR walk as a seated passive exploration still promoted feelings of satisfaction and fulfillment, allowing the participants to actively engage with the virtual environment. These findings suggest that seated VR experiences hold promise as a tool for promoting well-being, but further research is needed to address cybersickness and optimize VR content for therapeutic use in populations with limited mobility. Full article

18 pages, 2170 KiB  
Article
Multiuser Access Control for 360° VR Video Service Systems Exploiting Proactive Caching and Mobile Edge Computing
by Qiyan Weng, Yijing Tang and Hangguan Shan
Appl. Sci. 2025, 15(8), 4201; https://doi.org/10.3390/app15084201 - 10 Apr 2025
Viewed by 433
Abstract
Mobile virtual reality (VR) is considered a killer application for future mobile broadband networks. However, for cloud VR, the long content delivery path and time-varying transmission rate from the content provider’s cloud VR server to the users make the quality-of-service (QoS) provisioning for VR users very challenging. To this end, in this paper, we design a 360° VR video service system that leverages proactive caching and mobile edge computing (MEC) technologies. Furthermore, we propose a multiuser access control algorithm tailored to the system, based on analytical results of the delay violation probability, which is derived considering the impact of both the multi-hop wired network from the cloud VR server to the MEC server and the wireless network from the MEC server-connected base station (BS) to the users. The proposed access control algorithm aims to maximize the number of served users by exploiting real-time and dynamic network resources, while ensuring that the end-to-end delay violation probability for each accessed user remains within an acceptable limit. Simulation results are presented to analyze the impact of diverse system parameters on both the user access probability and the delay violation probability of the accessed users, demonstrating the effectiveness of the proposed multiuser access control algorithm. The simulations also show that increasing the computing capacity of the MEC server or the communication bandwidth of the BS is among the most effective ways to accommodate more users in the system. In the tested scenarios, when the MEC server’s computing capacity (the BS’s bandwidth) increases from 0.8 Tbps (50 MHz) to 3.2 Tbps (150 MHz), the user access probability improves on average by 92.53% (85.49%). Full article
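The algorithm described above admits users only while every admitted user's end-to-end delay violation probability stays within an acceptable limit; the exact analytical expressions are derived in the paper. The sketch below is only a schematic greedy admission loop under that kind of constraint, with a placeholder violation-probability function, and is not the authors' algorithm.

```python
# Hedged sketch: greedy admission loop that admits users while every admitted
# user's estimated delay-violation probability stays under a threshold epsilon.
# `violation_probability` is a placeholder; the paper derives it analytically
# from the wired (cloud-to-MEC) and wireless (BS-to-user) segments.
from typing import Callable

def admit_users(candidates: list[str],
                violation_probability: Callable[[list[str], str], float],
                epsilon: float) -> list[str]:
    admitted: list[str] = []
    for user in candidates:
        trial = admitted + [user]
        # Admit the newcomer only if all users in the trial set, including those
        # already admitted, still meet the delay-violation constraint.
        if all(violation_probability(trial, u) <= epsilon for u in trial):
            admitted.append(user)
    return admitted

# Toy placeholder: violation probability grows with the number of users sharing
# bandwidth and compute; real values would come from the paper's analysis.
toy_model = lambda trial, u: 0.01 * len(trial)
print(admit_users([f"user{i}" for i in range(12)], toy_model, epsilon=0.05))
```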

21 pages, 3928 KiB  
Article
Emotion Analysis AI Model for Sensing Architecture Using EEG
by Seung-Yeul Ji, Mi-Kyoung Kim and Han-Jong Jun
Appl. Sci. 2025, 15(5), 2742; https://doi.org/10.3390/app15052742 - 4 Mar 2025
Viewed by 3051
Abstract
The rapid advancement of artificial intelligence (AI) has spurred innovation across various domains—information technology, medicine, education, and the social sciences—and is likewise creating new opportunities in architecture for understanding human–environment interactions. This study aims to develop a fine-tuned AI model that leverages electroencephalography (EEG) data to analyse users’ emotional states in real time and apply these insights to architectural spaces. Specifically, the SEED dataset—an EEG-based emotion recognition resource provided by the BCMI laboratory at Shanghai Jiao Tong University—was employed to fine-tune the ChatGPT model for classifying three emotional states (positive, neutral, and negative). Experimental results demonstrate the model’s effectiveness in differentiating these states based on EEG signals, although the limited number of participants confines our findings to a proof of concept. Furthermore, to assess the feasibility of the proposed approach in real architectural contexts, we integrated the model into a 360° virtual reality (VR) setting, where it showed promise for real-time emotion recognition and adaptive design. By combining AI-driven biometric data analysis with user-centred architectural design, this study aims to foster sustainable built environments that respond dynamically to human emotions. The results underscore the potential of EEG-based emotion recognition for enhancing occupant experiences and provide foundational insights for future investigations into human–space interactions. Full article

20 pages, 2465 KiB  
Article
The Ecology of Climate Change: Using Virtual Reality to Share, Experience, and Cultivate Local and Global Perspectives
by Victor Daniel Carmona-Galindo, Maryory Andrea Velado-Cano and Anna Maria Groat-Carmona
Educ. Sci. 2025, 15(3), 290; https://doi.org/10.3390/educsci15030290 - 26 Feb 2025
Cited by 1 | Viewed by 1206
Abstract
The global challenge of climate change demands innovative, inclusive, and experiential education that fosters ecological literacy, behavioral change, and climate advocacy. This study explores a cross-cultural collaboration between two undergraduate ecology courses—one at the University of La Verne (ULV) in California and the other at the Universidad Centroamericana José Simeón Cañas (UCA) in El Salvador—that employed 360° virtual reality (VR) photosphere photographs to investigate climate change impacts. Students documented local ecological phenomena, such as drought and habitat loss, and shared insights with international peers, facilitating a rich exchange of perspectives across biomes. Generative AI tools like ChatGPT were utilized to overcome language barriers, enabling equitable participation and enhancing cross-cultural communication. The findings highlight VR’s transformative role in helping students visualize and communicate complex ecological concepts while fostering empathy, emotional engagement, and agency as climate advocates. Institutional and curricular factors shaping the integration of VR-based approaches are discussed, along with their potential to drive behavioral shifts and promote global engagement. This study demonstrates that immersive technologies, combined with collaborative learning, provide a powerful framework for bridging geographic and cultural divides, equipping students with the tools and perspectives needed to address the critical global challenges posed by climate change. Full article
19 pages, 33194 KiB  
Article
A 3D-Printed, High-Fidelity Pelvis Training Model: Cookbook Instructions and First Experience
by Radu Claudiu Elisei, Florin Graur, Amir Szold, Răzvan Couți, Sever Cãlin Moldovan, Emil Moiş, Călin Popa, Doina Pisla, Calin Vaida, Paul Tucan and Nadim Al-Hajjar
J. Clin. Med. 2024, 13(21), 6416; https://doi.org/10.3390/jcm13216416 - 26 Oct 2024
Viewed by 1585
Abstract
Background: Since laparoscopic surgery became the gold standard for colorectal procedures, specific skills are required to achieve good outcomes. The best way to acquire basic and advanced skills and reach the learning curve plateau is by using dedicated simulators: box-trainers, video-trainers and virtual reality simulators. Laparoscopic skills training outside the operating room is cost-beneficial, faster and safer, and does not harm the patient. When compared to box-trainers, virtual reality simulators and cadaver models have no additional benefits. Several laparoscopic trainers available on the market as well as homemade box and video-trainers, most of them using plastic boxes and standard webcams, were described in the literature. The majority of them involve training on a flat surface without any anatomical environment. In addition to their demonstrated benefits, box-trainers which add anatomic details can improve the training quality and skills development of surgeons. Methods: We created a 3D-printed anatomic pelvi-trainer which offers a real-size narrow pelvic space environment for training. The model was created starting with a CT-scan performed on a female pelvis from the Anatomy Museum (Cluj-Napoca University of Medicine and Pharmacy, Romania), using Invesalius 3 software (Centro de Tecnologia da informação Renato Archer CTI, InVesalius open-source software, Campinas, Brazil) for segmentation, Fusion 360 with Netfabb software (Autodesk software company, Fusion 360 with Netfabb, San Francisco, CA, USA) for 3D modeling and a FDM technology 3D printer (Stratasys 3D printing company, Fortus 380mc 3D printer, Minneapolis, MN, USA). In addition, a metal mold for casting silicone valves was made for camera and endoscopic instruments ports. The trainer was tested and compared using a laparoscopic camera, a standard full HD webcam and “V-Box” (INTECH—Innovative Training Technologies, Milano, Italia), a dedicated hard paper box. The pelvi-trainer was tested by 33 surgeons with different qualifications and expertise. Results: We made a complete box-trainer with a versatile 3D-printed pelvi-trainer inside, designed for a wide range of basic and advanced laparoscopic skills training in the narrow pelvic space. We assessed the feedback of 33 surgeons regarding their experience using the anatomic 3D-printed pelvi-trainer for laparoscopic surgery training in the narrow pelvic space. Each surgeon tested the pelvi-trainer in three different setups: using a laparoscopic camera, using a webcam connected to a laptop and a “V-BOX” hard paper box. In the experiments that were performed, each participant completed a questionnaire regarding his/her experience using the pelvi-trainer. The results were positive, validating the device as a valid tool for training. Conclusions: We validated the anatomic pelvi-trainer designed by our team as a valuable alternative for basic and advanced laparoscopic surgery training outside the operating room for pelvic organs procedures, proving that it supports a much faster learning curve for colorectal procedures without harming the patients. Full article
(This article belongs to the Special Issue Recent Advances in the Management of Colorectal Cancer)

21 pages, 3805 KiB  
Article
Hospital Web Quality Multicriteria Analysis Model (HWQ): Development and Application Test in Spanish Hospitals
by Santiago Tejedor and Luis M. Romero-Rodríguez
Big Data Cogn. Comput. 2024, 8(10), 131; https://doi.org/10.3390/bdcc8100131 - 8 Oct 2024
Viewed by 1151
Abstract
The Hospital Web Quality Multicriteria Analysis Model (HWQ) is constructed, designed, and validated in this research. For this purpose, we examined the web quality analysis models specialized in hospitals and health centers through a literature review and the most current taxonomies to analyze digital media. Based on the benchmarking and walkthrough methods, the analysis model was built and validated by a panel of experts (X = 3.54, CVI = 0.88, Score Σ = 45.58). To test its applicability and reliability, the model was pilot-tested on the websites of the ten public and private hospitals with the best reputation in Spain in 2022, according to the Merco Sanitario ranking. The results showed very similar web structures divided by specific proposals or sections of some centers. In this regard, this study identifies a general communication proposal in hospitals that does not adapt to the guidelines of screen-mediated communication, as well as a lack of personalization and disruptive storytelling ideation. In addition, the work concludes that Spanish hospitals, for the moment, have not opted for formats and technological developments derived from the possibilities of gamified content, 360° immersion, Virtual Reality (VR), or Augmented Reality (AR). Full article

16 pages, 1822 KiB  
Article
A Machine Learning Approach to Classifying EEG Data Collected with or without Haptic Feedback during a Simulated Drilling Task
by Michael S. Ramirez Campos, Heather S. McCracken, Alvaro Uribe-Quevedo, Brianna L. Grant, Paul C. Yielder and Bernadette A. Murphy
Brain Sci. 2024, 14(9), 894; https://doi.org/10.3390/brainsci14090894 - 31 Aug 2024
Cited by 1 | Viewed by 2428
Abstract
Artificial Intelligence (AI), computer simulations, and virtual reality (VR) are increasingly becoming accessible tools that can be leveraged to implement training protocols and educational resources. Typical assessment tools related to sensory and neural processing associated with task performance in virtual environments often rely on self-reported surveys, unlike electroencephalography (EEG), which is often used to compare the effects of different types of sensory feedback (e.g., auditory, visual, and haptic) in simulation environments in an objective manner. However, it can be challenging to know which aspects of the EEG signal represent the impact of different types of sensory feedback on neural processing. Machine learning approaches offer a promising direction for identifying EEG signal features that differentiate the impact of different types of sensory feedback during simulation training. For the current study, machine learning techniques were applied to differentiate neural circuitry associated with haptic and non-haptic feedback in a simulated drilling task. Nine EEG channels were selected and analyzed, extracting different time-domain, frequency-domain, and nonlinear features, where 360 features were tested (40 features per channel). A feature selection stage identified the most relevant features, including the Hurst exponent of 13–21 Hz, kurtosis of 21–30 Hz, power spectral density of 21–30 Hz, variance of 21–30 Hz, and spectral entropy of 13–21 Hz. Using those five features, trials with haptic feedback were correctly identified from those without haptic feedback with an accuracy exceeding 90%, increasing to 99% when using 10 features. These results show promise for the future application of machine learning approaches to predict the impact of haptic feedback on neural processing during VR protocols involving drilling tasks, which can inform future applications of VR and simulation for occupational skill acquisition. Full article
(This article belongs to the Special Issue Deep into the Brain: Artificial Intelligence in Brain Diseases)
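The five selected features listed above are band-limited statistics of the EEG signal. The sketch below illustrates, under assumed sampling rate, band edges, and estimators, how such per-band features could be extracted with scipy (the Hurst exponent is omitted for brevity, and the study's exact implementations may differ).

```python
# Hedged sketch: extract band-limited EEG features of the kind selected above
# (variance, kurtosis, mean power spectral density, spectral entropy) for the
# 13-21 Hz and 21-30 Hz bands. The sampling rate, band edges, and estimators
# are assumptions, not the study's published pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from scipy.stats import kurtosis

FS = 256  # assumed sampling rate in Hz

def band_features(signal: np.ndarray, low: float, high: float) -> dict[str, float]:
    # Band-pass filter the channel, then compute time- and frequency-domain features.
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    x = filtfilt(b, a, signal)
    freqs, psd = welch(x, fs=FS, nperseg=FS)
    band = (freqs >= low) & (freqs <= high)
    p = psd[band] / psd[band].sum()           # normalised in-band spectrum
    return {
        "variance": float(np.var(x)),
        "kurtosis": float(kurtosis(x)),
        "mean_psd": float(psd[band].mean()),
        "spectral_entropy": float(-(p * np.log2(p)).sum()),
    }

# Illustrative ten-second signal for a single channel (noise stands in for EEG).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 10)
print(band_features(eeg, 13, 21))
print(band_features(eeg, 21, 30))
```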

14 pages, 895 KiB  
Article
Virtual Reality-Based Psychoeducation for Dementia Caregivers: The Link between Caregivers’ Characteristics and Their Sense of Presence
by Francesca Morganti, Maria Gattuso, Claudio Singh Solorzano, Cristina Bonomini, Sandra Rosini, Clarissa Ferrari, Michela Pievani and Cristina Festari
Brain Sci. 2024, 14(9), 852; https://doi.org/10.3390/brainsci14090852 - 23 Aug 2024
Viewed by 1882
Abstract
In neuropsychology and clinical psychology, the efficacy of virtual reality (VR) experiences for knowledge acquisition and the potential for modifying conduct are well documented. Consequently, the scope of VR experiences for educational purposes has expanded in the health field in recent years. In this study, we sought to assess the effectiveness of ViveDe in a psychoeducational caregiver program. ViveDe is a VR application that presents users with possible daily life situations from the perspective of individuals with dementia. These situations can be experienced in immersive mode through 360° video. This research aimed to ascertain the associations between the sense of presence that can be achieved in VR and some users’ psychological characteristics, such as distress and empathetic disposition. The study involved 36 informal caregivers of individuals with Alzheimer’s disease. These participants were assessed using scales of anxiety and depression, perceived stress, empathy, and emotional regulation. They were asked to participate in a six-session psychoeducation program conducted online on dementia topics, in addition to experiencing the ViveDe application. The immersive VR sessions enabled the caregivers to directly experience the symptoms of dementia (e.g., spatial disorientation, agnosia, difficulty in problem-solving, and anomia) in everyday and social settings. The results indicated that although the experience in ViveDe (evaluated using the XRPS scale and five questions about emotional attunement) showed efficacy in producing a sense of first-person participation in the symptoms of dementia, further research is needed to confirm this. The structural equation model provided evidence that the characteristics of individuals who enjoy the VR experience play a determining role in the perceived sense of presence, which in turn affects the efficacy of the VR experience as a psychoeducational tool. Further research will be conducted to ascertain the potential role of these elements in conveying change in the caregivers of people with dementia. This will help us study the long-term effectiveness of a large-scale psychoeducation program in VR. Full article

11 pages, 214 KiB  
Article
Virtual Reality and Higher Education Sporting Events: Social Anxiety Perception as an Outcome of VR Simulation
by Kyu-Soo Chung, Chad Goebert and John David Johnson
Behav. Sci. 2024, 14(8), 695; https://doi.org/10.3390/bs14080695 - 10 Aug 2024
Cited by 2 | Viewed by 2266
Abstract
Background: This study investigates the relationship between Virtual Reality Exposure Therapy (VRET) and social anxiety in sport environments. Social anxiety is a mental health condition that manifests as an intense fear of being watched and judged by others and of humiliation. It is important to research potential tools like VRET that could help to mitigate the impact of social anxiety, as people with social anxiety often avoid attending live events due to the venue’s sensory stimuli and the social encounters they anticipate. VR simulation could allow socially anxious individuals to fully experience a sporting event simulation minus the anxiety induced by potential social encounters. VR’s therapeutic effects on social anxiety should be explored in light of existing findings on VR interventions for mental health. Aim: The study aims to assess the impact of exposing socially anxious people to a virtual sporting game by measuring their levels of social anxiety, team identification, and intentions to attend a live sporting event before and after the VR exposure. Given the positive nature of the VR experience, social anxiety is expected to decrease, while team identification and intentions to attend live sporting events are expected to increase because of VR’s ability to develop sport fanship. Method: Fourteen students with symptoms of social anxiety participated in the study. To create the VR simulation stimuli, the researchers used six 360° cameras to record an NCAA Division-I women’s volleyball game. Participants experienced the sporting event via VR simulation. Data were analyzed via one-group pre- and post-comparison. Results and Conclusions: Significant results were found for behavioral intentions of participants after experiencing the simulation. The difference in social anxiety was −0.22, t(13) = 3.47, p < 0.01. After watching the game in VR, the respondents’ social anxiety decreased significantly. The difference in team identification was 0.53, t(13) = −3.56, p < 0.01. Lastly, the difference in event-visit intentions was 0.24, t(13) = −2.35, p < 0.05. Team identification and intentions to visit a sporting event rose significantly after viewing the game in VR. Full article