Search Results (25)

Search Parameters:
Keywords = VR headset display

16 pages, 7057 KiB  
Article
VRBiom: A New Periocular Dataset for Biometric Applications of Head-Mounted Display
by Ketan Kotwal, Ibrahim Ulucan, Gökhan Özbulak, Janani Selliah and Sébastien Marcel
Electronics 2025, 14(9), 1835; https://doi.org/10.3390/electronics14091835 - 30 Apr 2025
Cited by 1 | Viewed by 877
Abstract
With advancements in hardware, high-quality head-mounted display (HMD) devices are being developed by numerous companies, driving increased consumer interest in AR, VR, and MR applications. This proliferation of HMD devices opens up possibilities for a wide range of applications beyond entertainment. Most commercially available HMD devices are equipped with internal inward-facing cameras to record the periocular areas. Given the nature of these devices and captured data, many applications such as biometric authentication and gaze analysis become feasible. To effectively explore the potential of HMDs for these diverse use-cases and to enhance the corresponding techniques, it is essential to have an HMD dataset that captures realistic scenarios. In this work, we present a new dataset of periocular videos acquired using a virtual reality headset called VRBiom. The VRBiom, targeted at biometric applications, consists of 900 short videos acquired from 25 individuals recorded in the NIR spectrum. These 10 s long videos have been captured using the internal tracking cameras of Meta Quest Pro at 72 FPS. To encompass real-world variations, the dataset includes recordings under three gaze conditions: steady, moving, and partially closed eyes. We have also ensured an equal split of recordings without and with glasses to facilitate the analysis of eye-wear. These videos, characterized by non-frontal views of the eye and relatively low spatial resolutions (400×400), can be instrumental in advancing state-of-the-art research across various biometric applications. The VRBiom dataset can be utilized to evaluate, train, or adapt models for biometric use-cases such as iris and/or periocular recognition and associated sub-tasks such as detection and semantic segmentation. In addition to data from real individuals, we have included around 1100 presentation attacks constructed from 92 PA instruments. These PAIs fall into six categories constructed through combinations of print attacks (real and synthetic identities), fake 3D eyeballs, plastic eyes, and various types of masks and mannequins. These PA videos, combined with genuine (bona fide) data, can be utilized to address concerns related to spoofing, which is a significant threat if these devices are to be used for authentication. The VRBiom dataset is publicly available for research purposes related to biometric applications only. Full article

20 pages, 4426 KiB  
Article
Virtual Inspection System for Pumping Stations with Multimodal Feedback
by Zhiyu Shao, Tianyuan Liu, Jingwei Li and Hongru Tang
Sensors 2024, 24(15), 4932; https://doi.org/10.3390/s24154932 - 30 Jul 2024
Viewed by 1567
Abstract
Pumping stations have undergone significant modernization and digitalization in recent decades. However, traditional virtual inspections often prioritize the visual experience and fail to effectively represent the haptic physical properties of devices during inspections, resulting in poor immersion and interactivity. This paper presents a novel virtual inspection system for pumping stations, incorporating virtual reality interaction and haptic force feedback technology to enhance immersion and realism. The system leverages a 3D model, crafted in 3Ds Max, to provide immersive visualizations. Multimodal feedback is achieved through a combination of haptic force feedback provided by a haptic device and visual information delivered by a VR headset. The system’s data platform integrates with external databases using Unity3D to display relevant information. The system provides immersive 3D visualizations and realistic force feedback during simulated inspections. We compared this system to a traditional virtual inspection method; the comparison demonstrated statistically significant improvements in task completion rates and a reduction in failure rates when using the multimodal feedback approach. This innovative approach holds the potential to enhance inspection safety, efficiency, and effectiveness in the pumping station industry. Full article
(This article belongs to the Topic Water and Energy Monitoring and Their Nexus)

13 pages, 2769 KiB  
Article
Story Starter: A Tool for Controlling Multiple Virtual Reality Headsets with No Active Internet Connection
by Andy T. Woods, Laryssa Whittaker, Neil Smith, Robert Ispas, Jackson Moore, Roderick D. Morgan and James Bennett
Virtual Worlds 2024, 3(2), 171-183; https://doi.org/10.3390/virtualworlds3020009 - 8 Apr 2024
Viewed by 1740
Abstract
Immersive events are becoming increasingly popular, allowing multiple people to experience a range of VR content simultaneously. Onboarders help attendees get into VR experiences in these situations. Controlling VR headsets for others without physically having to put them on first is an important requirement here, as it streamlines the onboarding process and maximizes the number of viewers. Current off-the-shelf solutions require headsets to be connected to a cloud-based app via an active internet connection, which can be problematic in some locations. To address this challenge, we present Story Starter, a solution that enables the control of VR headsets without an active internet connection. Story Starter can start, stop, and install VR experiences, adjust device volume, and display information such as remaining battery life. We developed Story Starter in response to the UK-wide StoryTrails tour in the summer of 2022, which was held across 15 locations and attracted thousands of attendees who experienced a range of immersive content, including six VR experiences. Story Starter helped streamline the onboarding process by allowing onboarders to avoid putting the headset on themselves to complete routine tasks such as selecting and starting experiences, thereby minimizing COVID risks. Another benefit of not needing an active internet connection was that our headsets did not automatically update at inconvenient times, which we have found can sometimes break experiences. Converging evidence suggests that Story Starter was well-received and reliable. However, we also acknowledge some limitations of the solution and discuss several next steps we are considering. Full article
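The abstract does not say how Story Starter communicates with the headsets. As one purely hypothetical illustration (an assumption, not the paper's mechanism), standalone Android-based headsets can be driven over USB without any internet connection using adb; the package and activity names below are invented.

```python
# Hypothetical sketch: controlling Android-based headsets over USB with adb.
import subprocess

def adb(serial: str, *args: str) -> str:
    """Run an adb command against a specific headset and return its output."""
    result = subprocess.run(["adb", "-s", serial, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

def list_headsets() -> list[str]:
    out = subprocess.run(["adb", "devices"], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines()[1:]
            if line.strip().endswith("device")]

def battery_level(serial: str) -> int:
    for line in adb(serial, "shell", "dumpsys", "battery").splitlines():
        if "level:" in line:
            return int(line.split(":")[1])
    return -1

def start_experience(serial: str, package: str, activity: str) -> None:
    adb(serial, "shell", "am", "start", "-n", f"{package}/{activity}")

def stop_experience(serial: str, package: str) -> None:
    adb(serial, "shell", "am", "force-stop", package)

for headset in list_headsets():
    print(headset, "battery:", battery_level(headset))
    start_experience(headset, "com.example.vrstory", ".MainActivity")  # invented names
```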

15 pages, 18498 KiB  
Article
The Optimal Color Space for Realistic Color Reproduction in Virtual Reality Content Design
by Hyun-Suh Kim, Eun Joung Kim and JungYoon Kim
Electronics 2023, 12(22), 4630; https://doi.org/10.3390/electronics12224630 - 12 Nov 2023
Cited by 3 | Viewed by 3811
Abstract
In the emerging era of the Metaverse in which virtual reality (VR) is used for various purposes such as product demonstration, marketing, and online commerce, it becomes essential to reproduce colors accurately not only for gaming but also for brand recognition and product representation. In this regard, this study investigated the optimal color space to minimize the difference between the intended colors for the VR device and the colors selected on the designers’ monitor during the development process. To this end, this study conducted measurements and provided technical demonstrations to highlight the color differences between three color spaces: sRGB, AdobeRGB, and DCI-P3. Through this approach, we discovered that designing VR content using the DCI-P3 color gamut yields the best results currently available. Full article
(This article belongs to the Special Issue Virtual Reality and Scientific Visualization, 2nd Edition)
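As a rough illustration of the kind of color difference this study measures (not the authors' measurement procedure), the sketch below converts two sRGB-encoded colors to CIELAB and reports a delta-E 1976 distance. The choice of metric, the transforms, and the example values are assumptions made for illustration only.

```python
# Minimal sketch: perceptual difference between two sRGB colors via CIELAB delta-E 1976.
import numpy as np

def srgb_to_linear(rgb):
    rgb = np.asarray(rgb, dtype=float)
    return np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)

# Standard sRGB (D65) linear-RGB -> XYZ matrix
M_SRGB = np.array([[0.4124564, 0.3575761, 0.1804375],
                   [0.2126729, 0.7151522, 0.0721750],
                   [0.0193339, 0.1191920, 0.9503041]])

def xyz_to_lab(xyz, white=(0.95047, 1.0, 1.08883)):  # D65 white point
    t = np.asarray(xyz, dtype=float) / np.asarray(white)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def delta_e76(rgb1, rgb2):
    """Euclidean distance in CIELAB between two sRGB-encoded colors."""
    lab1 = xyz_to_lab(M_SRGB @ srgb_to_linear(rgb1))
    lab2 = xyz_to_lab(M_SRGB @ srgb_to_linear(rgb2))
    return float(np.linalg.norm(lab1 - lab2))

# Example: a brand color vs. a slightly shifted rendering of it
print(delta_e76([0.80, 0.10, 0.15], [0.78, 0.12, 0.18]))
```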

17 pages, 4519 KiB  
Review
Metaverse as Tech for Good: Current Progress and Emerging Opportunities
by Muhammad Zahid Iqbal and Abraham G. Campbell
Virtual Worlds 2023, 2(4), 326-342; https://doi.org/10.3390/virtualworlds2040019 - 17 Oct 2023
Cited by 15 | Viewed by 10466
Abstract
The Metaverse is an upcoming transformative technology that will impact our future society with immersive experiences. The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. The Metaverse concept is the most recent trend to encapsulate and define the potential new digital landscape. However, with the introduction of high-speed, low-latency 5G, advancements in hardware and software with the graphics power to display millions of polygons in 3D, and blockchain technology, this concept is no longer fiction. This transition from today’s Internet to a spatially embodied Internet is, at its core, a transition from 2D to 3D interactions taking place in multiple virtual universes. In recent years, augmented and virtual reality have created possibilities in the private and professional spheres. The new Virtual Reality (VR) headsets and Augmented Reality (AR) glasses can provide immersion in the physical sense. Technology must offer realistic experiences for users to turn this concept into reality. This paper focuses on the potential use cases and benefits of the Metaverse as a tech for good. The research paper outlines the potential areas where a positive impact could occur, highlights recent progress, and discusses the issues around trust, ethics, and cognitive load. Full article

25 pages, 16913 KiB  
Article
Development and Evaluation of an Enhanced Virtual Reality Flight Simulation Tool for Airships
by Mohsen Rostami, Jafer Kamoonpuri, Pratik Pradhan and Joon Chung
Aerospace 2023, 10(5), 457; https://doi.org/10.3390/aerospace10050457 - 15 May 2023
Cited by 8 | Viewed by 3966
Abstract
A real-time flight simulation tool is proposed using a virtual reality head-mounted display (VR-HMD) for remotely piloted airships operating in beyond-line-of-sight (BLOS) conditions. In particular, the VR-HMD was developed for stratospheric airships flying at low/high altitudes. The proposed flight simulation tool uses the corresponding aerodynamics characteristics of the airship, the buoyancy effect, mass balance, added mass, propulsion contributions and ground reactions in the FlightGear Flight Simulator (FGFS). The VR headset was connected to the FGFS along with the radio controller containing the real-time orientation/state of each button, which is also simulated to provide better situational awareness, and a head-up display (HUD) that was developed to provide the required flight data. In this work, a system was developed to connect the FGFS and the VR-capable graphics engine Unity to a PC and a wireless VR-HMD in real time with minimal lag between data transmission. A balance was found for FGFS to write to a CSV file at a period of 0.01 s. For Unity, the file was read every frame, which translates to around 0.0167 s (60 Hz). A test procedure was also conducted with a similar rating technique based on the NASA TLX questionnaire, which identifies the pilot’s available mental capacity when completing an assigned task to assure the comfortability of the proposed VR-HMD. Accordingly, a comparison was made for the aircraft control using the desktop simulator and the VR-HMD tool. The results showed that the current iteration of the system is ideal to train pilots on using similar systems in a safe and immersive environment. Furthermore, such an advanced portable system may even increase the situational awareness of pilots and allow them to complete a sizeable portion of actual flight tests with the same data transmission procedures in simulation. The VR-HMD flight simulator was also conceived to express the ground control station (GCS) concept and transmit flight information as well as the point of view (POV) visuals in real-time using the real environment broadcast using an onboard camera. Full article
(This article belongs to the Special Issue Mission Analysis and Design of Lighter-than-Air Flying Vehicles)
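A minimal sketch of the file-based exchange the abstract describes: one process (standing in for FlightGear) appends a flight-state row every 10 ms, while a reader (standing in for the Unity/VR side) polls the latest row at roughly 60 Hz. The file name and field list are hypothetical, and the authors' actual implementation is not reproduced here.

```python
# Sketch of a CSV-mediated data path between a simulator process and a renderer process.
import csv, time, threading

STATE_FILE = "flight_state.csv"          # hypothetical shared file
FIELDS = ["t", "altitude_m", "pitch_deg", "roll_deg", "yaw_deg"]

def writer(duration_s=2.0, period_s=0.01):
    """Append one state row every `period_s` seconds (stand-in for FGFS)."""
    t0 = time.time()
    with open(STATE_FILE, "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(FIELDS)
        while time.time() - t0 < duration_s:
            t = time.time() - t0
            out.writerow([f"{t:.3f}", 20000.0, 2.5, -1.0, 90.0])
            f.flush()
            time.sleep(period_s)

def reader(duration_s=2.0, period_s=1 / 60):
    """Poll the newest row roughly once per rendered frame (stand-in for Unity)."""
    t0 = time.time()
    while time.time() - t0 < duration_s:
        try:
            with open(STATE_FILE) as f:
                rows = list(csv.reader(f))
            if len(rows) > 1:
                latest = dict(zip(rows[0], rows[-1]))
                # ...update HUD / cockpit instruments from `latest` here...
        except FileNotFoundError:
            pass
        time.sleep(period_s)

w = threading.Thread(target=writer); r = threading.Thread(target=reader)
w.start(); r.start(); w.join(); r.join()
```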

15 pages, 9217 KiB  
Review
Advanced Study of Optical Imaging Systems for Virtual Reality Head-Mounted Displays
by Zhongju Ren, Xiuhua Fu, Keyan Dong, Ying Lai and Jingjing Zhang
Photonics 2023, 10(5), 555; https://doi.org/10.3390/photonics10050555 - 10 May 2023
Cited by 12 | Viewed by 5041
Abstract
Driven by rapid innovation in science, technology, and industrial manufacturing, virtual reality display technology has developed rapidly. At present, the application of virtual reality display technology is expanding in many fields such as the military, medicine, aviation, and education. This paper analyzes the imaging principles of the human visual system and the optical performance requirements of VR head-mounted displays, summarizes current design schemes for VR head-mounted optical imaging systems, focuses on the principle and key parameters of each optical system, and compares the advantages and disadvantages of the different schemes. The development prospects and directions of virtual reality headset displays are also discussed. Full article
(This article belongs to the Special Issue New Advances in Freeform Optics Design)

23 pages, 21161 KiB  
Article
Efficiency of VR-Based Safety Training for Construction Equipment: Hazard Recognition in Heavy Machinery Operations
by Ankit Shringi, Mehrdad Arashpour, Emadaldin Mohammadi Golafshani, Abbas Rajabifard, Tim Dwyer and Heng Li
Buildings 2022, 12(12), 2084; https://doi.org/10.3390/buildings12122084 - 28 Nov 2022
Cited by 27 | Viewed by 5705
Abstract
Machinery operations on construction sites result in many serious injuries and fatalities. Practical training in a virtual environment is the key to improving the safety performance of machinery operators on construction sites. However, there is limited research focusing on factors responsible for the efficiency of virtual training in increasing hazard identification ability among novice trainees. This study analyzes the efficiency of virtual safety training with head-mounted VR displays against flat screen displays among novice operators. Towards this aim, a cohort of tower crane operation trainees was subjected to multiple simulations in a virtual environment. During the simulations, feedback was collected using a joystick to record the accuracy of hazard identification, while a post-simulation questionnaire was used to collect responses regarding factors responsible for effective virtual training. Questionnaire responses were analyzed using an interval type-2 fuzzy analytical hierarchical process to interpret the effect of display types on training efficiency, while joystick response times were statistically analyzed to understand the effect of display types on the accuracy of identification across different types of safety hazards. It was observed that VR headsets increase the efficiency of virtual safety training by providing greater immersion, realism, and depth perception while increasing the accuracy of hazard identification for critical hazards such as electric cables. Full article

12 pages, 4190 KiB  
Article
Implementation of the Haptic Tele-Weight Device Using a 10 MHz Smart Torch VLC Link
by Aqeel Farooq and Xiping Wu
Micromachines 2022, 13(11), 2031; https://doi.org/10.3390/mi13112031 - 20 Nov 2022
Viewed by 2438
Abstract
Considering the need for a protected e-commerce platform, the absence of haptic interaction in head-mounted displays (HMDs), and the availability of faster communication technology, this research work presents an amended version of the tele-weight device that utilizes 6G visible light communication (VLC) technology, performs faster, and handles heavier articles. The enhanced device, called the ‘VLC tele-weight device’, is intended to be affixed to the headset, allowing the user to feel the weight of a product ordered from the virtual store. The sending and receiving ends of the proposed device communicate over the VLC link. An Arduino Nano is used as the microcontroller (MCU) in the project. The sending-end circuitry measures the weight using a load cell and HX711 amplifier combination and transmits it via the connected LED. A pre-equalizer circuit is connected between the LED and the sending end to improve the bandwidth. On the receiver side, a post-equalizer circuit improves the shape of the received pulse. The received weight value is then displayed using a motor-gear combination. The sending-end device is to be sited at the virtual store, while the receiving end is to be positioned on the VR headset. The performance of the device was measured through repeated trials, and the percentage error was found to be between 0.5% and 3%. Merging the fields of embedded systems, the internet of things (IoT), VLC, signal processing, virtual reality (VR), e-commerce, and haptic sensing, the idea proposed in this research work can help introduce haptic interaction and sensation-based innovation in the immersive visualization (IV) and graphical user interface (GUI) domains. Full article
(This article belongs to the Special Issue Embedded System for Smart Sensors/Actuators and IoT Applications)
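As a purely illustrative sketch of the kind of link described above (not the authors' firmware, and simulated in software rather than driving real hardware), the code below frames a weight reading as bytes, on-off keys it as LED levels, and thresholds the samples back into the original value. The frame format, symbol rate, and levels are assumptions.

```python
# Toy simulation of sending a weight reading over an on-off-keyed optical link.
import struct

def encode_weight(grams: float) -> list[int]:
    """Pack a weight into 4 bytes (float32) and expand to a bit sequence, MSB first."""
    payload = struct.pack(">f", grams)
    return [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]

def modulate(bits, samples_per_bit=4, on_level=1.0, off_level=0.05):
    """On-off keying: LED brightness per sample."""
    return [on_level if b else off_level for b in bits for _ in range(samples_per_bit)]

def demodulate(samples, samples_per_bit=4, threshold=0.5):
    """Threshold the mid-sample of each symbol, then reassemble bytes."""
    bits = [1 if samples[i + samples_per_bit // 2] > threshold else 0
            for i in range(0, len(samples), samples_per_bit)]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return struct.unpack(">f", bytes(data))[0]

weight = 1234.5  # grams, e.g. a reading from a load cell + HX711 front end
received = demodulate(modulate(encode_weight(weight)))
print(f"sent {weight} g, received {received} g")  # identical on a noiseless link
```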

18 pages, 1980 KiB  
Article
Usability and Acceptance of Exergames Using Different Types of Training among Older Hypertensive Patients in a Simulated Mixed Reality
by Oskar Stamm, Susan Vorwerg, Michele Haink, Kristian Hildebrand and Ilona Buchem
Appl. Sci. 2022, 12(22), 11424; https://doi.org/10.3390/app122211424 - 10 Nov 2022
Cited by 12 | Viewed by 3420
Abstract
Virtual and augmented reality (VR/AR) exergames are promising tools for increasing training motivation. However, the use of exergames with mixed reality (MR) headsets remains under-researched. Older adults with hypertension could also benefit from the increased training adherence associated with MR. Endurance and strength endurance exercises are recommended for this group to lower blood pressure. The aim of the preliminary study (n = 22) was to compare the usability and acceptance of two exergames, which represent two different training types—strength endurance training (SET) and endurance training (ET). The developed exergame prototypes were applied in “simulated MR” using a VR head-mounted display. We examined the following outcomes: usability (TUI), intention to use (TUI), subjective task load (NASA-TLX), frustration (NASA-TLX), and presence (PQ). The results showed that frustration was significantly greater in the ET than in the SET (p = 0.038). Presence was significantly higher in the SET (p = 0.002). No significant differences in usability and acceptance were found in the exergames. The results indicate that usability and acceptance are not related to the type of training when utilizing MR exergames. Whether the results are transferable with a real MR headset must be determined in further research. Full article

22 pages, 29561 KiB  
Article
HoloKinect: Holographic 3D Video Conferencing
by Stephen Siemonsma and Tyler Bell
Sensors 2022, 22(21), 8118; https://doi.org/10.3390/s22218118 - 23 Oct 2022
Cited by 8 | Viewed by 3958
Abstract
Recent world events have caused a dramatic rise in the use of video conferencing solutions such as Zoom and FaceTime. Although 3D capture and display technologies are becoming common in consumer products (e.g., Apple iPhone TrueDepth sensors, Microsoft Kinect devices, and Meta Quest VR headsets), 3D telecommunication has not yet seen any appreciable adoption. Researchers have made great progress in developing advanced 3D telepresence systems, but often with burdensome hardware and network requirements. In this work, we present HoloKinect, an open-source, user-friendly, and GPU-accelerated platform for enabling live, two-way 3D video conferencing on commodity hardware and a standard broadband internet connection. A Microsoft Azure Kinect serves as the capture device and a Looking Glass Portrait multiscopically displays the final reconstructed 3D mesh for a hologram-like effect. HoloKinect packs color and depth information into a single video stream, leveraging multiwavelength depth (MWD) encoding to store depth maps in standard RGB video frames. The video stream is compressed with highly optimized and hardware-accelerated video codecs such as H.264. A search of the depth and video encoding parameter space was performed to analyze the quantitative and qualitative losses resulting from HoloKinect’s lossy compression scheme. Visual results were acceptable at all tested bitrates (3–30 Mbps), while the best results were achieved with higher video bitrates and full 4:4:4 chroma sampling. RMSE values of the recovered depth measurements were low across all settings permutations. Full article
(This article belongs to the Special Issue Kinect Sensor and Its Application)
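The exact multiwavelength depth (MWD) encoding is not reproduced here. As a loose, illustrative stand-in (an assumption on our part), the sketch below shows the simpler idea behind carrying a depth map in the channels of an ordinary RGB video frame: a 16-bit depth value split into a high byte (R) and a low byte (G). Unlike MWD's smooth encodings, this naive byte split would not survive lossy video compression well, which is precisely why schemes such as MWD are used.

```python
# Toy illustration: pack a 16-bit depth map into two channels of an RGB frame and recover it.
import numpy as np

def pack_depth_to_rgb(depth_mm: np.ndarray) -> np.ndarray:
    """depth_mm: (H, W) uint16 depth in millimetres -> (H, W, 3) uint8 frame."""
    rgb = np.zeros((*depth_mm.shape, 3), dtype=np.uint8)
    rgb[..., 0] = (depth_mm >> 8).astype(np.uint8)    # high byte
    rgb[..., 1] = (depth_mm & 0xFF).astype(np.uint8)  # low byte
    return rgb

def unpack_rgb_to_depth(rgb: np.ndarray) -> np.ndarray:
    return (rgb[..., 0].astype(np.uint16) << 8) | rgb[..., 1].astype(np.uint16)

depth = np.random.randint(500, 4000, size=(480, 640), dtype=np.uint16)  # fake depth frame
assert np.array_equal(unpack_rgb_to_depth(pack_depth_to_rgb(depth)), depth)
```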

16 pages, 2853 KiB  
Article
Assessing Electroencephalography as a Stress Indicator: A VR High-Altitude Scenario Monitored through EEG and ECG
by Vasileios Aspiotis, Andreas Miltiadous, Konstantinos Kalafatakis, Katerina D. Tzimourta, Nikolaos Giannakeas, Markos G. Tsipouras, Dimitrios Peschos, Euripidis Glavas and Alexandros T. Tzallas
Sensors 2022, 22(15), 5792; https://doi.org/10.3390/s22155792 - 3 Aug 2022
Cited by 29 | Viewed by 6297
Abstract
Over the last decade, virtual reality (VR) has become an increasingly accessible commodity. Head-mounted display (HMD) immersive technologies allow researchers to simulate experimental scenarios that would be unfeasible or risky in real life. An example is extreme heights exposure simulations, which can be utilized in research on stress system mobilization. Until recently, electroencephalography (EEG)-related research was focused on mental stress prompted by social or mathematical challenges, with only a few studies employing HMD VR techniques to induce stress. In this study, we combine a state-of-the-art EEG wearable device and an electrocardiography (ECG) sensor with a VR headset to provoke stress in a high-altitude scenario while monitoring EEG and ECG biomarkers in real time. A robust pipeline for signal cleaning is implemented to preprocess the noise-infiltrated (due to movement) EEG data. Statistical and correlation analysis is employed to explore the relationship between these biomarkers and stress. The participant pool is divided into two groups based on their heart rate increase, and statistically significant EEG biomarker differences emerged between them. Finally, the occipital-region band power changes and occipital asymmetry alterations were found to be associated with height-related stress and brain activation in the beta and gamma bands, which correlates with the results of the self-reported Perceived Stress Scale questionnaire. Full article
(This article belongs to the Special Issue Wearable and Mobile Sensors and Data Processing)
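A minimal sketch (assumptions, not the authors' pipeline) of the kind of band-power feature the abstract refers to: Welch power spectral density on an occipital EEG channel, integrated over conventional beta (13-30 Hz) and gamma (30-45 Hz) bands. The sampling rate, band edges, and synthetic signal below are illustrative only.

```python
# Sketch: beta/gamma band power from one EEG channel via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                  # 10 s of data
eeg = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 20 * t)  # synthetic occipital channel

def band_power(signal, fs, f_lo, f_hi):
    """Integrated power in [f_lo, f_hi] Hz, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

beta = band_power(eeg, fs, 13.0, 30.0)
gamma = band_power(eeg, fs, 30.0, 45.0)
print(f"beta power: {beta:.4f}  gamma power: {gamma:.4f}")
```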

14 pages, 2458 KiB  
Article
Affective States and Virtual Reality to Improve Gait Rehabilitation: A Preliminary Study
by Jafet Rodriguez, Carolina Del-Valle-Soto and Javier Gonzalez-Sanchez
Int. J. Environ. Res. Public Health 2022, 19(15), 9523; https://doi.org/10.3390/ijerph19159523 - 3 Aug 2022
Cited by 6 | Viewed by 2721
Abstract
Over seven million people suffer from an impairment in Mexico; 64.1% are gait-related, and 36.2% are children aged 0 to 14 years. Furthermore, many suffer from neurological disorders, which limits their verbal skills to provide accurate feedback. Robot-assisted gait therapy has shown significant benefits, but the users must make an active effort to build muscle memory, which usually happens only around 30% of the time. Moreover, during therapy, the patients’ affective state is mostly unsatisfied, wide-awake, and powerless. This paper proposes a method for increasing efficiency by combining affective data from an Emotiv Insight, an Oculus Go headset displaying an immersive interaction, and a feedback system. Our preliminary study had eight patients during therapy and eight students analyzing the footage using the Self-Assessment Manikin. It showed that it is possible to use an EEG headset and identify the affective state with a weighted average precision of 97.5%, recall of 87.9%, and F1-score of 92.3% in general. Furthermore, using a VR device could boost efficiency by a further 16%. In conclusion, this method allows providing feedback to the therapist in real time even if the patient is non-verbal and has a limited amount of facial and body expressions. Full article
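For readers unfamiliar with the "weighted average" figures quoted above, the short sketch below (with made-up labels, not the study's data) shows how support-weighted precision, recall, and F1 are conventionally computed: each class's score is weighted by the number of true samples of that class.

```python
# Illustrative computation of support-weighted precision/recall/F1 with scikit-learn.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["calm", "calm", "stressed", "stressed", "stressed", "neutral"]
y_pred = ["calm", "neutral", "stressed", "stressed", "calm", "neutral"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```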

28 pages, 13414 KiB  
Article
A Novel Untethered Hand Wearable with Fine-Grained Cutaneous Haptic Feedback
by Alexander Co Abad, David Reid and Anuradha Ranasinghe
Sensors 2022, 22(5), 1924; https://doi.org/10.3390/s22051924 - 1 Mar 2022
Cited by 12 | Viewed by 7984
Abstract
During open surgery, a surgeon relies not only on the detailed view of the organ being operated upon and on being able to feel the fine details of this organ, but also heavily relies on the combination of these two senses. In laparoscopic surgery, haptic feedback provides surgeons with information on interaction forces between instrument and tissue. There have been many studies to date that mimic haptic feedback in laparoscopic-related telerobotics. However, cutaneous feedback is mostly restricted or limited in haptic feedback-based minimally invasive studies. We argue that fine-grained information about the instrument’s end is needed in laparoscopic surgeries and can be conveyed via cutaneous feedback. We propose an exoskeleton haptic hand wearable which consists of five 4 × 4 miniaturized fingertip actuators, 80 in total, to convey cutaneous feedback. The wearable is modular, lightweight, Bluetooth- and WiFi-enabled, and has a maximum power consumption of 830 mW. Software was developed to demonstrate rapid tactile actuation of edges, allowing the user to feel contours through cutaneous feedback. To demonstrate the idea with an object displayed on a flat monitor, initial tests were carried out in 2D. In the second phase, the wearable exoskeleton glove was further developed to let the user feel 3D virtual objects in a VR environment using a virtual reality (VR) headset. Two-dimensional and 3D objects were tested with our novel untethered haptic hand wearable. Our results show that users recognize actuation in cutaneous feedback from a single tap with 92.22% accuracy. Our wearable has an average latency of 46.5 ms, which is much less than the 600 ms delay tolerable by a surgeon in teleoperation. Therefore, we suggest our untethered hand wearable for enhancing multimodal perception in minimally invasive surgeries, allowing surgeons to naturally feel the immediate environments of the instruments. Full article
(This article belongs to the Special Issue Robotics and Haptics: Haptic Feedback for Medical Robots)
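The abstract does not detail how edges are mapped onto the five 4 × 4 fingertip arrays. The sketch below is one hypothetical way to reduce an edge map under a single fingertip to a 4 × 4 on/off actuation pattern; the grid mapping and threshold are assumptions, not the authors' software.

```python
# Hypothetical mapping from a binary edge patch to a 4x4 taxel on/off pattern.
import numpy as np

def edges_to_taxels(edge_patch: np.ndarray, grid=4, on_fraction=0.1) -> np.ndarray:
    """edge_patch: (H, W) binary edge image under one fingertip -> (grid, grid) bool."""
    h, w = edge_patch.shape
    taxels = np.zeros((grid, grid), dtype=bool)
    for i in range(grid):
        for j in range(grid):
            cell = edge_patch[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
            taxels[i, j] = cell.mean() >= on_fraction   # "on" if enough edge pixels fall inside
    return taxels

# Example: a vertical edge running through the middle of the fingertip region
patch = np.zeros((64, 64), dtype=np.uint8)
patch[:, 30:34] = 1
print(edges_to_taxels(patch).astype(int))
```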

22 pages, 11187 KiB  
Article
Virtual Reality Visualization of CFD Simulated Blood Flow in Cerebral Aneurysms Treated with Flow Diverter Stents
by Sima Baheri Islami, Mike Wesolowski, William Revell and Xiongbiao Chen
Appl. Sci. 2021, 11(17), 8082; https://doi.org/10.3390/app11178082 - 31 Aug 2021
Cited by 10 | Viewed by 4872
Abstract
Virtual reality (VR) has the potential to be a powerful tool for the visualization of simulated blood flow in cerebral aneurysms. This paper presents our study aimed at developing the VR visualization of computational fluid dynamics (CFD) simulations of cerebral aneurysms treated with flow-diverting (FD) stents. First, a spherical sidewall aneurysm located at a simplified internal carotid artery was considered for investigating the impact of stent deployment and positioning on the corresponding spatially time-varying blood flow behavior. The three-dimensional unsteady blood flow over a cardiac cycle was simulated numerically using the finite volume method, and the distributions of hemodynamic parameters inside the aneurysm sac, and on its wall, were presented with and without stent cases. Two stent positions, with and without a gap between the artery wall and stent, were considered to show the influence of correct stent position on aneurysm treatment. Second, a straightforward workflow was developed to import, process, and visualize the CFD analysis data in a VR environment by using open-source software with a high resolution. The Unity3D engine was used for displaying the processed animations in a VR environment operated on a head-mounted display (HMD). The refining process of each frame of time-varying CFD data was automated. The animated flow elements rendered in the VR environment were velocity vectors, velocity contours, streamlines, particle traces, and point clouds. CFD results showed that proper stenting facilitates thrombosis and occlusion of the aneurysm by modification of the flow patterns, which leads to lower inflow jet velocities into the aneurysm, longer turnover time, lower aneurysm-averaged kinetic energy, and lower wall shear stress. Additionally, the results indicated that a gap between the stent and the parent artery may lead to undesirable hemodynamic alterations. The VR visualization illustrated that the recognition of the potential in danger regions of aneurysms and the evaluation of the performance of FD stents in aneurysm treatment can be conducted without the need for several slices through the parent artery and aneurysm, as is required for traditional postprocessing methods. Through VR visualization, the details of the simulation results become readily available by navigating in the 3D animated flow elements using a high-degree-of-freedom headset. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
