Search Results (48)

Search Parameters:
Keywords = stereoscopic displays

16 pages, 4166 KB  
Article
Preliminary Study on the Accuracy Comparison Between 3D-Printed Bone Models and Naked-Eye Stereoscopy-Based Virtual Reality Models for Presurgical Molding in Orbital Floor Fracture Repair
by Masato Tsuchiya, Izumi Yasutake, Satoru Tamura, Satoshi Kubo and Ryuichi Azuma
Appl. Sci. 2025, 15(24), 12963; https://doi.org/10.3390/app152412963 - 9 Dec 2025
Viewed by 316
Abstract
Three-dimensional (3D) printing enables accurate implant pre-shaping in orbital reconstruction but is costly and time-consuming. Naked-eye stereoscopic displays (NEDs) enable virtual implant modeling without fabrication. This study aimed to compare the reproducibility and accuracy of NED-based virtual reality (VR) pre-shaping with conventional 3D-printed models. Two surgeons pre-shaped implants for 11 unilateral orbital floor fractures using both 3D-printed and NED-based VR models with identical computed tomography data. The depth, area, and axis dimensions were measured, and reproducibility and agreement were assessed using intraclass correlation coefficients (ICCs), Bland–Altman analysis, and the shape similarity metrics Hausdorff distance (HD) and root mean square error (RMSE). Intra-rater ICCs were ≥0.80 for all parameters except depth in the VR model. The HD and RMSE revealed no significant differences between 3D (2.64 ± 0.85 mm; 1.02 ± 0.42 mm) and VR (3.14 ± 1.18 mm; 1.24 ± 0.53 mm). Inter-rater ICCs were ≥0.80 for the area and axes in both modalities, while depth remained low. Between modalities, no significant differences were found; HD and RMSE were 2.95 ± 0.94 mm and 1.28 ± 0.49 mm. The NED-based VR pre-shaping achieved reproducibility and dimensional agreement comparable to 3D printing, suggesting a feasible cost- and time-efficient alternative for orbital reconstruction. These preliminary findings suggest that NED-based pre-shaping may be feasible; however, larger studies are required to confirm whether VR can achieve performance comparable to 3D-printed models. Full article
(This article belongs to the Special Issue Virtual Reality (VR) in Healthcare)
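
As a rough illustration of the shape-agreement metrics named in this abstract, the sketch below computes a symmetric Hausdorff distance and a nearest-neighbour RMSE between two registered 3D point clouds. The arrays, noise level, and helper names are synthetic stand-ins for illustration, not the study's data or code.

```python
# Minimal sketch (not the authors' code): shape-agreement metrics between two
# pre-shaped implant surfaces sampled as 3D point clouds, assumed to be
# registered in a common coordinate frame (units of mm).
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between (N, 3) and (M, 3) point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def rmse_mm(a: np.ndarray, b: np.ndarray) -> float:
    """RMSE of nearest-neighbour residuals from points in `a` to surface `b`."""
    d, _ = cKDTree(b).query(a)                      # closest-point distances
    return float(np.sqrt(np.mean(d ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    printed = rng.normal(size=(500, 3))             # stand-in for the 3D-printed shape
    virtual = printed + rng.normal(scale=0.05, size=(500, 3))  # stand-in for the VR shape
    print(f"HD   = {hausdorff_mm(printed, virtual):.2f} mm")
    print(f"RMSE = {rmse_mm(printed, virtual):.2f} mm")
```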

21 pages, 4566 KB  
Article
Impact of Stereoscopic Technologies on Heart Rate Variability in Extreme VR Gaming Conditions
by Penio Lebamovski and Evgeniya Gospodinova
Technologies 2025, 13(12), 545; https://doi.org/10.3390/technologies13120545 - 24 Nov 2025
Viewed by 513
Abstract
This study examines the effects of different stereoscopic technologies on physiological responses in immersive virtual reality (VR) environments. Five participant groups were evaluated: a control group (no stereoscopy) and four groups using anaglyph, passive, or active glasses, or VR helmets. Heart rate variability (HRV) was measured in both time (MeanRR, SDNN, RMSSD, pNN50) and frequency (LF, HF, LF/HF) domains to assess autonomic nervous system activity. Active glasses, polarized (passive) glasses, and VR helmets significantly reduced SDNN and RMSSD compared to the control group (p < 0.01), with VR helmets causing the largest decrease (MeanRR −70%, RMSSD −51%). Anaglyph glasses showed milder effects. Nonlinear analysis revealed reduced entropies and Hurst parameter in highly immersive conditions, indicating impaired fractal heart rate structure and increased physiological load. These results demonstrate a clear relationship between immersion level and cardiovascular response, emphasising that higher immersion increases physiological stress. The scientific contribution lies in the combined application of linear and nonlinear HRV analysis to systematically compare different stereoscopic display types under controlled gaming immersion. The study proposes a practical methodology for assessing HRV in VR settings, which can inform the ergonomic design of VR systems and ensure users’ physiological safety. By highlighting the differential impacts of stereoscopic technologies on HRV, the findings offer guidance for optimising VR visualisation to balance immersive experience with user comfort and health. Full article
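
The time-domain HRV indices listed in this abstract have standard definitions; a minimal sketch with toy RR intervals (not the study's recordings) is shown below.

```python
# Minimal sketch: standard time-domain HRV metrics from RR intervals in ms.
import numpy as np

def hrv_time_domain(rr_ms: np.ndarray) -> dict:
    diffs = np.diff(rr_ms)                                    # successive RR differences
    return {
        "MeanRR": float(np.mean(rr_ms)),                      # mean RR interval (ms)
        "SDNN":   float(np.std(rr_ms, ddof=1)),               # overall variability (ms)
        "RMSSD":  float(np.sqrt(np.mean(diffs ** 2))),        # short-term variability (ms)
        "pNN50":  float(np.mean(np.abs(diffs) > 50.0) * 100), # % of successive diffs > 50 ms
    }

rr = np.array([812, 798, 805, 830, 790, 760, 802, 815], dtype=float)  # toy data
print(hrv_time_domain(rr))
```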

20 pages, 3686 KB  
Article
Comparative Analysis of Correction Methods for Multi-Camera 3D Image Processing System and Its Application Design in Safety Improvement on Hot-Working Production Line
by Joanna Gąbka
Appl. Sci. 2025, 15(16), 9136; https://doi.org/10.3390/app15169136 - 19 Aug 2025
Viewed by 1013
Abstract
The paper presents the results of research focused on configuring a system for stereoscopic view capturing and processing. The system is being developed for use in staff training scenarios based on Virtual Reality (VR), where high-quality, distortion-free imagery is essential. This research addresses key challenges in image distortion, including the fish-eye effect and other aberrations. In addition, it considers the computational and bandwidth efficiency required for effective and economical streaming and real-time display of recorded content. Measurements and calculations were performed using a selected set of cameras, adapters, and lenses, chosen based on predefined criteria. A comparative analysis was conducted between the nearest-neighbour linear interpolation method and a third-order polynomial interpolation (ABCD polynomial). These methods were tested and evaluated using three different computational approaches, each aimed at optimizing data processing efficiency critical for real-time image correction. Images captured during real-time video transmission—processed using the developed correction techniques—are presented. In the final sections, the paper describes the configuration of an innovative VR-based training system incorporating an edge computing device. A case study involving a factory producing wheel rims is also presented to demonstrate the practical application of the system. Full article
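
The paper's exact ABCD-polynomial correction is not reproduced here; the sketch below only illustrates the general idea of undistorting a fish-eye-like image by remapping pixels with a third-order polynomial in the normalised radius, with the interpolation flag (nearest-neighbour vs. higher-order) as the point of comparison. The coefficients and the OpenCV-based remap are assumptions made for illustration.

```python
# Minimal sketch (assumed approach, not the paper's implementation): radial
# undistortion via a third-order polynomial in the normalised radius.
import cv2
import numpy as np

def undistort_polynomial(img, a, b, c, d, interpolation=cv2.INTER_NEAREST):
    """Remap pixels using r_src = a*r^3 + b*r^2 + c*r + d (r normalised to [0, 1])."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.indices((h, w), dtype=np.float32)
    dx, dy = x - cx, y - cy
    r = np.sqrt(dx ** 2 + dy ** 2) / np.sqrt(cx ** 2 + cy ** 2)   # normalised radius
    poly = a * r ** 3 + b * r ** 2 + c * r + d
    scale = np.where(r > 0, poly / np.maximum(r, 1e-6), 1.0)      # radial rescale factor
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    # Swap the flag for cv2.INTER_LINEAR / cv2.INTER_CUBIC to compare interpolation methods.
    return cv2.remap(img, map_x, map_y, interpolation=interpolation)

frame = np.zeros((480, 640, 3), np.uint8)                             # stand-in camera frame
corrected = undistort_polynomial(frame, a=0.0, b=0.0, c=1.0, d=0.0)   # identity mapping
```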

18 pages, 12334 KB  
Article
Canopy Height Integration for Precise Forest Aboveground Biomass Estimation in Natural Secondary Forests of Northeast China Using Gaofen-7 Stereo Satellite Data
by Caixia Liu, Huabing Huang, Zhiyu Zhang, Wenyi Fan and Di Wu
Remote Sens. 2025, 17(1), 47; https://doi.org/10.3390/rs17010047 - 27 Dec 2024
Cited by 2 | Viewed by 1741
Abstract
Accurate estimates of forest aboveground biomass (AGB) are necessary for reliable tracking of forest carbon stock. Gaofen-7 (GF-7) is the first civilian sub-meter three-dimensional (3D) mapping satellite from China. It is equipped with a laser altimeter system and a dual-line array stereoscopic mapping camera, which enables it to synchronously generate full-waveform LiDAR data and stereoscopic images. The bulk of existing research has examined how accurate GF-7 is for topographic measurements of bare land or canopy height. The measurement of forest aboveground biomass has not received as much attention as it deserves. This study aimed to assess the capability of GF-7 stereo imaging, expressed as derived topographic features, for forest aboveground biomass estimation. The aboveground biomass model was constructed using the random forest machine learning technique, which was accomplished by combining the use of in situ field measurements, pairs of GF-7 stereo images, and the corresponding generated canopy height model (CHM). Findings showed that the biomass estimation model had an accuracy of R2 = 0.76, RMSE = 7.94 t/ha, which was better than that of the model without the inclusion of forest canopy height (R2 = 0.30, RMSE = 21.02 t/ha). These results show that GF-7 has considerable application potential in gathering large-scale high-precision forest aboveground biomass using a restricted amount of field data. Full article
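
A minimal sketch of the kind of random-forest AGB regression described above, with synthetic predictors standing in for the GF-7-derived canopy height and topographic features; the variable names and accuracy figures are illustrative assumptions, not the study's data.

```python
# Minimal sketch (assumed workflow): random-forest AGB model with R^2 and RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((200, 4))      # stand-ins for CHM height, slope, aspect, elevation
y = 60 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=5, size=200)   # synthetic AGB (t/ha)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2   = {r2_score(y_te, pred):.2f}")
print(f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} t/ha")
```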

27 pages, 3487 KB  
Article
What Factors Affect Binocular Summation?
by Marzouk Yassin, Maria Lev and Uri Polat
Brain Sci. 2024, 14(12), 1205; https://doi.org/10.3390/brainsci14121205 - 28 Nov 2024
Cited by 1 | Viewed by 1900
Abstract
Binocular vision may serve as a good model for research on awareness. Binocular summation (BS) can be defined as the superiority of binocular over monocular visual performance. Early studies of BS found an improvement of a factor of about 1.4 (empirically), leading to models suggesting a quadratic summation of the two monocular inputs (√2). Neural interaction modulates a target’s visibility within the same eye or between eyes (facilitation or suppression). Recent results indicated that at a closely flanked stimulus, BS is characterized by instability; it relies on the specific order in which the stimulus condition is displayed. Otherwise, BS is stable. These results were revealed in experiments where the tested eye was open, whereas the other eye was occluded (mono-optic glasses, blocked presentation); thus, the participants were aware of the tested eye. Therefore, in this study, we repeated the same experiments but utilized stereoscopic glasses (intermixed at random presentation) to control the monocular and binocular vision, thus potentially eliminating awareness of the tested condition. The stimuli consisted of a central vertically oriented Gabor target and high-contrast Gabor flankers positioned in two configurations (orthogonal or collinear) with target–flanker separations of either two or three wavelengths (λ), presented at four different presentation times (40, 80, 120, and 200 ms). The results indicate that when utilizing stereoscopic glasses and mixing the testing conditions, the BS is normal, raising the possibility that awareness may be involved. Full article
(This article belongs to the Special Issue From Visual Perception to Consciousness)
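
The factor of about 1.4 mentioned in this abstract is the quadratic-summation prediction; a tiny illustrative calculation (arbitrary sensitivity units, equal eyes assumed) is shown below.

```python
# Minimal sketch: quadratic summation of two monocular sensitivities.
import math

def quadratic_summation(s_left: float, s_right: float) -> float:
    """Predicted binocular sensitivity under quadratic summation."""
    return math.sqrt(s_left ** 2 + s_right ** 2)

s = 10.0                                   # equal monocular sensitivities (arbitrary units)
print(quadratic_summation(s, s) / s)       # -> 1.414..., the classic ~1.4 BS factor
```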

20 pages, 8781 KB  
Article
A Virtual View Acquisition Technique for Complex Scenes of Monocular Images Based on Layered Depth Images
by Qi Wang and Yan Piao
Appl. Sci. 2024, 14(22), 10557; https://doi.org/10.3390/app142210557 - 15 Nov 2024
Viewed by 1293
Abstract
With the rapid development of stereoscopic display technology, generating high-quality virtual view images has become a key problem in applications such as 3D video, 3D TV and virtual reality. The traditional virtual view rendering technology maps the reference view into the virtual view by means of 3D transformation, but when the background area is occluded by the foreground object, the content of the occluded area cannot be inferred. To solve this problem, we propose a virtual view acquisition technique for complex scenes of monocular images based on a layered depth image (LDI). Firstly, the depth discontinuities at the edge of the occluded area are reasonably grouped by using the multilayer representation of the LDI, and the depth edge of the occluded area is inpainted by the edge inpainting network. Then, a generative adversarial network (GAN) is used to fill in the color and depth information of the occluded area, and an inpainted virtual view is generated. Finally, the GAN is used to optimize the color and depth of the virtual view, and a high-quality virtual view is generated. Experiments demonstrate the effectiveness of the proposed method and its applicability to complex scenes. Full article
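
As a hypothetical illustration of the layered depth image idea (not the authors' implementation), each pixel can hold several depth-ordered colour samples, so content occluded by a foreground object remains available when a virtual view is synthesised.

```python
# Minimal sketch: a layered depth image (LDI) pixel holding multiple (depth, colour) samples.
from dataclasses import dataclass, field

@dataclass
class LDIPixel:
    layers: list = field(default_factory=list)   # (depth, (r, g, b)) sorted front-to-back

    def insert(self, depth: float, colour: tuple) -> None:
        self.layers.append((depth, colour))
        self.layers.sort(key=lambda layer: layer[0])

    def visible(self) -> tuple:
        """Colour of the nearest layer (what a single-layer image would show)."""
        return self.layers[0][1] if self.layers else (0, 0, 0)

    def occluded(self) -> list:
        """Hidden layers that a virtual-view shift could reveal."""
        return self.layers[1:]

px = LDIPixel()
px.insert(4.0, (200, 30, 30))   # background wall sample
px.insert(1.5, (30, 200, 30))   # foreground object sample
print(px.visible(), px.occluded())
```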

13 pages, 4104 KB  
Article
Abutment Tooth Formation Simulator for Naked-Eye Stereoscopy
by Rintaro Tomita, Akito Nakano, Norishige Kawanishi, Noriyuki Hoshi, Tomoki Itamiya and Katsuhiko Kimoto
Appl. Sci. 2024, 14(18), 8367; https://doi.org/10.3390/app14188367 - 17 Sep 2024
Cited by 1 | Viewed by 2013
Abstract
Virtual reality is considered to be useful in improving procedural skills in dental education, but systems using wearable devices such as head-mounted displays (HMDs) have many problems in terms of long-term use and hygiene, and the accuracy of stereoscopic viewing at close ranges is inadequate. We developed and validated an abutment tooth formation simulator that utilizes a spatial reality display (SRD), a display that precisely reproduces 3D space with naked-eye stereoscopic viewing at close range. A 3D-CG (three-dimensional computer graphics) dental model that can be cut in real time was output to the SRD, and an automatic quantitative scoring function was also implemented by comparing the cutting results with exemplars. Dentists in the department of fixed prosthodontics performed cutting operations on both a 2D display-based simulator and an SRD-based simulator and completed a 5-point rating feedback survey. The measurements obtained with the SRD-based simulator were significantly more accurate than those obtained with the 2D display-based simulator. The SRD-based abutment tooth formation simulator received a positive technical evaluation and high dentist satisfaction (4.37), suggesting its usefulness and raising expectations regarding its future application in dental education. Full article
(This article belongs to the Special Issue Digital Dentistry and Oral Health)
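
The simulator's actual scoring rule is not detailed in this abstract; the sketch below shows one hypothetical way an automatic quantitative score could compare a trainee's cut surface against an exemplar sampled on the same grid (the tolerance and grid are invented for illustration).

```python
# Minimal sketch (hypothetical scoring rule, not the simulator's): score a cut
# surface by its mean absolute deviation from an exemplar depth map.
import numpy as np

def cutting_score(cut_depth: np.ndarray, exemplar_depth: np.ndarray,
                  tolerance_mm: float = 0.5) -> float:
    """100 when the cut matches the exemplar, falling linearly with mean deviation."""
    deviation = np.mean(np.abs(cut_depth - exemplar_depth))
    return float(max(0.0, 100.0 * (1.0 - deviation / tolerance_mm)))

exemplar = np.full((64, 64), 1.2)     # target reduction depth in mm (toy value)
trainee = exemplar + np.random.default_rng(1).normal(scale=0.1, size=(64, 64))
print(f"score = {cutting_score(trainee, exemplar):.1f}")
```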

29 pages, 923 KB  
Review
Light Field Visualization for Training and Education: A Review
by Mary Guindy and Peter A. Kara
Electronics 2024, 13(5), 876; https://doi.org/10.3390/electronics13050876 - 24 Feb 2024
Cited by 7 | Viewed by 2912
Abstract
Three-dimensional visualization technologies such as stereoscopic 3D, virtual reality, and augmented reality have already emerged in training and education; however, light field displays are yet to be introduced in such contexts. In this paper, we characterize light field visualization as a potential candidate for the future of training and education, and compare it to other state-of-the-art 3D technologies. We separately address preschool and elementary school education, middle and high school education, higher education, and specialized training, and assess the suitability of light field displays for these utilization contexts via key performance indicators. This paper exhibits various examples for education, and highlights the differences in terms of display requirements and characteristics. Additionally, our contribution analyzes the scientific-literature-related trends of the past 20 years for 3D technologies, and the past 5 years for the level of education. While the acquired data indicates that light field is still lacking in the context of education, general research on the visualization technology is steadily rising. Finally, we specify a number of future research directions that shall contribute to the emergence of light field visualization for training and education. Full article

14 pages, 4325 KB  
Review
Recent Progress in True 3D Display Technologies Based on Liquid Crystal Devices
by Shuxin Liu, Yan Li and Yikai Su
Crystals 2023, 13(12), 1639; https://doi.org/10.3390/cryst13121639 - 27 Nov 2023
Cited by 10 | Viewed by 4598
Abstract
In recent years, the emergence of virtual reality (VR) and augmented reality (AR) has revolutionized the way we interact with the world, leading to significant advancements in 3D display technology. However, some of the currently employed 3D display techniques rely on stereoscopic 3D display methods, which may lead to visual discomfort due to the vergence-accommodation conflict. To address this issue, several true 3D technologies have been proposed as alternatives, including multi-plane displays, holographic displays, super multi-view displays, and integral imaging displays. In this review, we focus on planar liquid crystal (LC) devices for different types of true 3D display applications. Given the excellent optical performance of LC devices, we believe that they hold great potential for true 3D displays. Full article
(This article belongs to the Special Issue Liquid Crystals and Devices)

19 pages, 5498 KB  
Article
Integral Imaging Display System Based on Human Visual Distance Perception Model
by Lijin Deng, Zhihong Li, Yuejianan Gu and Qi Wang
Sensors 2023, 23(21), 9011; https://doi.org/10.3390/s23219011 - 6 Nov 2023
Cited by 4 | Viewed by 2802
Abstract
In an integral imaging (II) display system, the self-adjustment ability of the human eye can result in blurry observations when viewing 3D targets outside the focal plane within a specific range. This can impact the overall imaging quality of the II system. This research examines the visual characteristics of the human eye and analyzes the path of light from a point source to the eye in the process of capturing and reconstructing the light field. Then, an overall depth of field (DOF) model of II is derived based on the human visual system (HVS). On this basis, an II system based on the human visual distance (HVD) perception model is proposed, and an interactive II display system is constructed. The experimental results confirm the effectiveness of the proposed method. The display system improves the viewing distance range, enhances spatial resolution and provides better stereoscopic display effects. When comparing our method with three other methods, it is clear that our approach produces better results in optical experiments and objective evaluations: the cumulative probability of blur detection (CPBD) value is 38.73%, the structural similarity index (SSIM) value is 86.56%, and the peak signal-to-noise ratio (PSNR) value is 31.12. These values align with subjective evaluations based on the characteristics of the human visual system. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
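
The SSIM and PSNR figures quoted above are standard objective image-quality metrics; a minimal sketch with synthetic images is shown below (CPBD needs a dedicated sharpness-estimation implementation and is omitted).

```python
# Minimal sketch: SSIM and PSNR between a reference view and a reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                                   # stand-in reference view
reconstructed = np.clip(reference + rng.normal(scale=0.02, size=(128, 128)), 0, 1)

psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
ssim = structural_similarity(reference, reconstructed, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```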

22 pages, 17347 KB  
Article
How Do Background and Remote User Representations Affect Social Telepresence in Remote Collaboration?: A Study with Portal Display, a Head Pose-Responsive Video Teleconferencing System
by Seongjun Kang, Gwangbin Kim, Kyung-Taek Lee and SeungJun Kim
Electronics 2023, 12(20), 4339; https://doi.org/10.3390/electronics12204339 - 19 Oct 2023
Cited by 2 | Viewed by 2283
Abstract
This study presents Portal Display, a screen-based telepresence system that mediates the interaction between two distinct spaces, each using a single display system. The system synchronizes the users’ viewpoint with their head position and orientation to provide stereoscopic vision through this single monitor. This research evaluates the impact of graphically rendered and video-streamed backgrounds and remote user representations on social telepresence, usability, and concentration during conversations and collaborative tasks. Our results indicate that the type of background has a negligible impact on these metrics. However, point cloud streaming of remote users significantly improves social telepresence, usability, and concentration compared with graphical avatars. This study implies that Portal Display can operate more efficiently by rendering the background graphically and concentrating higher-resolution 3D point cloud streaming on the narrower region that contains the remote user representation. This configuration may be especially advantageous for applications where the remote user’s background is not essential to the task, potentially enhancing social telepresence. Full article
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)

20 pages, 10006 KB  
Article
An Effective YOLO-Based Proactive Blind Spot Warning System for Motorcycles
by Ing-Chau Chang, Chin-En Yen, Ya-Jing Song, Wei-Rong Chen, Xun-Mei Kuo, Ping-Hao Liao, Chunghui Kuo and Yung-Fa Huang
Electronics 2023, 12(15), 3310; https://doi.org/10.3390/electronics12153310 - 2 Aug 2023
Cited by 3 | Viewed by 4439
Abstract
Interest in advanced driver assistance systems (ADAS) has been booming in recent years. One of the most prominent ADAS features is blind spot detection (BSD), which uses radar sensors or cameras to detect vehicles in the blind spot area and alerts the driver to avoid a collision when changing lanes. However, this kind of BSD system fails to notify nearby vehicle drivers in this blind spot of the possible collision. The goal of this research is to design a proactive bus blind spot warning (PBSW) system that will immediately notify motorcyclists when they enter the blind spot or the area of the inner wheel difference of a target vehicle, i.e., a bus. This will increase the real-time functionality of BSD and can have a significant impact on enhancing motorcyclist safety. The proposed hardware is placed on the motorcycle and consists of a Raspberry Pi 3B+ and a dual-lens stereo camera. We use the dual-lens camera to capture and create stereoscopic images, then transmit the images from the Raspberry Pi 3B+ to an Android phone via Wi-Fi and to a cloud server using a cellular network. At the cloud server, we use the YOLOv4 image recognition model to identify the position of the rear-view mirror of the bus and use the lens imaging principle to estimate the distance between the bus and the motorcyclist. Finally, the cloud server returns the estimated distance to the PBSW app on the Android phone. According to the received distance value, the app will display the visible area/blind spot, the area of the inner wheel difference of the bus, the position of the motorcyclist, and the estimated distance between the motorcycle and the bus. Hence, as soon as the motorcyclist enters the blind spot of the bus or the area of the inner wheel difference, the app will alert the motorcyclist immediately to enhance their real-time safety. We have evaluated this PBSW system in a real-life implementation. The results show that the average position accuracy of the rear-view mirror is 92.82%, the error rate of the estimated distance between the rear-view mirror and the dual-lens camera is lower than 0.2%, and the average round-trip delay between the Android phone and the cloud server is about 0.5 s. To the best of our knowledge, the proposed system is one of the few PBSW systems that can be applied in the real world to protect motorcyclists from the danger of entering the blind spot or the area of the inner wheel difference of the target vehicle in real time. Full article
(This article belongs to the Special Issue Advances and Challenges in Future Networks)
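
The "lens imaging principle" distance estimate can be illustrated with the pinhole relation distance = f · H / h, where f is the focal length in pixels, H the real height of the detected rear-view mirror and h its height in the image; the numbers below are illustrative assumptions, not the system's calibration.

```python
# Minimal sketch (assumed geometry, not the PBSW system's code): distance from
# the apparent size of an object of known real height.
def estimate_distance_m(focal_length_px: float, real_height_m: float,
                        pixel_height: float) -> float:
    """Pinhole model: distance = focal_length * real_height / imaged_height."""
    return focal_length_px * real_height_m / pixel_height

# Illustrative only: a 0.30 m mirror imaged 45 px tall by a camera with an
# effective focal length of 1200 px is roughly 8 m away.
print(f"{estimate_distance_m(1200.0, 0.30, 45.0):.1f} m")
```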

14 pages, 2826 KB  
Article
Study of Root Canal Length Estimations by 3D Spatial Reproduction with Stereoscopic Vision
by Takato Tsukuda, Noriko Mutoh, Akito Nakano, Tomoki Itamiya and Nobuyuki Tani-Ishii
Appl. Sci. 2023, 13(15), 8651; https://doi.org/10.3390/app13158651 - 27 Jul 2023
Cited by 4 | Viewed by 2551
Abstract
Extended Reality (XR) applications are considered useful for skill acquisition in dental education. In this study, we examined the functionality and usefulness of an application called “SR View for Endo” that measures root canal length using a Spatial Reality Display (SRD) capable of naked-eye stereoscopic viewing. Three-dimensional computer graphics (3DCG) data of dental models were obtained and output to both the SRD and conventional 2D display devices. Forty dentists working at the Kanagawa Dental University Hospital measured root canal length using both types of devices and provided feedback through a questionnaire. Statistical analysis using one-way analysis of variance evaluated the measurement values and time, while multivariate analysis assessed the relationship between questionnaire responses and measurement time. There was no significant difference in the measurement values between the 2D device and the SRD, but there was a significant difference in measurement time. Furthermore, a negative correlation was observed between the frequency of device usage and the extended measurement time of the 2D device. Measurements using the SRD demonstrated higher accuracy and shorter measurement times compared to the 2D device, raising expectations for its application in dental education and clinical training. However, a certain percentage of participants experienced symptoms resembling motion sickness associated with virtual reality (VR). Full article
(This article belongs to the Special Issue 3D Scene Understanding and Object Recognition)

11 pages, 2120 KB  
Article
Comparison of Smoothness, Movement Speed and Trajectory during Reaching Movements in Real and Virtual Spaces Using a Head-Mounted Display
by Norio Kato, Tomoya Iuchi, Katsunobu Murabayashi and Toshiaki Tanaka
Life 2023, 13(8), 1618; https://doi.org/10.3390/life13081618 - 25 Jul 2023
Cited by 5 | Viewed by 2183
Abstract
Virtual reality is used in rehabilitation and training simulators. However, whether movements in real and virtual spaces are similar is yet to be elucidated. The study aimed to examine the smoothness, trajectory, and velocity of participants’ movements during task performance in real and virtual space. Ten participants performed the same motor task in these two spaces, reaching for targets placed at six distinct positions. A head-mounted display (HMD) presented the virtual space, which simulated the real space environment. The smoothness of movements during the task was quantified and analysed using normalised jerk cost. Trajectories were analysed using the actual trajectory length normalised by the shortest distance to the target, and velocity was analysed using the time of peak velocity. The analysis results showed no significant differences in smoothness and peak velocity time between the two spaces. No significant differences were found in the placement of the six targets between the two spaces. Conversely, significant differences were observed in trajectory length ratio and peak velocity time, albeit with small effect sizes. This outcome can potentially be attributed to the fact that the virtual space was presented from a first-person perspective using an HMD capable of presenting stereoscopic images through binocular parallax. Participants were able to obtain physiological depth information and directly perceive the distance between the target and the effector, such as a hand or a controller, in virtual space, similar to real space. The results suggest that training in virtual space using HMDs with binocular disparity may be a useful tool, as it allows the simulation of a variety of different environments. Full article
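
One common formulation of the normalised jerk cost (assumed here; the paper may normalise differently) is sketched below for a sampled reach trajectory, applied to a toy minimum-jerk reach; lower values indicate smoother movement.

```python
# Minimal sketch: dimensionless normalised jerk cost of a sampled 3D trajectory.
import numpy as np

def normalised_jerk_cost(pos: np.ndarray, dt: float) -> float:
    """pos: (T, 3) positions sampled every dt seconds."""
    jerk = np.gradient(np.gradient(np.gradient(pos, dt, axis=0), dt, axis=0), dt, axis=0)
    duration = dt * (len(pos) - 1)
    path_length = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    jerk_integral = np.sum(np.sum(jerk ** 2, axis=1)) * dt
    return float(np.sqrt(0.5 * jerk_integral * duration ** 5 / path_length ** 2))

t = np.linspace(0.0, 1.0, 101)
s = 10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5           # minimum-jerk position profile
pos = np.outer(s, np.array([0.30, 0.10, 0.0]))       # toy reach to a target ~0.32 m away
print(f"NJC = {normalised_jerk_cost(pos, dt=0.01):.2f}")
```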

20 pages, 6490 KB  
Article
Inducing Perceptual Dominance with Binocular Rivalry in a Virtual Reality Head-Mounted Display
by Julianne Blignaut, Martin Venter, David van den Heever, Mark Solms and Ivan Crockart
Math. Comput. Appl. 2023, 28(3), 77; https://doi.org/10.3390/mca28030077 - 17 Jun 2023
Cited by 2 | Viewed by 3214
Abstract
Binocular rivalry is the perceptual dominance of one visual stimulus over another. Conventionally, binocular rivalry is induced using a mirror-stereoscope—a setup involving mirrors oriented at an angle to a display. The respective mirror planes fuse competing visual stimuli in the observer’s visual field by projecting the stimuli through the stereoscope to the observed visual field. Since virtual-reality head-mounted displays fuse dichoptic vision in a similar way, and since virtual-reality head-mounted displays are more versatile and more readily available than mirror stereoscopes, this study investigated the efficacy of using a virtual-reality headset (Oculus Rift-S) as an alternative to using a mirror stereoscope to study binocular rivalry. To evaluate the validity of using virtual-reality headsets to induce visual dominance/suppression, two identical experimental sequences—one using a conventional mirror stereoscope and one using a virtual-reality headset—were compared and evaluated. The study used Gabor patches at different orientations to induce binocular rivalry and to evaluate the efficacy of the two experiments. Participants were asked to record all instances of perceptual dominance (complete suppression) and non-dominance (incomplete suppression). Independent sample t-tests confirmed that binocular rivalry with stable vergence was successfully induced for the mirror-stereoscope experiment (t = −4.86; p ≤ 0.0001) and the virtual-reality experiment (t = −9.41; p ≤ 0.0001). Using ANOVA to compare Gabor patch pairs of gratings at +45°/−45° orientations presented in both visual fields, gratings at 0°/90° orientations presented in both visual fields, and mixed gratings (i.e., unconventional grating pairs) presented in both visual fields, the performance of the two experiments was evaluated by comparing observation duration in seconds (F = 0.12; p = 0.91) and the alternation rate per trial (F = 8.1; p = 0.0005). The differences between the stimulus groups were not statistically significant for the observation duration but were significantly different based on the alternation rates per trial. Moreover, ANOVA also showed that the dominance durations (F = 114.1; p < 0.0001) and the alternation rates (F = 91.6; p < 0.0001) per trial were significantly different between the mirror-stereoscope and the virtual-reality experiments, with the virtual-reality experiment showing an increase in alternation rate and a decrease in observation duration. The study was able to show that a virtual-reality head-mounted display can be used as an effective and novel alternative to induce binocular rivalry, but there were some differences in visual bi-stability between the two methods. This paper discusses the experimental measures taken to minimise piecemeal rivalry and to evaluate perceptual dominance between the two experimental designs. Full article
(This article belongs to the Special Issue Current Problems and Advances in Computational and Applied Mechanics)
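
The independent-sample t-tests and ANOVA reported above are standard comparisons; a minimal sketch with made-up dominance-duration samples (not the study's data) is shown below.

```python
# Minimal sketch: comparing dominance durations between the two rigs with
# an independent-sample t-test and a one-way ANOVA (toy numbers only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mirror_durations = rng.normal(loc=2.4, scale=0.6, size=30)   # s, mirror stereoscope (toy)
vr_durations = rng.normal(loc=1.8, scale=0.5, size=30)       # s, VR headset (toy)

t_stat, p_t = stats.ttest_ind(mirror_durations, vr_durations)
f_stat, p_f = stats.f_oneway(mirror_durations, vr_durations)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"F = {f_stat:.2f}, p = {p_f:.4f}")
```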
