Highlights
What are the main findings?
- Physical props significantly enhance presence and enjoyment but increase hand tremor, impairing task performance in high-skill VR archery.
- Physics-associated visual feedback moderates the negative impact of hand tremor on performance and synergistically enhances flow experience when combined with physical props.
What are the implications of the main findings?
- VR designers should implement multimodal integration where physical props are paired with congruent visual feedback to optimize both engagement and performance.
- The identified moderated pathway provides a framework for balancing experiential benefits against performance demands in high-skill VR applications.
Abstract
Most existing virtual reality exergames rely on generic VR devices, which can limit the physical exertion in VR-based exercises. In contrast, physical props can enhance exercise intensity, yet their impact on users’ performance and experience remains understudied, particularly in skill-based tasks. Meanwhile, physical props offer richer tactile and kinesthetic feedback, which, combined with the visual effects of head-mounted displays, presents a potential solution for improving user experience in VR. To explore this, we developed a sensor-driven experimental framework for investigating high-skill VR tasks. By integrating vision sensors with standard VR devices, we constructed a VR archery system that enables objective quantification of motor performance. Leveraging this framework, we investigated the effects of physical props and physics-associated visual feedback on players’ performance and experience through an experiment involving 33 participants. By objectively quantifying performance, we revealed a dual-pathway mechanism: physical props significantly increased hand tremor, which in turn impaired aiming accuracy, but this negative effect was effectively moderated by time and by physics-associated visual feedback that enabled real-time sensorimotor compensation. Although complex physical props reduced task performance, they substantially enhanced enjoyment and presence, demonstrating a synergistic effect on users’ flow experience when combined with physics-associated visual feedback. These findings elucidate the complex interplay between physical prop interfaces and visual feedback in high-skill VR tasks, providing valuable insights for designing VR experiences that balance performance requirements and engagement enhancement.
1. Introduction
1.1. Background
Virtual reality (VR) interaction often relies on generic devices like standard controllers, which significantly limit the intensity, realism, and fidelity of physical activities within virtual environments, particularly in high-skill domains []. While these devices offer versatility, they fail to replicate the nuanced movements and rich tactile sensations required for real-world skilled tasks, creating a gap in functional fidelity [].
To address this limitation, researchers have explored specialized haptic interfaces and physical props []. Evidence suggests that realistic interfaces, such as physical props used in sports simulations, can provide superior kinesthetic feedback, potentially enhancing motor learning and performance in VR []. However, the impact of physical props on high-skill tasks presents a complex trade-off: while they may enhance sensory engagement, they could simultaneously increase motor control demands—a tension that remains inadequately understood. While there is already considerable research on physical props in virtual reality [], few studies have explored the multimodal combination of physical props and visual feedback associated with them.
The multimodal nature of VR further complicates this relationship. Users inherently integrate sensory information from multiple channels [,], making the interplay between haptic and visual cues particularly important. Physics-associated visual feedback—where visual cues are dynamically tied to the physical state of a prop—represents a promising approach to enhance interaction realism. Nevertheless, existing evidence on multimodal integration remains conflicted. Some studies report synergistic benefits for both experience and performance [,,,], while others indicate negligible effects or even cognitive overload [,,,]. This inconsistency may stem from a predominant focus on main effects, overlooking the underlying psychophysiological mechanisms—such as sensorimotor integration and compensatory motor control—in high-skill task contexts.
Archery serves as an ideal paradigm for investigating VR interaction and immersion. It inherently combines vibrotactile, force, and kinesthetic feedback, demanding high levels of precision, force control, and adaptation [,]. Its established use in both VR training [,,,] and entertainment [] provides a robust foundation for this study. Specialized hardware interfaces are pivotal for creating realistic VR archery experiences. Previous research has explored diverse technological pathways. For instance, some systems use a real recurve bow integrated with multiple sensors (e.g., reed sensors, laser, Kinect) to detect drawing and releasing, but at the cost of increased system complexity and setup []. Others employ high-precision custom devices with strain gauges, focusing on physical accuracy over immersive experience []. Conversely, approaches using data gloves or purely virtual interactions avoid physical props altogether, thereby lacking realistic tactile resistance feedback []. Crucially, there is a lack of validation regarding how these physical prop interfaces and their associated sensor modalities concretely impact user performance and experience.
To systematically investigate the impact of physical prop interfaces and visual feedback in high-skill VR tasks, this study establishes a sensor-driven experimental framework for VR archery—a task requiring precise motor control, force management, and stability. Within this framework, we address three research objectives:
- Develop and validate a virtual reality archery framework with integrated sensors that can objectively quantify motion performance through data acquisition.
- Investigate the underlying mechanisms through which physical props affect both performance and user experience in high-skill VR tasks.
- Explore how physics-associated visual feedback interacts with physical prop interfaces, and whether it can mitigate potential performance decrements while enhancing the subjective experience.
To answer these questions, we conducted a factorial experiment (Input interfaces: Physical Prop vs. Standard Controller × Visual Feedback: Physics-associated vs. Static) with 33 participants. The framework enabled objective quantification of motor performance through sensor-derived measures (hit deviation, hand tremor, completion time), complemented by multi-faceted subjective experience assessments (presence, flow, competence). Crucially, we employed linear mixed-effects modeling to unravel the pathway from the physical prop interfaces to hand tremor to performance accuracy, and to test the moderating role of visual feedback in this chain. This work makes three key contributions.
- We establish a sensor-driven experimental framework that objectively quantifies motor performance in VR archery tasks, offering a potential pathway for methodological refinement in XR interaction research.
- Through objective quantitative metrics, this study provides a nuanced, mechanism-based explanation for the performance-experience trade-off associated with physical props in high-skill VR tasks.
- This study elucidates how physics-associated visual feedback moderates the impact of motor instability on performance and experience, offering principled insights for designing effective and engaging multimodal VR interactions.
1.2. Related Work
1.2.1. Application Research of Physical Props in VR
Physical prop interfaces can be implemented through various methods. Beyond simulating vibrations via handheld controllers, recent research has proposed diverse approaches to enhance the realism and interactivity of virtual experiences. As indicated in the background, the use of physical props is a prominent approach to enhancing kinesthetic feedback in VR. In this section, we review relevant work in this area in detail.
Cheng explored the use of passive props to augment VR experiences [], proposing that even non-actuated haptic props can heighten immersion. Their results indicated that passive props, despite lacking active feedback, improve users’ perception of virtual environments through physical presence, yielding more enjoyable and authentic experiences compared to scenarios without kinesthetic feedback.
White further investigated the effects of weighted physical prop interfaces on user experience and task performance []. Their experiments demonstrated that weighted props outperform conventional controllers in enhancing immersion, flow, and gameplay satisfaction, suggesting that physical weight alone can significantly elevate virtual experiences for specific tasks.
Strandholt introduced a redirected mapping technique to address insufficient haptic feedback []. By warping the relationship between physical and virtual tools, researchers enabled physical props to simulate impact and resistance when interacting with virtual objects. Their study revealed that redirected mapping significantly enhances interaction realism without compromising precision, underscoring the importance of mapping accuracy between virtual tools and physical props for user experience.
Auda conducted user experiments to examine the alignment between virtual objects and physical props of varying sizes []. Their findings suggest that precise size matching between virtual and physical objects is unnecessary for certain tasks. Most recently, Mori demonstrated that the physical properties of props play a critical role in improving immersion and applicability []. Collectively, existing studies confirm that haptic feedback via physical props effectively enhances immersion in VR. Different feedback modalities—such as length, weight, and active versus passive feedback—may yield varied outcomes across virtual tasks. While implementation specifics differ (e.g., precision in virtual-physical object alignment or feedback types), these works collectively validate the pivotal role of physical props in VR interaction and offer multiple pathways for advancing related research. However, none of the above studies explore the multimodal combination of physical props and visual feedback associated with them.
1.2.2. The Impact of Simulated Haptic and Visual Feedback in VR
To address the research gap concerning the interaction between haptic interfaces and visual feedback in high-skill tasks, it is essential to review existing evidence on multimodal integration.
Early foundational work [] found that augmented visual feedback accelerates learning, particularly in coordination tasks within virtual environments. However, some argued that frequent augmented visual feedback in procedural skill learning does not improve outcomes and may even hinder learning []. This could stem from the cognitive overload caused by excessive sensory information [], underscoring the importance of feedback intensity modulation in VR, especially across varying task demands.
The congruency between sensory modalities has emerged as a critical factor. Williams examined the effects of incongruent visual feedback (relative to real-world operations) in VR racing simulations []. Their results revealed that inconsistent visual feedback primarily alters user perception rather than directly impairing task performance. Gibbs conducted experiments where participants interacted with a virtual ball bouncing on a virtual rod []. They found that simultaneous visual and haptic feedback amplified the sense of presence, demonstrating the synergistic benefits of multimodal fusion.
Conversely, Gao developed a virtual pottery system [], demonstrating that haptic feedback combined with high-fidelity visual feedback markedly improves not only presence but also operational accuracy, compared to bare-hand interactions. Collectively, these studies emphasize the critical role of congruency between visual and haptic feedback in optimizing VR experiences and performance.
Despite the widespread acknowledgment of haptic feedback’s utility in virtual environments [], discrepancies persist across studies regarding feedback intensity, modality types, and the efficacy of multimodal integration. The effectiveness of haptic feedback appears contingent not only on its presence but also on task complexity, feedback intensity, and perceptual congruency.
The efficacy of feedback is further moderated by task complexity. Supporting this view, Van demonstrated that simulated vibrotactile feedback significantly enhanced performance in complex tasks under latency but had negligible effects on simple tasks []. Likewise, the intensity of the feedback itself is a key variable; Tanacar highlighted that higher-intensity haptic rendering could lead to lower error rates [], indicating that the potency of the signal influences performance outcomes.
Finally, moving beyond performance and subjective experience, some research has begun to explore psychophysiological mechanisms. Radhakrishnan investigated the relationship between haptic feedback [], fine motor skill training, and physiological arousal, suggesting that haptic feedback may influence arousal levels even in the absence of significant changes in self-reported measures like presence or task load.
In summary, despite the acknowledged utility of multimodal feedback, notable discrepancies persist across studies. The literature suggests that its effectiveness is contingent on a nuanced interplay of factors, including task complexity, feedback intensity, and, crucially, perceptual congruency—a finding that directly informs the design of our experiment within the high-skill context of archery.
2. Material and Methods
2.1. Design of the Physical Prop Interfaces
To achieve our experimental goals, we designed a VR archery simulation application that provides users with a realistic simulated archery experience. The application was developed using the Unity3D engine and runs on the Meta Quest 3 platform. It offers two main operating modes. The first uses standard controllers, consistent with common VR archery games: one controller is used to control the position of the bow, while the other controller’s grab key is used to perform the action of pulling and releasing the bowstring.
The second approach is to use a physical props interface based on simulated bows and arrows, as shown in Figure 1. Users can directly operate a simulated bow, and the location, shape, and function of this simulated bow will be synchronized and mapped in real time to the virtual bow in the virtual space.
Figure 1.
Two types of interaction devices.
2.1.1. The Hardware Architecture
The system’s hardware architecture extends beyond the VR head-mounted display and standard controllers. Additional components critical to the physical prop mode include a simulated physical bow (660 g, 10 lbs), an ESP32S3Cam microcontroller board, an OV2640 camera module, and a battery component. Figure 2 illustrates the hardware configuration. To ensure spatial consistency between the real-world bow and its virtual counterpart, the standard controllers are securely affixed to the bow’s frame as positioning trackers, with their relative positions mapped in the virtual environment. Additionally, an OV2640 camera, integrated with an ESP32S3Cam microcontroller, is mounted at the center of the physical bow and oriented toward the user for real-time hand motion capture. The entire module is powered by an independent battery component.
Figure 2.
The structure of physical prop interfaces for virtual archery.
The video stream is transmitted via the Real-Time Streaming Protocol (RTSP) to a host platform, where a task-specific neural network processes the data to identify hand posture and spatial positioning. Predicted data frames undergo temporal smoothing and anti-jitter filtering before being relayed back to the client through the Transmission Control Protocol (TCP), guiding corresponding actions of the virtual bow.
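The temporal smoothing step can be sketched as a simple sliding-window mean filter applied to the per-frame predictions. The window size used here is an illustrative assumption, as the filter parameters are not specified in the text.

```python
from collections import deque

class MeanFilter:
    """Sliding-window mean filter for smoothing noisy per-frame predictions
    (e.g., the regressed draw distance) before they are relayed to the client."""

    def __init__(self, window_size=5):
        # Window size of 5 frames is an assumed value for illustration.
        self.window = deque(maxlen=window_size)

    def update(self, value):
        """Add the latest prediction and return the mean of the recent window."""
        self.window.append(value)
        return sum(self.window) / len(self.window)
```

Each incoming frame's prediction passes through `update`, so isolated jittery outliers are attenuated at the cost of a small additional latency proportional to the window length.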
2.1.2. Gesture Recognition
To enable interaction by capturing users’ natural gestures, we developed a gesture recognition system as shown in Figure 3. This system was designed to accurately track hand positions during bow-drawing and release phases. The technical pipeline consisted of three main stages: camera calibration, synthetic dataset generation, and model training and deployment.
Figure 3.
The structure of Gesture recognition. The raw data stream from the MLP models (shown as the blue line) contains inherent noise and jitter. To address this, we apply a mean filter, which produces the stabilized signal (shown as the red line).
Camera Calibration: The OV2640 camera module was calibrated using a standard checkerboard pattern to obtain its intrinsic parameters, distortion coefficients, and resolution. These calibrated parameters were subsequently used both in the synthetic data generation process and to correct image distortion in the live video stream.
Synthetic Dataset Generation: To train the recognition model, we created a custom synthetic 3D hand joint pose dataset. We first collected several common archery drawing hand postures (e.g., Mediterranean release, Mongolian release) and release states, recording their 3D joint spatial coordinates. These were categorized into two classes: “Closed” for drawing postures and “Open” for other states. A virtual camera, parameterized with the real camera’s calibrated intrinsics, distortion coefficients and resolution, was positioned at the origin in 3D space. We then randomly sampled hand poses, distances, and rotations, ensuring the hand remained in front of the virtual camera’s field of view. The 3D joint coordinates were projected onto the 2D image plane using the virtual camera’s parameters. The Euclidean distance between a predefined index finger joint and the camera was computed as the ground-truth depth. This process generated a rich synthetic dataset containing tuples of (3D joint coordinates, virtual transformation, 2D projected joint coordinates, true distance, gesture state).
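The projection step of this synthetic data pipeline can be sketched as follows. For brevity, this minimal version assumes an ideal pinhole camera at the origin and omits the lens distortion model that the calibrated parameters would supply; function names are illustrative.

```python
import numpy as np

def project_joints(joints_3d, K):
    """Project 3D hand-joint coordinates (N, 3), expressed in the camera
    frame, onto the 2D image plane using pinhole intrinsics K (3x3).
    Lens distortion is omitted here for brevity."""
    pts = (K @ joints_3d.T).T          # homogeneous image coordinates (N, 3)
    return pts[:, :2] / pts[:, 2:3]    # perspective divide -> pixel coords (N, 2)

def ground_truth_depth(index_joint_3d):
    """Euclidean distance from the predefined index-finger joint to the
    camera at the origin, used as the regression target."""
    return float(np.linalg.norm(index_joint_3d))
```

Sampled hand poses are transformed into the camera frame, projected with `project_joints` to obtain the 2D joint coordinates, and paired with `ground_truth_depth` to form the (2D joints, true distance, gesture state) training tuples.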
Model Training and Performance: Two independent Multilayer Perceptron (MLP) networks were trained on this dataset: one for hand posture classification (“Open”/“Closed”) and another for distance regression. The input to both networks was the vector of 2D joint coordinates from a single frame. We employed an 80:20 random split for training and testing. The distance regression model achieved a Mean Absolute Error (MAE) of less than 2 cm on the test set, which corresponds to a relative error of approximately 4% given the typical draw distance range of 45–50 cm. The binary posture classification model achieved an accuracy of approximately 95% on the test set. The model’s prediction latency is approximately 30 ms.
System Deployment and Workflow: In the operational system, the OV2640 camera, mounted at the center of the physical bow and oriented toward the user, captures real-time hand images at 20–25 frames per second. The video stream is transmitted via RTSP to a host platform. Each frame is processed by the trained MLP networks to simultaneously predict the hand posture state and the longitudinal draw distance. The predicted data frames undergo temporal smoothing and anti-jitter filtering before being relayed back to the VR client via TCP, where they drive the corresponding actions of the virtual bow (e.g., drawing the string, calculating draw force). The end-to-end latency of this pipeline, from image capture to action in VR, is approximately 100 ms. The resulting system usability scores (reported in Section 3.2.5) confirm that this computer vision-based approach maintains a good level of usability for the interactive task.
2.2. Physics-Associated Visual Feedback
The system implements a physics-associated visual feedback mechanism. In the absence of physics-associated visual feedback, a small static crosshair is rendered at the intersection of the arrow’s straight-line trajectory and the target plane (Figure 4a) to ensure objective aiming fairness.
Figure 4.
Two types of visual feedback. (a) The type of visual feedback with a small static crosshair. (b) The type of visual feedback with a dynamic circular reticle. (c) The trajectory line length and reticle radius are dynamically changing. The dashed lines illustrate the transition process of the trajectory extension and reticle scaling, with the blue-to-yellow gradient line representing the trajectory ray and the yellow ellipse indicating the reticle on the target.
When physics-associated visual feedback is enabled, during the aiming phase, the system generates a semitransparent trajectory line at the arrow tip, which dynamically extends along its straight path. This trajectory line is completely straight and unaffected by simulated gravity or wind. Simultaneously, a circular crosshair is displayed at the intersection of this trajectory line and the target plane (Figure 4b). The length of the trajectory line and the radius of the crosshair are dynamically adjusted according to the draw distance: as the draw distance increases, the trajectory line visually lengthens, while the radius of the crosshair decreases (Figure 4c). This real-time synchronization ensures that the visual feedback is consistent with the physical force feedback originating from the bow, hence the term physics-associated visual feedback. Specifically, the draw distance is mapped to the pull of the virtual bow, which in turn affects the simulated speed and trajectory of the arrow. The actual trajectory of the arrow is affected by gravity and wind, so the trajectory shown in the visual feedback is completely different from the actual trajectory of the arrow. Participants need to use their skills to predict the movement of the arrow, which is the main difficulty of this archery task.
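A minimal sketch of this mapping is given below. The linear interpolation and every numeric constant (draw-distance range, line lengths, reticle radii) are hypothetical, since the text specifies only the qualitative relationship: a longer draw yields a longer trajectory line and a smaller reticle.

```python
def physics_visual_feedback(draw_distance, d_min=0.10, d_max=0.50,
                            line_min=2.0, line_max=12.0,
                            reticle_max=0.30, reticle_min=0.06):
    """Map the sensed draw distance (m) to the trajectory-line length and
    reticle radius. All constants and the linear form are illustrative
    assumptions, not values taken from the system description."""
    t = (draw_distance - d_min) / (d_max - d_min)
    t = max(0.0, min(1.0, t))  # clamp normalized draw to [0, 1]
    line_length = line_min + t * (line_max - line_min)      # lengthens with draw
    reticle_radius = reticle_max - t * (reticle_max - reticle_min)  # shrinks with draw
    return line_length, reticle_radius
```

Because the same draw-distance signal also sets the simulated arrow speed, this single sensed quantity keeps the visual cues and the kinesthetic state of the prop in lockstep.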
When users engage in the aiming task, their visual focus centers on the reticle and target area rather than the virtual bow model. In static crosshair mode, users perceive only tactile feedback from the standard controller or physical prop. In dynamic feedback mode, while the reticle’s positional alignment matches the static version, users observe additional dynamic visual cues (e.g., trajectory line and reticle scaling) that are directly mapped to the physical prop’s real-time state. This multimodal integration establishes a closed-loop connection between visual and kinesthetic feedback, reinforcing sensory congruency and consistency.
2.3. Experiment
2.3.1. Study Design
Crossing the two interaction devices with the two visual feedback configurations yields four distinct interaction modalities. Under these conditions, we measured participants’ sense of presence, physical activity enjoyment (PAE), task load, self-efficacy, and other subjective metrics while simultaneously recording objective behavioral data in the virtual environment.
The experiment employed a factorial design with two independent variables: haptic interface type (standard controller [SC] vs. physical prop [PP]) and physics-associated visual feedback (absent vs. present). The four experimental conditions were (1) SC without VF (SC − VF), (2) SC with VF (SC + VF), (3) PP without VF (PP − VF), and (4) PP with VF (PP + VF). A balanced Latin square design [] was implemented to counterbalance the order of conditions across participants.
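The counterbalancing scheme can be generated programmatically. The sketch below implements the standard balanced Latin square construction for an even number of conditions, under which each condition immediately precedes every other condition equally often across rows; the condition labels are those defined above.

```python
def balanced_latin_square(conditions):
    """Return one row of condition orderings per sequence group.
    For an even number of conditions, each condition immediately
    precedes every other condition equally often across rows."""
    n = len(conditions)
    # First row follows the standard pattern 0, 1, n-1, 2, n-2, ...
    first = [0, 1]
    lo, hi = 2, n - 1
    take_hi = True
    while len(first) < n:
        if take_hi:
            first.append(hi)
            hi -= 1
        else:
            first.append(lo)
            lo += 1
        take_hi = not take_hi
    # Each subsequent row shifts every index by one (mod n).
    return [[conditions[(first[i] + r) % n] for i in range(n)]
            for r in range(n)]
```

With four conditions this yields four orderings; participants are then assigned to the rows in rotation.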
2.3.2. Participants
A total of 33 adult participants were recruited from the university population. Informed consent was obtained from all subjects involved in the study. The participants had a mean age of 23.8 (±4.4) years, with a minimum age of 20. Only one participant reported extensive prior experience in archery, while two had never used VR devices.
2.3.3. Procedures
We invited volunteers to participate in the experiment under four distinct conditions. Initially, we provided a detailed explanation of the simulated bow system’s interaction methods and task rules. Prior to participation, all volunteers underwent an informed consent process, which involved reading and signing an informed consent form.
The sequence in which volunteers experienced the different modes was determined using a Latin square design to control for order effects. Before commencing each mode, participants were required to familiarize themselves with the respective interaction method through practice sessions. These practice sessions allowed participants to take trial shots (a minimum of 10 practice shots was required, and participants could take more if they felt it necessary) to ensure they were comfortable and proficient with the interaction mechanics before proceeding to the formal experimental trials. Participants were also required to rest sufficiently between the practice and formal phases.
In the formal experimental phase, participants engaged in archery tasks using either two standard controllers or a physical prop. Their primary objective was to achieve maximum accuracy by aiming at the target while accounting for wind direction (left or right) and the gravitational effects on the arrow. Upon hitting the target or the plane on which the target is located, participants received a score ranging from 0 to 10 based on the proximity of the impact point to the center of the target. The wind direction changed after each shot, and participants could consult a user interface panel for real-time information about the current wind speed, wind direction, and score.
We introduced the wind factor primarily to enhance the task’s challenge and realism. To ensure experimental control, wind direction and speed were randomized during practice trials but followed a pre-defined sequence across the 10 formal shots for all participants. The sequence ensured a balanced distribution of wind directions (left and right) and intensity levels. This design kept the wind conditions for each shot number constant and identical across all experimental groups.
Each mode consisted of 10 formal shooting trials per participant to ensure that performance metrics reflected a stable representation of their abilities within that mode without inducing excessive fatigue. The number of tests was determined based on our preliminary experiments, where we found that more than 15 consecutive shots were generally difficult for the average adult to complete. Although we allowed participants to rest at any time, we chose a number of shots that would not cause excessive fatigue for most adults to avoid the impact of frequent interruptions on their subjective experience. Following the completion of the experimental trials, participants completed an evaluation questionnaire designed to assess their experience with each interaction mode.
2.3.4. Measures
To comprehensively evaluate players’ performance and experience, we collected both objective behavioral data and subjective questionnaire responses. The measurement variables are summarized in Table 1 and described in detail below.
Table 1.
Summary of Measurement Variables.
Objective Measures
To provide quantitative evidence of motor performance, we employed a sensor data acquisition approach. We derived three key performance metrics from this sensor-driven framework that capture different aspects of task execution in VR archery. These metrics were chosen to provide a comprehensive assessment of users’ motor performance, stability, and efficiency during the aiming and shooting process.
- Hit deviation, which reflects the precision with which users completed the task. For each shot, the Euclidean distance between the arrow’s impact coordinates on the target plane and the bullseye is recorded. This distance is then normalized using the radial separation from the bullseye (10-ring) to the 5-ring line as one unit.
- Hand tremor, an objective factor potentially affecting precision, was operationalized as the mean positional deviation of the bow. This metric was derived from continuous sampling of the bow’s position at 60 Hz in the second before firing. For each frame, the Euclidean distance from the bow’s position to the average center across all frames was computed. The arithmetic mean of these distances (in centimeters) was then used as the quantification index, with higher values denoting poorer arm stability.
- Task completion time, which indicates efficiency. For each shot, the system records the precise duration from the initiation of the aiming action to the moment the arrow is released.
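The first two sensor-derived metrics can be sketched as follows, assuming the impact and bow coordinates are already expressed in consistent units; the function names are illustrative.

```python
import numpy as np

def hit_deviation(impact_xy, bullseye_xy, r5_ring):
    """Euclidean distance from the impact point to the bullseye on the
    target plane, normalized so that the bullseye-to-5-ring radial
    separation equals one unit."""
    d = np.linalg.norm(np.asarray(impact_xy) - np.asarray(bullseye_xy))
    return float(d / r5_ring)

def hand_tremor(bow_positions):
    """Mean positional deviation (cm) of the bow. `bow_positions` is an
    (N, 3) array of positions sampled at 60 Hz during the second before
    firing; each frame's distance to the average center is averaged."""
    pos = np.asarray(bow_positions, dtype=float)
    center = pos.mean(axis=0)                       # average position across frames
    return float(np.linalg.norm(pos - center, axis=1).mean())
```

Higher `hand_tremor` values denote poorer arm stability; per-participant values are obtained by averaging these metrics over the ten formal shots in each condition.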
Subjective Measures
We opted to select specific dimensions from multiple scales to measure subjective experience rather than directly using the Game Experience Questionnaire (GEQ) []. Although the GEQ is widely used in game experience research, it is designed to cover a broad range of gaming experiences. In contrast, VR environments are relatively unique, and archery tasks are more akin to physical sports than purely electronic games. We therefore selected the following measurement tools for each dimension.
- Presence: We selected 4 items from the I-group Presence Questionnaire [] to assess the immersive experience of the players.
- Physical Activity Enjoyment: We selected 4 items from the Short Version of the Physical Activity Enjoyment Scale [] to evaluate the enjoyment of the players, focusing on positive experiences during the activity.
- Flow experience: We selected 4 items from a validated scale [] to measure the flow experience of the players, covering key dimensions such as concentration and sense of control.
- Competence: To assess players’ sense of competence—a psychological need reflecting their perceived effectiveness and mastery in the game environment—we selected 4 items from the Competence subscale of the Game Experience Questionnaire [].
- Task Load: We measured the mental and physical demands of the task using the 6 items of the NASA-TLX questionnaire [], and the overall workload score was computed using the unweighted Raw TLX procedure.
- System Usability: We evaluated the usability of the game system using the 10 items of the System Usability Scale [].
- Future Use Intention: We measured players’ intention to continue using the system in the future through a single-item scale, reflecting the system’s attractiveness and potential value.
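The unweighted Raw TLX procedure mentioned above reduces to a simple mean of the six subscale ratings, as sketched below.

```python
def raw_tlx(item_scores):
    """Raw TLX workload: the unweighted mean of the six NASA-TLX subscale
    ratings (mental demand, physical demand, temporal demand, performance,
    effort, frustration)."""
    if len(item_scores) != 6:
        raise ValueError("Raw TLX expects exactly six subscale ratings")
    return sum(item_scores) / 6.0
```

Unlike the original weighted NASA-TLX, no pairwise importance comparisons are collected; each subscale contributes equally to the overall workload score.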
3. Results
3.1. Task Performance
In this study, we conducted comprehensive statistical analyses of participants’ performance across the four experimental conditions, encompassing objective metrics such as hit deviation, task completion time, and hand tremor. To ensure data representativeness and accuracy, all statistical evaluations were based on data from all individual shooting trials during the formal experiment. We collected 33 sets of objective behavioral data from 33 participants, comprising 10 shooting trials per participant under each condition. The mean hit deviation across each participant’s ten shots was used to represent their personal hit deviation and was included in subsequent statistical analyses. Similarly, the mean task completion time and mean hand tremor across the ten trials were calculated for each participant and used in the statistical analyses. Figure 5 illustrates the distribution of all raw hit deviations (i.e., every individual shooting trial across all participants and conditions). The spread of the hit deviations is generally wider in the physical prop conditions (PP − VF and PP + VF) than in the standard controller conditions (SC − VF and SC + VF). Moreover, the presence of visual feedback appears to reduce the spread of the hit deviation in the standard controller condition (SC − VF vs. SC + VF), and the observed difference is weaker under the physical prop condition (PP − VF vs. PP + VF). However, the differences we observed were mainly reflected in a small number of cases with large deviation values, which does not represent the overall situation.
Figure 5.
The distribution of all hit deviations across all conditions.
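As a concrete illustration of the aggregation step described above (collapsing the ten trial-level measurements into one mean per participant and condition), the following sketch uses hypothetical data; the column names and values are invented for illustration, not taken from the study:

```python
import pandas as pd

# Hypothetical trial-level records: in the study, 33 participants x
# 4 conditions x 10 shots. Only a small invented sample is shown here.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "condition": ["SC-VF"] * 6,
    "hit_deviation": [3.2, 4.1, 2.8, 5.0, 4.4, 4.9],
    "completion_time": [6.1, 5.8, 6.4, 7.2, 6.9, 7.5],
    "hand_tremor": [0.12, 0.15, 0.11, 0.22, 0.19, 0.24],
})

# Collapse the individual shots into one mean per participant and
# condition, as done before the subsequent statistical tests.
per_participant = (
    trials.groupby(["participant", "condition"], as_index=False)
          [["hit_deviation", "completion_time", "hand_tremor"]]
          .mean()
)
print(per_participant)
```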
3.1.1. Hit Deviation Analysis
The Friedman test on the four experimental groups revealed significant between-group differences (, ). To identify the specific pairs that differed, Figure 6 displays the results of post-hoc pairwise comparisons using Bonferroni correction. Specifically, the adjusted p-value () was calculated as the original p-value multiplied by the number of pairwise comparisons (k = 6 for four conditions). The adjusted p-value was then compared directly to the significance level of . Significant differences were observed between SC − VF and PP − VF (, ), with the median hit deviation increasing by 79.9% when using physical props. Similarly, SC + VF and PP − VF showed a significant difference (, ), and SC + VF and PP + VF also differed significantly (, ). Under visual feedback conditions, the median hit deviation increased by 59.5% due to the use of physical props.
Figure 6.
Results of comparisons of hit deviations. The square markers with different colors represent the outliers under different conditions, which are defined as data points located beyond 1.5 times the interquartile range from the quartiles, as indicated in the figure legend. In all figures, the symbols * and *** denote the levels of statistical significance after adjustment: * indicates an adjusted p-value < 0.05, and *** indicates an adjusted p-value < 0.001. p-values adjusted via the Bonferroni method for 6 comparisons ().
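The omnibus-plus-post-hoc procedure can be sketched as follows. This is an illustrative sketch on synthetic data: the deviation values are simulated, and Wilcoxon signed-rank tests are assumed for the pairwise step (the pairwise test statistic is not named in the text):

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 33  # participants

# Hypothetical per-participant mean hit deviations for the four conditions.
data = {
    "SC-VF": rng.normal(3.0, 0.8, n),
    "SC+VF": rng.normal(2.8, 0.8, n),
    "PP-VF": rng.normal(5.4, 1.2, n),
    "PP+VF": rng.normal(4.5, 1.1, n),
}

# Omnibus Friedman test across the four repeated-measures conditions.
chi2, p = stats.friedmanchisquare(*data.values())
print(f"Friedman chi2={chi2:.2f}, p={p:.4f}")

# Post-hoc pairwise tests with Bonferroni correction: the adjusted
# p-value is the raw p-value multiplied by k = 6 comparisons,
# capped at 1.0.
pairs = list(combinations(data, 2))
for a, b in pairs:
    _, p_raw = stats.wilcoxon(data[a], data[b])
    p_adj = min(p_raw * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```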
3.1.2. Task Completion Time Analysis
We recorded and analyzed the task completion time for each participant across all four experimental conditions. As shown in Figure 7, the Friedman test revealed no statistically significant differences in task completion time among the four groups (, ).
Figure 7.
Results of comparisons of task completion time. The square markers with different colors represent the outliers under different conditions, which are defined as data points located beyond 1.5 times the interquartile range from the quartiles, as indicated in the figure legend.
3.1.3. Hand Tremor Analysis
The difference in hand tremor across conditions was statistically significant (, ). As shown in Figure 8, the following groups showed significant differences in pairwise comparisons: SC − VF and PP − VF (, ), SC + VF and PP + VF (, ), SC − VF and PP + VF (, ), and SC + VF and PP − VF (, ). However, no significant differences were observed between the two SC-mode groups (, ) or between the two PP-mode groups (, ).
Figure 8.
Results of comparisons of hand tremor. *** indicates an adjusted p-value < 0.001. p-values adjusted via the Bonferroni method for 6 comparisons ().
3.1.4. Linear Mixed-Effects Analysis of Hand Tremor
These results from the non-parametric Friedman tests provide an overall comparison of the conditions. To better understand the underlying factors and their interactions, however, we further fitted linear mixed-effects models, which allow us to control for variables and evaluate their independent contributions. A linear mixed-effects model was fitted to analyze the effects of visual feedback (VF), physical prop (PP), and time on hand tremor. The model included VF and PP as categorical fixed factors, Time as a continuous fixed covariate, and all of their interactions. (No subjective experience variables from the questionnaires were included in this model.) Crucially, a random intercept for participants was included to account for individual differences in baseline hand tremor levels. The model was fitted using restricted maximum likelihood (REML).
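A model of this form can be specified with the `MixedLM` implementation in statsmodels. The sketch below runs on simulated data with hypothetical effect sizes and variable names; it illustrates the model specification, not the study's actual analysis script:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated long-format data: one row per participant x condition.
# The effect sizes baked in below are invented for illustration.
rows = []
for pid in range(33):
    base = rng.normal(0.0, 0.5)  # participant-specific baseline (random intercept)
    for vf in (0, 1):
        for pp in (0, 1):
            t = rng.uniform(0, 10)
            tremor = 2.0 + 1.2 * pp - 0.14 * pp * t + base + rng.normal(0, 0.3)
            rows.append({"participant": pid, "VF": vf, "PP": pp,
                         "time": t, "tremor": tremor})
df = pd.DataFrame(rows)

# Random-intercept mixed model fitted by REML, mirroring the described
# specification: VF and PP as categorical fixed factors, time as a
# continuous covariate, and all of their interactions.
model = smf.mixedlm("tremor ~ C(VF) * C(PP) * time", df,
                    groups=df["participant"])
result = model.fit(reml=True)
print(result.summary())
```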
Type III tests of fixed effects revealed a significant main effect of PP (F(1, 113.79) = 70.59, p < 0.001) and Time (F(1, 119.93) = 8.50, p = 0.004). The main effect of VF remained non-significant (p = 0.477). A significant interaction was again found between PP and Time (F(1, 115.52) = 9.81, p = 0.002). None of the other interactions were significant (all p > 0.5).
The estimates labeled as [PP = 0] and [VF = 0] in Table 2 represent the effects relative to their respective reference groups. The model estimates confirm that using a physical prop [PP = 1] is associated with significantly greater hand tremor than using the standard controller: the estimate for the standard controller group [PP = 0] (β = −1.216, p < 0.001) indicates a lower marginal mean for hand tremor. The significant interaction between [PP = 0] and time (β = 0.142, p = 0.014) suggests that the slope of hand tremor over time was significantly steeper for the group without physical props.
Table 2.
Fixed Effects Estimates-Parameter Coefficients for Hand Tremor Model.
The variance component for the random intercept was statistically significant (p = 0.010), confirming that there were significant individual differences between participants in their baseline levels of hand tremor after accounting for fixed factors.
The results robustly indicate that the use of a physical prop is associated with significantly higher levels of hand tremor compared to the standard controller. This relationship is moderated by time, suggesting that the effect of the interface type on hand tremor changes over the course of the task. Visual feedback showed no significant effect.
3.1.5. Linear Mixed-Effects Analysis of Hit Deviation
A linear mixed-effects model was fitted to analyze the factors that influence Hit Deviation. The model included Visual Feedback (VF) and Physical Prop (PP) as categorical fixed factors, time and hand tremor as continuous fixed covariates, and a critical interaction term VF × Hand Tremor. A random intercept for participants was incorporated to account for individual differences in baseline accuracy levels. The model was estimated using Restricted Maximum Likelihood (REML).
Type III tests of fixed effects revealed two significant effects: a significant main effect of Hand Tremor (F(1, 124.79) = 6.14, p = 0.015) and a significant interaction between VF and Hand Tremor (F(1, 101.13) = 7.01, p = 0.009). The main effects of VF (p = 0.174), PP (p = 0.495) and Time (p = 0.141) were not statistically significant, indicating no independent influence of these variables on Hit Deviation when controlling for other factors.
Parameter estimates in Table 3 further clarified the nature of the significant interaction. The coefficient for hand tremor (β = 0.552, p = 0.376) represents the simple slope for the reference group (VF = 1, visual feedback present), indicating a non-significant association between hand tremor and hit deviation in this condition. Crucially, the interaction term [VF = 0] × Hand Tremor was highly significant (β = 1.517, p = 0.009), revealing that the effect of hand tremor on hit deviation was substantially stronger when visual feedback was absent. Specifically, for the VF-absent group (VF = 0), the total slope of hand tremor was calculated as 2.069 (= 0.552 + 1.517), meaning that each one-unit increase in hand tremor was associated with an average increase of approximately 2.07 units in hit deviation, a relationship that was both strong and statistically significant.
Table 3.
Fixed Effects Estimates-Parameter Coefficients for Hit Deviation Model.
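The simple-slope arithmetic described above can be reproduced directly from the reported coefficients:

```python
# Recovering the simple slopes of hand tremor from the reported
# coefficients: the tremor coefficient is the slope in the reference
# group (VF = 1); the interaction term shifts it for the VF-absent group.
beta_tremor = 0.552        # slope when visual feedback is present
beta_interaction = 1.517   # [VF = 0] x Hand Tremor interaction

slope_vf_present = beta_tremor
slope_vf_absent = beta_tremor + beta_interaction

print(f"Slope with VF:    {slope_vf_present:.3f}")  # 0.552
print(f"Slope without VF: {slope_vf_absent:.3f}")   # 2.069
```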
The significant VF × Hand Tremor interaction indicates that visual feedback moderates the impact of physiological hand tremor on performance accuracy. When visual feedback is available, participants appear to compensate for hand movement, resulting in a minimal and non-significant effect of hand tremor on Hit Deviation. In contrast, in the absence of visual feedback, hand tremor exerts a pronounced detrimental effect on aiming precision, leading to a steep increase in error as instability increases.
The variance component for the random intercept was statistically significant (p = 0.013), confirming the presence of substantial individual differences in baseline hit deviation, even after accounting for the fixed effects.
The analysis revealed a significant moderating interaction between visual feedback and hand tremor on performance. While hand tremor significantly impairs performance, this negative effect is effectively buffered when visual feedback is available. The presence of visual feedback thus appears to enable compensatory motor control mechanisms that mitigate the impact of physical instability. Neither the type of physical prop nor the passage of time showed a significant direct effect on accuracy in this task context.
3.2. Questionnaire Result
To assess the reliability and validity of the measurement scales used in the study, reliability and validity analyses were conducted on the data collected from the four experimental groups. The analysis indicators included internal consistency reliability (Cronbach’s Alpha), the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, and Bartlett’s Test of Sphericity. As shown in Table A1, the majority of the scales demonstrated good internal consistency (α > 0.7) and structural validity (KMO > 0.7, Bartlett’s Test of Sphericity significant at p < 0.05) across the different experimental conditions. This indicates that the measurement tools employed in this study possess high reliability and validity.
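Two of these indicators can be computed directly. The sketch below implements Cronbach's alpha and Bartlett's test of sphericity on synthetic item scores (KMO, which additionally requires the anti-image correlation matrix, is omitted for brevity; the data are invented, not the study's responses):

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def bartlett_sphericity(items: np.ndarray):
    """Bartlett's test of sphericity on the item correlation matrix."""
    n, p = items.shape
    corr = np.corrcoef(items, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return chi2, df, stats.chi2.sf(chi2, df)

# Synthetic 4-item scale responses from 33 respondents, generated from
# one common latent factor plus noise so that the items correlate.
rng = np.random.default_rng(2)
latent = rng.normal(size=(33, 1))
scores = latent + rng.normal(scale=0.5, size=(33, 4))

alpha = cronbach_alpha(scores)
chi2, df, p = bartlett_sphericity(scores)
print(f"alpha={alpha:.3f}, chi2={chi2:.2f}, df={df}, p={p:.4f}")
```

Note that for a 4-item scale the test has df = 4 × 3 / 2 = 6, matching the df column in Table A1.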
3.2.1. Presence
The Friedman test indicated a statistically significant difference in presence among the four conditions (, ). As illustrated in Figure 9, post-hoc pairwise comparisons using Bonferroni correction showed significant differences between the SC − VF and PP − VF groups (, ) and between the SC + VF and PP + VF groups (, ), indicating that physical props significantly enhanced users’ sense of presence. Additionally, significant differences were found between SC − VF and PP + VF (,) and between SC + VF and PP − VF (, ). However, no significant differences were detected between SC − VF and SC + VF (, ) or between PP − VF and PP + VF (, ), suggesting that physics-associated visual feedback did not significantly improve presence.
Figure 9.
Results of comparisons of user experience. In the figure, * indicates an adjusted p-value < 0.05, ** indicates an adjusted p-value < 0.01, and *** indicates an adjusted p-value < 0.001. p-values adjusted via the Bonferroni method for 6 comparisons ().
3.2.2. PAE
A significant difference was found in PAE among the conditions (, ). Follow-up pairwise comparisons, adjusted with Bonferroni correction, indicated significant differences between SC − VF and PP − VF (, ) and between SC + VF and PP + VF (, ), demonstrating that physical props effectively increased PAE. The difference between SC − VF and PP + VF (, ) also reached statistical significance. No significant differences were found between SC − VF and SC + VF (, ), between SC + VF and PP − VF (, ), or between PP − VF and PP + VF (, ).
3.2.3. Flow Experience
The Friedman test indicated a statistically significant difference in the flow experience among the four conditions (, ). Post-hoc analysis revealed that significant differences were only detected between SC − VF and PP + VF (, ) and between PP − VF and PP + VF (, ), highlighting the synergistic effect of physics-associated visual feedback on flow experience when combined with physical props. No significant differences were observed between SC − VF and SC + VF (, ), SC − VF and PP − VF (, ), SC + VF and PP − VF (, ), or SC + VF and PP + VF (, ).
3.2.4. Competence
The Friedman test indicated a statistically significant difference in competence among the four conditions (, ). In the post-hoc analysis, significant differences were identified between SC − VF and SC + VF (, ), SC + VF and PP − VF (, ), and PP − VF and PP + VF (, ), indicating that physics-associated visual feedback significantly enhanced perceived competence. Physical props alone showed no significant impact on perceived competence.
3.2.5. SUS and Task Load
In terms of system usability, the bow-and-arrow interaction system developed in this study performed well, as shown in Figure 10. According to the established SUS benchmark [], the mean scores (converted to a 100-point scale) for the four experimental conditions ranged from 70.71 to 79.86. The Friedman test indicated a statistically significant difference in system usability among the four conditions (, ). Significant differences in system usability were detected between SC + VF and PP − VF (, ) and between PP − VF and PP + VF (, ), suggesting that physics-associated visual feedback improves usability. This demonstrates that the introduced visual feedback, particularly when integrated with physical mechanics, is a key factor in effectively enhancing the system’s usability. Ultimately, the PP + VF condition achieved a score of 79.86, approaching the excellent level, which demonstrates the success of the bow-and-arrow system’s design in meeting the requirements for good usability.
Figure 10.
Results of comparisons of user task load and system usability. In the figure, * indicates an adjusted p-value < 0.05, ** indicates an adjusted p-value < 0.01, and *** indicates an adjusted p-value < 0.001. p-values adjusted via the Bonferroni method for 6 comparisons ().
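For reference, the standard SUS scoring procedure behind the 100-point values reported above can be sketched as follows (the response vector is illustrative, not participant data):

```python
def sus_score(responses):
    """Convert ten SUS item responses (1-5 Likert) to the 0-100 scale.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response);
    the sum of the ten contributions is multiplied by 2.5.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative response vector, not data from the study.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```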
For task load, the Friedman test was significant (, ). Significant differences in task load were observed between SC − VF and PP − VF (, ) and between SC + VF and PP + VF (, ), confirming that physical props significantly increased task load. Differences between SC − VF and PP + VF (, ) and SC + VF and PP − VF (, ) were also significant. No differences were found between SC − VF and SC + VF (, ) or PP − VF and PP + VF (, ). This pattern confirms that physics-associated visual feedback did not contribute to higher task load. The TLX scores across experimental conditions were converted to a 0–100 scale for comparison with established reference values. The mean scores were as follows: SC − VF (44.00), SC + VF (40.71), PP − VF (62.57), and PP + VF (56.86). Relative to the meta-analytic reference values [], where the overall mean TLX is 42 (SD = 13), the SC conditions align closely with the average workload range. In contrast, the PP conditions (56.86–62.57) exceed the 75th percentile (60.00) reported by Grier [], indicating a high workload level. These scores are comparable to those observed in demanding tasks such as Physical Activities (median = 62.00) and Video Game tasks (median = 56.50) in Grier’s taxonomy. Overall, the task represents a moderate-to-high workload scenario, particularly when physical props are incorporated.
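The unweighted Raw TLX conversion used above can be sketched as follows (the ratings are illustrative, not study data; Raw TLX simply averages the six subscale ratings instead of applying the pairwise-comparison weights of the full NASA-TLX):

```python
def raw_tlx(ratings):
    """Unweighted Raw TLX: the mean of the six subscale ratings.

    Ratings are assumed here to be on a 0-100 scale; if collected on
    the 0-20 increment scale, multiply each rating by 5 first.
    """
    assert len(ratings) == 6
    return sum(ratings) / 6

# Illustrative ratings (mental, physical, temporal, performance,
# effort, frustration), not data from the study.
print(raw_tlx([60, 70, 40, 50, 65, 55]))  # ~56.67
```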
4. Discussion
4.1. Main Findings
The present study employed linear mixed-effects modeling to investigate the complex interplay between visual feedback (VF), physical prop (PP), task completion time, hand tremor, and hit deviation. By analyzing two distinct but interrelated models—one predicting hand tremor and the other predicting Hit Deviation—we uncover a nuanced, moderated pathway through which motor control dynamics influence performance outcomes in a simulated archery task.
First, the analysis of hand tremor revealed that the use of a physical prop exerts a powerful influence on hand stability. Participants exhibited significantly lower levels of hand tremor when using the standard controller compared to the physical prop, indicating that the latter may introduce biomechanical constraints or unfamiliar motor demands that amplify tremor. Furthermore, a significant Physical Prop × Time interaction demonstrated that the trajectory of hand stability over time differs markedly between interface types. Specifically, hand tremor decreased over time in the physical prop condition. This pattern suggests that the physical prop, by providing richer kinesthetic feedback (including greater perceived weight, inertia, and continuous mechanical constraints), may support more effective real-time sensorimotor compensation, enabling users to dynamically suppress tremor during task execution. In contrast, no such improvement was observed with the standard controller, where hand tremor remained relatively stable or even slightly increased. Notably, visual feedback did not significantly affect hand tremor, either as a main effect or in interaction with other variables, indicating that visual feedback does not directly modulate motoric tremor in this context.
Second, and more critically, the analysis of hit deviation revealed that hand tremor impacts hit deviation, but this relationship is strongly contingent on the availability of visual feedback. When visual feedback was present (VF = 1), the slope of the hand tremor–hit deviation relationship was weak and non-significant (β = 0.552, p = 0.376), suggesting that real-time visual information enables participants to compensate for motor instability. However, when visual feedback was absent (VF = 0), the total effect of hand tremor on hit deviation became substantially stronger (total slope = 2.069) and highly significant. This indicates that without external visual guidance, individuals are unable to mitigate the negative consequences of physiological tremor, leading to a direct transmission of motor noise into performance error.
4.2. Reconciling Findings from Different Analytical Approaches
It is important to address the apparent discrepancies between the post-hoc pairwise comparisons and the linear mixed-effects models by recognizing their distinct analytical perspectives. The post-hoc pairwise comparisons evaluate the total effect of the experimental conditions, while the linear mixed-effects model addresses a more refined question: it isolates the independent, direct contribution of each factor after accounting for the influence of the other variables. This distinction resolves two key observations.
First, the significant difference in hit deviation between the SC + VF and PP + VF conditions identified by post-hoc pairwise comparisons using Bonferroni correction (, ) reflects the genuine total detriment associated with the physical prop. The linear mixed-effects model clarifies that this total effect is predominantly mediated by an indirect pathway: the prop significantly increases hand tremor, which in turn degrades aiming precision. When this physiological mediator is statistically controlled, the direct path from the prop to hit deviation becomes non-significant.
Second, the absence of a simple main effect of visual feedback (VF) on Hit Deviation in the post-hoc pairwise comparisons aligns with its role as a moderator in the linear mixed-effects model. The model identified a significant VF × Hand Tremor interaction, indicating that VF’s benefit is contingent upon the level of physiological instability; it functions specifically to attenuate the deleterious impact of hand tremor. Thus, although visual feedback attenuates the impact of hand tremor on performance, it does not fully eliminate the indirect negative consequences of using the physical prop. Even under VF, participants using the physical prop experienced substantially higher levels of hand tremor, and although the sensorimotor system could partially compensate, the cumulative effect of increased motor noise translated into measurably higher hit deviations. The moderating role of VF therefore reduces, but does not nullify, the indirect pathway from PP to performance degradation via increased physiological instability.
4.3. Design Inspiration
Together, these findings trace a clear, conditionally dependent causal chain when visual feedback (VF) and a physical prop (PP) are used in combination, as shown in Figure 11: the introduction of the physical bow prop considerably reduces the user’s task performance. This is due to the demands of the archery task on precise technique and force control, where the weight of the real bow and string vibrations increase operational difficulty, leading to exacerbated hand tremor that impairs fine motor execution. Notably, this effect is modulated by time: users exhibit a gradual reduction in hand tremor when using the physical prop, suggesting partial adaptation to its biomechanical demands. Visual feedback then determines whether this tremor propagates into task performance errors. These findings have practical implications for the design of human–computer interaction systems, particularly in virtual reality and precision-skilled tasks, where minimizing reliance on continuous visual feedback may require additional strategies to stabilize motor output or provide alternative sensory cues.
Figure 11.
Conditionally dependent causal chain when using visual feedback and a physical prop.
Although physical props reduced task performance, they markedly enhanced users’ sense of presence and physical activity enjoyment. The experimental results indicate that the physical feedback from the props amplified users’ positive emotional engagement. However, physical props also resulted in a higher task load. Therefore, in such high-skill tasks, the application of physical props requires consideration of user capability.
The physics-associated visual feedback played an important role in the experiment: its enhancement of the flow experience was limited under standard controller conditions, showing discernible effects only when combined with physical props. In other words, when using the standard controller, physics-associated visual feedback cannot compensate for sensory dissonance or improve users’ flow experience. This indicates that the utility of physics-associated visual feedback is highly dependent on the continuous action-feedback cognitive loop established under physical prop conditions.
In addition, regarding users’ self-reported intention for future use, we observed a descriptive trend: participants reported a higher willingness to continue using the system after experiencing the condition combining physical props with visual feedback. Although this was measured using only a single item, the trend is consistent with users’ positive feedback on physical activity enjoyment and flow experience. From an applied perspective, this further suggests that despite the increased operational demands associated with physical props, the heightened immersion and the synergistic positive experience generated with visual feedback may enhance the system’s long-term appeal and user retention.
4.4. Limitations and Future Work
Sample Size and Population: Although our sample of 33 university students aligns with common practices in the field and significant effects were detected, we acknowledge that a larger sample could reveal more subtle effects. The predominantly young, technologically literate participant pool may not represent broader populations, particularly individuals with different skill levels. Since most of the participants were beginners, our findings have certain limitations for users with different archery skill levels. Future research should systematically examine how skill level modulates the effects of props and visual feedback.
Gesture Recognition System Limitations: The computer vision-based hand tracking system, while cost-effective, introduced several limitations in tracking accuracy and latency. Two key technical challenges were identified: (1) real-world 2D key point detection introduced significant noise that propagated through the pipeline; (2) the test system assumed optimal hand visibility and recognition conditions, which may fail in complex real-world environments with occlusions or varying lighting. Future research could employ more precise tracking systems, such as advanced depth sensors, combined with multi-sensor information fusion technology to more accurately capture subtle movements and behaviors while improving robustness to environmental variations.
The Specific Characteristics of the Physical Props: The study used a single physical bow. Different prop characteristics (vibration, weight distribution, size, material properties) might produce varying effects on both performance and experience. Future work should systematically examine how specific prop parameters influence the observed trade-offs.
Interpretation of Visual Feedback: Our study did not directly assess how users cognitively interpreted the visual feedback. This limits our understanding of the users’ mental models. To address this, future work could focus on investigating how users understand and interpret trajectory lines in visual feedback.
5. Conclusions
This study developed a sensor-driven framework for investigating high-skill tasks in VR. By integrating a vision sensor with standard VR devices, we moved beyond subjective metrics to objectively capture and analyze performance data, such as hand tremor and operation time. This approach allowed us to elucidate the underlying mechanisms through which physical prop interfaces and visual feedback interact to shape user performance in VR archery. It demonstrates how strategically combining available sensing technologies can yield new insights into human-computer interaction, particularly in understanding the complex relationship between interface design, motor performance, and user experience.
The user experience study shows that the proposed VR archery interaction framework, driven by sensors and physical props, enhanced users’ presence and enjoyment but also affected task performance. More importantly, by objectively quantifying performance with the proposed sensor-driven VR framework, we reveal a dual-pathway mechanism. Although physics-associated visual feedback does not reduce hand tremor directly, it enables sensorimotor compensation that effectively buffers against motor instability. This compensatory mechanism shows that visual feedback serves as a crucial moderator in the pathway from physical interface to performance outcomes.
Furthermore, visual feedback demonstrates synergistic effects with physical props, particularly in enhancing flow experience and perceived competence. This multimodal integration improves the user experience by creating a more engaging and effective interaction paradigm through the combination of rich kinesthetic feedback with congruent visual cues.
In summary, this study contributes to VR interaction research by introducing a new sensor-driven methodological framework that enabled a mechanistic understanding of VR interaction. We objectively demonstrated that physical props increase hand tremor, thereby impairing performance, while physics-associated visual feedback mitigates this effect by enabling sensorimotor compensation. The proposed sensor- and prop-driven VR archery framework and the accompanying user study provide a foundation for future work aimed at designing immersive, effective, and accessible VR systems that leverage sensors and physical interfaces.
Author Contributions
Conceptualization, Z.L. and F.T.; methodology, H.X.; software, Z.L.; formal analysis, Z.L.; investigation, Z.L., H.X. and M.T.; data curation, Z.L. and F.T.; writing—original draft preparation, Z.L.; writing—review and editing, H.X. and F.T.; visualization, M.T.; project administration, F.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
This study was approved by the Ethics Committee of Shanghai University (ECSHU 2024-008).
Informed Consent Statement
All participants were informed about the experiment and signed a consent form, which was approved by the Ethics Committee of Shanghai University (ECSHU 2024-008).
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Acknowledgments
During the preparation of this manuscript, the author(s) used Deepseek for the purposes of translation. The authors have reviewed and edited the output and take full responsibility for the content of this publication.
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A
Table A1.
Reliability and Validity Test Results.
| Measurement | Model | Cronbach’s Alpha | KMO | Bartlett’s Chi-Square | df | Sig. |
|---|---|---|---|---|---|---|
| Presence | SC − VF | 0.909 | 0.822 | 89.463 | 6 | 0.000 |
|  | SC + VF | 0.911 | 0.821 | 86.018 | 6 | 0.000 |
|  | PP − VF | 0.700 | 0.734 | 23.150 | 6 | 0.001 |
|  | PP + VF | 0.883 | 0.758 | 70.769 | 6 | 0.000 |
| PAE | SC − VF | 0.828 | 0.768 | 59.059 | 6 | 0.000 |
|  | SC + VF | 0.876 | 0.816 | 68.78 | 6 | 0.000 |
|  | PP − VF | 0.842 | 0.797 | 51.496 | 6 | 0.000 |
|  | PP + VF | 0.855 | 0.795 | 87.658 | 6 | 0.000 |
| Flow Experience | SC − VF | 0.896 | 0.747 | 82.538 | 6 | 0.000 |
|  | SC + VF | 0.884 | 0.713 | 88.956 | 6 | 0.000 |
|  | PP − VF | 0.697 | 0.703 | 27.851 | 6 | 0.000 |
|  | PP + VF | 0.781 | 0.721 | 59.258 | 6 | 0.000 |
| Competence | SC − VF | 0.904 | 0.831 | 80.198 | 6 | 0.000 |
|  | SC + VF | 0.892 | 0.804 | 83.697 | 6 | 0.000 |
|  | PP − VF | 0.873 | 0.792 | 65.136 | 6 | 0.000 |
|  | PP + VF | 0.907 | 0.830 | 88.088 | 6 | 0.000 |
References
- Bowman, D.A.; McMahan, R.P.; Ragan, E.D. Questioning naturalism in 3D user interfaces. Commun. ACM 2012, 55, 78–88. [Google Scholar] [CrossRef]
- Allen, J.A.; Hays, R.T.; Buffardi, L.C. Maintenance Training Simulator Fidelity and Individual Differences in Transfer of Training. Hum. Factors J. Hum. Factors Ergon. Soc. 1986, 28, 497–509. [Google Scholar] [CrossRef]
- Wee, C.; Yap, K.M.; Lim, W.N. Haptic Interfaces for Virtual Reality: Challenges and Research Directions. IEEE Access 2021, 9, 112145–112162. [Google Scholar] [CrossRef]
- Islam, M.S.; Lim, S. Vibrotactile feedback in virtual motor learning: A systematic review. Appl. Ergon. 2022, 101, 103694. [Google Scholar] [CrossRef]
- Cheng, L.P.; Chang, L.; Marwecki, S.; Baudisch, P. iTurk: Turning Passive Haptics into Active Haptics by Making Users Reconfigure Props in Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; CHI ’18. pp. 1–10. [Google Scholar] [CrossRef]
- LaViola, J.J. A discussion of cybersickness in virtual environments. ACM SIGCHI Bull. 2000, 32, 47–56. [Google Scholar] [CrossRef]
- Sanchez-Vives, M.V.; Slater, M. From presence to consciousness through virtual reality. Nat. Rev. Neurosci. 2005, 6, 332–339. [Google Scholar] [CrossRef]
- Jang, H.; Kim, J.; Lee, J. Effects of Congruent Multisensory Feedback on the Perception and Performance of Virtual Reality Hand-Retargeted Interaction. IEEE Access 2024, 12, 119789–119802. [Google Scholar] [CrossRef]
- Dinh, H.; Walker, N.; Hodges, L.; Song, C.; Kobayashi, A. Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments. In Proceedings of the IEEE Virtual Reality (Cat. No. 99CB36316), Houston, TX, USA, 13–17 March 1999; pp. 222–228. [Google Scholar] [CrossRef]
- Fröhlich, J.; Wachsmuth, I. The Visual, the Auditory and the Haptic – A User Study on Combining Modalities in Virtual Worlds. In Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments; Shumaker, R., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 159–168. [Google Scholar] [CrossRef]
- Slater, M.; Wilbur, S. A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence Teleoperators Virtual Environ. 1997, 6, 603–616.
- Biocca, F.; Kim, J.; Choi, Y. Visual Touch in Virtual Environments: An Exploratory Study of Presence, Multimodal Interfaces, and Cross-Modal Sensory Illusions. Presence Teleoperators Virtual Environ. 2001, 10, 247–265.
- Salkini, M.W.; Doarn, C.R.; Kiehl, N.; Broderick, T.J.; Donovan, J.F.; Gaitonde, K. The Role of Haptic Feedback in Laparoscopic Training Using the LapMentor II. J. Endourol. 2010, 24, 99–102.
- Wierinck, E.; Puttemans, V.; Swinnen, S.; van Steenberghe, D. Effect of augmented visual feedback from a virtual reality simulation system on manual dexterity training. Eur. J. Dent. Educ. 2005, 9, 10–16.
- De Barros, P.G.; Lindeman, R.W. Performance effects of multi-sensory displays in virtual teleoperation environments. In Proceedings of the 1st Symposium on Spatial User Interaction, Los Angeles, CA, USA, 20–21 July 2013; pp. 41–48.
- Yasumoto, M.; Ohta, T. The Electric Bow Interface. In Virtual, Augmented and Mixed Reality. Systems and Applications; Shumaker, R., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 436–442.
- Butnariu, S.; Duguleana, M.; Brondi, R.; Girbacia, F.; Postelnicu, C.; Carrozzino, M. An Interactive Haptic System for Experiencing Traditional Archery. Acta Polytech. Hung. 2018, 15, 185–208.
- Purnomo, F.A.; Purnawati, M.; Pratisto, E.H.; Hidayat, T.N. Archery Training Simulation based on Virtual Reality. In Proceedings of the 2022 1st International Conference on Smart Technology, Applied Informatics, and Engineering (APICS), Surakarta, Indonesia, 23–24 August 2022; pp. 195–198.
- Yamamoto, Y.; Yamagishi, Y. VR-based Learning Support System for Kyudo (Japanese Archery). In Proceedings of the E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education; Association for the Advancement of Computing in Education (AACE), Singapore, 7 October 2024; pp. 467–471.
- Gründling, J.P.; Zeiler, D.; Weyers, B. Answering with Bow and Arrow: Questionnaires and VR Blend Without Distorting the Outcome. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 12–16 March 2022; pp. 683–692.
- Palacios-Alonso, D.; López-Arribas, A.; Meléndez-Morales, G.; Núñez-Vidal, E.; Gómez-Rodellar, A.; Ferrández-Vicente, J.M.; Gómez-Vilda, P. A Pilot and Feasibility Study of Virtual Reality as Gamified Monitoring Tool for Neurorehabilitation. In Artificial Intelligence in Neuroscience: Affective Analysis and Health Applications; Ferrández Vicente, J.M., Álvarez Sánchez, J.R., de la Paz López, F., Adeli, H., Eds.; Springer: Cham, Switzerland, 2022; pp. 239–248.
- Park, K.H.; Kang, S.Y.; Kim, Y.H. Implementation of Archery Game Application Using VR HMD in Mobile Cloud Environments. Adv. Sci. Lett. 2017, 23, 9804–9807.
- Thiele, S.; Meyer, L.; Geiger, C.; Drochtert, D.; Wöldecke, B. Virtual archery with tangible interaction. In Proceedings of the 2013 IEEE Symposium on 3D User Interfaces (3DUI), Orlando, FL, USA, 16–17 March 2013; pp. 67–70.
- Yasumoto, M. Bow Device for Accurate Reproduction of Archery in xR Environment. In Virtual, Augmented and Mixed Reality; Chen, J.Y.C., Fragomeni, G., Eds.; Springer: Cham, Switzerland, 2023; pp. 203–214.
- Ishida, T.; Shimizu, T. Developing a Virtual Reality Kyudo Training System Using the Cross-Modal Effect. In Advances on P2P, Parallel, Grid, Cloud and Internet Computing; Barolli, L., Ed.; Springer: Cham, Switzerland, 2023; pp. 327–335.
- White, M.; Gain, J.; Vimont, U.; Lochner, D. The Case for Haptic Props: Shape, Weight and Vibro-tactile Feedback. In Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games, Newcastle upon Tyne, UK, 28–30 October 2019; MIG ’19. pp. 1–10.
- Strandholt, P.L.; Dogaru, O.A.; Nilsson, N.C.; Nordahl, R.; Serafin, S. Knock on Wood: Combining Redirected Touching and Physical Props for Tool-Based Interaction in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; CHI ’20. pp. 1–13.
- Auda, J.; Gruenefeld, U.; Schneegass, S. Enabling Reusable Haptic Props for Virtual Reality by Hand Displacement. In Proceedings of the Mensch und Computer 2021, Ingolstadt, Germany, 5–8 September 2021; MuC ’21. pp. 412–417.
- Mori, K.; Ando, M.; Otsu, K.; Izumi, T. Effect of Repulsive Positions on Haptic Feedback on Using a String-Based Device Virtual Objects Without a Real Tool. In Virtual, Augmented and Mixed Reality; Chen, J.Y.C., Fragomeni, G., Eds.; Springer: Cham, Switzerland, 2023; Volume 14027, pp. 266–277.
- Todorov, E.; Shadmehr, R.; Bizzi, E. Augmented Feedback Presented in a Virtual Environment Accelerates Learning of a Difficult Motor Task. J. Mot. Behav. 1997, 29, 147–158.
- Williams, B.; Garton, A.E.; Headleand, C.J. Exploring Visuo-haptic Feedback Congruency in Virtual Reality. In Proceedings of the 2020 International Conference on Cyberworlds (CW), Caen, France, 29 September–1 October 2020; pp. 102–109.
- Gibbs, J.K.; Gillies, M.; Pan, X. A comparison of the effects of haptic and visual feedback on presence in virtual reality. Int. J. Hum.-Comput. Stud. 2022, 157, 102717.
- Gao, Z.; Wang, H.; Feng, G.; Guo, F.; Lv, H.; Li, B. RealPot: An immersive virtual pottery system with handheld haptic devices. Multimed. Tools Appl. 2019, 78, 26569–26596.
- Van Damme, S.; Legrand, N.; Heyse, J.; De Backere, F.; De Turck, F.; Vega, M.T. Effects of Haptic Feedback on User Perception and Performance in Interactive Projected Augmented Reality. In Proceedings of the 1st Workshop on Interactive eXtended Reality, Lisboa, Portugal, 14 October 2022; IXR ’22. pp. 11–18.
- Tanacar, N.T.; Mughrabi, M.H.; Batmaz, A.U.; Leonardis, D.; Sarac, M. The Impact of Haptic Feedback During Sudden, Rapid Virtual Interactions. In Proceedings of the 2023 IEEE World Haptics Conference (WHC), Delft, The Netherlands, 10–13 July 2023; pp. 64–70.
- Radhakrishnan, U.; Kuang, L.; Koumaditis, K.; Chinello, F.; Pacchierotti, C. Haptic Feedback, Performance and Arousal: A Comparison Study in an Immersive VR Motor Skill Training Task. IEEE Trans. Haptics 2024, 17, 249–262.
- Ai, M.; Li, K.; Liu, S.; Lin, D.K. Balanced incomplete Latin square designs. J. Stat. Plan. Inference 2013, 143, 1575–1582.
- IJsselsteijn, W.; de Kort, Y.; Poels, K. The Game Experience Questionnaire; Technische Universiteit Eindhoven: Eindhoven, The Netherlands, 2013.
- Berkman, M.; Catak, G. I-group Presence Questionnaire: Psychometrically Revised English Version. Mugla J. Sci. Technol. 2021, 7, 1–10.
- Chen, C.; Weyland, S.; Fritsch, J.; Woll, A.; Niessner, C.; Burchartz, A.; Schmidt, S.C.E.; Jekauc, D. A Short Version of the Physical Activity Enjoyment Scale: Development and Psychometric Properties. Int. J. Environ. Res. Public Health 2021, 18, 11035.
- Jackson, S.A.; Martin, A.J.; Eklund, R.C. Long and Short Measures of Flow: The Construct Validity of the FSS-2, DFS-2, and New Brief Counterparts. J. Sport Exerc. Psychol. 2008, 30, 561–587.
- Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Human Mental Workload; Advances in Psychology; Hancock, P.A., Meshkati, N., Eds.; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183.
- Brooke, J. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry; Taylor and Francis: London, UK, 1996; pp. 189–194.
- Bangor, A.; Kortum, P.T.; Miller, J.T. An Empirical Evaluation of the System Usability Scale. Int. J. Hum.–Comput. Interact. 2008, 24, 574–594.
- Hertzum, M. Reference values and subscale patterns for the task load index (TLX): A meta-analytic review. Ergonomics 2021, 64, 869–878.
- Grier, R.A. How High is High? A Meta-Analysis of NASA-TLX Global Workload Scores. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2015, 59, 1727–1731.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).