Article

Optimization of HOTAS Joystick Design Based on Virtual Simulation

1 Department of Mechanical Engineering, Sichuan University, Chengdu 610065, China
2 Independent Researcher, Chengdu 610000, China
3 National Key Laboratory of Digital and Agile Aircraft Design, AVIC Chengdu Aircraft Design and Research Institute, Chengdu 610000, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12733; https://doi.org/10.3390/app152312733
Submission received: 28 October 2025 / Revised: 20 November 2025 / Accepted: 27 November 2025 / Published: 1 December 2025

Abstract

To address the lack of effective ergonomic evaluation methods for HOTAS (Hands-On Throttle And Stick) control sticks in dynamic operation scenarios, this study proposes a virtual hand motion data acquisition system based on Unreal Engine 5, combined with Leap Motion tracking technology. The system captures and records hand movements and interaction processes in real time, providing key quantitative indicators such as movement trajectories, operation durations, and key positioning deviations. Through a four-stage HOTAS task experiment, the efficiency and accuracy differences of various operation types are quantitatively analyzed to provide data-driven support for the ergonomic optimization of the HOTAS control stick. The experimental results verify the effectiveness of the system for dynamic human factors evaluation, and it is expected to serve as a preliminary virtual evaluation tool for HOTAS ergonomic optimization.

1. Introduction

As fighter aircraft technology advances, operations become more complex and the operational load on the pilot increases. As the main hand-operated interface in the cockpit, the HOTAS system consolidates the aircraft's critical control functions onto the joystick and throttle, minimizing hand movement and improving operating efficiency. However, as more functions are packed into it, HOTAS has become harder to use: an increase in the number of buttons raises error rates and cognitive load [1]. Current HOTAS designs lack adequate human factors data and dynamic operation information, so optimization relies solely on empirical decision-making and user feedback. Traditional evaluation processes rely mainly on physical prototypes, which entail long cycles, high costs, and limited flexibility.
To address these challenges, this paper introduces a virtual simulation-based human factors data acquisition system that records hand motion, task performance, and interaction accuracy. A four-stage HOTAS operation experiment is conducted to examine the following:
  • Feasibility of dynamic human factors data collection in a virtual cockpit;
  • Finger coordination, efficiency, and accuracy in typical HOTAS tasks;
  • Implications for HOTAS ergonomic optimization.

2. Materials and Methods

2.1. Related Work

2.1.1. Research Progress on HOTAS Technologies

Previous studies have shown that HOTAS design directly affects pilot performance, cognitive workload, and musculoskeletal comfort. Yeh et al. [2] argued that keeping the control layout consistent with task semantics is crucial to avoiding operational conflicts and unintended operations. Volsi et al. [3] discussed physiological and behavioral limitations in aviation, including fatigue and attentional issues under high workload. A HOTAS knowledge summary [4] noted that optimal button allocation and an ergonomic grip remain essential to “hands-on” operational efficiency.
Other studies examined joystick handle size, gain, and geometry. Huysmans et al. [5] showed that handle size and gain affect precision and can cause operator fatigue, while Koyama et al. [6] showed that joystick shape affects wheelchair users' subjective sense of controllability, offering ergonomic insights for aerospace applications. Yang et al. [7] proposed a method for optimizing joystick button layout based on grip characteristics, quantitatively linking reachability with usage frequency to directly aid HOTAS design. Bassoli et al. [8] improved a joystick handgrip using reverse engineering techniques, which were validated by Yesodha et al. [9], who also found that spacing and geometry affect user comfort and control precision, suggesting that HOTAS needs to be adaptable to different users.
Previous research offers extensive knowledge of the physical configuration and control design of HOTAS, but very few studies have examined finger-level coordination, especially thumb–index synergy in directional button tasks. Van Den Noort et al. [10] and Rachaviti et al. [11] revealed the biomechanical independence and smoothness of thumb movements, and Xu et al. [12] elucidated the neurobiomechanical basis of selective finger control. This suggests that future HOTAS research should incorporate finer-grained hand kinematic information for ergonomic precision.
Most HOTAS research remains at the level of cockpit layout and mechanical validation. Fine-grained hand interaction, such as thumb–index coordination or directional press activities, is still far from adequately researched, leaving significant room for further study [13,14].

2.1.2. Virtual Simulation and Human Factors Research Advances

Regarding immersive VR and MR applications, recent work has addressed flight training and human factors. Marron et al. [15] and Thomas et al. [16] showed how immersive simulation allows tasks to be replicated in a virtual setting under safe conditions and can also reveal perceptual conflicts such as visual–vestibular mismatches. Rubio-Tamayo et al. [17] and Neo et al. [18] concluded that virtual environments offer strong experimental control and replicability for human research.
Virtual simulation has also advanced with regard to ergonomics. Da Silva et al. [19] reviewed patents and literature combining digital human modeling and VR, finding that dynamic virtual modeling allows early and repeatable ergonomic validation. Similarly, Kazemi and Lee [20] surveyed a wide variety of evaluation methods for HFE in VR, clarifying the pros and cons of each. Tremmel et al. [21] and Gramouseni et al. [22] indicated that cognitive workload in interactive VR can be estimated reliably through EEG-based physiological indicators, supporting multimodal evaluation frameworks that combine subjective and objective data.
Several studies have focused on multimodal interaction, input techniques, and cockpit simulation in VR, with cumulative evidence highlighting the regulatory role of sensory feedback in human performance. In terms of input methods, Luong et al. [23] compared controller-based and bare-hand interactions in VR, clarifying their differential impacts on operation efficiency and physical exertion—insights valuable for cockpit interface optimization. For sensory feedback mechanisms, relevant research can be categorized into three core directions: first, tactile/kinesthetic feedback is critical for motor control and perception as demonstrated by studies showing that its absence impairs eye–hand coordination [24], while its presence enhances distance perception [25,26] and supports skilled finger movements and force production [27,28]; second, auditory feedback modulates fine motor output [29] and interacts with tactile perception through cross-modal integration [30,31]; third, multi-cue integration (visual–auditory–tactile/vibrotactile) directly affects pilot performance in virtual cockpits [32,33,34]. Collectively, these studies confirm that VR facilitates early iterative testing (reducing prototype costs) by enabling the systematic evaluation of input modes and multimodal feedback configurations.
Virtual simulation offers a suitable platform for rapid prototyping and iterative refinement in HOTAS design. However, most studies have concentrated on cockpit- or system-level evaluation, and little attention has been paid to the detailed analysis of hand interaction and button-level operation behavior. Notably, the current work focuses on hand interaction patterns and button-level operational behavior in cockpit simulations, without delving into the design or optimization of multisensory feedback; this aspect will be incorporated into our follow-up research to further improve the comprehensiveness of virtual cockpit ergonomic evaluation.

2.1.3. Applications of Leap Motion in Human–Computer Interaction and Simulation

Many studies have used Leap Motion (Ultraleap Ltd., Bristol, UK) and similar optical tracking sensors for human–computer interaction. Marin et al. [35] compared Leap Motion with Kinect (Microsoft Corporation, Redmond, WA, USA), finding that Leap Motion offers better resolution for fingertip trajectories. Bhiri et al. [36] summarized Leap Motion's advantages and disadvantages, including occlusion, sensitivity to reflections, and temporal drift, along with possible solutions for accuracy and robustness.
In the aerospace and virtual training domain, Qingchao and Jiangang [37] applied Leap Motion to astronaut training, and Yang et al. [38] integrated it into a flight simulation system, validating its potential for analyzing hand trajectories and button reach errors. These studies show that Leap Motion offers cost-effective, contact-free data gathering for gesture recognition as well as ergonomic evaluation.
Concerns remain, however, about tracking loss and accuracy. Vysocký et al. [39] observed major tracking errors and frame losses during occlusions, making adaptive multisensor fusion essential for greater resilience. These findings indicate that better sensor placement and post-processing algorithms can improve the reliability of Leap Motion data.
This study addresses this gap with two contributions: (1) shifting the focus from macro-level HOTAS hand operations to the micro level, measuring indicators such as reach deviation, inter-finger cooperation, and task trajectory characteristics; and (2) using a UE5 virtual test bench together with Leap Motion to run experiments at an early design stage and compare different scenarios quickly. These approaches aim to produce repeatable, data-driven suggestions for improving HOTAS ergonomics.

2.2. Virtual Cockpit and System Development

This study builds a virtual simulation and human factors data gathering system that combines a high-fidelity Unreal Engine 5 environment (UE5; Epic Games, Inc., Cary, NC, USA; version 5.2.1) with the Leap Motion Controller 2 (Ultraleap Ltd., Bristol, UK; Ultraleap Tracking Service version 6.1.0). The system enables the real-time collection of human factors data in a controlled, realistic environment.

2.2.1. Simulation Environment Platform (UE5)

We built a high-fidelity virtual cockpit in UE5 that replicates a fighter aircraft cockpit at 1:1 scale. Its up-to-date graphics rendering and physics engines provide strong immersion, and support for modifying the HOTAS layout allows quick iteration and verification.
The three-dimensional cockpit model was built in Blender (Blender Foundation, Amsterdam, The Netherlands; version 4.3) and imported into UE5; the environmental parameters were configured; and the user perspective was positioned at the headrest. Interactive buttons and the data acquisition workflow were implemented with UE5 Blueprints, as shown in Figure 1.

2.2.2. Human Factors Data Acquisition (Leap Motion)

Hand recognition and tracking were performed with the Leap Motion, and the resulting data were mapped into the virtual world. The virtual hand model was calibrated to match each participant's actual hand size, as shown in Figure 2. The system recorded dynamic hand motions in real time at a constant sampling rate, with standard preprocessing applied. The collected dataset included the following (a minimal logging sketch is given after the list):
  • Hand dimensions and individual variability;
  • Three-dimensional motion trajectories and postural characteristics;
  • Finger coordination patterns;
  • Operational efficiency (task duration and frequency);
  • Interaction accuracy (target localization and error rate).
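To make the recorded indicators concrete, the following is a minimal Python logging sketch, not the authors' implementation: it assumes a hypothetical per-frame record with placeholder field names (palm position, fingertip distances, target button) that is serialized to CSV for offline analysis.

import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class HandFrame:
    # Hypothetical per-frame record of the hand-tracking stream (field names assumed).
    timestamp: float          # seconds since task start
    phase: str                # "TMS", "Weapon", "Trigger", or "CMS"
    palm_x: float             # hand centroid position (mm, joystick-centered frame)
    palm_y: float
    palm_z: float
    thumb_index_dist: float   # thumb-index fingertip distance (mm)
    thumb_middle_dist: float  # thumb-middle fingertip distance (mm)
    target_button: str        # button designated by the task script
    pressed: bool             # whether a press event was registered in this frame

def log_frames(frames, path="hand_frames.csv"):
    """Write captured frames to a CSV file for offline preprocessing."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(HandFrame)])
        writer.writeheader()
        for frame in frames:
            writer.writerow(asdict(frame))

A flat record of this kind is convenient because each of the indicators listed above (trajectories, coordination, efficiency, accuracy) can then be derived from the same table in post-processing.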

2.3. Participants

We recruited a total of 15 university students (aged 20–30) for this study. Two participants took part in a pre-experiment for equipment debugging and task flow verification; this pre-experiment confirmed that the experimental equipment operated stably and that the task flow design was reasonable and unambiguous, ensuring the data quality of the formal experiment. The pre-experimental data did not involve statistical analysis of the core research variables and were not included in the final dataset. The remaining 13 participants (8 males, 5 females) constituted the formal experiment sample. All formal participants were non-pilots, with cognitive and physical characteristics similar to those of novice fighter trainees. The gender breakdown of the sample was referenced from the global distribution of fighter pilots, with the female proportion slightly increased to improve representativeness. Ergonomics research recommends a sample size of 10–30 participants to balance statistical power and experimental controllability. A power analysis conducted in G*Power 3.1 (alpha = 0.05, large effect size) indicated that a sample of 13 was sufficient for exploratory analysis.
Inclusion criteria were that participants’ hands be between the 10th and 90th percentiles of anthropometric adult hand data [40] with normal hand dexterity and vision, and no musculoskeletal-related injuries. All the participants signed informed consent before the experiment.

2.4. Experimental Design and Tasks

2.4.1. Experimental Environment

To achieve more realistic operation and full immersion, a head-mounted VR display, the Meta Quest 3 (Meta Platforms, Inc., Menlo Park, CA, USA), was used. Participants wore the headset and viewed a 1:1-scale virtual cockpit resembling that of an actual fighter aircraft. They were seated in a fixed position representing the normal pilot orientation, with seat inclination and height adjusted to ergonomic norms [41]. Head movements were coordinated with the virtual viewpoint to enable natural gaze.
A Leap Motion sensor was placed in front of the participant at the body midline to capture hand movements in real time and map them into the VR scene. Its installation height was determined from the sitting knee height of a 50th-percentile adult [42], ensuring that the participant's right hand operated within the sensor's best recognition range.
The participants interacted directly with the virtual joystick buttons with their right hand without a controller as seen in Figure 3. When a finger touched or pressed a target position, the system triggered the corresponding event and recorded data.

2.4.2. Formal Experiment

Experiments were conducted individually. Before the formal tasks, each participant received detailed written instructions and had an acclimatization period with a full physical HOTAS joystick (right hand). The experiment covered the common button operations on the right-hand HOTAS control stick shown in Figure 4, with a focus on thumb–index interaction. Two commonly used interaction types were involved: push buttons and a 4-way hat switch. To match a typical combat sequence, the tasks were ordered as follows: Target Management Switch (TMS) → Weapon Release (Weapon) → Trigger Firing (Trigger) → Countermeasure Management Switch (CMS).
Specifically, the TMS, used for target selection and switching, was modeled as four abstracted independent single-press tasks (up, down, left, right), corresponding to multi-target operation conditions. The Weapon button and the Trigger button, which represent releasing a weapon and firing immediately, were each abstracted as a single press. Finally, the CMS, which releases decoys and selects the countermeasure mode, was likewise abstracted as four separate single-press tasks. All adopted tasks thus used single discrete press operations, avoiding multi-stage trigger differences and keeping the task type consistent.
Within 3 min, participants completed the following five sequential phases, as shown in Figure 5 (a sketch encoding this sequence is given after the list):
  • Initialization phase: familiarization with the experimental environment, with baseline hand data collected to calibrate the virtual hand model;
  • Phase 1: sequential pressing of the TMS button in the up, down, left, and right directions;
  • Phase 2: pressing the Weapon button;
  • Phase 3: pressing the Trigger button;
  • Phase 4: sequential pressing of the CMS button in the up, down, left, and right directions.
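As referenced above, the following is an illustrative Python sketch, not the study's actual task script, showing how the four-phase protocol could be encoded as data so that each press event can be checked and timestamped against the expected target; all names are hypothetical.

PHASES = [
    ("Phase 1", "TMS",     ["up", "down", "left", "right"]),
    ("Phase 2", "Weapon",  ["press"]),
    ("Phase 3", "Trigger", ["press"]),
    ("Phase 4", "CMS",     ["up", "down", "left", "right"]),
]

def expected_events():
    """Flatten the protocol into the ordered list of expected button events."""
    return [(phase, button, direction)
            for phase, button, directions in PHASES
            for direction in directions]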
Throughout the entire process, both the Leap Motion and UE5 systems recorded the movement trajectory and task performance, and the experiment was video recorded and accompanied by observational notes.

2.4.3. Post-Experiment Feedback

After finishing the tasks, participants provided subjective feedback through questionnaires and a semi-structured interview. The questionnaires covered three instruments: the System Usability Scale (SUS) [43], the NASA Task Load Index (NASA-TLX) [44], and the User Experience Questionnaire (UEQ) [45], all administered on a 5-point Likert scale.

2.5. Data Acquisition and Preprocessing

This study used a mixed-methods design integrating quantitative and qualitative data. Ergonomic, performance, and usability-related indicators were derived from the Leap Motion hand-tracking data.
The central dataset contained hand trajectories (X, Y, Z coordinates), velocity and acceleration parameters, inter-finger distances (thumb–index and thumb–middle), task durations, inter-key intervals, and operation accuracy. All data were preprocessed with standard normalization, and outliers were removed to keep the dataset consistent and valid for analysis.
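A minimal preprocessing sketch is given below, assuming the data are loaded into a pandas DataFrame with placeholder column names; it applies a simple 1.5 × IQR outlier filter followed by z-score (standard) normalization, which is one plausible reading of the preprocessing described above rather than the authors' exact procedure.

import pandas as pd

KINEMATIC_COLS = ["palm_x", "palm_y", "palm_z", "velocity", "acceleration"]  # assumed names

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Drop rows lying outside 1.5 * IQR on any kinematic column (simple outlier removal).
    for col in KINEMATIC_COLS:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        df = df[(df[col] >= q1 - 1.5 * iqr) & (df[col] <= q3 + 1.5 * iqr)]
    # Standard (z-score) normalization so participants and phases are comparable.
    df[KINEMATIC_COLS] = (df[KINEMATIC_COLS] - df[KINEMATIC_COLS].mean()) / df[KINEMATIC_COLS].std()
    return df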

3. Results

3.1. Sample Characteristics and Task Completion

All 13 participants in the formal experiment completed the designated tasks; none stopped prematurely or produced missing data. The participants' basic information is presented in Figure 6.

3.2. Task Performance

3.2.1. Button Operation Efficiency

Button operation efficiency was assessed through the total task duration (Figure 7) and the interval between button presses (Figure 8).
At the task level, the total operating time reflects overall efficiency and complexity. Phases consisting of a simple single action, such as Weapon and Trigger, had the shortest durations, with most participants completing them within 2.5–4 s. In stark contrast, the TMS and CMS phases, which demanded multi-directional input, tended to have much longer durations, typically ranging between 20 and 60 s.
To further verify the significance of differences in total durations among various phases, statistical tests were performed on the data. First, the Shapiro–Wilk test for residuals showed p > 0.05, confirming the normality assumption was satisfied and thus the repeated-measures ANOVA results were credible. The sphericity test result indicated that the data did not meet the sphericity assumption (χ² = 39.405, df = 5, p < 0.001). Therefore, the Greenhouse–Geisser correction was adopted (ε = 0.552), and the corrected result showed that there were significant differences in the total durations among the 4 phases (F = 18.306, p < 0.001, ηp² = 0.604). The Bonferroni post hoc test results further revealed that Phase 1 had significant differences from both Phase 2 and Phase 3 (both p < 0.001) but no significant difference from Phase 4 (p = 1.000); Phase 4 also had significant differences from both Phase 2 and Phase 3 (both p ≤ 0.003) but no significant difference from Phase 1 (p = 1.000); there was no significant difference between Phase 2 and Phase 3 (p = 1.000), which was consistent with the characteristic in the descriptive statistics that phases with the same level of complexity exhibited similar time consumption.
At the button level, inter-button intervals reflect local efficiency and motor skill. The Weapon and Trigger buttons had smaller, more compressed interval distributions, with medians of 1.72 s and 1.86 s, respectively. In contrast, directional commands such as CMS Up and TMS Down showed longer intervals (e.g., CMS Up: median = 12.96 s, mean = 19.05 s, right-skewed), reflecting delayed hand repositioning and spatial uncertainty.
To verify the validity of the statistical analysis, the Shapiro–Wilk test was first performed on residuals; the result (p > 0.05) confirmed that the normality assumption was satisfied. Mauchly's test indicated that the data met the sphericity assumption (χ² = 9.743, df = 5, p = 0.084 > 0.05). Bonferroni post hoc tests further revealed that Phase 1 differed significantly from both Phase 2 and Phase 3 (p = 0.009 and p = 0.004, respectively) but not from Phase 4 (p = 1.000); Phase 4 also differed significantly from both Phase 2 and Phase 3 (p = 0.043 and p = 0.049, respectively) but not from Phase 1 (p = 1.000); and there was no significant difference between Phase 2 and Phase 3 (p = 1.000).
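The sequence of tests reported above (Mauchly's sphericity test, Greenhouse–Geisser correction, Bonferroni-adjusted post hoc comparisons) can be reproduced with the pingouin library; the sketch below is illustrative only, and the long-format column names ('participant', 'phase', 'duration') are assumptions rather than the study's actual schema.

import pingouin as pg

def analyze_duration(long_df):
    """long_df: one row per participant x phase, with a 'duration' column (assumed schema)."""
    # Mauchly's test of sphericity.
    spher = pg.sphericity(long_df, dv="duration", within="phase", subject="participant")
    # Repeated-measures ANOVA; correction=True reports Greenhouse-Geisser corrected p-values.
    aov = pg.rm_anova(long_df, dv="duration", within="phase", subject="participant",
                      correction=True)
    # Bonferroni-adjusted pairwise comparisons (pairwise_ttests in older pingouin versions).
    posthoc = pg.pairwise_tests(long_df, dv="duration", within="phase",
                                subject="participant", padjust="bonf")
    return spher, aov, posthoc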

3.2.2. Button Operation Accuracy

The accuracy and error rate for each button task, as well as the total number of attempts, are displayed in Table 1. The Weapon and Trigger operations attained high accuracy rates of 92.9% and 86.7%, respectively, with correspondingly low error rates. The mean number of attempts on these two tasks was only slightly above 1 (1.08 ± 0.27 and 1.15 ± 0.35).
In contrast, the directional tasks on the TMS and CMS buttons showed lower accuracy (56.5–72.2%), higher error rates, and more attempts on average. CMS Up had the lowest accuracy (56.5%), the most errors (43.5%), and the highest mean number of attempts (1.77 ± 0.58).
Single-button press tasks (Weapon and Trigger) showed a high degree of consistency, whereas the multi-directional tasks (TMS and CMS) showed more variation and deviation in performance. It should be noted that statistical tests showed no significant differences in accuracy rates across stages, so these results should be interpreted with caution.

3.3. Kinematic and Spatial Characteristics

3.3.1. Spatiotemporal Distribution of Hand Movements (Centroid Analysis)

Tracking of the hand centroid over time revealed the following.
Spatial concentration: Heatmap analysis showed that hand movement points were concentrated mainly within X (15–25), Y (−30 to 20), and Z (−10 to 0) mm relative to the coordinate origin at the joystick center. From Figure 9a, the participants' hand motion is distributed around the neutral grip zone of the control stick, indicating relatively consistent hand location with limited deviation from the central operating position. Hand travel distance satisfied the normal distribution, but the sphericity assumption of the repeated-measures ANOVA was violated (χ² = 65.556, degrees of freedom [df] = 5, p < 0.001). After applying the Greenhouse–Geisser correction (ε = 0.379), a statistically significant difference in hand travel distance was observed between operation stages (F = 10.098, p = 0.006, ηp² = 0.457). Bonferroni multiple comparisons further revealed that hand travel distance in Stage 1 differed significantly from that in Stages 2 and 3, Stage 4 differed significantly from Stages 2 and 3, and no significant difference was found between Stages 2 and 3.
Position temporal stability and directional preference (Figure 9b): Standardized displacement showed that hand positions changed little over time and followed continuous paths. Greater variation appeared along the Y axis than along the X and Z axes, indicating small forward and backward movements during button operations. Rotational measures showed a strong central tendency, indicating consistent directional behavior across operations. Rotation around the Z axis was slightly more variable than rotation around the X and Y axes, likely reflecting subtle wrist pronation and supination while reaching side-positioned buttons.
Velocity and acceleration (Figure 10): Both the velocity and acceleration distributions were relatively compact, with the densest region near the origin. Vector magnitudes displayed a unimodal, right-skewed distribution dominated by low-intensity movements: acceleration fell primarily in the range 0–20 mm/s² and peak velocity was around 5 mm/s, indicating fine, controlled hand motions characteristic of precision manipulation.
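The kinematic quantities reported in this subsection (travel distance, speed, acceleration) can be derived from the sampled centroid trajectory as in the following sketch, which assumes a uniformly sampled (n_frames, 3) position array in millimetres; it is a simplified illustration rather than the exact processing used in the study.

import numpy as np

def kinematics(positions: np.ndarray, dt: float):
    """positions: (n_frames, 3) hand-centroid coordinates in mm; dt: sampling interval in s."""
    steps = np.diff(positions, axis=0)          # frame-to-frame displacement vectors
    step_len = np.linalg.norm(steps, axis=1)    # displacement magnitude per frame (mm)
    travel_distance = step_len.sum()            # total path length (mm)
    speed = step_len / dt                       # speed profile (mm/s)
    acceleration = np.diff(speed) / dt          # acceleration profile (mm/s^2)
    return travel_distance, speed, acceleration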

3.3.2. Finger Coordination Patterns (Spatial Localization Strategies and Trajectory Features)

For fine-grained finger motion characterization, we first verified that the data conformed to a normal distribution via the Shapiro–Wilk test, and then carried out three analyses: (1) relative three-dimensional (3D) trajectories between operational fingers and pressed buttons (Figure 11), (2) fingertip distance distributions (Figure 12a), and (3) spatial distances between fingers and button keypoints (Figure 12b).
(1) Relative position trajectories.
In all four phases, fingers stayed near or at the button origin (0, 0, 0), showing targeting behavior. Phase 1 (TMS) shows wider movement in both directions, especially along the y-axis, whereas Phases 2 and 3 (Weapon, Trigger) show short, compact movement, and Phase 4 (CMS) is slightly more dispersed with longer paths. Statistical analysis showed that the sphericity assumption was not met (χ² = 57.629, df = 5, p < 0.001); after the Greenhouse–Geisser correction (ε = 0.456), there were significant differences in travel distance between phases (F = 11.368, p = 0.002, ηp² = 0.486); Bonferroni multiple comparisons confirmed that Phases 1 and 4 differed significantly from Phases 2 and 3, while no difference existed between Phases 2 and 3.
(2) Fingertip distance distributions.
As can be seen from Figure 12a, the thumb–index and thumb–middle finger distances were shorter in Phases 1 and 4, whereas Phases 2 and 3 showed larger separations. Within each phase, the thumb–index distance was always smaller than the thumb–middle distance. Across phases, average distances followed the order Phase 3 > Phase 2 > Phase 1 > Phase 4. Phase 1 also showed more variation, indicating more repositioning during the early stage. For the thumb–index distance, the sphericity assumption was met (χ² = 8.892, df = 5, p = 0.115); Bonferroni multiple comparisons confirmed that Phases 1 and 4 differed significantly from Phases 2 and 3, with no difference between Phases 2 and 3. For the thumb–middle distance, the sphericity assumption was not met (χ² = 11.643, df = 5, p = 0.041); after the Greenhouse–Geisser correction (ε = 0.608), there were significant differences between phases (F = 9.098, p = 0.002, ηp² = 0.431); Bonferroni multiple comparisons confirmed that Phase 4 differed significantly from Phases 2 and 3, with no other inter-phase differences.
(3) Finger–button spatial distances.
As can be seen from Figure 12b, the thumb–TMS distance was shortest during Phase 1. The thumb–Weapon distance was shortest in Phase 2 and increased slightly in Phase 3. The index–Trigger distance in Phase 3 was very small but more variable. In the first three stages, the distance between the thumb and the CMS was considerably greater; in Phase 4, the thumb–CMS distance became small at the cost of larger distances to the TMS and Weapon button, reflecting the compensation imposed by the button placement. Statistical analysis showed that the sphericity assumption was not met (χ² = 15.058, df = 5, p = 0.010); after the Greenhouse–Geisser correction (ε = 0.673), there were no significant differences in finger–button distance between phases (F = 0.809, p = 0.458, ηp² = 0.063); Bonferroni multiple comparisons confirmed no differences between any phases.
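For reference, the distance measures used in this subsection reduce to per-frame Euclidean distances between tracked keypoints; the sketch below assumes fingertip trajectories stored as (n_frames, 3) arrays and a fixed button keypoint, and is illustrative rather than the study's exact code.

import numpy as np

def fingertip_distances(thumb, index, middle):
    """Per-frame thumb-index and thumb-middle fingertip distances (mm)."""
    return (np.linalg.norm(thumb - index, axis=1),
            np.linalg.norm(thumb - middle, axis=1))

def finger_button_distance(fingertip, button_keypoint):
    """Per-frame distance from a fingertip trajectory to a fixed button keypoint (mm)."""
    return np.linalg.norm(fingertip - button_keypoint, axis=1)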

3.4. Correlation Analysis

Spearman correlation coefficients were calculated across all operational phases between kinematic parameters (X variables) and task-related parameters (Y variables). The results are summarized below and illustrated in Figure 13a–d. Given the potential inflation of Type I error caused by multiple pairwise correlation comparisons, the Benjamini–Hochberg (FDR) method was applied to adjust the statistical results, with the target false discovery rate set to α = 0.05. Detailed statistical results can be found in Appendix A, Table A3. Correlation strength was classified as follows: weak (0.1 ≤ |r| < 0.3), moderate (0.3 ≤ |r| < 0.7), strong (|r| ≥ 0.7). This study is exploratory, and the correlation results are used only for descriptive analysis to preliminarily observe associations among variables. Given the exploratory nature of the research, these results do not constitute definitive conclusions and should be interpreted cautiously and applied in combination with further empirical data.
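The correlation procedure described above can be sketched as follows, pairing scipy's Spearman test with the Benjamini–Hochberg adjustment from statsmodels; the variable lists and DataFrame layout are placeholders, not the study's actual variable names.

import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def correlate(df, x_cols, y_cols, alpha=0.05):
    """Spearman r and raw p for every X-Y pair, with FDR-adjusted p-values appended."""
    rows = []
    for x in x_cols:
        for y in y_cols:
            r, p = spearmanr(df[x], df[y])
            rows.append({"x": x, "y": y, "r": r, "p": p})
    result = pd.DataFrame(rows)
    # Benjamini-Hochberg adjustment across all pairwise tests (controls the FDR at alpha).
    result["p_fdr"] = multipletests(result["p"], alpha=alpha, method="fdr_bh")[1]
    return result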
(1) Phase 1—TMS Operation
In Phase 1, no statistically significant results were observed after correction, but there was an overall consistent and interpretable correlation trend.
The total task duration showed a moderate positive correlation with the movement distances of the hand, thumb, and index finger (r = 0.484–0.516), indicating that larger movement amplitudes are generally accompanied by longer task completion times. The interval between operations in the downward direction also showed a moderate positive correlation with movement distance (r = 0.440–0.478), suggesting that this direction may require more repositioning or fingertip adjustments. The variance in the thumb–index finger and thumb–middle finger distances showed a moderate positive correlation with total duration (r = 0.368–0.434), reflecting that when spatial control is less stable, participants are more likely to need additional fine-tuning movements, thereby prolonging the operation time. Speed-related variables showed a weak to moderate negative correlation with time indicators (r = −0.187 to −0.467), while the minimum distance from the thumb to TMS showed a moderate negative correlation (r = −0.478 to −0.549), indicating that faster movement speeds or closer finger positions may reduce completion time. In terms of accuracy, thumb speed showed a moderate positive correlation with accuracy (r = 0.411), while larger movement amplitudes and distance fluctuations showed a weak negative correlation (r = −0.160 to −0.271), indicating that stable finger spatial control may be associated with higher pressing reliability.
(2) Phase 2—Weapon Operation
Following Spearman correlation analysis and multiple comparison correction, Phase 2 showed a stable and partially significant correlation pattern.
The total task duration showed a moderate positive correlation (significant), with the movement distances of the hand and index finger (r = 0.681–0.687, p < 0.05), and a moderate positive correlation trend with the thumb movement distance (r = 0.659). The operation interval showed an almost consistent moderate positive correlation pattern with these three movement distance indicators, among which the relationships with the hand and index finger also reached a significant level (r = 0.692–0.698, p < 0.05). The variance in the thumb–index finger and thumb–middle finger distances both showed a moderate positive correlation with time indicators (r = 0.577–0.599), indicating that less stable control of finger distances may increase the need for compensatory fine-tuning movements. Among speed variables, the thumb approach speed showed a strong negative correlation (significant) with total duration and operation interval (r = −0.764 to −0.780, p < 0.05), indicating that a higher thumb approach speed can effectively shorten the operation execution time. In terms of accuracy, movement distance and distance variance both showed a moderate negative correlation trend (r = −0.401 to −0.445) but did not reach significance, and speed-related variables had almost no correlation with accuracy (r = 0.045–0.134).
(3) Phase 3—Trigger Operation
Following Spearman correlation analysis and multiple comparison correction, Phase 3 showed strong and consistent significant correlation characteristics.
The index finger approach speed showed an extremely strong positive correlation (significant) with total duration and operation interval (r = 0.967, p < 0.001), indicating that when participants approach the button at a faster speed, they are more likely to need additional correction movements after reaching it, resulting in a significant prolongation of the overall task time. The movement distances of the hand and thumb both showed a strong positive correlation (significant) with time indicators (r = 0.764–0.791, p < 0.05), while the index finger movement distance showed a moderate positive correlation trend (r = 0.462). In terms of accuracy, the index finger approach speed showed a moderate negative correlation (significant) with accuracy (r = −0.742, p < 0.05), indicating that a high-speed approach reduces pressing stability; the movement amplitudes of the thumb and hand showed a moderate negative correlation trend with accuracy (r = −0.536 to −0.577), and other spatial variables showed a weak correlation with accuracy (r = −0.124 to −0.289). These characteristics highlight that the efficiency of the Trigger button is highly related to dynamic approach control, especially emphasizing the importance of avoiding excessively fast approaches.
(4) Phase 4—CMS Operation
Phase 4 showed no statistically significant relationships but presented a stable trend similar to Phase 1.
The total task duration showed a moderate positive correlation trend with hand movement distance (r = 0.659), and a moderate negative correlation trend with thumb speed and the minimum thumb–button distance (r = −0.637 to −0.643), indicating that faster movements or closer initial hand positions may shorten the completion time. The right direction interval showed a moderate positive correlation trend with hand movement distance (r = 0.566) and a moderate negative correlation trend with thumb speed (r = −0.549). Several weak to moderate negative correlation trends were also observed for intervals in other directions (r = −0.407 to −0.593), reflecting differences in spatial adjustment requirements for different directions. Accuracy showed a moderate positive correlation trend with thumb speed and acceleration (r = 0.400–0.686), indicating that CMS operation may be more positively affected by rapid dynamic control, while overall spatial variables had a weak correlation with accuracy (r = −0.183 to 0.309).

3.5. Subjective Questionnaire Results

Questionnaires from the 13 participants were analyzed, and reliability and validity tests were carried out. The overall Cronbach's α coefficient was 0.810, indicating good internal reliability, meaning that the scales effectively reflected users' perceptions of usability, experience, and workload (Table 2).
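For transparency, Cronbach's α can be computed directly from the respondents-by-items score matrix using its standard definition; the sketch below is a generic illustration (13 respondents × questionnaire items assumed), not the authors' analysis script.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, n_items) matrix of Likert-scale responses."""
    n_items = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of respondents' total scores
    return n_items / (n_items - 1) * (1.0 - item_var_sum / total_var)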

3.5.1. System Usability and User Experience Evaluation

The data met normal distribution assumptions, and Mauchly's test of sphericity was satisfied (χ² = 49.305, df = 44, p = 0.371). A repeated-measures ANOVA further revealed differences across evaluation dimensions (F = 3.963, p < 0.001). Bonferroni-adjusted comparisons indicated that design innovation was rated significantly higher than natural interaction (p = 0.04) and operation satisfaction (p = 0.01), while other dimension pairs showed no significant differences. These results suggest that the evaluative structure is consistent overall, but users perceived design innovativeness as a particularly strong aspect.
  • Ease of use (Q4): Operational intuitiveness had a mean of 4.1, with 77% of participants giving positive ratings. The timeliness of system feedback also averaged 4.1, with 92.3% agreeing on its effectiveness. The virtual HOTAS system therefore showed good usability;
  • Efficiency and performance (Q5): Button localization efficiency averaged 3.5 (moderate to high), leaving room for improvement. System response speed averaged 3.8, with 61.6% of participants rating it positively;
  • Comfort and operational experience (Q6): Both sub-items averaged 3.7. Most participants found the system comfortable to use, but a few reported hand fatigue, indicating room to reduce fatigue during prolonged holding and use;
  • Stimulation and novelty (Q7): The excitement dimension averaged 4.2 points, and design innovativeness attained the highest score, 4.6 points, implying that users viewed the excitement and novelty created by the system positively;
  • Overall satisfaction and recommendation (Q8): Overall satisfaction scored 3.9 and willingness to recommend scored 4.2, indicating a consistently positive rating and that users are willing to accept and share the system (Table 3).

3.5.2. Task Load and Performance Evaluation

The task load questionnaire showed good data characteristics, with normality satisfied and sphericity upheld (χ² = 11.858, df = 9, p = 0.004). A repeated-measures ANOVA revealed differences across the load dimensions (F = 4.474, p = 0.004). Bonferroni-adjusted comparisons indicated that mental demands were rated significantly higher than physical demands (p = 0.008), while other pairs showed no significant differences. This suggests that the virtual HOTAS task imposed a relatively higher cognitive load than physical load, which is typical of spatially guided fine-motor tasks.
  • Task demands (Q9–Q11): Attention demand scored a mean of 3.2, which is relatively high. Physical demand scored 2.4, indicating relatively low effort. Time pressure scored 2.8, reflecting a moderate-to-low sense of urgency. Overall, the system maintained a reasonable balance between mental and physical workload, consistent with the ANOVA findings.
  • Task performance and emotional feedback (Q12–Q13): Self-rated task completion averaged 3.7, with most participants indicating that their performance met their expectations. Frustration was scored at 2.5, indicating low negative affect (Table 4).

4. Discussion

It should be noted that this study has an exploratory nature, so the relevant findings should still be interpreted with caution and should not be overly extrapolated.

4.1. Interpretation of Efficiency Results

Efficiency results showed that the task-level duration and the button-level interval metrics capture distinct aspects of operational efficiency.
The shorter time spent in the Weapon and Trigger phases means that single hand actions could be performed quickly with little deliberation or hand movement. This accords with ergonomics principles that stress easy access and low search cost, together with stable thumb–index coordination and small inter-button switch times. Previous ISO guidelines and VR research have typically reported that a single hand action in immersive environments takes approximately 0.8–2.0 s [46,47,48,49,50]. In our study, the total duration and inter-button intervals of the Weapon and Trigger phases fall within or slightly above this typical range, indicating that these single-press actions align well with established VR interaction performance baselines. In contrast, the TMS and CMS phases took much longer and showed larger right-skewed intervals, suggesting that multi-directional inputs made the tasks harder and more variable among users, with visual–spatial factors to consider. Participants needed more time for spatial localization and confirmation, especially in the absence of haptic cues. Given that Phases 1 and 4 consist of multiple sequential directional thumb operations, involving repeated finger realignment, spatial adjustments, and confirmation checks, their cumulative durations and long inter-button intervals cannot be meaningfully compared with single-step VR actions reported in the literature. For this reason, the four-phase internal comparison is a more appropriate reference frame, and the obtained performance values serve as preliminary baseline data for subsequent HOTAS ergonomics research.
This broadly aligns with the correlation results, in which greater hand travel distances and thumb–index distance variances were positively correlated with completion times and errors, implying that operators engaged in more spatial exploration and were less fluent in their operations. The dual-dimensional evaluation, combining total task duration and inter-button interval analysis, provides a holistic understanding of HOTAS operational efficiency, revealing how task complexity, spatial uncertainty, and feedback conditions jointly shape human performance.

4.2. Kinematic Stability and Ergonomic Implications

At all stages, participants kept fairly low and steady hand trajectories, and their motions were controlled and efficient. However, during the TMS and CMS phases, trajectory dispersion and motion amplitude increased considerably, matching the higher mental and motor complexity.
Movement amplitude emerged as the most stable determinant of temporal performance. Across all phases, larger hand or finger movement trajectories were reliably associated with longer completion times, and this relationship was most pronounced in single-press tasks. This indicates that reducing unnecessary spatial displacement remains a key goal in ergonomics. Similarly, unstable inter-finger spacing was often accompanied by longer operation durations, implying that more corrective adjustments are needed during spatial searching. Secondly, the impact of approach speed showed distinct task dependency. In the Weapon task, a faster thumb approach speed improved efficiency; however, in the Trigger task, excessively fast approach speed of the index finger led to longer operation times and lower accuracy, suggesting that high-precision movements require moderate and controlled approach actions. Additionally, the spatial proximity of the operating finger to the button, especially the proximity of the thumb to the target button, showed a beneficial trend in reducing operation time, highlighting the importance of ergonomic resting postures and compact control layouts.
Overall, these patterns emphasize that optimizing Hands-On Throttle and Stick (HOTAS) interactions should focus on minimizing redundant movements, stabilizing finger spacing, and adjusting requirements for approach speed based on task precision. Providing stronger tactile or audio–visual cues may also reduce spatial uncertainty, thereby improving both efficiency and accuracy under higher workload conditions.

4.3. Insights into Finger Coordination

During these four phases, the coordination between the thumb and index finger consistently exhibited the lowest spatial variability, indicating that this combination provides the most stable and controllable movement pattern for fine manipulation. The trend of the correlation results also showed that greater fluctuations in the distance between the thumb and index finger were associated with longer operation durations, which further confirms that this combination constitutes the core ergonomic area for precise and frequent input. In contrast, phases requiring the involvement of the middle finger or multi-directional movements of the thumb (such as CMS) demonstrated significantly greater dispersion in terms of finger spacing and movement amplitude. These patterns suggest that the thumb–middle finger combination operates within a functional range with lower stability and greater reliance on corrections, thus making it less suitable for high-precision or time-critical movements.
In summary, the research results support placing primary or safety-critical buttons within the natural operating range of the thumb and index finger, while assigning secondary or directional control buttons to areas with higher spatial tolerance. Optimizing these ergonomic areas and ensuring they can accommodate users with different hand sizes can further improve operational accuracy and long-term comfort.

4.4. User Differences and Design Adaptation

Regarding subjective feelings and objective data, there were clear individual differences. Overall usability and workload ratings were decent, but system responsiveness and comfort were rated lower, consistent with the lack of physical feedback in the virtual setup. The NASA-TLX, UEQ, and interview results showed that the greater complexity of the CMS and TMS tasks led to higher task effort and lower fluency. No gender difference was found, but hand size clearly affected performance: 50th-percentile users performed best in both efficiency and accuracy, while P10 and P90 hand-size users adapted less well. This highlights the need for a HOTAS design that is inclusive of users and accounts for both physical and mental aspects.
An effective HOTAS design must balance operational efficiency, comfort, and inclusiveness, improving the physical layout and feedback mechanisms so as to meet diverse user needs and achieve long-term operational reliability.

4.5. Limitations and Future Research Directions

The virtual simulation environment with Leap Motion hand-tracking exhibited great flexibility and innovative capability for dynamically collecting human factors data on HOTAS operation. However, several limitations remain and constrain the extrapolation of the conclusions of this paper:
  • Limited virtual environment fidelity: There was no physical HOTAS device, so users could not feel true force feedback or the push-back pressure of the throttle. This may affect their judgment of comfort and fatigue, and may also lead to delays in operation confirmation and deviations in reaction time, thereby affecting the evaluation of operational accuracy and efficiency. In addition, the higher degree of freedom within the virtual environment did not fully emulate the actual spatial constraints and operational ranges of a real cockpit;
  • Technical restrictions on data collection: Wearing gloves such as flight gloves reduces Leap Motion detection accuracy, so they could not be used in practice. In addition, finger pressing force and gripping force, which would be needed to analyze physiological load, were not measured, making it difficult to quantify differences in feedback perception under different operating forces;
  • Sample composition limitations: The participant group consisted mainly of university students aged 18–25, whose profiles differ from those of professional pilots, limiting generalizability. Moreover, the proportion of P50 users was high (62%) and the number of users at other percentiles was small, so the adaptability analysis is easily biased;
  • Simplified task complexity: To ensure data controllability, the experimental tasks were simplified relative to typical operational scenarios. This may not reflect actual flight conditions and may underestimate cognitive workload and task difficulty.
Future work can be extended as follows:
  • Enhancing physical realism: Combine the model of physical human–computer interaction devices synchronized with the virtual cockpit, integrate haptic feedback systems including force feedback devices with different force levels, vibration modules, pressure-sensitive gloves, etc., and simultaneously add operation-related acoustic feedback (such as key trigger sounds and force feedback prompt sounds) to achieve the coordinated matching of visual, haptic, and auditory feedback. This method can accurately collect pressing force and grip force data, making the judgment of comfort and physical burden more accurate, while optimizing the operation confirmation experience and reducing reaction time deviations.
  • Developing multidimensional evaluation methods: Future research should advance a combined subjective–objective evaluation framework. Objective measures such as hand trajectories, pressing accuracy, and task duration could be integrated with physiological signals, including EEG and eye tracking, to quantify operational efficiency and cognitive load more precisely. Complementary subjective measures like NASA-TLX, SUS, and UEQ would give complete insights about workload, usability, and user experience.
  • Improving data accuracy and diversity: Building on improved hand-tracking methods such as infrared thermography, skeletal recognition, and multimodal sensor fusion, behavioral and physiological data can be collected simultaneously. The sample should be expanded to a broader age range, varied experience levels, and more extreme hand sizes to improve adaptability assessment and create a more universal product.
  • Data-driven design support platform creation: A modular framework for information collection and analysis can facilitate the integration of multiple devices in a simultaneous and real-time manner. A data-driven decision engine can turn human factors data into design suggestions that improve both efficiency and comfort, reducing cognitive load at the same time. Further on from military aviation, it may have application to civil aviation, emergency response systems, and immersive game environments.
In future research, our team will continue to advance the research, development, and validation of this experimental platform, focusing on the above-mentioned future work directions, particularly in the aspects of virtual-real integration and multi-sensory feedback. This is aimed at establishing a dynamic human factor data collection system that is closer to real task environments, thereby providing a more reliable design basis for the optimization of human–machine ergonomics of control handles such as HOTAS.

5. Conclusions

This study developed a virtual cockpit in UE5 with integrated Leap Motion hand-tracking to dynamically collect human factors data and to examine the feasibility of capturing the operation trajectories, finger coordination, and interaction performance of typical HOTAS tasks.
The experiment showed that effective HOTAS joystick operation requires precise coordination between the thumb and index finger, as well as an ergonomic button layout. Complex tasks such as TMS and CMS showed greater time variability and movement instability, indicating the need for more refined spatial arrangement and continuous tactile or multi-sensory feedback. In addition, participants with hand sizes at the 50th percentile showed the most balanced performance, highlighting the importance of adaptive design. Furthermore, our results showed that movement amplitude and unstable finger spacing were consistently associated with longer task times and lower accuracy, while the impact of approach speed varied by task: in the Weapon task, a faster thumb approach speed improved efficiency, but in the Trigger task, an excessively fast index finger approach speed reduced performance and accuracy. The phased comparison therefore provides an internal benchmark for interpreting task-specific performance in a virtual reality joystick environment.
The main contribution of this study is a data-driven method for collecting and analyzing human factors data, which overcomes the static-measurement limitations of traditional methods and provides a quantitative basis for HOTAS ergonomic optimization. Compared with conventional prototype-based methods, this approach offers advantages in cost, flexibility, and data processing, enabling rapid feedback on human factors data in virtual environments. The results establish a benchmark dataset for subsequent comparison with hybrid (virtual–physical) interaction experiments. Beyond military aircraft, the method can also be extended to civil aviation and to more complex industrial workstations involving human–machine interaction. It offers a new paradigm and technical path for the design of future intelligent, data-driven human–machine interaction systems.

Author Contributions

Conceptualization, Y.F., M.W. and J.R.; methodology, Y.F.; software, Y.F. and S.L.; validation, Y.F. and S.L.; formal analysis, Y.F.; investigation, Y.F.; resources, M.W. and J.R.; data curation, Y.F.; writing—original draft preparation, Y.F.; visualization, Y.F.; supervision, M.W.; project administration, M.W.; funding acquisition, M.W. and J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Provincial Science and Technology Project of Sichuan, China (2022ZHCG0043-LH).

Institutional Review Board Statement

Ethical review and approval were waived for this study due to the absence of direct intervention, invasive procedures, and additional risks to participants; it is an academic exploration without commercial interests, and no potential threats to participants’ rights and interests exist.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Detailed Results of Statistical Tests

Appendix A.1. Repeated Measures ANOVA Results

To verify the statistical reliability of the research results, repeated-measures ANOVA was conducted on the ergonomic evaluation variables (task duration, operation interval time, accuracy rate, etc.). Multiple comparisons were corrected using the Bonferroni method, with the test level α = 0.05. The test process and results are described as follows.
Whether the data met the sphericity assumption was verified using Mauchly's test of sphericity; if the test was significant (p < 0.05), the Greenhouse–Geisser method was adopted for degrees-of-freedom correction, and if not significant (p ≥ 0.05), the uncorrected ANOVA results were reported. The statistical indicators include the sphericity test statistics (χ², df, p value), the Greenhouse–Geisser correction coefficient (ε), the F value, degrees of freedom (df), corrected p value, effect size (ηp²), and 95% confidence interval (95% CI). The effect size ηp² reflects the practical significance of a difference, the 95% CI indicates the reliability of the statistical estimates, and the Bonferroni correction controls the risk of Type I errors caused by multiple comparisons.
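For reference, the Greenhouse–Geisser adjustment reduces the ANOVA degrees of freedom by the factor ε; in its standard textbook form (not quoted from the paper),

\[
df_1 = \varepsilon\,(k-1), \qquad df_2 = \varepsilon\,(k-1)(n-1),
\]

where k is the number of within-subject levels (here k = 4 phases) and n the number of participants. For example, for task duration, ε = 0.552 gives df1 = 0.552 × 3 ≈ 1.66, matching the corrected df reported in Table A1.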
The detailed statistical results are shown in Table A1 and Table A2.
Table A1. Repeated measures ANOVA results of variables (Greenhouse–Geisser correction).

Variable | Mauchly's χ² | df | p | ε | F | Corrected df | Corrected p | ηp²
task duration | 39.405 | 5 | <0.001 | 0.552 | 18.306 | 1.657 | <0.001 | 0.604
operation interval time | 9.743 | 5 | 0.084 | – | – | – | – | –
accuracy rate | 13.442 | 5 | 0.020 | 0.635 | 0.888 | 1.904 | 0.420 | 0.069
hand travel distance | 65.556 | 5 | <0.001 | 0.379 | 10.098 | 1.138 | 0.006 | 0.457
finger travel distance | 57.629 | 5 | <0.001 | 0.456 | 11.368 | 1.369 | 0.001 | 0.486
finger–key distance | 15.058 | 5 | 0.010 | 0.673 | 0.809 | 2.018 | 0.458 | 0.063
thumb–index finger distance | 8.892 | 5 | 0.115 | – | – | – | – | –
thumb–middle finger distance | 11.643 | 5 | 0.041 | 0.608 | 9.098 | 1.824 | 0.002 | 0.431

95% CI = 95% confidence interval; ηp² = partial eta-squared (effect size); df = degrees of freedom; χ² = Mauchly's test statistic; ε = Greenhouse–Geisser correction coefficient. Corrected df and p-values were adjusted for sphericity violation where applicable; "–" indicates that no correction was applied because sphericity was met; p < 0.001 is reported for exact p = 0.000.
Table A2. Pairwise comparison results of each variable across stages (Bonferroni correction).

| Variable | Stage 1–2 | Stage 1–3 | Stage 1–4 | Stage 2–3 | Stage 2–4 | Stage 3–4 |
|---|---|---|---|---|---|---|
| task duration | <0.001 * | <0.001 * | 1.000 | 1.000 | 0.003 * | 0.003 * |
| operation interval time | 0.009 * | 0.004 * | 1.000 | 1.000 | 0.043 * | 0.049 * |
| accuracy rate | 1.000 | 0.992 | 0.617 | 1.000 | 1.000 | 1.000 |
| hand travel distance | 0.031 * | 0.030 * | 0.542 | 1.000 | 0.001 * | <0.001 * |
| finger travel distance | 0.022 * | 0.020 * | 0.777 | 1.000 | 0.004 * | 0.003 * |
| finger–key distance | 1.000 | 0.644 | 0.428 | 1.000 | 1.000 | 1.000 |
| thumb–index finger distance | 0.029 * | 0.037 * | 0.617 | 1.000 | 0.010 * | 0.021 * |
| thumb–middle finger distance | 0.126 | 0.056 | 1.000 | 0.421 | 0.013 * | 0.030 * |

* p < 0.05; values are Bonferroni-corrected p-values for pairwise comparisons between stages; α = 0.05 (significance level); “Stage 1–2” indicates the comparison between Stage 1 and Stage 2; p < 0.001 is reported in place of exact p = 0.000.

Appendix A.2. Correlation Analysis with Multiple Comparison Correction

To assess the reliability of the correlation analysis results, the Benjamini–Hochberg false discovery rate (FDR) correction was applied to control the Type I error risk arising from multiple correlation comparisons, with the significance level set at α = 0.05. Detailed statistical results of the correlation analysis after FDR correction are presented in Table A3.
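As an illustration of this correction procedure, the following minimal Python sketch computes Spearman correlations and applies the Benjamini–Hochberg adjustment using SciPy and statsmodels; the column names are hypothetical placeholders, and the snippet is a workflow sketch rather than the study’s actual analysis code.

```python
# Minimal sketch of Spearman correlation with Benjamini-Hochberg (FDR) correction,
# as described above. Column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("stage_metrics.csv")  # hypothetical per-participant stage metrics
outcome = "task_duration"
predictors = ["hand_travel", "thumb_travel", "index_travel", "approach_velocity"]

results = []
for col in predictors:
    rho, p = spearmanr(df[outcome], df[col])  # Spearman rank correlation
    results.append((col, rho, p))

# Benjamini-Hochberg procedure controls the false discovery rate at alpha = 0.05.
raw_p = [p for _, _, p in results]
reject, p_fdr, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for (col, rho, p), p_adj, sig in zip(results, p_fdr, reject):
    print(f"{col}: r = {rho:.3f}, p = {p:.3f}, FDR p = {p_adj:.3f}, significant = {sig}")
```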
Table A3. Correlation coefficients of ergonomic evaluation variables with FDR correction.

| Variable | Statistic | Hand Tr. | Thumb Tr. | Index Tr. | Op. Vel. | Op. Accel. | Op. Pos. Var. | Th-Idx Var. | Th-Mid Var. | Min. F-K Dist. | Av. App. Vel. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Time | Spearman r | 0.516 | 0.484 | 0.505 | −0.187 | 0.324 | 0.071 | 0.434 | 0.368 | −0.478 | 0.049 |
| | Sig. (2-tail) | 0.071 | 0.094 | 0.078 | 0.541 | 0.280 | 0.817 | 0.138 | 0.216 | 0.098 | 0.873 |
| | FDR p-value | 0.815 | 0.815 | 0.815 | 0.992 | 0.884 | 0.992 | 0.815 | 0.884 | 0.815 | 0.992 |
| TMS Up | Spearman r | 0.049 | −0.005 | 0.038 | −0.071 | 0.143 | −0.126 | 0.049 | 0.038 | 0.071 | 0.099 |
| | Sig. (2-tail) | 0.873 | 0.986 | 0.901 | 0.817 | 0.642 | 0.681 | 0.873 | 0.901 | 0.817 | 0.748 |
| | FDR p-value | 0.992 | 0.992 | 0.992 | 0.992 | 0.992 | 0.992 | 0.992 | 0.992 | 0.992 | 0.992 |
| TMS Down | Spearman r | 0.440 | 0.478 | 0.445 | −0.104 | 0.385 | 0.176 | 0.324 | 0.297 | −0.549 | 0.143 |
| | Sig. (2-tail) | 0.133 | 0.098 | 0.128 | 0.734 | 0.194 | 0.566 | 0.280 | 0.325 | 0.052 | 0.642 |
| | FDR p-value | 0.815 | 0.815 | 0.815 | 0.992 | 0.884 | 0.992 | 0.884 | 0.886 | 0.815 | 0.992 |
| TMS Left | Spearman r | −0.005 | −0.082 | −0.011 | −0.363 | −0.214 | −0.121 | 0.022 | 0.016 | −0.302 | −0.341 |
| | Sig. (2-tail) | 0.986 | 0.789 | 0.972 | 0.223 | 0.482 | 0.694 | 0.943 | 0.957 | 0.316 | 0.255 |
| | FDR p-value | 0.992 | 0.992 | 0.992 | 0.884 | 0.992 | 0.992 | 0.992 | 0.992 | 0.886 | 0.884 |
| TMS Right | Spearman r | 0.082 | 0.038 | 0.115 | −0.187 | −0.467 | −0.033 | 0.412 | 0.297 | −0.170 | −0.352 |
| | Sig. (2-tail) | 0.789 | 0.901 | 0.707 | 0.541 | 0.108 | 0.915 | 0.162 | 0.325 | 0.578 | 0.239 |
| | FDR p-value | 0.992 | 0.992 | 0.992 | 0.992 | 0.815 | 0.992 | 0.815 | 0.886 | 0.992 | 0.884 |
| Accuracy Rate | Spearman r | −0.172 | −0.160 | −0.172 | 0.411 | −0.061 | 0.260 | −0.251 | −0.271 | 0.198 | 0.003 |
| | Sig. (2-tail) | 0.574 | 0.601 | 0.574 | 0.163 | 0.842 | 0.392 | 0.409 | 0.370 | 0.516 | 0.992 |
| | FDR p-value | 0.992 | 0.992 | 0.992 | 0.815 | 0.992 | 0.980 | 0.982 | 0.965 | 0.992 | 0.992 |
| Time | Spearman r | 0.687 | 0.659 | 0.681 | −0.066 | −0.209 | 0.577 | 0.588 | 0.599 | −0.231 | −0.780 |
| | Sig. (2-tail) | 0.010 | 0.014 | 0.010 | 0.831 | 0.494 | 0.039 | 0.035 | 0.031 | 0.448 | 0.002 |
| | FDR p-value | 0.050 * | 0.053 | 0.050 * | 0.885 | 0.618 | 0.084 | 0.084 | 0.084 | 0.607 | 0.030 * |
| Weapon | Spearman r | 0.698 | 0.670 | 0.692 | −0.077 | −0.192 | 0.582 | 0.577 | 0.593 | −0.280 | −0.764 |
| | Sig. (2-tail) | 0.008 | 0.012 | 0.009 | 0.803 | 0.529 | 0.037 | 0.039 | 0.033 | 0.354 | 0.002 |
| | FDR p-value | 0.050 * | 0.051 | 0.050 * | 0.885 | 0.635 | 0.084 | 0.084 | 0.084 | 0.506 | 0.030 * |
| Accuracy Rate | Spearman r | −0.445 | −0.401 | −0.445 | 0.045 | 0.134 | −0.223 | −0.356 | −0.401 | −0.045 | 0.312 |
| | Sig. (2-tail) | 0.127 | 0.175 | 0.127 | 0.885 | 0.663 | 0.465 | 0.232 | 0.175 | 0.885 | 0.300 |
| | FDR p-value | 0.238 | 0.292 | 0.238 | 0.885 | 0.765 | 0.607 | 0.366 | 0.292 | 0.885 | 0.450 |
| Time | Spearman r | 0.764 | 0.791 | 0.462 | −0.341 | −0.456 | 0.181 | 0.291 | 0.275 | −0.209 | 0.967 |
| | Sig. (2-tail) | 0.002 | 0.001 | 0.112 | 0.255 | 0.117 | 0.553 | 0.334 | 0.364 | 0.494 | 0.000 |
| | FDR p-value | 0.010 * | 0.008 * | 0.270 | 0.510 | 0.270 | 0.614 | 0.520 | 0.520 | 0.599 | 0.000 * |
| Trigger | Spearman r | 0.764 | 0.791 | 0.462 | −0.341 | −0.456 | 0.181 | 0.291 | 0.275 | −0.209 | 0.967 |
| | Sig. (2-tail) | 0.002 | 0.001 | 0.112 | 0.255 | 0.117 | 0.553 | 0.334 | 0.364 | 0.494 | 0.000 |
| | FDR p-value | 0.010 * | 0.008 * | 0.270 | 0.510 | 0.270 | 0.614 | 0.520 | 0.520 | 0.599 | 0.000 * |
| Accuracy Rate | Spearman r | −0.536 | −0.577 | −0.289 | 0.041 | 0.206 | −0.206 | −0.124 | −0.289 | −0.165 | −0.742 |
| | Sig. (2-tail) | 0.059 | 0.039 | 0.339 | 0.894 | 0.499 | 0.499 | 0.687 | 0.339 | 0.590 | 0.004 |
| | FDR p-value | 0.197 | 0.146 | 0.520 | 0.894 | 0.599 | 0.599 | 0.711 | 0.520 | 0.632 | 0.017 * |
| Time | Spearman r | 0.659 | 0.203 | 0.291 | −0.643 | −0.220 | 0.011 | −0.297 | −0.110 | −0.637 | −0.703 |
| | Sig. (2-tail) | 0.014 | 0.505 | 0.334 | 0.018 | 0.471 | 0.972 | 0.325 | 0.721 | 0.019 | 0.007 |
| | FDR p-value | 0.190 | 0.820 | 0.737 | 0.190 | 0.816 | 1.000 | 0.737 | 0.901 | 0.190 | 0.190 |
| CMS Up | Spearman r | 0.467 | 0.000 | 0.154 | −0.593 | −0.192 | 0.077 | −0.313 | −0.269 | −0.330 | −0.604 |
| | Sig. (2-tail) | 0.108 | 1.000 | 0.616 | 0.033 | 0.529 | 0.803 | 0.297 | 0.374 | 0.271 | 0.029 |
| | FDR p-value | 0.498 | 1.000 | 0.820 | 0.248 | 0.820 | 0.909 | 0.737 | 0.774 | 0.737 | 0.248 |
| CMS Down | Spearman r | 0.253 | 0.093 | 0.066 | −0.368 | −0.407 | 0.011 | −0.170 | 0.165 | −0.330 | −0.505 |
| | Sig. (2-tail) | 0.405 | 0.762 | 0.831 | 0.216 | 0.168 | 0.972 | 0.578 | 0.590 | 0.271 | 0.078 |
| | FDR p-value | 0.810 | 0.909 | 0.922 | 0.720 | 0.639 | 1.000 | 0.820 | 0.820 | 0.737 | 0.390 |
| CMS Left | Spearman r | 0.291 | 0.060 | 0.082 | −0.044 | −0.165 | 0.192 | −0.396 | −0.286 | −0.148 | −0.187 |
| | Sig. (2-tail) | 0.334 | 0.845 | 0.789 | 0.887 | 0.590 | 0.529 | 0.181 | 0.344 | 0.629 | 0.541 |
| | FDR p-value | 0.737 | 0.922 | 0.909 | 0.950 | 0.820 | 0.820 | 0.639 | 0.737 | 0.820 | 0.820 |
| CMS Right | Spearman r | 0.566 | 0.220 | 0.236 | −0.549 | −0.220 | −0.121 | −0.330 | −0.093 | −0.654 | −0.445 |
| | Sig. (2-tail) | 0.044 | 0.471 | 0.437 | 0.052 | 0.471 | 0.694 | 0.271 | 0.762 | 0.015 | 0.128 |
| | FDR p-value | 0.293 | 0.816 | 0.816 | 0.312 | 0.816 | 0.886 | 0.737 | 0.909 | 0.190 | 0.549 |
| Accuracy Rate | Spearman r | −0.183 | 0.160 | 0.000 | 0.400 | 0.538 | −0.080 | 0.309 | 0.217 | 0.286 | 0.686 |
| | Sig. (2-tail) | 0.550 | 0.601 | 1.000 | 0.175 | 0.058 | 0.795 | 0.305 | 0.476 | 0.344 | 0.010 |
| | FDR p-value | 0.820 | 0.820 | 1.000 | 0.639 | 0.316 | 0.909 | 0.737 | 0.816 | 0.737 | 0.190 |

Abbreviations: Tr. = Travel, Op. = Operating, Vel. = Velocity, Accel. = Acceleration, Pos. = Position, Var. = Variance, F-K Dist. = Finger–Key Distance, App. = Approach, Th-Idx = Thumb–Index, Th-Mid = Thumb–Middle; FDR-corrected p < 0.05 is marked with *.

References

  1. Jin, W.; Liu, H.; Shen, F. Artificial intelligence in flight safety: Fatigue monitoring and risk mitigation technologies. In Proceedings of the 1st International Scientific and Practical Conference “Technologies for Improving Old Methods, Theories and Hypotheses”, Sofia, Bulgaria, 7–10 January 2025; International Science Group: Sofia, Bulgaria, 2025; p. 368. [Google Scholar]
  2. Yeh, M.; Swider, C.; Jo, Y.J.; Donovan, C. Human Factors Considerations in the Design and Evaluation of Flight Deck Displays and Controls: Version 2.0; John A. Volpe National Transportation Systems Center: Cambridge, MA, USA, 2016.
  3. Volsi, G.L.; Arcifa, S.; Aruta, A.V.; Distefano, A.; Gulizzi, A.; Libra, A.; Maimone, C.; Mirulla, S.; Mongioi, A.; Nocita, D.; et al. Human performance and limitations in aviation: Physiological and behavioural aspects. Bull. Gioenia Acad. Nat. Sci. Catania 2022, 55, FP16–FP73. [Google Scholar] [CrossRef]
  4. Taylor & Francis. HOTAS—Knowledge and References. Taylor & Francis Online. Available online: https://taylorandfrancis.com/knowledge/Engineering_and_technology/Aerospace_engineering/HOTAS (accessed on 20 October 2025).
  5. Huysmans, M.A.; De Looze, M.P.; Hoozemans, M.J.M.; Van der Beek, A.J.; Van Dieen, J.H. The effect of joystick handle size and gain at two levels of required precision on performance and physical load on crane operators. Ergonomics 2006, 49, 1021–1035. [Google Scholar] [CrossRef]
  6. Koyama, S.; Tatemoto, T.; Kumazawa, N.; Tanabe, S.; Nakagawa, Y.; Otaka, Y. The effect of differences in powered wheelchair joystick shape on subjective and objective operability. Appl. Ergon. 2023, 107, 103920. [Google Scholar] [CrossRef] [PubMed]
  7. Yang, T.; Dong, X.; Liu, X.; Zhuo, S.; Ren, J.; Wang, M. Optimizing joystick button layout for males: An incremental approach based on gripping operational features. Appl. Sci. 2025, 15, 3019. [Google Scholar] [CrossRef]
  8. Bassoli, E.; Gatto, A.; Iuliano, L.; Leali, F. Design for manufacturing of an ergonomic joystick handgrip. In Proceedings of the World Automation Congress, Seville, Spain, 28 June–1 July 2004; IEEE: Piscataway, NJ, USA; Volume 18, pp. 461–466. [Google Scholar]
  9. Yesodha, K.K.R.K.; Narasimhan, V.; Li, Y.; Craig, B. Ergonomic evaluation of videogame controllers. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Los Angeles, CA, USA, 17–21 July 2017; Springer: Cham, Switzerland, 2017; pp. 384–391. [Google Scholar]
  10. Van Den Noort, J.C.; Van Beek, N.; Van Der Kraan, T.; Veeger, D.H.E.J.; Stegeman, D.F.; Veltink, P.H.; Maas, H. Variable and asymmetric range of enslaving: Fingers can act independently over small range of flexion. PLoS ONE 2016, 11, e0168636. [Google Scholar]
  11. Rachaveti, D.; Chakrabhavi, N.; Shankar, V.; Varadhan, S.K.M. Thumbs up: Movements made by the thumb are smoother and larger than fingers in finger-thumb opposition tasks. PeerJ 2018, 6, e5763. [Google Scholar] [CrossRef]
  12. Xu, J.; Mawase, F.; Schieber, M.H. Evolution, biomechanics, and neurobiology converge to explain selective finger motor control. Physiol. Rev. 2024, 104, 983–1020. [Google Scholar] [CrossRef]
  13. Peña Pitarch, E.; Yang, J.; Abdel-Malek, K.; Kim, J.; Marler, T. Joystick ergonomic study in material handling using virtual humans. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Orlando, FL, USA, 5–11 November 2005; Volume 42231, pp. 1415–1420. [Google Scholar]
  14. Wang, L.; Xiang, W.; He, X.; Sun, X.; Yu, J.; Zhou, L.; Sun, G. The virtual evaluation of the ergonomics layout in aircraft cockpit. In Proceedings of the 2009 IEEE 10th International Conference on Computer-Aided Industrial Design & Conceptual Design, Wenzhou, China, 26–29 November 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1438–1442. [Google Scholar]
  15. Marron, T.; Dungan, N.; Namee, B.M.; Hagan, A.D.O. Virtual reality and pilot training: Existing technologies, challenges and opportunities. J. Aviat./Aerosp. Educ. Res. 2024, 33, 1980. [Google Scholar]
  16. Thomas, R.L.; Albelo, J.L.; Wiggins, M. Enhancing pilot training through virtual reality: Recognizing and mitigating aviation visual and vestibular illusions. Int. J. Aviat. Aeronaut. Aerosp. 2023, 10, 10. [Google Scholar] [CrossRef]
  17. Rubio-Tamayo, J.L.; Gertrudix Barrio, M.; García García, F. Immersive environments and virtual reality: Systematic review and advances in communication, interaction and simulation. Multimodal Technol. Interact. 2017, 1, 21. [Google Scholar] [CrossRef]
  18. Neo, J.R.J.; Won, A.S.; Shepley, M.M. Designing immersive virtual environments for human behavior research. Front. Virtual Real. 2021, 2, 603750. [Google Scholar] [CrossRef]
  19. da Silva, A.G.; Mendes Gomes, M.V.; Winkler, I. Virtual reality and digital human modeling for ergonomic assessment in industrial product development: A patent and literature review. Appl. Sci. 2022, 12, 1084. [Google Scholar] [CrossRef]
  20. Kazemi, R.; Lee, S.C. Human factors/ergonomics (HFE) evaluation in the virtual reality environment: A systematic review. Int. J. Hum. Comput. Interact. 2024, 40, 4533–4549. [Google Scholar] [CrossRef]
  21. Tremmel, C.; Herff, C.; Sato, T.; Rechowicz, K.; Yamani, Y.; Krusienski, D.J. Estimating cognitive workload in an interactive virtual reality environment using EEG. Front. Hum. Neurosci. 2019, 13, 401. [Google Scholar] [CrossRef]
  22. Gramouseni, F.; Tzimourta, K.D.; Angelidis, P.; Giannakeas, N.; Tsipouras, M.G. Cognitive assessment based on electroencephalography analysis in virtual and augmented reality environments using head-mounted displays: A systematic review. Big Data Cogn. Comput. 2023, 7, 163. [Google Scholar]
  23. Luong, T.; Cheng, Y.F.; Möbus, M.; Fender, A.; Holz, C. Controllers or bare hands? a controlled evaluation of input techniques on interaction performance and exertion in virtual reality. IEEE Trans. Vis. Comput. Graph. 2023, 29, 4633–4643. [Google Scholar] [CrossRef] [PubMed]
  24. Lavoie, E.; Hebert, J.S.; Chapman, C.S. How a lack of haptic feedback affects eye-hand coordination and embodiment in virtual reality. Sci. Rep. 2025, 15, 25219. [Google Scholar] [CrossRef] [PubMed]
  25. Makin, L.; Barnaby, G.; Roudaut, A. Tactile and kinesthetic feedbacks improve distance perception in virtual reality. In Proceedings of the 31st Conference on l’Interaction Homme-Machine, Grenoble, France, 10–13 December 2019; pp. 1–9. [Google Scholar]
  26. Song, D.; Yuan, W.; Chao, M.A.; Han, T. A modular visuo-haptic mixed reality (VHMR) aided prototype technique for in-vehicle human-machine interaction (HMI) evaluations. J. Eng. Des. 2022, 33, 969–989. [Google Scholar]
  27. Popp, N.J.; Hernandez-Castillo, C.R.; Gribble, P.L.; Diedrichsen, J. The role of feedback in the production of skilled finger sequences. J. Neurophysiol. 2022, 127, 829–839. [Google Scholar] [CrossRef]
  28. Shim, J.K.; Karol, S.; Kim, Y.S.; Seo, N.J.; Kim, Y.H.; Kim, Y.; Yoon, B.C. Tactile feedback plays a critical role in maximum finger force production. J. Biomech. 2012, 45, 415–420. [Google Scholar] [CrossRef]
  29. Guo, J.; Liu, T.; Wang, J. Effects of auditory feedback on fine motor output and corticomuscular coherence during a unilateral finger pinch task. Front. Neurosci. 2022, 16, 896933. [Google Scholar] [CrossRef]
  30. Bresciani, J.P.; Ernst, M.O.; Drewing, K.; Bouyer, G.; Maury, V.; Kheddar, A. Feeling what you hear: Auditory signals can modulate tactile tap perception. Exp. Brain Res. 2005, 162, 172–180. [Google Scholar] [CrossRef]
  31. Foxe, J.J. Multisensory integration: Frequency tuning of audio-tactile integration. Curr. Biol. 2009, 19, R373–R375. [Google Scholar] [CrossRef]
  32. Ren, J.; Cui, Y.; Chen, J.; Qiao, Y.; Wang, L. Multi-modal human–computer interaction system in cockpit. J. Phys. Conf. Ser. 2020, 1693, 012212. [Google Scholar] [CrossRef]
  33. Ding, L.; Yuan, M.; Yu, R. Ergonomics evaluation modeling of multi-modal interaction technology for airplane cockpit and experimental validation. In Proceedings of the Congress of the International Ergonomics Association, Jeju, Republic of Korea, 25–29 August 2024; Springer: Cham, Switzerland, 2024; pp. 419–425. [Google Scholar]
  34. Auer, S.; Anthes, C.; Reiterer, H.; Jetter, H.-C. Aircraft cockpit interaction in virtual reality with visual, auditive, and vibrotactile feedback. Proc. ACM Hum. Comput. Interact. 2023, 7, 420–443. [Google Scholar] [CrossRef]
  35. Marin, G.; Dominio, F.; Zanuttigh, P. Hand gesture recognition with Leap Motion and Kinect devices. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1565–1569. [Google Scholar]
  36. Bhiri, N.M.; Ameur, S.; Alouani, I.; Mahjoub, M.A.; Khalifa, A.B. Hand gesture recognition with focus on Leap Motion: An overview, real-world challenges and future directions. Expert Syst. Appl. 2023, 226, 120125. [Google Scholar] [CrossRef]
  37. Xie, Q.; Chao, J. The application of Leap Motion in astronaut virtual training. IOP Conf. Ser. Mater. Sci. Eng. 2017, 187, 012015. [Google Scholar]
  38. Yang, B.; Xia, X.; Wang, S.; Ye, L. Development of flight simulation system based on Leap Motion controller. Procedia Comput. Sci. 2021, 183, 794–800. [Google Scholar] [CrossRef]
  39. Vysocký, A.; Grushko, S.; Oščádal, P.; Kot, T.; Babjak, J.; Jánoš, R.; Sukop, M.; Bobovský, Z. Analysis of precision and stability of hand tracking with Leap Motion sensor. Sensors 2020, 20, 4088. [Google Scholar] [CrossRef]
  40. GB/T 16252-2023; Hand Sizing System of Adults. National Standards of the People’s Republic of China: Beijing, China, 2023.
  41. GJB 35B-2008; Aircrew Station Geometry for Fighter and Attacker. Military Standard Publishing House of the General Armament Department: Beijing, China, 2008.
  42. GB/T 10000-2023; Human Dimensions of Chinese Adults. National Standards of the People’s Republic of China: Beijing, China, 2023.
  43. Brooke, J. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor & Francis: London, UK, 1996; pp. 189–194. [Google Scholar]
  44. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183. [Google Scholar]
  45. Laugwitz, B.; Held, T.; Schrepp, M. Construction and evaluation of a user experience questionnaire. In Proceedings of the Symposium of the Austrian HCI and Usability Engineering Group, Graz, Austria, 20–21 November 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 63–76. [Google Scholar]
  46. ISO/TS 9241-430:2021; Ergonomics of Human-System Interaction—Part 430: Recommendations for the Design of Non-Touch Gestural Input for the Reduction of Biomechanical Stress. International Organization for Standardization: Geneva, Switzerland, 2021.
  47. ISO 9241-820:2024; Ergonomics of Human-System Interaction—Part 820: Ergonomic Guidance on Interactions in Immersive Environments, Including Augmented Reality and Virtual Reality. International Organization for Standardization: Geneva, Switzerland, 2024.
  48. Lou, X.; Song, X.; Hu, X.; Ma, M.; Fu, L.; Zhuang, X.; Fan, X. An extended Fitts’ Law model for hand interaction evaluation in virtual reality: Accounting for spatial variables in 3D space and the arm fatigue influence. Int. J. Hum. Comput. Interact. 2025, 41, 11611–11637. [Google Scholar] [CrossRef]
  49. Fu, M.J.; Hershberger, A.D.; Sano, K.; Çavuşoğlu, M.C. Effect of visuomotor colocation on 3d fitts’ task performance in physical and virtual environments. Presence 2012, 21, 305–320. [Google Scholar] [CrossRef]
  50. Dube, T.J.; Arif, A.S. Free-Hand Input and Interaction in Virtual Reality Using a Custom Force-Based Digital Thimble. Appl. Sci. 2024, 14, 11018. [Google Scholar] [CrossRef]
Figure 1. Parts of the UE5 blueprint related to user button-interaction logic and data-collection logic. The colors shown follow UE5’s default blueprint color-coding for different node and variable types and do not represent analytical categories.
Figure 2. Combat aircraft simulator cockpit and Leap Motion hand model.
Figure 3. Experimental process environment image. (a) Initialization phase: hand data acquisition; (b) hand operation phase.
Figure 4. HOTAS right joystick experimental buttons.
Figure 5. Task flowchart. The red point indicates the operating finger at each stage.
Figure 6. Distribution chart of test subject samples.
Figure 7. Duration of each stage of the task. (a) Total duration of Phase 1 and Phase 4 operations; (b) total duration of Phase 2 and Phase 3 operations.
Figure 8. Boxplot of intervals between participants’ operations. The circles in the figure represent outliers that deviate from the overall data distribution.
Figure 9. (a) Three-dimensional heatmap of the subject’s hand motion; (b) standardized displacement distribution map of the subject’s hand.
Figure 10. (a) Three-dimensional heatmap of the subject’s hand acceleration and acceleration vector magnitude distribution map; (b) three-dimensional heatmap of the subject’s hand velocity and velocity vector magnitude distribution map.
Figure 11. (a) Phase 1 thumb global analysis diagram; (b) Phase 2 thumb global analysis diagram; (c) Phase 3 index finger global analysis diagram; and (d) Phase 4 thumb global analysis diagram. The circles in each subgraph represent the collected initial positions of the subjects’ fingers relative to the operated buttons.
Figure 12. (a) Distance between the test subject’s fingertips; (b) distance between the test subject’s fingertips and the operated button.
Figure 13. Four-stage correlation analysis diagram. (a) corresponds to Phase 1; (b) corresponds to Phase 2; (c) corresponds to Phase 3; and (d) corresponds to Phase 4.
Table 1. The result of button operation accuracy.

| Task | Accuracy (%) | Error Rate (%) | Average Number of Press Attempts |
|---|---|---|---|
| TMS Up | 65.0 | 35.0 | 1.54 ± 0.52 |
| TMS Down | 61.9 | 38.1 | 1.62 ± 0.54 |
| TMS Left | 68.4 | 31.6 | 1.46 ± 0.50 |
| TMS Right | 72.2 | 27.8 | 1.38 ± 0.48 |
| Weapon | 92.9 | 7.1 | 1.08 ± 0.27 |
| Trigger | 86.7 | 13.3 | 1.15 ± 0.35 |
| CMS Up | 56.5 | 43.5 | 1.77 ± 0.58 |
| CMS Down | 72.2 | 35.0 | 1.38 ± 0.48 |
| CMS Left | 68.4 | 31.6 | 1.46 ± 0.50 |
| CMS Right | 65.0 | 35.0 | 1.54 ± 0.52 |
Table 2. Cronbach’s alpha reliability analysis.

| Item | Corrected Item-Total Correlation | Alpha Coefficient for Deleted Item | Cronbach’s Alpha Coefficient |
|---|---|---|---|
| Overall Satisfaction | 0.487 | 0.796 | 0.810 |
| Stimulation and Novelty | 0.334 | 0.807 | |
| Comfort and User Experience | 0.401 | 0.802 | |
| Efficiency and Performance | 0.624 | 0.779 | |
| Ease of Use | 0.545 | 0.787 | |
| Mental Demands | 0.105 | 0.850 | |
| Physical Demands | 0.380 | 0.804 | |
| Time Pressure | 0.730 | 0.760 | |
| Performance Evaluation | 0.763 | 0.761 | |
| Frustration | 0.711 | 0.764 | |
Table 3. Subjective scale evaluation results—system usability and user experience.

| Item | Indicator | Average | Overall Standard Deviation |
|---|---|---|---|
| Usability | Intuitive operation | 4.08 | 0.73 |
| | Clarity of system feedback | 4.08 | 0.73 |
| Efficiency and Performance | Button positioning efficiency | 3.54 | 0.75 |
| | System response speed | 3.77 | 0.89 |
| Comfort and Operational Experience | Operational comfort | 3.69 | 0.82 |
| | Natural interaction | 3.69 | 0.61 |
| Excitement and Novelty | Stimulate interest | 4.15 | 0.77 |
| | Design innovation | 4.62 | 0.49 |
| Overall Satisfaction | Operation satisfaction | 3.92 | 0.47 |
| | Recommendation intent | 4.15 | 0.66 |
Table 4. Subjective scale assessment results—task load and task performance.

| Item | Average | Overall Standard Deviation |
|---|---|---|
| Mental demands | 3.23 | 0.97 |
| Physical demands | 2.38 | 0.49 |
| Time pressure | 2.77 | 0.97 |
| Performance evaluation | 3.69 | 0.72 |
| Sense of frustration | 2.54 | 0.84 |