Vibrotactile-Based Operational Guidance System for Space Science Experiments

On-orbit astronauts and scientists on the ground need to cooperate closely to complete space science experiments efficiently. However, given the increasingly diverse space science experiments, scientists cannot train astronauts on the ground in the details of each experiment. Traditional interaction through the visual and auditory channels alone is not sufficient for scientists to guide astronauts directly through an experiment. An intuitive and transparent interaction interface between scientists and astronauts has to be built to meet the requirements of space science experiments. Therefore, this paper proposes a vibrotactile guidance system for cooperation between scientists and astronauts. We utilized Kinect V2 sensors to track the movements of the participants of space science experiments, processed the data in a virtual experimental environment developed in Unity 3D, and provided astronauts with different guidance instructions through a wearable vibrotactile device. Compared with schemes using only the visual and auditory channels, our approach provides more direct and more efficient guidance information: what astronauts perceive is what they need to do to perform different tasks. Three virtual space science experiment tasks verified the feasibility of the vibrotactile operational guidance system. Participants were able to complete the experimental tasks after a short period of training, and the experimental results show that the method has promising application prospects.


Introduction
Space science experiments need to be carried out in long-term automatic operation and in short-term manned space laboratories, and they are characterized by complex and uncertain environmental factors. In addition, because the space environment differs from that on the ground, space science experiments cannot simply draw on the experience of ground experiments. Space science experiments must be accomplished through the close cooperation of scientists on the ground and on-orbit astronauts [1,2]. Some simple tasks scientists can complete efficiently with robots. However, robots cannot handle unexpected events in complex tasks well, and therefore scientists' guidance of astronauts is still needed to deal with such events. Moreover, with the increase in the types of space science experiments, scientists are unable to train astronauts on the ground in the details of each experiment. Although the visual and auditory channels perform well in interaction, they are not enough to meet the timeliness and effectiveness requirements of space science experiments [2]. An intuitive and transparent interaction interface has to be built to realize transparent guidance from scientists on the ground to orbiting astronauts and close cooperation between scientists and astronauts, so as to complete the increasingly diverse and difficult-to-operate space science experiments.
Interaction based on visual and auditory channels has played a role in space science experiments. For example, John A. Karasinski et al. [3] took advantage of AR glasses and

The Method of Operational Guidance
The operational guidance system is based on the idea of "what you sense is the task you need to perform": the on-orbit astronaut directly senses the operational movements of the scientist, and what the astronaut senses is the task he needs to perform. The key to this method is how to track and capture the operational movements of the scientist and how to convey the guidance information to the astronaut.
This research utilized the theory of "haptic guidance" [6], combined with computer vision technology and virtual reality technology, to construct an operational guidance system for ground scientists and astronauts. The guidance system contains five modules, as shown in Figure 1. The first module is the ground scientist and astronaut. The scientist makes a movement, and the astronaut copies the movement. Module 2 tracks the movements of the scientist and astronaut. The movement information is sent to Module 3 (guidance algorithm) and used to drive an astronaut model in virtual environments (Module 4). The guidance algorithm compares the movement differences between scientist and astronaut and generates guidance feedback signals, which are sent to the wearable vibrotactile device (Module 5).
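For illustration, one iteration of this five-module loop can be sketched as follows; the function names and signatures are our own simplification, not the authors' implementation:

```python
# Illustrative sketch of the five-module guidance loop (names are ours).
def guidance_loop(track_master, track_slave, compare, actuate, render):
    """One iteration: track both users, drive the virtual scene, feed back."""
    master_pose = track_master()                 # Module 2: Kinect tracks the scientist
    slave_pose = track_slave()                   # Module 2: Kinect tracks the astronaut
    render(slave_pose)                           # Module 4: drive the virtual astronaut
    signals = compare(master_pose, slave_pose)   # Module 3: guidance algorithm
    actuate(signals)                             # Module 5: wearable vibrotactile device
    return signals
```

In a real deployment each callback would wrap the Kinect SDK, the Unity scene, and the Bluetooth link to the device; the loop structure itself is the point here.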
Actuators 2021, 10, x FOR PEER REVIEW
In our work, the movements of the scientist and the astronaut were each tracked by one Kinect V2 sensor, and the data were sent to the virtual environment developed in Unity3D for data processing and scene driving. Microsoft Kinect is a low-cost and noninvasive motion capture sensor [16], which can track the major joints of the human body in three dimensions (x, y, and z axes), and the sensor exhibits good performance in motion capture. For example, Eftychios Protopapadakis et al. [17] applied a Kinect sensor to the identification of dance poses. Lin Yang et al. [18] used three Kinect V2 sensors for markerless gait tracking. Alessandro Napoli compared Kinect with the professional-grade Qualisys motion capture system [19]. Their results showed that Kinect provides adequate performance in tracking joint center displacements. The recognition accuracy error of Kinect V2 is (0.0283 ± 0.0186 m) [20].
The wearable device used continuous vibrotactile stimulation as guidance instructions, and vibration stimulation was provided using an attractive method (move toward the vibration). Compared with the exoskeleton force feedback device, vibrotactile stimulation is more portable and has low power consumption, and, compared with electrical tactile stimulation, vibrotactile stimulation is safer and more comfortable for users.
The system was divided into two parts: the master part and the slave part. As the master, the scientist on the ground guides the on-orbit astronaut in the slave part. We built a virtual experimental environment based on real space science experiment scenes, which consisted of experimental equipment and a virtual astronaut. Participants were able to control the virtual astronaut to interact with the experimental equipment. Scientists on the master side were able to see real-time visual feedback from the virtual astronaut and equipment, while the astronauts could not observe the scientists. We set up three experiments to evaluate the effectiveness and feasibility of the system. The setting of the experimental environment is shown in Figure 2.

System Implementation
The operational guidance system consists of movement capture and data processing, a guidance algorithm, and a wearable vibrotactile stimulation device. The specific implementation of each part is described next.

Wearable Device
Tactile feedback for each arm is provided by two vibrotactile armbands, with four vibration actuator modules placed in quadrants on each armband. Each actuator module is a DC eccentric rotary motor whose amplitude can be controlled by changing the PWM duty ratio. Figure 3a,b shows the vibration actuator module and the structure of a single armband. We defined the positions of the motors from the user's perspective with the user's arms stretched horizontally forward and palms down. The wearable device adopts a split design, divided into a left part and a right part. Each part is controlled by an Arduino Nano MCU, as shown in Figure 3c.
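As a sketch of how such a device might be driven from the host side, the quadrant labels and the two-byte command format below are hypothetical, not the paper's actual protocol:

```python
# Hypothetical host-side encoder: pick one of the four quadrant motors on an
# armband and an 8-bit PWM duty value to send over the serial/Bluetooth link.
DIRECTION_TO_MOTOR = {"up": 0, "outer": 1, "down": 2, "inner": 3}  # assumed layout

def motor_command(direction: str, intensity: float) -> bytes:
    """Encode (quadrant motor index, PWM duty 0-255) as two bytes."""
    motor = DIRECTION_TO_MOTOR[direction]
    duty = max(0, min(255, round(intensity * 255)))  # clamp to 8-bit PWM range
    return bytes([motor, duty])
```

On the Arduino side, a matching sketch would read the two bytes and call `analogWrite` on the corresponding motor pin with the received duty value.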

The device is powered by two 850 mAh Li-ion batteries. The normal service time is maintained for more than 3 h, and the standby time is maintained for more than 48 h. Two Bluetooth modules (HC-05) provide wireless communication with the computer. Figure 4 shows one user wearing a device.

Movements Capture, Data Processing
Our system tracks the 25 human joints the sensor provides, and the joint data are used to drive the virtual astronaut in the virtual environment developed in Unity 3D. Due to differences in arm length and skeleton structure between people, the joint coordinates tracked by the sensor differ even when the movements are the same. Therefore, we converted the Kinect 3D point cloud data into quaternions for each joint and mapped 15 joints in the Kinect coordinate system to the humanoid skeleton structure in Unity 3D. In this way, different users can be matched to the same virtual astronaut model, which solves the problems caused by differences in users' arm length and skeleton structure. The mapping relationship is shown in Table 1. We recorded the coordinates of the shoulder (LeftUpperArm, RightUpperArm), elbow (LeftLowerArm, RightLowerArm), and wrist (LeftHand, RightHand) joints in the humanoid skeleton, used the vector pointing from the elbow joint to the shoulder joint to represent the posture of the upper arm, and used the vector pointing from the wrist joint to the elbow joint to represent the posture of the lower arm. The length of each posture vector is a fixed value, namely the length of the virtual astronaut's lower arm or upper arm. Therefore, once the x and y coordinates of a posture vector are determined, there are only two solutions for the z coordinate of this vector (shown in Figure 5a).
z = ±√(Len² − x² − y²), where Len represents the length of the posture vector.
Since most of the postures of the experiment task are in front of the trunk (z > 0), we can omit the value of the z-axis, and the three-dimensional posture vector is thus mapped to the two-dimensional coordinate system. Figure 5b shows four posture vectors mapped to 2D coordinates.
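A small sketch of the relation above, returning both candidate z values for a posture vector of fixed length:

```python
import math

def z_candidates(x: float, y: float, length: float):
    """Two possible z values for a posture vector of fixed length:
    z = ±sqrt(Len^2 - x^2 - y^2); empty if (x, y) lies outside the reach."""
    d = length**2 - x**2 - y**2
    if d < 0:
        return []            # no real solution: point is out of reach
    z = math.sqrt(d)
    return [z, -z]           # in front of (z > 0) or behind the trunk
```

Dropping the z component, as the text does for postures in front of the trunk, amounts to always picking the non-negative candidate.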
The posture vectors are calculated as follows:
P_LowerArm = (X_elbow − X_wrist, Y_elbow − Y_wrist)
P_UpperArm = (X_shoulder − X_elbow, Y_shoulder − Y_elbow)
where P_LowerArm represents the posture vector of the lower arm, P_UpperArm represents the posture vector of the upper arm, and X_wrist, Y_wrist, X_elbow, Y_elbow, X_shoulder, Y_shoulder represent the x-axis and y-axis values of the wrist, elbow, and shoulder joints, respectively. The movements of the human arms can then be represented by four posture vectors: P_L_Lower for the left lower arm, P_L_Upper for the left upper arm, P_R_Lower for the right lower arm, and P_R_Upper for the right upper arm.
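These posture vectors can be sketched in code as follows (joint names follow the Unity humanoid mapping in Table 1; the distal-to-proximal sign convention follows the text):

```python
def posture_vectors(joints):
    """Compute the four 2D posture vectors from (x, y) joint positions.
    `joints` maps humanoid names like 'LeftHand' to (x, y) tuples."""
    def vec(a, b):
        # vector pointing from joint b to joint a (a - b), x-y plane only
        return (joints[a][0] - joints[b][0], joints[a][1] - joints[b][1])
    return {
        "L_Lower": vec("LeftLowerArm", "LeftHand"),      # elbow - wrist
        "L_Upper": vec("LeftUpperArm", "LeftLowerArm"),  # shoulder - elbow
        "R_Lower": vec("RightLowerArm", "RightHand"),
        "R_Upper": vec("RightUpperArm", "RightLowerArm"),
    }
```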


Guidance Algorithm
In the specific space science experiments, astronauts' main movements are completed with their upper limbs. Therefore, the guidance algorithm takes the upper limb posture vectors of the master and slave as input, and its output is the guidance instructions for the wearable device. Two thresholds are set to represent the minimum acceptable posture error between master and slave. When the error is below the threshold, the postures of the master and slave are considered the same. Additionally, we determined thresholds with a preferable guidance effect through two tests.
The left lower arm is used as an example to introduce the master-slave operational guidance algorithm; the other arm segments follow the same approach. The posture vectors of the left lower arm of the master and slave are separately expressed as P_M = (X_M, Y_M) and P_S = (X_S, Y_S). The difference between the two vectors is used to represent the posture error between master and slave:
ΔP = P_M − P_S = (X_M − X_S, Y_M − Y_S)
As mentioned above, when |ΔP| ≤ ΔE (ΔE is the set threshold), the master and slave postures are considered consistent. When |ΔP| > ΔE, we decompose ΔP into two vectors in the x-axis and y-axis directions, expressed as ΔP_x and ΔP_y. Table 2 shows the mapping relationship between |ΔP| and the output guidance information. |ΔP| is the Euclidean distance, calculated as:
|ΔP| = √(ΔP_x² + ΔP_y²)
To search for the optimal threshold, two participants familiar with the system participated in the experiment. The threshold was gradually reduced, and an operational guidance test was conducted for each threshold. We recorded the upper limb posture vectors of the master participant in advance and sent the recorded posture vectors to the slave participant successively during the operational guidance. The slave participant completed the operation according to the guidance presented by the wearable device.
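A minimal sketch of this per-vector decision, assuming the attractive mapping (vibrate on the side the arm should move toward); the direction labels stand in for Table 2's entries:

```python
import math

def guidance(master, slave, threshold=0.04):
    """Return vibration directions steering the slave toward the master posture,
    or [] when |dP| <= threshold (postures considered consistent).
    The 0.04 m default is the lower-arm threshold found in the paper's tests."""
    dx = master[0] - slave[0]   # dP_x
    dy = master[1] - slave[1]   # dP_y
    if math.hypot(dx, dy) <= threshold:   # |dP| is the Euclidean distance
        return []
    directions = []
    if dx != 0:
        directions.append("right" if dx > 0 else "left")
    if dy != 0:
        directions.append("up" if dy > 0 else "down")
    return directions
```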
After completing the test at each threshold, we recorded the posture vectors of the master and slave, respectively, and the error between master and slave was calculated using the following formula:
Err_k(i) = √((M_x(i) − S_x(i))² + (M_y(i) − S_y(i))²)
where (M_x(i), M_y(i)) and (S_x(i), S_y(i)) represent the master and slave posture vectors of the i-th frame of data, respectively, and k indexes the four posture vectors. The total error is represented by the mean over the four posture vectors and all frames as follows:
E = (1/(4N)) Σ_{i=1}^{N} Σ_{k=1}^{4} Err_k(i)
where N represents the number of data frames. First, to search for the optimal threshold of the lower arm, the threshold of the upper arm was set at a high level (0.10 m). The initial threshold of the lower arm was set to 0.09 m, decreased by 0.01 m each time, down to a final value of 0.03 m. We counted the mean and variance of the guidance error of the same operation under different thresholds. Figure 6a shows the mean and variance of the four posture vector errors. The maximum average error is 0.079845 m, at a lower-arm threshold of 0.09 m, and the average errors at thresholds of 0.05 m, 0.04 m, and 0.03 m are distributed between 0.0423 m and 0.0462 m. As shown in Figure 6b, the participants used 1-10 points to evaluate the difficulty of following the guidance; when the threshold was reduced to 0.03 m, a participant reported that the device would vibrate even when his arm moved slightly, and considerable attention was required to maintain the target posture. Therefore, combining the objective error and the subjective evaluation, the threshold of the lower arm was set to 0.04 m. Figure 6c shows the posture vector difference of the right arm between the master and slave.
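The per-frame error and its mean over the four posture vectors and N frames can be sketched as:

```python
import math

def frame_error(m, s):
    """Euclidean error between a master and a slave posture vector in one frame."""
    return math.hypot(m[0] - s[0], m[1] - s[1])

def mean_error(master_frames, slave_frames):
    """Mean error over all N frames; each frame holds the four posture vectors."""
    errs = [frame_error(m, s)
            for mf, sf in zip(master_frames, slave_frames)   # over frames
            for m, s in zip(mf, sf)]                         # over the 4 vectors
    return sum(errs) / len(errs)
```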

Experiments
We simulated the task scene of the space science experiment and built the wearable vibrotactile operational guidance system. Three different experiments were set up to evaluate the effectiveness and feasibility of the proposed operational system. Nine subjects took part in the experiments (seven males, two females; age mean = 23). None of them was left-handed. Each subject participated in the subjective questionnaire survey about the experiment.

Experiment 1: Perceptual Test
To ensure that subjects could accurately distinguish and understand the vibrotactile stimuli from different positions, we carried out a perception test under the following three conditions:
• C1: Using a single vibration source, we conducted 50 trials. For each trial, the wearable device generated a random vibrotactile stimulus;
• C2: Using two vibration sources, we conducted 50 trials. For each trial, the wearable device randomly and simultaneously generated two vibrotactile stimuli at different positions;
• C3: Using four vibration sources, we conducted 10 trials. For each trial, the wearable device generated four stimuli in a predetermined order.
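For illustration, trials for the three conditions could be generated as follows; the set of stimulus sites is left as a parameter, since it depends on the armband layout:

```python
import random

def c1_trial(sites, rng=random):
    """C1: one randomly chosen stimulus site."""
    return [rng.choice(sites)]

def c2_trial(sites, rng=random):
    """C2: two simultaneous stimuli at two different sites."""
    return rng.sample(sites, 2)

def c3_trial(order):
    """C3: four stimuli delivered in a predetermined order."""
    return list(order)[:4]
```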
Before the experiment, we explained the position and direction of the vibration stimuli in detail to the participants. Then, the subjects were asked to wear the device and sit in front of a computer with their forearms placed on the table. They selected their answers in the experimental interface shown in Figure 8. Each subject completed the experiment in the order of the schedule shown in Figure 9. Subjects had sufficient time to report the position and direction of the perceived stimuli, and they had two minutes of rest after each experiment.

Experiment 2: Guiding Training
In order to make the subjects familiar with the operational guidance system more quickly, we decomposed five continuous operation actions into 22 discrete postures and presented the corresponding guidance instructions to the subjects successively. Each subject was trained twice. In the first training, they were told how to move their arms when they sensed vibrations in different positions and directions. The second training was conducted without our prompts. After the experiment, all subjects participated in the subjective questionnaire.
As shown in Table 3 and Figure 10, the subjective questionnaire mainly deals with the wearable device and the operational guidance system. For each question, subjects had to answer by assigning a score ranging from 1 (totally disagree) to 10 (totally agree).
Question Q1 (familiarity with the operational guidance system) was positively rated, with a mean of 7.23 and a standard deviation of 1.98. For Q2 and Q3, related to the wearability of the device, the mean responses were 7.44 and 7.55, with standard deviations of 1.23 and 0.88, respectively. With the training in Experiment 2, the subjects became more familiar with the operational guidance system, and it was easy and comfortable for most of them to wear the device. Question Q4 showed a positive result, with a mean of 9.11 and a standard deviation of 0.93. For most subjects, the vibration stimuli from different positions and directions could be clearly distinguished (Q5 mean 7.89, standard deviation 1.05). For Q6, the mean was 3.56 and the standard deviation was 2.12, which showed that the noise produced by the vibration motors has little impact on the perception of vibration stimuli. For the last question, most subjects could understand the meaning of the guiding instructions (Q7 mean 7.89, standard deviation 1.27).

To evaluate the subjects' perception of vibration stimuli, we recorded the correct rate and finishing time of each experiment. We defined the correct rate of an experiment as the number of correct answers reported by the subjects divided by the total number of trials. Additionally, the finishing time was defined as the interval between the generation of the first stimulus and the last stimulus reported by the subject.

Experiment 3: Master-Slave Operational Guidance Experiment
To verify the feasibility of tactile guidance interaction in a non-visual environment, we simulated the experimental tasks of the space science experiment and built the experimental scene for Experiment 3, as shown in Figure 2, combining the specific operational tasks of the space science experiment with the expert in the master role and the subjects in the slave role. All subjects participated in Experiment 3 after finishing the training in Experiment 2, so they had a certain understanding of the operational guidance system. In the experiment, the expert and the subject stood naturally in front of their respective Kinect sensors, and the expert was invisible to the subjects. The expert performed the operations in turn, and the subject needed to operate under the guidance of the wearable vibrotactile device, driving the astronaut model in the virtual scene to complete the tasks. We selected three representative tasks in the space science experiment. Subjects did not know the operations of the tasks in advance and could only operate under the guidance of the device. Subjects repeated each attempt until the first success, and an attempt was regarded as failed when the vibration of the wearable device exceeded 15 s. The virtual experiment scenes are shown in Figure 11a, and the decomposition of the three tasks is shown in Figure 11b.

We recorded the posture vectors of 10 subjects that participated in the experiment, as well as the finishing time of each task. Additionally, we also recorded the completion of tasks in the Unity virtual experimental scene, that is, the number of attempts when the subjects completed the task successfully for the first time.
Due to the communication delay between the wearable device and the computer, and the reaction time of the subjects, the motion of the slave always lags behind the master. To evaluate the effect of the operation guidance, we used signal matching of the posture vectors between the slave and the master and recorded the delay of each task, as shown in Figure 12.
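One common way to implement such signal matching is to slide the slave waveform against the master's and pick the lag that maximizes their correlation. The sketch below is a generic cross-correlation delay estimator under an assumed 30 Hz Kinect frame rate, not the paper's exact matching procedure:

```python
def estimate_delay(master, slave, frame_rate=30.0, max_lag=150):
    """Estimate, in seconds, how far the slave posture signal lags the
    master signal by maximizing normalized cross-correlation."""
    n = min(len(master), len(slave))
    best_lag, best_score = 0, float("-inf")
    for lag in range(min(max_lag, n - 1) + 1):
        # Correlate slave[t] against master[t - lag] over the overlap.
        score = sum(master[t - lag] * slave[t] for t in range(lag, n))
        score /= n - lag  # normalize by overlap length
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / frame_rate

# A pulse-shaped posture component, with the slave trailing by
# 30 frames (1 s at the assumed 30 fps).
master = [0.0] * 100 + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 100
slave = [0.0] * 30 + master[:-30]
print(estimate_delay(master, slave))  # 1.0
```

The normalization by overlap length keeps long candidate lags from being unfairly penalized for having fewer overlapping samples.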

Experiment 1: Results
The results of Experiment 1 showed that the test for C1 (single vibration source) had an average accuracy of 94.4% with a standard deviation of 3.67, while the test for C2 (double vibration sources) had an average accuracy of 83.3% with a standard deviation of 8.87. In terms of task completion time, the mean value for C1 was 165.389 s, while for C2 it was 327.89 s. As shown in Table 4, the correct rate for C1 is generally above 90%. Subjects reported that it was easy to confuse stimuli in neighboring directions (especially on the inner side of the upper arm). This confusion was more obvious in C2: subjects were more likely to be confused when they perceived two vibration stimuli at different positions at the same time, as can be seen from Table 3, where the correct rate of C2 was significantly lower than that of C1. The finishing times, shown in Figure 13b, also reflect this result. Moreover, the correct rate of the second test of C2 was generally lower than that of the first test; subjects reported that lingering vibration stimuli weakened their perception.
The correct recognition rate for vibrations on different sides was significantly lower than for vibrations on the same side. C3 showed a positive result, with an average correct rate of 85.1%. Overall, the accuracy and finishing times are satisfactory. The perceptual test of Experiment 1 showed that participants were able to correctly distinguish vibration stimuli from different positions and directions.

Experiment 3: Results
As regards the three tasks in Experiment 3, Tables 5 and 6 show the statistical results for the number of attempts before the subjects completed each task successfully for the first time. It is worth noting that Task 1 and Task 3 involve only the right arm, while Task 2 requires both arms to act simultaneously. Subjects reported that during Task 2 it was sometimes confusing when both arms perceived vibration stimulation at the same time, whereas during Task 1 or Task 3 they could focus on the stimulation perceived by one arm. Accordingly, subjects generally succeeded in Task 1 and Task 3 after one or two attempts, while Task 2 required two or three attempts. Moreover, we recorded the finishing time and the delay between the master and the slave, as shown in Figure 14.
To intuitively show the effect of the guidance, the recorded posture vectors are plotted as waveforms in Figure 15. The results of the operation guidance experiment, reported in Figures 13 and 15, show that the guidance instructions could be clearly recognized and understood. The subjects were able to accomplish the virtual operation tasks within two or three attempts under the guidance of the master. Furthermore, the task finishing time of the subjects was almost the same as that of the expert, and the average delay was less than 3 s. The trends of the waveforms in Figure 15 also show that the subjects could follow the posture of the expert's arm well, which verifies the feasibility of the operational guidance system.
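Following accuracy of the kind visible in these waveforms can be quantified by shifting the slave waveform back by the estimated lag and computing an RMS error against the master. This metric is our illustrative addition, not an analysis from the paper:

```python
def tracking_rmse(master, slave, lag_frames):
    """RMS difference between the master waveform and the slave
    waveform after compensating for a known lag (in frames)."""
    n = min(len(master) + lag_frames, len(slave))
    diffs = [master[t - lag_frames] - slave[t] for t in range(lag_frames, n)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# A slave that reproduces the master exactly, one frame late, has zero
# tracking error once the lag is compensated.
master = [1.0, 2.0, 3.0, 4.0]
slave = [0.0, 1.0, 2.0, 3.0, 4.0]
print(tracking_rmse(master, slave, 1))  # 0.0
```

A low RMS error after lag compensation corresponds to the observation that the subjects' waveforms closely follow the shape of the expert's, differing mainly by the reaction and communication delay.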

Conclusions
This work proposed an operational guidance system for space science experiments and verified the feasibility of the wearable vibrotactile device for operational guidance. The system tracks and compares the postures of the master and the slave, and then presents vibration stimulation to guide the action of the slave end. Although the experiments in the virtual environment showed positive results, there is still a long way to go before successful practical application in space science experiments. There are some shortcomings in our approach, which we will address in future work.
The first is that we ignored the degree of freedom (DOF) of arm rotation about its own axis. As a result, when conducting the operation guidance, both the master and the slave have to keep the backs of their hands up to ensure that the directions of their arms are consistent, because the Kinect V2 sensor cannot track the rotation angle of the arm about its own axis. To solve this issue, we plan to utilize inertial sensors to track this rotational degree of freedom of the arms, improve the guidance algorithm, and verify the system with more complex tasks.
The second is that we used continuous vibrotactile stimulation to provide guidance instructions, and as the subjects reported in Experiment 1, prolonged stimulation weakens the perception of vibration. As a next step, we will change the vibration mode to discrete pulses and design experiments to find a better interval time. In addition, we would like to utilize different types of tactile feedback, for example, kinesthetic feedback, vibration combined with skin-stretch feedback, and other vibration actuators [21]. Although the algorithm that omits the z-axis has shown positive results for postures in front of the trunk, introducing the z-axis into the algorithm is also part of future work.
The third is that the user (whether scientist or astronaut) must be in an appropriate position where the Kinect works properly; this has a large impact on practical applications, especially when users are very close to the device, so using multiple Kinect sensors or other motion capture devices [9][10][11] will be considered in future work. The fourth is the delay of space-ground communication in space science experiments, which was not considered in the simulation experiments we set up. Next, we will introduce a delay to simulate the communication between space and ground, further verify the feasibility of the system in a high-delay environment, and try to find an acceptable time-delay threshold. The delay issue is significant in practical applications, and its solution requires further development of space-ground communication technology. Furthermore, we would like to combine our approach with VR or AR technology to take advantage of both, allowing astronauts to see the difference between their own actions and those of the scientists while directly sensing the scientists' actions through the tactile channel.
In this paper, we proposed a vibrotactile guidance system composed of a Kinect V2 sensor, a Unity 3D virtual environment, and a wearable device; its target application is space science experiments, where it can establish a more efficient, intuitive, and transparent interaction interface between ground scientists and astronauts. We constructed a simulated experimental environment, tested the system on three tasks of space science experiments, and verified the feasibility of the system for operation guidance. The waveforms of the postures also show a preferable guiding effect. Participants could follow the guidance well and accomplish the experimental tasks after a short period of training, and the experimental results show that the method has good application prospects.