Article

Inter-Subject EEG Synchronization during a Cooperative Motor Task in a Shared Mixed-Reality Environment

School of Science and Technology, Meiji University, 1-1-1 Higashimita, Tama-ku, Kawasaki 214-8571, Japan
* Author to whom correspondence should be addressed.
Virtual Worlds 2023, 2(2), 129-143; https://doi.org/10.3390/virtualworlds2020008
Submission received: 25 January 2023 / Revised: 1 March 2023 / Accepted: 14 March 2023 / Published: 20 April 2023

Abstract
Mixed-reality (MR) environments, in which virtual objects are overlaid on the real environment and shared with peers through transparent optical head-mounted displays, are considered well suited for collaborative work. However, no studies have provided neuroscientific evidence of their effectiveness. In contrast, inter-brain synchronization has been repeatedly observed in cooperative tasks and can be used as an index of the quality of cooperation. In this study, we used electroencephalography (EEG) to simultaneously measure the brain activity of pairs of participants, a technique known as hyperscanning, during a cooperative motor task to investigate whether inter-brain synchronization would also be observed in a shared MR environment. The participants were presented with virtual building blocks to grasp and assemble into an object, either cooperatively with a partner or individually. We found that inter-brain synchronization in the cooperative condition was stronger than that in the individual condition (F(1, 15) = 4.70, p < 0.05). In addition, there was a significant correlation between task performance and inter-brain synchronization in the cooperative condition (rs = 0.523, p < 0.05). Therefore, the shared MR environment was sufficiently effective to evoke inter-brain synchronization, which reflects the quality of cooperation. This study offers a promising neuroscientific method for objectively measuring the effectiveness of MR technology.

1. Introduction

Mixed-reality (MR) environments are considered particularly suitable for collaborative work [1]. MR offers environments in which physical and digital objects coexist in a user’s visual perception of the real world using transparent head-mounted displays [2,3]. Although both MR and immersive virtual reality (VR) can display and allow a user to interact with virtual objects, MR can be superior to immersive VR in that MR can superimpose a virtual world on the real world, and multiple people in a shared MR space can have a shared experience of manipulating and observing virtual objects simultaneously. Owing to these advantages, MR has been applied in a variety of fields for face-to-face and remote collaborative tasks [4,5,6,7,8,9]. However, to the best of our knowledge, no studies have been conducted to obtain neuroscientific evidence that shared MR environments are effective for cooperative tasks.
Recently, a growing number of studies have employed hyperscanning, a neuroimaging technique in which the brain activities of two or more individuals are measured simultaneously. Previous studies have reported that inter-brain synchronization is enhanced during cooperative actions [10,11,12,13,14]. Hyperscanning is considered a promising approach to studying social interaction, especially in naturalistic environments, because it does not presuppose a rigid, time-controlled experimental setup (such as an event-related design in an experimental room). Instead, the major analytical technique in hyperscanning is to examine inter-brain synchronization among participants over the experimental session, which can be applied to more naturalistic experimental settings.
Owing to their flexibility for application to naturalistic tasks, electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are often used in hyperscanning studies. Previous studies have reported inter-brain neural synchronization between two or more participants in various cooperative tasks. Yun et al. [15] found that frontal EEG theta rhythms were synchronized between pairs of participants during a mutual finger-pointing task, and this synchronization was further enhanced when their finger movements were also synchronized. Similar EEG frontal theta synchronization was reported during ensemble guitar play [16,17]. Kawasaki et al. [18] measured dyads' EEG activity while they alternately pronounced letters of the alphabet and showed that inter-brain frontal synchronization was enhanced as behavioral coordination (speech rhythm synchronization) increased. Similar inter-brain frontal synchronization has also been reported in studies utilizing hemodynamic measures, such as fNIRS and functional magnetic resonance imaging (fMRI) [19,20,21,22,23,24]. Several studies have also demonstrated EEG synchronization in other brain areas, including the sensorimotor area [10] and the temporo-parietal junction (TPJ) [25], in social interaction tasks. Recently, Dikker et al. [26] reported inter-brain synchronization over broader brain areas during real-world high school classes, which was enhanced when students were more engaged in the class. These findings suggest that inter-brain synchronization can be used as a measure of the "quality" of social interaction between individuals during cooperative tasks.
In this study, we hypothesized that inter-brain synchronization during a cooperative task would be observed even in a shared MR environment. We investigated the neural activities of pairs of participants engaged in a cooperative motor task in a shared MR environment using the EEG hyperscanning technique. To this end, we developed an MR application in which participants could grasp and move virtual building blocks to assemble an object to imitate a target object that was simultaneously presented. The participants performed this task either cooperatively with a partner (cooperative condition) or individually in parallel with another participant (individual condition). We anticipated that inter-brain synchronization would be observed between pairs in the cooperative condition but not in the individual condition.

2. Materials and Methods

2.1. Participants

A total of 32 healthy adults, forming 16 same-sex pairs (7 female pairs; 14 pairs with both members right-handed and 2 pairs with one left-handed and one right-handed member; mean age 22.1 years, SD 0.95, range 20–24), participated in this experiment. All had normal or corrected-to-normal vision. The experiment was conducted according to the principles and guidelines of the Declaration of Helsinki with the approval of the ethics committee of the School of Science and Technology, Meiji University.

2.2. Experimental Procedures

The pairs of participants sat face-to-face while wearing head-mounted MR devices (HoloLens, Microsoft; Figure 1a). The experimental task was to use a set of virtual building blocks presented through the MR device to assemble a structure matching a sample object. The task was performed either cooperatively with a partner (cooperative condition; Figure 1b,d) or separately in parallel with the peer (individual condition; Figure 1c,e), which served as the control condition. The order of conditions was counterbalanced across pairs.
All participants were first instructed on the functionality of the MR device and learned how to operate it in a practice session, in which they assembled an object simpler than the sample object used in the actual task.
The main session started with the experimenter's verbal instruction. One of the participants first pressed the "Sample" button, presented near the center of the MR visual area, to make the sample object appear. Next, the participant freely pressed the "BlueCube" (for the member of the pair designated 1P) or "RedCube" (for the other, designated 2P) button on the left side of the MR visual area to make a virtual building block (Cube) appear. The color of the Cube for each participant was determined in advance. The participants had to build the same object as the sample in an area near the center of the MR visual field, where small white cubes (3 high × 5 wide × 3 deep) served as a guide. While building the object, participants were free to place blocks anywhere in the workspace and were not limited to stacking them on top of other blocks. The participants could move a virtual object by pressing it, dragging it to a guide cube, and then releasing it (as in "drag and drop" interactions with a mouse cursor in a conventional graphical user interface) while fixating on the object throughout the movement. The participants were instructed to minimize their head movements during the task.
In the cooperative condition, the pair of participants assembled an object together. To do so, they needed to understand one another's intentions regarding where to place their cubes so as to build the object efficiently. The participants were allowed to converse minimally during the task; the mean duration of conversation was 3.47 s (SD 2.59), the mean number of conversational exchanges per task was 4.5 (SD 4.0), and the average ratio of conversation time to task time was 7.87% (SD 8.53%). In the individual condition, each participant built the same kind of object as in the cooperative condition but did so behind a virtual wall that rendered the other participant and their cubes invisible (Figure 1c). The sample object in the individual condition was presented to each participant separately and was composed only of blocks of a predetermined color (blue for 1P and red for 2P). The shape of the sample object was equivalent in the two conditions. We regard the individual condition as a control condition because each participant could still see their own real environment, as in the cooperative condition (e.g., their own hands and the surroundings of the room), but not the other participant's workspace. Because the participants performed the same building task as in the cooperative condition, their motor activity was comparable between the two conditions.
The participants themselves judged when the task was complete, and the task ended when a participant said "Finished" aloud. The mean task duration was 287.4 s (SD 184.8) for the cooperative condition and 294.9 s (SD 120.1) for the individual condition (t(15) = 0.144, p = 0.887). In the cooperative condition, one participant was designated in advance to press the "Sample" button at the beginning of the task and to declare the task complete.
After each condition, the participants answered a questionnaire about the task, with items rated on a seven-point Likert scale (from −3 to 3), to confirm that the cooperative condition indeed required cooperation between the participants and that the two conditions differed in terms of cooperation. The items asked participants to rate the quality of their cooperation ("I felt we cooperated well"), their sense of sharing the space ("I felt we were sharing the space"), and their sense of sharing the objects ("I felt we were sharing the virtual objects").

2.3. EEG Recordings

EEG signals of both participants in each pair were simultaneously recorded. Signals were recorded at 16 scalp sites (F5, F3, Fz, F4, F6, C5, C3, Cz, C4, C6, P5, Pz, P6, PO3, POz and PO4), located according to the extended international 10/20 system. The reference electrode was placed on the participants’ right earlobes. The EEG signal was amplified and digitized at 512 Hz with a band-pass filter of 0.5–60 Hz.
Figure 2 shows the layout of the experimental apparatus. The pairs of participants sat face-to-face, 3 m apart. Their EEG signals were recorded separately using two individual EEG preamplifiers and amplifiers (g.USBamp, g.tec Inc., Schiedlberg, Austria). Our method of recording EEG data was similar to that used in a previous study [27]. A trigger signal was input to both EEG amplifiers by the experimenter's button press to record the start and end times of the task and to keep the two EEG recordings temporally synchronized.

2.4. Data Analysis

2.4.1. Subjective Ratings

Because the questionnaire data were not normally distributed (Lilliefors test, p < 0.05), we analyzed the scores for cooperation, sharing space, and sharing objects with Wilcoxon signed-rank tests: first against zero, to confirm that the cooperative task was indeed experienced as cooperative, and then between the two conditions, to examine whether subjective experience differed between the cooperative and individual tasks.
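As an illustration, this analysis can be sketched in MATLAB (Statistics and Machine Learning Toolbox) roughly as follows; the variable names and placeholder data are ours, not the authors' code, and assume one rating per participant and condition (N = 32).

```matlab
% Minimal sketch of the subjective-rating analysis described above.
% Placeholder data stand in for the actual 7-point Likert ratings.
coopScores  = randi([1 3], 32, 1);    % placeholder cooperative-condition ratings
indivScores = randi([-3 -1], 32, 1);  % placeholder individual-condition ratings

[~, pNorm] = lillietest(coopScores);                        % normality check (Lilliefors)
[pZero, ~, statsZero] = signrank(coopScores);               % one-sample test against zero
[pPair, ~, statsPair] = signrank(coopScores, indivScores);  % paired test between conditions
```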

2.4.2. EEG Data

All computations on the EEG signals were performed in the MATLAB R2018b (MathWorks Inc., Natick, MA, USA) environment. First, we extracted the EEG data corresponding to the task, based on the trigger signals recorded at its start and end. Then, for the individual condition, the data length was equalized within each pair by truncating the longer recording (of the participant with the longer task time) to match the shorter one. At this stage, we rejected the data of three electrodes (F4, C6, and Pz) due to measurement errors and large artifacts caused by the participants' movements.
Second, we preprocessed the extracted data using the MATLAB toolbox EEGLAB, version 14.1.2b [28]. We applied a 35 Hz low-pass filter to the EEG signals and divided them into 1-s epochs. Epochs containing artifacts at any time step were removed, with a rejection threshold of ±100 μV, for the remaining 13 electrodes [29,30,31,32]. In addition, we eliminated all epochs during which the participants conversed in the cooperative task by manually checking the video data. For each pair of participants, only epochs that were accepted for both participants (common epochs) were included in the analysis [26,33]. Overall, epoch rejection rates were 49.6% (SD 24.7) for the cooperative condition and 44.9% (SD 30.8) for the individual condition (t(15) = 0.69, p = 0.498).
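To make the epoching, amplitude-based rejection, and common-epoch selection concrete, the following is a minimal MATLAB sketch under our own assumptions about the data layout (it is not the authors' pipeline; EEGLAB offers equivalent functionality):

```matlab
% Sketch of epoching, +/-100 uV artifact rejection, and common-epoch
% selection. eegA/eegB: [channels x samples] task-period EEG (microvolts),
% already low-pass filtered; placeholder data stand in for real recordings.
fs       = 512;                 % sampling rate (Hz)
epochLen = fs;                  % 1-s epochs
eegA = randn(13, 300 * fs);     % placeholder for participant A's 13-channel EEG
eegB = randn(13, 300 * fs);     % placeholder for participant B's EEG
nEpochs = floor(size(eegA, 2) / epochLen);

keepA = false(1, nEpochs);
keepB = false(1, nEpochs);
for k = 1:nEpochs
    idx = (k - 1) * epochLen + (1:epochLen);
    keepA(k) = all(abs(eegA(:, idx)) <= 100, 'all');  % reject epochs exceeding 100 uV
    keepB(k) = all(abs(eegB(:, idx)) <= 100, 'all');
end

commonEpochs = find(keepA & keepB);  % epochs accepted for BOTH participants
```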
Third, we employed total interdependence (TI) [26,33,34] to compute brain-to-brain synchronization within pairs during the task. TI is defined in terms of spectral coherence. In this study, we computed spectral coherence with the Welch method, which controls for bias in coherence estimation [31]. For a pair of simultaneously acquired time series (x_1, y_1), (x_2, y_2), (x_3, y_3), …, (x_n, y_n), TI was computed according to the formula
$$ \mathrm{TI}_{x,y} = -\frac{1}{2\pi} \int_{-\pi}^{\pi} \ln\left(1 - C_{xy}^{2}(\lambda)\right) \, d\lambda \qquad (1) $$
where C_{xy}^2(λ) is the (magnitude-squared) coherence between the two signals, x and y, at frequency f = λ/2π. For two Gaussian processes, this formula has been shown to measure the total amount of mutual information between them. Numerically, for a given sampling frequency f_s, Equation (1) can be recast into the implementable form given below.
$$ \mathrm{TI}_{x,y} = -\frac{2}{f_s} \sum_{i=1}^{N-1} \ln\left(1 - C_{xy}^{2}(i\,\Delta f)\right) \Delta f \qquad (2) $$
where Δf = f_s/(2(N − 1)) is the frequency resolution and N is the number of desired frequency points in the interval between 0 and the Nyquist frequency f_s/2.
In this study, TI was estimated by computing the magnitude-squared coherence with the Welch method for one-to-one pairings of the same electrodes (e.g., Fz-Fz) across the two participants. The magnitude-squared coherence was calculated for the frequency range of 3–20 Hz by tapering non-overlapping 1-s epochs with a Hanning window and applying the Fourier transform at a frequency resolution of 1 Hz. The resulting TI value ranges from 0 to 1, growing (toward 1) with greater inter-brain synchronization. This method has two main advantages. First, it characterizes the synchronization of brain activity between individuals across a broad frequency band (3–20 Hz), which may reflect general connectivity without being bound to a specific band (e.g., theta, alpha, or beta). Second, it can be applied to data with high epoch rejection rates because TI can be estimated from short accepted epochs (1 s in this study) [26].
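As a concrete illustration, a TI computation along these lines could be sketched in MATLAB using the Signal Processing Toolbox function mscohere; this is a sketch under our assumptions, and epochsA/epochsB are hypothetical matrices of the common 1-s epochs for one electrode pair.

```matlab
% Sketch of the TI computation (Equation (2)) for one electrode pair
% (e.g., Fz-Fz). epochsA/epochsB: [epochLen x nCommonEpochs] matrices of
% the common 1-s epochs; placeholder data stand in for real epochs.
fs = 512;
epochsA = randn(fs, 40);   % placeholder common epochs, participant A
epochsB = randn(fs, 40);   % placeholder common epochs, participant B

win = hann(fs);            % 1-s Hanning taper
f   = 3:1:20;              % 3-20 Hz at 1-Hz resolution
dF  = 1;                   % frequency resolution (Hz)

% Welch magnitude-squared coherence over non-overlapping 1-s segments;
% with noverlap = 0, each Welch segment coincides with one accepted epoch.
Cxy = mscohere(epochsA(:), epochsB(:), win, 0, f, fs);

% Total interdependence over 3-20 Hz (Equation (2))
TI = -(2 / fs) * sum(log(1 - Cxy)) * dF;
```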
We selected nine electrodes that were relatively free of noise across participants from the anterior, central, and posterior regions (as in [26,33]): three frontal (Fz, F5, F6), three central (Cz, C3, C4), and three posterior (POz, P5, P6). We then performed a repeated-measures two-way ANOVA on the TI values with condition (cooperative, individual) and electrode (Fz, F5, F6, Cz, C3, C4, POz, P5, P6) as main factors.
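For reference, a repeated-measures ANOVA of this design could be run in MATLAB with fitrm/ranova roughly as follows; the TI matrix layout and all variable names are our assumptions, not the authors' code.

```matlab
% Sketch of the condition x electrode repeated-measures ANOVA, assuming
% TI is a [16 pairs x 18 cells] matrix: columns 1-9 = cooperative
% condition at the nine electrodes, columns 10-18 = individual condition.
TI    = rand(16, 18);                        % placeholder data
elecs = {'Fz','F5','F6','Cz','C3','C4','POz','P5','P6'};

varNames = [strcat('coop_', elecs), strcat('indiv_', elecs)];
t = array2table(TI, 'VariableNames', varNames);

% Within-subject design: condition and electrode factors, matching columns
within = table( ...
    categorical([repmat({'coop'}, 9, 1); repmat({'indiv'}, 9, 1)]), ...
    categorical([elecs'; elecs']), ...
    'VariableNames', {'Condition', 'Electrode'});

rm  = fitrm(t, 'coop_Fz-indiv_P6 ~ 1', 'WithinDesign', within);
tbl = ranova(rm, 'WithinModel', 'Condition*Electrode');
```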
In addition, we employed a permutation test [35] to further confirm the significance of the TI values. To yield control TI values under the null hypothesis, the common-epoch data of one member of a dyad were randomly shuffled in time, and the TI value was recalculated (control TI value). We repeated this procedure 1000 times; the proportion of the 1000 control TI values that exceeded the experimental TI value served as the significance level (p = count/1000). We also performed a post hoc paired t-test and calculated the effect size between the cooperative and individual conditions to confirm the conditional difference.
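The permutation procedure could be sketched as follows; computeTI is a hypothetical helper wrapping the mscohere-based TI computation above, and epochsA/epochsB are the illustrative epoch matrices from the previous sketch.

```matlab
% Sketch of the permutation test: shuffling one participant's epochs in
% time destroys moment-to-moment inter-brain coupling while preserving
% each signal's spectral content. computeTI is a hypothetical helper
% wrapping the mscohere-based TI computation sketched above.
nPerm  = 1000;
nEp    = size(epochsB, 2);
tiNull = zeros(nPerm, 1);

for k = 1:nPerm
    shuffled  = epochsB(:, randperm(nEp));    % shuffle epoch order in time
    tiNull(k) = computeTI(epochsA, shuffled); % control TI value
end

% Significance: fraction of control TI values exceeding the observed TI
tiObs = computeTI(epochsA, epochsB);
pPerm = sum(tiNull > tiObs) / nPerm;
```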
Additionally, we performed a two-way ANOVA (condition × electrode) on the mean magnitude-squared coherence in the standard EEG bands (delta: 3 Hz; theta: 4–7 Hz; alpha: 8–13 Hz; beta: 14–20 Hz).
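The band-wise values can be obtained by averaging the coherence spectrum within each band before the ANOVA; a minimal sketch under the same assumptions (Cxy and f come from the mscohere sketch above):

```matlab
% Sketch of band-wise averaging of the magnitude-squared coherence
% spectrum Cxy (computed on the 3-20 Hz grid above) before the
% condition x electrode ANOVA. Band edges follow the text.
f     = 3:20;
bands = struct('delta', 3, 'theta', 4:7, 'alpha', 8:13, 'beta', 14:20);

names = fieldnames(bands);
for k = 1:numel(names)
    sel = ismember(f, bands.(names{k}));
    bandMean.(names{k}) = mean(Cxy(sel));    % mean coherence in this band
end
```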

3. Results

3.1. Subjective Reports

Figure 3 shows the questionnaire results for the cooperative and individual conditions. Scores in the cooperative condition were significantly greater than zero (Z = 4.85, p = 1.26 × 10−6, r = 0.857 for cooperation; Z = 4.52, p = 6.06 × 10−6, r = 0.799 for sharing space; Z = 4.78, p = 1.76 × 10−6, r = 0.845 for sharing objects; Wilcoxon signed-rank test), and rating scores in the cooperative condition were significantly greater than those in the individual condition (Z = 4.98, p = 6.25 × 10−7, r = 0.880 for cooperation; Z = 4.93, p = 8.28 × 10−7, r = 0.872 for sharing space; Z = 4.96, p = 7.23 × 10−7, r = 0.877 for sharing objects; Wilcoxon signed-rank test).

3.2. Inter-Brain Synchronization

We confirmed that the TI values were normally distributed (p > 0.05, Lilliefors test). A repeated-measures two-way ANOVA (condition × electrode) showed a significant main effect of condition (F(1, 15) = 4.70, p = 0.047, η2 = 0.238; Figure 4), but no main effect of electrode (F(8, 120) = 0.519, p = 0.840, η2 = 0.033) and no condition × electrode interaction (F(8, 120) = 1.24, p = 0.283, η2 = 0.076). We also examined the four frequency bands separately (delta, theta, alpha, and beta) with a two-way ANOVA (condition × electrode) but found no statistically significant effects (p > 0.1). These results indicate that inter-brain synchronization was significantly higher in the cooperative condition than in the individual condition, independent of channel location and frequency band.
We then performed a permutation test and calculated the effect size of the conditional difference at each electrode. The permutation test yielded significant results in the cooperative condition at several electrodes (p < 0.05, uncorrected; Table 1), although these did not survive Bonferroni correction for multiple comparisons; no electrode reached significance in the individual condition (p > 0.05, uncorrected). Effect size analyses showed large effects (coop. > indiv.) at Fz (d = 1.035) and C4 (d = 0.827) and medium effects at other electrodes (C3, POz, and P6; Table 1). These results confirm that inter-brain synchronization was greater in the cooperative condition than in the individual condition over broad brain areas.

3.3. Correlation between Subjective Reports and Inter-Brain Synchronization

We first calculated Spearman's correlation coefficients between the questionnaire results (subjective reports) and inter-brain synchronization (TI). There was a significant positive correlation between the rating score for "sharing objects" and the TI value at Fz in the cooperative condition (rs(31) = 0.373, p = 0.036; Figure 5a; Table 2), and a weak positive correlation between the rating score for "sharing space" and the TI value at Fz in the cooperative condition (rs(31) = 0.308, p = 0.087; Figure 5b). There were no other significant correlations between questionnaire scores and TI values at other electrodes or in the individual condition (rs(31) < 0.3).
We then calculated Spearman's correlation between task performance (performance time) and the TI value. We found a significant positive correlation at F5 in the cooperative condition (rs(15) = 0.523, p = 0.038; Figure 6a; Table 2), indicating that inter-brain synchronization was higher when performance time was longer (i.e., when cooperation did not proceed well). In contrast, we found a marginal negative correlation at C3 in the cooperative condition (rs(15) = −0.423, p = 0.103; Figure 6b), indicating that inter-brain synchronization was higher when performance time was shorter (i.e., when cooperation worked well).
Finally, there was no correlation between the results of the questionnaire and task performance (p > 0.1).

4. Discussion

We investigated the inter-brain synchronization of two individuals performing a cooperative motor task in a shared MR environment using EEG hyperscanning. The subjective reports showed that the rating scores of the questionnaire items (cooperation, sharing space, and sharing objects) were significantly greater than zero in the cooperative condition and significantly greater in the cooperative condition than in the individual condition. In terms of brain activity, inter-brain synchronization in the cooperative task was stronger than that in the individual task. In addition, significant correlations were found in the cooperative condition between inter-brain synchronization and both a subjective report (sharing objects) and an objective measure (performance time). These results indicate that the cooperative task in the MR environment was sufficiently effective to evoke inter-brain synchronization, which can be taken to show that the dyads cooperated well in the task.
These results show that inter-brain synchronization during the cooperative task was stronger than that during the individual task. Dikker et al. [26] and Bevilacqua et al. [33] used TI values averaged over broad brain areas, including occipital (O1, O2), frontal (F3, F4), and parietal electrodes (P7, P8), rather than in a highly localized manner, to investigate general differences in inter-brain synchronization between conditions. Similar to their results, we found significant inter-brain synchronization in the cooperative condition. Moreover, the correlation analyses showed that the participants' subjective reports were correlated with inter-brain synchronization. Thus, it is reasonable to conclude that the cooperative task fostered cooperation within dyads. We therefore postulate that inter-brain synchronization could be associated with the quality of cooperation, and hence with a sense of "unity" [36,37] or "we-mode" cognition [38], between dyad members during the cooperative task in the shared MR environment.
Previous studies have reported inter-brain neural synchronization in the frontal regions during various types of cooperative tasks, including cooperative motor tasks [15,20,23,39,40,41], ensemble guitar playing or singing [16,42], face-to-face dialog [21], teacher-student interaction [43,44], mutual gaze [19,24], and cooperative creative tasks [45]. These studies reported brain activity when participants collaborated in real-world face-to-face environments. The present study is, in contrast, the first to report comparable inter-brain synchronization in an MR environment. Although the exact cognitive abilities required by these tasks may vary, the frontal region likely plays critical roles in executive functions, or higher-level cognitive control, such as problem-solving, working memory, inhibition, and decision making [46,47,48]. In the cooperative task in this study, the participants were often required not only to plan their own actions for placing the virtual objects, but also to carefully observe their partner's actions to assemble the target object effectively. This would require the participants to generate an overall action plan, to decide when or whether to move, to track the current status against the final goal, and to communicate with the peer when necessary (although conversation was restricted to a minimum). All of these demands draw on the executive functions of the frontal cortex and would be served best when plans and intentions were shared between the dyad members throughout the task, which was eventually reflected in inter-brain frontal synchronization.
It is also plausible that brain activity over the frontal region reflects the function of understanding the thoughts of others, as the medial prefrontal cortex (mPFC) is considered one of the core regions for mentalizing, or "theory of mind" (ToM) [49,50,51]. ToM is the ability to infer others' mental states, such as their thoughts and beliefs, which provides a framework for effective social interaction. Many studies have reported that the mPFC is involved in ToM processing during interactive tasks [51,52,53]. Therefore, inter-brain synchronization over the frontal region may reflect mutual mentalizing of others' thoughts, especially when the dyads did not cooperate well and had to infer their partner's intentions more deliberately, as suggested by our finding that frontal inter-brain synchronization (F5) was significantly correlated with performance time.
Mutual action observation, which involves the mirror neuron system (MNS), is another factor that helps to explain inter-brain synchronization, as the cooperative task allowed the participants to see their peers' movements. The MNS consists of brain regions, including fronto-parietal sensorimotor cortices, that activate not only when an individual performs an action but also when they observe someone else performing the same action, allowing us to understand the intentions behind other people's movements and how they feel at that moment [37,54,55,56]. In this study, pairs of participants performed the cooperative task sitting face-to-face in the MR environment so that they could see each other. During the task, they had opportunities to observe their partner's movements in placing blocks; similar activity in the performer's motor-related regions and the observer's MNS-related regions would therefore be expected. The marginal negative correlation between TI value and performance time (rs = −0.423) found at C3 in our experiment indicates that inter-brain synchronization in the motor area was enhanced when the participants cooperated well and finished the task earlier, which might reflect MNS activity. This interpretation is also consistent with previous findings of inter-brain synchronization during mutual motor tasks [10].
In the present study, task performance was not correlated with the subjective reports (feelings of cooperation, sharing space, and sharing objects). This indicates that task performance may not be related to the subjective quality of cooperation in a simple manner. One reason may be that the speed of motor execution differed from pair to pair; even when the subjective feeling of cooperation was high, performance time was not necessarily short. Performance time may also be related to participants' familiarity with MR-like technologies. Because MR is a relatively novel technology, none of the participants had previously experienced MR, although those with prior VR experience may have had an advantage in the MR environment.
Finally, the limitations of this study should be noted. First, although the number of participants was comparable to that in other hyperscanning studies, the small sample size might have weakened the statistical power. Nevertheless, considering the strong conditional difference in the subjective measure (questionnaire), we could expect a strong conditional effect in neural activity as well, which was indeed observed in the TI values (effect size d = 1.04 at Fz). The sample size (N = 16 pairs) was also justified by a power analysis. Second, the epoch rejection rate was relatively high (>40%), although similar values have often been reported in previous real-world hyperscanning studies. This is mainly because of the difficulty of attaching EEG electrodes together with the MR device and the resulting lower tolerance to motion artifacts. The use of more flexible and motion-tolerant electrodes in mobile EEG devices (such as patch-type EEG sensors) may overcome this problem in future studies. Conversation is another source of artifacts during a collaborative activity, but it is important to the interaction between dyad members performing a cooperative task. In this experiment, the average ratio of conversation time to task time was short, and on average 90% of the data were unaffected by conversation (note also that we removed all epochs during conversation from the analyses). Therefore, we believe that noisy epochs caused by conversation and body movements were successfully rejected in the preprocessing stage. It would be beneficial to incorporate multiscale PCA [57] and signal decomposition methods [58] to remove noise from EEG signals in future studies. Third, our experiment targeted inter-brain synchronization within dyads. Applying the hyperscanning technique to groups of three or more participants will be important for further investigating the neural underpinnings of cooperative behavior. Cooperation has proven effective in enhancing work outcomes in many fields and across age groups [59]. Cooperative learning is also emerging as a new educational form in companies and schools, promoting group knowledge generation, improving the ability to solve complex problems, and encouraging learners to participate in learning [60]. The MR technology used in our experiment can be applied to group interaction; thus, group-level inter-brain synchronization in a shared MR environment should be investigated in future research.

5. Conclusions

To the best of our knowledge, this study is the first to apply the EEG hyperscanning technique to investigate inter-brain synchronization during a cooperative task in a shared MR environment. Our results successfully showed that inter-brain synchronization was also observed in a shared MR environment during the cooperative task. These results suggest that measures of inter-brain synchronization can reflect the quality of cooperation in a shared MR environment. This study offers a promising neuroscientific method to objectively measure the effectiveness of VR and MR.

Author Contributions

Conceptualization, Y.O. and S.S.; methodology, Y.O.; software, Y.O.; formal analysis, Y.O.; data curation, Y.O.; writing—original draft preparation, Y.O.; writing—review and editing, S.S.; visualization, Y.O.; supervision, S.S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the JST Moonshot R&D (Grant Number JPMJMS2013) and JSPS KAKENHI (Grant Number 21H03785).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the School of Science and Technology, Meiji University (21-551).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data are available upon request, which may be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ens, B.; Lanir, J.; Tang, A.; Bateman, S.; Lee, G.; Piumsomboon, T.; Billinghurst, M. Revisiting collaboration through mixed reality: The evolution of groupware. Int. J. Hum. Comput. Stud. 2019, 131, 81–98. [Google Scholar] [CrossRef]
  2. Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.-M. A review on mixed reality: Current trends, challenges and prospects. Appl. Sci. 2020, 10, 636. [Google Scholar] [CrossRef]
  3. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, E77D, 1321–1329. [Google Scholar]
  4. Lindgren, R.; Tscholl, M.; Wang, S.; Johnson, E. Enhancing learning and engagement through embodied interaction within a mixed reality simulation. Comput. Educ. 2016, 95, 174–187. [Google Scholar] [CrossRef]
  5. Aguayo, C.; Danobeitia, C.; Cochrane, T.; Aiello, S.; Cook, S.; Cuevas, A. Embodied reports in paramedicine mixed reality learning. Res. Learn. Technol. 2018, 26, 2150. [Google Scholar] [CrossRef]
  6. Leonard, S.N.; Fitzgerald, R.N. Holographic learning: A mixed reality trial of Microsoft HoloLens in an Australian secondary school. Res. Learn. Technol. 2018, 26, 2160. [Google Scholar] [CrossRef]
  7. Ali, A.A.; Dafoulas, G.A.; Augusto, J.C. Collaborative Educational Environments Incorporating Mixed Reality Technologies: A Systematic Mapping Study. IEEE Trans. Learn. Technol. 2019, 12, 321–332. [Google Scholar] [CrossRef]
  8. Piumsomboon, T.; Dey, A.; Ens, B.; Lee, G.; Billinghurst, M. The effects of sharing awareness cues in collaborative mixed reality. Front. Robot. AI 2019, 6, 5. [Google Scholar] [CrossRef]
  9. Ask, T.F.; Kullman, K.; Sutterlin, S.; Knox, B.J.; Engel, D.; Lugo, R.G. A 3D mixed reality visualization of network topology and activity results in better dyadic cyber team communication and cyber situational awareness. Front. Big Data 2023, 6, 1042783. [Google Scholar] [CrossRef]
  10. Dumas, G.; Nadel, J.; Soussignan, R.; Martinerie, J.; Garnero, L. Inter-brain synchronization during social interaction. PLoS ONE 2010, 5, e12166. [Google Scholar] [CrossRef]
  11. Hari, R.; Himberg, T.; Nummenmaa, L.; Hamalainen, M.; Parkkonen, L. Synchrony of brains and bodies during implicit interpersonal interaction. Trends Cogn. Sci. 2013, 17, 105–106. [Google Scholar] [CrossRef] [PubMed]
  12. Schilbach, L.; Timmermans, B.; Reddy, V.; Costall, A.; Bente, G.; Schlicht, T.; Vogeley, K. Toward a second-person neuroscience. Behav. Brain Sci. 2013, 36, 393–462. [Google Scholar] [CrossRef] [PubMed]
  13. Babiloni, F.; Astolfi, L. Social neuroscience and hyperscanning techniques: Past, present and future. Neurosci. Biobehav. Rev. 2014, 44, 76–93. [Google Scholar] [CrossRef]
  14. Koike, T.; Tanabe, H.C.; Sadato, N. Hyperscanning neuroimaging technique to reveal the “two-in-one” system in social interactions. Neurosci. Res. 2015, 90, 25–32. [Google Scholar] [CrossRef] [PubMed]
  15. Yun, K.; Watanabe, K.; Shimojo, S. Interpersonal body and neural synchronization as a marker of implicit social interaction. Sci. Rep. 2012, 2, 959. [Google Scholar] [CrossRef]
  16. Lindenberger, U.; Li, S.C.; Gruber, W.; Muller, V. Brains swinging in concert: Cortical phase synchronization while playing guitar. BMC Neurosci. 2009, 10, 22. [Google Scholar] [CrossRef]
  17. Muller, V.; Saenger, J.; Lindenberger, U. Hyperbrain network properties of guitarists playing in quartet. Ann. N. Y. Acad. Sci. 2018, 1423, 198–210. [Google Scholar] [CrossRef]
  18. Kawasaki, M.; Yamada, Y.; Ushiku, Y.; Miyauchi, E.; Yamaguchi, Y. Inter-brain synchronization during coordination of speech rhythm in human-to-human social interaction. Sci. Rep. 2013, 3, 1692. [Google Scholar] [CrossRef]
  19. Saito, D.N.; Tanabe, H.C.; Izuma, K.; Hayashi, M.J.; Morito, Y.; Komeda, H.; Uchiyama, H.; Kosaka, H.; Okazawa, H.; Fujibayashi, Y.; et al. “Stay tuned”: Inter-individual neural synchronization during mutual gaze and joint attention. Front. Integr. Neurosci. 2010, 4, 127. [Google Scholar] [CrossRef]
  20. Cui, X.; Bryant, D.M.; Reiss, A.L. NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation. Neuroimage 2012, 59, 2430–2437. [Google Scholar] [CrossRef]
  21. Jiang, J.; Dai, B.H.; Peng, D.L.; Zhu, C.Z.; Liu, L.; Lu, C.M. Neural Synchronization during Face-to-Face Communication. J. Neurosci. 2012, 32, 16064–16069. [Google Scholar] [CrossRef] [PubMed]
  22. Tanabe, H.C.; Kosaka, H.; Saito, D.N.; Koike, T.; Hayashi, M.J.; Izuma, K.; Komeda, H.; Ishitobi, M.; Omori, M.; Munesue, T.; et al. Hard to “tune in”: Neural mechanisms of live face-to-face interaction with high-functioning autistic spectrum disorder. Front. Hum. Neurosci. 2012, 6, 268. [Google Scholar] [CrossRef] [PubMed]
  23. Liu, N.; Mok, C.; Witt, E.E.; Pradhan, A.H.; Chen, J.E.; Reiss, A.L. NIRS-Based Hyperscanning Reveals Inter-brain Neural Synchronization during Cooperative Jenga Game with Face-to-Face Communication. Front. Hum. Neurosci. 2016, 10, 82. [Google Scholar] [CrossRef] [PubMed]
  24. Hirsch, J.; Zhang, X.; Noah, J.A.; Ono, Y. Frontal temporal and parietal systems synchronize within and across brains during live eye-to-eye contact. NeuroImage 2017, 157, 314–330. [Google Scholar] [CrossRef] [PubMed]
  25. Jahng, J.; Kralik, J.D.; Hwang, D.-U.; Jeong, J. Neural dynamics of two players when using nonverbal cues to gauge intentions to cooperate during the Prisoner’s Dilemma Game. NeuroImage 2017, 157, 263–274. [Google Scholar] [CrossRef] [PubMed]
  26. Dikker, S.; Wan, L.; Davidesco, I.; Kaggen, L.; Oostrik, M.; McClintock, J.; Rowland, J.; Michalareas, G.; Van Bavel, J.J.; Ding, M.Z.; et al. Brain-to-Brain Synchrony Tracks Real-World Dynamic Group Interactions in the Classroom. Curr. Biol. 2017, 27, 1375–1380. [Google Scholar] [CrossRef]
  27. Barraza, P.; Dumas, G.; Liu, H.; Blanco-Gomez, G.; Van Den Heuvel, M.I.; Baart, M.; Pérez, A. Implementing EEG hyperscanning setups. MethodsX 2019, 6, 428–436. [Google Scholar] [CrossRef]
  28. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef]
  29. Picton, T.W.; van Roon, P.; Armilio, M.L.; Berg, P.; Ille, N.; Scherg, M. The correction of ocular artifacts: A topographic perspective. Clin. Neurophysiol. 2000, 111, 53–65. [Google Scholar] [CrossRef]
  30. van der Helden, J.; Boksem, M.A.S.; Blom, J.H.G. The Importance of Failure: Feedback-Related Negativity Predicts Motor Learning Efficiency. Cereb. Cortex 2010, 20, 1596–1603. [Google Scholar] [CrossRef]
  31. Burgess, A.P. On the interpretation of synchronization in EEG hyperscanning studies: A cautionary note. Front. Hum. Neurosci. 2013, 7, 881. [Google Scholar] [CrossRef] [PubMed]
  32. Tenke, C.E.; Kayser, J.; Alvarenga, J.E.; Abraham, K.S.; Warner, V.; Talati, A.; Weissman, M.M.; Bruder, G.E. Temporal stability of posterior EEG alpha over twelve years. Clin. Neurophysiol. 2018, 129, 1410–1417. [Google Scholar] [CrossRef] [PubMed]
  33. Bevilacqua, D.; Davidesco, I.; Wan, L.; Chaloner, K.; Rowland, J.; Ding, M.Z.; Poeppel, D.; Dikker, S. Brain-to-Brain Synchrony and Learning Outcomes Vary by Student-Teacher Dynamics: Evidence from a Real-world Classroom Electroencephalography Study. J. Cogn. Neurosci. 2019, 31, 401–411. [Google Scholar] [CrossRef] [PubMed]
  34. Wen, X.T.; Mo, J.; Ding, M.Z. Exploring resting-state functional connectivity with total interdependence. Neuroimage 2012, 60, 1587–1595. [Google Scholar] [CrossRef]
  35. Dmochowski, J.P.; Sajda, P.; Dias, J.; Parra, L.C. Correlated components of ongoing EEG point to emotionally laden attention—A possible marker of engagement? Front. Hum. Neurosci. 2012, 6, 112. [Google Scholar] [CrossRef]
  36. Shimada, S.; Matsumoto, M.; Takahashi, H.; Yomogida, Y.; Matsumoto, K. Coordinated activation of premotor and ventromedial prefrontal cortices during vicarious reward. Soc. Cogn. Affect. Neurosci. 2016, 11, 508–511. [Google Scholar] [CrossRef]
  37. Koide, T.; Shimada, S. Cheering enhances inter-brain synchronization between sensorimotor areas of player and observer. Jpn. Psychol. Res. 2018, 60, 265–275. [Google Scholar] [CrossRef]
  38. Gallotti, M.; Frith, C. Social cognition in the we-mode. Trends Cogn. Sci. 2013, 17, 160–165. [Google Scholar] [CrossRef]
  39. Funane, T.; Kiguchi, M.; Atsumori, H.; Sato, H.; Kubota, K.; Koizumi, H. Synchronous activity of two people’s prefrontal cortices during a cooperative task measured by simultaneous near-infrared spectroscopy. J. Biomed. Opt. 2011, 16, 077011. [Google Scholar] [CrossRef]
  40. Balconi, M.; Vanutelli, M.E. EEG hyperscanning and behavioral synchronization during a joint action. Neuropsychol. Trends 2018, 24, 23–47. [Google Scholar] [CrossRef]
  41. Dai, R.N.; Liu, R.; Liu, T.; Zhang, Z.; Xiao, X.; Sun, P.P.; Yu, X.T.; Wang, D.H.; Zhu, C.Z. Holistic cognitive and neural processes: A fNIRS-hyperscanning study on interpersonal sensorimotor synchronization. Soc. Cogn. Affect. Neurosci. 2018, 13, 1141–1154. [Google Scholar] [CrossRef] [PubMed]
  42. Osaka, N.; Minamoto, T.; Yaoi, K.; Azuma, M.; Shimada, Y.M.; Osaka, M. How Two Brains Make One Synchronized Mind in the Inferior Frontal Cortex: fNIRS-Based Hyperscanning During Cooperative Singing. Front. Psychol. 2015, 6, 1811. [Google Scholar] [CrossRef] [PubMed]
  43. Pan, Y.; Dikker, S.; Goldstein, P.; Zhu, Y.; Yang, C.; Hu, Y. Instructor-learner brain coupling discriminates between instructional approaches and predicts learning. NeuroImage 2020, 211, 116657. [Google Scholar] [CrossRef] [PubMed]
  44. Sun, B.; Xiao, W.; Feng, X.; Shao, Y.; Zhang, W.; Li, W. Behavioral and brain synchronization differences between expert and novice teachers when collaborating with students. Brain Cogn. 2020, 139, 105513. [Google Scholar] [CrossRef]
  45. Xue, H.; Lu, K.; Hao, N. Cooperation makes two less-creative individuals turn into a highly-creative pair. NeuroImage 2018, 172, 527–537. [Google Scholar] [CrossRef]
  46. Kringelbach, M.L.; Rolls, E.T. The functional neuroanatomy of the human orbitofrontal cortex: Evidence from neuroimaging and neuropsychology. Prog. Neurobiol. 2004, 72, 341–372. [Google Scholar] [CrossRef]
  47. Rushworth, M.F.S.; Noonan, M.P.; Boorman, E.D.; Walton, M.E.; Behrens, T.E. Frontal Cortex and Reward-Guided Learning and Decision-Making. Neuron 2011, 70, 1054–1069. [Google Scholar] [CrossRef]
  48. Kawasaki, M.; Kitajo, K.; Fukao, K.; Murai, T.; Yamaguchi, Y.; Funabiki, Y. Frontal theta activation during motor synchronization in autism. Sci. Rep. 2017, 7, 15034. [Google Scholar] [CrossRef]
  49. Gallagher, H.L.; Happe, F.; Brunswick, N.; Fletcher, P.C.; Frith, U.; Frith, C.D. Reading the mind in cartoons and stories: An fMRI study of ‘theory of mind’ in verbal and nonverbal tasks. Neuropsychologia 2000, 38, 11–21. [Google Scholar] [CrossRef]
  50. Rilling, J.K.; Sanfey, A.G.; Aronson, J.A.; Nystrom, L.E.; Cohen, J.D. The neural correlates of theory of mind within interpersonal interactions. NeuroImage 2004, 22, 1694–1703. [Google Scholar] [CrossRef]
  51. Carrington, S.J.; Bailey, A.J. Are There Theory of Mind Regions in the Brain? A Review of the Neuroimaging Literature. Hum. Brain Mapp. 2009, 30, 2313–2335. [Google Scholar] [CrossRef] [PubMed]
  52. McCabe, K.; Houser, D.; Ryan, L.; Smith, V.; Trouard, T. A functional imaging study of cooperation in two-person reciprocal exchange. Proc. Natl. Acad. Sci. USA 2001, 98, 11832–11835. [Google Scholar] [CrossRef] [PubMed]
  53. Gallagher, H.L.; Jack, A.I.; Roepstorff, A.; Frith, C.D. Imaging the intentional stance in a competitive game. NeuroImage 2002, 16, 814–821. [Google Scholar] [CrossRef] [PubMed]
  54. Gallese, V. The manifold nature of interpersonal relations: The quest for a common mechanism. Philos. Trans. R. Soc. B Biol. Sci. 2003, 358, 517–528. [Google Scholar] [CrossRef] [PubMed]
  55. Iacoboni, M.; Molnar-Szakacs, I.; Gallese, V.; Buccino, G.; Mazziotta, J.C.; Rizzolatti, G. Grasping the intentions of others with one’s own mirror neuron system. PLoS Biol. 2005, 3, 529–535. [Google Scholar] [CrossRef]
  56. Rizzolatti, G.; Cattaneo, L.; Fabbri-Destro, M.; Rozzi, S. Cortical mechanisms underlying the organization of goal-directed actions and mirror neuron-based action understanding. Physiol. Rev. 2014, 94, 655–706. [Google Scholar] [CrossRef]
  57. Sadiq, M.T.; Yu, X.; Yuan, Z.; Aziz, M.Z. Identification of motor and mental imagery EEG in two and multiclass subject-dependent tasks using successive decomposition index. Sensors 2020, 20, 5283. [Google Scholar] [CrossRef] [PubMed]
  58. Sadiq, M.T.; Yu, X.; Yuan, Z.; Fan, Z.; Rehman, A.U.; Li, G.; Xiao, G. Motor imagery EEG signals classification based on mode amplitude and frequency components using empirical wavelet transform. IEEE Access 2019, 7, 127678–127692. [Google Scholar] [CrossRef]
  59. Mayseless, N.; Hawthorne, G.; Reiss, A.L. Real-life creative problem solving in teams: fNIRS based hyperscanning study. Neuroimage 2019, 203, 116161. [Google Scholar] [CrossRef]
  60. Suh, H. Collaborative Learning Models and Support Technologies in the Future Classroom. Int. J. Educ. Media Technol. 2011, 5, 50–61. [Google Scholar]
Figure 1. (a) Appearance of the experimental apparatus. The participants wore the head-mounted MR device and an EEG cap simultaneously. (b) Cooperative condition. The participants sat down face-to-face with their partner and manipulated virtual building blocks to cooperatively create an object equivalent to the sample object. (c) Individual condition. The participants performed the same MR task as in the cooperative condition, but did so behind a virtual wall that hid the partner’s appearance. (d) The screen capture of the participant’s view in the cooperative task and (e) in the individual condition. Left and right panels show the perspectives of participants designated as 1P and 2P, respectively.
Figure 2. Layout of the experimental apparatus. As shown in this figure, two individual EEG signals were recorded simultaneously during the task.
Figure 3. Questionnaire results. Rating scores of all three items (cooperation, sharing space, and sharing objects) in the cooperative condition were significantly greater than 0, and the scores of all three items in the cooperative condition were significantly greater than those in the individual condition (**** p < 0.0001). Note that the values for the individual condition are very low (around −3).
Figure 4. TI values for each condition at each of the nine electrodes. TI values in the cooperative condition were significantly greater than those in the individual condition.
Figure 5. (a) Correlation between rating score (sharing objects) and mean TI value at Fz in the cooperative condition. (b) Correlation between rating score (sharing space) and mean TI value at Fz in the cooperative condition.
Figure 6. (a) Correlation between performance time and mean TI value at F5 in the cooperative condition. (b) Correlation between performance time and mean TI value at C3 in the cooperative condition.
Table 1. Mean TI values for each condition in all electrodes. * p < 0.05, uncorrected.

| Region | Electrode | Condition | TI Mean | TI SD | Perm. Test p | t (Coop. vs. Indiv.) | p (Coop. vs. Indiv.) | Effect Size (Cohen's d) |
|---|---|---|---|---|---|---|---|---|
| Frontal | Fz | coop | 0.211 | 0.008 | 0.014 * | 2.551 | 0.022 * | 1.035 |
| | | indiv | 0.201 | 0.010 | 0.072 | | | |
| | F5 | coop | 0.205 | 0.013 | 0.053 | 1.103 | 0.287 | 0.437 |
| | | indiv | 0.199 | 0.014 | 0.081 | | | |
| | F6 | coop | 0.202 | 0.009 | 0.060 | −1.224 | 0.240 | −0.455 |
| | | indiv | 0.206 | 0.012 | 0.106 | | | |
| Central | Cz | coop | 0.206 | 0.007 | 0.029 * | 1.248 | 0.231 | 0.444 |
| | | indiv | 0.202 | 0.011 | 0.116 | | | |
| | C3 | coop | 0.208 | 0.008 | 0.036 * | 1.299 | 0.214 | 0.506 |
| | | indiv | 0.203 | 0.012 | 0.096 | | | |
| | C4 | coop | 0.210 | 0.012 | 0.021 * | 2.248 | 0.040 * | 0.827 |
| | | indiv | 0.194 | 0.024 | 0.109 | | | |
| Posterior | POz | coop | 0.211 | 0.017 | 0.015 * | 1.855 | 0.083 | 0.717 |
| | | indiv | 0.201 | 0.012 | 0.085 | | | |
| | P5 | coop | 0.214 | 0.047 | 0.043 * | 1.005 | 0.331 | 0.374 |
| | | indiv | 0.200 | 0.023 | 0.124 | | | |
| | P6 | coop | 0.213 | 0.018 | 0.035 * | 1.841 | 0.085 | 0.814 |
| | | indiv | 0.195 | 0.024 | 0.082 | | | |
Table 2. Spearman's correlation results between TI value and behavioral data, reported as rs (p). * p < 0.05, uncorrected.

| Region | Electrode | Condition | TI–Cooperation | TI–Sharing Space | TI–Sharing Objects | TI–Task Performance |
|---|---|---|---|---|---|---|
| Frontal | Fz | Coop | 0.005 (0.980) | 0.308 (0.087) | 0.373 (0.036 *) | −0.225 (0.401) |
| | | Indiv | −0.254 (0.161) | −0.091 (0.621) | −0.133 (0.468) | 0.158 (0.558) |
| | F5 | Coop | 0.152 (0.407) | −0.137 (0.455) | 0.006 (0.972) | 0.523 (0.038 *) |
| | | Indiv | 0.055 (0.765) | 0.237 (0.191) | 0.288 (0.110) | 0.029 (0.914) |
| | F6 | Coop | 0.119 (0.516) | 0.032 (0.862) | 0.039 (0.831) | −0.228 (0.395) |
| | | Indiv | −0.062 (0.736) | −0.021 (0.909) | 0.008 (0.967) | −0.245 (0.360) |
| Central | Cz | Coop | −0.009 (0.960) | −0.039 (0.834) | 0.031 (0.866) | −0.293 (0.271) |
| | | Indiv | −0.023 (0.902) | 0.197 (0.280) | 0.130 (0.480) | −0.032 (0.906) |
| | C3 | Coop | 0.128 (0.484) | 0.240 (0.186) | 0.202 (0.268) | −0.423 (0.103) |
| | | Indiv | −0.048 (0.794) | 0.154 (0.400) | 0.035 (0.850) | −0.068 (0.803) |
| | C4 | Coop | −0.157 (0.392) | −0.222 (0.223) | 0.022 (0.904) | −0.049 (0.858) |
| | | Indiv | 0.003 (0.989) | 0.172 (0.346) | 0.114 (0.533) | 0.068 (0.802) |
| Posterior | POz | Coop | 0.052 (0.777) | −0.219 (0.229) | −0.288 (0.110) | −0.240 (0.370) |
| | | Indiv | −0.108 (0.558) | 0.192 (0.291) | 0.182 (0.319) | 0.123 (0.651) |
| | P5 | Coop | 0.170 (0.351) | −0.185 (0.312) | −0.178 (0.331) | 0.004 (0.987) |
| | | Indiv | −0.250 (0.167) | −0.019 (0.918) | −0.101 (0.584) | 0.073 (0.787) |
| | P6 | Coop | −0.060 (0.745) | −0.275 (0.128) | −0.271 (0.134) | 0.119 (0.660) |
| | | Indiv | −0.052 (0.779) | 0.195 (0.286) | 0.064 (0.728) | −0.014 (0.959) |
