Brief Report

Crosstalk in Facial EMG and Its Reduction Using ICA

Wataru Sato * and Takanori Kochiyama
1 Psychological Process Research Team, Guardian Robot Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
2 Field Science Education and Research Center, Kyoto University, Oiwake-cho, Kitashirakawa, Kyoto 606-8502, Japan
3 Brain Activity Imaging Center, ATR-Promotions, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
* Author to whom correspondence should be addressed.
Sensors 2023, 23(5), 2720; https://doi.org/10.3390/s23052720
Submission received: 31 January 2023 / Revised: 24 February 2023 / Accepted: 28 February 2023 / Published: 2 March 2023
(This article belongs to the Special Issue Electromyography (EMG) Signal Acquisition and Processing)

Abstract: There is ample evidence that electromyography (EMG) signals from the corrugator supercilii and zygomatic major muscles can provide valuable information for the assessment of subjective emotional experiences. Although previous research suggested that facial EMG data could be affected by crosstalk from adjacent facial muscles, it remains unproven whether such crosstalk occurs and, if so, how it can be reduced. To investigate this, we instructed participants (n = 29) to perform the facial actions of frowning, smiling, chewing, and speaking, in isolation and combination. During these actions, we measured facial EMG signals from the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles. We performed an independent component analysis (ICA) of the EMG data and removed crosstalk components. Speaking and chewing induced EMG activity in the masseter and suprahyoid muscles, as well as the zygomatic major muscle. The ICA-reconstructed EMG signals reduced the effects of speaking and chewing on zygomatic major activity, compared with the original signals. These data suggest that: (1) mouth actions could induce crosstalk in zygomatic major EMG signals, and (2) ICA can reduce the effects of such crosstalk.

1. Introduction

There is extensive evidence from psychophysiological studies that facial electromyography (EMG) can provide valuable information for the assessment of subjective emotional experience [1,2,3]. Specifically, EMG signals recorded from the corrugator supercilii muscle (related to frowning) and zygomatic major muscle (related to smiling) are negatively and positively associated with subjective valence ratings, respectively. For example, a previous study recorded continuous subjective ratings of valence and EMG from these muscles during the observation of emotional films [4]. The dynamic changes in subjective valence ratings were negatively and positively associated with corrugator supercilii and zygomatic major EMG activity, respectively. Although debates remain regarding universal relationships between emotional categorical states and facial muscle activation patterns [5,6,7], ample evidence suggests that emotional valence is reliably related to facial muscle activity [8]. Although some studies have shown that facial EMG responses occur even without the subjective experiences of emotional events [9,10], suggesting a possible dissociation between subjective and physiological emotional responses, ample evidence suggests that they are generally coordinated and constitute a unified emotional system [11]. Several recent studies have used facial EMG of the corrugator supercilii and zygomatic major muscles for emotion sensing during various active tasks, including conversation [12,13] and food consumption [14,15].
Some investigators have cautioned that the use of facial EMG signals as a proxy for emotional state may be affected by crosstalk [16,17]; that is, the EMG signal for a specific muscle may be contaminated by electrical activity in adjacent muscles through volume conduction. Indeed, crosstalk is a serious concern in all types of surface EMG data [18,19], but it is particularly problematic for facial EMG recordings because crosstalk in surface EMG is distance-dependent [20] and more than two dozen individual muscles lie in close proximity on each side of the face [21].
However, there remains uncertainty regarding whether and how crosstalk affects facial EMG during tasks related to emotion sensing. Only a few studies have empirically investigated this issue, and the results have been equivocal [22,23,24]. In one study, facial EMG signals were recorded from seven muscles in the lower half of the face, including the zygomatic major muscle [22]. Participants performed six tasks that required the production of discrete facial actions. Although a certain degree of crosstalk was present, it was within acceptable limits for most muscles. However, the study did not record corrugator EMG signals or statistically analyze zygomatic major EMG data; it also failed to assess facial actions in the context of emotional tasks. Another study recorded facial EMG from five muscles, including the corrugator supercilii and zygomatic major, while participants performed six simple and combined facial actions (e.g., smiling and chewing) [23]. The researchers concluded that the degree of crosstalk was acceptably small for all facial actions, except for a large effect of chewing on zygomatic major activity. However, the conclusion was not supported by statistical tests. Data from another study of 29 facial actions using 48 monopolar electrodes suggested that crosstalk did not fundamentally change facial EMG patterns, possibly because EMG amplitudes decrease according to the square of the distance from the source [24]. Based on these findings, we hypothesized that mouth actions, such as eating and talking, would produce a small but meaningful degree of crosstalk in EMG signals from the zygomatic major muscle, which is close to the mouth, but not in signals from the corrugator supercilii muscle.
In addition, uncertainty remains regarding whether data analysis techniques can reduce the effect of crosstalk on facial EMG. A potentially useful technique is independent component analysis (ICA), which performs blind source separation via unsupervised learning [25] and can decompose electrophysiological sensor signals into independent components (ICs) that correspond to source activities [26]. Several studies have shown that ICA of electroencephalography data can perform linear spatial filtering that reverses the effects of summing the volume-conducted cortical source activities in each recording channel [26]. Because surface EMG signals are the sum of the propagating action potentials produced by the recruited motor units (i.e., the basic building blocks of muscle, each consisting of one motor neuron and all the muscle fibers that it innervates) [27], ICA may effectively decompose motor unit source activities. Consistent with this notion, some studies have demonstrated that ICA effectively decomposed target muscle signals and crosstalk within surface EMG signals recorded from hand muscles [28,29]. Although ICA has also been applied to facial EMG signals, the results have been equivocal [30,31,32,33]. In a seminal work [30], EMG signals were recorded from hand and face muscles, including the zygomatic major, while participants performed hand gestures and uttered vowels, respectively. The EMG signals were then decomposed using ICA, and the gestures/vowels were classified through artificial neural network analysis of the ICs. The classification accuracies of the gestures and vowels were 100% and ~60%, respectively; the researchers concluded that ICA performed poorly with respect to the classification of facial EMG. Other studies used ICA to evaluate facial EMG signals recorded from a few muscles, including the zygomatic major [31,32]. The researchers reported that their artificial neural network analysis of ICs accurately classified smiling. Another study recorded facial EMG signals using an array of electrodes on the cheeks; the results showed that ICA of the EMG signals successfully distinguished ICs related to three different facial actions, including smiling [33]. Based on these data, although there is no direct evidence regarding the reduction of crosstalk in facial EMG, we hypothesized that ICA could be used to remove crosstalk from facial EMG signals related to emotion-sensing tasks.
To test these hypotheses, we measured facial EMG while participants performed facial actions. We measured EMG from the corrugator supercilii and zygomatic major muscles, as well as the masseter and suprahyoid muscles, which are involved in mouth actions such as chewing and talking. We instructed the participants to perform frowning, smiling, chewing, and speaking actions, in isolation and combination. Initially, we focused on EMG signals from emotion-sensing-related muscles during simple facial actions of non-target muscles (e.g., zygomatic major EMG activity during speaking) to identify the presence of crosstalk. We also evaluated whether crosstalk could emerge during combined facial actions. Because facial actions are generally difficult to perform consciously [22], we did not expect to observe fully isolated muscle contractions. Subsequently, we used ICA to determine whether crosstalk could be reduced.

2. Materials and Methods

2.1. Participants

Twenty-nine Japanese volunteers (16 women; mean ± standard deviation age, 22.6 ± 2.7 years) participated in this study. The sample size was determined through an a priori power analysis conducted using G*Power software ver. 3.1.9.2 [34]. We assumed paired t-tests (two-tailed) comparing the original and ICA-reconstructed signals, designed to detect the mean effect size reported for biological psychology studies (i.e., d = 0.8 [35]) with an α-level of 0.05 and power (1 − β) of 0.80. The power analysis showed that at least 15 participants were required. All participants had normal or corrected-to-normal visual acuity. After an explanation of the experimental procedure, all participants provided written informed consent. This study was approved by the Ethics Committee of the Unit for Advanced Studies of the Human Mind, Kyoto University (approval number: 30-P-6). All experiments adhered to the ethical policies of our institution and the Declaration of Helsinki.
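For reproducibility, the same calculation can be run with open-source tools. The sketch below uses statsmodels as a stand-in for G*Power (an assumption; the study itself used G*Power): because a paired t-test is equivalent to a one-sample t-test on the pairwise differences, TTestPower applies directly.

```python
# Sketch of the a priori power analysis, assuming statsmodels in place of G*Power.
import math

from statsmodels.stats.power import TTestPower

# A paired t-test reduces to a one-sample t-test on pairwise differences,
# so TTestPower solves for the required number of participants.
n_required = TTestPower().solve_power(
    effect_size=0.8,           # assumed mean effect size, d = 0.8 [35]
    alpha=0.05,                # two-tailed significance level
    power=0.80,                # target power (1 - beta)
    alternative="two-sided",
)
print(math.ceil(n_required))   # -> 15 participants
```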

2.2. Apparatus

The experiments were performed using Presentation software (Neurobehavioral Systems, Berkeley, CA, USA) and a Windows computer (HP Z200 SFF, Hewlett-Packard Japan, Tokyo, Japan). Slides showing the task instructions were presented on a 19-inch cathode ray tube monitor (HM903D-A; Iiyama, Tokyo, Japan) with a resolution of 1024 × 768 pixels.

2.3. Procedure

The experiments were carried out in a soundproof, electrically shielded chamber (Science Cabin, Takahashi Kensetsu, Tokyo, Japan). Following electrode attachment, participants were instructed to perform facial actions while their facial EMG signals were recorded. All facial actions (i.e., frowning, smiling, chewing, speaking [i.e., slowly uttering vowel sounds], frowning + chewing, smiling + chewing, frowning + speaking, and smiling + speaking) were listed on the screen. To facilitate understanding, the screen also showed pictures depicting the anatomy of the facial muscles, along with photographs of single actions. Participants were asked to practice all facial actions at their own pace. Then, after 8 practice trials, a total of 64 experimental trials were performed. The order of conditions was pseudorandomized.
For each trial, the action instruction (e.g., “Frown”) was presented for 3 s, followed by a small black cross (fixation point) for 3 s; a large red cross then appeared on the screen for 5 s. Participants were asked to perform the instructed facial action while the red cross was displayed.
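The exact randomization constraints of the 64-trial sequence were not detailed beyond pseudorandomization; the sketch below shows one plausible implementation (a block-wise shuffle of the eight actions, repeated eight times), with names chosen for illustration only.

```python
# Hypothetical sketch of a pseudorandomized trial sequence (8 actions x 8 blocks = 64 trials).
import random

CONDITIONS = [
    "frowning", "smiling", "chewing", "speaking",
    "frowning + chewing", "smiling + chewing",
    "frowning + speaking", "smiling + speaking",
]

def build_trials(n_blocks: int = 8, seed: int = 0) -> list[str]:
    """Shuffle each block so every action occurs once per block of eight trials."""
    rng = random.Random(seed)
    trials: list[str] = []
    for _ in range(n_blocks):
        block = CONDITIONS[:]
        rng.shuffle(block)
        trials.extend(block)
    return trials

trials = build_trials()
assert len(trials) == 64
```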

2.4. EMG Recording

EMG signals were recorded from the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles on the left side of the face (Figure 1). Sets of pre-gelled, self-adhesive 0.7-cm Ag/AgCl electrodes (1.5-cm interelectrode spacing; Prokidai, Sagara, Japan) were used. The electrodes were placed in accordance with guidelines and methods used in previous studies [24,36,37,38]. A ground electrode was placed on the middle of the forehead. The data were amplified, bandpass-filtered (20–400 Hz), and sampled at 1000 Hz using an EMG-025 amplifier (Harada Electronic Industry, Sapporo, Japan), the PowerLab 16/35 data acquisition system with a 16-bit A/D resolution and LabChart Pro 8.0 software (ADInstruments, Dunedin, New Zealand). A low-cut filter (20 Hz) was used to remove motion artifacts [39]. Participants’ behaviors were monitored by video, which was unobtrusively recorded using a digital web camera (HD1080P; Logicool, Tokyo, Japan).
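For readers reimplementing this stage offline, a zero-phase Butterworth band-pass filter approximates the 20–400 Hz analog filtering of the amplifier; the filter order below is an assumption, not a reported parameter.

```python
# Sketch of an offline 20-400 Hz band-pass stage (filter order assumed).
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # sampling rate (Hz)

def bandpass_emg(raw: np.ndarray, low: float = 20.0, high: float = 400.0) -> np.ndarray:
    """Apply a zero-phase 4th-order Butterworth band-pass filter channel-wise."""
    sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, raw, axis=-1)
```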

2.5. Data Analysis

Data preprocessing and ICA were performed using Psychophysiological Analysis Software 3.3 (Computational Neuroscience Laboratory of the Salk Institute, La Jolla, CA, USA) and in-house programs implemented in the MATLAB R2021a environment (MathWorks, Natick, MA, USA). Preprocessing was conducted in a manner identical to that used in a previous study [14]. EMG data for each trial were extracted from the baseline period (beginning 500 ms before stimulus onset) and the stimulus presentation period (i.e., the performance of facial actions; 5000 ms). The data were then rectified and downsampled to 10 Hz; this resampling was conducted because ICA assumes zero-lag synchronization [40], and because the conduction velocity of crosstalk in surface EMG is reportedly ~4.5 m/s [41], inter-channel propagation delays (a few milliseconds over facial distances) are negligible within 100-ms bins. EMG preprocessing using rectification and downsampling was recommended in guidelines [42] and has been used in several previous studies (e.g., [43]).
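A minimal sketch of this preprocessing step follows; bin-averaging is assumed as the downsampling method, and the epoch boundaries follow the 500-ms baseline and 5000-ms stimulus window described above.

```python
# Sketch of epoching, full-wave rectification, and 1000-Hz -> 10-Hz downsampling.
import numpy as np

FS, TARGET_FS = 1000, 10

def preprocess_trial(emg: np.ndarray, onset: int) -> np.ndarray:
    """emg: (n_channels, n_samples) raw trial data; onset: stimulus-onset sample index."""
    # Epoch: 500-ms pre-stimulus baseline plus 5000-ms stimulus presentation.
    epoch = emg[:, onset - 500 : onset + 5000]
    rectified = np.abs(epoch)                  # full-wave rectification
    factor = FS // TARGET_FS                   # 100 samples per 10-Hz bin
    n_bins = rectified.shape[1] // factor
    binned = rectified[:, : n_bins * factor].reshape(rectified.shape[0], n_bins, factor)
    return binned.mean(axis=2)                 # bin-average down to 10 Hz
```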
The processed EMG data for each trial and participant were then concatenated and subjected to ICA, which enables blind source separation of a linear mixture of sources in electrophysiological signals that are spatially fixed and temporally independent [26]. We used the infomax algorithm [25,44], which identifies the unmixing matrix by maximizing the joint entropy (i.e., maximizing the individual entropies while minimizing the mutual information) of the resulting unmixed signals. The artificial neural network was trained to find the unmixing weight matrix that maximized the joint entropy of the transformed channel data [26]. For all participants, ICA identified ICs that predominantly corresponded to single muscles. Then, to remove crosstalk associated with the masseter and suprahyoid muscles, EMG signals were reconstructed using the two ICs that exhibited the highest variance with respect to the corrugator supercilii and zygomatic major muscle EMG data. Figure 2 and Figure S1 present representative examples of original and ICA-reconstructed EMG signals. The original and ICA-reconstructed EMG signals were baseline-corrected with respect to the mean value over the pre-stimulus period and averaged across the stimulus presentation period (5000 ms).
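The sketch below illustrates this decomposition-and-reconstruction logic. It substitutes scikit-learn's FastICA for the infomax algorithm used in the study (a stand-in, since no packaged infomax implementation is assumed available) and selects the ICs contributing the most variance to the corrugator supercilii and zygomatic major channels before back-projection.

```python
# Sketch of IC selection and signal reconstruction (FastICA as a stand-in for infomax).
import numpy as np
from sklearn.decomposition import FastICA

def ica_reconstruct(emg: np.ndarray, target_channels: list[int], n_keep: int = 2) -> np.ndarray:
    """emg: (n_samples, n_channels) concatenated, preprocessed signals.
    Keeps the n_keep ICs accounting for the most variance in the target
    (corrugator/zygomatic) channels and back-projects only those ICs."""
    ica = FastICA(n_components=emg.shape[1], whiten="unit-variance", random_state=0)
    sources = ica.fit_transform(emg)     # (n_samples, n_components)
    mixing = ica.mixing_                 # (n_channels, n_components)

    # Variance each IC contributes to the target channels: a_ck^2 * var(s_k).
    contrib = (sources.var(axis=0)[None, :] * mixing[target_channels, :] ** 2).sum(axis=0)
    keep = np.argsort(contrib)[::-1][:n_keep]

    # Back-project only the selected ICs; crosstalk-dominated ICs are zeroed out.
    return sources[:, keep] @ mixing[:, keep].T + ica.mean_
```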
Statistical tests were conducted using JASP 0.14.1 software [45]. Original EMG signals were tested for differences from zero using one-sample t-tests (two-tailed). Subsequently, original and ICA-reconstructed EMG signals were compared using paired t-tests (two-tailed). All results were considered statistically significant at p < 0.05 after correction for multiple tests (i.e., eight per measure) using Holm’s sequential method [46]. Cohen’s d values [47] were reported as effect size measures. Our preliminary analysis indicated that several measures had a non-normal distribution (Shapiro-Wilk test, p < 0.05). Although t-tests can be considered asymptotically valid under general conditions, even when the normality assumption is rejected [48], we additionally conducted non-parametric Wilcoxon signed-rank tests to confirm the results of the t-tests.
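Outside JASP, the same pipeline can be mirrored with scipy and statsmodels (assumed equivalents of the JASP procedures), as sketched below for the paired comparisons.

```python
# Sketch of paired t-tests with Holm correction, Cohen's d, and a Wilcoxon check.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_signals(original: np.ndarray, reconstructed: np.ndarray):
    """original/reconstructed: (n_participants, 8) mean EMG values per condition."""
    t_vals, p_vals, d_vals, w_pvals = [], [], [], []
    for c in range(original.shape[1]):
        diff = original[:, c] - reconstructed[:, c]
        t, p = ttest_rel(original[:, c], reconstructed[:, c])
        t_vals.append(t)
        p_vals.append(p)
        d_vals.append(diff.mean() / diff.std(ddof=1))  # Cohen's d for paired data
        w_pvals.append(wilcoxon(diff).pvalue)          # non-parametric confirmation
    # Holm's sequential correction across the eight conditions per muscle.
    reject, p_holm, _, _ = multipletests(p_vals, alpha=0.05, method="holm")
    return t_vals, p_holm, d_vals, reject, w_pvals
```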

3. Results

3.1. Original EMG Signal Analysis

Figure 3 shows the mean ± standard error values for original and ICA-reconstructed EMG signals.
First, using one-sample t-tests, original EMG signals of the corrugator supercilii and zygomatic major muscles were evaluated to determine whether the isolated non-target facial actions also elicited muscle activation (i.e., crosstalk). The results (Table 1; Figure S2) showed that the corrugator supercilii was significantly activated only during frowning (t(28) = 5.42, p < 0.001, d = 1.01). The zygomatic major muscle was significantly activated during the target smiling action, as well as non-target frowning, chewing, and speaking (t(28) > 2.26, p < 0.032, d > 0.41).
Next, corrugator supercilii and zygomatic major EMG activities during combined facial actions were analyzed using the methods described above. The corrugator supercilii was significantly activated only during the frowning + speaking and frowning + chewing action combinations (t(28) > 3.73, p < 0.001, d > 0.68). The zygomatic major showed significant EMG activity during all combined actions, including the conditions only activating non-target muscles (i.e., the frowning + speaking and frowning + chewing actions) (t(28) > 2.40, p < 0.024, d > 0.44).
To evaluate the validity of the methods used to manipulate facial actions, EMG signals recorded from the masseter and suprahyoid muscles during both simple and combined actions were analyzed. The results revealed significant EMG activity in all conditions except frowning alone (t(28) > 3.90, p < 0.001, d > 0.72).
To confirm the robustness of these results, original EMG signals were analyzed using one-sample Wilcoxon signed-rank tests, which confirmed the significant results of one-sample t-tests (Table S1).

3.2. Comparison of Original and ICA-Reconstructed EMG Signals

To evaluate the ability of ICA to reduce crosstalk (i.e., masseter and suprahyoid muscle activities) from the EMG signals of the corrugator supercilii and zygomatic major muscles, original and ICA-reconstructed signals were compared using paired t-tests.
First, corrugator supercilii and zygomatic major EMG signals during simple action conditions were evaluated. The results (Table 2; Figure S3) showed no significant differences in corrugator supercilii activity (p > 0.05, Holm-corrected). For zygomatic major activity, significant differences were found during all actions, indicating that the ICA-reconstructed EMG signals were weaker than the original signals (t(28) > 2.61, p < 0.015, d > 0.48).
Next, EMG signals recorded during combined actions were evaluated using the methods described above. For corrugator supercilii activity, the ICA-reconstructed signals were significantly stronger than the original signals during frowning + chewing (t(28) = 2.97, p = 0.006, d = 0.55). For zygomatic major activity, significant differences were found during all combined actions, indicating weaker ICA-reconstructed signals than the original signals (t(28) > 3.26, p < 0.004, d > 0.60).
To confirm the validity of reducing masseter and suprahyoid muscle activity using ICA, EMG signals recorded from those muscles during both simple and combined actions were also analyzed. For both muscles, the original signals were stronger than the ICA-reconstructed signals under all conditions except frowning alone (t(28) > 2.71, p < 0.012, d > 0.50).
The original and ICA-reconstructed signals were compared using non-parametric paired Wilcoxon signed-rank tests. The results showed that all significant effects according to t-tests were also significant on Wilcoxon signed-rank tests, except for corrugator supercilii activity during frowning + chewing (Table S2).

4. Discussion

Our original EMG signal analysis confirmed that deliberate facial actions appropriately activated all four target muscles. Importantly, crosstalk arising from the contraction of other muscles during frowning, speaking, and chewing affected the zygomatic major EMG signals. These results are consistent with the previous suggestion that crosstalk is present among facial EMG data obtained in emotion-sensing paradigms [16]. We observed crosstalk for zygomatic major activity, but this was less evident for corrugator supercilii activity. This is consistent with a previous study regarding the effect of chewing [23], although that study did not include a thorough statistical analysis. Our results are also compatible with the suggestion that crosstalk in EMG is distance-dependent [20]. Extending prior research, the present study provides reliable evidence that crosstalk arising from non-emotional facial actions can affect facial EMG signals recorded during emotion sensing.
Furthermore, our results of the comparison of original and ICA-reconstructed EMG signals demonstrated that ICA reduced the effect of crosstalk arising from mouth actions on zygomatic major EMG signals. This finding corroborates the results of studies in which ICA effectively removed crosstalk from hand muscle EMG signals [28,29] and distinguished facial EMG signals [31,32,33]. However, in another study where ICA was used to evaluate facial EMG data, ICs exhibited poor vowel classification performance [30]. This equivocal result may be explained by methodological differences: the study with poor performance included only one participant, and thus may have lacked sufficient power to reveal the effects of ICA. Our method allowed us to detect a statistically significant effect of ICA on facial EMG signals. To the best of our knowledge, this study provides the first evidence that ICA can reduce crosstalk in facial EMG signals recorded for the purpose of emotion sensing.
The present results have several practical implications. Emotions have a major impact on happiness [49], behavior, and health [50]; however, self-report measures of these aspects are inherently subjective, prone to biases, and difficult to record continuously during tasks [51]. Therefore, emotion sensing on the basis of physiological signals (i.e., facial EMG) is advantageous [1,2,3]. Furthermore, as some recent studies have developed wearable devices that can record facial EMG signals [33,52,53], future studies will presumably use facial EMG to detect emotional states in naturalistic situations. However, our results suggest that non-emotional facial actions (e.g., speaking and eating) can affect emotion-related EMG. At the same time, they also indicate that ICA can reduce the effect of such crosstalk. We hope that our findings will enhance the sensitivity of future emotion-sensing analyses.
Some limitations of the present study should be acknowledged. First, we tested only two types of non-emotional facial actions; crosstalk arising from other actions requires investigation. We may have failed to detect crosstalk in corrugator supercilii EMG signals because we tested only mouth actions; facial actions in the upper face region, such as eyebrow-raising [54], should also be tested. Furthermore, facial EMG has applications beyond emotion sensing, including human–computer interfaces [55,56], oral processing and food texture analysis [57,58,59], speech and swallowing disorder assessment [60,61,62], and facial palsy assessment [63,64], each of which involves specific target muscles and potentially confounding facial muscle activities. Future research should explore additional facial actions and their effects on crosstalk. Second, our comparative analysis of the original and ICA-reconstructed corrugator supercilii EMG signals unexpectedly revealed higher values for the ICA-reconstructed data under one condition, suggesting that crosstalk removal can increase EMG signal amplitude under specific conditions. However, these results may reflect artifacts of the statistical manipulation; further studies are needed to evaluate such effects of ICA.

5. Conclusions

In this study, speaking and chewing induced EMG activity in the zygomatic major muscle. Compared with the original signals, the ICA-reconstructed zygomatic major EMG signals were less affected by speaking and chewing. These data indicate that mouth actions can induce crosstalk in zygomatic major EMG signals; importantly, ICA can reduce the effects of such crosstalk. However, because we tested only a limited number of facial actions and our results showed some unexpected patterns, further studies are warranted to investigate crosstalk in facial EMG and its ICA analysis.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23052720/s1, Figure S1. The independent component analysis process. Figure S2. Mean ± standard error of original and independent component analysis (ICA)-reconstructed electromyography data indicating significant results of one-sample t-tests (vs. zero; two-tailed) of original electromyography signals. Figure S3. Mean ± standard error of original and independent component analysis (ICA)-reconstructed electromyography data indicating significant results of paired t-tests (two-tailed) comparing original and ICA-reconstructed signals. Table S1. Results of one-sample Wilcoxon signed-rank tests (vs. zero; two-tailed) of original electromyography signals. Table S2. Results of paired Wilcoxon signed-rank tests (two-tailed) comparing original and independent component analysis-reconstructed signals. Table S3. Electromyography signals (μV) for each condition of each participant.

Author Contributions

Conceptualization and investigation, W.S. Analysis, W.S. and T.K. Writing, W.S. and T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by funds from Japan Science and Technology Agency-Mirai Program (JPMJMI20D7) and Japan Society for the Promotion of Science KAKENHI (19H01124; 22K03205).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Unit for Advanced Studies of the Human Mind, Kyoto University (30-P-6).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

The data supporting the findings of this study are available within the Supplementary Materials.

Acknowledgments

The authors thank Masaru Usami for his technical support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Cacioppo, J.T.; Berntson, G.G.; Klein, D.J. What is an emotion? The role of somatovisceral afference, with special emphasis on somatovisceral “illusions”. In Emotion and Social Behavior; Clark, M.S., Ed.; Sage Publications: Thousand Oaks, CA, USA, 1992; pp. 63–98.
2. Lang, P.J.; Bradley, M.M.; Cuthbert, B.N. Emotion, motivation, and anxiety: Brain mechanisms and psychophysiology. Biol. Psychiatry 1998, 44, 1248–1263.
3. Wingenbach, T.S.H. Facial EMG—Investigating the interplay of facial muscles and emotions. In Social and Affective Neuroscience of Everyday Human Interaction: From Theory to Methodology; Boggio, P.S., Wingenbach, T.S.H., da Silveira Coelho, M.L., Comfort, W.E., Marques, L.M., Alves, M.V.C., Eds.; Springer: Cham, Switzerland, 2023; pp. 283–300.
4. Sato, W.; Kochiyama, T.; Yoshikawa, S. Physiological correlates of subjective emotional valence and arousal dynamics while viewing films. Biol. Psychol. 2020, 157, 107974.
5. Fernández-Dols, J.M.; Crivelli, C. Emotion and expression: Naturalistic studies. Emot. Rev. 2013, 5, 24–29.
6. Reisenzein, R.; Studtmann, M.; Horstmann, G. Coherence between emotion and facial expression: Evidence from laboratory experiments. Emot. Rev. 2013, 5, 16–23.
7. Durán, J.I.; Reisenzein, R.; Fernández-Dols, J.M. Coherence between emotions and facial expressions: A research synthesis. In The Science of Facial Expression; Fernández-Dols, J.M., Russell, J.A., Eds.; Oxford University Press: New York, NY, USA, 2017; pp. 107–129.
8. Russell, J.A. Core affect and the psychological construction of emotion. Psychol. Rev. 2003, 110, 145–172.
9. Dimberg, U.; Thunberg, M.; Elmehed, K. Unconscious facial reactions to emotional facial expressions. Psychol. Sci. 2000, 11, 86–89.
10. Bornemann, B.; Winkielman, P.; van der Meer, E. Can you feel what you do not see? Using internal feedback to detect briefly presented emotional stimuli. Int. J. Psychophysiol. 2012, 85, 116–124.
11. Lang, P.J. Emotion’s response patterns: The brain and the autonomic nervous system. Emot. Rev. 2014, 6, 93–99.
12. Riehle, M.; Kempkensteffen, J.; Lincoln, T.M. Quantifying facial expression synchrony in face-to-face dyadic interactions: Temporal dynamics of simultaneously recorded facial EMG signals. J. Nonverbal Behav. 2017, 41, 85–102.
13. Nishimura, S.; Kimata, D.; Sato, W.; Kanbara, M.; Fujimoto, Y.; Kato, H.; Hagita, N. Positive emotion amplification by representing excitement scene with TV chat agents. Sensors 2020, 20, 7330.
14. Sato, W.; Minemoto, K.; Ikegami, A.; Nakauma, M.; Funami, T.; Fushiki, T. Facial EMG correlates of subjective hedonic responses during food consumption. Nutrients 2020, 12, 1174.
15. Sato, W.; Ikegami, A.; Ishihara, S.; Nakauma, M.; Funami, T.; Yoshikawa, S.; Fushiki, T. Brow and masticatory muscle activity senses subjective hedonic experiences during food consumption. Nutrients 2021, 13, 4216.
16. van Boxtel, A. Facial EMG as a tool for inferring affective states. Proc. Meas. Behav. 2010, 2010, 104–108.
17. Huang, C.N.; Chen, C.H.; Chung, H.Y. The review of applications and measurements in facial electromyography. J. Med. Biol. Eng. 2005, 25, 15–20.
18. Hug, F. Can muscle coordination be precisely studied by surface electromyography? J. Electromyogr. Kinesiol. 2011, 21, 1–12.
19. Mesin, L. Crosstalk in surface electromyogram: Literature review and some insights. Phys. Eng. Sci. Med. 2020, 43, 481–492.
20. Winter, D.A.; Fuglevand, A.J.; Archer, S.E. Crosstalk in surface electromyography: Theoretical and practical estimates. J. Electromyogr. Kinesiol. 1994, 4, 15–26.
21. Westbrook, K.E.; Nessel, T.A.; Varacallo, M. Anatomy, head and neck, facial muscles. In StatPearls; StatPearls Publishing: Tampa, FL, USA, 2022.
22. Lapatki, B.G.; Stegeman, D.F.; Jonas, I.E. A surface EMG electrode for the simultaneous observation of multiple facial muscles. J. Neurosci. Methods 2003, 123, 117–128.
23. Rantanen, V.; Ilves, M.; Vehkaoja, A.; Kontunen, A.; Lylykangas, L.; Makela, E.; Rautiainen, M.; Surakka, V.; Lekkala, J. A survey on the feasibility of surface EMG in facial pacing. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1688–1691.
24. Schumann, N.P.; Bongers, K.; Guntinas-Lichius, O.; Scholle, H.C. Facial muscle activation patterns in healthy male humans: A multi-channel surface EMG study. J. Neurosci. Methods 2010, 187, 120–128.
25. Bell, A.J.; Sejnowski, T.J. An information-maximization approach to blind separation and blind deconvolution. Neural Comput. 1995, 7, 1129–1159.
26. Makeig, S.; Debener, S.; Onton, J.; Delorme, A. Mining event-related brain dynamics. Trends Cogn. Sci. 2004, 8, 204–210.
27. Campanini, I.; Merlo, A.; Disselhorst-Klug, C.; Mesin, L.; Muceli, S.; Merletti, R. Fundamental concepts of bipolar and high-density surface EMG understanding and teaching for clinical, occupational, and sport applications: Origin, detection, and main errors. Sensors 2022, 22, 4150.
28. Farina, D.; Févotte, C.; Doncarli, C.; Merletti, R. Blind separation of linear instantaneous mixtures of nonstationary surface myoelectric signals. IEEE Trans. Biomed. Eng. 2004, 51, 1555–1567.
29. Kilner, J.M.; Baker, S.N.; Lemon, R.N. A novel algorithm to remove electrical cross-talk between surface EMG recordings and its application to the measurement of short-term synchronisation in humans. J. Physiol. 2002, 538, 919–930.
30. Naik, G.R.; Kumar, D.K. Applications and limitations of independent component analysis for facial and hand gesture surface electromyograms. J. Proc. R. Soc. New South Wales 2007, 140, 47–54.
31. Gruebler, A.; Suzuki, K. Measurement of distal EMG signals using a wearable device for reading facial expressions. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2010, 2010, 4594–4597.
32. Perusquía-Hernández, M.; Hirokawa, M.; Suzuki, K. A wearable device for fast and subtle spontaneous smile recognition. IEEE Trans. Affect. Comput. 2017, 8, 522–533.
33. Inzelberg, L.; Rand, D.; Steinberg, S.; David-Pur, M.; Hanein, Y. A wearable high-resolution facial electromyography for long term recordings in freely behaving humans. Sci. Rep. 2018, 8, 2058.
34. Faul, F.; Erdfelder, E.; Lang, A.G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191.
35. Schafer, T.; Schwarz, M.A. The meaningfulness of effect sizes in psychological research: Differences between sub-disciplines and the impact of potential biases. Front. Psychol. 2019, 10, 813.
36. Fridlund, A.J.; Cacioppo, J.T. Guidelines for human electromyographic research. Psychophysiology 1986, 23, 567–589.
37. Ishihara, S.; Nakauma, M.; Funami, T.; Tanaka, T.; Nishinari, K.; Kohyama, K. Electromyography during oral processing in relation to mechanical and sensory properties of soft gels. J. Texture Stud. 2011, 42, 254–267.
38. Kohyama, K.; Gao, Z.; Ishihara, S.; Funami, T.; Nishinari, K. Electromyography analysis of natural mastication behavior using varying mouthful quantities of two types of gels. Physiol. Behav. 2016, 161, 174–182.
39. Van Boxtel, A. Optimal signal bandwidth for the recording of surface EMG activity of facial, jaw, oral, and neck muscles. Psychophysiology 2001, 38, 22–34.
40. Whitmer, D.; Worrell, G.; Stead, M.; Lee, I.K.; Makeig, S. Utility of independent component analysis for interpretation of intracranial EEG. Front. Hum. Neurosci. 2010, 4, 184.
41. Farina, D.; Merletti, R.; Indino, B.; Nazzaro, M.; Pozzo, M. Surface EMG crosstalk between knee extensor muscles: Experimental and model results. Muscle Nerve 2002, 26, 681–695.
42. Altimar, L.R.; Dantas, J.L.; Bigliassi, M.; Kanthack, T.F.D.; de Moraes, A.C.; Abrão, T. Influence of different strategies of treatment muscle contraction and relaxation phases on EMG signal processing and analysis during cyclic exercise. In Computational Intelligence in Electromyography Analysis—A Perspective on Current Applications and Future Challenges; Naik, G.R., Ed.; InTech: London, UK, 2012; pp. 97–116.
43. Klein Breteler, M.D.; Simura, K.J.; Flanders, M. Timing of muscle activation in a hand movement sequence. Cereb. Cortex 2007, 17, 803–815.
44. Bell, A.J.; Sejnowski, T.J. Learning the higher-order structure of a natural sound. Network 1996, 7, 261–270.
45. JASP Team. JASP (Version 0.14.1) [Computer Software]. 2020.
46. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979, 6, 65–70.
47. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Erlbaum: Hillsdale, NJ, USA, 1988.
48. Fay, M.P.; Proschan, M.A. Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Stat. Surv. 2010, 4, 1–39.
49. Lyubomirsky, S. Why are some people happier than others? The role of cognitive and motivational processes in well-being. Am. Psychol. 2001, 56, 239–249.
50. Meiselman, H.L. A review of the current state of emotion research in product development. Food Res. Int. 2015, 76, 192–199.
51. Li, S.; Scott, N.; Walters, G. Current and potential methods for measuring emotion in tourism experiences: A review. Curr. Issues Tour. 2015, 18, 805–827.
52. Sato, W.; Murata, K.; Uraoka, Y.; Shibata, K.; Yoshikawa, S.; Furuta, M. Emotional valence sensing using a wearable facial EMG device. Sci. Rep. 2021, 11, 5757.
53. Gjoreski, M.; Kiprijanovska, I.; Stankoski, S.; Mavridou, I.; Broulidakis, M.J.; Gjoreski, H.; Nduka, C. Facial EMG sensing for monitoring affect using a wearable device. Sci. Rep. 2022, 12, 16876.
54. Grammer, K.; Schiefenhovel, W.; Schleidt, M.; Lorenz, B.; Eibl-Eibesfeldt, I. Patterns on the face: The eyebrow flash in crosscultural comparison. Ethology 1988, 77, 279–299.
55. Vojtech, J.M.; Cler, G.J.; Stepp, C.E. Prediction of optimal facial electromyographic sensor configurations for human-machine interface control. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1566–1576.
56. Zhu, B.; Zhang, D.; Chu, Y.; Zhao, X.; Zhang, L.; Zhao, L. Face-computer interface (FCI): Intent recognition based on facial electromyography (fEMG) and online human-computer interface with audiovisual feedback. Front. Neurorobot. 2021, 15, 692562.
57. Kemsley, E.K.; Defernez, M.; Sprunt, J.C.; Smith, A.C. Electromyographic responses to prescribed mastication. J. Electromyogr. Kinesiol. 2003, 13, 197–207.
58. Funami, T.; Ishihara, S.; Kohyama, K. Use of electromyography in measuring food texture. In Food Texture Design and Optimization; Dar, Y., Light, J.M., Eds.; Wiley-Blackwell: Oxford, UK, 2014; pp. 283–307.
59. Kazemeini, S.M.; Campos, D.P.; Rosenthal, A.J. Muscle activity during oral processing of sticky-cohesive foods. Physiol. Behav. 2021, 242, 113580.
60. Perlman, A.L.; Palmer, P.M.; McCulloch, T.M.; Vandaele, D.J. Electromyographic activity from human laryngeal, pharyngeal, and submental muscles during swallowing. J. Appl. Physiol. 1999, 86, 1663–1669.
61. Vaiman, M. Surface electromyography as a screening method for evaluation of dysphagia and odynophagia. Head Face Med. 2009, 5, 9.
62. Balata, P.M.M.; da Silva, H.J.; de Moraes, K.J.R.; de Araújo Pernambuco, L.; de Moraes, S.R.A. Use of surface electromyography in phonation studies: An integrative review. Int. Arch. Otorhinolaryngol. 2013, 17, 329–339.
63. Ryu, H.M.; Lee, S.J.; Park, E.J.; Kim, S.G.; Kim, K.H.; Choi, Y.M.; Kim, J.U.; Song, B.Y.; Kim, C.H.; Yoon, H.M.; et al. Study on the validity of surface electromyography as assessment tools for facial nerve palsy. J. Pharmacopunct. 2018, 21, 258–267.
64. Guntinas-Lichius, O.; Volk, G.F.; Olsen, K.D.; Makitie, A.A.; Silver, C.E.; Zafereo, M.E.; Rinaldo, A.; Randolph, G.W.; Simo, R.; Shaha, A.R.; et al. Facial nerve electrodiagnostics for patients with facial palsy: A clinical practice guideline. Eur. Arch. Otorhinolaryngol. 2020, 277, 1855–1874.
Figure 1. Schematic illustrations of electrode placement for the recording of electromyography signals from the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles.
Figure 2. Examples of original and independent component analysis (ICA)-reconstructed electromyography signals from the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles. Data for three trials (a total of 16.5 s; each trial contained 0.5-s pre- and 5-s post-stimulus periods) of a representative participant are shown. To remove crosstalk arising from the masseter and suprahyoid muscle activities, the signals were reconstructed using the independent components that exhibited the highest variance with respect to the corrugator supercilii and zygomatic major muscle activities.
Figure 3. Mean ± standard error of original and independent component analysis (ICA)-reconstructed electromyography data recorded from the corrugator supercilii, zygomatic major, masseter, and suprahyoid muscles during frowning, smiling, chewing, speaking actions, and combinations of these actions.
Table 1. Results of one-sample t-tests (vs. zero; two-tailed) of original electromyography signals.

| Muscle | Statistic | Frowning | Smiling | Speaking | Chewing | Frowning + Speaking | Smiling + Speaking | Frowning + Chewing | Smiling + Chewing |
|---|---|---|---|---|---|---|---|---|---|
| Corrugator | t | 5.42 | 0.13 | 0.24 | 0.08 | 4.36 | 0.95 | 3.74 | 0.87 |
| | p | **<0.001** | 0.895 | 0.809 | 0.939 | **<0.001** | 0.349 | **<0.001** | 0.390 |
| | d | 1.01 | 0.03 | 0.05 | 0.01 | 0.81 | 0.18 | 0.69 | 0.16 |
| Zygomatic | t | 2.27 | 5.88 | 4.47 | 4.23 | 3.86 | 5.88 | 2.41 | 5.40 |
| | p | **0.031** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **0.023** | **<0.001** |
| | d | 0.42 | 1.09 | 0.83 | 0.79 | 0.72 | 1.09 | 0.45 | 1.00 |
| Masseter | t | 1.52 | 4.74 | 8.45 | 5.98 | 6.75 | 6.84 | 4.89 | 7.28 |
| | p | 0.140 | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** |
| | d | 0.28 | 0.88 | 1.57 | 1.11 | 1.25 | 1.27 | 0.91 | 1.35 |
| Suprahyoid | t | 0.59 | 3.91 | 10.44 | 5.69 | 5.07 | 7.30 | 4.91 | 7.49 |
| | p | 0.558 | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** |
| | d | 0.11 | 0.73 | 1.94 | 1.06 | 0.94 | 1.36 | 0.91 | 1.39 |

d, Cohen’s d statistic [47]. Degrees of freedom were 28 for all tests. Significant results (p < 0.05) corrected using Holm’s method are shown in bold font.
Table 2. Results of paired t-tests (two-tailed) comparing original and independent component analysis-reconstructed signals.

| Muscle | Statistic | Frowning | Smiling | Speaking | Chewing | Frowning + Speaking | Smiling + Speaking | Frowning + Chewing | Smiling + Chewing |
|---|---|---|---|---|---|---|---|---|---|
| Corrugator | t | 2.39 | 1.97 | 0.52 | 1.26 | 2.28 | 0.40 | 2.97 | 1.00 |
| | p | 0.024 | 0.059 | 0.606 | 0.218 | 0.031 | 0.691 | **0.006** | 0.325 |
| | d | 0.44 | 0.37 | 0.10 | 0.23 | 0.42 | 0.08 | 0.55 | 0.19 |
| Zygomatic | t | 2.62 | 3.10 | 3.78 | 6.15 | 4.58 | 3.27 | 6.01 | 3.57 |
| | p | **0.014** | **0.004** | **<0.001** | **<0.001** | **<0.001** | **0.003** | **<0.001** | **0.001** |
| | d | 0.49 | 0.58 | 0.70 | 1.14 | 0.85 | 0.61 | 1.12 | 0.66 |
| Masseter | t | 1.40 | 2.95 | 6.63 | 5.87 | 6.06 | 5.03 | 4.77 | 4.95 |
| | p | 0.173 | **0.006** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** |
| | d | 0.26 | 0.55 | 1.23 | 1.09 | 1.13 | 0.93 | 0.89 | 0.92 |
| Suprahyoid | t | 0.77 | 2.72 | 9.40 | 5.71 | 4.72 | 6.50 | 4.69 | 6.58 |
| | p | 0.450 | **0.011** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** | **<0.001** |
| | d | 0.14 | 0.51 | 1.75 | 1.06 | 0.88 | 1.21 | 0.87 | 1.22 |

d, Cohen’s d statistic [47]. Degrees of freedom were 28 for all tests. Significant results (p < 0.05) corrected using Holm’s method are shown in bold font.


