Associations between Cognitive Concepts of Self and Emotional Facial Expressions with an Emphasis on Emotion Awareness

Recognising our own and others’ emotions is vital for healthy social development. The aim of the current study was to determine how emotions related to the self or to another influence behavioural expressions of emotion. Facial electromyography (EMG) was used to record spontaneous facial muscle activity in nineteen participants while they passively viewed negative, positive and neutral emotional pictures during three blocks of referential instructions. Each participant imagined themselves, another person or no one experiencing the emotional scenario, with the priming words “You”, “Him” or “None” presented before each picture for the respective block of instructions. Emotion awareness (EA) was also recorded using the TAS-20 alexithymia questionnaire. Corrugator supercilii (cs) muscle activity increased significantly between 500 and 1000 ms post stimulus onset during negative and neutral picture presentations, regardless of ownership. Independent of emotion, cs activity was greatest during the “no one” task and lowest during the “self” task, from the first 250 ms interval through to 1000 ms. Interestingly, the degree of cs activation during referential tasks was further modulated by EA. Low EA corresponded to significantly stronger cs activity overall compared with high EA, and this effect was even more pronounced during the “no one” task. The findings suggest that cognitive processes related to the perception of emotion ownership can influence spontaneous facial muscle activity, but that a greater degree of integration between higher cognitive and lower affective levels of information may interrupt or suppress these behavioural expressions of emotion.


Introduction
The ability to recognise and attend to other people's emotions is essential for healthy social integration and general well-being. Deficits in this ability are observed in virtually all forms of psychopathology including autism and schizophrenia, to name only a few [1]. Recognising other people's emotions also requires the more fundamental ability to discriminate the self from others in order to identify and contrast one's own emotional experiences from another's experiences. Interestingly, deficits in self and other discriminatory processing are also highly symptomatic of social-emotional disorders [1].
In line with these observations, research has shown that brain activity associated with self and other discrimination is modulated by emotional context, suggesting that the two forms of information processing in the brain are intertwined [2][3][4][5][6][7]. However, little research has been carried out to determine how self and other discriminatory neural processes relate to emotion recognition.
Research on the embodiment of emotion follows the idea that the recognition of emotion in another person occurs via the simulation of that person's observed emotional cues, leading to the representation of that person's emotional state in the observer through physical experience [8,9]. Facial mimicry is a robust aspect of emotion embodiment involving the simulation of an observed emotion through congruent facial expressions. Studies have repeatedly shown that when exposed to faces of conspecifics, individuals spontaneously and rapidly mimic their facial expression [10][11][12][13]. The effect has been demonstrated in the absence of conscious perception of the conspecific's face [14], and there is evidence to suggest that these spontaneous facial reactions play a facilitative role in emotion recognition, whereby preventing mimicry in certain facial muscle groups leads to a substantial drop in the ability to recognise another's emotions from their facial expression [15].
Embodiment theory states that spontaneous facial reactions modulate an individual's perception of another person's emotional experience through motor and sensory neural areas dedicated to facial perception, which overlap to synchronously map others' facial expressions onto their own [16,17]. This process consequently leads to a shared perception of others' emotional experiences. In support of this theory, brain imaging studies have shown that recognition of emotion in others and recognising one's own experience of emotion activate common sets of neural structures [7,16,18,19]. These findings further show that neural structures involved in discriminating between self and others are also involved in the perception of our own and others' emotional experiences [20].
However, studies using electroencephalography to determine how the emotional self is differentiated from others have shown that self- and other-referenced emotions can also be discriminated by the brain in the absence of an observed other, and even without explicit instructions to focus effortfully on another person's emotional experience [2,4,6]. For example, in Herbert's studies, merely presenting a personal pronoun or article combined with an emotive noun such as "my fear", "his fear" or "the fear" evoked significantly different neural activity as a function of who owned the emotion. These more automatically driven discriminations were localised to neural activity within the amygdala, insula and anterior cingulate brain regions [5], all of which have previously been implicated in effortful discriminations of self- and other-referenced emotions.
To this extent, we were interested in whether spontaneous facial reactions would still differ in the expected direction between self- and other-referenced emotional experiences in the absence of an observed other person's emotional expression (i.e., no face to mimic). Indeed, spontaneous facial reactions can be elicited in the absence of an observed person altogether: studies have shown that, just as observed emotional faces activate congruent spontaneous facial reactions, so too do emotional pictures, sounds and words [10][11][12][13].
In the current study, we used an adapted version of the word paradigm used in Herbert et al.'s series of studies of self and other referential processing (e.g., [4,5]). Herbert presented emotional words paired with a pronoun (e.g., my fear, his happiness). Here we presented pictures of emotional scenes preceded by the word "You", "Him" or "None" (None to mean "no one") to denote who the emotional scenario was to be referenced to. The reason for using emotional pictures rather than emotional words was to better simulate evolutionarily relevant emotional context. Examples of the emotional scene stimuli included snakes, tornados and aimed guns (unpleasant); puppies, tropical beaches and appetising food (pleasant); and envelopes, power cords and furniture (neutral). The experiment was a block design with three conditions. In one condition the word "You" was presented before each picture, and participants imagined that the emotional scenario depicted in each picture was happening to them (self-reference condition). In a second condition, the word "Him" was shown before each picture, and participants were instructed to imagine that the emotional scenario was happening to someone else (a person described to the participant; other-reference condition). In a third condition, the word "None" was shown before each picture with no reference instructions (no-reference condition).
Previous research in which participants observed another person experiencing an emotion [21], or observed another directing an emotional expression towards or away from the self [22], has shown that spontaneous facial reactions decrease as the self-relevance of the emotional event decreases. We therefore hypothesised that if spontaneous facial reactions are elicited in the absence of an observed other, they should be weaker in the other-referenced condition than in the self-referenced condition. By the same logic, emotional scenarios in the no-reference condition would be least relevant to the self; hence, we expected the weakest spontaneous facial reactions for these pictures.
Closely related to the ability to comprehend owned and others' emotions is the degree to which one is capable of understanding emotional experiences. Emotion awareness (EA) is a personality trait reflecting the conscious experience of emotion including the ability to cognitively and semantically process, identify and express emotions, the awareness of bodily sensations, and the ability to infer the emotions of others [23]. Although EA has been implicated in emotion recognition ability [24][25][26], it is not known whether EA modulates spontaneous facial reactions; therefore, we introduced EA as an exploratory independent variable in our analyses.

Participants
Participants were 22 undergraduate university students sourced from the University of Newcastle volunteer database, with those enrolled in first year psychology courses receiving course credit for participation. Ethics approval was obtained for the study from the University of Newcastle Human Research Ethics Committee. Data from three participants were excluded from the final analysis due to poor signal-to-noise ratio in physiological recordings. The remaining 19 participants (11 females) were right-handed, non-smoking native speakers of English aged between 17 and 29 years (M = 20.89, SD = 3.40). They had no known history of neuropathology or emotional disorder, and were not taking any central nervous system targeted medication such as antidepressants or stimulants at the time of recruitment.

Stimuli
The stimuli consisted of 90 unpleasant, 90 neutral and 90 pleasant pictures from the Geneva Affective Picture Database [27] and the International Affective Picture System [28]. Pictures were chosen for the experiment based on matching levels of pre-evaluated valence and arousal ratings collected in a pilot rating study in which an independent group of 42 participants (23 females) rated a larger pool of stimuli [13]. Examples of unpleasant stimuli were disfigured bodies, snakes, spiders and violence. Pleasant stimuli were nature scenes, appetising food and erotic scenes depicting a male and female embrace. Neutral stimuli ranged from ordinary household objects to plain nature scenes and low-arousing pictures of snakes.

Table 1 lists the mean valence and arousal ratings for the experimental stimuli.

Table 1. Pre-evaluated valence and arousal ratings for the stimulus collection. Values for valence reflect reported pleasantness on a scale of 1-9, where 9 represents very pleasant and 1 very unpleasant. Values for arousal reflect reported arousal on a scale of 1-9, where 9 represents very arousing and 1 very calm.

An analysis of variance for the factor "valence" showed that mean ratings for unpleasant, neutral and pleasant pictures differed significantly from one another (F(2, 267) = 2278.70, p < .001, η² = .95): unpleasant pictures were rated as most unpleasant, pleasant pictures as most pleasant, and neutral picture ratings sat midway. For the factor "arousal", the analysis of variance showed no significant main effect of valence (p = .116); hence, arousal levels did not differ across the three emotion stimulus categories according to self-report ratings. After stimuli were randomly allocated to three reference groups, analyses of variance (ANOVAs) for the factors "valence" and "arousal" were carried out with the additional factor "reference condition". No significant effect of reference condition was found (all p-values > .05); hence, valence and arousal ratings also did not differ across reference conditions.
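The stimulus-matching check above rests on a standard one-way ANOVA. As a minimal sketch (plain Python, with illustrative ratings rather than the study's actual data), the F statistic is the ratio of between-group to within-group mean squares:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups,
    each a list of per-picture mean ratings."""
    k = len(groups)                          # number of groups
    n_total = sum(len(g) for g in groups)    # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (df = n_total - k)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Illustrative 1-9 valence ratings (not the study's data)
unpleasant = [2.1, 2.4, 1.9, 2.2]
neutral    = [5.0, 5.2, 4.9, 5.1]
pleasant   = [7.8, 7.5, 7.9, 7.6]
f_valence = one_way_anova_f([unpleasant, neutral, pleasant])
```

Well-separated group means with tight within-group spread yield a very large F, which is the pattern the valence manipulation check reports; a near-zero F, as for the arousal ratings, indicates matched groups.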

Procedure and Tasks
The three reference groups corresponded to three blocked and counterbalanced tasks. For the self-reference task, participants were instructed to imagine they were experiencing the scenario in each presented picture. For the other-reference task, they were to imagine that a person named Tom was experiencing the scenario. This fictional person was introduced to the participant at the beginning of the task, and was simply described as being a male university student named Tom. In the no-reference task, participants were instructed to simply view the picture. The emotional stimuli were equally split into three reference groups via random allocation. The stimuli within each reference group did not differ across participants; however, the order of stimulus presentations within groups always changed.
Given that pronouns have consistently been shown to evoke distinct referential differences [5,6,29,30,31], pronouns were used to assign ownership of emotions. Each trial consisted of a 5000 ms stimulus presentation preceded by the word "You", "Him" or "None" for the corresponding task block to continually reinforce the referential task instructions (Figure 1). After each stimulus presentation, a modified version of the Self-Assessment Manikin [32] was used to assess participants' emotional valence and arousal reactions to the scenarios.
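The allocation and counterbalancing scheme described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' presentation code: the stimulus IDs, seeds and function names are illustrative, but the logic mirrors the design (a fixed allocation of pictures to reference groups shared by all participants, with the presentation order within each block reshuffled per participant):

```python
import random

# Hypothetical stimulus IDs standing in for the 270 pictures (90 per valence)
stimuli = {valence: [f"{valence}_{i:02d}" for i in range(90)]
           for valence in ("unpleasant", "neutral", "pleasant")}

PRIMES = {"self": "You", "other": "Him", "none": "None"}

def allocate_reference_groups(stimuli, seed=0):
    """Randomly split each valence category evenly across the three
    reference conditions; a fixed seed keeps the allocation identical
    for every participant, as in the study."""
    rng = random.Random(seed)
    groups = {ref: [] for ref in PRIMES}
    for pictures in stimuli.values():
        shuffled = pictures[:]
        rng.shuffle(shuffled)
        third = len(shuffled) // 3
        for i, ref in enumerate(PRIMES):
            groups[ref].extend(shuffled[i * third:(i + 1) * third])
    return groups

def trial_order(groups, reference, participant_seed):
    """Per-participant presentation order within a block: the same
    pictures, freshly shuffled, each preceded by the block's prime word."""
    rng = random.Random(participant_seed)
    order = groups[reference][:]
    rng.shuffle(order)
    return [(PRIMES[reference], pic) for pic in order]
```

Two participants thus see the same 90 pictures in a given block, only in different orders.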

Facial EMG Measures
The corrugator supercilii (cs) muscles, which furrow the eyebrows, were used to index muscle potential changes corresponding to unpleasant stimuli, and the zygomaticus major (zm) muscles, which lift the cheeks and lips, were used to index muscle potential changes corresponding to pleasant stimuli. EMG of the corrugator and zygomaticus muscles was recorded using a Nexus 10 wireless recording device connected via Bluetooth to a PC laptop, with output measurements recorded in Biotrace software (Mind-media.net). Further procedural details can be found in [13]. For EMG, a single 1000 ms epoch time-locked to the onset of each emotional picture stimulus was extracted from the continuous recordings and divided into four 250 ms time intervals by averaging across data points. Trial-by-trial variance was also examined using the procedure outlined in [13]. For each time interval, the grand mean of each of the nine conditions was subjected to further statistical analysis to assess for significant differences.
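The epoching step can be sketched in a few lines. Assuming the continuous recording has already been segmented into a 1000 ms post-stimulus epoch, averaging across equal-width sample windows yields the four 250 ms interval means; the sampling rate and helper name here are illustrative, not taken from the study:

```python
from statistics import mean

def bin_epoch(samples, n_bins=4):
    """Average an EMG epoch into n_bins equal time intervals.
    `samples` covers the 1000 ms post-stimulus epoch; with four bins,
    each returned mean summarises one 250 ms interval."""
    if len(samples) % n_bins:
        raise ValueError("epoch length must divide evenly into bins")
    width = len(samples) // n_bins
    return [mean(samples[i * width:(i + 1) * width]) for i in range(n_bins)]

# Hypothetical 1000 ms epoch sampled at 32 Hz (32 samples), rising activity
epoch = [0.1 * i for i in range(32)]
intervals = bin_epoch(epoch)   # four 250 ms interval means
```

These per-interval means, averaged over trials within each of the nine valence × reference conditions, are what enter the ANOVAs described below.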

Self-Report Ratings and Questionnaires
When each rating scale appeared on the monitor, participants used their right hand to press a number between 1 and 9 on a keyboard corresponding to their degree of felt emotion. For the valence scale, 1 = "very unhappy" and 9 = "very happy"; for the arousal scale, 1 = "very calm" and 9 = "very excited". Participants were given explicit standardised instructions for the meaning and use of the rating scales, and completed six practice trials to become familiar with the rating method and the trial sequence of events before commencing the tasks. During the experiment, participants sat in a reclining chair under dim lighting in front of a display monitor positioned to allow 9.9° × 8.5° of visual angle for stimulus presentations, and were given a short break midway through and at the end of each task block.
Before starting the experiment, participants completed a standard demographics questionnaire, the Toronto Alexithymia Scale (TAS-20) [33,34] and the Big Five Inventory (BFI). The TAS-20 consists of 20 short-phrase self-report items with easily accessible vocabulary, and produces a total score between 20 (high EA) and 100 (low EA). TAS-20 scores ranged from 36 to 74 (M = 46.21, SD = 9.83). One outlier greater than 3 SDs from the mean was identified, but not excluded from the data set in order to preserve statistical power in the physiological data analyses. Participants were assigned to the high or low EA group based on the median score of 43: a score below 43 equated to high EA (n = 11), while a score of 43 or greater equated to low EA (n = 8).
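The median-split grouping can be expressed as a small helper. The scores below are illustrative, not participant data; the cutoff logic follows the rule reported here (scores below the sample median count as high EA, i.e., low alexithymia; scores at or above it count as low EA):

```python
from statistics import median

def median_split(scores):
    """Split TAS-20 total scores into EA groups at the sample median.
    Lower TAS-20 scores indicate higher emotion awareness."""
    cut = median(scores)
    return {"high_EA": [s for s in scores if s < cut],
            "low_EA":  [s for s in scores if s >= cut]}

# Illustrative scores only
split = median_split([36, 40, 42, 43, 50, 60])
```

Note that with an even sample size the median falls between two scores, so the split is exact; with ties at the median, group sizes can be unequal, as in the 11/8 split reported here.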


Data Analysis
The experiment included two within-subjects factors, "valence" (3 levels: unpleasant, neutral, pleasant) × "reference" (3 levels: self, other, no one), and one between-subjects factor, "emotion awareness" (low, high), which were assessed using repeated-measures ANOVAs for each dependent measure of interest. For the corrugator and zygomatic data analyses, the additional within-subjects factor "time interval" (4 levels) was employed to assess the effect of time over the first second of stimulus viewing; the intervals corresponded to 0-250, 250-500, 500-750 and 750-1000 ms post stimulus onset. For the behavioural data, a single mean valence and arousal rating was calculated from all trials under each experimental condition, and means for each rating scale were submitted to separate 3 (Reference: self, other, no one) × 3 (Emotion: unpleasant, neutral, pleasant) × 2 (EA: high, low) repeated-measures ANOVAs. For all analyses, repeated contrasts (for time interval comparisons) and simple contrasts were used to determine the direction of significant main effects (p < .05), Greenhouse-Geisser corrections were applied for sphericity violations, and Pearson's correlation coefficient (r) was used to measure effect sizes. All paired-samples and independent-samples t-tests used an alpha criterion corrected for family-wise error by dividing the alpha level by the number of tests carried out (Bonferroni correction).
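Two of the statistical conventions above are easy to make concrete: the family-wise correction (alpha divided by the number of tests) and a conversion from a t statistic to Pearson's r as an effect size. The paper does not state which t-to-r formula was used; the sketch below assumes the common conversion r = sqrt(t² / (t² + df)):

```python
import math

def bonferroni_alpha(alpha, n_tests):
    """Family-wise corrected criterion: alpha divided by the number
    of tests (e.g., .05 / 3 ≈ .017 for three reference conditions)."""
    return alpha / n_tests

def r_from_t(t, df):
    """One common t-to-r effect-size conversion (assumed, not stated
    in the paper): r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t * t / (t * t + df))

corrected = bonferroni_alpha(0.05, 3)   # the .017 criterion used below
effect = r_from_t(-2.88, 17)            # r for the reported t(17) = -2.88
```

The sign of t drops out of the conversion, so r here describes the magnitude of the group difference only.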

Facial EMG Measures and Skin Conductance
For corrugator activity, reference condition and EA produced significant main effects and interacted significantly during all four time intervals. The corresponding statistical values are listed in Table 2.
Instead of interpreting contrasts for individual time intervals, the ANOVA was carried out again post hoc with the additional factor "time interval" (four levels) to determine whether corrugator activity changed significantly over time as a function of the other factors. Time interval did not interact significantly with reference condition, EA or their combined influence (all p-values > .05); therefore, mean corrugator activity was collapsed across time intervals to generate a single grand average for the 1000 ms viewing period post stimulus onset for each factor (Figure 2). Simple contrasts showed that participants with low EA produced significantly greater corrugator activity than those with high EA when pictures were assigned no reference. This difference was significantly greater than when pictures were referenced to the self (F(1, 17) = 4.44, p = .05, η² = .21) and to another person (F(1, 17) = 6.31, p = .022, η² = .27). Three independent-samples t-tests comparing the EA groups, one for each reference condition, further confirmed that corrugator activity in the low EA group was significantly greater only when pictures were assigned no reference (t(17) = −2.88, p = .01; family-wise corrected alpha of .017).
As expected, the main effect of emotion category became strongly significant at the third and fourth time windows (see Table 2 for statistical values). Contrasts for each time interval showed no difference in corrugator activity between unpleasant and neutral pictures (p-values > .05), but both unpleasant and neutral pictures produced significantly greater activity than pleasant pictures (intervals 3 and 4). Figure 3 shows mean corrugator activity across time. Repeated contrasts across the second, third and fourth intervals showed that from 500 to 750 ms and from 750 to 1000 ms, the difference in activity between pleasant and unpleasant pictures increased significantly (p = .002 and .011, respectively), as did the difference between pleasant and neutral pictures (p = .001 and .03, respectively). However, no significant difference between unpleasant and neutral pictures emerged over time (p > .05). No other significant effects were found for corrugator activity, and no significant effects were found for zygomatic activity (all p-values > .05).

Discussion
The aim of the current experiment was to determine whether cognitive concepts of self, other and no one lead to different patterns of spontaneous facial reactions. The study was inspired by research showing that spontaneous facial reactions decrease as self-relevance of an emotional event decreases, specifically when observing another person experiencing an emotion [21,22]. The study aimed to further this line of research by investigating whether the mere activation of cognitive concepts of self and other would evoke responses similar to when we physically observe another person's emotional responses. Based on research showing that self-relevant emotional processing leads to greater mimicry responses [21,22], it was hypothesised that viewing self-referenced emotional scenarios would elicit stronger spontaneous facial reactions than when viewing emotional scenarios referenced to another or to no one.
These hypotheses were not supported by the data. Firstly, neither self-referenced nor other-referenced emotion processing led to enhanced spontaneous mimicry. In terms of corrugator activity, spontaneous mimicry was greatest in the no-reference condition, in which emotional stimuli were viewed with no explicit reference to the self or another. Contrary to our predictions, pleasant stimuli did not activate spontaneous zygomatic activity. Fujimura [35] likewise found that high- but not low-arousal pleasant stimuli modulated zm activity, while both high- and low-arousal unpleasant stimuli modulated cs activity. Given that the stimuli in the current experiment were low in arousal, our results match those of Fujimura [35] in that low-arousing pleasant stimuli do not evoke spontaneous zygomatic activity. Collectively, the data suggest that the emotion recognition mechanisms thought to underlie spontaneous mimicry are not directly associated with higher-order cognitive concepts of the self or emotion ownership.
Most of the research to date has attempted to understand the nature of facial mimicry in emotion perception by examining individuals' facial reactions to emotionally expressive faces or gestures. However, these paradigms do not distinguish between mimicry as a function of owned emotional experiences and mimicry as a function of understanding another person's emotional state. Most research investigating how spontaneous facial reactions relate to perceiving one's own and others' emotions has presented individuals with images or videos of another person, with instructions to focus effortfully on the emotional experience of the observed person or on their own emotional experience (e.g., [21,36]). One study [21] investigated whether mimicry was induced when adopting the perspective of another person by showing participants video clips of patients experiencing painful sonar treatment via headphones, instructing them either to imagine how the patient felt or to imagine how they themselves would feel if they were the patient. In the same study [21], participants were specifically instructed to think about how the other person felt during the other-referenced condition and, in a second round of stimulus viewing, to rate the other person's emotion using a self-report scale. In the current study, participants were also required to use an emotion rating scale, but in all three conditions they rated their own emotional reaction to each scenario, immediately after viewing. Hence, the conditions differed according to who the participant imagined to be experiencing the scenario (the self, another person or no one), but not according to whose emotional reaction they were effortfully focusing on.
Spontaneous facial reactions have also been investigated while participants watched video clips of people displaying angry bodily gestures either directed towards the camera (i.e., towards the self) or at a 45-degree angle relative to the camera (i.e., away from the self), the latter inducing the idea that the angry person was facing another person [22]. The faces in the video clips were blurred to separate direct facial mimicry from mimicry related to the bodily gestures. The authors found that mimicry reactions were greater during self-directed than other-directed clips. However, it remains unclear whether facial mimicry is functionally involved in understanding another person's emotional state: the participants' facial reactions could have reflected processing of the gestures as a primary source of emotion in both conditions (i.e., could this person be a threat to me?), with less threat elicited in the other-directed condition, rather than as a secondary source (i.e., what emotion is this person conveying?). These studies have reported stronger mimicry during self-referenced than other-referenced emotional appraisals, leading to the theory that mimicry responses increase as emotional events become more self-relevant [21].
In terms of the function of spontaneous mimicry, the results of the current experiment support the theory that the immediacy or salience of the emotion-inducing environment mediates whether spontaneous mimicry is activated. By contrast, the re-activation of cognitive concepts of emotion, such as in relation to the self or another person, does not seem to induce such an effect in the physical absence of a stimulus, suggesting a dissociation between physical motor behaviour and cognitive concepts of emotion that does not support theories of embodied cognition. In fact, the current results showed the greatest mimicry responses when participants were instructed not to think about themselves or anyone else while viewing the emotional stimuli. This not only supports the interpretation that the immediacy of the emotional stimulus is an important factor in inducing spontaneous mimicry, but also suggests that introducing interruptive processing and thereby increasing cognitive load, such as when participants are additionally asked to imagine themselves or someone else in the emotional scenario, interrupts spontaneous mimicry.
Corrugator activity was enhanced not only for unpleasant stimuli but also for neutral stimuli, an effect reported in other studies as well [13]. Other authors [37] likewise excluded neutral faces from statistical analyses because participants reported negative feelings during the exposures. Recently, we found that neutral stimuli, particularly neutral facial expressions, evoked enhanced corrugator activity. That neutral stimuli evoke negative rather than positive patterns of facial activity is thought to reflect neutral faces being perceived in a negative context rather than as an emotionally void canvas. That neutral responses often resemble unpleasant ones could further mean that "neutral" equates to "boring". Considering that young people typically make up the sample populations of these psychology experiments, this emerging pattern in emotion research may be indicative of the changing modern young brain: a low level of stimulation might indeed be perceived as unpleasant, particularly by young people who heavily use modern information technology (see [38]).
At the low or pathological end of emotion awareness is alexithymia ("no words for feelings"), a condition often associated with psychiatric and neurological disorders. A recent meta-analysis of 15 neuroimaging studies of alexithymic patients [39] showed distinctly less activity in the supplementary motor and premotor areas of the brain compared with healthy controls. These areas are thought to be involved in spontaneous facial reactions [40], but the precise mechanism by which motor cortical activity translates into facial mimicry is not known. Hence, we were interested in whether emotion awareness ability may predict mimicry responses. In the current study, corrugator recordings revealed that emotion awareness does interact with spontaneous mimicry: participants who scored low in emotion awareness produced greater spontaneous corrugator responses than those who scored high. These findings align with past research on the physiological correlates of emotion awareness, which has shown that low emotion awareness corresponds to greater externalised emotional responses, while high emotion awareness corresponds to greater internalised emotional activity, including greater cognitive emotional processing and prefrontal activity [41][42][43]. Along this line, if those with high emotion awareness are better at empathising, and empathy recruits higher-order cognitive processing, then given our finding that cognitive activity can suppress or interrupt mimicry, it is logical that people with greater emotion awareness display relatively lower spontaneous activation levels.

Conclusions
Our findings support the idea that cognitive processes related to the perception of emotion ownership can influence spontaneous facial muscle activity, and that greater input of cognitive information may suppress behavioural expressions of emotion. Overall, the findings support current theories that brain processes involved in self/other discrimination are intertwined with emotion processing and behavioural expressions of emotion. Crucially, multiple neuropathological conditions are characterised by disordered concepts of self and deficits in social-emotional function, so there is now strong justification for exploring the nature of these behavioural and brain processes in clinical populations. A well-defined understanding of interactions between self-referential processing and emotion would help improve the diagnosis and treatment of a disordered self, as occurs in mental disorders such as schizophrenia, as well as the treatment of depression, where emotion plays a dominant role. In the long run, it may also become possible to investigate the different stages of self-referential processing in relation to specific clinical symptoms, and their neurophysiological correlates might become markers of self-disorders (e.g., [44]).
Author Contributions: P.W. was involved in designing and implementing the experiment in addition to manuscript writing and overall mentoring. A.M. collected all data and was involved in design and implementation of the experiment in addition to writing the manuscript. All authors have read and agreed to the published version of the manuscript.