Article

Features and Extra-Striate Body Area Representations of Diagnostic Body Parts in Anger and Fear Perception

1 Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
2 Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, China
3 Faculty of Psychology, Southwest University, Chongqing 400715, China
4 School of Psychology, South China Normal University, Guangzhou 510631, China
5 Faculty of Psychology, Beijing Normal University, Beijing 100875, China
* Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(4), 466; https://doi.org/10.3390/brainsci12040466
Submission received: 27 February 2022 / Revised: 19 March 2022 / Accepted: 29 March 2022 / Published: 31 March 2022
(This article belongs to the Section Neuropsychology)

Abstract

Social species perceive emotion by extracting diagnostic features of body movements. Although extensive studies have contributed to knowledge on how the entire body is used as context for decoding bodily expression, we know little about whether specific body parts (e.g., arms and legs) transmit enough information for body understanding. In this study, we performed behavioral experiments using the Bubbles paradigm on static body images to directly explore the diagnostic body parts for categorizing angry, fearful and neutral expressions. Results showed that subjects recognized emotional bodies through diagnostic features from the torso with arms. We then conducted a follow-up functional magnetic resonance imaging (fMRI) experiment on body part images to examine whether diagnostic parts modulated body-related brain activity and the corresponding neural representations. We found greater activation of the extra-striate body area (EBA) in response to both anger and fear than to neutral for the torso with arms. Representational similarity analysis showed that neural patterns of the EBA distinguished different bodily expressions. Furthermore, the torso with arms and the whole body had higher similarities in EBA representations relative to the legs and whole body, and to the head and whole body. Taken together, these results indicate that diagnostic body parts (i.e., torso with arms) can communicate bodily expression in a detectable manner.

1. Introduction

Humans are able to detect and identify bodily expressions, which is crucial for social interaction. Bodily expressions are major sources of social and emotional information beyond facial expressions [1,2,3], especially when communicating at a considerable distance [4]. For example, a happy body is associated with large and broad motion patterns [5,6]. However, it remains largely unclear which body parts are diagnostic, i.e., indispensable for decoding bodily expressions, and what neural mechanisms accompany this process. Diagnostic body parts are critical to effective and efficient social communication. This study used the Bubbles paradigm to uncover diagnostic body parts and functional magnetic resonance imaging (fMRI) to investigate how they are decoded in body-selective cortical regions.
Previous studies have demonstrated that body parts have distinct biological motion patterns that modulate bodily expression perception. As long ago as the 19th century, Darwin [7] proposed that head motion conveys a large amount of emotional information through tilt direction: an upward orientation conveys joy, and a downward orientation conveys shame or sadness. This conjecture was later supported by evidence showing that people perceive emotions corresponding to specific head motions [8,9,10]. Even simple leg and foot gestures influence the evaluation of people's attitudes [11]. These studies show that separate body parts play important roles in forming bodily expression perception.
Perception of bodily expressions depends on how motion patterns are processed by a balance between encoding and decoding [10,12,13]. Encoding refers to expressing one's inner emotion with nonverbal information. For example, slower and fewer overall body movements can express an unhappy or sad state in dance [14], though such movements are usually interpreted as neutral expressions in daily life. Decoding refers to extracting the nonverbal cues to interpret bodily expression. When an observer has difficulty identifying an expression from an ambiguous face, they automatically extract information from other body parts [15,16]. Studies employing whole-body stimuli have found that decoding bodily expression usually involves decoding body parts such as arm or hand gestures [17,18]. Specifically, anger is often recognized through arm movement [19], clenched or shaking fists and hitting gestures [20]. Fear is usually accompanied by defensive reactions [10,21], such as placing the arms over the chest [16,20] or covering the face with the hands [18]. Neutral expressions involve more expanded torso and limb shapes, expressing relaxed states [22]. However, these studies did not clearly explain (1) whether those body parts were all diagnostic for decoding bodily expression, and (2) whether other body parts, such as the torso, which carries information about leaning forward or backward, were also helpful for identification.
Moreover, emotions drive bodily responses [23]: emotions evoke specific patterns of autonomic nervous system activity [24], which in turn lead to discrete 'feeling fingerprints' across the human body [25,26]. In these works, the torso and arms were significantly associated with bodily sensations for many types of emotion, such as anger and fear. This evidence from bodily feelings of emotion may help explain how people perceive bodily expressions. We therefore speculated that perceiving anger or fear is probably achieved by assessing whether the torso, arms and hands are contracted or expanded. We combined the torso, arms and hands into an integrated part, and hypothesized that the diagnostic features for perceiving fear and anger come from this combined diagnostic body part: the torso with arms.
We adopted the Bubbles paradigm to identify diagnostic features for decoding static bodily expressions. The paradigm applies reverse correlation to model mental representations by correlating the information projected onto the retina with the observer's corresponding responses [13,27]. For instance, if an observer classified specific sparsely sampled faces as happy, researchers computed the significantly overlapping regions of those faces as diagnostic features. Because visual input can be decomposed into five orthogonal spatial frequency bands responsible for transmitting different scales of information, researchers typically compute the diagnostic information independently for each frequency band. The Bubbles paradigm has been used for identifying human faces [27] and scenes [28]. For faces, many experiments have been designed to identify gender [27,29], identity [29], expression [27,29] and age [30]. In recent years, Jack et al. extended reverse correlation to dynamic facial expressions [31,32] and revealed cultural differences between the East and the West in the features used for perceiving dynamic faces [33]. To our knowledge, few studies have employed the Bubbles paradigm for bodily expression perception in human participants. By analogy with facial expressions, we applied this method to examine how bodily expressions are decoded through diagnostic body parts.
Our next purpose was to reveal the influence of diagnostic body parts on the perception of bodily expressions. The distinction between body form and body action should be noted, because both are intimately linked to body parts and bodily expressions. Evidence from brain lesions demonstrates that body form and body action can be doubly dissociated, indicating that the two functions rely on different brain areas or structures [34]. Body action recognition can be associated with motor-related cortices, such as the somatosensory cortex, supplementary motor area and ventral premotor cortex [35,36]. Perception of body form involves independent neural structures, such as the fusiform body area (FBA), extra-striate body area (EBA) and posterior superior temporal sulcus [37]. In primates, neurons in the temporal cortex drive the responses to human body parts [38]. In humans, the EBA and FBA, both located near the occipito-temporal regions [39], have been identified as preferential in processing body-related stimuli [40,41]. In particular, the FBA is sensitive to whole-body stimuli, while the EBA is sensitive to images of neutral bodies and body parts [37,42]. The EBA also contains neural populations that overlap for form and action perception [34]. In other words, the EBA may drive the perception of body motion expressed by body parts. However, how the EBA represents body parts (i.e., arms and hands) that are diagnostic for the perception of bodily expressions remains elusive. We therefore predicted that diagnostic body parts would engage the EBA: (1) the representations of the torso with arms would be more correlated with the representations of whole bodies than those of other body parts; (2) the representations of angry body parts would be more correlated with the representations of fearful body parts than with those of neutral body parts. This was motivated by the idea that the EBA possibly drives the abstraction of diagnostic features from the torso with arms. Combined with representational similarity analysis (RSA) [43], we conducted a follow-up fMRI experiment to reveal the influence of diagnostic body parts on the perception of bodily expression.
In addition, limbic structures are sensitive to arousing negative visual stimuli. The caudate, thalamus and anterior insula are core hubs of the salience network and are involved in experiencing negative emotion [44,45]. Within the limbic system, the amygdala plays an important role in the rapid, automatic perception of body emotions such as fear [46]. There are fine-grained distinctions in the related subcortical structures between anger and fear perception [23,45,47,48]: the neural processing of fear is characterized by fast and automatic processing, while anger perception depends on consciousness to some extent. In the present study, we expected subcortical activations to be involved in the perception of emotional body parts.
Together, this study comprises two experiments to reveal diagnostic body features in bodily expression categorization and to uncover the corresponding neural patterns. In the behavioral experiment, participants viewed a variety of incomplete body images and indicated the emotion they perceived (i.e., anger, fear or neutral). A separate group of participants was tested in the fMRI experiment, where they categorized the expressions of whole bodies and of different body parts. Our findings may provide converging evidence on how the brain decodes diagnostic features.

2. Materials and Methods

2.1. Participants

In the Bubbles experiment, 33 college students (18 females, mean age ± s.d. = 22.39 ± 2.83 years) were recruited from Liaoning Normal University via advertisement. In the fMRI experiment, 24 students (14 females, mean age ± s.d. = 21.48 ± 2.01 years) were recruited. All of the participants had normal or corrected-to-normal vision and were right-handed. The experiments were approved by the local ethics committee. Written informed consent was provided before participation. No neurological or psychiatric history was reported. After the experiments, participants were financially compensated.

2.2. Stimuli Presentation

In the Bubbles experiment, 12 body images of 4 different actors (2 female) expressing 3 bodily expressions (anger, fear and neutral) were selected from the BEAST stimulus set [49]. The mean (s.d.) vertical and horizontal extents of the bodies were 293 (6.7) and 98 (14.9) pixels. The bodies were therefore embedded in a grayscale background (grayscale = 128) of an identical size of 310 × 245 pixels, presented at 6.90° × 5.47° of visual angle on a computer screen with a resolution of 1024 × 768 pixels and a refresh rate of 60 Hz. To sample body features at different spatial scales, we decomposed each image into five nonoverlapping spatial frequency (SF) bandwidths of one octave each, with cut-offs at 123 (22.4), 61 (11.2), 31 (5.6), 15 (2.8), 8 (1.4) and 4 (0.7) cycles/image (cycles/degree of visual angle), respectively. The decomposition was performed with the matlabPyrTools toolbox for MATLAB (https://github.com/gregfreeman/matlabPyrTools; accessed on 10 April 2012). The size of each octave was defined based on the image size. For example, the highest SF band expressed one cycle per 2 × 2 pixels; this SF layer was represented by 310/2 (in the vertical direction) and 245/2 (in the horizontal direction) cycles/image (cpi) [12,50]. Following previous studies [51,52], we used the horizontal value of 123 cpi to define the high-SF octave. This process of peeling off each SF layer was applied recursively. Each band was independently sampled with a number of randomly positioned Gaussian bubbles (windows) to generate a bubbles mask (each shown in the second row of Figure 1). The bubble sizes were adjusted at each scale to reveal 3 cycles per bubble (standard deviations of the bubbles were 0.13, 0.27, 0.54, 1.07 and 2.14 degrees of visual angle, from fine to coarse scales). The sampled information was then recombined to produce a sparse stimulus (the 'Final Stimulus' in Figure 1). The number of bubbles was further adjusted during the experimental procedure (see Section 2.3.1).
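For illustration, the following Python sketch approximates this stimulus construction; it is not the authors' MATLAB pipeline (which used a matlabPyrTools Laplacian pyramid). A difference-of-Gaussians decomposition stands in for the band-pass filters, and the bubble counts, bubble widths and image content are placeholder values.

```python
# Illustrative sketch of Bubbles stimulus generation (assumptions: difference-of-
# Gaussians bands instead of a Laplacian pyramid; placeholder bubble counts/sizes).
import numpy as np
from scipy.ndimage import gaussian_filter

def sf_bands(img, n_bands=5):
    """Split an image into n_bands band-pass layers (fine to coarse) plus a low-pass residual."""
    bands, current = [], img.astype(float)
    sigma = 1.0
    for _ in range(n_bands):
        low = gaussian_filter(current, sigma)
        bands.append(current - low)        # band-pass layer = detail removed by this blur
        current, sigma = low, sigma * 2    # roughly one octave per step
    return bands, current                  # 'current' is the remaining low-pass residual

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly placed Gaussian windows, clipped to [0, 1]."""
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        point = np.zeros(shape)
        point[rng.integers(0, shape[0]), rng.integers(0, shape[1])] = 1.0
        mask += gaussian_filter(point, sigma) * (2 * np.pi * sigma ** 2)  # peak ~1
    return np.clip(mask, 0, 1)

def bubbles_stimulus(img, n_bubbles_per_band, rng):
    bands, residual = sf_bands(img)
    sigmas = [2, 4, 8, 16, 32]             # bubble widths grow with scale (placeholders)
    revealed = residual.copy()             # lowest-frequency content kept visible here
    for band, n, s in zip(bands, n_bubbles_per_band, sigmas):
        revealed += band * bubble_mask(img.shape, n, s, rng)
    return revealed

rng = np.random.default_rng(0)
fake_body = rng.random((310, 245)) * 255   # placeholder for a 310 x 245 BEAST image
stimulus = bubbles_stimulus(fake_body, [40, 20, 10, 5, 3], rng)
```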
In the fMRI experiment, 36 body action pictures of 12 actors (6 female) expressing 3 expressions (anger, fear, neutral) were also selected from BEAST, including the 12 body images used in the Bubbles experiment. Each whole-body image was split into 3 types of body part (torso with arms, head and legs), and each part was centered in an independent image. Whole bodies were also included as a fourth stimulus type, resulting in 144 body images as experimental stimuli in total. Participants viewed the pictures, subtending 6.86° × 5.43° of visual angle, through a mirror mounted on the head coil (mirror size: 3.12 cm × 2.34 cm).

2.3. Experimental Tasks

2.3.1. Bubbles Task

Participants categorized each Bubbles stimulus (the 'Final Stimulus' in Figure 1) as one of three emotional types by pressing the '1', '2' or '3' key on the upper left of the computer keyboard. The experiment comprised 4 sessions and 1920 trials in total (160 presentations of each body). Participants viewed each stimulus freely until they pressed one of the buttons; that is, reaction time was not limited, and the next trial began after the participant had chosen an emotional type. Participants took a short break after every 160 trials. Most participants finished all the trials within 1 h. During the experiment, we collected two datasets of bubbles masks (each shown in the second row of Figure 1): those that (1) led to a correct response and those that (2) led to an error response. These datasets were used for the analyses after the experiment. The sampling density (i.e., the total number of Gaussian bubbles in each bubbles mask) was adjusted on each trial, independently for each expression, to maintain 75% accuracy. Two participants were excluded because they failed to reach this accuracy. Experimental programs were run using scripts [27] on the MATLAB platform (R2017b).
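The exact trial-by-trial adjustment rule is not specified here, so the sketch below assumes a weighted up-down staircase, which converges on the target accuracy; step size and starting values are invented for illustration.

```python
# Assumed weighted up-down staircase for keeping accuracy near 75%
# (the original Bubbles scripts may have used a different rule).
def update_n_bubbles(n_bubbles, correct, step=1, target=0.75):
    """After a correct trial remove bubbles (harder); after an error add more (easier)."""
    if correct:
        n_bubbles -= step
    else:
        n_bubbles += step * target / (1 - target)   # larger upward step -> 75% equilibrium
    return max(1, round(n_bubbles))

# One independent track per expression, updated after every trial of that expression.
tracks = {"anger": 40, "fear": 40, "neutral": 40}
tracks["anger"] = update_n_bubbles(tracks["anger"], correct=False)
```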

2.3.2. fMRI Behavioral Task

The whole experiment consisted of 4 functional runs and a structural anatomical scan. During the functional runs, participants were required to classify each body stimulus (whole body, torso with arms, head, legs) as a type of bodily expression (anger, fear or neutral). Each run contained 108 trials. Procedures were programmed and run in E-Prime 2.0 (Psychology Software Tools, Inc., Pittsburgh, PA, USA). Each functional run started with a white spot presented for 6 s. Within each trial, participants fixated on a white cross at the center of the screen. The fixation duration was chosen from a range of 2–6 s (average: 4 s), which approximates the ISI durations in previous emotion-related tasks [53,54,55]. One stimulus then followed and remained on the screen for 2 s, during which participants were required to indicate its expression type quickly and accurately by pressing one of the three response buttons. The assignment of response buttons was counterbalanced across participants. An additional fixation cross was presented for 10 s after every 36 trials in order to increase the fMRI design efficiency (Figure 2).

2.4. Data Acquisition

2.4.1. Bubbles Data

To identify the body information diagnostic for each expression, classification images were computed from the bubbles masks. We summed all the bubbles masks (pixel grayscales, for each scale and each participant) of the trials to which participants responded correctly. Similarly, the masks that led to error responses were summed. To build proportion images, the correct-response sum was divided (independently for each pixel) by the sum of the correct-response and error-response masks, for each SF band. We then employed the Stat4Ci toolbox [56]. The proportion images were smoothed with a Gaussian filter with a standard deviation of 8 pixels using the 'SmoothCi' function. After smoothing, the images were transformed into z scores using the 'ZtransSCi' function to normalize the grayscale values of each mask. Diagnostic information was tested on the z-transformed data by applying the cluster test from the Stat4Ci toolbox, using the 'StatThresh' function with a significance threshold of p < 0.05 and a cluster t-threshold > 2.7.
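The core of this computation can be sketched as follows. This is a simplified Python stand-in for the MATLAB Stat4Ci functions: the final pixel/cluster test is only approximated by a plain z threshold, and the mask arrays are random placeholders.

```python
# Minimal sketch of the classification-image computation (assumed simplification of
# SmoothCi / ZtransSCi / StatThresh from the Stat4Ci toolbox).
import numpy as np
from scipy.ndimage import gaussian_filter

def classification_image(correct_masks, error_masks, smooth_sigma=8):
    """correct_masks, error_masks: (n_trials, H, W) arrays of bubble masks."""
    correct_sum = correct_masks.sum(axis=0)
    total_sum = correct_sum + error_masks.sum(axis=0)
    prop = np.divide(correct_sum, total_sum,
                     out=np.zeros_like(correct_sum, dtype=float),
                     where=total_sum > 0)              # proportion image per pixel
    smoothed = gaussian_filter(prop, smooth_sigma)     # analogue of 'SmoothCi'
    return (smoothed - smoothed.mean()) / smoothed.std()  # analogue of 'ZtransSCi'

rng = np.random.default_rng(1)
z_map = classification_image(rng.random((100, 310, 245)),   # placeholder correct-trial masks
                             rng.random((30, 310, 245)))    # placeholder error-trial masks
diagnostic = z_map > 2.7   # crude stand-in for the StatThresh cluster test
```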

2.4.2. fMRI Data

The MRI scanning was carried out at the Brain Imaging Research Center of Southwest University, Chongqing, China, using a 3T SIMENS Trio Tim Syngo MR B17 scanner (Siemens Medical, Erlangen, Germany). A gradient-echoplanar imaging (EPI) sequence (scanning parameters: field-of-view/slice thickness: 192/3.5 mm; voxel size: 3.0 × 3.0 × 3.5 mm3; matrix: 64 × 64; number of slices: 33; TR/TE: 2000/30 ms; flip angle: 90°) was unified for all runs. Each run consisted of 342 functional volumes. Structural images were acquired through a three-dimensional sagittal T1-weighted magnetization-prepared rapid gradient echo (scanning parameters: field-of-view/slice thickness: 256/1 mm; TR/TE/TI: 1900/2.52/900 ms; voxel size: 1.0 × 1.0 × 1.0 mm3; flip angle: 9°; matrix: 256 × 256).

2.5. Data Analysis

2.5.1. Bubbles Data

As our aim was to quantify the diagnostic information used in each SF band for a given expression, we summed the number of significant pixels in each SF band and divided it by the total number of pixels in the corresponding band. This computation revealed the relative use of SF bands across expressions and actors, expressed as the proportion of diagnostic information per band [51]. We also summed the information from the 5 SF bands to build the whole diagnostic information for each expression and actor. To determine where the diagnostic information was located, we further divided each body stimulus into three parts: head (including hair and neck), legs (including feet) and torso with the two arms and hands. We then summed the number of significant pixels in each body part across bands and divided it by the total number of pixels in the same part, for each participant, yielding the proportion of diagnostic information per body part (see the sketch below). A two-way repeated-measures ANOVA on this diagnostic proportion was conducted with expression (anger, fear, neutral) and body part (head, torso with arms, legs) as factors. The Greenhouse–Geisser method was used for sphericity corrections. Post hoc multiple comparisons were conducted on p-values with the Bonferroni correction. Effect sizes for each comparison are reported in the Results.
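As a minimal illustration of the per-part proportion, the sketch below assumes hypothetical binary masks for the three body parts and a toy significance map; the actual part boundaries were drawn on the BEAST images.

```python
# Sketch: proportion of significant (diagnostic) pixels inside each body-part mask.
import numpy as np

def diagnostic_proportion(significant, part_masks):
    """significant: boolean (H, W) map; part_masks: dict of boolean (H, W) maps."""
    return {name: significant[mask].mean() for name, mask in part_masks.items()}

# Toy example with crude horizontal bands standing in for hand-drawn part masks.
sig = np.zeros((310, 245), dtype=bool); sig[100:150, 80:160] = True
masks = {"head": np.zeros_like(sig), "torso_arms": np.zeros_like(sig), "legs": np.zeros_like(sig)}
masks["head"][:80], masks["torso_arms"][80:200], masks["legs"][200:] = True, True, True
proportions = diagnostic_proportion(sig, masks)   # e.g. torso_arms ~0.14, others 0
```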

2.5.2. fMRI Behavioral Data

To compare with the statistical results from the Bubbles experiment, we also conducted repeated-measures ANOVAs with expression (three levels: anger, fear and neutral) and body (four levels: whole body, torso with arms, legs and head) as factors. We found a potential response bias: participants tended to classify unidentifiable bodies as neutral. We therefore adopted the unbiased hit rate Hu [57], which takes response bias into account by multiplying the hit rate (ACC) by the proportion of correct uses of the corresponding response category. Hu was computed for each condition and subject.
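A sketch of this index, following the standard unbiased hit rate formulation (hit rate multiplied by the proportion of correct uses of the corresponding response category), is given below; the confusion counts are invented for illustration.

```python
# Sketch of the unbiased hit rate Hu computed from a stimulus (rows) x response
# (columns) confusion matrix.
import numpy as np

def unbiased_hit_rate(confusion):
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    hit_rate = hits / confusion.sum(axis=1)    # correct / stimuli presented
    precision = hits / confusion.sum(axis=0)   # correct / times that response was used
    return hit_rate * precision                # Hu per condition

# Example: a participant who over-uses the "neutral" response.
conf = [[20, 2, 14],    # anger trials answered anger / fear / neutral
        [3, 18, 15],
        [1, 1, 34]]
print(unbiased_hit_rate(conf))   # Hu penalizes the biased "neutral" column
```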

2.6. fMRI Localizer and ROI Definition

A separate group of 18 participants (10 females, mean age ± s.d. = 22.39 ± 2.06 years) was recruited for a functional localizer task (240 volumes). The localizer experiment comprised 5 conditions: whole bodies, body parts, hands, tools and chairs [58]. Each condition consisted of 13 different grayscale images (450 × 600 pixels) on a white background. The localizer scan comprised a fully randomized sequence of 25 blocks and ran for 8 min 1 s in total. Scanning started with a white spot presented for 6 s. Within each category block, a fixation cross was presented at the center of the screen for 1 s, and then 13 different images were presented in random order at the center of the screen. Each image was shown for 800 ms with a blank inter-stimulus interval (ISI) of 200 ms. One image in the stimulus sequence was repeated once. Participants were required (1) to press a button with their right index finger when the repeated image appeared and (2) to pay attention to all 14 images of the stimulus sequence. The interval between two blocks was 4 s. We used this task to identify the brain areas selective for perceiving whole bodies and hands for the RSA. Scanning parameters were the same as in the main fMRI experiment.
Four functional ROIs were defined at the group level across all participants: left EBA, right EBA, left FBA and right FBA. The preprocessed data were analyzed using a GLM for each participant. The model included 5 experimental condition regressors and 6 motion correction regressors. In the group-level analysis, the EBA and FBA were defined using the t-map of the contrast [whole bodies + body parts > chairs] [58]. The threshold for ROI definition was set at q (FDR) = 0.05. The EBA covered 684 voxels (left) and 835 voxels (right) in the occipito-temporal cortex. The FBA covered 155 voxels (left) and 171 voxels (right) in the fusiform gyrus.

2.7. Image Data Preprocessing

Brain imaging data were preprocessed with the CONN functional connectivity toolbox (version 16.b; https://www.nitrc.org/projects/conn/; updated on 15 June 2016). First, the functional images were slice-timing corrected, realigned and unwarped, and screened for outliers (ART-based scrubbing). Frames were identified as outliers if (1) head displacement exceeded the threshold of 0.9 mm or (2) the global mean intensity exceeded 5 standard deviations above the mean intensity of the entire scan. Due to these constraints on head motion and whole-brain intensity, five subjects were excluded. Functional images were co-registered to each subject's gray matter image segmented from the corresponding high-resolution T1-weighted image, then spatially normalized into the common Montreal Neurological Institute (MNI) stereotactic space and smoothed with an isotropic three-dimensional Gaussian kernel of 6 mm full-width at half-maximum (FWHM).

2.8. fMRI Activation Analysis

The whole-brain GLM analysis was performed with the SPM12 software package (https://www.fil.ion.ucl.ac.uk/spm/; updated on 13 January 2020). The statistical model consisted of 12 regressors of experimental conditions and 6 regressors of head motion parameters. We applied a 3 × 4 ANOVA with expression and body part as factors to analyze the group random effects. We focused on the main effect of expression conveyed by the whole body or the torso with arms. To further test the expression effects, paired-sample t-tests were performed on the contrasts 'anger > neutral' and 'fear > neutral' for both the whole body and the torso with arms. The statistical tests were thresholded at q = 0.05, corrected for multiple comparisons using the false discovery rate (FDR).
Additionally, in order to evaluate the overlap of activations within the cluster under the contrasts of ‘anger > neutral’ and ‘fear > neutral’ of the whole body and torso with arms, we computed the Sørensen–Dice coefficient (Dice, 1945):
Roverlap = 2 × Voverlap/(V1 + V2).
Voverlap represents the number of voxels in the common activation region of the two contrasts; V1 and V2 represent the numbers of voxels in the two clusters being compared.
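For example, the coefficient can be computed directly from two binary activation masks (a small self-contained sketch; the masks here are toy one-dimensional arrays):

```python
# Sørensen–Dice overlap between two binary activation masks,
# e.g. the 'anger > neutral' and 'fear > neutral' clusters.
import numpy as np

def dice_coefficient(mask1, mask2):
    mask1, mask2 = np.asarray(mask1, bool), np.asarray(mask2, bool)
    v_overlap = np.logical_and(mask1, mask2).sum()
    return 2 * v_overlap / (mask1.sum() + mask2.sum())

a = np.zeros(100, bool); a[10:50] = True   # toy "cluster" 1 (40 voxels)
b = np.zeros(100, bool); b[30:60] = True   # toy "cluster" 2 (30 voxels)
print(dice_coefficient(a, b))              # 2*20/70 = 0.571...
```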

2.9. Constructing Candidate RDMs

We constructed 12 candidate representational dissimilarity matrices (RDMs) to simulate how the ROIs distinguish the bodily conditions and decode their emotional information (Figure 3). The matrices instantiated 7 different candidate models: 3 body-effect models (body-separate, body-pattern1 and body-pattern2), 3 emotion-effect models (emotion-separate, emotion-pattern1 and emotion-pattern2) and a random model. Among these models, body-separate, body-pattern1, emotion-separate and emotion-pattern1 were categorical models in which two conditions either belonged to the same category (dissimilarity = 0) or to different categories (dissimilarity = 1). In body-separate, each bodily condition represented an independent category. In body-pattern1, the whole body and the torso with arms represented the same category. We designed body-pattern1 under the assumption that, if the similarities of the true representations with body-pattern1 and with body-separate did not differ, the ROI representations of the torso with arms and the whole body would belong to the same category for processing body parts. In other words, body-separate and body-pattern1 together examined whether the EBA extracted diagnostic information mainly from the torso with arms. Similarly, the emotion models focused on whether two stimuli shared emotional information. Emotion-separate predicted that each emotional condition represented an independent category. Emotion-pattern1 predicted that the emotion category could be distinguished from the whole body and the torso with arms. These two models together examined whether the ROIs could abstract the same emotional information from the torso with arms.
The above predictions only differentiated whether the ROI representations belonged to independent categories, which might not depict the real similarity relations well. Therefore, we additionally used 2 special candidate models: body-pattern2 and emotion-pattern2. They predicted that the conditions elicited 5 different prototypical response patterns. We used dissimilarity ranks, rather than absolute degrees of dissimilarity, to predict the similarity structure, and the ranks of each model were rescaled to 0–1 before statistical tests. Specifically, in body-pattern2, dissimilarity rank 1 (dark green entries of body-pattern2 in Figure 3) predicted the closest categories, expected between the torso with arms and the whole body, and between the torso with arms and the legs. Rank 2 (red entries of body-pattern2) was used between the legs and the whole body, between the head and the torso with arms and between the head and the legs. Rank 3 (yellow entries) was used between the whole body and the head. Building on body-pattern2, emotion-pattern2 further distinguished the emotion categories within the relationship of the whole body and the torso with arms (dissimilarity ranks 0–6). The remaining red and yellow entries of emotion-pattern2 represented dissimilarity ranks 7 and 8, respectively.
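To make the categorical models concrete, a minimal sketch of how two of them could be constructed is shown below; the 12-condition ordering (expression × body part) is an assumption, not taken from the original analysis scripts.

```python
# Sketch: building two categorical candidate RDMs (12 x 12) under an assumed
# condition ordering of expression x body part.
import numpy as np
from itertools import product

expressions = ["anger", "fear", "neutral"]
parts = ["whole", "torso_arms", "legs", "head"]
conditions = list(product(expressions, parts))          # 12 (expression, part) pairs

def categorical_rdm(category_of):
    """Dissimilarity 0 for conditions in the same category, 1 otherwise."""
    cats = [category_of(c) for c in conditions]
    return np.array([[0.0 if a == b else 1.0 for b in cats] for a in cats])

rdm_body_separate = categorical_rdm(lambda c: c[1])     # each body part its own category
rdm_body_pattern1 = categorical_rdm(                    # whole body and torso+arms merged
    lambda c: "upper" if c[1] in ("whole", "torso_arms") else c[1])
```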

2.10. Representational Similarity Analysis

Representational similarity analysis (RSA) was performed using scripts based on the rsatoolbox [59,60]. For each participant, response patterns were extracted from the t-maps of the 12 conditions across the voxels inside each of the 4 ROIs. The true RDM was then computed by quantifying the dissimilarity (1 minus correlation) of the response patterns for each pair of conditions, for each ROI. Group RDMs were obtained by averaging the individual RDMs. In addition, multidimensional scaling (MDS) and hierarchical cluster trees were used to visualize the similarity structure of each group RDM. We performed the RSA to assess (1) whether each candidate RDM was significantly related to the true RDMs and (2) whether any two candidate RDMs differed in their degree of relatedness to the true RDMs. Kendall's τA correlation was used to compute these relationships. For each candidate RDM and true RDM, a two-tailed t-test was used to assess whether the correlation differed significantly from zero across subjects. The statistical threshold for these results was q (FDR) = 0.05, corrected for multiple comparisons.
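A minimal sketch of the RDM computation and model comparison follows. Note that scipy's kendalltau computes tau-b rather than the tau-A used in the paper, and the ROI patterns and candidate matrix below are random placeholders, so this is only an illustrative stand-in for the rsatoolbox workflow.

```python
# Sketch: true RDM = 1 - Pearson correlation between condition response patterns,
# then rank-correlate its lower triangle with a candidate RDM.
import numpy as np
from scipy.stats import kendalltau

def true_rdm(patterns):
    """patterns: (n_conditions, n_voxels) array of t-values within one ROI."""
    return 1.0 - np.corrcoef(patterns)

def rdm_relatedness(rdm_a, rdm_b):
    idx = np.tril_indices_from(rdm_a, k=-1)   # unique off-diagonal cells only
    return kendalltau(rdm_a[idx], rdm_b[idx]) # tau-b here, a stand-in for tau-A

rng = np.random.default_rng(2)
roi_patterns = rng.standard_normal((12, 684))   # e.g. 12 conditions x left-EBA voxels
rdm = true_rdm(roi_patterns)

# Compare against a candidate RDM (e.g. rdm_body_separate from the previous sketch);
# a random symmetric matrix is used here so the snippet runs on its own.
candidate = rng.random((12, 12)); candidate = (candidate + candidate.T) / 2
tau, p = rdm_relatedness(rdm, candidate)
```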

3. Results

3.1. Bubbles Results

Figure 4 shows the descriptive results of the diagnostic information from the Bubbles analysis. It presents diagnostic information used on one female actor and one male actor for each expression (rows) and each SF band (columns 2–6). The first column presents an integration of the diagnostic information collapsed across the five SF bands. The final column presents a bar graph representing the diagnostic spectrum for each actor (for the results of the other two actors, see Figure A1 and Figure A2).
Repeated-measures ANOVAs on the diagnostic proportion showed a significant main effect of expression (F(1.4, 42.00) = 5.542, p = 0.014, ηp2 = 0.156) (Figure 5). The main effect of body part was also significant (F(2, 60) = 17.716, p < 0.001, ηp2 = 0.371), as was the interaction between them (F(2.45, 73.49) = 4.901, p < 0.001, ηp2 = 0.140). Simple effect analyses showed that for anger, the torso with arms (M ± SE: 0.151 ± 0.015) was higher than the head (0.077 ± 0.013, p = 0.003) and the legs (0.050 ± 0.008, p < 0.001), while there was no significant difference between the head and legs (p = 0.335). For fear, the head (0.214 ± 0.029, p = 0.003) and the torso (0.191 ± 0.027, p < 0.001) were higher than the legs (0.089 ± 0.023), while there was no significant difference between the head and torso (p > 0.999). For the neutral condition, there was no significant difference between any pair of the three parts (head: 0.234 ± 0.048, torso: 0.211 ± 0.037, legs: 0.187 ± 0.036; p > 0.05).

3.2. fMRI Behavioral Performance

Behavioral results (Figure 6) showed significant main effects of expression (F(2, 36) = 27.381, p < 0.001, ηp2 = 0.603) and body (F(2.244, 40.395) = 236.183, p < 0.001, ηp2 = 0.929). The interaction between expression and body was also significant (F(2.984, 53.708) = 42.275, p < 0.001, ηp2 = 0.701). To further examine which body part was better perceived, simple effects analyses were used to compare the Hu of all the body types within each expression condition. There were significant differences among body types for the anger condition (F(3, 16) = 238.242, p < 0.001, ηp2 = 0.970). The whole body (0.329 ± 0.015) was recognized better than the legs (0.043 ± 0.008, p < 0.001) and the head (0.001 ± 0.001, p < 0.001). The torso with arms (0.307 ± 0.016, p = 0.015) was also recognized better than the legs (p < 0.001) and the head (p < 0.001); however, there was no difference between the torso with arms and the whole body. The legs were recognized better than the head (p < 0.001). For the fear condition, there were significant differences among body types (F(3, 16) = 53.272, p < 0.001, ηp2 = 0.909). The whole body (0.313 ± 0.016) was recognized better than the torso with arms (0.268 ± 0.019, p = 0.029), the legs (0.055 ± 0.013, p < 0.001) and the head (0.030 ± 0.008, p < 0.001). The torso with arms was recognized better than the legs (p < 0.001) and the head (p < 0.001); however, there was no difference between the legs and the head (p = 0.827). For the neutral condition, there were significant differences among body types (F(3, 16) = 11.052, p < 0.001, ηp2 = 0.674). The whole body (0.183 ± 0.009) was recognized better than the torso with arms (0.129 ± 0.011, p = 0.007), the legs (0.090 ± 0.008, p < 0.001) and the head (0.117 ± 0.008, p = 0.001). The torso with arms was recognized better than the legs (p < 0.009), and the head was recognized better than the legs (p < 0.045); however, there was no difference between the torso with arms and the head.

3.3. Brain Activations

A whole-brain ANOVA (flexible factorial design) was conducted at the group level on the two within-participant factors expression (anger, fear, neutral) and body part (whole body, torso with arms, legs, head). An interaction of expression and body part was observed in clusters of the medial frontal cortex, right anterior insula and precuneus. The main effect of expression was found in clusters in the frontal lobe, parietal lobe, fusiform gyrus, posterior cingulate gyrus, caudate, insula and cerebellum. The main effect of body part was found mainly in the supplementary motor area and the occipito-temporal cortex close to the EBA (for more details, see Figure A3, Figure A4 and Table A1). Figure 7 shows the activation differences for the contrasts 'anger > neutral' and 'fear > neutral' under the whole-body and torso-with-arms conditions. All the contrasts yielded clusters in the left occipito-temporal cortex, showing higher activity for expression-related processing. The whole body and torso with arms overlapped in 21 voxels (Roverlap = 0.50) for the 'anger > neutral' contrast and 14 voxels (Roverlap = 0.48) for the 'fear > neutral' contrast (Figure A5). This indicates that the EBA may contribute to encoding the diagnostic information used in bodily expression perception.

3.4. Representations of EBA and FBA

Four main observations (to be quantified in subsequent analyses) emerged from visual inspection of the four RDMs (Figure 8). (1) All of the RDMs exhibited a dominant body-part effect: each matrix can be split into sixteen 3 × 3 sub-matrices, and within each sub-matrix the pairwise dissimilarities tended to be of similar magnitude, whereas they differed between adjacent sub-matrices. (2) The RDMs appeared to contain emotion effects: the neural representations of the two emotional expressions showed low dissimilarity with each other, whereas neutral stimuli showed large dissimilarity with emotional stimuli. (3) For the EBA RDMs, the emotion effects appeared mainly for specific body parts, namely the whole body and the torso with arms, which is consistent with our assumption that the torso with arms is the diagnostic body part for emotion recognition. (4) For the FBA RDMs, the whole body showed large dissimilarity with all three split body parts.
Multidimensional scaling (MDS) arrangements and hierarchical cluster plots were used to visualize the dissimilarity structure across all conditions, and they generally revealed three separate clusters: one for the whole body, one for the large body parts (torso with arms and legs) and one for the head. The torso with arms and the legs produced similar responses in both the EBA and FBA. However, the visualizations revealed at least two main differences in cluster organization between the EBA and FBA: (1) in the MDS plots, the whole body showed the largest distances from the head in the two EBA ROIs, whereas the whole body showed large distances from all three split body parts in both FBA ROIs; (2) in the dendrograms, for the EBA, the torso with arms and the legs were grouped into one cluster, which was then grouped directly with the whole body, whereas for the FBA, the whole body joined only after the three body parts had been grouped together into one cluster. Statistical inference was needed to further examine these relationships.

3.5. Statistical Inference of RSA

Statistical inference was performed to test whether each candidate RDM was significantly related to the true RDMs. The relatedness was tested using the signed-rank test. The four bar graphs (Figure 9A) showed that (1) body-pattern1, body-pattern2, body-separate and emotion-pattern2 were positively related to the EBA and FBA RDMs, whereas the random, emotion-pattern1 and emotion-separate models were not; (2) in general, the candidate models predicted lower similarities for the FBA, indicating that the predictions might not fit the FBA representations well; (3) among the significant models, emotion-pattern2 and body-pattern2 had high correlations with the true RDMs (EBA and FBA, both left and right), while body-separate and body-pattern1 had relatively low correlations, and these results required further pairwise comparisons.
Next, we also tested whether any two candidate RDMs differed in their relatedness to the true RDMs. The upper triangular matrices (Figure 9B) showed that for the correlation to the EBA, (1) emotion-pattern2 was more correlated than body-pattern2 (left EBA: q(fdr) < 0.05; right EBA: q(fdr) < 0.01). Body-pattern2 was more correlated than body-separate (left and right EBA: q(fdr) < 0.01). Among these candidate models, emotion-pattern2 was the best model to simulate the representation of the EBA RDMs. (2) However, there were no significant differences between the correlation of body-separate (to the EBA) and the correlation of body-pattern1 (to the EBA). Both body-pattern1 and body-separate correlated more than the random model (left and right: q(fdr) < 0.01). (3) There were no significant differences between the correlation of emotion-pattern1 and the correlation of the random model. Emotion-separate was the worst model to simulate the EBA representation similarities.
Similarly, for the correlation to the FBA, (1) emotion-pattern2 was more correlated than body-pattern2 (left and right FBA: q(fdr) < 0.05), and body-pattern2 was more correlated than body-separate (left and right FBA: q(fdr) < 0.01). This was consistent with the correlation to the EBA. Emotion-pattern2 was also the best candidate RDM to simulate the representation of the FBA RDMs. (2) However, body-separate was also more correlated than body-pattern1 (left and right FBA: q(fdr) < 0.05). Body-pattern1 correlated more than emotion-pattern1 (left and right FBA: q(fdr) < 0.01). (3) There were no differences between the correlation of emotion-pattern1 and the correlation of the random model to the left FBA (q(fdr) > 0.05), or between the correlation of the random model and the correlation of emotion-separate (left and right FBA: q(fdr) > 0.05). Body-pattern1 was more correlated than the random model (q(fdr) < 0.05).

4. Discussion

The current study explored the diagnostic parts for bodily expression recognition and their mechanisms by analyzing the brain representations of bodily expression. To illustrate, clenched fists and flexed arms in hitting gestures reliably allowed observers to categorize the emotion as anger. Similarly, a backward-leaning torso, arms held in front of the torso and hands shielding the body all reliably indicated fear. Our findings were generally consistent with the behavior types used for expression decoding and encoding in previous expression communication studies [10,18,61]. Moreover, in the fMRI experiment we found that the response patterns in the EBA carried information that clearly distinguished different body parts. In contrast, the FBA only distinguished between the whole body and body parts. Furthermore, the EBA decoded information that may be used for further expression perception.

4.1. Torso with Arms as Diagnostic Body Parts

Previous researchers manipulated behavioral type (spatial form, such as head tilted up or down) or quality (spatiotemporal properties, such as speed and energy) to study bodily behaviors, and identified some specific behaviors which can be used to perceive both static and dynamic expressions [17,62]. For example, hot anger (or rage) portrayals can be expressed by a forward lean or movement [5]. However, in more cases, bodily expression is transmitted through flexible and variable motion patterns [10], and it is difficult to focus on all the patterns in one experiment. We therefore adopted another perspective which directly focused on the diagnostic body parts.
The main contribution of the Bubbles experiment is that we identified the diagnostic body parts that summarize the diagnostic features, which helps in understanding the mechanisms underlying expression perception. The Bubbles methods we used were not entirely consistent with previous studies, which revealed the diagnostic information in different SF bandwidths [51,52]. Instead, we additionally divided the whole body into three body parts and analyzed the proportion of diagnostic information for each part. The purpose of this analysis was to integrate over flexible postures and movements. Previous studies used the Bubbles paradigm on facial expressions; however, facial and bodily expressions differ a great deal in structure and configuration [62,63]. Because bodily expressions involve flexible postures and movements, there are many limitations in generalizing the results of diagnostic features to other bodily behavior patterns, even when perceiving the same expression. At the level of body parts, however, there were significant differences in the amount of diagnostic information that the parts contained. If two irregular dynamics share a 'similar' posture or movement, they probably share the 'same' diagnostic body parts for perception. That is, to distinguish whether a bodily expression is anger or fear, we can simply pay attention to the torso with arms.
The current experiment showed two characteristics consistent with previous studies. (1) Diagnostic feature selection basically depends on the information structure provided by the visual input [13]. For perceiving anger, observers selected the area of the thighs as a diagnostic feature for one of the female actors, whereas little information was selected from the thighs for one of the male actors (Figure 4); for the male actor, observers used visual information only from his fist. This variation in diagnostic features cannot be attributed solely to perceived associations between gender and certain expressions, because the two actors also differed subtly in body posture. (2) Diagnostic feature selection also depends on flexibly utilizing the spatial locations of body parts. For example, the extraction of head information differed markedly between the Bubbles task and the fMRI experiment. The diagnostic features for heads originally contained both the neck and the lower half of the head, yet heads presented alone could not be recognized in the fMRI experiment. We inferred that head orientation is difficult to identify when heads are presented alone, but may be easy when integrated with the torso. This is consistent with the notion that spatial location is extracted by a top-down processing mechanism [13,64], which is modulated by task requirements, memory representations and strategies [13,52,64]. Here, the absence of spatial location cues for head orientation led to more flexible strategies in utilizing memory representations of diagnostic features.

4.2. Brain Activations Related to Body Parts

Brain activation results were generally consistent with the body recognition literature [28,46,58,65]. Notably, the anger vs. neutral contrasts revealed stronger activation of the lingual gyrus for the torso with arms, and of the inferior parietal lobule (IPL), supplementary motor area (SMA), thalamus and anterior insula (AI) for the legs (Table A1). The lingual gyrus has been demonstrated to be involved in processing both human faces [66] and bodies [67], and was activated during passive viewing of body parts [67]. Therefore, the stronger activation for angry torsos relative to neutral torsos possibly reflected the processing of body motion information.
The IPL also modulates the perception of facial expressions and the interpretation of character information [68]. Importantly, the IPL plays a causal role in processing fearful bodies [69]; however, we only observed IPL activation in the 'anger versus neutral' contrast, and more experiments are needed to explain this finding. Activation of the SMA upon viewing angry legs implies that this area collects information on body motion [70,71], given that the region is involved in planning or preparing movements [72]. We also found enhanced activation of the thalamus (including the caudate) in this contrast. Studies have found that the thalamus and AI could be linked with the experience of unpleasant and highly arousing affect [38]. In particular, the AI may be responsible for integrating interoceptive awareness with feelings of disgust and phobia; subjects with higher interoceptive awareness could require less AI activity to maintain similar behavior when viewing phobia-related stimuli [73,74]. This function may explain why AI activation occurred when viewing nondiagnostic body parts (e.g., head, legs). In addition, the AI is critical to the salience network, particularly in switching working memory and attentional resources toward detecting salient events [74].
Other subcortical structures, such as the amygdala, did not show significant activation in the anger vs. neutral contrast or in the other contrasts. We cannot conclude that these regions are not involved in the perception of bodily expression, because the results might depend on the experimental design. As mentioned in the Introduction, the current design may not evoke a rapid response of the amygdala.

4.3. Neural Representation of Diagnostic Body Parts

We confirmed the significance of the torso part for expression perception by examining the brain activities elicited by separate body parts. Regions activated by the torso part and the whole body overlapped a great deal in the left occipito-temporal lobe near the EBA, which is sensitive to body parts and biological motion [37,46,75]. This indicated that the EBA might process bodily expression by extracting information from the torso and arms. We therefore further examined the dissimilarity structure of neural representations in the EBA. First, we found that the EBA RDMs were more correlated with body-pattern2 than with body-separate and body-pattern1. There were no differences between the correlations of the latter two models with the EBA RDMs, indicating that, among the three body parts, the neural representation of the torso part was most similar to that of the whole body. As the FBA differentiates body configurations [75,76], the same analysis was also applied to the neural representations in the FBA. We also found the effects of body-pattern2; however, the FBA RDMs were more correlated with body-separate than with body-pattern1. This indicates that the FBA prefers to decode the body parts as independent categories. Poyo Solanas and colleagues [76] demonstrated that both the EBA and FBA could decode information about limb contraction, although their neural representations differed. Our results may contribute to showing the representational difference between the EBA and FBA.
Second, we showed an organizational structure for representations in the EBA and FBA related to distinguishing the posture or movement of body parts. This is consistent with previous studies [43,77] showing that the EBA responds to body parts at the semantic level rather than at the physical-property level; however, those studies did not address the posture or movement of body parts. Body-separate correlated better than body-pattern1 with both sides of the FBA, reflecting that the FBA might not be as sensitive as the EBA in decoding the torso with arms. The FBA is therefore not a brain area well suited to representing the diagnostic body parts. This functional difference is consistent with studies emphasizing the functional specialization of the EBA and FBA [76,78].
Furthermore, postural features from the torso and arms possibly drive the bodily expression perception. For example, limb contraction drives fear perception [76]. Our results further indicated that decoding the posture was probably derived from diagnostic body parts. The EBA RDMs were more correlated to emotion-pattern2 than body-pattern2. This provided evidence that the torso with arms was decoded in a similar way as the whole body. Combining the facts that the EBA plays a role in action perception [12] and is connected to parietal cortex regions [78], we inferred that the EBA possibly transmits the movement information of the upper limbs for more abstract perception. Taken together, the main contribution of the current experiment was that the EBA may convey diagnostic information for perceiving bodily expressions.

4.4. Limitations and Future Expectations

We employed methods that have previously been used to investigate diagnostic information and neural representations. However, the approaches have limitations in some respects: (1) group-level functional ROIs were adopted, rather than individual-subject ones, limiting the accuracy of the functional localization of the multivariate pattern (MVPA) effects; (2) only a limited set of basic emotions (fear and anger) was employed, limiting the generality of the conclusions; (3) likewise, only static stimuli were used, and a different answer might be obtained for information conveyed by body motion. For instance, recent works on the 'forrest dataset' [79] investigated emotional representations during movie watching [80,81]. Dynamic bodily expressions unfold time-varying and complex emotions, which are closer to real-life experiences, and dynamic features convey more details, such as the time to rise or the probability of resurgence [82]. Future research should expand the types of emotion and stimuli to further investigate the features and related neural mechanisms of bodily emotion.

5. Conclusions

Behavioral evidence supports the idea that diagnostic emotional information for perceiving both anger and fear is carried by the torso (including the arms and hands). This body part and the whole body also share similar neural representations in the EBA. Furthermore, we demonstrated that the canonical EBA distinguishes the posture or movement used for decoding bodily emotion. In sum, both behavioral and imaging results show that the actions or postures of the upper body parts provide core features for emotion recognition.

Author Contributions

Conceptualization, W.L. and J.R.; methodology, J.R., R.D. and D.W.; software, J.R.; validation, R.D., S.L. and M.Z.; formal analysis, J.R. and R.D.; investigation, W.L. and J.R.; resources, W.L.; data curation, J.R.; writing—original draft preparation, J.R. and S.L.; writing—review and editing, R.D. and C.F.; visualization, J.R.; supervision, C.F. and P.X.; project administration, W.L.; funding acquisition, W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China grant (grant number: 31871106).

Institutional Review Board Statement

The study was approved by the Ethical Committee of Liaoning Normal University.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data and code that support the findings of this study are available from the corresponding author upon reasonable request by e-mail. The data are not publicly available as new data such as the fMRI data under more emotion conditions were created and will be further analyzed in future studies. We will provide the data, code and results of the Bubbles experiments and the task-fMRI experiments, if editors, reviewers and any readers need them to perform validation or any other analysis.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Diagnostic information in one other female actor.
Figure A2. Diagnostic information in one other male actor.
Figure A3. Clusters of main effects and interaction in the group-level whole-brain ANOVA. (A) Clusters for the main effect of body and (B) the main effect of expression. (C) Clusters revealed by the interaction between body and expression. Abbreviations: EBA = extra-striate body area, aITG = anterior inferior temporal gyrus, PHG = parahippocampal gyrus, FG = fusiform gyrus, ACC = anterior cingulate cortex, INS = insula, MTG = middle temporal gyrus, IOL = inferior occipital lobe, LG = lingual gyrus, IFG = inferior frontal gyrus, STG = superior temporal gyrus, PCC = posterior cingulate cortex, SFG = superior frontal gyrus, MFG = middle frontal gyrus, SMA = supplementary motor area, SPL = superior parietal lobule, vmPFC = ventromedial prefrontal cortex, MSF = medial superior frontal cortex, IPL = inferior parietal lobule.
Figure A4. Clusters of the contrast analyses. (A) Clusters shown by the contrast angry legs > neutral legs. (B) Clusters shown by the contrast fearful legs > neutral legs. (C) Clusters shown by the contrast fearful head > neutral head. Abbreviations: INS = insula, MTG = middle temporal gyrus, EBA = extra-striate body area, IFG = inferior frontal gyrus, MFG = middle frontal gyrus, SMA = supplementary motor area, IPL = inferior parietal lobule, SPL = superior parietal lobule.
Figure A5. Overlapping of the ROI and maps of emotional torso parts. Abbreviations: TA_AN = anger vs. neutral under torso with arms, TA_FN = fear vs. neutral under torso with arms, WB_AN = anger vs. neutral under whole body, WB_FN = fear vs. neutral under whole body.
Table A1. Information of brain activations.
Columns: contrast or region; MNI coordinates (x, y, z); peak z or t; cluster p; cluster size.
Emotion × body interactions
Ventromedial prefrontal gyrus−85627.61.5 × 10−5143
R anterior insula302007.50.002162
L precuneus−8−56286.91.2 × 10−4103
Inferior frontal gyrus−5036246.56.4 × 10−4426
Main effect of expression
L cerebellum−12−74−3414.50.0033104
R cerebellum24−66−3013.54.7 × 10−4150
L fusiform gyrus−30−36−2013.40.03158
L middle temporal gyrus−60−4−1818.00.0019118
Ventromedial prefrontal gyrus044−1616.42.7 × 10−7380
L inferior frontal gyrus, L insula−3022−425.28.3 × 10−151115
R inferior frontal gyrus, L insula3420219.51.4 × 10−8489
R caudate1210611.80.05050
L(R) posterior cingulate, precuneus−6−602016.14.0 × 10−10633
L(R) medial frontal gyrus6521016.67.5 × 10−5205
L inferior parietal lobule−38−505416.51.3 × 10−5257
L middle frontal gyrus−24304815.00.003899
L(R) supplementary motor area−4224811.30.0018122
Main effect of body
R anterior inferior temporal gyrus60−10−2411.01.7 × 10−4202
L anterior inferior temporal gyrus−62−14−189.00.0032118
L middle occipital lobe (covering L EBA, L FBA gyrus, cerebellum)−42−82414.97.1 × 10−232311
R middle occipital lobe (covering R EBA, R FBA gyrus, cerebellum)14−1001240.05.7 × 10−191732
L(R) medial frontal gyrus and anterior cingulate−444−2011.41.3 × 10−141160
L insula, L inferior frontal gyrus−3022−215.93.3 × 10−6332
R insula, R inferior frontal gyrus3222216.94.3 × 10−5247
Precuneus, posterior cingulate−2−224013.51.5 × 10−161405
L inferior frontal gyrus−4228269.20.008392
L angular−46−684611.31.7 × 10−4197
R inferior frontal gyrus464289.50.0039112
L superior parietal lobule18−685011.51.7 × 10−4198
L(R) supplementary motor area−4205017.36.8 × 10−10654
L superior frontal gyrus−1646408.30.008393
L superior frontal gyrus−20325414.50.0012144
In whole body,
Anger > neutral
L occipito-temporal gyrus (EBA)−48−76−24.30.01525
Fear > neutral
L occipito-temporal gyrus (EBA)−54−60125.20.01035
In torso with arms,
Anger > neutral
R lingual gyrus20−8465.40.03280
L occipito-temporal gyrus (EBA)−50−7404.60.04860
Fear > neutral
R middle temporal gyrus50−6284.60.04523
L occipito-temporal gyrus (EBA)−54−62104.60.01224
In legs,
Anger > neutral
R cerebellum44−56−344.40.010212
L cerebellum−12−70−364.70.0038285
L middle temporal gyrus−42−6044.00.0053259
L insula−3022−25.22.6 × 10−163593
R insula322004.84.6 × 10−4435
R middle frontal gyrus424525.03.1 × 10−91557
R thalamus (including R caudate)20−4143.90.016185
L thalamus (including L caudate)−6−18123.90.0091223
R inferior parietal lobule68−34243.80.019171
L inferior parietal lobule−32−42424.92.8 × 10−81297
R inferior parietal lobule32−42444.45.2 × 10−5604
L(R) supplementary motor area−620504.53.3 × 10−6841
L(R) precuneus−14−68564.10.023159
R superior parietal lobule18−64544.00.0028310
Fear > neutral
L insula−3022−45.45.4 × 10−5273
R insula342025.07.1 × 10−4173
L inferior frontal gyrus−5610224.70.009097
In head,
Anger > neutral
-
Fear > neutral
R cerebellum24−66−324.60.04270
R insula, R inferior frontal gyrus321804.61.7 × 10−4282
L insula−3420−24.60.0020177
L inferior frontal gyrus−4218164.80.0032142
L supplementary motor area−622444.70.0020164

Figure 1. Illustration of generating the Bubbles stimuli. As shown in the first row, each original body stimulus was decomposed into five spatial frequency bandwidths (123 to 4 cpi). In the second row, Gaussian-window bubbles were independently and randomly placed on each bandwidth. The third row shows the body information revealed by the bubbles at each scale, together with the sum of information across scales. The final stimulus, obtained by summing the five leftmost pictures of that row, was used in the formal experiment.
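For readers who wish to prototype this masking procedure, the following Python sketch illustrates the general band-decomposition-and-bubble idea described in the caption; the band construction (a difference-of-Gaussians pyramid), the number of bubbles per band, and the bubble widths are illustrative assumptions rather than the exact parameters used in this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_decompose(img, n_bands=5):
    """Split an image into n_bands spatial-frequency bands (difference-of-Gaussians pyramid)."""
    bands, prev, sigma = [], img.astype(float), 1.0
    for _ in range(n_bands - 1):
        low = gaussian_filter(prev, sigma)
        bands.append(prev - low)          # band-pass residual
        prev, sigma = low, sigma * 2
    bands.append(prev)                    # coarsest (low-pass) band
    return bands

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly positioned Gaussian windows, normalized to [0, 1]."""
    mask = np.zeros(shape)
    for y, x in zip(rng.integers(0, shape[0], n_bubbles), rng.integers(0, shape[1], n_bubbles)):
        impulse = np.zeros(shape)
        impulse[y, x] = 1.0
        mask += gaussian_filter(impulse, sigma)
    return np.clip(mask / mask.max() if mask.max() > 0 else mask, 0, 1)

def make_bubbles_stimulus(img, n_bubbles=(15, 12, 9, 6, 3), sigmas=(2, 4, 8, 16, 32), seed=0):
    """Reveal each SF band through its own bubbles, then sum the revealed bands."""
    rng = np.random.default_rng(seed)
    bands = bandpass_decompose(img, n_bands=len(sigmas))
    revealed = [b * bubble_mask(img.shape, n, s, rng) for b, n, s in zip(bands, n_bubbles, sigmas)]
    return np.sum(revealed, axis=0)
```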
Figure 2. (a) Illustration of the stimuli for the three bodily expressions and four body parts. (b) Procedure of the fMRI task.
Figure 3. Candidate models. Body-separate, body-pattern1, emotion-separate and emotion-pattern1 are categorical models simulating the similarity of the BOLD activation patterns induced by the emotional categorization task, under the assumption that the body or emotion factor independently dominates the underlying representations. Body-pattern2 assumes that the similarity between the activation pattern induced by the whole body and the patterns induced by the torso with arms, legs and head decreases in that order. Emotion-pattern2 combines emotion-pattern1 and body-pattern2, assuming that the torso with arms and the whole body share similar patterns for emotion categorization.
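To make the categorical candidate models concrete, the sketch below builds a "body-separate", an "emotion-separate", and a graded "body-pattern"-style model RDM for the 12 conditions (4 body parts × 3 expressions); the condition ordering and the 0/1 (or graded) dissimilarity coding are illustrative assumptions, not the exact matrices used in the study.

```python
import numpy as np

body_parts = ["whole_body", "torso_arms", "legs", "head"]
emotions = ["anger", "fear", "neutral"]
conditions = [(b, e) for b in body_parts for e in emotions]  # 12 conditions

def model_rdm(grouping):
    """Categorical model RDM: 0 if two conditions share a group, 1 otherwise."""
    n = len(conditions)
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if grouping(conditions[i]) != grouping(conditions[j]):
                rdm[i, j] = 1.0
    return rdm

body_separate = model_rdm(lambda c: c[0])     # conditions grouped by body part
emotion_separate = model_rdm(lambda c: c[1])  # conditions grouped by expression

# A graded, "body-pattern"-style model: dissimilarity to the whole body grows
# from torso with arms to legs to head (the step sizes are purely illustrative).
rank = {"whole_body": 0, "torso_arms": 1, "legs": 2, "head": 3}
body_pattern = np.array([[abs(rank[ci[0]] - rank[cj[0]]) for cj in conditions]
                         for ci in conditions], dtype=float)
```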
Figure 4. Diagnostic information revealed by the Bubbles experiment. The significant body information (red regions) for categorizing each bodily expression is displayed in a separate row: the first three rows show the three expressions posed by a female actor and the last three rows those posed by a male actor. The first column shows the diagnostic SF features overlaid across all the SF bands sampled in our experiment; the next five columns show the SF features of each band. The rightmost bar graph shows the diagnostic SF spectrum for each expression (proportion of diagnostic information per band). The numbers at the top indicate the range of each bandwidth (in cpi) and correspond to the numbers below each bar graph.
Figure 5. Bar graph of the Bubbles experiment results. Each bar represents the proportion of diagnostic pixels (mean + s.e.m.) within the torso with arms, legs and head for classification as anger, fear and neutral. ** p < 0.005; *** p < 0.001.
Figure 6. Bar graph of the behavioral results of the fMRI experiment. Each bar represents behavioral performance (Hu, see the main text; mean + s.e.m.) for classifying the WB, TA, legs and head as anger, fear and neutral, respectively. * p < 0.05; ** p < 0.01; *** p < 0.001.
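Hu here presumably denotes Wagner's unbiased hit rate, which corrects raw hit rates for response bias by weighting hits by both how often an expression was presented and how often the corresponding response was given. A minimal sketch of that computation from a stimulus-by-response confusion matrix is given below; the example matrix is hypothetical, not data from this study.

```python
import numpy as np

def unbiased_hit_rate(confusion):
    """Unbiased hit rate per category: squared hits / (row total x column total)."""
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)
    row_totals = confusion.sum(axis=1)   # how often each expression was presented
    col_totals = confusion.sum(axis=0)   # how often each response was given
    return hits ** 2 / (row_totals * col_totals)

# Hypothetical confusion matrix: rows = presented (anger, fear, neutral),
# columns = responses (anger, fear, neutral).
example = [[18, 5, 1],
           [4, 17, 3],
           [2, 2, 20]]
print(unbiased_hit_rate(example))  # one Hu value per expression
```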
Figure 7. Group analysis results for the contrasts 'anger vs. neutral' and 'fear vs. neutral' under the whole body (WB, yellow clusters) and torso with arms (TA, red clusters) conditions. Overlap between the WB and TA clusters is shown in orange. The significant clusters were located in the occipitotemporal cortex around the EBA.
Figure 8. Representational structures in the EBA and FBA. (A) True RDMs, averaged across subjects for the four ROIs, showing the neural dissimilarity (1 − r) between each pair of body parts. (B) MDS solutions computed from the RDMs, plotting the pairwise distances in a 2D space. The distances reflect response-pattern similarity: pairs located close to each other share similar response patterns, whereas pairs far apart have dissimilar response patterns. (C) Dendrograms grouping the body parts (nearest-neighbor linkage) to reveal their categorical divisions.
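As a concrete illustration of the 1 − r dissimilarity and the MDS embedding referred to in this caption, the following sketch computes an RDM from a conditions × voxels activation matrix and embeds it in two dimensions; the toy data dimensions are assumptions for illustration only.

```python
import numpy as np
from sklearn.manifold import MDS

def neural_rdm(patterns):
    """RDM of 1 - Pearson r between every pair of condition patterns
    (rows = conditions, columns = voxels)."""
    return 1.0 - np.corrcoef(patterns)

rng = np.random.default_rng(1)
patterns = rng.normal(size=(12, 200))   # toy data: 12 conditions x 200 ROI voxels
rdm = neural_rdm(patterns)

# 2D embedding of the pairwise dissimilarities, as in the MDS panels.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(rdm)
```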
Figure 9. Statistical test results. (A) Correlations (Kendall's rank correlation coefficient τA) between the true RDMs and each candidate RDM. The correlation coefficients were tested with a one-sided signed-rank test; significant results are marked with '*' below the bars. (B) Differences between candidate RDMs in their relatedness to the true RDMs. Each entry represents the significance of the difference tested with a two-sided signed-rank test. The colors of the entries represent different significance thresholds: q(FDR) = 0.05 (deep red) and q(FDR) = 0.01 (red); nonsignificant entries are black. (C) Candidate models. BS: body-separate; BP: body-pattern; ES: emotion-separate; EP: emotion-pattern; r: random.
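The model-comparison logic in this figure can be sketched with standard tools: correlate each subject's true RDM with a candidate RDM using Kendall's rank correlation, then test the correlations across subjects with a signed-rank test. Note that scipy's kendalltau computes the tau-b variant rather than the τA reported here, and the data below are simulated, so this is a sketch rather than the study's pipeline.

```python
import numpy as np
from scipy.stats import kendalltau, wilcoxon

def upper_triangle(rdm):
    """Vectorize the off-diagonal upper triangle of an RDM."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def model_fits(subject_rdms, candidate_rdm):
    """Kendall correlation between each subject's RDM and one candidate model RDM."""
    cand = upper_triangle(candidate_rdm)
    return np.array([kendalltau(upper_triangle(r), cand)[0] for r in subject_rdms])

# Toy data: 20 symmetric per-subject RDMs and one symmetric candidate RDM.
rng = np.random.default_rng(2)
subject_rdms = [(m + m.T) / 2 for m in rng.random((20, 12, 12))]
candidate = rng.random((12, 12))
candidate = (candidate + candidate.T) / 2

taus = model_fits(subject_rdms, candidate)
stat, p = wilcoxon(taus, alternative="greater")  # one-sided signed-rank test across subjects
```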
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
