Facial Feature Movements Caused by Various Emotions: Differences According to Sex

Facial muscle micro-movements for eight emotions were induced via visual and auditory stimuli and verified according to sex. Thirty-one main facial features were chosen from the Kinect API out of the 121 facial features it initially provides, and the average change in pixel value was measured after image alignment. The proposed method is advantageous in that it enables quantitative comparison: facial micro-expressions are analyzed in real time using the 31 facial feature points, and the amount of micro-expression under the various emotion stimuli is comparatively analyzed for differences according to sex. Men's facial movements were similar across emotions, whereas women's facial movements differed for each emotion. Six feature positions were significantly different according to sex; in particular, the inner eyebrow of the right eye differed at a confidence level of p < 0.01. Consequently, men's discriminative power for separating one emotion from the others through facial expression was lower than women's, despite men's average movement being higher. Additionally, the asymmetric phenomena around the left eye region of women appeared more strongly in cases of positive emotions.


Introduction
Emotion estimation methods have been studied recently in order to help develop affective smart content. To estimate emotions, bio-signals or bio-images extracted from several body parts can be utilized; these are used to measure the intrinsic and extrinsic parameters of human emotion.
In previous research, physiological signal-based methods, such as electrocardiography (ECG), photoplethysmography (PPG), electroencephalography (EEG), and galvanic skin response (GSR), have been used for the quantitative estimation of emotion [1][2][3][4]. However, as these methods involve attaching sensors to the body, this inconvenience can inadvertently cause a negative emotion that creates noise in the readings. Similarly, eye movement recognition methods also require attached equipment, such as a head-mounted device (HMD) [5]. In our approach, to avoid mis-estimation due to attached sensors, an image-based method is used for emotion estimation.
Image-based emotion recognition methods have been studied previously and include gestures, eye movements, and facial expressions [6][7][8]. Many gesture recognition approaches exist, but they face difficulties in quantitative measurement and in summarizing several features. Eye-image-based methods, such as those using blink rate or pupil size, require equipment such as a high-resolution camera, an infrared illuminator, and a zoom lens. Therefore, facial expression recognition is the more convenient and simple camera-based method. From a different viewpoint, facial expressions have been studied since the 1960s, when Ekman and Friesen defined facial muscle movements and their relationships [9]. From this, the Facial Action Coding System (FACS) was created, which has been designated the "first face emotion map". However, FACS has not been verified; it has only been described. Therefore, camera-based analysis is appropriate to assist its verification.
In previous work, facial expression recognition has been studied using the Active Appearance Model (AAM) algorithm [10], whose features are extracted based on Principal Component Analysis (PCA) [11]. Since this method has a long processing time, it is difficult to run in real time. Additionally, when using eigen-points, the method has some constraints; for instance, face movements must be exaggerated and illumination conditions must be kept constant [12]. To estimate social emotions, previous research has measured the amount of movement of the upper body [13]. This is calculated by subtracting two successive images, and the method does not consider non-linear facial movements. Another previous study approached facial expression recognition by using facial movement features [14]. That study analyzed facial muscle movement in static images, which cannot capture the temporal changes of facial elements and muscle movements.
Eckhard Hess, a psychologist at the University of Chicago, defined the characteristics of pupil accommodation for each sex [15]; he showed that pupil size can change even under the same light intensity. He then proposed that pupil size changes when looking at attractive or hostile objects, and found the following: first, when men looked at images such as buildings and landscapes, pupil size did not change; however, the pupil changed rapidly when men looked at a woman's naked body. For women, images of a naked male body and of a smiling baby generated pupil change. In these studies, differences according to sex can be measured. Many researchers have studied the differences in emotional expression according to sex. Partala and Surakka found differences between men and women by estimating pupil accommodation [7]; when specific emotions were elicited, both sexes' pupil sizes were larger compared to the neutral state. Go et al. found that when emotions changed, women's expressions were more clearly classified than men's in the speech signal [16]. Another study analyzed three dimensions: covert responding, interpersonal expression, and attitude [17]. In that research, interpersonal expression (the difference in expressive power among individuals) was not considered, and the attitude analysis could be discussed as a subjective valuation.
Several studies used facial expressions to measure features for each sex; for example, one study determined differences according to sex regarding whether facial expressions were correctly chosen [16]. Women had a higher rate of correct classification, distinguishing one emotion from another. However, that research analyzed only two dimensions (valence and arousal), which is insufficient to characterize facial expressions in general. Other research used visual stimuli alone to elicit emotion, and the same result was derived [18]. Previous studies verified that using only one stimulus is insufficient to elicit a specific emotion [19]; therefore, using a single modality was inadequate for emotion recognition. To distinguish sex characteristics, previous research used both facial images and speech signals [20]. Facial feature vectors were obtained using linear discriminant analysis (LDA) [21], speech signals were processed using a wavelet sub-band, and the two were merged. This approach performed better than previous studies; however, LDA, the wavelet transform, and the merging method require a long processing time.
To address the weaknesses of previous studies, we designated 31 points that present significant facial movement based on camera images. To overcome the limitation that facial expressions were recognized only when extreme, an analysis of movement using the 31 points was performed at 49 Hz using a Kinect camera.
We designed the experimental analysis in three phases. First, we analyzed the amount of facial movement and compared this variation between men and women. Second, we used a t-test to verify statistically significant differences according to sex, and confirmed the points of significant difference between men and women for each emotion. Third, we analyzed the discriminative power of distinguishing one emotion from another for each sex. We used eight emotions in this research: Amusement, Excitement, Contentment, Awe, Fear, Angry, Disgust, and Sad. To elicit the specific emotions, both visual and auditory stimuli were given to participants.

Related Works for Asymmetry of Facial Expression
Many studies exist regarding facial asymmetry in emotional facial expressions. In existing studies, the subject's handedness and sex have been mentioned as factors that affect facial asymmetry. First, from the viewpoint of handedness, although it has been revealed that handedness acts on laterality in relation to linguistic functions [22], nothing has yet been found regarding the laterality relationship between emotional facial expressions and handedness, despite its having been studied by many researchers. On the other hand, many studies of right-hemisphere processes argue that since nonlinguistic functions are based in the right hemisphere, they may not be related to handedness. From the viewpoint of sex, amid controversy over the role of sex as a variable that acts on laterality data [23,24], there are grounds for the argument that the lateralization patterns of males and females differ from each other. In relation to linguistic processing, several studies have argued that females have weaker laterality than males [25,26]. On the contrary, studies of emotional processing have argued that females may have stronger laterality than males [27][28][29].
Existing studies regarding expressions according to emotions can be largely divided into those that analyzed volitional expressions and those that analyzed spontaneous expressions.

Volitional Facial Expression
First, there are studies that analyzed volitional facial expressions. These studies conducted experiments by asking subjects to make certain facial expressions. Borod and Caron observed that negative facial expressions are made more frequently with the left face, while positive facial expressions are relatively less lateralized [30]. In addition, Borod et al. reported that, whereas male facial expressions were leftward lateralized, in the case of females, negative emotional expressions were leftward lateralized while positive emotional expressions were rightward lateralized [31]. Campbell indicated that people tended to make facial expressions with their left faces more frequently than their right faces, and further reported that asymmetry appeared more prominently on left-handers' left faces than on right-handers' [32,33]. Heller and Levy argued that happiness appeared on the left faces of both right-handers and left-handers [34]. Sackeim and Gur argued that asymmetry was more severe when intense emotions such as disgust and anger were expressed, and that changes in the left face were larger when negative emotions were expressed than when positive emotions were expressed [35]. In addition, Cacioppo and Petty, and Knox, argued that no difference was found between the two sides of the face [36,37].
To review the positive emotions indicated in the abovementioned studies in detail, some studies argued that when right-handed adults were asked to express volitional smiles or happiness, the expressions might appear asymmetrically on either the left or the right face. In addition, some other studies reported that leftward asymmetry appeared in facial expressions of happiness in both right-handers and left-handers. Several other positive emotions showed similarly ambiguous results. No significant difference in the expressions of flirtation, clowning, sexual arousal, or astonishment/surprise was found between the two sides of the face; however, some cases of left asymmetry were found for males' sexual arousal. To review the negative emotions in detail, whereas many studies argued that facial expressions of negative emotions were made with the left face, other studies failed to find any difference. Many cases of leftward asymmetry were reported in the facial expressions of disgust, disapproval, and confusion. Although Sackeim and Gur reported that leftward asymmetry appeared in the facial expressions of anger, the results of actual recalculation indicated that the difference was not large [38]. The results of analyses of other negative facial expressions are less clear. Whereas Borod and Caron and Borod et al. reported leftward asymmetry in the facial expressions of sadness/grief, Cacioppo and Petty, Sackeim and Gur, and Strauss and Kaplan reported that no significant asymmetry appeared. No significant difference was found in a study that investigated facial expressions of fear. In summary, most studies found that the left and right faces were involved in facial expressions of positive emotions at the same frequency, and studies generally argued that the left face (right hemisphere) was more frequently involved in facial expressions of negative emotions than the right face.

Spontaneous Facial Expression
Other studies examined the spontaneous facial expressions that appeared when images were shown to subjects. Borod et al. argued that the left face moved frequently in facial expressions of negative emotions and that, for positive emotions, the left face was active in males whereas the expressions were not lateralized in females [31]. Ekman et al. indicated that leftward asymmetry appeared more frequently in volitional smiles than in spontaneous smiles and that the number of emotional expressions showing leftward asymmetry was equal to the number showing rightward asymmetry [39]. They also indicated that the results for negative emotions were similar, but that these could not be generalized because the amount of data was small. As with most other studies, Lynn and Lynn argued that there was no difference in asymmetry between the two sides of the face [40,41].
To review the above studies regarding positive emotions in detail: for the positive emotional expressions of normal subjects, whereas one study reported leftward asymmetry in male subjects, most other studies argued that rightward asymmetry appeared as often as leftward asymmetry. Regarding negative emotions, although quite a few cases of leftward asymmetry were found by Borod et al., a study conducted by Ekman et al. reported equal numbers of cases of asymmetry on the two sides of the face. However, these authors suggested that their results could not be considered reliable because the amount of data on negative facial expressions was too small. In summary, for positive emotions, most studies indicated that the left and right faces were involved at the same frequency, but the results cannot easily be generalized because the number of studies on spontaneous facial expressions is very small and attempts to analyze spontaneous facial expressions according to emotional valence are quite insufficient.
In the existing studies mentioned above, facial asymmetry was evaluated directly by observers, and a limitation has existed in that there are few data with which the results of these studies can be evaluated quantitatively and objectively.

Face Feature Extraction and Measuring Facial Feature Changes
The Kinect face tracking SDK uses the Kinect coordinate system to output its 3D tracking results [42]. The origin is located at the camera's optical center (sensor), the Z-axis points toward the user, and the Y-axis points up, as shown in Figure 1. The face tracking SDK tracks faces in 3D based on the AAM and depth data, and provides 121 feature points on a face. Using the face tracking SDK, we detect the face and the coordinates of the 121 facial feature points in two successive frames [(a) in Figure 2]. Among them, we selected 31 main significant points based on Ekman's action units, as shown in Figure 3b [(b) in Figure 2]. In the selection of the 31 main points, we excluded facial contour points that are irrelevant to the facial expression muscles. To analyze the facial muscle movement between successive frames, we measure the variation of the pixel values at each feature point. However, the natural subtle vibrations of humans could make the measurement of pixel values at the same position inaccurate. Thus, we performed an alignment step, named the shift-matching scheme, before the pixel value measurement [(c) in Figure 2] [43]. First, the two surrounding regions (10 × 10 pixels) of each feature point were cropped in two successive frames. Then, we applied a local binary pattern (LBP) operator to define facial texture information [44]. The LBP is an efficient texture operator that assigns a label to each pixel by thresholding its 3 × 3 neighborhood against the center pixel value and presenting the result as a binary number, as shown in Figure 4.
An ordered set of binary values can be expressed in decimal form as shown in Equation (1) [45]:

LBP(x_c, y_c) = \sum_{n=0}^{7} s(i_n - i_c) \, 2^n    (1)

where x_c and y_c denote the vertical and horizontal coordinates of the center position, respectively, and i_c and i_n denote the gray value of the center pixel and those of the eight neighboring pixels, respectively. The function s(x) is defined as:

s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}    (2)

To calculate the Hamming distance, LBP codes are extracted from the two surrounding regions. Because these surrounding regions can be misaligned horizontally and vertically, a translation of −2 to +2 pixels is considered for alignment. The translational factor for alignment is determined at the position of minimum Hamming distance [46]. The Hamming distance (D_h) is the number of positions at which two bit-vectors A and B differ:

D_h(A, B) = \sum_{k} A_k \oplus B_k    (3)

The minimum Hamming distance means that the two surrounding regions are almost identical. Figure 5 shows the conceptual diagram of the facial feature alignment method. Then, the facial feature change (C_i) is calculated as the average absolute pixel difference over the aligned, overlapped region [(d) in Figure 2]:

C_i = \frac{1}{m n} \sum_{x=1}^{m} \sum_{y=1}^{n} \left| I_t(x, y) - I_{t-1}(x, y) \right|    (4)

In Equation (4), m and n indicate the width and height of the overlapped region after alignment between the two surrounding regions, respectively (Figure 5), and I_{t-1} and I_t denote the aligned surrounding regions of two successive frames.
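The LBP extraction, Hamming-distance alignment, and feature-change measurement described above can be sketched in pure Python as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: patches are lists of lists of gray values, and the helper names (`lbp_codes`, `overlap`, `feature_change`) are invented for this sketch.

```python
def lbp_codes(patch):
    """Equation (1): an 8-bit LBP code for every interior pixel of a patch."""
    h, w = len(patch), len(patch[0])
    # eight neighbors of the center pixel, one per bit position
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = patch[y][x]
            code = 0
            for n, (dy, dx) in enumerate(offsets):
                # s(x) from Equation (2): 1 if neighbor >= center, else 0
                if patch[y + dy][x + dx] >= center:
                    code |= 1 << n
            codes.append(code)
    return codes

def hamming(codes_a, codes_b):
    """Equation (3): total number of differing bits between two code lists."""
    return sum(bin(a ^ b).count("1") for a, b in zip(codes_a, codes_b))

def overlap(prev, cur, dy, dx):
    """Crop both patches to the region where they overlap under a (dy, dx)
    relative translation, so both crops have identical size."""
    h, w = len(prev), len(prev[0])
    a = [[prev[y][x] for x in range(max(0, dx), min(w, w + dx))]
         for y in range(max(0, dy), min(h, h + dy))]
    b = [[cur[y][x] for x in range(max(0, -dx), min(w, w - dx))]
         for y in range(max(0, -dy), min(h, h - dy))]
    return a, b

def feature_change(prev, cur, search=2):
    """Shift-matching plus Equation (4): try all translations in the
    -2..+2 range, keep the one with minimum Hamming distance between the
    LBP codes, then return the mean absolute gray-value difference over
    the overlapped region."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            a, b = overlap(prev, cur, dy, dx)
            d = hamming(lbp_codes(a), lbp_codes(b))
            if best is None or d < best[0]:
                best = (d, a, b)
    _, a, b = best
    m, n = len(a[0]), len(a)  # width and height of the overlapped region
    total = sum(abs(pa - pb)
                for ra, rb in zip(a, b)
                for pa, pb in zip(ra, rb))
    return total / (m * n)
```

When the second patch is an exact translated copy of the first, the minimum-Hamming search recovers the translation and the measured feature change is zero, which is the property the alignment step is meant to guarantee against natural subtle vibrations.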

Experiment Design
We tested 30 participants (15 men and 15 women) aged between 24 and 35 years (men: mean = 27.8, standard deviation = 2.9; women: mean = 27.0, standard deviation = 2.5). We paid each participant $20 and advised them to prepare for the experiments by resting sufficiently. To elicit emotion, we used two kinds of stimuli. In previous research, the elicited emotion power was estimated using an EEG signal under three modalities: auditory, visual, and combined visual and auditory [19]. Using both visual and auditory stimuli is the most efficient way to elicit emotion; therefore, we used a combined visual and auditory modality. We used visual stimuli of artistic photography taken from a photo-sharing site (http://www.deviantart.com). The artistic photographs were obtained using the emotion categories designated by the artists who uploaded them, and their emotion classification was additionally verified at the low-level feature [47]. These photos show an effect similar to the International Affective Picture System (IAPS), which is the most commonly used source of visual stimuli [48]. Auditory stimuli were downloaded from a music-sharing site (http://musicovery.com), where the music is located on Russell's 2D emotion model, including the arousal-relaxation and positive-negative axes. We downloaded the music corresponding to the location of each emotion, and verified the validity of the auditory stimuli via self-assessment; the validity was verified at 98% reliability. We used a total of eight emotions; Figure 6 shows their mapping on Russell's 2D emotion model.
We displayed 50 visual stimuli as a slideshow for each emotion. Each participant used an earpiece to concentrate on the auditory stimulus. The Kinect camera was located at a center position under the monitor in order to capture the front of the face. To remove the order effect, the sequence of elicited emotions was randomized [49]. Participants had 2 min of rest between emotions. The experimental procedure is shown in Figure 7. Because facial movements differ between individuals, we first measured the neutral state; this allowed us to minimize individual variation by subtracting the neutral-state movement from the movement measured under each elicited emotion.
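The neutral-baseline correction described above can be sketched as follows. All numeric values here are made up for illustration; they are not measurements from the study.

```python
# Illustrative sketch of the neutral-baseline correction: the mean feature
# change measured during the neutral recording is subtracted from the mean
# feature change measured under each emotional stimulus, so that individual
# differences in resting facial movement are removed.

neutral = 0.42  # hypothetical mean feature change in the neutral state

# hypothetical per-emotion mean feature changes for one participant
raw = {"Amusement": 0.61, "Fear": 0.55, "Sad": 0.47}

corrected = {emotion: value - neutral for emotion, value in raw.items()}
# positive -> more movement than neutral; negative -> less movement
```

A positive corrected value indicates that the emotional stimulus produced more facial movement than the participant's own neutral baseline, matching how the results in Figure 11 are read.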

Facial Movement Analysis for Men and Women
First, we analyzed the amount of facial movement for each sex across eight emotions.

Men's Facial Movement Results
Figure 8 shows the analysis results of the men's facial feature movements; the squares indicate significant points that have higher movements compared to the average of the entire points of both sexes (detailed in Table 1). The pattern of men's facial feature movements was similar across all emotions. In particular, Amusement, Excitement, Contentment, Awe, and Sad showed entirely the same distribution, while Fear and Disgust showed particularly high movement at point 14. Notably, the facial feature movements of men were symmetrical for all emotional stimuli.

Women's Facial Movement Results
Figure 9 shows the analysis results of the women's facial feature movements; the squares indicate significant points that had higher movements compared to the average of the entire points of both sexes (detailed in Table 2).
While the facial movement distribution of men was similar for all emotions, women's distribution showed diversity across the emotions. Some points showed commonly high movement, such as points 2, 11, 12, 13, 22, 24, and 26. In particular, unlike the men's results, the facial feature movements of women were asymmetric for most emotion stimuli: the facial features around the left eye showed greater movements compared to the right side for all emotional stimuli, as shown in Figure 9.
Table 2. Amount of women's facial feature movements according to the eight emotional stimuli (yellow filled cells: greater features than the average of the entire features of both sexes).

Statistically Significant Differences between Men and Women
We analyzed the statistically significant differences between the sexes. We used a t-test to measure the statistical difference at the points where significantly high facial movement was found for both sexes in Section 3.1 (Figures 8 and 9). Figure 10 shows the results of the t-test between men and women. The squares and triangles indicate significant differences at confidence levels of p < 0.01 and p < 0.1, respectively (Figure 10). Point 4 shows a commonly significant difference at a confidence level of p < 0.01, except for Amusement. Fear, Disgust, and Sad had similar facial movement distributions for the difference. Using the t-test, we verified the objective difference between men and women.
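The per-point comparison between the two groups can be sketched with a standard two-sample t-statistic (pooled-variance form). This is a minimal pure-Python illustration; the sample values are invented, not the study's measurements, and the paper does not state which t-test variant was used.

```python
import math

def t_statistic(sample_a, sample_b):
    """Two-sample t-statistic with pooled variance: the value whose
    magnitude is compared against a critical value to decide whether the
    two group means differ significantly."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))

# hypothetical movement values at one feature point (not the study's data)
men = [1.0, 2.0, 3.0]
women = [2.0, 3.0, 4.0]
t = t_statistic(men, women)  # compare |t| against the critical value
```

With 15 participants per group, |t| would be compared against the critical value for 28 degrees of freedom at the chosen confidence level (p < 0.01 or p < 0.1, matching the squares and triangles in Figure 10).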

Ability to Distinguish One Emotion from Another for Men and Women
We analyzed the ability of emotion classification between men and women.Figure 11 shows the result when individual variation is removed; that is, a positive number means that facial movement shows a larger than neutral emotion and a negative number means that it shows a smaller than neutral emotion.Consequently, the eight emotions all have larger facial movements than neutral.The facial movement average of men is higher compared to women for all emotions.Therefore, men's facial expressions were more abundant compared to women's.However, The squares and triangles indicate significant differences at confidence levels of p < 0.01 and p < 0.1, respectively (Figure 10).Point 4 shows a commonly significant difference at a confidence level of p < 0.01 except for Amusement.Fear, Disgust, and Sad had similar facial movement distributions for the difference.Using the t-test, we verified the objective difference between men and women.

Ability to Distinguish One Emotion from Another for Men and Women
We analyzed the ability of emotion classification between men and women.Figure 11 shows the result when individual variation is removed; that is, a positive number means that facial movement shows a larger than neutral emotion and a negative number means that it shows a smaller than neutral emotion.Consequently, the eight emotions all have larger facial movements than neutral.The facial movement average of men is higher compared to women for all emotions.Therefore, men's facial expressions were more abundant compared to women's.However, it is insufficient to measure this only using the average.Therefore, we looked at the standard deviations, which were 0.07 for men and 0.13 for women.Women showed a higher amount of distinguishing of emotions compared to men, despite women's facial movement average being lower compared to men's.In a previous study, women had more abundant emotion expressions at the brain signal compared to men [50].We identified a similar result from the standard deviation between the results for men and women.


Discussion
In this paper, we performed an experiment to verify the movements of 31 facial features of men and women according to eight emotional stimuli. Then, the statistical significance of each facial feature was validated for all emotion stimuli. Additionally, the facial expressiveness of men and women was analyzed by comparing the standard deviation of facial feature movement for each emotion.
First, men's facial feature movements appeared across most of the facial positions. Additionally, the movement tendencies were not clearly separable among the different emotion stimuli. Only two positions, the forehead center and the right bottom cheek, showed greater movements for two emotions: fear and disgust.
Second, women's facial feature movements were more separable according to emotion than men's. Although several facial features showed equally significant movements for all emotions, the features around the eyes showed different movement tendencies for each emotion. Next, the differences in facial feature movements between men and women for each emotion were confirmed by calculating statistical significance. For all emotions, the inner brow of the right eye consistently showed significant differences between men and women. Additionally, movements around the lips were also regarded as main feature positions for separating men and women for each emotion.
Based on the first and second results, the symmetry of the two sexes' movements can be compared. As shown in Figures 8 and 9, the facial feature movements of men were symmetrical, whereas those of women were asymmetrical. In particular, the asymmetric phenomena of women appeared more strongly around the left eye region for positive emotions, such as amusement, excitement, contentment, and awe, as shown in Figure 9. This result concurs with the seminal research of Ekman [39].
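The symmetry comparison could be quantified with a simple index. Both the pairing of mirrored left/right points about the vertical face center and the index itself are our illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def asymmetry_index(left_movements, right_movements):
    """Normalized left-right difference per mirrored feature-point pair.

    0 means perfectly symmetric movement; values near 1 mean the movement
    is concentrated on one side of the vertical face center.
    """
    left = np.asarray(left_movements, dtype=float)
    right = np.asarray(right_movements, dtype=float)
    return np.abs(left - right) / (left + right)

# Symmetric movements -> index 0 everywhere
print(asymmetry_index([0.2, 0.3], [0.2, 0.3]))  # [0. 0.]
# Left-dominant movement (e.g., around the left eye) -> index closer to 1
print(asymmetry_index([0.4, 0.5], [0.1, 0.05]))
```

Under this kind of index, the stronger left-eye-region movement observed for women under positive emotions would register as a higher asymmetry value for the eye-region pairs.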
Third, the standard deviation of the average facial feature movements across emotions was comparatively analyzed for each sex. A greater standard deviation across the per-emotion averages means greater expressiveness; that is, different emotions are reflected more distinctly in the facial expressions. Although men's facial feature movements were greater than women's, the standard deviation for women was almost twice that for men. Consequently, women express emotion through facial expressions more discriminatively than men.
In previous research, facial expression recognition for different emotions was actively studied based on psychological theories, such as Ekman's facial action units [9]. However, no research compared facial expressiveness for each emotion between men and women. Consequently, our results indicate that men use more facial muscles, but women provide more discriminative facial expressions for each emotion.

Conclusions
We proposed a new method for verifying differences in facial movement according to sex. We designed the experimental analysis in three phases. First, we analyzed the amount of facial movement for each sex. The distribution of men's significant facial movement points was similar for all emotions, whereas the distribution of women's significant points varied across emotions. This means that women's ability to distinguish one emotion from another in terms of facial movement is higher than men's. Second, when specific emotions were elicited, facial movement was larger than for the neutral emotion for both men and women. We performed a t-test to verify statistically significant differences between men and women. The inner eyebrow of the right eye was a commonly significant region for all emotions. Excitement had the largest number of significant points, while Awe had the fewest. These results concur with previous research. The present results could help in the development of smart content based on emotion.
In future research, we plan to match facial movements with Ekman's action units. Because the FACS was described rather than quantitatively verified, we will work to verify it. Using the algorithm that we developed, we can analyze the direction and size of movement in particular regions. Further, we will develop content based on facial movements. Consequently, we expect the verification of intrinsic facial movements to contribute not only to many kinds of emotional and affective applications [19,51], but also to the forensics and security fields [52].

Figure 1 .
Figure 1. Flow chart of the method for measuring facial feature changes.

Figure 3 .
Figure 3. Facial feature points: (a) the initial 121 feature points provided by the Kinect face tracking SDK; and (b) the 31 feature points chosen as significant facial expression features, with their numbering.

Figure 5 .
Figure 5. Example of facial feature alignment in one of the facial feature points.


Figure 8 .
Figure 8. Facial movements for men (Red dotted line: vertical face center).


Figure 9 .
Figure 9. Facial movements for women (Red dotted line: vertical face center).


Figure 10 .
Figure 10. Result of t-test between men and women (squares and triangles indicate significant differences at confidence levels of p < 0.01 and p < 0.1, respectively).


Figure 11 .
Figure 11. Average amount of facial movement for each emotion and standard deviation.


Table 1 .
Amount of men's facial feature movements according to the eight emotional stimuli (yellow-filled cells: features greater than the average over all points for both sexes).