Symmetry | Open Access Article

5 April 2019

An Improved Micro-Expression Recognition Method Based on Necessary Morphological Patches

School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.

Abstract

Micro-expressions are spontaneous emotional expressions that are not under conscious control. A micro-expression is both transitory (short in duration) and subtle (small in intensity), so it is difficult for people to detect. Micro-expression detection is widely used in psychological analysis, criminal justice and human-computer interaction. Like ordinary facial expressions, micro-expressions involve local muscle movement, and psychologists have shown that they exhibit necessary morphological patches (NMPs) triggered by emotion. The objective of this paper is to sort and filter these NMPs and to extract features from them to train classifiers for micro-expression recognition. Firstly, we use the optical flow method to compare the onset frame and the apex frame of each micro-expression sequence and thereby locate the facial active patches. Secondly, to find the NMPs, we calculate local binary pattern from three orthogonal planes (LBP-TOP) operators and cascade them with optical flow histograms to form the fusion features of the active patches. Finally, a random forest feature selection (RFFS) algorithm is used to identify the NMPs, which are then characterized by a support vector machine (SVM) classifier. We evaluated the proposed method on two popular publicly available databases: CASME II and SMIC. The results show that the NMPs are statistically determined and contribute significant discriminant ability, in contrast to the holistic use of all facial regions.

1. Introduction

Facial expressions are a significant medium for people to express and detect emotional states [1]. Micro-expressions are rapid, involuntary facial muscle movements that reveal a person’s true feelings [2]. Ekman et al. suggested that micro-expressions can fully reveal a person’s hidden emotions, but because of their brief duration and subtle intensity [2], the development of automatic micro-expression detection and recognition remains challenging. To describe facial movements systematically, Ekman proposed the Facial Action Coding System (FACS) [3], which decomposes facial muscle activity into multiple action units (AUs); each micro-expression is composed of a combination of AUs [4]. Ekman also emphasized that micro-expressions can be categorized into six basic emotions: happiness, sadness, surprise, disgust, anger and fear [4]. Haggard first introduced the concept of the “micro-expression” [5], and Ekman et al. subsequently defined rapid, unconscious spontaneous facial movements as micro-expressions. Because micro-expressions are brief and spontaneous, these facial movements express a person’s true emotional response [6]. Micro-expression recognition not only has high reliability among emotion recognition tasks [7], but also has great application potential in many fields, such as emotion analysis, teaching evaluation and criminal detection. However, because of the short duration, subtle intensity and localized movements of a micro-expression, even well-trained observers achieve only about 40% recognition accuracy [8]. Owing to limitations such as the need for professional training and the high cost involved, manual micro-expression identification is difficult to deploy at scale [9,10]. As a result, automatic micro-expression recognition has attracted increasing research attention in recent years [11].
Facial expression (macro-expression) recognition is an interdisciplinary frontier that draws on professional knowledge from several fields. With the development of cognitive psychology, biopsychology and computer technology, macro-expression recognition has gradually penetrated the field of artificial intelligence and yielded innovative theoretical results. The earliest research on macro-expressions can be traced back about 150 years. Because of individual differences, the facial expressions elicited by an emotional response vary among people. In the 1960s, Ekman et al. [1] classified facial expressions into six emotional categories (happiness, surprise, disgust, anger, fear and sadness) according to their common features across individuals. In recent decades, numerous scholars have made fruitful contributions to macro-expression recognition [12]. In particular, deep learning has brought macro-expression recognition to a new stage and achieved remarkable results [13,14,15,16]. For example, Li et al. comprehensively surveyed macro-expression recognition technologies based on deep neural networks and evaluated the algorithms on several widely used databases [13]; their survey also compares the advantages and limitations of these methods on static image databases and dynamic video sequence databases. Deep learning relies on the powerful computing capability of modern graphics hardware to feed massive data directly into the algorithm, which then learns features from the data automatically. However, deep-network-based expression recognition faces a major challenge: the amount of training data is exceedingly small. Kulkarni et al. established the SASE-FE database to address this problem [14], and the iCV-MEFED database built by Guo et al. further enriches the data available for facial expression recognition [15].
They also validated the emotional attributes of the images in the SASE-FE database. With the influx of a large number of macro-expression databases, deep networks have made remarkable achievements in facial expression recognition [13]. Otberdout et al. exploited covariance matrices to encode deep convolutional neural network (DCNN) features for facial expression recognition [17]; their experiments show that covariance descriptors computed on DCNN features are more efficient than standard classification with fully connected layers and softmax, and the approach achieves state-of-the-art performance. Researchers are also working on the emotional states conveyed by facial images: some teams use macro-expression images to distinguish genuine from fake expressed emotion [13,18], and in [19] both visual modalities (face images) and audio modalities (speech) are used to capture facial configuration changes and emotional responses. Macro-expression recognition infers people’s emotional state by detecting changes in their faces. Although this technology can judge psychological emotions from surface appearance, it cannot reveal the emotions people are trying to hide; micro-expressions, by contrast, can represent the real emotional responses that people try to conceal.
Micro-expressions are involuntary facial muscle responses with a short duration, typically between 1/25 and 1/5 s [3]. Because of their fleeting nature, micro-expressions can expose a person’s real intentions. Moreover, psychologists have found that micro-expressions triggered by emotion or habit generally have local motion properties [8]; they are facial expressions with insufficient muscle contraction. The muscle movements of micro-expressions are usually concentrated in the eye, eyebrow, nose or mouth areas [9]. Psychologists have also developed the theory of necessary morphological patches (NMPs) [9], which refers to salient facial regions that play a crucial role in micro-expression recognition. Although these NMPs involve only a few action units (AUs), they are necessary indications for judging whether a person is in an emotional state. For example, when the upper eyelid is lifted and exposes more of the iris, a person is reflexively experiencing “surprise”; the corresponding NMPs are focused on the eye and eyebrow areas, while the NMPs of “disgust” are concentrated around the eyebrows and nasolabial fold.
As a typical pattern recognition task, micro-expression recognition can be roughly divided into two parts. One is feature extraction, which extracts useful information from video sequences to describe micro-expressions. The other is classification, which designs a classifier over the extracted features to identify the micro-expression sequences. Many previous studies have focused on feature extraction. For example, the local binary pattern from three orthogonal planes (LBP-TOP) was employed to detect micro-expressions and achieved good results [10,11]. Although the recognition rate of these algorithms was slightly higher than that of human observers, it was still far from a high-quality micro-expression recognition method, so researchers have developed many improved algorithms to enhance accuracy [20,21,22]. The spatiotemporal completed local quantized pattern (STCLQP) algorithm extends the completed local quantized pattern (CLQP) to a 3D spatiotemporal context [13]; like LBP-TOP, it extracts texture features in the XY, XT and YT planes and then cascades them as STCLQP features. Its advantage is that it considers more information for micro-expression recognition, but it inevitably introduces a higher feature dimensionality. Wang et al. proposed the local binary pattern with six intersection points (LBP-SIP) algorithm [22], which reduces the feature dimensionality. However, in most work [20,21,22], researchers extract features from the entire facial region, which greatly increases the number of features yet reduces recognition accuracy. In this paper, we first extract the NMPs to improve the effectiveness of the features.
In many macro-expression recognition tasks, researchers divide the whole face into active patches based on the FACS and select some salient patches as features [23,24,25,26]. For example, Happy et al. showed that extracting discriminative features from salient facial patches plays a vital role in effective facial expression recognition [24]. Liu et al. developed a simplified algorithm framework using fusion features extracted from the salient areas of the face [25]. Inspired by these studies, we attempted to extract discriminative patches from the FACS and use them for micro-expression recognition. The proposed method inherits the basic concept of NMP theory, searching the whole facial region for these important patches. Our work extends this research by reducing the feature dimensions and extracting more effective features.
This paper proposes a straightforward and effective approach to automatically recognize the micro-expressions. The contributions of this work are as follows:
  • We introduce an automatic NMP extraction technique that combines a FACS-based method with a feature selection method. The FACS-based method extracts regions with intense muscle movements, called the active patches of micro-expressions. To obtain active areas, prior work used the Pearson correlation coefficient between an expressive image and a neutral image [26]. Unlike macro-expressions, however, micro-expressions are subtle and brief, so a correlation coefficient is highly misleading for defining effective micro-expression regions. To remedy this defect, this paper uses an optical flow algorithm to calculate the active patches of the micro-expression sequences. This method, which uses temporal variation and correlated pixel intensities to determine the motion of each pixel, is strongly robust to subtle muscle movements.
  • The optical flow algorithm and the LBP-TOP method are applied to describe the local textural and motion features in each active facial patch.
  • A micro-expression is a unique category of facial expression that uses only a few facial muscles to perform a subconscious emotional state. To exploit this locality and develop a more robust method, a random forest feature selection algorithm is used to select the NMPs as the valid features.
  • Extensive experiments on two spontaneous micro-expression databases demonstrate the effectiveness of only considering NMPs to recognize a micro-expression.
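The active-patch step from the first contribution can be illustrated with a small sketch. The paper ranks patches by optical flow between the onset and apex frames; the toy function below (all names hypothetical) uses the mean absolute intensity change per 20 × 20 patch as a crude stand-in for the optical-flow magnitude, purely to show the rank-and-select idea.

```python
import numpy as np

def active_patches(onset, apex, patch=20, top_k=30):
    """Rank 20x20 patches by mean absolute intensity change between the
    onset and apex frames and return the top_k most active ones.
    (Stand-in for the optical-flow-based criterion in the paper.)"""
    diff = np.abs(apex.astype(np.float64) - onset.astype(np.float64))
    h, w = diff.shape
    grid = diff.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    order = np.argsort(grid.ravel())[::-1][:top_k]   # most active first
    return [(int(i) // grid.shape[1], int(i) % grid.shape[1]) for i in order]

onset = np.zeros((240, 280))
apex = onset.copy()
apex[40:60, 100:120] = 50.0                   # simulate a localized eyebrow movement
print(active_patches(onset, apex, top_k=1))   # [(2, 5)]
```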
The paper is organized as follows. Section 2 reviews related work on facial landmarks, feature representations and NMP selection. The proposed framework is shown in Figure 1. Section 3 introduces the databases. Experimental results and discussion are provided in Section 4. Finally, Section 5 concludes the paper.
Figure 1. An illustration of the proposed framework.

3. Databases Processing and Experimental Settings

Micro-expression data are difficult to acquire, and micro-expressions are difficult for non-professionals to identify, so the collection and selection of micro-expression datasets is very important. Two popular spontaneous micro-expression datasets are commonly used for experiments: the CASME II and SMIC databases [11,35]; the CAS(ME)2 dataset is also used here. This paper experiments on these databases and describes the experimental setup and details below.

3.1. CASME II

The CASME II database [35] was published in 2014 as an upgraded version of the CASME database [36]. Its temporal resolution increased from 60 fps to 200 fps, and its spatial resolution increased to 280 × 340 pixels. The onset frame, the frame with the greatest variation (the apex frame) and the offset frame of each micro-expression sample are coded; in addition, the facial action units are marked and the emotional attributes are labeled. The micro-expressions are divided into two groups because of their different environmental configurations and cameras: Group A was recorded by a BenQ M31 camera at 200 fps under natural light, while Group B was recorded by a Point Grey GRAS-03K2C camera at 200 fps in a room lit by two LED lights. The dataset covers five classes of emotion: happiness (32 samples), disgust (60 samples), surprise (25 samples), repression (27 samples) and tense (102 samples).

3.2. SMIC

The Spontaneous Micro-Expression Database (SMIC) was designed by Zhao’s team at the Machine Vision Research Center of the University of Oulu, Finland [11]. SMIC contains 164 micro-expression video clips from 16 participants (mean age 28; 6 women and 10 men; 8 Caucasians and 8 Asians). All 164 clips belong to the HS (high-speed camera) data group, and 71 clips from 8 participants are also available in the VIS and NIR data groups. The micro-expressions were elicited under a high-stakes protocol: participants watched highly emotional movie clips and were asked to suppress their facial expressions under threat of a penalty, so only fragments containing suppressed, high-intensity emotion were captured. Each micro-expression lasts at most 0.5 s, and the longest video sequence contains 50 frames. There are three main emotion categories: positive (happiness; 51 samples), negative (sadness, fear and disgust; 70 samples) and surprise (43 samples).

3.3. CAS(ME)2

The Chinese Academy of Sciences Macro- and Micro-expression (CAS(ME)2) dataset [37] was established by the Chinese Academy of Sciences. In this dataset, 22 participants (13 female and 9 male) responded to nine chosen elicitation videos under two light-emitting diode (LED) lights. The dataset contains 300 macro-expressions and 57 micro-expressions, labeled with four emotional categories: positive, negative, surprise and others. The expression samples were selected from more than 600 elicited facial movements and are coded with onset, apex and offset frames, with AUs marked and emotions labeled [37]. In our experiments, all 357 video clips are used.

3.4. Experimental Settings

The micro-expression sequences captured by a high-speed camera differ in their numbers of frames. If sequences with different frame counts are used directly for feature extraction and classification, the recognition rate degrades. Thus, we used the time interpolation model (TIM) to normalize the frame count of all micro-expression sequences [38]. Table 3 shows the relationship between the number of frames, the experimental time and the accuracy; based on it, the frames of all samples were normalized to 10.
Table 3. Relationship between time interpolation model (TIM) length with time and accuracy.
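As a rough sketch of this frame-count normalization, the snippet below resamples a clip to a fixed length with plain linear interpolation along the time axis. The actual TIM of [38] uses a graph-embedding interpolation, so this is a simplified stand-in and the function name is hypothetical.

```python
import numpy as np

def tim_normalize(frames, target_len=10):
    """Resample a clip of shape (T, H, W) to target_len frames by linear
    interpolation of each pixel's time series (a simple stand-in for the
    graph-embedding-based TIM used in the paper)."""
    frames = np.asarray(frames, dtype=np.float64)
    t_old = np.linspace(0.0, 1.0, num=frames.shape[0])
    t_new = np.linspace(0.0, 1.0, num=target_len)
    flat = frames.reshape(frames.shape[0], -1)          # (T, H*W)
    out = np.stack([np.interp(t_new, t_old, flat[:, j])
                    for j in range(flat.shape[1])], axis=1)
    return out.reshape((target_len,) + frames.shape[1:])

clip = np.random.rand(23, 24, 28)      # a 23-frame toy clip
print(tim_normalize(clip).shape)       # (10, 24, 28)
```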
We used the facial landmark method of [27] to locate facial landmarks in the micro-expression sequences. The model is based on a mixture of trees with a shared pool of parts: it models every facial landmark as a part and uses global mixtures to capture topological changes due to viewpoint. The experimental results show that tree-structured models are surprisingly effective at capturing global elastic deformation while remaining easy to optimize, unlike dense graph structures. We used this method to track all micro-expression sequences in the three databases (CASME II, SMIC and CAS(ME)2). The experimental results are shown in Table 4.
Table 4. The accuracy of landmark algorithm.
To test the accuracy of the optical flow algorithms, we compared the average error and the computational density of the Horn-Schunck and Lucas-Kanade optical flow methods. The average error is the angular difference ψ = arccos( (1 + V_c · V_g) / (√(1 + |V_c|²) · √(1 + |V_g|²)) ) between the calculated optical flow field V_c = (u_c, v_c) and the ground-truth optical flow field V_g = (u_g, v_g), where each 2D flow vector is treated as the 3D vector (u, v, 1). The computational density is the proportion of pixels involved in the calculation: a larger density means a more complete optical flow field. Because density trades off against average error, we chose the better optical flow algorithm based on these two indexes.
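This average angular error can be computed directly; the sketch below assumes the standard convention of lifting each 2D flow vector (u, v) to the 3D vector (u, v, 1) before measuring the angle between the estimated and ground-truth fields.

```python
import numpy as np

def angular_error_deg(flow_est, flow_gt):
    """Mean angular error (degrees) between two dense flow fields of
    shape (H, W, 2). Both flows are lifted to 3D vectors (u, v, 1)
    and the angle between them is averaged over all pixels."""
    u1, v1 = flow_est[..., 0], flow_est[..., 1]
    u2, v2 = flow_gt[..., 0], flow_gt[..., 1]
    num = 1.0 + u1 * u2 + v1 * v2
    den = np.sqrt(1.0 + u1**2 + v1**2) * np.sqrt(1.0 + u2**2 + v2**2)
    cos = np.clip(num / den, -1.0, 1.0)    # guard against rounding outside [-1, 1]
    return np.degrees(np.arccos(cos)).mean()

a = np.zeros((4, 4, 2))
b = np.zeros((4, 4, 2))
print(angular_error_deg(a, b))   # identical fields -> 0.0
```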
From the results in Table 5, it can be seen that the classical Horn-Schunck and Lucas-Kanade optical flow algorithms are not suited to large displacements but describe small relative motions well. This characteristic matches the muscle-movement characteristics of micro-expressions.
Table 5. Accuracy list of optical flow.
We used two cross-validation methods to evaluate the prediction performance of the model; cross-validation alleviates the detrimental effects of over-fitting and extracts as much effective information as possible from the limited data. The micro-expression data can be divided into three parts: a training set, a validation set and a test set. The training set is used to train the model, the validation set is used to tune the parameters, and the test set is unseen data used to evaluate the generalization ability of the algorithm. The leave-one-sample-out cross-validation (LOSOCV) method and 10-fold cross-validation were used to evaluate the RF algorithm.
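The two evaluation protocols can be run with scikit-learn's ready-made splitters. The snippet below uses synthetic, well-separated toy data (not the micro-expression features) purely to show the mechanics of LOSOCV versus 10-fold cross-validation.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = np.repeat([0, 1, 2], 20)            # three toy emotion classes
X[y == 1] += 2.0                        # shift class means so classes separate
X[y == 2] -= 2.0

clf = SVC(kernel="linear", C=1.0)
loso = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()       # leave one sample out
kf10 = cross_val_score(clf, X, y,
                       cv=KFold(n_splits=10, shuffle=True, random_state=0)).mean()
print(round(loso, 2), round(kf10, 2))
```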
We used an SVM classifier to measure the recognition rate of the NMPs. In SVM classification and recognition, three important parameters need to be selected and tuned. The first is the kernel function: as shown in Table 6, we tested all 106 valid regions with each kernel and selected the one with the highest recognition accuracy. The second is the penalty coefficient C, which trades off the minimization of training error against model complexity: a larger C tolerates fewer errors and risks over-fitting, while a smaller C makes under-fitting more likely. The third is the gamma parameter of the RBF kernel: the width of the RBF controls the range of influence of each support vector and thus affects the generalization ability. Two cross-validation methods are used to predict the classification performance of the model, and the corresponding experimental results are given.
Table 6. Recognition rates of different kernel functions on the CASME II and Spontaneous Micro-Expression (SMIC) databases (%).
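Selecting the kernel, C and gamma jointly is commonly done with a grid search over cross-validated scores; a sketch on synthetic data follows (the parameter ranges are illustrative, not the ones used in the paper).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy multi-class problem standing in for the NMP features
X, y = make_classification(n_samples=120, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)
param_grid = {
    "kernel": ["linear", "rbf", "poly"],
    "C": [0.1, 1, 10, 100],         # larger C tolerates fewer errors (over-fit risk)
    "gamma": ["scale", 0.01, 0.1],  # RBF width: reach of each support vector
}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_)
```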

4. Results and Discussion

In this section, we define the NMPs of micro-expressions and design the corresponding experiments to verify their correctness and effectiveness.

4.1. Defined Active Patches and Feature Extraction

First, the automated learning-free facial landmark detection technique proposed in [27] was used to locate the facial region of each micro-expression sequence, and the facial area was cropped according to a set of 68 landmarks. Finally, we normalized all micro-expression images to 240 × 280 pixels and divided each into a grid of 12 × 14 patches of 20 × 20 pixels each, as shown in Figure 10, which also illustrates the locations of the active patches and their associated emotional states.
Figure 10. The Illustration of active patches and emotional state.
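The cropping-and-tiling step above amounts to a single reshape; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def split_into_patches(face, patch=20):
    """Divide a normalized 240x280 face image into a 12x14 grid of
    20x20-pixel patches, as described above."""
    h, w = face.shape
    assert h % patch == 0 and w % patch == 0
    # grid[i, j] is the 20x20 patch at patch-row i, patch-column j
    grid = face.reshape(h // patch, patch, w // patch, patch).swapaxes(1, 2)
    return grid

face = np.zeros((240, 280))
print(split_into_patches(face).shape)   # (12, 14, 20, 20)
```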
The optical flow method was used to define the facial active patches of the micro-expressions, and the optical flow histograms were used as direction features to identify the micro-expression sequences. In this experiment, we analyzed the recognition rates of the HS and LK optical flow algorithms on the CASME II database and chose the more accurate HS method, combining it with LBP-TOP operators to form the final micro-expression feature.
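A minimal sketch of turning a dense flow field into a direction-histogram feature follows; the bin count and magnitude weighting are plausible choices for illustration, not necessarily the exact ones used in the paper.

```python
import numpy as np

def flow_direction_histogram(flow, bins=8):
    """Histogram of optical-flow directions over one patch, weighted by
    flow magnitude and normalized to sum to 1. `flow` has shape
    (H, W, 2) holding the (u, v) displacement per pixel."""
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    ang = np.arctan2(v, u)                  # direction in (-pi, pi]
    mag = np.hypot(u, v)                    # magnitude as weight
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

flow = np.zeros((20, 20, 2))
flow[..., 0] = 1.0                          # uniform rightward motion
h = flow_direction_histogram(flow)
print(h.argmax())                           # all weight falls in the 0-radian bin
```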
The recognition rate of optical flow features alone is low, as shown in Figure 11, and the proportion of erroneous decisions for each emotional category is high. There are two reasons for this: (1) the images in the database cannot strictly satisfy the assumption of constant illumination, so even in a suitable experimental environment, brightness changes in the facial region are not completely eliminated; and (2) micro-expressions are subtle movements, which easily leads to over-smoothing that blurs useful information.
Figure 11. Comparison of recognition accuracies between LK and HS (%).
To compensate for these deficiencies, LBP-TOP operators are calculated and cascaded with the optical flow features. An LBP-TOP operator has two kinds of parameters: the radii and the numbers of neighboring points. In this article, we denote the operator LBP-TOP_{R_X, R_Y, R_T, P_XY, P_XT, P_YT} simply by its radii R_X, R_Y, R_T, with P_XY = P_XT = P_YT = P, for convenience.
Comparing the entries of Table 7, the recognition rate is highest for R_X = R_Y = 3, R_T = 1 and P_XY = P_XT = P_YT = 8. This is because the micro-expression images have high resolution and a short inter-frame interval: we therefore need larger spatial radii R_X and R_Y and a smaller temporal radius R_T to capture both local textural properties and spatio-temporal motion information. The number of neighboring points also affects recognition accuracy: if P is too small, the feature dimension is insufficient and lacks information; if P is too large, it produces high-dimensional features that blur the distinction between classes and significantly increase the computational load.
Table 7. Recognition accuracies of different parameters (%).
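A simplified LBP-TOP sketch follows, assuming basic 256-code LBP with nearest-neighbor sampling on one representative slice per plane. The published operator uses circular interpolation and averages histograms over all slices in each plane, so this is illustrative only; all names are hypothetical.

```python
import numpy as np

def lbp_hist(img, radius=1, points=8):
    """Basic LBP: threshold `points` circularly-sampled neighbors (nearest-
    neighbor rounding) against the center pixel and histogram the codes."""
    h, w = img.shape
    angles = 2 * np.pi * np.arange(points) / points
    ys, xs = radius * np.sin(angles), radius * np.cos(angles)
    r = int(np.ceil(radius))
    center = img[r:h - r, r:w - r]
    codes = np.zeros(center.shape, dtype=np.int64)
    for p in range(points):
        dy, dx = int(round(ys[p])), int(round(xs[p]))
        nb = img[r + dy:h - r + dy, r + dx:w - r + dx]
        codes += (nb >= center).astype(np.int64) << p
    hist = np.bincount(codes.ravel(), minlength=2 ** points)
    return hist / hist.sum()

def lbp_top(volume, r_spatial=3, r_temporal=1, points=8):
    """Simplified LBP-TOP: LBP histograms on the middle XY, XT and YT
    slices of a (T, H, W) volume, concatenated into one feature vector."""
    T, H, W = volume.shape
    feats = [lbp_hist(volume[T // 2], r_spatial, points),     # XY plane
             lbp_hist(volume[:, H // 2], r_temporal, points), # XT plane
             lbp_hist(volume[:, :, W // 2], r_temporal, points)]  # YT plane
    return np.concatenate(feats)

vol = np.random.rand(10, 24, 24)
print(lbp_top(vol).shape)   # (768,)
```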

4.2. NMPs Defined and Result Comparison

In the experiments discussed in Section 4.1, we extracted 106 active facial patches to represent the muscle motion profile of micro-expressions, and extracted features from these patches by combining optical flow features and LBP-TOP operators. Using all active patches for micro-expression recognition would not only yield high-dimensional features but would also fail to isolate the necessary emotional state of micro-expressions. We therefore used the RFFS method to measure the importance of the active patches and to select the NMPs with the most discriminant ability. We conducted experiments on the whole facial area, the active patches and the NMPs. The results are shown in Table 8, Table 9 and Table 10.
Table 8. Recognition rate in different regions of micro-expressions in the CASME II database (%).
Table 9. Recognition rate in different regions of micro-expressions in the SMIC database (%).
Table 10. Recognition rate in different regions of micro-expressions in the CAS(ME)2 database (%).
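The RFFS idea of scoring whole patches can be sketched with scikit-learn's feature importances on synthetic data; the patch indices and signal strengths below are invented for illustration, and a patch's score is simply the sum of the importances of its features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_patches, feat_per_patch = 120, 106, 8
X = rng.normal(size=(n_samples, n_patches, feat_per_patch))
y = rng.integers(0, 3, size=n_samples)               # three toy emotion labels
informative = [5, 17, 42]                            # pretend NMPs (invented indices)
for p in informative:
    X[:, p, :] += y[:, None] * 2.0                   # inject class-dependent signal

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X.reshape(n_samples, -1), y)
# Patch importance = sum of the importances of its 8 features
patch_imp = rf.feature_importances_.reshape(n_patches, feat_per_patch).sum(axis=1)
nmps = np.argsort(patch_imp)[::-1][:10]              # keep the 10 most discriminant
print(sorted(set(informative) & set(nmps.tolist())))  # the injected patches surface
```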
As a feature selection algorithm, RF can evaluate the importance of each patch on the classification problem. This paper also used other feature selection methods to select the NMPs for micro-expression recognition and to obtain the corresponding accuracy. The experimental results are shown in Table 11.
Table 11. Accuracy rate and NMPs numbers of different feature selection algorithms.
Comparing the data in the tables, the NMPs selected by each algorithm in the eye, eyebrow and mouth regions are essentially equivalent, while those in the cheek and nose regions differ. This is because the muscle movements of micro-expressions are mainly concentrated in the eye, eyebrow and mouth regions; there are few AUs in the cheek and nose regions, and micro-expressions are restrained movements that are very subtle and easily overlooked. In regions where the correlation of motion is very small, the Pearson coefficient is insensitive and misleading. Mutual information is inconvenient as a feature selection method: it is not a metric, cannot be normalized, and its results cannot be compared across data sets. The Lasso model is also unstable: when the data change slightly, the model changes drastically. The proposed method is robust and useful, and the experimental results show that the NMPs it selects are largely in line with the most representative facial muscle motion patches identified by psychologists.
This method can also reduce the dimensionality of features. Compared with other traditional methods, our proposed method can select some features with strong descriptions and improved discrimination ability. Table 12 shows the experimental results of the comparison between our method and the traditional dimensional reduction methods used on the CASME II database.
Table 12. Comparison of recognition accuracies between different feature dimension methods (%).
Traditional feature reduction methods only map the hybrid features extracted from the active patches from a high-dimensional space into a new low-dimensional space. However, because the motion amplitudes of micro-expressions are very small, traditional methods that do not consider the target variable during dimensionality reduction are likely to remove features carrying key motion information, which harms classifier accuracy. In this paper, the RF algorithm was used to screen out the NMPs for micro-expression recognition; it is designed and applied according to the experimental purpose of the article, so the selected NMPs are both targeted and accurate. We evaluate the importance of each active patch to select the most representative necessary areas. In addition, this algorithm eliminates irrelevant and redundant features, reducing the feature dimension, improving model accuracy and reducing running time.
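The contrast drawn here, unsupervised mapping versus supervised selection, can be made concrete with scikit-learn; the data and dimensions below are synthetic and illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=150, n_features=200,
                           n_informative=15, random_state=0)

# Unsupervised mapping: PCA never sees y, so discriminant directions can be lost
X_pca = PCA(n_components=20, random_state=0).fit_transform(X)

# Supervised selection: keep the original features the forest finds important
sel = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0),
                      max_features=20, threshold=-np.inf).fit(X, y)
X_rf = sel.transform(X)

print(X_pca.shape, X_rf.shape)   # both reduced to 20 columns, by different routes
```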
We compared the accuracy between the proposed method and the other micro-expression recognition algorithms [39,40,41,42,43,44,45,46,47,48,49,50]. The final results are shown in Table 13 and Table 14. The tables show that our algorithm has better recognition performance on the two databases.
Table 13. Micro-expression recognition rates (%) in the CASME II Database.
Table 14. Micro-expression recognition rates (%) in the SMIC Database.
As shown in Table 13 and Table 14, the compared methods produce different accuracies on the CASME II database, and the proposed method and the CNN-Net method achieve the best recognition rates. Although the other methods find useful features for micro-expressions, they sometimes fail to consider the psychological mechanisms underlying the emotional state of a micro-expression, especially the NMPs. The CNN-Net algorithm [45] achieves higher accuracy in the experiments, but it has a serious drawback: the uninterpretability of deep neural networks. Micro-expression recognition research is still immature and its mechanisms are not well understood; most researchers aim to use machine learning to better understand how micro-expressions are generated and the deeper emotional states behind them. The uninterpretability of deep learning is inconsistent with this purpose, so we did not choose a deep network as the learning tool for micro-expression recognition.

5. Conclusions

The main contribution of this paper is the analysis and determination of the NMPs for micro-expression recognition. Previously, only psychologists had suggested that micro-expressions have specific NMPs with a crucial ability to describe them; this paper applies that psychological concept to the field of computer recognition, using related techniques to extract these important feature patches. We compared the optical flow between the onset frame and the apex frame, and defined the regions full of muscle movement as the potential facial active patches of the micro-expression sequences. The optical flow direction histograms and the LBP-TOP operators of these patches were cascaded into the joint features of the micro-expressions, and the random forest feature selection technique was used to select the NMPs with discriminant ability. Finally, we tested the effectiveness of the proposed method on well-known spontaneous micro-expression databases. The experiments showed that NMPs describe the muscle movement of micro-expressions better than using the whole facial region, while also eliminating redundant features, reducing the feature dimension and improving recognition accuracy.
In this paper, the NMPs of micro-expressions were automatically extracted. Some related psychological research shows that every emotion has its own specific necessary patches. Thus, in future studies, we will focus on analyzing the specific NMPs of each emotion and apply these patterns to automatic micro-expression recognition.

Author Contributions

Methodology, Y.Z.; validation, Y.Z.; writing—Original draft preparation, Y.Z.; writing—Review and editing, Y.Z. and J.X.; supervision, J.X.

Funding

This research received no external funding.

Acknowledgments

The authors thank the reviewers and editors for their help.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ekman, P.; Friesen, W.V. Detecting deception from emotional and unemotional cues. Psychiatry 1969, 32, 88–106. [Google Scholar] [CrossRef] [PubMed]
  2. Ekman, P. METT: Micro Expression Training Tool [CD-ROM]. 2003. Available online: https://www.paulekman.com/product/micro-facial-expressions-training-tool/ (accessed on 20 January 2019).
  3. Ekman, P.; Friesen, W. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Palo Alto, CA, USA, 1978. [Google Scholar]
  4. Ekman, P.; Friesen, W.V. Constants across cultures in the face and emotion. J. Personal. Soc. Psychol. 1971, 17, 124–132. [Google Scholar] [CrossRef]
  5. Haggard, E.A.; Isaacs, K.S. Micro-momentary facial expressions as indicators of ego mechanisms in psychotherapy. In Methods of Research in Psychotherapy; Gottschalk, L.A., Auerbach, A.H., Eds.; Springer: New York, NY, USA, 1966; pp. 154–165. [Google Scholar]
  6. Warren, G.; Schertler, E.; Bull, P. Detecting deception from emotional and unemotional cues. J. Nonverbal Behav. 2009, 33, 59–69. [Google Scholar] [CrossRef]
  7. Ekman, P. Lie catching and microexpressions. In The Philosophy of Deception; Oxford University Press: Oxford, UK, 2009; pp. 118–133. [Google Scholar]
  8. Ekman, P. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage; W. W. Norton and Company: New York, NY, USA, 2009. [Google Scholar]
  9. Ekman, P. Facial Expressions of Emotion: An Old Controversy and New Findings. Philos. Trans. R. Soc. 1992, 1273, 63–69. [Google Scholar]
  10. Zhao, G.Y.; Pietikainen, M. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 6, 915–928. [Google Scholar] [CrossRef]
  11. Pfister, T.; Li, X.B.; Zhao, G.Y.; Pietikainen, M. Recognising spontaneous facial micro-expressions. In Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1449–1456. [Google Scholar]
  12. Căleanu, C.-D. Face expression recognition: A brief overview of the last decade. In Proceedings of the IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 23–25 May 2013. [Google Scholar]
  13. Li, S.; Deng, W. Deep Facial Expression Recognition: A Survey. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  14. Kulkarni, K.; Corneanu, C. Automatic recognition of facial displays of unfelt emotions. IEEE Trans. Affect. Comput. 2018. [Google Scholar] [CrossRef]
  15. Guo, J.; Lei, Z.; Wan, J. Dominant and complementary emotion recognition from still images of faces. IEEE Access. 2018, 6, 26391–26403. [Google Scholar] [CrossRef]
  16. Loob, C.; Rasti, P.; Lüsi, I. Dominant and complementary multi-emotional facial expression recognition using c-support vector classification. In Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition, Washington, DC, USA, 30 May–3 June 2017. [Google Scholar]
  17. Otberdout, N.; Kacem, A.; Daoudi, M.; Ballihi, L. Deep Covariance Descriptors for Facial Expression Recognition. arXiv 2018, arXiv:1805.03869. [Google Scholar]
  18. Wan, J.; Escalera, S.; Anbarjafari, G. Results and Analysis of ChaLearn LAP Multi-modal Isolated and Continuous Gesture Recognition, and Real Versus Fake Expressed Emotions Challenges. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017. [Google Scholar]
  19. Chen, J.; Chen, Z.; Chi, Z. Facial Expression Recognition in Video with Multiple Feature Fusion. IEEE Trans. Affect. Comput. 2018, 9, 38–50. [Google Scholar] [CrossRef]
  20. Shreve, M.; Godavarthy, S.; Goldgof, D.; Sarkar, S. Macro- and micro-expression spotting in long videos using spatiotemporal strain. In Proceedings of the 2011 IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, Santa Barbara, CA, USA, 21–25 March 2011; pp. 51–56. [Google Scholar]
  21. Huang, X.H.; Zhao, G.Y.; Hong, X.P.; Pietikainen, M.; Zheng, W.M. Texture description with completed local quantized patterns. In Image Analysis; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–10. [Google Scholar]
  22. Wang, Y.D.; See, J.; Phan, R.C.-W. LBP with six intersection points: Reducing redundant information in LBP-TOP for micro-expression recognition. In Proceedings of the 12th Conference on Computer Vision, Singapore, 1–5 November 2014; pp. 21–23. [Google Scholar]
  23. Liu, Y.; Li, Y.; Ma, X.; Song, R. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas. Sensors 2017, 17, 712. [Google Scholar] [CrossRef]
  24. Happy, S.L.; Routray, A. Automatic Facial Expression Recognition Using Features of Salient Facial Patches. IEEE Trans. Affect. Comput. 2015, 1, 1–12. [Google Scholar] [CrossRef]
  25. Liu, Y.; Cao, Y.; Li, Y.; Liu, M.; Song, R. Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches. In Proceedings of the IEEE International Conference on Real-time Computer and Robotics, Angkor Wat, Cambodia, 6–10 June 2016; pp. 368–373. [Google Scholar]
  26. Rejila, R.C.; Menon, M. Automatic Facial Expression Recognition based on the Salient Facial Patches. Int. J. Sci. Technol. Eng. 2016, 2, 772–780. [Google Scholar]
  27. Zhu, X.; Ramanan, D. Face detection, pose estimation, and landmark localization in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 2879–2886. [Google Scholar]
  28. Beauchemin, S.S.; Barron, J.L. The computation of optical flow. ACM Comput. Surv. (CSUR) 1995, 3, 433–466. [Google Scholar] [CrossRef]
  29. Cootes, T.F.; Taylor, C.J.; Cooper, D.H.; Graham, J. Active shape models-their training and application. Comput. Vis. Image Underst. 1995, 1, 38–59. [Google Scholar] [CrossRef]
  30. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; Volume 3, pp. 674–679. [Google Scholar]
  31. Horn, B.K.P.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–204. [Google Scholar] [CrossRef]
  32. Ren, X.; Malik, J. Learning a classification model for segmentation. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 10–17. [Google Scholar]
  33. Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 8, 888–905. [Google Scholar]
  34. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 5, 603–619. [Google Scholar] [CrossRef]
  35. Yan, W.J.; Li, X.B.; Wang, S.J.; Zhao, G.Y.; Liu, Y.J.; Chen, Y.H.; Fu, X.L. CASME II: An improved spontaneous microexpression database and the baseline evaluation. PLoS ONE 2014, 1, 1–23. [Google Scholar] [CrossRef] [PubMed]
  36. Yan, W.J.; Wu, Q.; Liu, Y.J.; Wang, S.J.; Fu, X.L. CASME database: A dataset of spontaneous micro-expressions collected from neutralized faces. In Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, Shanghai, China, 22–26 April 2013; pp. 1–7. [Google Scholar]
  37. Qu, F.; Wang, S.; Yan, W. CAS(ME)2: A database for spontaneous macro-expression and micro-expression spotting and recognition. IEEE Trans. Affect. Comput. 2018, 9, 423–436. [Google Scholar] [CrossRef]
  38. Zhou, Z.H.; Zhao, G.Y.; Pietikainen, M. Towards a practical lipreading system. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 137–144. [Google Scholar]
  39. Wang, S.; Yan, W.; Li, X.; Zhao, G.; Fu, X. Micro-expression recognition using dynamic textures on tensor independent color space. In Proceedings of the 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 4678–4683. [Google Scholar]
  40. Mayya, V.; Pai, R.; Pai, M. Combining temporal interpolation and DCNN for faster recognition of micro-expressions in video sequences. In Proceedings of the International Conference on Advances in Computing, Ghaziabad, India, 11–12 November 2016; pp. 699–703. [Google Scholar]
  41. Patel, D.; Hong, X.; Zhao, G. Selective deep features for micro-expression. In Proceedings of the International Conference on Advances in Computing, Ghaziabad, India, 11–12 November 2016; pp. 2259–2264. [Google Scholar]
  42. Li, X.; Hong, X.; Moilanen, A.; Huang, X.; Pfister, T.; Zhao, G.; Pietikainen, M. Towards reading hidden emotions: A comparative study of spontaneous micro-expression spotting and recognition methods. IEEE Trans. Affect. Comput. 2017, 99, 563–577. [Google Scholar] [CrossRef]
  43. Xu, F.; Zhang, J.; Wang, J. Micro-expression identification and categorization using a facial dynamics map. IEEE Trans. Affect. Comput. 2016, 9, 254–267. [Google Scholar]
  44. Peng, M.; Wang, C.; Chen, T. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition. Front. Psychol. 2017, 8, 1745. [Google Scholar] [CrossRef] [PubMed]
  45. Peng, M.; Wu, Z.; Zhang, Z. From Macro to Micro Expression Recognition: Deep Learning on Small Datasets Using Transfer Learning. In Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition, Xi’an, China, 15–19 May 2018. [Google Scholar]
  46. Liong, S.-T.; See, J. Hybrid Facial Regions Extraction for Micro-Expression Recognition System. J. Signal Process. Syst. 2018, 90, 601–617. [Google Scholar] [CrossRef]
  47. Zong, Y.; Huang, X.; Zheng, W. Learning from Hierarchical Spatiotemporal Descriptors for Micro-Expression Recognition. IEEE Trans. Multimedia 2018, 20, 3160–3172. [Google Scholar] [CrossRef]
  48. Lu, H.; Kpalma, K.; Ronsin, J. Motion descriptors for micro-expression recognition. Signal Process. Image Commun. 2018, 67, 108–117. [Google Scholar] [CrossRef]
  49. Liu, Y.; Li, B.; Lai, Y. Sparse MDMO: Learning a Discriminative Feature for Spontaneous Micro-Expression Recognition. IEEE Trans. Affect. Comput. 2018, 1–18. [Google Scholar] [CrossRef]
  50. Liong, S.; Wong, K. Micro-expression recognition using apex frame with phase information. In Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Kuala Lumpur, Malaysia, 12–15 December 2017. [Google Scholar]