# Facial Expression Emotion Detection for Real-Time Embedded Systems


## Abstract


## 1. Introduction

- low spirits, anxiety, grief, dejection and despair;
- joy, high spirits, love, tender feelings and devotion;
- reflection, meditation, ill-temper and sulkiness;
- hatred and anger;
- disdain, contempt, disgust, guilt and pride;
- surprise, astonishment, fear and horror;
- self-attention, shame, shyness and modesty.

## 2. Related Work

## 3. Affective Dimension Recognition

#### 3.1. System Overview

#### 3.2. Image Feature Extraction
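The features used are Local Binary Pattern (LBP) histograms (see Figure 3 and the cited work of Ojala et al. and Ahonen et al.). Below is a minimal, illustrative NumPy sketch of the basic 8-neighbour LBP operator and its histogram; the function names are ours, and the paper's MATLAB and FPGA implementations differ in detail:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code, one
    bit per neighbour, set when the neighbour >= the centre pixel."""
    g = gray.astype(np.int32)
    centre = g[1:-1, 1:-1]
    # neighbours ordered clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    h, w = g.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalised histogram of LBP codes, used as the image feature vector."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

The normalised 256-bin histogram is what feeds the regression stage; rotation-invariant and uniform-pattern variants of LBP reduce this dimensionality further.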

#### 3.3. k-NN Algorithm for Regression Modeling
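As a minimal sketch of k-NN regression for predicting a continuous label (such as activation or valence): the prediction is the mean label of the k training samples nearest to the query. The function name and the use of Euclidean distance are illustrative assumptions; the paper's own implementation is in MATLAB:

```python
import numpy as np

def knn_regress(train_X, train_y, query, k=3):
    """Predict a continuous label as the mean label of the k training
    samples closest to the query (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_y[nearest].mean()
```

This simplicity is what makes k-NN attractive for hardware: there is no training phase, only distance computation and a small sort at query time.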

#### 3.4. Cross-Validation: Evaluating Estimator Performance
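A dependency-free sketch of the fold-splitting step underlying k-fold cross-validation, in which the data is cut into k near-equal folds and each fold is held out once as the test set (the function name is ours; in practice a routine such as scikit-learn's `KFold` would normally be used):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation:
    the n samples are cut into k near-equal folds, each held out once."""
    idx = list(range(n))
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size
```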

#### 3.5. Pearson Product-Moment Correlation Coefficient
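Pearson's r is the covariance of the two series divided by the product of their standard deviations, giving a value in [−1, 1]. A direct, dependency-free sketch (the function name is ours):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation: cov(x, y) / (std(x) * std(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```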

## 4. Overview of the Classifier Fusion System

## 5. Implementation and Results

#### 5.1. Dataset

#### 5.2. Implementing k-Fold Cross-Validation

#### 5.2.1. Correlation Coefficient Evaluation of 2- and 5-Fold Cross-Validation

#### 5.2.2. Evaluation of the Confusion Matrix of the k-Fold Cross-Validation

#### 5.3. System Overview and Implementation in MATLAB Simulink

- building the model;
- simulating the model;
- analyzing simulation results;
- managing projects;
- connecting to hardware.

#### 5.4. FPGA Implementation

- **Performance**: FPGA devices offer true hardware parallelism; they surpass the processing power of digital signal processors (DSPs) by breaking away from sequential execution and accomplishing more per clock cycle.
- **Time to Market**: FPGAs offer flexibility and rapid prototyping: an idea can be tested and validated in hardware without going through the long fabrication procedure of an ASIC design.
- **Cost**: The expense of making changes to an FPGA design is small compared with an ASIC.
- **Reliability**: Although designing for FPGAs remains a difficult task for real-time implementations, processor-based systems need several layers of software tools to schedule tasks and share them between many processes, whereas FPGAs reduce this computing complexity because they execute in parallel hardware.
- **Long-Term Maintenance**: FPGA devices are upgradable and do not incur the cost and time of an ASIC re-design. As a product or system matures, useful improvements can be made in a short time without re-designing the hardware.

The camera module is configured over an I^{2}C (Inter-Integrated Circuit) line. It carries a selection of clock, image colour and data settings.

#### 5.5. Performance Comparison

## 6. Conclusions and Discussion

- Receiving and saving the data in MATLAB Simulink via a universal asynchronous receiver/transmitter (UART) available on the board.
- Saving data on an SD card and implementing and testing on the Xilinx Spartan-6 FPGA Industrial Video Processing Kit.
- Connecting the HDMI output to a computer and using the MATLAB Acquisition application to record the video.

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Ruta, D.; Gabrys, B. An Overview of Classifier Fusion Methods. Comput. Inf. Syst.
**2000**, 7, 1–10. [Google Scholar] - Calder, A.J.; Young, A.W.; Perrett, D.I.; Etcoff, N.L.; Rowland, D. Categorical perception of morphed facial expressions. Vis. Cogn.
**1996**, 3, 81–117. [Google Scholar] [CrossRef] - De Gelder, B.; Teunisse, J.-P.; Benson, P.J. Categorical perception of facial expressions: Categories and their internal structure. Cogn. Emot.
**1997**, 11, 1–23. [Google Scholar] [CrossRef] - Miwa, H.; Itoh, K.; Matsumoto, M.; Zecca, M.; Takanobu, H.; Rocella, S.; Carrozza, M.C.; Dario, P.; Takanishi, A. Effective emotional expressions with expression humanoid robot WE-4RII: Integration of humanoid robot hand RCH-1. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2203–2208. [Google Scholar]
- Turabzadeh, S.; Meng, H.; Swash, R.M.; Pleva, M.; Juhar, J. Real-time emotional state detection from facial expression on embedded devices. In Proceedings of the 2017 Seventh International Conference on Innovative Computing Technology (INTECH), Luton, UK, 16–18 August 2017; pp. 46–51. [Google Scholar]
- Darwin, C. The Expression of the Emotions in Man and Animals; Oxford University Press: Oxford, UK, 1998. [Google Scholar]
- Suwa, M.; Noboru, S.; Keisuke, F. A preliminary note on pattern recognition of human emotional expression. In Proceedings of the International Joint Conference on Pattern Recognition, Kyoto, Japan, 7–10 November 1978; pp. 408–410. [Google Scholar]
- Ekman, P.; Keltner, D. Universal facial expressions of emotion. Calif. Ment. Health Res. Dig.
**1970**, 8, 151–158. [Google Scholar] - Picard, R.W.; Picard, R. Affective Computing; MIT Press: Cambridge, MA, USA, 1997; Volume 252. [Google Scholar]
- Cheng, J.; Deng, Y.; Meng, H.; Wang, Z. A facial expression based continuous emotional state monitoring system with GPU acceleration. In Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China, 22–26 April 2013; pp. 1–6. [Google Scholar]
- Scherer, K.R. What are emotions? And how can they be measured? Soc. Sci. Inf.
**2005**, 44, 695–729. [Google Scholar] [CrossRef] - Darwin, C. The Expression of Emotions in Man and Animals; Murray: London, UK, 1872; pp. 30 & 180. [Google Scholar]
- Ekman, P.; Friesen, W.V.; Hager, J.C. Facial Action Coding System The Manual. In Facial Action Coding System; Consulting Psychologists Press: Palo Alto, CA, USA, 2002; Available online: http://face-and-emotion.com/dataface/facs/manual/TitlePage.html (accessed on 23 November 2017).
- Fontaine, J.R.; Scherer, K.R.; Roesch, E.B.; Ellsworth, P.C. The World of Emotions is Not Two-Dimensional. Psychol. Sci.
**2007**, 18, 1050–1057. [Google Scholar] [CrossRef] [PubMed] - Davidson, R.J.; Ekman, P.; Saron, C.D.; Senulis, J.A.; Friesen, W.V. Approach-withdrawal and cerebral asymmetry: Emotional expression and brain physiology: I. J. Pers. Soc. Psychol.
**1990**, 58, 330–341. [Google Scholar] [CrossRef] [PubMed] - Stemmler, G. Methodological considerations in the psychophysiological study of emotion. In Handbook of Affective Sciences; Davidson, R.J., Scherer, K.R., Goldsmith, H., Eds.; Oxford University Press: Oxford, UK, 2003; pp. 225–255. [Google Scholar]
- Harrigan, J.; Rosenthal, R.; Scherer, K.R. The New Handbook of Methods in Nonverbal Behavior Research; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
- Picard, R.W.; Klein, J. Computers that Recognise and Respond to User: Theoretical and Practical Implications. Interact. Comput.
**2002**, 14, 141–169. [Google Scholar] [CrossRef] - Rajeshwari, J.; Karibasappa, K.; Gopalakrishna, M.T. S-Log: Skin based Log-Gabor Approach for Face Detection in Video. JMPT
**2016**, 7, 1–11. [Google Scholar] - El Kaliouby, R.; Robinson, P. Real-time inference of complex mental states from facial expressions and head gestures. In Real- Time Vision for Human-Computer Interaction; Kisačanin, B., Pavlović, V., Huang, T.S., Eds.; Springer: Boston, MA, USA, 2005; pp. 181–200. [Google Scholar]
- Hernandez, J.; Hoque, M.E.; Drevo, W.; Picard, R.W. Mood Meter: Counting Smiles in the Wild. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 301–310. [Google Scholar]
- Meng, H.; Bianchi-Berthouze, N. Naturalistic affective expression classification by a multi-stage approach based on Hidden Markov Models. In Affective Computing and Intelligent Interaction. Lecture Notes in Computer Science; D’Mello, S., Graesser, A., Schuller, B., Martin, J.C., Eds.; Springer: Heidelberg, Germany, 2011; Volume 6975, pp. 378–387. [Google Scholar]
- Meng, H.; Romera-Paredes, B.; Bianchi-Berthouze, N. Emotion recognition by two view SVM_2K classifier on dynamic facial expression features. In Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), Santa Barbara, CA, USA, 21–25 March 2011; pp. 854–859. [Google Scholar]
- Cognitive Services. Available online: https://azure.microsoft.com/en-us/services/cognitive-services/ (accessed on 1 June 2017).
- Zhang, Y.D.; Yang, Z.J.; Lu, H.M.; Zhou, X.X.; Philips, P.; Liu, Q.M.; Wang, S.H. Facial Emotion Recognition based on Biorthogonal Wavelet Entropy, Fuzzy Support Vector Machine, and Stratified Cross Validation. IEEE Access
**2016**, 4, 8375–8385. [Google Scholar] [CrossRef] - Wang, S.H.; Phillips, P.; Dong, Z.C.; Zhang, Y.D. Intelligent Facial Emotion Recognition based on Stationary Wavelet Entropy and Jaya algorithm. Neurocomputing
**2018**, 272, 668–676. [Google Scholar] [CrossRef] - Cowie, R.; Douglas-Cowie, E.; Savvidou, S.; McMahon, E.; Sawey, M.; Schröder, M. ’FEELTRACE’: An instrument for recording perceived emotion in real time. In Proceedings of the ITRW on SpeechEmotion-2000, Newcastle, UK, 5–7 September 2000; pp. 19–24. [Google Scholar]
- Ahonen, T.; Hadid, A.; Pietikainen, M. Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell.
**2006**, 28, 2037–2041. [Google Scholar] [CrossRef] [PubMed] - Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with Local Binary Patterns. IEEE Trans. Pattern Anal. Mach. Intell.
**2002**, 24, 971–987. [Google Scholar] [CrossRef] - Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat.
**1992**, 46, 175–185. [Google Scholar] - Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, San Mateo, CA, USA, 20–25 August 1995; pp. 1137–1143. [Google Scholar]
- Picard, R.; Cook, D. Cross-Validation of Regression Models. J. Am. Stat. Assoc.
**1984**, 79, 575–583. [Google Scholar] [CrossRef] - Ojala, T.; Pietikainen, M.; Harwood, D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In Proceedings of the 12th IAPR International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; pp. 582–585. [Google Scholar]
- Jaskowiak, P.A.; Campello, R.J.G.B. Comparing correlation coefficients as dissimilarity measures for cancer classification in gene expression data. In Proceedings of the Brazilian Symposium on Bioinformatics, Brasília, Brazil, 10–12 August 2011; pp. 1–8. [Google Scholar]
- Pearson, K. Note on regression and inheritance in the case of two parents. Proc. R. Soc. Lond.
**1895**, 58, 240–242. [Google Scholar] [CrossRef] - Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1988. [Google Scholar]
- Turabzadeh, S. Automatic Emotional State Detection and Analysis on Embedded Devices. Ph.D. Thesis, Brunel University London, London, UK, August 2015. [Google Scholar]
- Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol.
**2011**, 2, 27. [Google Scholar] [CrossRef] - The Linley Group. A Guide to FPGAs for Communications, 1st ed.; The Linley Group: Mountain View, CA, USA, 2009. [Google Scholar]
- Digilent Inc. AtlysTM Board Reference Manual. Available online: http://digilentinc.com/Data/Products/ATLYS/Atlys_rm.pdf (accessed on 11 July 2017).
- Digilent Inc. VmodCAMTM Reference Manual. Available online: http://digilentinc.com/Data/Products/VMOD-CAM/VmodCAM_rm.pdf (accessed on 30 June 2017).
- Chen, J.; Chen, Z.; Chi, Z.; Fu, H. Facial expression recognition in video with multiple feature fusion. IEEE Trans. Affect Comput.
**2016**. [Google Scholar] [CrossRef] - Mackova, L.; Cizmar, A.; Juhar, J. A study of acoustic features for emotional speaker recognition in I-vector representation. Acta Electrotech. Inform.
**2015**, 15, 15–20. [Google Scholar] [CrossRef] - Pleva, M.; Bours, P.; Ondas, S.; Juhar, J. Improving static audio keystroke analysis by score fusion of acoustic and timing data. Multimed. Tools Appl.
**2017**, 76, 25749–25766. [Google Scholar] [CrossRef]

**Figure 1.** Video annotation process for valence using the FeelTrace software. The videos were shown on the left, and the emotion labels were made by moving the mouse cursor on the right.

**Figure 3.** An example of an LBP histogram of an image. From one image, the LBP algorithm detects the angle or corner patterns of the pixels; a histogram of these patterns is then generated.

**Table 1.**Descriptions of facial muscles involved in the emotions Darwin considered universal [12].

Emotion | Darwin’s Facial Description |
---|---|
Fear | Eyes open; mouth open; lips retracted; eyebrows raised |
Anger | Eyes wide open; mouth compressed; nostrils raised |
Disgust | Mouth open; lower lip down; upper lip raised |
Contempt | Eyes turned away; upper lip raised; lip protrusion; nose wrinkle |
Happiness | Eyes sparkle; mouth drawn back at corners; skin under eyes wrinkled |
Surprise | Eyes open; mouth open; eyebrows raised; lips protruded |
Sadness | Corner of mouth depressed; inner corner of eyebrows raised |
Joy | Upper lip raised; nose labial fold formed; orbicularis; zygomatic |

**Cross-Validation Splits**

5-Fold | 2-Fold |
---|---|
Subsection 1: videos 1 and 2 | Odd subsection: videos 1, 3, 5, 7 and 9 |
Subsection 2: videos 3 and 4 | Even subsection: videos 2, 4, 6, 8 and 10 |
Subsection 3: videos 5 and 6 | |
Subsection 4: videos 7 and 8 | |
Subsection 5: videos 9 and 10 | |
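The splits above can be expressed compactly in code; an illustrative Python sketch, assuming the ten annotated videos are numbered 1 to 10:

```python
videos = list(range(1, 11))  # ten annotated videos

# 5-fold: each subsection holds out one consecutive pair of videos
five_fold = [videos[i:i + 2] for i in range(0, len(videos), 2)]

# 2-fold: odd-numbered videos vs. even-numbered videos
two_fold = [videos[0::2], videos[1::2]]
```

Because each video comes from one subject, folding by whole videos rather than by frames is what makes the evaluation person-independent.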

**Average Correlation Coefficient**

k | Activation (5-Fold) | Valence (5-Fold) | Activation (2-Fold) | Valence (2-Fold) |
---|---|---|---|---|
1 | 0.0226 | −0.0137 | 0.0561 | −0.0341 |
3 | 0.0260 | −0.0206 | 0.0713 | −0.0362 |
5 | 0.0294 | −0.0208 | 0.0785 | −0.0381 |

**Accuracy (%)**

k | 5-Fold | 2-Fold |
---|---|---|
1 | 27.66 | 45.82 |
3 | 28.13 | 49.47 |
5 | 27.77 | 51.28 |

**Accuracy (%)**

k | 5-Fold (Person-Independent) | 2-Fold |
---|---|---|
1 | 27.66 | 45.82 |
3 | 28.13 | 49.47 |
5 | 27.77 | 51.28 |

**Accuracy (%)**

k | 5-Fold (Person-Independent) | 2-Fold |
---|---|---|
1 | 25.52 | 43.84 |
3 | 26.81 | 45.92 |
5 | 26.07 | 47.44 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Turabzadeh, S.; Meng, H.; Swash, R.M.; Pleva, M.; Juhar, J.
Facial Expression Emotion Detection for Real-Time Embedded Systems. *Technologies* **2018**, *6*, 17.
https://doi.org/10.3390/technologies6010017
