# Determination of “Neutral”–“Pain”, “Neutral”–“Pleasure”, and “Pleasure”–“Pain” Affective State Distances by Using AI Image Analysis of Facial Expressions


## Abstract


## 1. Introduction

#### 1.1. Overall Benefits of the Insights We Present in This Manuscript

#### 1.2. Using AI as a Novel Approach to Analyzing Facial Expressions of Pain and Pleasure

#### 1.3. Previous Research into Facial Expression of (Intense) Affective States

#### 1.4. Novelty of the Approach Presented in This Paper

#### 1.5. Fields of Study in Which the Results Are of Importance

## 2. Materials and Methods

#### 2.1. Materials

#### 2.2. Methods

(MATHEMATICA® v12.4 from Wolfram Research) to align the five faces for each actress in each video; (c) implement feature extraction algorithms to construct a feature vector for each face; (d) dimension-reduce the five feature vectors to five 2D vectors (which we call dimension-reduced feature vectors) for each actress’s five facial expressions; and (e) calculate the Euclidean distances between pairs (“neutral”–“neutral”, “neutral”–“pain”, “neutral”–“pleasure”, and “pleasure”–“pain”) of these dimension-reduced feature vectors. Details and code of the software implementation are listed in Appendix A.
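Step (e) of this pipeline reduces to elementary vector arithmetic. As a sketch outside the MATHEMATICA implementation (in Python, with made-up 2D coordinates rather than data from the study; the index convention of Appendix A treats three of the five frames as “neutral”):

```python
import math

# Hypothetical 2D dimension-reduced feature vectors for one actress's five
# frames (made-up coordinates, NOT data from the study).
vectors = {
    "neutral1": (0.10, 0.20),
    "neutral2": (0.05, 0.25),
    "neutral3": (0.12, 0.18),
    "pain":     (1.30, -0.40),
    "pleasure": (-0.90, 1.10),
}

# Center of mass of the neutral frames ("mean neutral") ...
neutrals = [vectors["neutral1"], vectors["neutral2"], vectors["neutral3"]]
mean_neutral = tuple(sum(c) / len(neutrals) for c in zip(*neutrals))

# ... then the Euclidean distances analyzed in the paper.
neutral_pain     = math.dist(vectors["pain"], mean_neutral)
neutral_pleasure = math.dist(vectors["pleasure"], mean_neutral)
pleasure_pain    = math.dist(vectors["pleasure"], vectors["pain"])
```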

The uncertainty of the ML estimate is quantified with the 95% highest density interval (HDI_{95%}) [20].

## 3. Results

The parameters of the ML distribution together with the HDI_{95%} [20] uncertainty interval are listed in Table 1, and the pdf of this ML distribution is displayed in Figure 3.

## 4. Discussion

## 5. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Appendix A

- Every user has his/her own folder structure. By “path” (below), we mean the path to the folders containing the video frames.
- The command line **Join[…]** is long because it loads a segment of the video dynamically. It is modified accordingly for other videos loaded for further frame extraction.

```mathematica
SetDirectory["path"]

(* Build the list of frame filenames ("…01.jpg" through "…50.jpg"). *)
jpgApleasure = Join[
  Flatten[Table[
    StringJoin["A_pleasure", StringJoin[{ToString[0], ToString[i]}], ".jpg"],
    {i, 1, 9}]],
  Flatten[Table[
    StringJoin["A_Pleasure", StringJoin[{ToString[j], ToString[i]}], ".jpg"],
    {j, 1, 4}, {i, 0, 9}]],
  {StringJoin["A_Pleasure", StringJoin[{ToString[5], ToString[0]}], ".jpg"]}
]

(* Import one frame and detect the face in it. *)
face = Import[jpgApleasure[[25]]]

FindFaces[face, "Image", Method -> "Haar", PaddingSize -> 30]
```
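The **Join[…]** construction above simply enumerates fifty frame filenames, numbered 01 through 50. An equivalent Python sketch (assuming a uniform `A_Pleasure` prefix, which the printed code varies in case):

```python
# One filename per video frame: "A_Pleasure01.jpg" ... "A_Pleasure50.jpg".
frames = [f"A_Pleasure{i:02d}.jpg" for i in range(1, 51)]
```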

- The above structure is suitably modified for the other faces of Actress A.

- The five faces are aligned.

```mathematica
FaceAlign[%, Automatic, {60, 60}, PerformanceGoal -> "Quality"];
```

- The proprietary code from MATHEMATICA uses AI (internally trained) to extract feature vectors from the list of faces.

```mathematica
faceAextjoin = FeatureExtraction[faceAjoin];
```

- The proprietary code from MATHEMATICA uses a neural network to dimension-reduce the feature vectors.

```mathematica
faceAextractReduce = DimensionReduce[faceAextjoin, 2, RandomSeeding -> Prime[137]];

(* Center of mass of the three "neutral" dimension-reduced feature vectors. *)
meanAneutralReduce = Mean[Table[faceAextractReduce[[i]], {i, {1, 2, 5}}]];
```

- The (Euclidean) distances are computed.

```mathematica
(* Distances of the two affective states from the neutral mean, and from
   each other. *)
Norm[faceAextractReduce[[3]] - meanAneutralReduce];
Norm[faceAextractReduce[[4]] - meanAneutralReduce];
Norm[faceAextractReduce[[3]] - faceAextractReduce[[4]]];

(* "Neutral"–"neutral mean" distances for Actresses A and T. *)
Table[Norm[faceAextractReduce[[i]] -
   Mean[Table[faceAextractReduce[[j]], {j, {1, 2, 5}}]]], {i, {1, 2, 5}}]
Table[Norm[faceTextractReduce[[i]] -
   Mean[Table[faceTextractReduce[[j]], {j, {1, 2, 5}}]]], {i, {1, 2, 5}}]
```

- The above steps are repeated for the other $19\times 5$ faces; A → B, A → C, … and so on up to and including A → T.

- The commands below are used to find the ML distribution of the distances.

```mathematica
distributionList = {NormalDistribution[μ, σ], LogNormalDistribution[μ, σ],
   WeibullDistribution[k, λ], GammaDistribution[k, θ]};
```

```mathematica
(* Fit each candidate distribution by ML and return the index of the one
   with the largest log-likelihood. *)
logLikelihoods = Table[
   LogLikelihood[EstimatedDistribution[data, distributionList[[i]]], data],
   {i, 1, 4}];
Flatten[Position[logLikelihoods, Max[logLikelihoods]]][[1]]
```
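For readers without MATHEMATICA, the same model-selection step can be sketched with SciPy. The data below are synthetic stand-ins (not the paper’s distances); SciPy’s `fit` plays the role of **EstimatedDistribution** and the summed `logpdf` that of **LogLikelihood**:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data: draws from a Weibull distribution.
rng = np.random.default_rng(137)
data = stats.weibull_min.rvs(4.34, scale=2.25, size=200, random_state=rng)

# The four candidate families of distributionList.
candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
}

# ML fit of each family, then compare log-likelihoods.
log_likelihoods = {}
for name, dist in candidates.items():
    params = dist.fit(data)
    log_likelihoods[name] = float(np.sum(dist.logpdf(data, *params)))

best = max(log_likelihoods, key=log_likelihoods.get)
```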

- The code below is used to determine the HDI_{95%} uncertainty interval. Note that the high-precision arithmetic requires several hundred (decimal) digits.
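The HDI code itself is not listed. One standard construction for a unimodal pdf, sketched here in Python with ordinary double precision rather than the high-precision arithmetic mentioned above, selects the narrowest of all intervals containing 95% of the probability mass; applied to the ML Weibull distribution of Table 1 it reproduces (to two decimals) the interval (0.99, 3.06):

```python
import numpy as np
from scipy import stats

def hdi(dist, mass=0.95, grid=10_000):
    """Narrowest interval containing `mass` probability (valid for unimodal pdfs)."""
    lo = np.linspace(0.0, 1.0 - mass, grid)
    lows = dist.ppf(lo)
    highs = dist.ppf(lo + mass)
    i = int(np.argmin(highs - lows))  # the narrowest 95%-mass interval is the HDI
    return float(lows[i]), float(highs[i])

# ML Weibull distribution of the pain–pleasure distances (k = 4.34, λ = 2.25).
ml_weibull = stats.weibull_min(4.34, scale=2.25)
s1, s2 = hdi(ml_weibull)
```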

- The code below calculates the SVD and the approximation using only the first three singular values.

```mathematica
Table[Flatten[{neutralDistances[[i]], painDistances[[i]],
    pleasureDistances[[i]]}], {i, 1, 20}];
```
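The SVD truncation step can be sketched with NumPy. The matrix below is a random stand-in with one row per actress (the column count is chosen arbitrarily, not taken from the study):

```python
import numpy as np

# Hypothetical stand-in pattern matrix: one row per actress (20 rows) of
# concatenated neutral/pain/pleasure distances.
rng = np.random.default_rng(0)
pattern = rng.random((20, 7))

# Full SVD, then keep only the three largest singular values ("SVD-3").
U, s, Vt = np.linalg.svd(pattern, full_matrices=False)
k = 3
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

# By Eckart-Young this is the best rank-3 approximation in the Frobenius
# norm; its error is the root-sum-square of the discarded singular values.
error = np.linalg.norm(pattern - approx)
```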

- The code below generates a list of colors needed for the graphics.

```mathematica
Join[%, {Darker[Brown, 0.15]}, {Green}, {Cyan}];
Join[%, {Darker[Yellow, 0.15]}, {Lighter[Orange, 0.2]}, {Pink}];
Join[%, {Darker[LightPurple, 0.1]}, {Darker[LightGreen, 0.35]}];
```

- The code below finds the clusters of the SVD-3 approximated coordinates of the affective state distances.

- A suite of graphics routines (not listed) are used to display the results for the manuscript.
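The cluster-finding step (the appendix does not list the MATHEMATICA call or its method) can be approximated with k-means on the scaled, shifted SVD-3 coordinates; the coordinates below are random stand-ins, and the choice of k-means is an assumption, not the authors’ documented method:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Hypothetical stand-in for the SVD-3 coordinates of the 20 actresses.
rng = np.random.default_rng(1)
coords = rng.random((20, 3))

# Scale and shift each column to zero mean / unit variance before clustering.
scaled = (coords - coords.mean(axis=0)) / coords.std(axis=0)

# Six clusters, matching the number reported in Figure 4.
centroids, labels = kmeans2(scaled, k=6, minit="++", seed=2)
```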

## References

1. Prossinger, H.; Hladky, T.; Binter, J.; Boschetti, S.; Riha, D. Visual Analysis of Emotions Using AI Image-Processing Software: Possible Male/Female Differences between the Emotion Pairs “Neutral”–“Fear” and “Pleasure”–“Pain”. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, Virtual Event, 29 June–2 July 2021; pp. 342–346.
2. Butow, P.; Hoque, E. Using artificial intelligence to analyse and teach communication in healthcare. Breast **2020**, 50, 49–55.
3. Hassan, T.; Seuß, D.; Wollenberg, J.; Weitz, K.; Kunz, M.; Lautenbacher, S.; Schmid, U. Automatic detection of pain from facial expressions: A survey. IEEE Trans. Pattern Anal. Mach. Intell. **2019**, 43, 1815–1831.
4. Namba, S.; Sato, W.; Osumi, M.; Shimokawa, K. Assessing automated facial action unit detection systems for analyzing cross-domain facial expression databases. Sensors **2021**, 21, 4222.
5. Weitz, K.; Hassan, T.; Schmid, U.; Garbas, J.U. Deep-learned faces of pain and emotions: Elucidating the differences of facial expressions with the help of explainable AI methods. Tm-Tech. Mess. **2019**, 86, 404–412.
6. Dildine, T.C.; Atlas, L.Y. The need for diversity in research on facial expressions of pain. Pain **2019**, 160, 1901.
7. Barrett, L.F. AI weighs in on debate about universal facial expressions. Nature **2021**, 589, 202–204.
8. Cowen, A.S.; Keltner, D.; Schroff, F.; Jou, B.; Adam, H.; Prasad, G. Sixteen facial expressions occur in similar contexts worldwide. Nature **2021**, 589, 251–257.
9. Ekman, P. Facial expression and emotion. Am. Psychol. **1993**, 48, 384–392.
10. Valente, D.; Theurel, A.; Gentaz, E. The role of visual experience in the production of emotional facial expressions by blind people: A review. Psychon. Bull. Rev. **2018**, 25, 483–497.
11. Gendron, M.; Barrett, L.F. Reconstructing the past: A century of ideas about emotion in psychology. Emot. Rev. **2009**, 1, 316–339.
12. van der Struijk, S.; Huang, H.H.; Mirzaei, M.S.; Nishida, T. FACSvatar: An Open Source Modular Framework for Real-Time FACS based Facial Animation. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, Sydney, Australia, 5–8 November 2018; pp. 159–164.
13. Chen, C.; Crivelli, C.; Garrod, O.G.; Schyns, P.G.; Fernández-Dols, J.M.; Jack, R.E. Distinct facial expressions represent pain and pleasure across cultures. Proc. Natl. Acad. Sci. USA **2018**, 115, E10013–E10021.
14. Wenzler, S.; Levine, S.; van Dick, R.; Oertel-Knöchel, V.; Aviezer, H. Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations. Emotion **2016**, 16, 807.
15. Aviezer, H.; Trope, Y.; Todorov, A. Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science **2012**, 338, 1225–1229.
16. Fernández-Dols, J.M.; Carrera, P.; Crivelli, C. Facial behavior while experiencing sexual excitement. J. Nonverbal Behav. **2011**, 35, 63–71.
17. Hughes, S.M.; Nicholson, S.E. Sex differences in the assessment of pain versus sexual pleasure facial expressions. J. Soc. Evol. Cult. Psychol. **2008**, 2, 289.
18. Abramson, L.; Marom, I.; Petranker, R.; Aviezer, H. Is fear in your head? A comparison of instructed and real-life expressions of emotion in the face and body. Emotion **2017**, 17, 557.
19. Eliason, S.R. Maximum Likelihood Estimation: Logic and Practice; SAGE Publications: Newbury Park, CA, USA, 1993.
20. Kruschke, J.K. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, 2nd ed.; Academic Press/Elsevier: London, UK, 2015.
21. Leon, S.J. Linear Algebra with Applications, 5th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
22. Strang, G. Linear Algebra and Learning from Data; Wellesley-Cambridge Press: Wellesley, MA, USA, 2019.
23. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Pearson: Harlow, UK, 2010.
24. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
25. Boschetti, S.; Hladký, T.; Machová, K.; Říha, D.; Binter, J. Judgement of extreme affective state expressions in approach/avoidance paradigm. Hum. Ethol. **2021**, 36, 7.
26. McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Shetty, S. International evaluation of an AI system for breast cancer screening. Nature **2020**, 577, 89–94.

**Figure 1.**A symbolic rendition of an autoencoder used for dimension reduction. The 11 inputs (in the drawing) are represented by blue arrows from left to right. The inputs are conventionally labelled neurons (hence the name “neural network”). Each input neuron (light blue) has as many outputs as there are neurons in the next layer. Each ‘yellow’ neuron thus has 11 inputs/edges (represented as thin black lines). Each neuron in the yellow layer has as many outputs as there are neurons in the next layer: six outputs for each ‘yellow’ neuron and therefore eight inputs for each ‘green’ neuron of the next layer. It continues: each ‘green’ neuron has as many outputs as there are neurons in the next layer (consisting of two ‘orange’ neurons) and each ‘orange’ neuron has six inputs. The light blue neurons on the left are called the input layer while the light blue neurons on the right are called the output layer. The yellow layers, the green layers, and the orange layer are called the hidden layers. An autoencoder always has the same number of output neurons as it has input neurons. The number of hidden layers is part of the design by the engineer constructing the autoencoder. The numerical values along the black edges between neurons are determined by an algorithm. The autoencoder attempts to produce an output equal to the input (hence the name ‘autoencoder’) without being an identity mapping. An important feature for modern autoencoders is the ability to cut (set to zero) certain interconnections or make them numerically very small (usually by using a sigmoid function). The central layer is called the code. If the inputs are the feature vectors, then the numerical values of the code are the components of the dimension-reduced feature vector. If this is a successful autoencoder, it detects nonlinear combinations between the components of the (input) feature vector that can be represented by two variables.

**Figure 2.**The locations of the dimension-reduced feature vectors of the facial display of the (labeled) affective states of one actress. In addition to these five points, we rendered (and used in subsequent calculations) the arithmetic mean (center of mass) of the neutral states, which we call ‘mean neutral’ for this female. Distances of the dimension-reduced feature vectors for this female are also displayed.

**Figure 3.** The ML distribution of the pain–pleasure distances for all 20 actresses. The distribution is asymmetric, so displaying the interval $\mathrm{mean}\pm \mathrm{SD}$ is not meaningful; we used the HDI_{95%} interval [20] to display the uncertainty. The ML mode and ML expectation are close to midway between the ends of the HDI_{95%} interval, and the ends of this interval are very close to $\frac{1}{2}\times $ and $\frac{3}{2}\times $ the mode. The shaded region under the pdf curve has an area of 95%.

**Figure 4.** The clusters detected in the scaled, shifted pattern matrix. Distances are scaled by the neutral mean. There are six clusters: four with membership sizes 3, 3, 8, and 4, and two further clusters each containing a singleton (isolate). We observed the following: (i) In Cluster #1, the distances of pain and pleasure from the neutral mean are nearly equal and larger than those of the neutrals to the neutral mean. (ii) In Cluster #2, the distances of pain and pleasure to the neutral mean are also nearly equal but considerably less than the distances of the neutrals to the neutral mean. (iii) In Cluster #3, there is hardly any overlap between the pain and pleasure distances, and the pain distances overlap the neutral distances to the neutral mean. Additionally, the pleasure distances are larger than the pain distances. (iv) In Cluster #4, the pain distances overlap the neutral distances to the neutral mean, but the pleasure distances do not; the latter are, furthermore, much smaller than the other distances.

**Table 1.** Parameters of the ML distribution of the pain–pleasure distances of the 20 actresses; it is a Weibull distribution. E is the expectation and HDI_{95%} is the 95% highest density interval [20].

| Parameters | ML Numerical Values |
| --- | --- |
| $\mathrm{WeibullDistribution}\left[k,\lambda \right]$ | $k=4.34$, $\lambda =2.25$ |
| Mode | $2.12$ |
| E | $2.05$ |
| HDI_{95%} | $\left({s}_{1},{s}_{2}\right)=\left(0.99,3.06\right)$ |
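The Mode and E rows follow from the ML parameters via the standard Weibull formulas, $\mathrm{mode}=\lambda \left(\frac{k-1}{k}\right)^{1/k}$ and $\mathrm{E}=\lambda \,\Gamma \left(1+\frac{1}{k}\right)$; a quick check in Python:

```python
import math

# ML parameters from Table 1.
k, lam = 4.34, 2.25

# Standard Weibull mode and expectation.
mode = lam * ((k - 1) / k) ** (1 / k)
expectation = lam * math.gamma(1 + 1 / k)

print(round(mode, 2), round(expectation, 2))  # 2.12 2.05
```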

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Prossinger, H.; Hladký, T.; Boschetti, S.; Říha, D.; Binter, J.
Determination of “Neutral”–“Pain”, “Neutral”–“Pleasure”, and “Pleasure”–“Pain” Affective State Distances by Using AI Image Analysis of Facial Expressions. *Technologies* **2022**, *10*, 75.
https://doi.org/10.3390/technologies10040075
