Search Results (18)

Search Parameters:
Keywords = facial action coding (FAC)

16 pages, 4892 KiB  
Article
Study on the Wrinkling Mechanisms of Human Skin Based on the Digital Image Correlation and Facial Action Coding System
by Huixin Wei, Mingjian Chen, Shibin Wang, Zhiyong Wang, Baopeng Liao, Zehui Lin, Lisha He and Wei He
Appl. Sci. 2025, 15(12), 6803; https://doi.org/10.3390/app15126803 - 17 Jun 2025
Viewed by 364
Abstract
Facial wrinkles are a key indicator of aging and hold significant importance in skincare, cosmetics, and cosmetology. Their formation is closely linked to mechanical deformation, yet the underlying processes remain complex. This study integrates the Facial Action Coding System (FACS) with three-dimensional digital image correlation (3D-DIC) to dynamically capture and quantitatively analyze skin deformation during facial expression. Principal strains and their orientation are introduced as important parameters to investigate the relationship between mechanical behavior and wrinkle formation. To further explore these interactions, a four-layer finite element (FE) model incorporating a muscle layer is developed, simulating muscle contraction and its influence on skin deformation. The findings provide a mechanobiological framework for understanding wrinkle formation and may inspire the development of strain-sensitive sensors for real-time detection of microstructural deformations. Full article
(This article belongs to the Section Materials Science and Engineering)
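
As a concrete illustration of the principal-strain analysis this article describes, the short Python sketch below diagonalizes a 2D strain tensor to obtain the principal strains and their orientation; the strain components are made-up placeholder values, not measurements from the study.

    import numpy as np

    # Hypothetical in-plane strain components at one skin-surface point
    # (placeholder values; a DIC pipeline would measure these per point).
    exx, eyy, exy = 0.012, -0.004, 0.006

    # Assemble the symmetric 2D strain tensor and diagonalize it.
    E = np.array([[exx, exy],
                  [exy, eyy]])
    eigvals, eigvecs = np.linalg.eigh(E)  # eigenvalues in ascending order

    e_minor, e_major = eigvals
    # Orientation of the major principal strain relative to the x-axis.
    theta_deg = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))

    print(f"principal strains: {e_major:.4f}, {e_minor:.4f}")
    print(f"orientation: {theta_deg:.1f} deg")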

40 pages, 20840 KiB  
Article
Facial Biosignals Time–Series Dataset (FBioT): A Visual–Temporal Facial Expression Recognition (VT-FER) Approach
by João Marcelo Silva Souza, Caroline da Silva Morais Alves, Jés de Jesus Fiais Cerqueira, Wagner Luiz Alves de Oliveira, Orlando Mota Pires, Naiara Silva Bonfim dos Santos, Andre Brasil Vieira Wyzykowski, Oberdan Rocha Pinheiro, Daniel Gomes de Almeida Filho, Marcelo Oliveira da Silva and Josiane Dantas Viana Barbosa
Electronics 2024, 13(24), 4867; https://doi.org/10.3390/electronics13244867 - 10 Dec 2024
Viewed by 1304
Abstract
Visual biosignals can be used to analyze human behavioral activities and serve as a primary resource for Facial Expression Recognition (FER). FER computational systems face significant challenges, arising from both spatial and temporal effects. Spatial challenges include deformations or occlusions of facial geometry, while temporal challenges involve discontinuities in motion observation due to high variability in poses and dynamic conditions such as rotation and translation. To enhance the analytical precision and validation reliability of FER systems, several datasets have been proposed. However, most of these datasets focus primarily on spatial characteristics, rely on static images, or consist of short videos captured in highly controlled environments. These constraints significantly reduce the applicability of such systems in real-world scenarios. This paper proposes the Facial Biosignals Time–Series Dataset (FBioT), a novel dataset providing temporal descriptors and features extracted from common videos recorded in uncontrolled environments. To automate dataset construction, we propose Visual–Temporal Facial Expression Recognition (VT-FER), a method that stabilizes temporal effects using normalized measurements based on the principles of the Facial Action Coding System (FACS) and generates signature patterns of expression movements for correlation with real-world temporal events. To demonstrate feasibility, we applied the method to create a pilot version of the FBioT dataset. This pilot resulted in approximately 10,000 s of public videos captured under real-world facial motion conditions, from which we extracted 22 direct and virtual metrics representing facial muscle deformations. During this process, we preliminarily labeled and qualified 3046 temporal events representing two emotion classes. As a proof of concept, these emotion classes were used as input for training neural networks, with results summarized in this paper and available in an open-source online repository. Full article
(This article belongs to the Section Artificial Intelligence)
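
As a rough sketch of what a normalized, FACS-inspired temporal descriptor can look like, the Python snippet below tracks a mouth-width metric over video frames, normalized by inter-ocular distance so it is insensitive to face scale; the 68-point landmark indices are a common convention assumed here, not necessarily the dataset's own definitions.

    import numpy as np

    def normalized_metric(landmarks, pair, ref_pair):
        # Distance between one landmark pair, normalized by a reference
        # distance (e.g., inter-ocular) so the value is scale-invariant.
        d = np.linalg.norm(landmarks[pair[0]] - landmarks[pair[1]])
        ref = np.linalg.norm(landmarks[ref_pair[0]] - landmarks[ref_pair[1]])
        return d / ref

    def metric_series(frames, pair=(48, 54), ref_pair=(39, 42)):
        # frames: (T, 68, 2) per-frame landmarks from any detector; the
        # indices follow the common 68-point convention (48/54 = mouth
        # corners, 39/42 = inner eye corners) and are assumptions only.
        return np.array([normalized_metric(f, pair, ref_pair) for f in frames])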

17 pages, 1796 KiB  
Review
Neurobiology and Anatomy of Facial Expressions in Great Apes: Application of the AnimalFACS and Its Possible Association with the Animal’s Affective State
by Adriana Domínguez-Oliva, Cuauhtémoc Chávez, Julio Martínez-Burnes, Adriana Olmos-Hernández, Ismael Hernández-Avalos and Daniel Mota-Rojas
Animals 2024, 14(23), 3414; https://doi.org/10.3390/ani14233414 - 26 Nov 2024
Viewed by 1790
Abstract
The Facial Action Coding System (FACS) is an anatomically based system to study facial expression in humans. Currently, it is recognized that nonhuman animals, particularly nonhuman primates, have an extensive facial ethogram that changes according to the context and affective state. The facial expression of great apes, the closest species to humans, has been studied using the ChimpFACS and OrangFACS as reliable tools to code facial expressions. However, although the FACS does not infer animal emotions, making additional evaluations and associating the facial changes with other parameters could contribute to understanding the facial expressions of nonhuman primates during positive or negative emotions. The present review aims to discuss the neural correlates and anatomical components of emotional facial expression in great apes. It will focus on the use of Facial Action Coding Systems (FACSs) and the movements of the facial muscles (AUs) of chimpanzees, orangutans, and gorillas and their possible association with the affective state of great apes. Full article

26 pages, 18961 KiB  
Article
Application of Stereo Digital Image Correlation on Facial Expressions Sensing
by Xuanshi Cheng, Shibin Wang, Huixin Wei, Xin Sun, Lipan Xin, Linan Li, Chuanwei Li and Zhiyong Wang
Sensors 2024, 24(8), 2450; https://doi.org/10.3390/s24082450 - 11 Apr 2024
Cited by 2 | Viewed by 1379
Abstract
Facial expression is an important way to reflect human emotions and it represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, in order to enable effective dynamic analysis of expressions, a classic optical measuring method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of six basic facial expressions of certain experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain fields, offers special advantages in characterizing facial expressions due to their localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study demonstrates two featured regions in six basic expressions, one where deformation begins and the other where deformation is most severe. Based on these two regions, the temporal evolutions of the six basic expressions are discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions. The proposed analytical strategy might have potential value in objectively characterizing human expressions based on quantitative measurement. Full article
(This article belongs to the Section Sensing and Imaging)
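
Since the abstract notes that the strain fields are the gradient of the displacement fields, a minimal Python sketch of that step is shown below, using small-strain theory on a regular grid; a real 3D-DIC pipeline works on the measured 3D surface and is considerably more involved.

    import numpy as np

    def strain_fields(u, v, dx=1.0, dy=1.0):
        # Small-strain fields from in-plane displacement fields u(x, y) and
        # v(x, y) sampled on a regular grid (rows = y, columns = x).
        du_dy, du_dx = np.gradient(u, dy, dx)
        dv_dy, dv_dx = np.gradient(v, dy, dx)
        exx = du_dx                  # normal strain along x
        eyy = dv_dy                  # normal strain along y
        exy = 0.5 * (du_dy + dv_dx)  # shear strain component
        return exx, eyy, exy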

19 pages, 1688 KiB  
Article
Electromyographic Validation of Spontaneous Facial Mimicry Detection Using Automated Facial Action Coding
by Chun-Ting Hsu and Wataru Sato
Sensors 2023, 23(22), 9076; https://doi.org/10.3390/s23229076 - 9 Nov 2023
Cited by 7 | Viewed by 2432
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using the facial EMG of the zygomaticus major (ZM) as a standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via an automated FACS. Participants were alternately presented with real-time model performance and prerecorded videos of dynamic facial expressions, while simultaneous ZM signal and frontal facial videos were acquired. Facial videos were estimated for AU12 using FaceReader, Py-Feat, and OpenFace. The automated FACS is less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry by live performances. The AU12 time series showed a roughly 100 to 300 ms latency relative to the ZM. Our results suggested that while the automated FACS could not replace facial EMG in mimicry detection, it could serve a purpose for large effect sizes. Researchers should be cautious with the automated FACS outputs, especially when studying clinical populations. In addition, developers should consider the EMG validation of AU estimation as a benchmark. Full article
(This article belongs to the Special Issue Advanced-Sensors-Based Emotion Sensing and Recognition)
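
The reported 100 to 300 ms latency between the AU12 and ZM signals can be estimated with a standard cross-correlation, as in the SciPy sketch below; estimate_latency is an illustrative helper, not the authors' code, and assumes both series have already been resampled to a common rate.

    import numpy as np
    from scipy import signal

    def estimate_latency(au12, zm, fs):
        # Lag (in seconds) of the AU12 series relative to the ZM envelope,
        # taken from the peak of the normalized cross-correlation. A positive
        # result means AU12 lags the EMG. Both series share sample rate fs.
        a = (au12 - au12.mean()) / au12.std()
        z = (zm - zm.mean()) / zm.std()
        xcorr = signal.correlate(a, z, mode="full")
        lags = signal.correlation_lags(len(a), len(z), mode="full")
        return lags[np.argmax(xcorr)] / fs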

10 pages, 573 KiB  
Communication
Facial Expressions Track Depressive Symptoms in Old Age
by Hairin Kim, Seyul Kwak, So Young Yoo, Eui Chul Lee, Soowon Park, Hyunwoong Ko, Minju Bae, Myogyeong Seo, Gieun Nam and Jun-Young Lee
Sensors 2023, 23(16), 7080; https://doi.org/10.3390/s23167080 - 10 Aug 2023
Cited by 3 | Viewed by 3998
Abstract
Facial expressions play a crucial role in the diagnosis of mental illnesses characterized by mood changes. The Facial Action Coding System (FACS) is a comprehensive framework that systematically categorizes and captures even subtle changes in facial appearance, enabling the examination of emotional expressions. In this study, we investigated the association between facial expressions and depressive symptoms in a sample of 59 older adults without cognitive impairment. Utilizing the FACS and the Korean version of the Beck Depression Inventory-II, we analyzed both “posed” and “spontaneous” facial expressions across six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. Through principal component analysis, we summarized 17 action units across these emotion conditions. Subsequently, multiple regression analyses were performed to identify specific facial expression features that explain depressive symptoms. Our findings revealed several distinct features of posed and spontaneous facial expressions. Specifically, among older adults with higher depressive symptoms, a posed face exhibited a downward and inward pull at the corner of the mouth, indicative of sadness. In contrast, a spontaneous face displayed raised and narrowed inner brows, which was associated with more severe depressive symptoms in older adults. These findings suggest that facial expressions can provide valuable insights into assessing depressive symptoms in older adults. Full article
(This article belongs to the Section Electronic Sensors)
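
The analysis pipeline described in this abstract, summarizing AU scores with principal component analysis and then regressing depressive symptoms on the components, can be sketched in a few lines of scikit-learn; the data below are random stand-ins, and the component count is an arbitrary choice, not the study's.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.random((59, 17))   # stand-in AU scores: 59 subjects x 17 AUs
    y = rng.random(59)         # stand-in depression (BDI-II) scores

    # Summarize correlated AU features into a few components, then regress
    # symptom scores on the component scores.
    components = PCA(n_components=5).fit_transform(X)
    model = LinearRegression().fit(components, y)
    print("in-sample R^2:", model.score(components, y))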

28 pages, 3449 KiB  
Article
Evaluating the Influence of Room Illumination on Camera-Based Physiological Measurements for the Assessment of Screen-Based Media
by Joseph Williams, Jon Francombe and Damian Murphy
Appl. Sci. 2023, 13(14), 8482; https://doi.org/10.3390/app13148482 - 22 Jul 2023
Cited by 4 | Viewed by 2033
Abstract
Camera-based solutions can be a convenient means of collecting physiological measurements indicative of psychological responses to stimuli. However, the low illumination playback conditions commonly associated with viewing screen-based media oppose the bright conditions recommended for accurately recording physiological data with a camera. A study was designed to determine the feasibility of obtaining physiological data, for psychological insight, in illumination conditions representative of real world viewing experiences. In this study, a novel method was applied for testing a first-of-its-kind system for measuring both heart rate and facial actions from video footage recorded with a single discretely placed camera. Results suggest that conditions representative of a bright domestic setting should be maintained when using this technology, despite this being considered a sub-optimal playback condition. Further analyses highlight that even within this bright condition, both the camera-measured facial action and heart rate data contained characteristic errors. In future research, the influence of these performance issues on psychological insights may be mitigated by reducing the temporal resolution of the heart rate measurements and ignoring fast and low-intensity facial movements. Full article
(This article belongs to the Special Issue Advances in Emotion Recognition and Affective Computing)
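
One of the mitigations the authors suggest, reducing the temporal resolution of the heart rate measurements, amounts to block-averaging the series; a minimal sketch, with the window length as an arbitrary assumption:

    import numpy as np

    def reduce_temporal_resolution(hr, fs, window_s=10.0):
        # Average a heart-rate series (sampled at fs Hz) over window_s-second
        # blocks, trading temporal detail for robustness to per-sample error.
        n = int(window_s * fs)
        trimmed = hr[: len(hr) // n * n]  # drop the ragged tail
        return trimmed.reshape(-1, n).mean(axis=1)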

19 pages, 3197 KiB  
Article
What Is Written on a Dog’s Face? Evaluating the Impact of Facial Phenotypes on Communication between Humans and Canines
by Courtney L. Sexton, Colleen Buckley, Jake Lieberfarb, Francys Subiaul, Erin E. Hecht and Brenda J. Bradley
Animals 2023, 13(14), 2385; https://doi.org/10.3390/ani13142385 - 22 Jul 2023
Cited by 8 | Viewed by 12375
Abstract
Facial phenotypes are significant in communication with conspecifics among social primates. Less is understood about the impact of such markers in heterospecific encounters. Through behavioral and physical phenotype analyses of domesticated dogs living in human households, this study aims to evaluate the potential impact of superficial facial markings on dogs’ production of human-directed facial expressions. That is, this study explores how facial markings, such as eyebrows, patches, and widow’s peaks, are related to expressivity toward humans. We used the Dog Facial Action Coding System (DogFACS) as an objective measure of expressivity, and we developed an original schematic for a standardized coding of facial patterns and coloration on a sample of more than 100 male and female dogs (N = 103), aged from 6 months to 12 years, representing eight breed groups. The present study found a statistically significant, though weak, correlation between expression rate and facial complexity, with dogs with plainer faces tending to be more expressive (r = −0.326, p ≤ 0.001). Interestingly, for adult dogs, human companions characterized dogs’ rates of facial expressivity with more accuracy for dogs with plainer faces. Especially relevant to interspecies communication and cooperation, within-subject analyses revealed that dogs’ muscle movements were distributed more evenly across their facial regions in a highly social test condition compared to conditions in which they received ambiguous cues from their owners. On the whole, this study provides an original evaluation of how facial features may impact communication in human–dog interactions. Full article
(This article belongs to the Special Issue Dog–Human Relationships: Behavior, Physiology, and Wellbeing)

14 pages, 885 KiB  
Article
Psychometric Values of a New Scale: The Rett Syndrome Fear of Movement Scale (RSFMS)
by Meir Lotan, Moti Zwilling and Alberto Romano
Diagnostics 2023, 13(13), 2148; https://doi.org/10.3390/diagnostics13132148 - 23 Jun 2023
Cited by 2 | Viewed by 1725
Abstract
(1) Background: One of the characteristics associated with Rett syndrome (RTT) is a fear of movement (FOM). Despite the grave consequences for health, function, and caregiver burden associated with the bradykinesia accompanying FOM, there is no specific FOM assessment tool for RTT. (2) Objective: To construct and assess the psychometric values of a scale evaluating FOM in RTT (Rett syndrome fear of movement scale, RSFMS). (3) Methods: Twenty-five girls aged 5–33 participated, comprising a research group (N = 12 individuals with RTT) and a control group (N = 13 typically developing girls of equivalent ages). The Pain and Discomfort Scale (PADS) and the Facial Action Coding System (FACS) were used to assess the participants' behavior and facial expressions in rest and movement situations. (4) Results: Significant behavioral differences between the rest and movement situations were recorded within the research group using the RSFMS (p = 0.003), FACS (p = 0.002), and PADS (p = 0.002); no differences in reactions were found within the control group. The new scale showed high inter- and intra-rater reliability (r = 0.993 and r = 0.958, respectively; both p < 0.001), good internal consistency (α = 0.77), and high accuracy (94.4%). (5) Conclusions: The RSFMS, a new scale for measuring FOM in RTT, was validated against the FACS and PADS and holds excellent psychometric values. It can help clinicians working with individuals with RTT to plan appropriate management strategies for this population. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)
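
For reference, the internal-consistency statistic reported above (Cronbach's alpha) is straightforward to compute from an item-score matrix; a minimal sketch:

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_subjects, n_items) score matrix.
        # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)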

19 pages, 1515 KiB  
Article
Specific Behavioral Responses Rather Than Autonomic Responses Can Indicate and Quantify Acute Pain among Individuals with Intellectual and Developmental Disabilities
by Ruth Defrin, Tali Benromano and Chaim G. Pick
Brain Sci. 2021, 11(2), 253; https://doi.org/10.3390/brainsci11020253 - 18 Feb 2021
Cited by 10 | Viewed by 3638
Abstract
Individuals with intellectual and developmental disabilities (IDD) are at a high risk of experiencing pain. Pain management requires assessment, a challenging mission considering the impaired communication skills in IDD. We analyzed subjective and objective responses following calibrated experimental stimuli to determine whether they can differentiate between painful and non-painful states, and adequately quantify pain among individuals with IDD. Eighteen adults with IDD and 21 healthy controls (HC) received experimental pressure stimuli (innocuous, mildly noxious, and moderately noxious). Facial expressions (analyzed with the Facial Action Coding System (FACS)) and autonomic function (heart rate, heart rate variability (HRV), pulse, and galvanic skin response (GSR)) were continuously monitored, and self-reports using a pyramid and a numeric scale were obtained. Significant stimulus-response relationships were observed for the FACS and pyramid scores (but not for the numeric scores), and specific action units could differentiate between the noxious levels among the IDD group. FACS scores of the IDD group were higher and steeper than those of HC. HRV was overall lower among the IDD group, and GSR increased during noxious stimulation in both groups. In conclusion, the facial expressions and self-reports seem to reliably detect and quantify pain among individuals with mild-moderate IDD; their enhanced responses may indicate increased pain sensitivity that requires careful clinical consideration. Full article
(This article belongs to the Special Issue Pain Assessment in Impaired Cognition)

20 pages, 21639 KiB  
Article
FACS-Based Graph Features for Real-Time Micro-Expression Recognition
by Adamu Muhammad Buhari, Chee-Pun Ooi, Vishnu Monn Baskaran, Raphaël C. W. Phan, KokSheik Wong and Wooi-Haw Tan
J. Imaging 2020, 6(12), 130; https://doi.org/10.3390/jimaging6120130 - 30 Nov 2020
Cited by 18 | Viewed by 5309
Abstract
Several studies on micro-expression recognition have contributed mainly to accuracy improvement. However, computational complexity has received comparatively less attention, which raises the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., onset and apex frames) to compute features for every sample. This paper puts forward new facial graph features based on 68-point landmarks using the Facial Action Coding System (FACS). The proposed feature extraction technique (FACS-based graph features) utilizes facial landmark points to compute a graph for different Action Units (AUs), where the measured distance and gradient of every segment within an AU graph are presented as features. Moreover, the proposed technique performs micro-expression (ME) recognition from a single input frame. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 using leave-one-subject-out cross-validation on the SAMM dataset. Furthermore, the proposed technique computes features in 2 ms per sample on a Xeon E5-2650 machine. Full article
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)
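
The feature construction described above, the distance and gradient of every segment within an AU graph, might look roughly like the Python sketch below; the landmark-index pairs are illustrative assumptions on the common 68-point layout, not the paper's exact graph definitions, and the gradient is expressed here as a segment angle.

    import numpy as np

    # Illustrative AU "graphs": landmark-index pairs whose segments cover a
    # facial region on the 68-point layout. These pairings are assumptions
    # for demonstration only.
    AU_GRAPHS = {
        "AU12": [(48, 54), (48, 51), (54, 51)],  # mouth corners and top lip
        "AU4":  [(19, 24), (21, 22)],            # brow landmarks
    }

    def graph_features(landmarks, graphs=AU_GRAPHS):
        # Distance and gradient of every segment in each AU graph, computed
        # from a single frame's (68, 2) landmark array.
        feats = []
        for pairs in graphs.values():
            for i, j in pairs:
                dx, dy = landmarks[j] - landmarks[i]
                feats.append(np.hypot(dx, dy))    # segment length
                feats.append(np.arctan2(dy, dx))  # segment slope as angle
        return np.array(feats)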

20 pages, 1980 KiB  
Article
Feature Selection on 2D and 3D Geometric Features to Improve Facial Expression Recognition
by Vianney Perez-Gomez, Homero V. Rios-Figueroa, Ericka Janet Rechy-Ramirez, Efrén Mezura-Montes and Antonio Marin-Hernandez
Sensors 2020, 20(17), 4847; https://doi.org/10.3390/s20174847 - 27 Aug 2020
Cited by 18 | Viewed by 4766
Abstract
An essential aspect in the interaction between people and computers is the recognition of facial expressions. A key issue in this process is to select relevant features to classify facial expressions accurately. This study examines the selection of optimal geometric features to classify six basic facial expressions: happiness, sadness, surprise, fear, anger, and disgust. Inspired by the Facial Action Coding System (FACS) and the Moving Picture Experts Group 4th standard (MPEG-4), an initial set of 89 features was proposed. These features are normalized distances and angles in 2D and 3D computed from 22 facial landmarks. To select a minimum set of features with the maximum classification accuracy, two selection methods and four classifiers were tested. The first selection method, principal component analysis (PCA), obtained 39 features. The second selection method, a genetic algorithm (GA), obtained 47 features. The experiments ran on the Bosphorus and UIVBFED data sets with 86.62% and 93.92% median accuracy, respectively. Our main finding is that the reduced feature set obtained by the GA is the smallest in comparison with other methods of comparable accuracy. This has implications in reducing the time of recognition. Full article

16 pages, 6966 KiB  
Article
Adaptive 3D Model-Based Facial Expression Synthesis and Pose Frontalization
by Yu-Jin Hong, Sung Eun Choi, Gi Pyo Nam, Heeseung Choi, Junghyun Cho and Ig-Jae Kim
Sensors 2020, 20(9), 2578; https://doi.org/10.3390/s20092578 - 1 May 2020
Cited by 4 | Viewed by 6594
Abstract
Facial expressions are one of the important non-verbal ways used to understand human emotions during communication. Thus, acquiring and reproducing facial expressions is helpful in analyzing human emotional states. However, owing to complex and subtle facial muscle movements, facial expression modeling from images with face poses is difficult to achieve. To handle this issue, we present a method for acquiring facial expressions from a non-frontal single photograph using a 3D-aided approach. In addition, we propose a contour-fitting method that improves the modeling accuracy by automatically rearranging 3D contour landmarks corresponding to fixed 2D image landmarks. The acquired facial expression input can be parametrically manipulated to create various facial expressions through a blendshape or expression transfer based on the FACS (Facial Action Coding System). To achieve a realistic facial expression synthesis, we propose an exemplar-texture wrinkle synthesis method that extracts and synthesizes appropriate expression wrinkles according to the target expression. To do so, we constructed a wrinkle table of various facial expressions from 400 people. As one of the applications, we proved that the expression-pose synthesis method is suitable for expression-invariant face recognition through a quantitative evaluation, and showed the effectiveness based on a qualitative evaluation. We expect our system to be a benefit to various fields such as face recognition, HCI, and data augmentation for deep learning. Full article
(This article belongs to the Special Issue Sensor Applications on Emotion Recognition)
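
The blendshape manipulation mentioned in this abstract is, at its core, a weighted sum of expression deltas added to a neutral mesh; a minimal sketch, with mesh shapes and weights as placeholders:

    import numpy as np

    def blendshape(neutral, targets, weights):
        # neutral: (n_vertices, 3) mesh; targets: (k, n_vertices, 3) extreme
        # expression meshes; weights: (k,) blending coefficients.
        deltas = targets - neutral  # per-target vertex offsets
        return neutral + np.tensordot(weights, deltas, axes=1)

Setting a single weight to, say, 0.5 yields a half-intensity version of that target expression; FACS-style AU activations can be mapped onto such weights.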

12 pages, 718 KiB  
Article
Interpretation and Working through Contemptuous Facial Micro-Expressions Benefits the Patient-Therapist Relationship
by Felicitas Datz, Guoruey Wong and Henriette Löffler-Stastka
Int. J. Environ. Res. Public Health 2019, 16(24), 4901; https://doi.org/10.3390/ijerph16244901 - 4 Dec 2019
Cited by 23 | Viewed by 5478
Abstract
Introduction: The significance of psychotherapeutic micro-processes, such as nonverbal facial expressions and relationship quality, is widely known, yet they have hitherto not been investigated satisfactorily. In this exploratory study, we aim to examine the occurrence of micro-processes during psychotherapeutic treatment sessions, specifically facial micro-expressions, in order to shed light on their impact on psychotherapeutic interactions and patient-clinician relationships. Methods: By analyzing 22 video recordings of psychiatric interviews in a routine/acute psychiatric care unit of Vienna General Hospital, we were able to investigate clinicians' and patients' facial micro-expressions in conjunction with verbal interactions and intervention types. To this end, we employed the Emotion Facial Action Coding System (EmFACS), which assesses action units and micro-expressions, and the Psychodynamic Intervention List (PIL). In addition, the Working Alliance Inventory (WAI), completed after each session by both patients and clinicians, provided information on the subjective quality of the clinician–patient relationship. Results: We found that interpretative/confrontative interventions are associated with displays of contempt from both therapists and patients. Interestingly, displays of contempt also correlated with higher WAI scores. We propose that these seemingly contradictory results may be a consequence of the complexity of affects and the interplay of primary and secondary emotions with intervention type. Conclusion: Interpretation, confrontation, and working through contemptuous micro-expressions are major elements in the adequate control of pathoplastic elements. Affect-cognitive interplay is an important mediator of the working alliance. Full article
(This article belongs to the Special Issue Ingredients for a Sustainable Wholesome Network in Mental Health)

32 pages, 10353 KiB  
Article
Predicting Depression, Anxiety, and Stress Levels from Videos Using the Facial Action Coding System
by Mihai Gavrilescu and Nicolae Vizireanu
Sensors 2019, 19(17), 3693; https://doi.org/10.3390/s19173693 - 25 Aug 2019
Cited by 116 | Viewed by 15663
Abstract
We present the first study in the literature that has aimed to determine Depression Anxiety Stress Scale (DASS) levels by analyzing facial expressions using Facial Action Coding System (FACS) by means of a unique noninvasive architecture on three layers designed to offer high accuracy and fast convergence: in the first layer, Active Appearance Models (AAM) and a set of multiclass Support Vector Machines (SVM) are used for Action Unit (AU) classification; in the second layer, a matrix is built containing the AUs’ intensity levels; and in the third layer, an optimal feedforward neural network (FFNN) analyzes the matrix from the second layer in a pattern recognition task, predicting the DASS levels. We obtained 87.2% accuracy for depression, 77.9% for anxiety, and 90.2% for stress. The average prediction time was 64 s, and the architecture could be used in real time, allowing health practitioners to evaluate the evolution of DASS levels over time. The architecture could discriminate with 93% accuracy between healthy subjects and those affected by Major Depressive Disorder (MDD) or Post-traumatic Stress Disorder (PTSD), and 85% for Generalized Anxiety Disorder (GAD). For the first time in the literature, we determined a set of correlations between DASS, induced emotions, and FACS, which led to an increase in accuracy of 5%. When tested on AVEC 2014 and ANUStressDB, the method offered 5% higher accuracy, sensitivity, and specificity compared to other state-of-the-art methods. Full article
(This article belongs to the Special Issue Sensors for Affective Computing and Sentiment Analysis)
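
A toy scikit-learn rendition of the three-layer idea described above (per-AU SVM classifiers, an AU activation matrix, then a feedforward network predicting DASS levels) is sketched below on synthetic stand-in data; all shapes, labels, and hyperparameters are assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_frames, n_feats, n_aus = 200, 40, 12

    # Layer 1: appearance features per frame -> one SVM per Action Unit
    # (binary here for brevity; the paper uses multiclass intensity levels).
    frames = rng.random((n_frames, n_feats))
    au_labels = rng.integers(0, 2, size=(n_frames, n_aus))
    au_clfs = [SVC().fit(frames, au_labels[:, k]) for k in range(n_aus)]

    # Layer 2: the AU matrix, i.e., per-AU outputs stacked over frames.
    au_matrix = np.column_stack([clf.predict(frames) for clf in au_clfs])

    # Layer 3: a feedforward network maps AU patterns to the three DASS
    # scores (random targets stand in for depression/anxiety/stress levels).
    dass = rng.random((n_frames, 3))
    ffnn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(au_matrix, dass)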
