Search Results (5)

Search Parameters:
Keywords = micro-expression magnification

18 pages, 5806 KB  
Article
Optical Flow Magnification and Cosine Similarity Feature Fusion Network for Micro-Expression Recognition
by Heyou Chang, Jiazheng Yang, Kai Huang, Wei Xu, Jian Zhang and Hao Zheng
Mathematics 2025, 13(15), 2330; https://doi.org/10.3390/math13152330 - 22 Jul 2025
Viewed by 1017
Abstract
Recent advances in deep learning have significantly advanced micro-expression recognition, yet most existing methods process the entire facial region holistically, struggling to capture subtle variations in facial action units, which limits recognition performance. To address this challenge, we propose the Optical Flow Magnification and Cosine Similarity Feature Fusion Network (MCNet). MCNet introduces a multi-facial action optical flow estimation module that integrates global motion-amplified optical flow with localized optical flow from the eye and mouth–nose regions, enabling precise capture of facial expression nuances. Additionally, an enhanced MobileNetV3-based feature extraction module, incorporating Kolmogorov–Arnold networks and convolutional attention mechanisms, effectively captures both global and local features from optical flow images. A novel multi-channel feature fusion module leverages cosine similarity between Query and Key token sequences to optimize feature integration. Extensive evaluations on four public datasets—CASME II, SAMM, SMIC-HS, and MMEW—demonstrate MCNet’s superior performance, achieving state-of-the-art results with 92.88% UF1 and 86.30% UAR on the composite dataset, surpassing the best prior method by 1.77% in UF1 and 6.0% in UAR.
(This article belongs to the Special Issue Representation Learning for Computer Vision and Pattern Recognition)
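The fusion idea described in the abstract—scoring each channel's Key token against a Query token by cosine similarity, then blending channels by the normalized scores—can be sketched in a few lines. This is a minimal stand-in for illustration, not MCNet's implementation; the function names and toy token vectors are assumptions.

```python
import math

def cosine_similarity(q, k):
    """Cosine similarity between two equal-length token vectors."""
    dot = sum(a * b for a, b in zip(q, k))
    nq = math.sqrt(sum(a * a for a in q))
    nk = math.sqrt(sum(b * b for b in k))
    return dot / (nq * nk) if nq and nk else 0.0

def fuse_channels(channel_keys, query):
    """Blend per-channel Key tokens by softmax-normalized cosine
    similarity to a shared Query token (toy multi-channel fusion)."""
    scores = [cosine_similarity(query, k) for k in channel_keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    fused = [sum(w * key[i] for w, key in zip(weights, channel_keys))
             for i in range(len(query))]
    return fused, weights
```

Channels whose Key tokens align with the Query (here, e.g. a global-flow channel versus eye- and mouth–nose-region channels) receive larger fusion weights, which is the intuition behind similarity-gated feature integration.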

22 pages, 23958 KB  
Article
A Lightweight Dual-Stream Network with an Adaptive Strategy for Efficient Micro-Expression Recognition
by Xinyu Liu, Ju Zhou, Feng Chen, Shigang Li, Hanpu Wang, Yingjuan Jia and Yuhao Shan
Sensors 2025, 25(9), 2866; https://doi.org/10.3390/s25092866 - 1 May 2025
Cited by 2 | Viewed by 1382
Abstract
Micro-expressions (MEs), characterized by their brief duration and subtle facial muscle movements, pose significant challenges for accurate recognition. These ultra-fast signals, typically captured by high-speed vision sensors, require specialized computational methods to extract spatio-temporal features effectively. In this study, we propose a lightweight dual-stream network with an adaptive strategy for efficient ME recognition. Firstly, a motion magnification network based on transfer learning is employed to magnify the motion states of facial muscles in MEs. This process can generate additional samples, thereby expanding the training set. To effectively capture the dynamic changes of facial muscles, dense optical flow is extracted from the onset frame and the magnified apex frame, thereby obtaining magnified dense optical flow (MDOF). Subsequently, we design a dual-stream spatio-temporal network (DSTNet), using the magnified apex frame and MDOF as inputs for the spatial and temporal streams, respectively. An adaptive strategy that dynamically adjusts the magnification factor based on the top-1 confidence is introduced to enhance the robustness of DSTNet. Experimental results show that our proposed method outperforms existing methods in terms of F1-score on the SMIC, CASME II, SAMM, and composite dataset, as well as in cross-dataset tasks. Adaptive DSTNet significantly enhances the handling of sample imbalance while demonstrating robustness and featuring a lightweight design, indicating strong potential for future edge sensor deployment.
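The abstract's pipeline—magnify the apex frame, compute dense flow against the onset frame, then raise the magnification factor when top-1 confidence is low—can be sketched as follows. The per-pixel difference is a toy stand-in for a dense optical flow estimator (e.g. Farneback), and `classify`, `magnify`, the factor ladder, and the 0.8 threshold are all illustrative assumptions, not the paper's settings.

```python
def magnified_optical_flow(onset, apex_magnified):
    """Toy per-pixel difference standing in for dense optical flow
    between the onset frame and the motion-magnified apex frame."""
    return [[a - o for o, a in zip(orow, arow)]
            for orow, arow in zip(onset, apex_magnified)]

def adaptive_recognition(classify, magnify, onset, apex,
                         factors=(2, 4, 8), threshold=0.8):
    """Raise the magnification factor until the classifier's top-1
    confidence clears the threshold, keeping the most confident
    (label, confidence) result seen so far."""
    best = None
    for alpha in factors:
        flow = magnified_optical_flow(onset, magnify(apex, alpha))
        label, conf = classify(flow)
        if best is None or conf > best[1]:
            best = (label, conf)
        if conf >= threshold:
            break
    return best
```

The early-exit loop captures the adaptive idea: easy samples stop at a small factor, while low-confidence samples are retried at stronger magnification.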

17 pages, 5065 KB  
Article
Genome-Wide microRNA Expression Profiling in Human Spermatozoa and Its Relation to Sperm Quality
by Nino-Guy Cassuto, Florence Boitrelle, Hakima Mouik, Lionel Larue, Gwenola Keromnes, Nathalie Lédée, Laura Part-Ellenberg, Geraldine Dray, Léa Ruoso, Alexandre Rouen, John De Vos and Said Assou
Genes 2025, 16(1), 53; https://doi.org/10.3390/genes16010053 - 4 Jan 2025
Cited by 6 | Viewed by 2996
Abstract
Background: Sperm samples are classified as good- or bad-quality based on their phenotype, but this classification does not reflect their genetic quality. Methods: Here, we used GeneChip miRNA arrays to analyze microRNA expression in ten semen samples selected based on high-magnification morphology (score 6 vs. score 0) to identify miRNAs linked to sperm phenotype. Results: We found 86 upregulated and 21 downregulated miRNAs in good-quality sperm (score 6) compared with bad-quality sperm samples (score 0) (fold change > 2 and p-value < 0.05). MiR-34 (FC × 30, p = 8.43 × 10−8), miR-30 (FC × 12, p = 3.75 × 10−6), miR-122 (FC × 8, p = 0.0031), miR-20 (FC × 5.6, p = 0.0223), miR-182 (FC × 4.83, p = 0.0008) and miR-191 (FC × 4, p = 1.61 × 10−6) were among these upregulated miRNAs. In silico prediction algorithms indicated that the miRNAs upregulated in good-quality sperm target 910 genes involved in key biological functions of spermatozoa, such as cell death and survival, cellular movement, molecular transport, response to stimuli, metabolism, and the regulation of oxidative stress. Genes deregulated in bad-quality sperm were involved in cell growth and proliferation. Conclusions: This study reveals that miRNA profiling may provide potential biomarkers of sperm quality.
(This article belongs to the Section Human Genomics and Genetic Diseases)
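The selection thresholds quoted in the Results (fold change > 2, p-value < 0.05) translate directly into a small filter. The function name and record layout below are illustrative assumptions; the values in the usage check are taken from the abstract.

```python
def select_deregulated(mirnas, fc_cutoff=2.0, p_cutoff=0.05):
    """Split (name, fold_change, p_value) records into up- and
    down-regulated lists using the abstract's thresholds: FC > 2
    for up, FC < 1/2 for down, p < 0.05 for both."""
    up = [name for name, fc, p in mirnas
          if fc > fc_cutoff and p < p_cutoff]
    down = [name for name, fc, p in mirnas
            if fc < 1.0 / fc_cutoff and p < p_cutoff]
    return up, down
```

For example, miR-34 (FC × 30, p = 8.43 × 10⁻⁸) and miR-191 (FC × 4, p = 1.61 × 10⁻⁶) pass the upregulated filter, while a record with FC × 3 but p = 0.2 would be excluded as non-significant.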

14 pages, 3010 KB  
Article
Micro-Expression Recognition Using Uncertainty-Aware Magnification-Robust Networks
by Mengting Wei, Yuan Zong, Xingxun Jiang, Cheng Lu and Jiateng Liu
Entropy 2022, 24(9), 1271; https://doi.org/10.3390/e24091271 - 9 Sep 2022
Cited by 6 | Viewed by 2782
Abstract
A micro-expression (ME) is a kind of involuntary facial expression that commonly occurs with subtle intensity. Accurate recognition of MEs, a.k.a. micro-expression recognition (MER), has a number of potential applications, e.g., interrogation and clinical diagnosis. The subject has therefore received a high level of attention among researchers in the affective computing and pattern recognition communities. In this paper, we propose a straightforward and effective deep learning method called uncertainty-aware magnification-robust networks (UAMRN) for MER, which attempts to address two key issues in MER: the low intensity of MEs and the imbalance of ME samples. Specifically, to better distinguish subtle ME movements, we reconstruct a new sequence by magnifying the ME intensity. Furthermore, a sparse self-attention (SSA) block is implemented which rectifies the standard self-attention with locality-sensitive hashing (LSH), suppressing the artefacts generated during magnification. For the class imbalance problem, we guide the network optimization based on the confidence of the estimation, through which samples from rare classes are allotted greater uncertainty and thus trained more carefully. We conducted experiments on three public ME databases, i.e., CASME II, SAMM and SMIC-HS, the results of which demonstrate improvement over recent state-of-the-art MER methods.
(This article belongs to the Topic Machine and Deep Learning)
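The idea of guiding optimization by per-sample confidence can be illustrated with a generic uncertainty-weighted loss: each sample's negative log-likelihood is scaled by a learned uncertainty term, with a regularizer so the model cannot declare everything uncertain. This is a sketch in the style of heteroscedastic classification losses, not the exact UAMRN objective; `log_sigma` is an assumed learned parameter.

```python
import math

def uncertainty_weighted_nll(prob_true_class, log_sigma):
    """Negative log-likelihood scaled by exp(-log_sigma), plus
    log_sigma as a penalty against unbounded uncertainty. Samples
    assigned higher uncertainty contribute a softer gradient."""
    nll = -math.log(prob_true_class)
    return math.exp(-log_sigma) * nll + log_sigma
```

Under this scheme a hard, likely rare-class sample (low probability on the true class) is down-weighted when its uncertainty rises, while an easy sample gains nothing from claiming high uncertainty, matching the abstract's description of training rare classes "more carefully".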

21 pages, 3201 KB  
Article
A Convolutional Neural Network for Compound Micro-Expression Recognition
by Yue Zhao and Jiancheng Xu
Sensors 2019, 19(24), 5553; https://doi.org/10.3390/s19245553 - 16 Dec 2019
Cited by 31 | Viewed by 27920
Abstract
Human beings are particularly inclined to express real emotions through micro-expressions with subtle amplitude and short duration. Though people regularly recognize many distinct emotions, for the most part, research studies have been limited to six basic categories: happiness, surprise, sadness, anger, fear, and disgust. Like normal expressions (i.e., macro-expressions), most current research into micro-expression recognition focuses on these six basic emotions. This paper describes an important group of micro-expressions, which we call compound emotion categories. Compound micro-expressions are constructed by combining two basic micro-expressions but reflect more complex mental states and more abundant human facial emotions. In this study, we first synthesized a Compound Micro-expression Database (CMED) based on existing spontaneous micro-expression datasets. The subtle features of micro-expressions make their motion tracks and characteristics difficult to observe, so synthesizing compound micro-expression images poses many challenges and limitations. The proposed method first applied the Eulerian Video Magnification (EVM) method to enhance the facial motion features of basic micro-expressions for generating compound images. The consistent and differential facial muscle articulations (typically referred to as action units) associated with each emotion category were labeled to form the foundation for generating compound micro-expressions. Secondly, we extracted the apex frames of the CMED by 3D Fast Fourier Transform (3D-FFT). Moreover, the proposed method calculated the optical flow between the onset frame and the apex frame to produce an optical flow feature map. Finally, we designed a shallow network to extract high-level features from these optical flow maps.
In this study, we combined four existing databases of spontaneous micro-expressions (CASME I, CASME II, CAS(ME)2, SAMM) to generate the CMED and test the validity of our network. Experimental results show that the deep network framework designed in this study can recognize the emotional information of both basic and compound micro-expressions well.
(This article belongs to the Special Issue MEMS Technology Based Sensors for Human Centered Applications)
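Apex spotting—finding the frame of peak expression intensity between onset and offset—can be approximated without the paper's 3D-FFT machinery by picking the frame most different from the onset frame. The sketch below is that simplified spatial proxy, offered only to make the pipeline step concrete; it is not the 3D-FFT method the paper uses.

```python
def apex_frame_index(frames, onset_index=0):
    """Return the index of the frame with the largest mean absolute
    pixel difference from the onset frame -- a simple spatial proxy
    for frequency-domain apex spotting. Frames are 2-D lists of
    grayscale intensities."""
    onset = frames[onset_index]
    def mean_abs_diff(frame):
        total = sum(abs(a - b)
                    for row_f, row_o in zip(frame, onset)
                    for a, b in zip(row_f, row_o))
        return total / (len(frame) * len(frame[0]))
    return max(range(len(frames)),
               key=lambda i: mean_abs_diff(frames[i]))
```

Once the apex index is known, the onset/apex pair feeds the optical-flow step described in the abstract.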
