
Advanced Signal Processing for Affective Computing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (25 August 2025)

Special Issue Editors


Dr. Hugo F. Posada-Quintero
Guest Editor
Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
Interests: electrophysiological signals; electrodermal activity; heart rate variability; electromyography; signal processing

Dr. Raul Fernandez Rojas
Guest Editor
School of Information Technology and Systems, University of Canberra, Canberra, ACT 2617, Australia
Interests: neurophysiological sensors; EEG; fNIRS; ECG; signal processing; cognitive computing

Dr. Yedukondala Rao Veeranki
Guest Editor
Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
Interests: nonlinear signal processing; electrodermal activity; electromyography; electroencephalography; machine learning; deep learning

Dr. Youngsun Kong
Guest Editor
Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
Interests: wearable devices (accelerometers, electrocardiography, photoplethysmography, and electrodermal activity sensors) for monitoring and understanding human physiology in uncomfortable or challenging environments

Special Issue Information

Dear Colleagues,

Affective computing, the field dedicated to understanding and responding to human emotions, has seen significant advances driven by breakthroughs in biomedical sensing, especially in methods for processing sensor-derived signals. This Special Issue gathers cutting-edge research exploring innovative signal processing techniques to accurately detect, recognize, and interpret human emotions, intentions, and physiological states. We invite contributions that address the challenges of acquiring, preprocessing, extracting features from, and classifying physiological signals such as EEG, ECG, EMG, EDA, and multimodal data. We encourage authors to demonstrate the practical application of their proposed methods in real-world scenarios, including but not limited to healthcare, human–computer interaction, and mental health.

Dr. Hugo F. Posada-Quintero
Dr. Raul Fernandez Rojas
Dr. Yedukondala Rao Veeranki
Dr. Youngsun Kong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form to submit a manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • affective computing
  • signal processing
  • emotion recognition
  • physiological signals
  • multimodal data
  • feature extraction
  • human–computer interaction
  • healthcare
  • mental health
  • artificial intelligence
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

19 pages, 2846 KB  
Article
Cross-Domain Object Detection with Hierarchical Multi-Scale Domain Adaptive YOLO
by Sihan Zhu, Peipei Zhu, Yuan Wu and Wensheng Qiao
Sensors 2025, 25(17), 5363; https://doi.org/10.3390/s25175363 - 29 Aug 2025
Abstract
To alleviate the performance degradation caused by domain shift, domain adaptive object detection (DAOD) has achieved compelling success in recent years. DAOD aims to improve the model's detection performance on the target domain by reducing the distribution discrepancy between domains. However, most existing methods are built on the two-stage Faster R-CNN, whose detection efficiency makes it unsuitable for real-time applications. In this paper, we propose a novel Hierarchical Multi-scale Domain Adaptive (HMDA) method built on a simple but effective one-stage YOLO framework. HMDA-YOLO mainly consists of hierarchical backbone adaptation and multi-scale head adaptation. The former performs hierarchical adaptation based on the differences in representational information of features at different depths of the backbone network, which promotes comprehensive distribution alignment and suppresses negative transfer. The latter makes full use of the rich discriminative information in the feature maps to be detected for multi-scale adaptation, reducing local instance divergence and preserving the model's multi-scale detection capability. In this way, HMDA improves the model's generalization ability while preserving its discriminative capability. We empirically verify the effectiveness of our method on four cross-domain object detection scenarios comprising different domain shifts. Experimental results and analyses demonstrate that HMDA-YOLO achieves competitive performance with real-time detection efficiency.
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)
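For readers unfamiliar with adversarial domain alignment, the sketch below shows one common way multi-scale adaptation of this kind is realized: a small domain classifier attached to each feature scale and trained through a gradient reversal layer. It is a minimal PyTorch illustration under assumed shapes and module names, not the authors' HMDA-YOLO implementation.

# Minimal sketch of multi-scale adversarial domain alignment, a common
# ingredient of DAOD methods. Module names and shapes are illustrative
# assumptions, not the authors' HMDA-YOLO code.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ScaleDomainDiscriminator(nn.Module):
    """Per-scale domain classifier attached to one backbone/head feature map."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 128, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1),  # one domain logit per spatial location
        )

    def forward(self, feat, lambd=1.0):
        feat = GradReverse.apply(feat, lambd)
        return self.net(feat)


def multi_scale_domain_loss(feats_src, feats_tgt, discriminators):
    """Sum of per-scale binary domain losses (source label 0, target label 1)."""
    bce = nn.BCEWithLogitsLoss()
    loss = 0.0
    for fs, ft, disc in zip(feats_src, feats_tgt, discriminators):
        ps, pt = disc(fs), disc(ft)
        loss = loss + bce(ps, torch.zeros_like(ps)) + bce(pt, torch.ones_like(pt))
    return loss

Minimizing this loss through the reversed gradients pushes the detector's features at every scale toward a domain-invariant distribution while the per-scale discriminators try to tell the domains apart.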

16 pages, 2132 KB  
Article
Development of Machine-Learning-Based Facial Thermal Image Analysis for Dynamic Emotion Sensing
by Budu Tang, Wataru Sato and Yasutomo Kawanishi
Sensors 2025, 25(17), 5276; https://doi.org/10.3390/s25175276 - 25 Aug 2025
Abstract
Information on the relationship between facial thermal responses and emotional state is valuable for sensing emotion. Yet, previous research has typically relied on linear methods of analysis based on regions of interest (ROIs), which may overlook nonlinear pixel-wise information across the face. To address this limitation, we investigated the use of machine learning (ML) for pixel-level analysis of facial thermal images to estimate dynamic emotional arousal ratings. We collected facial thermal data from 20 participants who viewed five emotion-eliciting films and assessed their dynamic emotional self-reports. Our ML models, including random forest regression, support vector regression, ResNet-18, and ResNet-34, consistently demonstrated superior estimation performance compared to traditional simple or multiple linear regression models for the ROIs. To interpret the nonlinear relationships between facial temperature changes and arousal, saliency maps and integrated gradients were used for the ResNet-34 model. The results show nonlinear associations between arousal ratings and temperature changes at the nose tip, forehead, and cheeks. These findings imply that ML-based analysis of facial thermal images can estimate emotional arousal more effectively, pointing to potential applications of non-invasive emotion sensing for mental health, education, and human–computer interaction.
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)
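As an illustration of the deep-regression idea above, the following sketch adapts a ResNet-18 to single-channel thermal frames with a scalar arousal output. The input size, absence of pretraining, and loss choice are assumptions for the sake of a runnable example, not the authors' exact pipeline.

# Minimal sketch: ResNet-18 regression from a thermal frame to a scalar arousal rating.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def build_thermal_arousal_model():
    model = resnet18(weights=None)
    # Thermal frames have a single channel instead of RGB.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Replace the 1000-way classifier with a single regression output.
    model.fc = nn.Linear(model.fc.in_features, 1)
    return model


if __name__ == "__main__":
    model = build_thermal_arousal_model()
    frames = torch.randn(8, 1, 224, 224)         # batch of (hypothetical) thermal frames
    arousal = torch.rand(8, 1)                   # dynamic self-report targets in [0, 1]
    loss = nn.MSELoss()(model(frames), arousal)  # simple regression objective
    loss.backward()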

22 pages, 5294 KB  
Article
Text-in-Image Enhanced Self-Supervised Alignment Model for Aspect-Based Multimodal Sentiment Analysis on Social Media
by Xuefeng Zhao, Yuxiang Wang and Zhaoman Zhong
Sensors 2025, 25(8), 2553; https://doi.org/10.3390/s25082553 - 17 Apr 2025
Abstract
The rapid development of social media has driven the need for opinion mining and sentiment analysis based on multimodal samples. As a fine-grained task within multimodal sentiment analysis, aspect-based multimodal sentiment analysis (ABMSA) enables the accurate and efficient determination of sentiment polarity for aspect-level targets. However, traditional ABMSA methods often perform suboptimally on social media samples, as the images in these samples typically contain embedded text that conventional models overlook, even though such text influences sentiment judgment. To address this issue, we propose a text-in-image enhanced self-supervised alignment model (TESAM) that accounts for multimodal information more comprehensively. Specifically, we employ Optical Character Recognition to extract embedded text from images and, based on the principle that text-in-image is an integral part of the visual modality, fuse it with visual features to obtain more comprehensive image representations. Additionally, we incorporate aspect words to guide the model in disregarding irrelevant semantic features, thereby reducing noise interference. Furthermore, to mitigate the semantic gap between modalities, we pre-train the feature extraction module with self-supervised alignment: unimodal semantic embeddings from the two modalities are aligned by computing errors based on Euclidean distance and cosine similarity. Experimental results demonstrate that TESAM achieves remarkable performance on three ABMSA benchmarks, validating the rationale and effectiveness of the proposed improvements.
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)
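The abstract describes a self-supervised alignment objective that combines Euclidean distance and cosine similarity between paired unimodal embeddings; a minimal sketch of such a loss is shown below. The weighting factor and embedding dimensions are illustrative assumptions, not the TESAM implementation.

# Minimal sketch of an alignment loss over paired text/image embeddings.
import torch
import torch.nn.functional as F


def alignment_loss(text_emb, image_emb, alpha=0.5):
    """text_emb, image_emb: (batch, dim) unimodal embeddings of paired samples."""
    # Euclidean (L2) error between paired embeddings.
    euclidean = F.pairwise_distance(text_emb, image_emb).mean()
    # Cosine error: 1 - similarity, so perfectly aligned pairs contribute zero.
    cosine = (1.0 - F.cosine_similarity(text_emb, image_emb, dim=-1)).mean()
    return alpha * euclidean + (1.0 - alpha) * cosine


if __name__ == "__main__":
    t = torch.randn(16, 256, requires_grad=True)  # text encoder output (assumed dim)
    v = torch.randn(16, 256, requires_grad=True)  # image encoder output (assumed dim)
    alignment_loss(t, v).backward()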

17 pages, 918 KB  
Article
Fractal Analysis of Electrodermal Activity for Emotion Recognition: A Novel Approach Using Detrended Fluctuation Analysis and Wavelet Entropy
by Luis R. Mercado-Diaz, Yedukondala Rao Veeranki, Edward W. Large and Hugo F. Posada-Quintero
Sensors 2024, 24(24), 8130; https://doi.org/10.3390/s24248130 - 19 Dec 2024
Cited by 2
Abstract
The field of emotion recognition from physiological signals is a growing area of research with significant implications for both mental health monitoring and human–computer interaction. This study introduces a novel approach to detecting emotional states based on fractal analysis of electrodermal activity (EDA) signals. We employed detrended fluctuation analysis (DFA), Hurst exponent estimation, and wavelet entropy calculation to extract fractal features from EDA signals obtained from the CASE dataset, which contains physiological recordings and continuous emotion annotations from 30 participants. The analysis revealed significant differences in fractal features across five emotional states (neutral, amused, bored, relaxed, and scared), particularly those derived from wavelet entropy. A cross-correlation analysis showed robust correlations between fractal features and both the arousal and valence dimensions of emotion, challenging the conventional view of EDA as a predominantly arousal-indicating measure. The application of machine learning for emotion classification using fractal features achieved a leave-one-subject-out accuracy of 84.3% and an F1 score of 0.802, surpassing the performance of previous methods on the same dataset. This study demonstrates the potential of fractal analysis in capturing the intricate, multi-scale dynamics of EDA signals for emotion recognition, opening new avenues for advancing emotion-aware systems and affective computing applications.
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)
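To give a flavor of the fractal features named in the abstract, the sketch below computes a DFA scaling exponent and a wavelet entropy value for a single EDA segment. Window sizes, the wavelet family, and the decomposition level are assumptions for illustration, not the authors' exact settings.

# Minimal sketch of DFA and wavelet entropy features for one EDA segment.
import numpy as np
import pywt


def dfa_exponent(x, scales=None):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for n in scales:
        n_windows = len(y) // n
        segments = y[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        # Linearly detrend each window and take the RMS of the residuals.
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segments]
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope


def wavelet_entropy(x, wavelet="db4", level=5):
    """Shannon entropy of the relative wavelet energy across decomposition levels."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))


if __name__ == "__main__":
    eda = np.cumsum(np.random.randn(4000))  # surrogate EDA-like signal, not CASE data
    print("DFA exponent:", dfa_exponent(eda))
    print("Wavelet entropy:", wavelet_entropy(eda))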
