
Signals

Signals is an international, peer-reviewed, open access journal on signals and signal processing published bimonthly online by MDPI.

Quartile Ranking JCR - Q2 (Engineering, Electrical and Electronic)

All Articles (302)

Automatic emotion recognition based on EEG has been a key research frontier in recent years, involving the direct extraction of emotional states from brain dynamics. However, existing deep learning approaches often treat EEG either as a sequence or as a static spatial map, thereby failing to jointly capture the temporal evolution and spatial dependencies underlying emotional responses. To address this limitation, we propose an Interpretable Residual Spatio-Temporal Graph Attention Network (IRSTGANet) that integrates temporal convolutional encoding with residual graph-attention blocks. The temporal module enhances short-term EEG dynamics, while the graph-attention layers learn adaptive connectivity among EEG channels and preserve contextual information through residual links. Evaluated on the DEAP and SEED datasets, the proposed model exceeded state-of-the-art methods on valence and arousal classification, as well as four-class and nine-class classification, on DEAP, and on the three-class task on SEED. These results demonstrate that combining temporal enhancement with residual graph attention yields both improved recognition performance and interpretable insights into emotion-related neural connectivity.
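To make the residual graph-attention idea concrete, here is a minimal numpy sketch of a single-head graph-attention update over EEG channel nodes with a residual link. All names, shapes, and the LeakyReLU slope are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_residual_block(H, W, a):
    """Single-head graph attention with a residual connection.

    H : (N, F) node features (N EEG channels, F features per channel)
    W : (F, F) shared linear projection
    a : (2F,)  attention vector
    Returns (N, F) updated node features.
    """
    Z = H @ W                          # project node features
    N = Z.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    logits = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = s if s > 0 else 0.2 * s   # LeakyReLU
    alpha = softmax(logits, axis=1)    # learned adaptive connectivity
    out = alpha @ Z                    # aggregate over all channels
    return H + out                     # residual link preserves context

# toy example: 4 channels, 8 features each
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8)) * 0.1
a = rng.standard_normal(16) * 0.1
H2 = gat_residual_block(H, W, a)
print(H2.shape)  # (4, 8)
```

The attention matrix `alpha` is what makes the model interpretable: its rows can be inspected as learned inter-channel connectivity weights.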

5 February 2026

The architecture of the IRSTGANet model: (a) preprocessing, (b) temporal feature extractor, (c) projection block, (d) stacked residual GAT blocks (N = 2) with global aggregation, (e) classification head, and (f) the four emotion classes used.

Despite successful revascularization, patients with non-ST elevation myocardial infarction (NSTEMI) remain at higher risk of mortality and morbidity. Accurately predicting mortality risk in this cohort can improve outcomes through timely interventions. This study is the first to predict 1-year all-cause mortality in an NSTEMI cohort using features extracted primarily from the aortic pressure (AP) signal recorded during cardiac catheterization. We retrospectively analyzed data from 497 NSTEMI patients (66.3 ± 12.9 years, 187 (37.6%) females). We developed three survival models to predict mortality: multivariate Cox proportional hazards, DeepSurv, and random survival forest. We then used Shapley additive explanations (SHAP) to interpret the decision-making process of the best-performing model. Using 5-fold stratified cross-validation, DeepSurv achieved an average C-index of 0.935, an IBS of 0.028, and a mean time-dependent AUC of 0.939, outperforming the other models. Ejection systolic time, ejection systolic period, the difference between systolic blood pressure and dicrotic notch pressure (DesP), skewness, the age-modified shock index, and the myocardial oxygen supply/demand ratio were identified by SHAP as the most characteristic AP features. In conclusion, AP signal features offer valuable prognostic insight for predicting 1-year all-cause mortality in the NSTEMI population, leading to enhanced risk stratification and clinical decision-making.
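The headline metric here, the concordance index (C-index = 0.935 for DeepSurv), measures how often the model ranks risk correctly among comparable patient pairs under right censoring. A plain-Python sketch of Harrell's C-index on a toy cohort (all data illustrative):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    times       : observed time (event or censoring) per patient
    events      : 1 if the event was observed, 0 if censored
    risk_scores : model output; higher score = higher predicted risk
    """
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable only if i's event occurred
            # before j's observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# toy cohort: earlier deaths get higher predicted risk (perfect ranking)
times  = [2, 5, 7, 10]
events = [1, 1, 0, 1]     # patient 3 is censored
risk   = [0.9, 0.7, 0.3, 0.1]
print(concordance_index(times, events, risk))  # 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why 0.935 indicates strong discrimination.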

2 February 2026

Block diagram of the proposed methodology. CPH = Cox proportional hazards, RSF = random survival forest, C-index = concordance index, and IBS = integrated Brier score. The green squares highlight the key stages of the study, including signal preprocessing, feature selection, and two stratified K-fold cross-validation steps for model development and evaluation.

Foreground segmentation and background subtraction are critical components in many computer vision applications, such as intelligent video surveillance, urban security systems, and obstacle detection for autonomous vehicles. Although extensively studied over the past decades, these tasks remain challenging, particularly due to rapid illumination changes, dynamic backgrounds, cast shadows, and camera movements. The emergence of supervised deep learning-based methods has significantly enhanced performance, surpassing traditional approaches on the benchmark dataset CDnet2014. In this context, this paper provides a comprehensive review of recent supervised deep learning techniques applied to background subtraction, along with an in-depth comparative analysis of state-of-the-art approaches available on the official CDnet2014 results platform. Specifically, we examine several key architecture families, including convolutional neural networks (CNN and FCN), encoder–decoder models such as FgSegNet and Motion U-Net, adversarial frameworks (GAN), Transformer-based architectures, and hybrid methods combining intermittent semantic segmentation with rapid detection algorithms such as RT-SBS-v2. Beyond summarizing existing works, this review contributes a structured cross-family comparison under a unified benchmark, a focused analysis of performance behavior across challenging CDnet2014 scenarios, and a critical discussion of the trade-offs between segmentation accuracy, robustness, and computational efficiency for practical deployment.
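For contrast with the supervised deep models the review surveys, a minimal numpy sketch of the classical running-average background model, one of the traditional baselines they surpass. The learning rate and threshold values are illustrative assumptions.

```python
import numpy as np

def background_subtract(frames, alpha=0.05, threshold=30):
    """Classic running-average background subtraction.

    frames    : iterable of (H, W) grayscale frames
    alpha     : background learning rate (0 = frozen, 1 = last frame)
    threshold : |frame - background| above this => foreground
    Yields a boolean foreground mask per frame.
    """
    background = None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()       # bootstrap from first frame
        mask = np.abs(f - background) > threshold
        # update the model only where the scene looks static
        background = np.where(mask, background,
                              (1 - alpha) * background + alpha * f)
        yield mask

# toy sequence: static scene, then a bright "object" enters
static = np.zeros((4, 4))
moving = static.copy()
moving[1:3, 1:3] = 255
masks = list(background_subtract([static, static, moving]))
print(masks[2].sum())  # 4 foreground pixels
```

This baseline already illustrates the failure modes the review highlights: a single per-pixel average cannot cope with rapid illumination changes, dynamic backgrounds, or cast shadows, which is what motivates the learned approaches.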

2 February 2026

Tree diagram illustrating the taxonomy of major supervised approaches for background subtraction and moving object segmentation. Methods are categorized by architectural families: CNN/FCN, U-Net, GAN, FgSegNet, Transformers, lightweight models, and hybrid approaches.

Feature extraction and description are fundamental components of visual perception systems used in applications such as visual odometry, Simultaneous Localization and Mapping (SLAM), and autonomous navigation. On resource-constrained platforms, such as Unmanned Aerial Vehicles (UAVs), achieving real-time hardware acceleration on Field-Programmable Gate Arrays (FPGAs) is challenging. This work demonstrates an FPGA-based implementation of an adaptive ORB (Oriented FAST and Rotated BRIEF) feature extraction pipeline designed for high-throughput and energy-efficient embedded vision. The proposed architecture is a completely new design for the main algorithmic blocks of ORB, including the FAST (Features from Accelerated Segment Test) feature detector, Gaussian image filtering, moment computation, and descriptor generation. Adaptive mechanisms are introduced to dynamically adjust thresholds and filtering behavior, improving robustness under varying illumination conditions. The design is developed using a High-Level Synthesis (HLS) approach, where all processing modules are implemented as reusable hardware IP cores and integrated at the system level. The architecture is deployed and evaluated on two FPGA platforms, PYNQ-Z2 and KRIA KR260, and its performance is compared against CPU and GPU implementations using a dedicated C++ testbench based on OpenCV. Experimental results demonstrate significant improvements in throughput and energy efficiency while maintaining stable and scalable performance, making the proposed solution suitable for real-time embedded vision applications on UAVs and similar platforms. Notably, the FPGA implementation increases DSP utilization from 11% to 29% compared to previous designs by other researchers, effectively offloading computational tasks from general-purpose logic (LUTs and FFs), reducing LUT usage by 6% and FF usage by 13%, while maintaining overall design stability, scalability, and acceptable thermal margins at 2.387 W. This work establishes a robust foundation for integrating the optimized ORB pipeline into larger drone systems and opens the door for future system-level enhancements.
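The FAST detector at the core of this pipeline applies a segment test: a pixel is a corner if at least n contiguous pixels on a 16-pixel Bresenham circle of radius 3 are all brighter than the center plus a threshold, or all darker than the center minus it. A minimal software sketch of the FAST-9 test (toy image and threshold value are illustrative, not the hardware design):

```python
import numpy as np

# offsets of the 16-pixel Bresenham circle of radius 3 used by FAST
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
          (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
          (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """FAST-n segment test at pixel (x, y).

    Corner if at least n contiguous circle pixels are all brighter
    than center + t or all darker than center - t.
    """
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (+1, -1):       # +1: brighter arc, -1: darker arc
        run = 0
        # duplicate the ring so contiguous runs can wrap around
        for p in ring + ring:
            if sign * (p - c) > t:
                run += 1
                if run >= n:
                    return True
            else:
                run = 0
    return False

# toy image: a small bright blob; at its center the whole radius-3
# ring is darker than the center, so the segment test fires
img = np.zeros((9, 9), dtype=np.uint8)
img[3:6, 3:6] = 255
flat = np.full((9, 9), 128, dtype=np.uint8)
print(is_fast_corner(img, 4, 4, t=200),
      is_fast_corner(flat, 4, 4, t=200))  # True False
```

Each of the 16 comparisons is independent, which is why the segment test maps so well onto parallel FPGA logic.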

2 February 2026

Example of FAST corner detection with a threshold of 200.

Signals - ISSN 2624-6120