Signals, Volume 7, Issue 1 (February 2026) – 18 articles

Cover Story: The accurate detection of sleep stages and disorders in older adults is essential for the effective diagnosis and treatment of sleep disorders affecting millions worldwide. Although Polysomnography (PSG) remains the primary method for monitoring sleep in medical settings, it is costly and time-consuming. Recent automated models have not fully explored and effectively fused the sleep features that are essential to identify sleep stages and disorders. This study proposes a novel automated model for the detection of sleep stages and disorders in older adults by analyzing PSG recordings. PSG data include multiple channels, and the use of our proposed advanced methods reveals the potential correlations and complementary features across EEG, EOG, and EMG signals.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF, click its "PDF Full-text" link and open the file with the free Adobe Reader.
17 pages, 11487 KB  
Article
ML-CDAE: Multi-Lead Convolutional Denoising Autoencoder for Denoising 12-Lead ECG Signals
by Malaz Alfa, Fars Samann and Thomas Schanze
Signals 2026, 7(1), 18; https://doi.org/10.3390/signals7010018 - 19 Feb 2026
Viewed by 526
Abstract
Background: Electrocardiography (ECG), particularly the 12-lead configuration, is a crucial method for identifying heart rhythm abnormalities. However, its effectiveness can be reduced by noise contamination. State-of-the-art denoising methods based on neural networks have demonstrated promising performance in denoising complex biosignals like ECG. However, most of these methods have focused on denoising single-lead ECG recordings. Methods: This research aims to leverage the inherent correlation among multi-lead ECG signals. Therefore, a multi-lead convolutional denoising autoencoder (ML-CDAE) model is proposed to learn more effective representations, leading simultaneously to improved denoising performance and enhanced quality of 12-lead ECG recordings. Results: The findings indicate that ML-CDAE consistently outperforms a single-lead convolutional denoising autoencoder (SL-CDAE) and a fully convolutional denoising autoencoder (FCN-DAE) model in denoising ECG signals corrupted by a mixture of physical noises. In particular, the mean squared error (MSE) and signal-to-noise ratio improvement (SNRimp) are used as evaluation metrics to assess the performance. Conclusions: The strong correlation among multi-lead ECG signals can be leveraged not only to enhance the denoising performance of the ML-CDAE model but also to simultaneously denoise 12-lead ECG signals more successfully compared to both the SL-CDAE and FCN-DAE models. Full article
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing II)
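The abstract's evaluation metrics, MSE and SNR improvement (SNRimp), are standard quantities. A minimal sketch of how they are typically computed from a clean reference, its noisy version, and a denoiser's output (illustrative only, not the authors' code):

```python
import numpy as np

def snr_db(clean, estimate):
    """Signal-to-noise ratio in dB of an estimate against the clean reference."""
    noise = clean - estimate
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def snr_improvement(clean, noisy, denoised):
    """SNRimp: output SNR (after denoising) minus input SNR (before denoising)."""
    return snr_db(clean, denoised) - snr_db(clean, noisy)

def mse(clean, denoised):
    """Mean squared error between the clean reference and the denoised output."""
    return np.mean((clean - denoised) ** 2)
```

A successful denoiser yields a positive SNRimp and a lower MSE than the raw noisy input.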
48 pages, 3308 KB  
Review
From Neurons to Networks: A Holistic Review of Electroencephalography (EEG) from Neurophysiological Foundations to AI Techniques
by Christos Kalogeropoulos, Konstantinos Theofilatos and Seferina Mavroudi
Signals 2026, 7(1), 17; https://doi.org/10.3390/signals7010017 - 16 Feb 2026
Viewed by 1951
Abstract
Electroencephalography (EEG) has transitioned from a subjective observational method into a data-intensive analytical field that utilises sophisticated algorithms and mathematical models. This review provides a holistic foundation by detailing the neurophysiological basis, recording techniques, and applications of EEG before providing a rigorous examination of traditional and modern analytical pillars. Statistical and Time-Series Analysis, Spectral and Time-Frequency Analysis, Spatial Analysis and Source Modelling, Connectivity and Network Analysis, and Nonlinear and Chaotic Analysis are explored. Afterwards, while acknowledging the historical role of Machine Learning (ML) and Deep Learning (DL) architectures, such as Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs), this review shifts the primary focus toward current state-of-the-art Artificial Intelligence (AI) trends. We place emphasis on the emergence of Foundation Models, including Large Language Models (LLMs) and Large Vision Models (LVMs), adapted for high-dimensional neural sequences. Finally, we explore the integration of Generative AI for data augmentation and review Explainable AI (XAI) frameworks designed to bridge the gap between “black-box” decoding and clinical interpretability. We conclude that the next generation of EEG analysis will likely converge into Neuro-Symbolic architectures, synergising the massive generative power of foundation models with the rigorous, rule-based interpretability of classical signal theory. Full article
20 pages, 2117 KB  
Article
An Interpretable Residual Spatio-Temporal Graph Attention Network for Multiclass Emotion Recognition from EEG
by Manal Hilali, Abdellah Ezzati, Said Ben Alla and Ahmed El Badaoui
Signals 2026, 7(1), 16; https://doi.org/10.3390/signals7010016 - 5 Feb 2026
Viewed by 704
Abstract
Automatic emotion recognition based on EEG has been a key research frontier in recent years, involving the direct extraction of emotional states from brain dynamics. However, existing deep learning approaches often treat EEG either as a sequence or as a static spatial map, thereby failing to jointly capture the temporal evolution and spatial dependencies underlying emotional responses. To address this limitation, we propose an Interpretable Residual Spatio-Temporal Graph Attention Network (IRSTGANet) that integrates temporal convolutional encoding with residual graph-attention blocks. The temporal module enhances short-term EEG dynamics, while the graph-attention layers learn adaptive node connectivity relationships and preserve contextual information through residual links. Evaluated on the DEAP and SEED datasets, the proposed model achieved exceptional performance on valence and arousal, as well as four-class and nine-class classification on the DEAP dataset and on the three-class SEED dataset, exceeding state-of-the-art methods. These results demonstrate that combining temporal enhancement with residual graph attention yields both improved recognition performance and interpretable insights into emotion-related neural connectivity. Full article
27 pages, 7101 KB  
Article
Predicting 1-Year Mortality in Patients with Non-ST Elevation Myocardial Infarction (NSTEMI) Using Survival Models and Aortic Pressure Signals Recorded During Cardiac Catheterization
by Seyed Reza Razavi, Ashish H. Shah and Zahra Moussavi
Signals 2026, 7(1), 15; https://doi.org/10.3390/signals7010015 - 2 Feb 2026
Viewed by 450
Abstract
Despite successful revascularization, patients with non-ST elevation myocardial infarction (NSTEMI) remain at higher risk of mortality and morbidity. Accurately predicting mortality risk in this cohort can improve outcomes through timely interventions. This study for the first time predicts 1-year all-cause mortality in an NSTEMI cohort using features extracted primarily from the aortic pressure (AP) signal recorded during cardiac catheterization. We retrospectively analyzed data from 497 NSTEMI patients (66.3 ± 12.9 years, 187 (37.6%) females). We developed three survival models (multivariate Cox proportional hazards, DeepSurv, and random survival forest) to predict mortality, and then used Shapley additive explanations (SHAP) to interpret the decision-making process of the best-performing survival model. Using 5-fold stratified cross-validation, DeepSurv achieved an average C-index of 0.935, an IBS of 0.028, and a mean time-dependent AUC of 0.939, outperforming the other models. Ejection systolic time, ejection systolic period, the difference between systolic blood pressure and dicrotic notch pressure (DesP), skewness, the age-modified shock index, and myocardial oxygen supply/demand ratio were identified by SHAP as the most characteristic AP features. In conclusion, AP signal features offer valuable prognostic insight for predicting 1-year all-cause mortality in the NSTEMI population, leading to enhanced risk stratification and clinical decision-making. Full article
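The C-index reported for DeepSurv measures how often the model ranks patient risk consistently with observed survival under right censoring. A plain-Python sketch of Harrell's concordance index, assuming the standard definition (the paper's exact variant and tie handling may differ; libraries such as lifelines provide tested implementations):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs whose
    predicted risk ordering agrees with the observed survival ordering.
    A pair is comparable when the patient with the shorter follow-up
    time actually experienced the event (events[i] == 1)."""
    n = len(times)
    concordant, comparable = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if times[i] < times[j] else (j, i)  # a = shorter time
            if times[a] == times[b] or not events[a]:
                continue  # tied times or censored-first pairs are skipped
            comparable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1.0   # higher predicted risk died earlier
            elif risk_scores[a] == risk_scores[b]:
                concordant += 0.5   # tied predictions count half
    return concordant / comparable if comparable else float("nan")
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering.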
28 pages, 2553 KB  
Review
Comparative Study of Supervised Deep Learning Architectures for Background Subtraction and Motion Segmentation on CDnet2014
by Oussama Boufares, Wajdi Saadaoui and Mohamed Boussif
Signals 2026, 7(1), 14; https://doi.org/10.3390/signals7010014 - 2 Feb 2026
Viewed by 487
Abstract
Foreground segmentation and background subtraction are critical components in many computer vision applications, such as intelligent video surveillance, urban security systems, and obstacle detection for autonomous vehicles. Although extensively studied over the past decades, these tasks remain challenging, particularly due to rapid illumination changes, dynamic backgrounds, cast shadows, and camera movements. The emergence of supervised deep learning-based methods has significantly enhanced performance, surpassing traditional approaches on the benchmark dataset CDnet2014. In this context, this paper provides a comprehensive review of recent supervised deep learning techniques applied to background subtraction, along with an in-depth comparative analysis of state-of-the-art approaches available on the official CDnet2014 results platform. Specifically, we examine several key architecture families, including convolutional neural networks (CNN and FCN), encoder–decoder models such as FgSegNet and Motion U-Net, adversarial frameworks (GAN), Transformer-based architectures, and hybrid methods combining intermittent semantic segmentation with rapid detection algorithms such as RT-SBS-v2. Beyond summarizing existing works, this review contributes a structured cross-family comparison under a unified benchmark, a focused analysis of performance behavior across challenging CDnet2014 scenarios, and a critical discussion of the trade-offs between segmentation accuracy, robustness, and computational efficiency for practical deployment. Full article
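The traditional baselines that these supervised deep models are compared against include simple statistical background models. A minimal running-average sketch of background subtraction (a generic classical technique, not one of the reviewed architectures; `alpha` and `thresh` are illustrative parameters):

```python
import numpy as np

def running_average_bg(frames, alpha=0.05, thresh=30.0):
    """Classical running-average background model: each frame updates the
    background estimate, and pixels far from it are labeled foreground."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)   # foreground mask for this frame
        bg = (1.0 - alpha) * bg + alpha * f     # slowly adapt the background
    return masks
```

Such models fail under the very conditions the review highlights (dynamic backgrounds, illumination changes), which is what motivates the learned approaches.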
20 pages, 1202 KB  
Article
Adaptive ORB Accelerator on FPGA: High Throughput, Power Consumption, and More Efficient Vision for UAVs
by Hussam Rostum and József Vásárhelyi
Signals 2026, 7(1), 13; https://doi.org/10.3390/signals7010013 - 2 Feb 2026
Viewed by 671
Abstract
Feature extraction and description are fundamental components of visual perception systems used in applications such as visual odometry, Simultaneous Localization and Mapping (SLAM), and autonomous navigation. In resource-constrained platforms, such as Unmanned Aerial Vehicles (UAVs), achieving real-time hardware acceleration on Field-Programmable Gate Arrays (FPGAs) is challenging. This work demonstrates an FPGA-based implementation of an adaptive ORB (Oriented FAST and Rotated BRIEF) feature extraction pipeline designed for high-throughput and energy-efficient embedded vision. The proposed architecture is a completely new design for the main algorithmic blocks of ORB, including the FAST (Features from Accelerated Segment Test) feature detector, Gaussian image filtering, moment computation, and descriptor generation. Adaptive mechanisms are introduced to dynamically adjust thresholds and filtering behavior, improving robustness under varying illumination conditions. The design is developed using a High-Level Synthesis (HLS) approach, where all processing modules are implemented as reusable hardware IP cores and integrated at the system level. The architecture is deployed and evaluated on two FPGA platforms, PYNQ-Z2 and KRIA KR260, and its performance is compared against CPU and GPU implementations using a dedicated C++ testbench based on OpenCV. Experimental results demonstrate significant improvements in throughput and energy efficiency while maintaining stable and scalable performance, making the proposed solution suitable for real-time embedded vision applications on UAVs and similar platforms. Notably, the FPGA implementation increases DSP utilization from 11% to 29% compared to the previous designs implemented by other researchers, effectively offloading computational tasks from general purpose logic (LUTs and FFs), reducing LUT usage by 6% and FF usage by 13%, while maintaining overall design stability, scalability, and acceptable thermal margins at 2.387 W. 
This work establishes a robust foundation for integrating the optimized ORB pipeline into larger drone systems and opens the door for future system-level enhancements. Full article
19 pages, 2385 KB  
Article
Multitrack Music Transcription Based on Joint Learning of Onset and Frame Streams
by Tomoki Matsunaga and Hiroaki Saito
Signals 2026, 7(1), 12; https://doi.org/10.3390/signals7010012 - 2 Feb 2026
Viewed by 697
Abstract
Multitrack music transcription is the task of converting music recordings into symbolic music representations that are assigned to individual instruments. This task requires simultaneous transcription of note onset and offset events for individual instruments. In addition, the limited resources of many transcription datasets make multitrack music transcription challenging. Thus, even state-of-the-art transcription systems are inadequate for applications requiring high accuracy. In this paper, we propose a framework to jointly transcribe onsets and frames for multiple instruments by integrating a deep learning architecture based on U-Net with an architecture based on Perceiver, which is a variant of the Transformer architecture. The proposed framework effectively detects the pitches of different instruments by employing the multi-layer combined frequency and periodicity (ML-CFP) with multilayered frequency-domain and quefrency-domain features as the input data representation. Our experiments demonstrate that the proposed multitrack music transcription system outperforms existing systems on five transcription datasets, including low-resource datasets. Furthermore, we evaluate the proposed system in terms of instrument type and show that the system provides high-quality transcription results for the predominant instruments. Full article
39 pages, 4251 KB  
Article
An Experimental Tabletop Platform for Bidirectional Molecular Communication Using Advection–Diffusion Dynamics in Bio-Inspired Nanonetworks
by Nefeli Chatzisavvidou, Stefanos Papasotiriou, Ioanna Vrachni, Konstantinos Kantelis, Petros Nicopolitidis and Georgios Papadimitriou
Signals 2026, 7(1), 11; https://doi.org/10.3390/signals7010011 - 2 Feb 2026
Viewed by 450
Abstract
With rapid advances in nanotechnology and synthetic biology, biological nanonetworks are emerging for biomedical and environmental applications within the Internet of Bio-NanoThings. While they rely on molecular communication, experimental validation remains limited, especially for non-ideal effects such as molecular accumulation. In this work, we present a novel table-top experimental system that emulates the core functionalities of a biological nanonetwork and is straightforward to reproduce in standard laboratory environments, also making it suitable for educational demonstrations. To the best of our knowledge, this is the first experimental platform that incorporates two end nodes capable of acting interchangeably as transmitter and receiver, thereby enabling true bidirectional molecular communication. Information transfer is realized through controlled release, advection and diffusion of molecules, using molecular concentration coding analogous to concentration shift keying, while the receiver decodes messages by comparing measured concentrations against predefined thresholds. Based on the measurements reported herein, the drop-based algorithm substantially outperforms the threshold-based scheme. Specifically, it reduces first-message latency by more than 2.5× across the tested volumes and reduces latest-message latency by up to 71%, providing approximately 3.7× better message delivery. A key experimental outcome is the observation of channel saturation: beyond a certain operating period, residual molecules accumulate and effectively saturate the medium, inhibiting reliable further message exchange until sufficient clearance occurs. This saturation-induced “channel memory” emerges as a fundamental practical constraint on sustained communication and achievable data rates. 
Overall, the proposed platform provides a scalable, controllable, and experimentally accessible testbed for systematically studying signal degradation, saturation, clearance dynamics, and throughput limits, thereby bridging the gap between theoretical models and practical implementations in the Internet of Bio-NanoThings era. Full article
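The receiver logic described above, comparing measured concentrations against predefined thresholds as in concentration shift keying, can be sketched as follows (threshold values are illustrative, not the ones calibrated on the platform):

```python
def decode_csk(concentrations, thresholds):
    """Map each measured concentration to a symbol by counting how many
    ascending decision thresholds it meets (binary CSK uses one threshold)."""
    symbols = []
    for c in concentrations:
        s = 0
        for t in sorted(thresholds):
            if c >= t:
                s += 1
        symbols.append(s)
    return symbols
```

The channel-memory effect the authors report would show up here as residual concentration pushing later measurements above thresholds they should fall below.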
16 pages, 4287 KB  
Article
A Bispectral Slice Negentropy Analysis Method for the Detection and Diagnosis of Rolling Bearing Faults
by Yifan Liu, Yonggang Xu, Yanping Zhu, Xue Zou and Huaming Zhang
Signals 2026, 7(1), 10; https://doi.org/10.3390/signals7010010 - 2 Feb 2026
Viewed by 328
Abstract
Bearing fault diagnosis is critical in rotating machinery, and collecting and analyzing vibration signals from faulty bearings is a widely employed method in fault diagnosis. To efficiently extract periodic pulse information from complex signals and accurately identify fault characteristic frequencies, this paper proposes a Bispectral Slice Negentropy Analysis (BSNA) method. This method leverages the nonlinear characteristics of bispectral analysis and the sensitivity of negentropy measures to transform one-dimensional signals into two-dimensional spectra. By utilizing the demodulation capability of the time-frequency modulation bispectrum, it highlights the relationship between resonance bands and modulation frequency, while maximizing the preservation of critical fault information and minimizing the impact of interference signals. The fault information contained in the slices is subsequently quantified using the correlation spectral negentropy (CSNE), which effectively captures the magnitude of periodic pulse energy. By calculating the CSNE of each modulation frequency slice and visualizing it, the energy distribution of periodic pulses within each slice can be effectively observed. The feasibility of this method in rolling bearing fault diagnosis has been validated through simulation analysis and experimental comparison. This approach enables the accurate identification of fault characteristic frequency and its harmonics, thereby significantly enhancing the accuracy and robustness of fault diagnosis, particularly in complex and noisy environments. Full article
(This article belongs to the Special Issue Condition Monitoring and Intelligent Fault Diagnosis of Rotor System)
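The CSNE measure itself is defined in the paper. As a generic illustration of why negentropy-style measures respond to periodic impulses, one common discrete analogue is the KL divergence of a normalized energy distribution from the uniform distribution (an assumption for illustration, not the paper's CSNE formula):

```python
import numpy as np

def spectral_negentropy(energies):
    """Negentropy of a normalized energy distribution: its KL divergence
    from the uniform distribution. Zero for a flat spectrum, larger when
    energy concentrates in a few components, as fault impulses produce."""
    p = np.asarray(energies, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]  # 0 * log 0 is taken as 0
    return float(np.sum(nz * np.log(len(p) * nz)))
```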
31 pages, 753 KB  
Article
Event-Triggered Robust Fusion Estimation for Multi-Sensor Systems Under Random Packet Drops
by Shaoxun Lu and Huabo Liu
Signals 2026, 7(1), 9; https://doi.org/10.3390/signals7010009 - 21 Jan 2026
Viewed by 319
Abstract
This paper focuses on the design of robust fusion estimators for multi-sensor systems experiencing constrained communications, model uncertainties, and random packet dropouts. To mitigate the impact of modeling errors, a sensitivity-penalized robust state estimator is employed at each local estimator. At the local fusion estimators, a centralized robust fusion estimation algorithm is derived by improving the cost function of the sensitivity-penalized estimator. The implementation of an event-triggered strategy effectively alleviates the burden on the communication channels linking the sensors and the fusion center. Moreover, the fusion estimator is capable of handling packet drops caused by unreliable communication channels, and the pseudo cross-covariance matrix is accordingly formulated. Sufficient conditions are derived to ensure the uniform boundedness of the estimation error for the proposed robust fusion estimator. Finally, simulation experiments using a tractor-car system validate the performance and advantages of the presented algorithm. Full article
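The event-triggered idea, transmitting to the fusion center only when a measurement is informative enough, can be sketched with a simple send-on-delta rule (a common triggering condition used here for illustration; the paper's trigger and `delta` are its own):

```python
def event_triggered_stream(measurements, delta):
    """Transmit a measurement only when it differs from the last transmitted
    value by more than delta; otherwise the fusion center keeps its estimate
    based on the previously received value, saving channel bandwidth."""
    transmitted = []
    last = None
    for y in measurements:
        if last is None or abs(y - last) > delta:
            transmitted.append(y)
            last = y
    return transmitted
```

Fewer transmissions reduce channel load at the cost of estimation error bounded by the trigger threshold.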
34 pages, 18159 KB  
Review
Firebug Swarm Optimization Algorithm: An Overview and Applications
by Faroq Awin, Yasser Alginahi and Esam Abdel-Raheem
Signals 2026, 7(1), 8; https://doi.org/10.3390/signals7010008 - 13 Jan 2026
Viewed by 547
Abstract
This survey delves into the Firebug Swarm Optimization (FSO) algorithm, an advanced global optimization algorithm that plays a pivotal role in modern swarm intelligence optimization techniques. It explores the core principles of the FSO algorithm and examines the various hybrid variants developed to address complex optimization challenges. This survey also traces the evolution of swarm optimization methods, shedding light on the natural phenomena and biological processes that have inspired these algorithms. Furthermore, it highlights the diverse real-world applications of the FSO algorithm, showcasing its effectiveness in fields such as engineering, data science, and artificial intelligence. To provide a comprehensive comparison, the survey includes a case study that evaluates the FSO algorithm’s performance against other existing algorithms. Lastly, the survey identifies key open research questions and suggests potential future directions for advancing the FSO algorithm and other nature-inspired optimization techniques, aiming to overcome current limitations and unlock new possibilities. Full article
26 pages, 29009 KB  
Article
Quantifying the Relationship Between Speech Quality Metrics and Biometric Speaker Recognition Performance Under Acoustic Degradation
by Ajan Ahmed and Masudul H. Imtiaz
Signals 2026, 7(1), 7; https://doi.org/10.3390/signals7010007 - 12 Jan 2026
Viewed by 1056
Abstract
Self-supervised learning (SSL) models have achieved remarkable success in speaker verification tasks, yet their robustness to real-world audio degradation remains insufficiently characterized. This study presents a comprehensive analysis of how audio quality degradation affects three prominent SSL-based speaker verification systems (WavLM, Wav2Vec2, and HuBERT) across three diverse datasets: TIMIT, CHiME-6, and Common Voice. We systematically applied 21 degradation conditions spanning noise contamination (SNR levels from 0 to 20 dB), reverberation (RT60 from 0.3 to 1.0 s), and codec compression (various bit rates), then measured both objective audio quality metrics (PESQ, STOI, SNR, SegSNR, fwSNRseg, jitter, shimmer, HNR) and speaker verification performance metrics (EER, AUC-ROC, d-prime, minDCF). At the condition level, multiple regression with all eight quality metrics explained up to 80% of the variance in minDCF for HuBERT and 78% for WavLM, but only 35% for Wav2Vec2; EER predictability was lower (69%, 67%, and 28%, respectively). PESQ was the strongest single predictor for WavLM and HuBERT, while Shimmer showed the highest single-metric correlation for Wav2Vec2; fwSNRseg yielded the top single-metric R2 for WavLM, and PESQ for HuBERT and Wav2Vec2 (with much smaller gains for Wav2Vec2). WavLM and HuBERT exhibited more predictable quality-performance relationships compared to Wav2Vec2. These findings establish quantitative relationships between measurable audio quality and speaker verification accuracy at the condition level, though substantial within-condition variability limits utterance-level prediction accuracy. Full article
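The condition-level analysis, regressing a performance metric such as minDCF on the eight quality metrics and reporting explained variance, reduces to ordinary least squares plus R². A minimal sketch (illustrative of the statistical machinery, not the authors' pipeline):

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least-squares fit of y on X (with intercept) and the
    resulting coefficient of determination R^2."""
    A = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # OLS coefficients
    residuals = y - A @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

With all eight quality metrics as columns of X, R² values like the reported 0.80 for HuBERT's minDCF quantify how much condition-level performance is linearly predictable from audio quality.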
12 pages, 3032 KB  
Article
Inverse Synthetic Aperture Radar Imaging of Space Objects Using Probing Signal with a Zero Autocorrelation Zone
by Roman N. Ipanov and Aleksey A. Komarov
Signals 2026, 7(1), 6; https://doi.org/10.3390/signals7010006 - 12 Jan 2026
Viewed by 438
Abstract
To obtain radar images of a group of small space objects or to resolve individual elements of complex space objects in near-Earth orbit, a radar system must have high spatial resolution. High range resolution is achieved by using complex probing signals with a wide spectrum bandwidth. Achieving high angular resolution for small or complex space objects is based on the inverse synthetic aperture antenna effect. Among the various classes of complex signals, only two have found practical application in Inverse Synthetic Aperture Radar (ISAR) systems so far: the Linear Frequency-Modulated signal (chirp) and the Stepped-Frequency signal. Over the coherent integration interval of the echo signals, which corresponds to the ISAR aperture synthesis time, the combined correlation characteristics of the signal ensemble are analyzed. A high level of integral correlation noise in the ensemble of probing signals degrades the quality of the radar image. Therefore, a probing signal with a Zero Autocorrelation Zone (ZACZ) is highly relevant for ISAR applications. In this work, through simulation, radar images of a complex space object were obtained using both chirp and ZACZ probing signals. A comparative analysis of the correlation characteristics of the echo signals and the resulting radar images of the complex space object was performed. Full article
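The defining property of a ZACZ signal, vanishing autocorrelation sidelobes in a zone around zero lag, is easy to check numerically. As an illustration (using a Zadoff-Chu sequence, which has ideal periodic autocorrelation at every nonzero lag, rather than the paper's ZACZ construction):

```python
import numpy as np

def periodic_autocorrelation(x):
    """Cyclic autocorrelation R[k] = sum_n x[n] * conj(x[(n+k) mod N])."""
    N = len(x)
    return np.array([np.sum(x * np.conj(np.roll(x, -k))) for k in range(N)])

# Zadoff-Chu sequence (odd length N, root u coprime to N):
# a classic constant-amplitude sequence with ideal periodic autocorrelation.
N, u = 13, 1
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)
R = periodic_autocorrelation(zc)
```

The mainlobe R[0] equals the sequence energy while every sidelobe is (numerically) zero, which is exactly the behavior that suppresses the integral correlation noise degrading ISAR images.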
15 pages, 1856 KB  
Article
EMG-Based Muscle Synergy Analysis: Leg Dominance Effects During One-Leg Stance on Stable and Unstable Surfaces
by Arunee Promsri
Signals 2026, 7(1), 5; https://doi.org/10.3390/signals7010005 - 9 Jan 2026
Viewed by 690
Abstract
Leg dominance has been linked to an increased risk of lower-limb injuries in sports. This study examined bilateral asymmetry in muscle synergy patterns during one-leg stance on stable and multiaxial unstable surfaces. Twenty-five active young adults (25.6 ± 3.9 years) performed unipedal stance tasks on their dominant and non-dominant legs while surface electromyography (EMG) was recorded from seven lower-limb muscles per leg. Muscle synergies were extracted using non-negative matrix factorization (NMF), and structural similarity was assessed via cosine similarity with the Hungarian matching algorithm. Four consistent synergies were identified under both surface conditions, accounting for 88% of the total variance. On the stable surface, significant asymmetry in muscle weightings was observed in the rectus femoris (p = 0.030) for Synergy 1 and in the rectus femoris (p = 0.042), tibialis anterior (p = 0.024), peroneus longus (p = 0.023), and soleus (p = 0.006) for Synergy 2. On the unstable surface, asymmetry was evident in the biceps femoris (p = 0.048) for Synergy 2 and the rectus femoris (p = 0.045) for Synergy 3. Overall, dominance-related asymmetry was more pronounced under stable conditions and became more subtle as postural demand increased, revealing bilateral asymmetry in neuromuscular coordination during unipedal stance. Full article
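The synergy-matching step, cosine similarity between muscle weight vectors combined with the Hungarian matching algorithm, can be sketched with SciPy (a sketch of the general technique under the stated definitions, not the study's code; synergy weights are the columns of each W matrix):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_synergies(W_left, W_right):
    """Pair synergies from two legs by cosine similarity of their muscle
    weight vectors (columns of W), using the Hungarian algorithm to find
    the one-to-one matching that maximizes total similarity."""
    Wl = W_left / np.linalg.norm(W_left, axis=0)
    Wr = W_right / np.linalg.norm(W_right, axis=0)
    sim = Wl.T @ Wr                           # pairwise cosine similarities
    rows, cols = linear_sum_assignment(-sim)  # negate cost to maximize
    return list(zip(rows, cols)), sim[rows, cols]
```

After matching, asymmetry can be assessed per muscle within each matched synergy pair, as done for the rectus femoris and other muscles above.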
22 pages, 1781 KB  
Article
Multimodal Hybrid CNN-Transformer with Attention Mechanism for Sleep Stages and Disorders Classification Using Bio-Signal Images
by Innocent Tujyinama, Bessam Abdulrazak and Rachid Hedjam
Signals 2026, 7(1), 4; https://doi.org/10.3390/signals7010004 - 8 Jan 2026
Cited by 1 | Viewed by 1044
Abstract
Background and Objective: The accurate detection of sleep stages and disorders in older adults is essential for the effective diagnosis and treatment of sleep disorders affecting millions worldwide. Although Polysomnography (PSG) remains the primary method for monitoring sleep in medical settings, it is costly and time-consuming. Recent automated models have not fully explored and effectively fused the sleep features that are essential to identify sleep stages and disorders. This study proposes a novel automated model for detecting sleep stages and disorders in older adults by analyzing PSG recordings. PSG data include multiple channels, and our proposed methods reveal the potential correlations and complementary features across EEG, EOG, and EMG signals. Methods: We employed three architectures, (1) CNNs, (2) CNNs with Bi-LSTM, and (3) CNNs with a transformer encoder, for the automatic classification of sleep stages and disorders using multichannel PSG data. The CNN extracts local features from RGB spectrogram images of EEG, EOG, and EMG signals individually, followed by a column-wise feature fusion block. The Bi-LSTM and transformer encoder then learn and capture intra-epoch feature transition rules and dependencies. A residual connection is also applied to preserve the characteristics of the original joint feature maps and prevent vanishing gradients. Results: Experimental results on the CAP sleep database demonstrate that our proposed CNN with transformer encoder outperforms the standalone CNN, the CNN with Bi-LSTM, and other advanced state-of-the-art methods in sleep stage and disorder classification. It achieves an accuracy of 95.2%, Cohen’s kappa of 93.6%, MF1 of 91.3%, and MGm of 95% for sleep staging, and an accuracy of 99.3%, Cohen’s kappa of 99.1%, MF1 of 99.2%, and MGm of 99.6% for disorder detection. Our model also achieves superior performance to other state-of-the-art approaches in classifying N1, a stage known for its classification difficulty. Conclusions: To the best of our knowledge, this is the first work to go beyond standard approaches and develop a model architecture that is accurate and robust for classifying sleep stages and disorders in the elderly, for both patient and non-patient subjects. Given its high performance, our method has the potential to be integrated into routine clinical care settings. Full article
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing II)
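The input representation this abstract describes, one image whose color channels come from different modalities, can be sketched as follows; the sample rate, window settings, and synthetic signals are illustrative stand-ins, not the paper's preprocessing.

```python
import numpy as np
from scipy.signal import spectrogram

# One 30 s epoch rendered as an RGB "image" whose channels are spectrograms
# of EEG, EOG, and EMG. Sample rate, windowing, and signal content are
# illustrative assumptions.
fs = 100
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)   # alpha-like rhythm
eog = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.standard_normal(t.size)  # slow eye movement
emg = rng.standard_normal(t.size)                                      # broadband muscle tone

def to_channel(x):
    """Log-power spectrogram normalized to [0, 1]."""
    _, _, S = spectrogram(x, fs=fs, nperseg=128, noverlap=64)
    S = 10 * np.log10(S + 1e-12)
    return (S - S.min()) / (S.max() - S.min())

rgb = np.stack([to_channel(s) for s in (eeg, eog, emg)], axis=-1)
print(rgb.shape)   # (65, 45, 3): freq bins x time frames x modalities
```

A per-channel CNN can then extract local features from each modality's plane before the fusion stage the abstract describes.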
16 pages, 917 KB  
Article
A Novel Deterministic Algorithm for Atrial Fibrillation Detection
by Alessandro Filisetti, Pietro Bia, Germana Luciani, Margherita Losardo, Riccardo Ardoino and Antonio Manna
Signals 2026, 7(1), 3; https://doi.org/10.3390/signals7010003 - 8 Jan 2026
Viewed by 616
Abstract
The absence of a recognizable P wave in an electrocardiogram (ECG) is a critical indicator for the diagnosis of atrial fibrillation (AF). An algorithm capable of distinguishing between physiological and pathological states in a short period of time could serve as a valuable tool for timely and effective diagnosis, even in a home setting. To achieve this goal, a deterministic algorithm is proposed. The Fantasia Database and the AF Termination Challenge Database were used for training the model. Subsequently, for the test session, one-minute recordings were extracted from the Autonomic Aging Dataset and the Long-Term AF Database. After band-pass filtering, characteristic points such as R-peaks and P waves were extracted. The R-peak detection algorithm was compared with the gold-standard Pan-Tompkins algorithm, obtaining a p-value > 0.05 on the Fantasia Database, indicating no statistically significant difference between them. Derived features such as duration, amplitude, subtended area, and P-wave slope were then used to discriminate healthy subjects from AF patients. The P-wave slope emerged as the most effective feature, achieving a classification accuracy of 100% and 96% for the training and test sets, respectively. This algorithm thus represents a significant advancement, as it achieves performance comparable to other deterministic methods based on P-wave analysis using only one-minute recordings, thereby enabling accurate diagnosis in a shorter time frame. Full article
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing II)
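The decisive feature here, the P-wave slope, reduces to a least-squares line fit over a short window; the sketch below uses synthetic waveforms and a hypothetical threshold, not the paper's data or decision boundary.

```python
import numpy as np

# Hypothetical illustration of a P-wave slope feature: the slope of a
# least-squares line fitted over a short window. Waveforms, sample rate,
# and threshold are synthetic, not the paper's values.
fs = 250  # Hz (assumed)

def p_wave_slope(segment, fs):
    """Slope (mV/s) of a least-squares line fit to a P-wave segment."""
    t = np.arange(segment.size) / fs
    slope, _intercept = np.polyfit(t, segment, 1)
    return slope

t = np.arange(int(0.04 * fs)) / fs                  # 40 ms analysis window
healthy = 0.15 * np.sin(2 * np.pi * 6.25 * t)       # rising P-wave upstroke, mV
af_like = 0.001 * np.random.default_rng(2).standard_normal(t.size)  # no P wave

THRESHOLD = 0.5  # mV/s, hypothetical decision boundary
print(p_wave_slope(healthy, fs) > THRESHOLD)        # True: steep upstroke
print(abs(p_wave_slope(af_like, fs)) > THRESHOLD)   # False: flat baseline
```

The appeal of such a deterministic feature is that it needs no training beyond choosing the threshold, which is why it remains interpretable on one-minute recordings.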
17 pages, 9683 KB  
Article
Combined Infinity Laplacian and Non-Local Means Models Applied to Depth Map Restoration
by Vanel Lazcano, Mabel Vega-Rojas and Felipe Calderero
Signals 2026, 7(1), 2; https://doi.org/10.3390/signals7010002 - 7 Jan 2026
Viewed by 458
Abstract
Scene depth information is a key component of any mobile robotic application. Range sensors, such as LiDAR, sonar, or radar, capture depth data of a scene. However, the data captured by these sensors frequently present missing regions or low-confidence information. These missing regions in the depth data can be large areas without information, which complicates decision-making, for instance, for an autonomous vehicle. Recovering depth data has therefore become a primary task for computer vision applications. This work proposes and evaluates an interpolation model, embedded in a completion pipeline, that infers dense depth maps from a Lab color space reference picture and an incomplete depth image. The complete pipeline comprises convolutional layers and a convex combination of the infinity Laplacian and a non-local means model. The proposed model infers dense depth maps by considering depth data and utilizing clues from a color picture of the scene, along with a metric for computing differences between two pixels. The work contributes (i) the convex combination of the two models to interpolate the data, and (ii) a class of functions suitable for balancing between the different models. The obtained results show that the model outperforms similar models on the KITTI dataset and outperforms our previous implementation on the NYU_v2 dataset, reducing the MSE by 34.86%, 3.35%, and 34.42% for the 4×, 8×, 16× upsampling tasks, respectively. Full article
(This article belongs to the Special Issue Recent Development of Signal Detection and Processing)
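The convex combination at the heart of the model can be illustrated with a small inpainting iteration: the discrete infinity Laplacian update takes the midpoint of the extreme neighbors, while a plain neighborhood mean stands in for the non-local means term. The grid, hole, and blending weight are illustrative, not the paper's configuration.

```python
import numpy as np

def inpaint(depth, mask, beta=0.5, iters=500):
    """Fill depth values where mask is True with a convex combination of the
    discrete infinity Laplacian and a local-mean term (crude NLM stand-in)."""
    u = depth.copy()
    u[mask] = depth[~mask].mean()          # initialize holes with the known mean
    for _ in range(iters):
        # 4-neighborhood via edge-padded shifts: up, down, left, right
        p = np.pad(u, 1, mode="edge")
        nb = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        inf_lap = 0.5 * (nb.max(axis=0) + nb.min(axis=0))  # infinity-harmonic update
        local_mean = nb.mean(axis=0)
        u[mask] = beta * inf_lap[mask] + (1 - beta) * local_mean[mask]
    return u

# Toy depth ramp with a missing square hole.
depth = np.tile(np.linspace(1.0, 2.0, 32), (32, 1))
mask = np.zeros_like(depth, dtype=bool)
mask[12:20, 12:20] = True
filled = inpaint(depth, mask)
err = np.abs(filled[mask] - depth[mask]).max()
print(err)   # near zero: a linear ramp is infinity-harmonic
```

In the paper the blending weight is not a fixed scalar but a function balancing the two models, and the neighbor comparison is guided by a color-based pixel metric; the fixed `beta` above is the simplest stand-in.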
32 pages, 5517 KB  
Article
Evaluation of Jamming Attacks on NR-V2X Systems: Simulation and Experimental Perspectives
by Antonio Santos da Silva, Kevin Herman Muraro Gularte, Giovanni Almeida Santos, Davi Salomão Soares Corrêa, Luís Felipe Oliveira de Melo, João Paulo Javidi da Costa, José Alfredo Ruiz Vargas, Daniel Alves da Silva and Tai Fei
Signals 2026, 7(1), 1; https://doi.org/10.3390/signals7010001 - 19 Dec 2025
Cited by 1 | Viewed by 1166
Abstract
Autonomous vehicles (AVs) are transforming transportation by improving safety, efficiency, and intelligence through integrated sensing, computing, and communication technologies. However, their growing reliance on Vehicle-to-Everything (V2X) communication exposes them to cybersecurity vulnerabilities, particularly at the physical layer. Among these, jamming attacks represent a critical threat by disrupting wireless channels and compromising message delivery, severely impacting vehicle coordination and safety. This work investigates the robustness of New Radio (NR)-V2X-enabled vehicular systems under jamming conditions through a dual-methodology approach. First, two Cooperative Intelligent Transport System (C-ITS) scenarios standardized by 3GPP—Do Not Pass Warning (DNPW) and Intersection Movement Assist (IMA)—are implemented in the OMNeT++ simulation environment using Simu5G, Veins, and SUMO. The simulations incorporate four types of jamming strategies and evaluate their impact on key metrics such as packet loss, signal quality, inter-vehicle spacing, and collision risk. Second, a complementary laboratory experiment is conducted using AnaPico vector signal generators (a Keysight Technologies brand) and an Anritsu multi-channel spectrum receiver, replicating controlled wireless conditions to validate the degradation effects observed in the simulation. The findings reveal that jamming severely undermines communication reliability in NR-V2X systems, both in simulation and in practice. These results highlight the urgent need for resilient NR-V2X protocols and countermeasures to ensure the integrity of cooperative autonomous systems in adversarial environments. Full article
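The physical-layer mechanism, jammer power entering the interference term of the SINR, can be shown with a back-of-envelope link-budget sketch; all power levels below are illustrative and unrelated to the paper's OMNeT++ scenarios or laboratory measurements.

```python
import numpy as np

# Back-of-envelope sketch: a jammer adds to the interference term of the
# SINR, shrinking the Shannon spectral-efficiency headroom of a sidelink.
def sinr_db(p_signal_dbm, p_noise_dbm, p_jam_dbm):
    lin = lambda dbm: 10.0 ** (dbm / 10.0)            # dBm -> mW
    return 10.0 * np.log10(lin(p_signal_dbm) / (lin(p_noise_dbm) + lin(p_jam_dbm)))

signal_dbm, noise_dbm = -70.0, -95.0                  # illustrative link budget
for jam_dbm in (-120.0, -90.0, -70.0):                # jammer power at the receiver
    s = sinr_db(signal_dbm, noise_dbm, jam_dbm)
    cap = np.log2(1.0 + 10.0 ** (s / 10.0))           # Shannon bound, bit/s/Hz
    print(f"jammer {jam_dbm:6.1f} dBm -> SINR {s:5.1f} dB, {cap:4.1f} bit/s/Hz")
```

Once the jammer reaches the received signal power, the SINR drops to about 0 dB and reliable delivery of safety messages such as DNPW and IMA warnings can no longer be assumed.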