Search Results (3,835)

Search Parameters:
Keywords = noise type

17 pages, 2477 KB  
Article
Experimental Validation of Robust Backstepping Control for TRMS Using an Interval Type-2 Fuzzy Observer
by Azeddine Beloufa, Souaad Tahraoui, Abderrahmane Kacimi, Hadje Allouach, Jun-Jiat Tiang and Abdelbasset Azzouz
Eng 2026, 7(4), 171; https://doi.org/10.3390/eng7040171 - 8 Apr 2026
Abstract
This research focuses on the trajectory tracking control of a Twin Rotor MIMO System (TRMS) with time-varying sinusoidal inputs. Initial design considerations include a backstepping controller integrated with a high-gain observer (HGO) to estimate unmeasured states. While the outcomes of the simulation show good accuracy of tracking, real-time implementation shows instability and performance degradation. This divergence is attributed to the static high gains of the observer that amplify measurement noise and inject inaccurate state estimates into the controller during actual deployment. To overcome this drawback without altering the core control structure, we propose a strategy of online gain tuning based on Interval Type-2 Takagi–Sugeno (TS) fuzzy logic. The proposed mechanism dynamically adjusts the observer gain based on estimation errors to balance the trade-off between convergence speed and noise sensitivity. Experimental evaluations on the physical TRMS confirm that the fuzzy-tuned observer eliminates instability in real-time. Quantitative analysis demonstrates that the proposed method reduces the Root Mean Square Error (RMSE) by 65.6% in the Pitch axis and 92.3% in the Yaw axis compared to the fixed-gain counterpart. Full article
(This article belongs to the Section Electrical and Electronic Engineering)
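The headline RMSE reductions quoted in the abstract are straightforward to reproduce from tracking-error traces. A minimal sketch of the computation (the error values below are illustrative, not the paper's data):

```python
import math

def rmse(errors):
    """Root Mean Square Error of a sequence of tracking errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def rmse_reduction(fixed_gain_errors, fuzzy_tuned_errors):
    """Percentage RMSE reduction of the fuzzy-tuned observer
    relative to the fixed-gain counterpart."""
    base = rmse(fixed_gain_errors)
    tuned = rmse(fuzzy_tuned_errors)
    return 100.0 * (base - tuned) / base

# Hypothetical pitch-axis error traces (not the paper's measurements):
pitch_fixed = [0.30, -0.25, 0.35, -0.28]
pitch_fuzzy = [0.10, -0.09, 0.12, -0.10]
print(f"{rmse_reduction(pitch_fixed, pitch_fuzzy):.1f}% RMSE reduction")
```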

Figure 1

68 pages, 7705 KB  
Review
An Overview of Complex Time Series Analysis
by Alejandro Ramírez-Rojas, Leonardo Di G. Sigalotti, Luciano Telesca and Fidel Cruz
Mathematics 2026, 14(7), 1231; https://doi.org/10.3390/math14071231 - 7 Apr 2026
Abstract
Different methodologies have been developed for the analysis and study of dynamical systems, including both theoretical models and natural systems. Examples span a wide range of applications, such as astronomy, financial and economic time series, biophysical systems, physiological phenomena, and Earth sciences, including seismicity and climatic processes. The study of these complex systems is commonly based on the analysis of the signals they generate, using mathematical tools to extract relevant information. A broad spectrum of mathematical disciplines converges in this context, including stochastic, probability and statistical theory, entropic and informational measures, fractal and multifractal analysis, natural time analysis, modeling of non-linearity and recurrence methods, generalized entropies, non-extensive systems, machine learning, and high-dimensional and multivariate complexity. Research in this area is largely focused on the characterization of complex systems, providing indicators of determinism or stochasticity, distinguishing between regularity, chaos, and noise, and identifying topological as well as disorder-regularity features. In addition, short- and long-term forecasting, together with the identification of short- and long-range correlations, play a central role in such characterization. To address these objectives, numerous mathematical tools have been developed for the analysis of time series and point processes, each designed to capture specific signal properties. In this work, many of the most important tools used in time series analysis are compiled and reviewed, highlighting their main characteristics and the different types of complex systems to which they have been applied. Full article
(This article belongs to the Special Issue Recent Advances in Time Series Analysis, 2nd Edition)
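Among the entropic measures such a review surveys for separating regularity, chaos, and noise, permutation entropy is one of the most compact to state. A minimal stdlib sketch of the Bandt–Pompe construction (parameter conventions are illustrative; the review may present variants):

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D series.

    Each window of length `order` is mapped to its ordinal pattern
    (the argsort of its values); the Shannon entropy of the pattern
    distribution measures how 'disordered' the series is.
    """
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    if normalize:
        h /= math.log(math.factorial(order))  # max entropy over order! patterns
    return h

# A monotone ramp has a single ordinal pattern, hence entropy 0;
# a noisy series explores many patterns (entropy near 1 when normalized).
print(permutation_entropy([1, 2, 3, 4, 5, 6]))  # prints 0.0
```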
33 pages, 1215 KB  
Review
Integration of Bulk and Single-Cell RNA Sequencing Analyses in Biomedicine
by Nikita Golushko and Anton Buzdin
Int. J. Mol. Sci. 2026, 27(7), 3334; https://doi.org/10.3390/ijms27073334 - 7 Apr 2026
Abstract
Transcriptome profiling is a cornerstone of functional genomics, enabling the detailed characterization of gene expression in health and disease. Bulk RNA sequencing (bulk RNAseq) remains the most widely used approach in clinical and large-cohort studies due to its cost-effectiveness, robustness, and comprehensive transcriptome coverage. However, bulk RNAseq inherently averages gene expression signals across heterogeneous cell populations, thereby masking cellular diversity and obscuring rare cell types. In contrast, single-cell RNA sequencing (scRNAseq) enables a high-resolution analysis of cellular heterogeneity, allowing the identification of distinct cell types, transitional states, and developmental trajectories. Nevertheless, scRNAseq is associated with higher cost, limited scalability, increased technical noise, sparse expression matrices, and protocol-dependent biases introduced during tissue dissociation or nuclear isolation. In this review, we summarize the conceptual and methodological foundations of integrating bulk RNAseq and scRNAseq data, emphasizing their complementary strengths and limitations. We discuss how scRNAseq-derived cell-type atlases can serve as reference matrices for computational reconstruction (deconvolution) of bulk RNAseq profiles and examine key sources of technical and biological variability. Furthermore, we outline major integration strategies, including reference-based deconvolution, pseudobulk aggregation, and Bayesian joint modeling to provide an overview of widely used analytical tools and essential components of scRNAseq data processing workflows. Full article
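Of the integration strategies the abstract lists, pseudobulk aggregation is the simplest to illustrate: single-cell count vectors are summed per cell type to mimic a bulk profile. A toy sketch with hypothetical counts and labels (not drawn from any real dataset):

```python
def pseudobulk(cell_counts, cell_labels):
    """Aggregate single-cell count vectors into per-cell-type
    'pseudobulk' profiles by summing counts across cells.

    cell_counts: list of per-cell gene-count vectors (same gene order)
    cell_labels: list of cell-type labels, one per cell
    """
    profiles = {}
    for counts, label in zip(cell_counts, cell_labels):
        if label not in profiles:
            profiles[label] = list(counts)
        else:
            profiles[label] = [a + b for a, b in zip(profiles[label], counts)]
    return profiles

# Toy example: 4 cells x 3 genes, two hypothetical cell types.
counts = [[5, 0, 2], [3, 1, 0], [0, 7, 1], [1, 6, 0]]
labels = ["T_cell", "T_cell", "B_cell", "B_cell"]
print(pseudobulk(counts, labels))
# {'T_cell': [8, 1, 2], 'B_cell': [1, 13, 1]}
```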

Figure 1

25 pages, 6093 KB  
Article
Reliability-Aware Heterogeneous Graph Attention Networks with Temporal Post-Processing for Electronic Power System State Estimation
by Qing Wang, Jian Yang, Pingxin Wang, Yaru Sheng and Hongxia Zhu
Electronics 2026, 15(7), 1536; https://doi.org/10.3390/electronics15071536 - 7 Apr 2026
Abstract
Nonlinear state estimation in electric power systems remains challenging under mixed-measurement conditions due to the coexistence of legacy SCADA and PMU data with markedly different reliability levels, the sensitivity of classical Gauss–Newton-type methods to heterogeneous noise and numerical conditioning, and the increasing complexity of large-scale grids. To address these issues, this paper proposes ST-ResGAT, a spatio-temporal residual graph attention framework for nonlinear state estimation under heterogeneous sensing conditions. The proposed method models the problem on an augmented heterogeneous factor graph, employs a reliability-aware heterogeneous graph attention mechanism with residual propagation to adaptively fuse measurements of different quality, and further refines the graph-based estimates through a lightweight LSTM post-processing module that exploits short-term temporal continuity. All datasets are generated using pandapower on the IEEE 30-bus, IEEE 118-bus, and IEEE 1354-bus benchmark systems to ensure full reproducibility of the experimental pipeline. Experimental results show that the proposed method consistently achieves lower estimation errors than WLS, DNN, GAT, and PINN baselines across all three systems, while also exhibiting more compact node-level error distributions and stronger spatial consistency. Multi-seed ablation studies further indicate that residual propagation, reliability-aware attention, and temporal refinement play complementary roles across different system scales. Robustness experiments additionally show that, under random measurement exclusion as well as bias, Gaussian, and mixed corrupted-measurement settings, ST-ResGAT exhibits smooth and progressive degradation, including on the newly added large-scale IEEE 1354-bus benchmark. These results suggest that the proposed framework is a promising direction for data-driven state estimation under controlled mixed-measurement benchmark conditions. Full article

Figure 1

22 pages, 22745 KB  
Article
Spectral Phenological Typologies for Improving Cross-Dataset in Mediterranean Winter Cereals
by Patricia Arizo-García, Sergio Castiñeira-Ibáñez, Beatriz Ricarte, Alberto San Bautista and Constanza Rubio
Appl. Sci. 2026, 16(7), 3598; https://doi.org/10.3390/app16073598 - 7 Apr 2026
Abstract
Accurate monitoring of crop phenology is essential for precision agriculture and yield forecasting. However, satellite-derived time series often suffer from inherent noise, such as residual atmospheric effects and mixed pixels, as well as a frequent lack of ground-truth data in agriculture. In response, this study proposes an algorithm to define the type of spectral signatures for the principal phenological stages of crops, using them as the foundation for training supervised machine learning classification models. The algorithm was developed using Fuzzy C-Means (FCM) clustering to identify the spectral signature reference groups in winter wheat across the Burgos region (Spain) during the 2020 and 2021 growing seasons. To enhance cluster independence and biological coherence, a multi-step filtering process was implemented, including spectral purity (membership degree, SAM, and SAMder) and temporal coherence filters. The filtered and labeled dataset (80% original Burgos dataset) was used to train supervised classification models (KNN and XGBoost). The models' reliability was verified through three wheat tests (remaining 20%), labeled using other clustering techniques, and an independent barley dataset from diverse geographic locations (Valladolid and Soria). The filtering process significantly improved cluster stability by removing outliers and transition spectral signatures. The supervised models demonstrated exceptional performance; the KNN model slightly outperformed XGB, achieving a mean Accuracy of 0.977, a Kappa of 0.967, and an F1-score of 0.977 in the wheat external test. Furthermore, the model showed, when applied to barley, that its phenological spectral signatures are equivalent in shape to those of wheat, with an Accuracy of 0.965 and an F1-score of 0.974. In addition, it was verified that the type spectral signatures remain the same regardless of the location. This study presents a robust classification tool capable of labeling four key phenological stages (tillering, stem elongation, ripening, and senescence) without ground truth. By effectively removing inherent satellite noise, the proposed methodology produces organized, cleaned datasets. This structured foundation is critical for future research integrating spectral signatures with harvester data to develop high-precision yield prediction models. Full article
(This article belongs to the Special Issue Digital Technologies in Smart Agriculture)
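The spectral-purity filtering described above relies in part on the Spectral Angle Mapper (SAM). A minimal sketch of the angle computation and of a purity filter built on it (the threshold value is an assumption for illustration, not the paper's):

```python
import math

def spectral_angle(sig, ref):
    """Spectral Angle Mapper (SAM): angle in radians between two
    spectral signatures. A smaller angle means a more similar shape,
    independent of overall brightness (scaling)."""
    dot = sum(a * b for a, b in zip(sig, ref))
    norm = math.sqrt(sum(a * a for a in sig)) * math.sqrt(sum(b * b for b in ref))
    # Clamp to guard against floating-point values slightly outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def is_pure(sig, ref, max_angle_rad=0.1):
    """Spectral-purity filter: keep a signature only if it lies within
    `max_angle_rad` of the cluster's reference signature
    (threshold chosen for illustration)."""
    return spectral_angle(sig, ref) <= max_angle_rad

# A scaled copy of the reference has angle ~0 (same spectral shape):
print(spectral_angle([0.2, 0.4, 0.6], [0.1, 0.2, 0.3]))
```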

Figure 1

32 pages, 43664 KB  
Article
MVFF: Multi-View Feature Fusion Network for Small UAV Detection
by Kunlin Zou, Haitao Zhao, Xingwei Yan, Wei Wang, Yan Zhang and Yaxiu Zhang
Drones 2026, 10(4), 264; https://doi.org/10.3390/drones10040264 - 4 Apr 2026
Abstract
With the widespread adoption of various types of Unmanned Aerial Vehicles (UAVs), their non-compliant operations pose a severe challenge to public safety, necessitating the urgent identification and detection of UAV targets. However, in complex backgrounds, UAV targets exhibit small-scale dimensions and low contrast, coupled with extremely low signal-to-noise ratios. This forces conventional target detection methods to confront issues such as feature convergence, missed detections, and false alarms. To address these challenges, we propose a Multi-View Feature Fusion Network (MVFF) that achieves precise identification of small, low-contrast UAV targets by leveraging complementary multi-view information. First, we design a collaborative view alignment fusion module. This module employs a cross-map feature fusion attention mechanism to establish pixel-level mapping relationships and perform deep fusion, effectively resolving geometric distortion and semantic overlap caused by imaging angle differences. Furthermore, we introduce a view feature smoothing module that employs displacement operators to construct a lightweight long-range modeling mechanism. This overcomes the limitations of traditional convolutional local receptive fields, effectively eliminating ghosting artifacts and response discontinuities arising from multi-view fusion. Additionally, we developed a small object binary cross-entropy loss function. By incorporating scale-adaptive gain factors and confidence-aware weights, this function enhances the learning capability of edge features in small objects, significantly reducing prediction uncertainty caused by background noise. Comparative experiments conducted on a multi-perspective UAV dataset demonstrate that our approach consistently outperforms existing state-of-the-art methods across multiple performance metrics. Specifically, it achieves a Structure-measure of 91.50% and an F-measure of 85.14%, validating the effectiveness and superiority of the proposed method. Full article

Figure 1

30 pages, 4178 KB  
Article
An Intelligent Evaluation Algorithm for Pilot Flight Training Ability Based on Multimodal Information Fusion
by Heming Zhang, Changyuan Wang and Pengbo Wang
Sensors 2026, 26(7), 2245; https://doi.org/10.3390/s26072245 - 4 Apr 2026
Abstract
Intelligent-assisted assessment of pilot flight training ability is a method of automating the evaluation of pilots’ flight skills using artificial intelligence. Currently, using AI to assist or replace human instructors in flight skill assessment has become a mainstream research direction in the field of intelligent aviation. Existing flight skill assessment methods suffer from limitations in data types and insufficient assessment accuracy. To address these issues, we evaluate and predict pilot performance in simulated flight missions based on physiological signals. Following the “OODA loop” theory, we established a multimodal dataset including pilot eye movement, electroencephalogram (EEG), electrocardiogram (ECG), electrodermal signaling (EDS), heart rate, respiration, and flight attitude data. This dataset records changes in physiological rhythms and flight behaviors during pilots’ flight training at different difficulty levels. To enhance the signal-to-noise ratio, we propose an enhanced wavelet fuzzy thresholding denoising algorithm utilizing LSTM optimization. We address the problem of isolated features across different time frames in multimodal data modeling by introducing a multi-feature fusion algorithm based on STFT. Furthermore, by combining a high-efficiency sub-attention mechanism with a Transformer network, we construct a multi-classification network for intelligent-assisted assessment of pilot flight training ability, further improving the output accuracy of each category. Experiments show that our designed algorithm can achieve a classification accuracy of up to 85% on the dataset (5-fold cross-validation), which meets the requirements for auxiliary assessment of flight capabilities. Full article
(This article belongs to the Section Intelligent Sensors)

Figure 1

16 pages, 2796 KB  
Article
A Multi-Center Trained Residual Neural Network for Robust Classification of Atrial High-Rate Episodes in Remotely Monitored Pacemakers and Defibrillators
by Lars van Krimpen, Arlene John, Anand Thiyagarajah, Tanner Carbonati, Benjamin Sacristan, Karim Benali, Antoine Da Costa, Pierre Mondoly, Rémi Chauvel, Romain Eschalier, Josselin Duchateau, Remi Dubois, Sylvain Ploux, Pierre Bordachar and Marc Strik
Sensors 2026, 26(7), 2241; https://doi.org/10.3390/s26072241 - 4 Apr 2026
Abstract
Remote monitoring of pacemakers and defibrillators increases patient safety but also increases clinical workload. Review of atrial high-rate episodes is particularly demanding, as episodes can contain atrial tachycardia or atrial fibrillation (AT/AF), noise, or far-field oversensing (FFO). Automatic review of atrial high-rate episodes by an Artificial Intelligence (AI) model can decrease the workload of remote monitoring, provided it maintains high sensitivity for true atrial tachycardia. A residual network was trained using center-level fourfold cross-validation. The four resulting models achieved a precision of 97.2–99.4% for AT/AF, 93.1–97.7% for noise, and 75.4–94.4% for FFO, while maintaining a high sensitivity of 98.9–99.3% for AT/AF. The four models were combined by averaging prediction probabilities to create an ensemble model. Thresholding ensemble predictions at a probability > 95% yielded a robust ensemble model that made only two errors (<0.1%) while automatically reviewing 3925 (91.9%) of the 4271 episodes. This shows how AI models can reliably assist in remote monitoring. Future research should be aimed at classification models for other episode types and at clinical validation of AI models to assist remote monitoring of pacemakers and defibrillators. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Signal Processing)
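The ensembling step the abstract describes — averaging the four models' prediction probabilities and auto-classifying only above a 95% confidence threshold — can be sketched as follows (the class layout and probability values are illustrative, not the paper's):

```python
def ensemble_predict(model_probs, threshold=0.95):
    """Average class-probability vectors from several models and
    auto-classify only when the top averaged probability exceeds the
    confidence threshold; otherwise defer the episode to human review.

    model_probs: list of per-model probability vectors over the classes
    Returns (class_index, confidence), or (None, confidence) when deferred.
    """
    n = len(model_probs)
    avg = [sum(p[i] for p in model_probs) / n for i in range(len(model_probs[0]))]
    best = max(range(len(avg)), key=avg.__getitem__)
    if avg[best] > threshold:
        return best, avg[best]
    return None, avg[best]  # route this episode to a clinician

# Four hypothetical models over classes [AT/AF, noise, FFO]:
probs = [[0.99, 0.01, 0.00], [0.97, 0.02, 0.01],
         [0.98, 0.01, 0.01], [0.96, 0.03, 0.01]]
label, confidence = ensemble_predict(probs)  # class 0 (AT/AF), confidence ~0.975
```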

Figure 1

21 pages, 1291 KB  
Article
Development of a Software Model for Classification and Automatic Cataloging of Archive Documents
by Adilbek Dauletov, Bahodir Muminov, Noila Matyakubova, Uldona Abdurahmonova, Khurshida Bakhriyeva and Makhbubakhon Fayzieva
Information 2026, 17(4), 341; https://doi.org/10.3390/info17040341 - 1 Apr 2026
Abstract
This study proposes an integrated software model for automatic document classification and metadata generation based on the Dublin Core standard to address the issue of rapid and consistent management of archival documents in a digital environment. This approach combines the stages of receiving incoming documents, converting them to text using optical character recognition (OCR), image preprocessing (binarization, deskewing, noise reduction), and text cleaning and vectorization (TF–IDF) into a single pipeline. In the document classification stage, the Bidirectional Encoder Representations from Transformers (BERT) model with a context-sensitive transformer architecture is used, along with classical machine learning models (Logistic Regression, Naive Bayes, Support Vector Machine) and an ensemble approach (LightGBM), to increase accuracy by modeling the document content at a deep semantic level. Experiments were conducted on the RVL-CDIP dataset; OCR efficiency was evaluated using the Character Error Rate (CER) indicator, and classification results were evaluated using the accuracy, precision, recall, and F1-score metrics. The results confirmed the high stability and generalization ability of the BERT (accuracy, 95.1%; F1, 95.0%) and LightGBM (accuracy, 93.2%; F1, 93.2%) models. In the final stage, OCR, NER, and classification outputs are automatically organized into Dublin Core metadata elements (Title, Creator, Date, Description, Subject, Type, Format, Language) and exported in JSON/XML formats. This automation significantly reduces manual cataloging effort and improves indexing and retrieval efficiency in digital archival systems. Full article
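The Character Error Rate used to evaluate OCR quality is the Levenshtein edit distance between the OCR output and the ground-truth text, normalized by the reference length. A minimal sketch:

```python
def cer(reference, hypothesis):
    """Character Error Rate: Levenshtein edit distance between the OCR
    output and the ground-truth text, divided by the number of
    reference characters. Uses a rolling one-row dynamic program."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m

print(cer("archive", "archlve"))  # one substitution in 7 chars, ~0.143
```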

Graphical abstract

23 pages, 9705 KB  
Article
Wear Condition Assessment of Gear Transmission System Based on Wear Debris Boundary Energy
by Congrui Xu, Wei Cao, Yang Yan, Letian Ding, Yifan Wang, Rongrong Hao, Rui Su and Niraj Khadka
Lubricants 2026, 14(4), 153; https://doi.org/10.3390/lubricants14040153 - 1 Apr 2026
Abstract
The gear transmission system is the core component in industrial equipment, and its wear state directly affects the reliability and service life of the equipment. The wear debris image contains important information on the mechanical wear state. By processing it and analyzing the characteristics and types of wear debris, the health status of mechanical equipment and components can be evaluated. However, wear debris images collected in real time are often affected by Gaussian noise. An improved K-SVD dictionary learning algorithm was used in this paper to remove Gaussian noise, with objective metrics demonstrating its effectiveness on wear debris images. Secondly, an improved marked watershed segmentation algorithm (B-FSL) was studied to segment the wear debris chains. After that, the boundary energy (BE) characteristics of the wear debris were extracted to warn of a severe wear state of equipment in advance, an EfficientNetB3 network based on transfer learning was constructed for the recognition and classification of the wear debris images, and the severity of the wear of the mechanical equipment was analyzed. Finally, an experiment was conducted to validate the above methods, proving that the BE characteristics of the wear debris can predict the failure of a planetary gearbox in advance, with the accuracy of the wear debris recognition and classification algorithm exceeding 98%. Full article

Figure 1

29 pages, 2771 KB  
Review
Multiphysics Modeling and Simulation of NVH Phenomena in Electric Vehicle Powertrains
by Krisztian Horvath
World Electr. Veh. J. 2026, 17(4), 183; https://doi.org/10.3390/wevj17040183 - 1 Apr 2026
Abstract
The rapid electrification of road vehicles has fundamentally reshaped the priorities of noise, vibration, and harshness (NVH) engineering. In the absence of combustion-related broadband masking, tonal and order-related phenomena originating from the electric machine, inverter switching, and high-speed reduction gearing have become clearly perceptible and, in many cases, acoustically dominant. Consequently, drivetrain noise in electric vehicles can no longer be assessed at component level alone; it must be understood as a coupled system response shaped by excitation mechanisms, structural dynamics, transfer paths, radiation efficiency, and ultimately human perception. This review adopts a source-to-perception perspective and consolidates the principal physical mechanisms governing vibro-acoustic behavior in integrated electric drive units. Electromagnetic force harmonics and torque ripple are discussed alongside transmission-error-driven gear mesh excitation, while bearing and shaft nonlinearities are examined in the context of high-speed operation. In addition, ancillary thermoacoustic and aerodynamic contributions are considered, reflecting the increasingly integrated packaging of modern e-axle architectures. On this mechanism-oriented basis, dominant excitation types are linked to frequency-appropriate modeling strategies, spanning electromagnetic force extraction, multibody drivetrain simulation, structural finite element analysis, transfer path analysis, and acoustic radiation prediction. Particular attention is given to workflow integration across domains. Finally, the paper identifies research challenges that predominantly arise at system level, including multi-source interaction effects, installation-dependent transfer-path variability, emergent resonances in assembled structures, manufacturing-induced tonal artifacts, and the still limited correlation between predicted vibration fields and perceived sound quality. Full article
(This article belongs to the Section Propulsion Systems and Components)

Graphical abstract

29 pages, 2627 KB  
Article
Building-Level Energy Disaggregation Using AI-Based NILM Techniques in Heterogeneous Environments
by Ana Rubio-Bustos, Gloria Calleja-Rodríguez, Jorge De-La-Torre-García, Unai Fernandez-Gamiz and Ekaitz Zulueta
AI 2026, 7(4), 122; https://doi.org/10.3390/ai7040122 - 1 Apr 2026
Abstract
Non-Intrusive Load Monitoring (NILM) represents a powerful approach for energy disaggregation, which enables detailed insights into energy consumption patterns without requiring extensive sensor deployment. While significant advances have been achieved in residential NILM applications, commercial and industrial buildings remain largely underexplored despite their substantial contribution to global energy consumption. This study addresses this gap by developing and evaluating multiple artificial intelligence approaches for energy disaggregation across residential, commercial, and industrial buildings under a unified experimental protocol. We implement and compare several AI-based models, including Vision Transformer (ViT), Variational Autoencoder (VAE), Random Forest (RF), and custom architectures inspired by TimeGPT and Prophet, alongside traditional baseline methods. The proposed framework is validated using three benchmark datasets representing residential (AMPds), commercial (COmBED), and industrial (IMDELD) environments. Experimental results demonstrate that architecture–load interactions, rather than model complexity alone, are the primary determinants of disaggregation accuracy: the ViT-small configuration achieves superior performance for complex industrial loads with R2 values exceeding 0.94, Random Forest proves most effective for finite-state commercial HVAC systems with R2 up to 0.97, and the Prophet-inspired model excels in capturing seasonal patterns in residential appliances. These findings provide evidence-based guidelines for selecting appropriate AI models based on load characteristics, signal-to-noise ratio, and building type, contributing to the practical deployment of NILM in heterogeneous building environments. Full article

Figure 1

26 pages, 3170 KB  
Article
Understanding the Impact of Noise on ECG Biometrics: A Comparative Theoretical and Experimental Analysis
by David Velez, André Lourenço, Miguel Pereira, David P. Coutinho and Carlos Carreiras
J. Exp. Theor. Anal. 2026, 4(2), 14; https://doi.org/10.3390/jeta4020014 - 31 Mar 2026
Abstract
Electrocardiogram (ECG)-based biometrics have emerged as a promising solution for continuous and intrinsic human identification; nevertheless, the robustness of these systems under realistic noise conditions remains a critical challenge for practical deployment. This work presents a theoretical and experimental analysis of how different noise types and levels affect ECG biometric recognition by comparing three methodological families: fiducial-based approaches using morphological features with traditional classifiers such as SVM and k-NN, non-fiducial methods based on signal compression and global descriptors, and Deep Learning models. Controlled distortions and additive noise injection into public ECG databases enable systematic quantification of feature degradation. Experimental validation is performed using the CardioWheel system, a real-world in-vehicle ECG acquisition platform, to evaluate performance under realistic motion and noise conditions. The methodological framework proposed for robustness evaluation and noise-aware training is inherently generic and can be extended to other biometric tasks subject to noise. Results show that different algorithmic families exhibit distinct resilience profiles under noise contamination and reveal a practical signal quality boundary for reliable ECG biometric recognition, with performance deteriorating under severe noise conditions. Noise-aware training improves robustness, particularly for Deep Learning and SVM-based classifiers, highlighting the trade-off between interpretability and robustness. By bridging theoretical analysis and applied experimentation, this work provides practical signal quality guidelines for real-world ECG biometric systems. Full article
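Controlled additive noise injection, as used in robustness evaluations like this one, is commonly done by scaling white noise so the result hits a target signal-to-noise ratio in dB. A generic sketch (not the paper's exact protocol):

```python
import math
import random

def add_noise_at_snr(signal, snr_db, rng=None):
    """Inject white Gaussian noise into an ECG-like signal so that the
    result has the requested signal-to-noise ratio in dB. The seeded
    RNG default keeps the sketch reproducible."""
    rng = rng or random.Random(0)
    power = sum(s * s for s in signal) / len(signal)   # mean signal power
    noise_power = power / (10 ** (snr_db / 10))        # SNR = P_sig / P_noise
    sigma = math.sqrt(noise_power)
    return [s + rng.gauss(0.0, sigma) for s in signal]

# Toy "heartbeat" waveform corrupted at 6 dB SNR:
clean = [math.sin(2 * math.pi * 1.2 * t / 100) for t in range(100)]
noisy = add_noise_at_snr(clean, snr_db=6.0)
```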
32 pages, 7391 KB  
Article
Robust and Noise-Resilient Botnet Detection Framework Using Heterogeneous Radial Basis Function Neural Network
by Lama Awad, Sherenaz Al-Haj Baddar and Azzam Sleit
Appl. Sci. 2026, 16(7), 3379; https://doi.org/10.3390/app16073379 - 31 Mar 2026
Abstract
The rapid evolution of botnet attacks poses a critical challenge for cybersecurity, necessitating the development of intrusion detection models that are both highly accurate and computationally efficient. This paper proposes a heterogeneous radial basis function neural network structure that employs non-uniform RBF kernels to enhance discriminative capability between normal and botnet activities, leveraging flow-level packet length distribution features derived from the CTU-13 dataset, which encompasses 30 distinct botnet types, to ensure comprehensive detection across several botnet behaviors. The model was evaluated across several dimensions, including training stability, robustness to noise, and overall detection accuracy and generalization performance. Experimental results demonstrate that the proposed model achieves a superior accuracy of 97.86%, with an AUC of 0.9968 and a notably low false-positive rate of 0.02. The model effectively mitigates class-imbalance bias, with an average detection rate of 94.62% even for minority botnet classes. Furthermore, inference-time evaluation showed a latency of approximately 1.0118 microseconds, confirming that the model is well-suited for high-speed networks. In addition, robustness analysis under controlled noise injection revealed a smooth degradation in performance, with accuracy remaining at 96%, highlighting the structural resilience of the proposed model and making it a robust solution for detecting modern botnet attacks. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
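A heterogeneous RBF layer, i.e., Gaussian kernels with a distinct width per center followed by a linear readout, can be sketched roughly as below. All names, the center/width choices, and the ridge-regression readout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_features(X, centers, widths):
    """Gaussian RBF activations with a distinct width per center
    (a non-uniform, 'heterogeneous' kernel layout)."""
    # X: (n, d), centers: (k, d), widths: (k,) -> activations: (n, k)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths[None, :] ** 2))

def fit_rbf_readout(X, y, centers, widths, ridge=1e-6):
    """Solve the linear output layer by ridge regression."""
    Phi = rbf_features(X, centers, widths)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)
```

Allowing each kernel its own width lets tight clusters (e.g., a compact benign-traffic mode) and diffuse clusters (e.g., a polymorphic botnet family) be covered by appropriately scaled basis functions, which is the discriminative advantage the abstract attributes to the non-uniform layout.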
9 pages, 1362 KB  
Communication
Comfortable Flower Electrodes for Dry EEG in Epilepsy and Clinical Neurophysiology Diagnostics
by Dimitrios Dimitrakopoulos, Justus Marquetand, Joji Kuramatsu, Patrique Fiedler and Johannes Lang
Sensors 2026, 26(7), 2146; https://doi.org/10.3390/s26072146 - 31 Mar 2026
Abstract
Dry electroencephalography (EEG) electrodes enable rapid, gel-free setups, which are crucial for point-of-care diagnostics, but often face challenges with comfort and signal quality—especially in a clinical context. Novel “flower” dry electrodes are a special type of reusable scalp electrodes for dry EEG, featuring a distinct flower-like shape with angled pins in three intertwined layers. While the new electrode design has been validated in an in vivo study on healthy volunteers, we tested its clinical applicability in a proof-of-concept study involving three patients diagnosed with epilepsy and delirium. The recordings were of high diagnostic quality, enabling the reliable identification of pathological patterns, such as generalized spike–wave complexes and intermittent delta activity, with a signal-to-noise ratio (SNR) comparable to prior reports for sponge-based EEG systems (limited case series). The SNR proved to be sufficiently high for clinical diagnostic purposes, resulting in visually clear and interpretable EEG data that enabled effective assessment of patients’ neurophysiological signals. Consequently, our findings demonstrate that the comfortable flower-electrode design is a viable and effective tool for epilepsy diagnostics, extended recording, and clinical neurophysiology. It represents a significant step towards patient-centered and gel-free EEG technology, specifically in point-of-care and emergency applications, without compromising the diagnostic quality of the recordings. Full article
(This article belongs to the Section Electronic Sensors)
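A simple variance-ratio SNR estimate of the kind used to compare EEG recordings can be sketched as follows. The paper's exact SNR definition is not given in the abstract, so `epoch_snr_db` (event epoch power relative to a resting baseline segment) is an illustrative assumption.

```python
import numpy as np

def epoch_snr_db(epoch, baseline):
    """SNR estimate in dB: variance of an event epoch relative to a
    resting baseline segment. A common, simple definition; the study's
    exact metric may differ."""
    p_epoch = np.var(epoch)
    p_base = np.var(baseline)
    return 10 * np.log10(p_epoch / p_base)
```

Under this definition, an epoch whose amplitude is sqrt(10) times the baseline yields exactly 10 dB, so SNR values from different electrode systems can be compared on a common scale.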