Search Results (921)

Search Parameters:
Keywords = data sparsity

22 pages, 1819 KB  
Article
CoACL: Coupled Augmentation for Contrastive Learning on Text-Attributed Graphs Under Semantic Supervision from Large Language Models
by Hailun Kang, Kexin Zhao, Shuying Du, Xi Wu, Zhong Zhang, Jiquan Peng, Zhongping Zhang and Jibing Gong
Electronics 2026, 15(4), 844; https://doi.org/10.3390/electronics15040844 - 16 Feb 2026
Abstract
Text-attributed graphs (TAGs) couple graph topology with node-level text, but real data often contain spurious edges, missing links, and text–structure mismatch that destabilize learning under scarce labels. We propose CoACL (Coupled Augmentation for Contrastive Learning), a framework that uses LLM semantic supervision to denoise structural and textual information and alleviate data sparsity. CoACL first prunes the candidate edge space using structural similarity and then queries an LLM to discard suspicious edges and confirm plausible links, yielding semantically consistent positive and negative pairs. We further introduce keyword-focused text augmentations and learn coupled representations by optimizing a joint text–graph contrastive objective guided by semantics. Experimental results on Cora, PubMed, and the Open Graph Benchmark Arxiv dataset (OGBN-Arxiv) show that CoACL consistently outperforms strong baselines and yields up to 7.1% absolute improvement in node classification accuracy, with the largest gains in low-label regimes. By constraining LLM evaluation to similarity-based candidates, CoACL targets neighborhood-level noise with controlled cost. Full article
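The joint contrastive objective is not spelled out in the abstract; a minimal InfoNCE-style loss for a single anchor, a common choice in graph contrastive learning, can be sketched as follows (the function name and temperature default are illustrative assumptions, not details from the paper):

```python
import math

def info_nce(sim_pos, sims_all, temperature=0.5):
    # Contrastive loss for one anchor: negative log of the softmax
    # probability assigned to the positive pair's similarity among
    # all candidate similarities (sims_all must contain sim_pos).
    logits = [s / temperature for s in sims_all]
    m = max(logits)  # shift for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(sim_pos / temperature - m - math.log(denom))
```

When the positive is no more similar than the negatives, the loss equals log(n); a well-separated positive drives it toward zero.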
(This article belongs to the Section Artificial Intelligence)
22 pages, 604 KB  
Article
A Mixture-of-Experts Model for Improved Generalization in Session-Aware Recommendation
by Sungshin Kwak, Jaedong Lee and Sohyun Park
Electronics 2026, 15(4), 825; https://doi.org/10.3390/electronics15040825 - 14 Feb 2026
Abstract
Recently, recommendation systems have actively integrated Transformers to capture real-time context. However, these systems often suffer from generalization imbalance, where predictions are biased toward popular (head) items due to the sparsity and volatility inherent in session-based data. To address this challenge, this paper proposes MoE-SLMRec, a Mixture-of-Experts (MoE)-based recommendation model that selects expert networks based on session-level contextual information. The proposed model extracts a session latent representation, h, through a session-aware controller and forms balanced predictive characteristics across the entire data distribution via dynamic routing. Experimental results demonstrate that MoE-SLMRec significantly outperforms the baseline SLMRec, improving accuracy by 1.51 percentage points (from 18.76% to 20.27%). Furthermore, the model achieved state-of-the-art performance in Recall@20 (0.8358) and MRR@20 (0.3455), validating simultaneous improvements in both retrieval capability and ranking quality. Notably, the model effectively stabilized the performance for head items while coordinating the generalization trade-off between head and tail segments. By ensuring a favorable capacity–cost trade-off while maintaining robust performance, this study presents a promising alternative under session-based recommendation settings, facilitating scalable deployment in real-time recommendation services. Full article
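The expert-selection mechanism described above follows the standard Mixture-of-Experts pattern: a gate scores each expert from the session representation h, and the model output is a probability-weighted mixture of expert outputs. A minimal sketch of that pattern (the dot-product gate and all names are assumptions, not details from MoE-SLMRec):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_predict(h, experts, gate_weights):
    # h: session representation (list of floats).
    # experts: list of callables mapping h -> prediction vector.
    # gate_weights: one weight vector per expert; gate score = w . h.
    gate_scores = [sum(wi * hi for wi, hi in zip(w, h)) for w in gate_weights]
    probs = softmax(gate_scores)
    outputs = [e(h) for e in experts]
    dim = len(outputs[0])
    # Mixture output: probability-weighted sum of expert outputs.
    return [sum(p * out[d] for p, out in zip(probs, outputs)) for d in range(dim)]
```

With uniform gate scores the mixture averages the experts; a gate tuned to the session context shifts weight toward the expert that specializes in that context.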
49 pages, 5086 KB  
Article
Class-Specific GAN-Based Minority Data Augmentation for Cyberattack Detection Using the UWF-ZeekData22 Dataset
by Asfaw Debelie, Sikha S. Bagui, Dustin Mink and Subhash C. Bagui
Technologies 2026, 14(2), 117; https://doi.org/10.3390/technologies14020117 - 12 Feb 2026
Abstract
Intrusion detection systems (IDS) often struggle to detect rare but high-impact attack behaviors due to severe class imbalance in real-world network traffic. This work proposes a class-specific GAN-based augmentation framework that explicitly targets minority-class sparsity in structured cybersecurity datasets. Unlike prior GAN-based approaches that employ global augmentation or anomaly-driven synthesis, separate Generative Adversarial Networks (GANs) are trained independently for each MITRE ATT&CK tactic using only real minority-class samples, enabling focused distribution learning without contamination from benign traffic. Using a relatively new network traffic dataset, UWF-ZeekData22, the proposed framework augments minority classes under conditions of extreme sample sparsity, where traditional classifiers and interpolation-based oversampling methods are ineffective or statistically unreliable. Five traditional classifiers are evaluated before and after augmentation using stratified 5-fold cross-validation: Logistic Regression, Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree, and Random Forest. Experimental results show that class-specific GAN augmentation consistently improves recall and F1-score for rare attack tactics, with the largest gains observed under extreme sparsity where pre-augmentation evaluation was infeasible. Notably, false-negative rates are substantially reduced without degrading majority-class performance, demonstrating that the proposed approach enhances minority-class separability rather than inflating evaluation metrics. These findings demonstrate that class-specific GAN-based augmentation is a practical and robust data-level strategy for improving the detection of rare MITRE ATT&CK-aligned attack behaviors in machine-learning-based IDSs. Full article
(This article belongs to the Section Information and Communication Technologies)

20 pages, 1842 KB  
Article
TLFormer: Scalable Taylor Linear Attention in Transformer for Collaborative Filtering
by Dongdong Hao, Dongxiao Yu and Xiaowen Hou
Electronics 2026, 15(4), 759; https://doi.org/10.3390/electronics15040759 - 11 Feb 2026
Abstract
Graph Neural Networks (GNNs) have become foundational models in recommender systems due to their ability to propagate information over user–item bipartite graphs via neighborhood aggregation. Despite their empirical success, GNNs are inherently constrained by their reliance on local connectivity, which limits their ability to capture global interaction patterns, particularly in large-scale recommendation scenarios characterized by severe data sparsity. To address these challenges, we propose the Taylor Linear attention in Transformer (TLFormer), which enhances recommendation performance by enabling global attention across all user–item pairs while preserving graph structural information. Unlike existing Transformer-based recommendation approaches that focus on local attention patterns, TLFormer introduces a novel linear attention mechanism derived from the first-order Taylor approximation, allowing efficient computation of all-pair interactions. TLFormer integrates spatial topology as positional encoding while maintaining linear complexity, effectively balancing computational efficiency with model expressiveness for large-scale recommendation scenarios. Extensive experiments across multiple datasets demonstrate that TLFormer significantly outperforms state-of-the-art methods, particularly in scenarios with sparse interactions and long-tail distributions. Full article
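The first-order Taylor trick the abstract describes replaces exp(q·k) with 1 + q·k, which lets the sums over keys and values be precomputed once and reused for every query, reducing all-pair attention from quadratic to linear in sequence length. A minimal sketch under that assumption (weights stay positive only for small dot products; this is an illustration of the kernel trick, not the full TLFormer):

```python
def taylor_linear_attention(Q, K, V):
    # First-order Taylor kernel: exp(q.k) ~ 1 + q.k, so the unnormalized
    # weight for pair (i, j) is 1 + Q[i].K[j]. Reordering the sums gives
    # linear complexity in sequence length n:
    #   out_i = (sum_j V[j] + sum_a q_a * kv[a]) / (n + q . sum_j K[j])
    n, d = len(K), len(K[0])
    dv = len(V[0])
    k_sum = [sum(K[j][a] for j in range(n)) for a in range(d)]
    v_sum = [sum(V[j][b] for j in range(n)) for b in range(dv)]
    # kv[a][b] = sum_j K[j][a] * V[j][b], computed once for all queries.
    kv = [[sum(K[j][a] * V[j][b] for j in range(n)) for b in range(dv)]
          for a in range(d)]
    out = []
    for q in Q:
        denom = n + sum(q[a] * k_sum[a] for a in range(d))
        out.append([(v_sum[b] + sum(q[a] * kv[a][b] for a in range(d))) / denom
                    for b in range(dv)])
    return out
```

The per-query cost depends only on the feature dimensions, so the total cost is linear in the number of user–item pairs rather than quadratic.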

15 pages, 4058 KB  
Article
Multiscale Region-Based Convolutional Neural Networks for 3D Object Detection with LiDAR Sensors
by Wei-Jong Yang, Song-Bo Yao and Jar-Ferr Yang
Sensors 2026, 26(4), 1156; https://doi.org/10.3390/s26041156 - 11 Feb 2026
Abstract
LiDAR-based 3D object detection is essential for autonomous driving vehicles under poor lighting conditions. Point cloud technologies have become increasingly important as the cost of LiDAR sensors continues to fall. However, the sparsity of point clouds poses a challenge for 3D object detection, requiring advancements in sparse convolutional networks. Given that the multiscale feature fusion mechanism can improve object detection performance using rich information across scale features, we added a refinement fusion network with cross-attention modules to existing 3D voxel-based object detection networks. We also employed a realistic strategy to refine existing point cloud data augmentation techniques to enable the trained detection networks to achieve substantially improved results. The experimental results demonstrate the effectiveness of our proposed detection system across three categories on the KITTI dataset. These enhancements address the limitations of current approaches and highlight the superior performance of the proposed system. Full article
(This article belongs to the Section Vehicular Sensing)

21 pages, 3921 KB  
Article
Adversarial Example Generation Method Based on Wavelet Transform
by Meng Bi, Xiaoguo Liang, Baiyu Wang, Longxin Liu, Xin Yin and Jiafeng Liu
Information 2026, 17(2), 182; https://doi.org/10.3390/info17020182 - 10 Feb 2026
Abstract
Adversarial examples are crucial tools for assessing the robustness of deep neural networks (DNNs) and revealing potential security vulnerabilities. Adversarial example generation methods based on Generative Adversarial Networks (GANs) have made significant progress in generating image adversarial examples, but still suffer from insufficient sparsity and transferability. To address these issues, this study proposes a novel semi-white-box untargeted adversarial example generation method named Wavelet-AdvGAN, with an explicit threat model defined as follows. The attack is strictly untargeted, with no predefined target categories, aiming solely to mislead DNNs into classifying adversarial examples into any category other than the original label. It adopts a semi-white-box setting in which attackers are denied access to the target model's private information. Regarding the generator's information dependence, the training phase only utilizes public resources (i.e., the target model's public architecture and CIFAR-10 public training data), while the test phase generates adversarial examples through a one-step feedforward pass on clean images without interacting with the target model. The method incorporates a Frequency Sub-band Difference (FSD) module and a Wavelet Transform Local Feature (WTLF) extraction module, evaluating the differences between original and adversarial examples from the frequency-domain perspective. This approach constrains the magnitude of perturbations, reinforces feature regions, and further enhances attack effectiveness, thereby improving the sparsity and transferability of adversarial examples. Experimental results demonstrate that Wavelet-AdvGAN achieves an average increase of 1.26% in attack success rates under two defense strategies (data augmentation and adversarial training), and adversarial transferability improves by an average of 2.7%. Moreover, the proposed method exhibits a lower ℓ0 norm, indicating better perturbation sparsity. Consequently, it effectively evaluates the robustness of deep neural networks. Full article
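As a rough illustration of the frequency-domain comparison the abstract describes, a one-level Haar wavelet transform splits a signal into low- and high-frequency sub-bands, and two signals can then be compared band by band (the function names and the absolute-difference metric are illustrative assumptions, not the paper's FSD module):

```python
import math

def haar_dwt_1d(x):
    # One-level Haar wavelet transform of an even-length signal:
    # pairwise sums give the low-frequency approximation band,
    # pairwise differences give the high-frequency detail band.
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def subband_difference(x, y):
    # Band-by-band comparison: sum of absolute differences in the
    # low and high wavelet sub-bands of the two signals.
    ax, dx = haar_dwt_1d(x)
    ay, dy = haar_dwt_1d(y)
    low = sum(abs(a - b) for a, b in zip(ax, ay))
    high = sum(abs(a - b) for a, b in zip(dx, dy))
    return low, high
```

The Haar transform is orthonormal, so signal energy is preserved across the two bands, and a perturbation concentrated at one pixel shows up in the high-frequency band.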

34 pages, 1144 KB  
Article
BAF–FedLLM: Behavior-Aware Federated Modeling of Student Actions via Privacy-Preserving Large Language Model
by Wei Ji, Zuobin Ying and Hanying Gan
Mathematics 2026, 14(4), 604; https://doi.org/10.3390/math14040604 - 9 Feb 2026
Abstract
Analyzing fine-grained student actions across institutions can drive timely feedback, early warning, and personalized support, yet it is constrained by privacy regulations, heterogeneous curricula, and non-IID behavior logs. This paper introduces BAF–FedLLM, a behavior-aware federated modeling framework that adapts large language models to next-action and outcome prediction without centralizing student data. The key idea is to treat multichannel interaction streams as semantically typed action tokens linked by a learned ActionGraph, and to align their temporal structure with an LLM through behavior prompts that inject domain context (task, resource, pedagogy, and affordance cues). We propose three novel components: (i) BP–FIT, a behavior-prompted federated instruction tuning scheme that trains low-rank adapters locally and aggregates them with secure masking and Rényi–DP accounting to ensure client-level privacy; (ii) ProtoAlign, a cross-client prototype contrastive objective that shares only noisy class-conditional anchors via secure aggregation to mitigate drift under non-IID partitions; and (iii) CBR, a causal behavior regularizer that penalizes intervention-sensitive shortcuts by enforcing invariance of predicted risks across detected instructional regimes. We further derive convergence guarantees for federated instruction tuning with noisy, partial participation and provide end-to-end privacy bounds. On three public education datasets (EdNet, ASSISTments, and OULAD) with institution-level partitions, BAF–FedLLM improves next-action AUC by 4.2–7.1% over strong federated baselines while reducing expected calibration error by up to 28% and communication by 5× through adapter sparsity, under a typical privacy budget of ε = 1.7 at δ = 10⁻⁵. These results indicate that behavior-aware prompting and prototype alignment make LLMs practical for privacy-preserving student action analysis at scale, offering a principled path to deployable, regulation-compliant analytics across diverse learning ecosystems. Full article

19 pages, 7655 KB  
Article
DeepGene-BC: Deep Learning-Based Breast Cancer Subtype Prediction via Somatic Point Mutation Profiles
by Pengfei Hou, Liangjie Liu, Yijia Duan, Shanshan Yin, Wenqian Yan, Chongchen Pang, Yang Yan, Sabreena Aziz, Mika Torhola, Henna Kujanen, Klaus Förger, Hui Shi, Guang He and Yi Shi
Cancers 2026, 18(4), 570; https://doi.org/10.3390/cancers18040570 - 9 Feb 2026
Abstract
Background: Molecular subtyping of breast cancer usually relies on transcriptomic profiles, a method constrained by limitations in robustness and clinical applicability. While somatic point mutations represent a stable genomic alternative, their predictive utility is hindered by high dimensionality, extreme sparsity, and weak single-gene associations. Methods: Here, we present deepGene-BC, a deep learning framework that synergizes a pathway-informed feature selection strategy with a hybrid neural network tailored for sparse binary data. To distill sparse genome-wide mutations into a compact and interpretable feature set, deepGene-BC integrates mutation recurrence filtering, curated pathway priors, and mutual information-based gene prioritization. These refined features are subsequently modeled using a specialized hybrid architecture designed to capture complex linear effects, feature interactions, and higher-order nonlinear patterns. Results: When benchmarked against an independent test set (n = 273) from the TCGA breast cancer cohort, deepGene-BC achieved an overall accuracy of 77.3% and an average sensitivity of 75.2%, accompanied by a strong overall discriminative performance (macro-averaged AU-ROC = 0.94, 95% CI: 0.92–0.96). Conclusions: By effectively combining biologically informed feature engineering with deep learning, deepGene-BC holds significant promise for non-invasive molecular stratification and precision oncology. Full article
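The mutual information-based gene prioritization step can be illustrated with a plug-in estimator for a binary mutation indicator against class labels; genes would then be ranked by this score. This is a sketch of the general technique, not the authors' exact pipeline:

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    # Empirical I(X; Y) in nats for a binary mutation indicator X
    # (0/1 per sample) and categorical class labels Y, using the
    # plug-in estimate from joint and marginal counts.
    n = len(feature)
    pxy = Counter(zip(feature, labels))
    px = Counter(feature)
    py = Counter(labels)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log(p_joint / (p_x * p_y)), with counts folded in.
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi
```

A mutation perfectly aligned with a two-class split scores log 2 nats; a mutation independent of the labels scores zero, so ranking by this value prioritizes subtype-informative genes.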
(This article belongs to the Special Issue Advancements in Preclinical Models for Solid Cancers)

27 pages, 2046 KB  
Article
2QGRU: Power-of-Two Quantization for Efficient FPGA-Based Gated Recurrent Unit Architectures
by Miguel Molina Fernandez, Shao Jie Hu Chen, Javier Mendez Gomez, Diego P. Morales Santos, Manuel Pegalajar Cuellar and Marisa Lopez-Vallejo
Electronics 2026, 15(4), 722; https://doi.org/10.3390/electronics15040722 - 7 Feb 2026
Abstract
This paper proposes a power-of-two-based quantization technique aimed at improving the hardware efficiency of artificial neural networks (ANNs) implemented on field-programmable gate arrays (FPGAs). The effectiveness of the proposed approach is validated using gated recurrent unit (GRU) models. The resulting architecture, referred to as 2QGRU, exploits parallelism, optimized operation scheduling, and fine-grained data bit-width management to achieve efficient hardware realization. Compared with state-of-the-art FPGA implementations based on sparsity compression, 2QGRU demonstrates superior performance in terms of resource utilization and power consumption, while eliminating the need for dedicated DSP blocks. Furthermore, area and power efficiency can be further improved by trading latency for reduced hardware cost through an integrated implementation reduction strategy, enabling deployment on highly resource-constrained devices. Finally, the 2QGRU model is integrated into an automated ANN framework, allowing the proposed quantization and hardware optimization techniques to be readily extended to other ANN models and FPGA-based deployments. Full article
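Power-of-two quantization constrains each weight to ±2^k, so a multiplication on integer hardware reduces to a bit shift. A minimal sketch that rounds in the log domain (an assumption for illustration; the paper's scheme may round differently or bound the exponent range):

```python
import math

def quantize_pow2(w):
    # Map a weight to the nearest signed power of two, rounding the
    # base-2 logarithm of its magnitude. Multiplying by the result
    # is equivalent to a left/right shift plus a sign flip.
    if w == 0:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    exp = round(math.log2(abs(w)))
    return sign * (2.0 ** exp)
```

On an FPGA this removes the need for DSP multiplier blocks entirely: the stored value is just the sign and the integer exponent.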

44 pages, 816 KB  
Article
Enhanced Deep Reinforcement Learning for Robustness Falsification of Partially Observable Cyber-Physical Systems
by Yangwei Xing, Ting Shu, Xuesong Yin and Jinsong Xia
Symmetry 2026, 18(2), 304; https://doi.org/10.3390/sym18020304 - 7 Feb 2026
Abstract
Robustness falsification is a critical verification task for ensuring the safety of cyber-physical systems (CPS). Under partially observable conditions, where internal states are hidden and only input–output data is accessible, existing deep reinforcement learning (DRL) approaches for CPS robustness falsification face two key limitations: inadequate temporal modeling due to unidirectional network architectures, and sparse reward signals that impede efficient exploration. These limitations severely undermine the efficacy of DRL in black-box falsification, leading to low success rates and high computational costs. This study addresses these limitations by proposing DRL-BiT-MPR, a novel framework whose core innovation is the synergistic integration of a bidirectional temporal network with a multi-granularity reward function. Specifically, the bidirectional temporal network captures bidirectional temporal dependencies, remedies inadequate temporal modeling, and complements unobservable state information. The multi-granularity reward function includes fine-grained, medium-grained and coarse-grained layers, corresponding to single-step local feedback, phased progress feedback, and global result feedback, respectively, providing multi-time-scale incentives to resolve reward sparsity. Experiments are conducted on three benchmark CPS models: the continuous CARS model, the hybrid discrete-continuous AT model, and the controller-based PTC model. Results show that DRL-BiT-MPR increases the falsification success rate by an average of 39.6% compared to baseline methods and reduces the number of simulations by more than 50.2%. The framework’s robustness is further validated through theoretical analysis of convergence and soundness properties, along with systematic parameter sensitivity studies. Full article

17 pages, 1497 KB  
Article
SPARTA: Sparse Parallel Architecture for Real-Time Threat Analysis for Lightweight Edge Network Defense
by Shi Li, Xiyun Mi, Lin Zhang and Ye Lu
Future Internet 2026, 18(2), 88; https://doi.org/10.3390/fi18020088 - 6 Feb 2026
Abstract
AI-driven network security relies increasingly on Large Language Models (LLMs) to detect sophisticated threats; however, their deployment on resource-constrained edge devices is severely hindered by immense parameter scales. While unstructured pruning offers a theoretical reduction in model size, commodity Graphics Processing Unit (GPU) architectures fail to efficiently leverage element-wise sparsity due to the mismatch between fine-grained pruning patterns and the coarse-grained parallelism of Tensor Cores, leading to latency bottlenecks that compromise real-time analysis of high-volume security telemetry. To bridge this gap, we propose SPARTA (Sparse Parallel Architecture for Real-Time Threat Analysis), an algorithm–architecture co-design framework. Specifically, we integrate a hardware-based address remapping interface to enable flexible row-offset access. This mechanism facilitates a novel graph-based column vector merging strategy that aligns sparse data with Tensor Core parallelism, complemented by a pipelined execution scheme to mask decoding latencies. Evaluations on Llama2-7B and Llama2-13B benchmarks demonstrate that SPARTA achieves an average speedup of 2.35× compared to Flash-LLM, with peak speedups reaching 5.05×. These findings indicate that hardware-aware microarchitectural adaptations can effectively mitigate the penalties of unstructured sparsity, providing a viable pathway for efficient deployment in resource-constrained edge security. Full article
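Element-wise (unstructured) sparsity of the kind discussed above is typically stored in a compressed format such as CSR, whose irregular column indices are exactly what maps poorly onto the regular tiles Tensor Cores expect. A generic CSR sparse matrix–vector product for reference (standard background, not SPARTA's column-merging layout):

```python
def csr_from_dense(dense):
    # Compressed Sparse Row: keep only nonzero values, their column
    # indices, and per-row offsets into those arrays.
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    # y[r] = sum over the nonzeros of row r of value * x[column].
    return [sum(values[k] * x[col_idx[k]]
                for k in range(row_ptr[r], row_ptr[r + 1]))
            for r in range(len(row_ptr) - 1)]
```

The gather through `col_idx` is the irregular memory access that schemes like SPARTA's address remapping and column merging aim to regularize.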
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)

26 pages, 8513 KB  
Article
A Sparsity-Assisted Minimum-Entropy Autofocus Algorithm for SAR Moving Target Imaging
by Xuejiao Wen, Xiaolan Qiu and Weidong Chen
Remote Sens. 2026, 18(3), 529; https://doi.org/10.3390/rs18030529 - 6 Feb 2026
Abstract
To address the slow convergence and sensitivity to a low signal-to-noise ratio (SNR) of the minimum-entropy autofocus (MEA) algorithm in the refocusing of moving targets, this paper proposes a sparsity-assisted minimum-entropy autofocus algorithm. Within the framework of the traditional gradient descent MEA with variable step size, the proposed method introduces soft-thresholding-based sparse reconstruction to make moving targets more prominent and suppress background clutter in the image domain. A joint metric combining image entropy and the Hoyer sparsity measure is then constructed, and a three-point adaptive, variable step-size search is employed to reduce the number of evaluations of the cost function, thereby effectively mitigating clutter interference and significantly accelerating the optimization while maintaining good focusing quality. Simulation and real-data experiments demonstrate that, under complex phase errors and different SNR conditions, the proposed algorithm outperforms the conventional variable-step MEA in terms of image entropy, image sparsity, and runtime, while keeping the phase error estimation accuracy within a small range. These results indicate that the proposed method can achieve satisfactory moving-target focusing performance and exhibits promising engineering applicability. Full article
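Two ingredients of the method above have standard closed forms: the soft-thresholding operator used for sparse reconstruction and the image entropy minimized during autofocus. A minimal sketch, treating the image as a flat list of nonnegative intensities (the normalization choice is an assumption):

```python
import math

def soft_threshold(x, t):
    # Soft-thresholding operator: shrink the magnitude toward zero
    # by t, zeroing anything below the threshold (promotes sparsity).
    return math.copysign(max(abs(x) - t, 0.0), x)

def image_entropy(intensities):
    # Shannon entropy of an intensity image normalized to unit mass;
    # a lower value corresponds to a better-focused image.
    total = sum(intensities)
    probs = [i / total for i in intensities if i > 0]
    return -sum(p * math.log(p) for p in probs)
```

A focused image concentrates energy in few pixels and so has low entropy; soft-thresholding the image pushes it in that direction, which is why the two measures work together in the joint metric.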

24 pages, 6709 KB  
Article
Leveraging Cross-Subject Transfer Learning and Signal Augmentation for Enhanced RGB Color Decoding from EEG Data
by Metin Kerem Öztürk and Dilek Göksel Duru
Brain Sci. 2026, 16(2), 195; https://doi.org/10.3390/brainsci16020195 - 6 Feb 2026
Abstract
Objectives: Decoding neural patterns for RGB colors from electroencephalography (EEG) signals is an important step towards advancing the use of visual features as input for brain–computer interfaces (BCIs). This study aims to overcome challenges such as inter-subject variability and limited data availability by investigating whether transfer learning and signal augmentation can improve decoding performance. Methods: This research introduces an approach that combines transfer learning for cross-subject information transfer and data augmentation to increase representational diversity in order to improve RGB color classification from EEG data. Deep learning models, including CNN-based DeepConvNet (DCN) and Adaptive Temporal Convolutional Network (ATCNet) using the attention mechanism, were pre-trained on subjects with representative brain responses and fine-tuned on target subjects to parse individual differences. Signal augmentation techniques such as frequency slice recombination and Gaussian noise addition improved model generalization by enriching the training dataset. Results: The combined methodology yielded a classification accuracy of 83.5% for all subjects on the EEG dataset of 31 previously studied subjects. Conclusions: The improved accuracy and reduced variability underscore the effectiveness of transfer learning and signal augmentation in addressing data sparsity and variability, offering promising implications for EEG-based classification and BCI applications. Full article

21 pages, 2066 KB  
Article
A Multi-Behavior and Sequence-Aware Recommendation Method
by Dan Yin and Tianshuo Wang
Electronics 2026, 15(3), 700; https://doi.org/10.3390/electronics15030700 - 5 Feb 2026
Abstract
This paper proposes a multi-behavior and sequence-aware recommendation method that effectively integrates diverse user–item interaction behaviors and their sequential dependencies to enhance recommendation accuracy. Unlike existing studies that treat different user–item interactions independently, our approach integrates diverse behaviors and their natural sequential dependencies to better capture user preferences and alleviate data sparsity caused by single-behavior modeling. Different from the traditional single-behavior models, our approach constructs a multi-behavior heterogeneous graph and defines multiple meta-path patterns to capture implicit relationships between users and items. By generating subgraph instances, we extract fine-grained interaction patterns and employ a LightGCN with residual connections to learn user representations under different behavioral sequences. Furthermore, an attention mechanism is introduced to fuse features across subgraphs, enabling more expressive preference modeling. Experimental results on two real-world datasets, Taobao and Tmall, demonstrate that our method outperforms state-of-the-art single- and multi-behavior recommendation models, achieving up to 10.0% and 11.1% improvements in HR@10 and NDCG@10 on Taobao and 9.0% and 10.6% on Tmall, respectively. These results confirm the effectiveness of leveraging both multi-behavior information and sequence dependencies in capturing deeper user preferences for more accurate recommendations. Full article

22 pages, 4379 KB  
Article
Impact of Rainfall on Driving Speed: Combining Radar-Based Measurements and Floating Car Data
by Nico Becker, Uwe Ulbrich and Henning W. Rust
Future Transp. 2026, 6(1), 38; https://doi.org/10.3390/futuretransp6010038 - 3 Feb 2026
Abstract
It is known that rainfall leads to a reduction in driving speed. However, the results of various studies are inconsistent regarding the amount of speed reduction. In this study, we combine high-resolution radar-based rainfall estimates for three heavy-rainfall days with driving speeds derived from floating car data on 1.5 million road sections in Germany. Using linear regression models, we investigate the functional relationship between rainfall and driving speed depending on road section characteristics such as speed limit and number of lanes. We find that the speed reduction due to rainfall is higher on road sections with higher speed limits and on multi-lane roads. On highway road sections with speed limits of 130 km/h, for example, heavy rainfall of more than 8 L/m² in five minutes leads to an average speed reduction of more than 30%, although estimates at very high rainfall intensities are subject to increased uncertainty due to data sparsity. Cross-validation shows that including rainfall as a predictor of driving speed reduces mean squared errors by up to 14% in general and by up to 50% in heavy rainfall conditions. Furthermore, rainfall as a continuous variable should be preferred over categorical variables for a parsimonious model. Our results demonstrate that parsimonious, interpretable models combining radar rainfall data with floating car data can capture systematic rainfall-related speed reductions across a wide range of road types. However, the analysis should be interpreted strictly as a descriptive, event-specific study. It does not support generalizable inference across time, seasons, or broader traffic conditions. To make this approach suitable for operational applications such as real-time speed prediction, route planning, and traffic management, larger multi-event datasets and the consideration of effects like weekday structure and diurnal demand patterns are required to better constrain effects under heavy rainfall conditions. Full article
