Search Results (200)

Search Parameters:
Keywords = graph contrastive learning

18 pages, 2210 KB  
Article
SPINET-KSP: A Multi-Modal LLM-Graph Foundation Model for Contextual Prediction of Kinase-Substrate-Phosphatase Triads
by Michael Olaolu Arowolo, Marian Emmanuel Okon, Davis Austria, Muhammad Azam and Sulaiman Olaniyi Abdulsalam
Kinases Phosphatases 2026, 4(1), 3; https://doi.org/10.3390/kinasesphosphatases4010003 - 22 Jan 2026
Viewed by 50
Abstract
Reversible protein phosphorylation is a central regulatory mechanism in cellular signalling and disease, governed by the opposing actions of kinases and phosphatases. Current computational methods predict kinase–substrate or phosphatase–substrate interactions in isolation and lack specificity for biological conditions, neglecting triadic regulation. We present SPINET-KSP, a multi-modal LLM–Graph foundation model engineered for the contextual prediction of kinase–substrate–phosphatase (KSP) triads. SPINET-KSP integrates high-confidence interactomes (SIGNOR, BioGRID, STRING), structural contacts obtained from AlphaFold3, ESM-3 sequence embeddings, and a 512-dimensional cell-state manifold with 1612 quantitative phosphoproteomic conditions. The resulting heterogeneous KSP graph is processed by a cross-attention Graphormer with Reversible Triad Attention that models kinase–phosphatase antagonism. Pre-trained on 3.41 million validated phospho-sites using masked phosphorylation modelling and contrastive cell-state learning, SPINET-KSP achieves an AUROC of 0.852 for kinase-family classification (sensitivity 0.821, specificity 0.834, MCC 0.655) and a Pearson correlation coefficient of 0.712 for phospho-occupancy prediction. On independent 2025 mass spectrometry datasets, it recovers 72% of known cancer-resistance triads within the top 10 rankings and uncovers 247 additional triads validated by orthogonal proteomics. SPINET-KSP is the first foundation model for modelling context-dependent reversible phosphorylation, enabling the targeting of dysregulated kinase–phosphatase pathways in disease. Full article
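The abstract above names two pre-training objectives, masked phosphorylation modelling and contrastive cell-state learning, without giving their form. The sketch below is a minimal, hedged illustration of the masked-modelling idea only: hide a random subset of phospho-occupancy values and score a model on reconstructing them. All names (model, occupancy, mask_ratio) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a masked phospho-occupancy objective (illustrative, not the paper's code).
import torch

def masked_occupancy_loss(model, occupancy, mask_ratio=0.15):
    """occupancy: (batch, n_sites) tensor of phospho-occupancy values in [0, 1]."""
    mask = torch.rand_like(occupancy) < mask_ratio       # randomly choose sites to hide
    corrupted = occupancy.masked_fill(mask, 0.0)         # zero out the hidden sites
    pred = model(corrupted)                              # model reconstructs all sites
    # score only the masked positions, as in masked-language-model pre-training
    return torch.nn.functional.mse_loss(pred[mask], occupancy[mask])
```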

17 pages, 5486 KB  
Article
Enhancing Parameter-Efficient Code Representations with Retrieval and Structural Priors
by Shihao Zheng, Yong Li and Xiang Ma
Appl. Sci. 2026, 16(2), 1106; https://doi.org/10.3390/app16021106 - 21 Jan 2026
Viewed by 71
Abstract
High-quality code representations are fundamental to code intelligence. Achieving such representations with parameter-efficient fine-tuning (PEFT) remains a key challenge. While code pre-trained models (CodePTMs) offer a robust foundation for general-purpose embeddings, current PEFT approaches face two main obstacles when adapting them: (i) they fail to adequately capture the deep structural characteristics of programs, and (ii) they are limited by the model’s finite internal parameters, restricting their ability to overcome inherent knowledge bottlenecks. To address these challenges, we introduce RS-Rep, a parameter-efficient code representation learning framework that combines retrieval augmentation with structure-aware priors. Our framework features three complementary, lightweight modules: first, a structure–semantic dual-channel retrieval mechanism that infuses high-quality external code knowledge as non-parametric memory to alleviate the knowledge bottleneck; second, a graph relative bias module that strengthens the attention mechanism’s capacity to model structural relationships within programs; and third, a span-discriminative contrastive objective that sharpens the distinctiveness and boundary clarity of span-level representations. Extensive experiments on three benchmarks spanning six programming languages show that our method consistently outperforms state-of-the-art parameter-efficient baselines. Notably, on structure-sensitive tasks using the PLBART backbone, RS-Rep surpasses full fine-tuning, delivering a 22.1% improvement in Exact Match for code generation and a 4.4% increase in BLEU scores for code refinement, all while utilizing only about 5% of the trainable parameters. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
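The "graph relative bias module" above is described only as strengthening the attention mechanism's ability to model structural relationships. A plausible reading, sketched below under that assumption, is a Graphormer-style learnable bias indexed by pairwise graph distance (for example, shortest-path distance over a program's AST) and added to the attention logits; the class and argument names are illustrative, not the paper's.

```python
# Hedged sketch: a learnable bias indexed by pairwise graph distance, added to
# attention logits (Graphormer-style spatial encoding). Names are illustrative.
import torch
import torch.nn as nn

class GraphRelativeBias(nn.Module):
    def __init__(self, num_heads, max_dist=16):
        super().__init__()
        # one learnable scalar per (distance bucket, head)
        self.bias = nn.Embedding(max_dist + 1, num_heads)

    def forward(self, scores, dist):
        # scores: (batch, heads, n, n) raw attention logits
        # dist:   (batch, n, n) integer graph distances (e.g. shortest path over the AST)
        b = self.bias(dist.clamp(max=self.bias.num_embeddings - 1))  # (batch, n, n, heads)
        return scores + b.permute(0, 3, 1, 2)
```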

18 pages, 399 KB  
Article
Enhancing Cybersecurity Monitoring in Battery Energy Storage Systems with Graph Neural Networks
by Danilo Greco and Giovanni Battista Gaggero
Energies 2026, 19(2), 479; https://doi.org/10.3390/en19020479 - 18 Jan 2026
Viewed by 143
Abstract
Battery energy storage systems (BESSs) play a vital role in contemporary smart grids, but their increasing digitalisation exposes them to sophisticated cyberattacks. Existing anomaly detection approaches typically treat sensor measurements as flat feature vectors, overlooking the intrinsic relational structure of cyber–physical systems. This work introduces an enhanced Graph Neural Network (GNN) autoencoder for unsupervised BESS anomaly detection that integrates multiscale graph construction, multi-head graph attention, manifold regularisation via latent compactness and graph smoothness, contrastive embedding shaping, and an ensemble anomaly scoring mechanism. A comprehensive evaluation across seven BESS and firmware cyberattack datasets demonstrates that the proposed method achieves near-perfect area under the Receiver Operating Characteristic curve (ROC AUC) and Precision–Recall curve (PR AUC), reaching up to 1.00 on several datasets, outperforming classical one-class models such as Isolation Forest, One-Class Support Vector Machine (One-Class SVM), and Local Outlier Factor on the most challenging scenarios. These results illustrate the strong potential of graph-informed representation learning for cybersecurity monitoring in distributed energy resource infrastructures. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
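The "ensemble anomaly scoring mechanism" is not specified in the abstract. One common construction, sketched below as an assumption rather than the paper's method, combines a z-normalised reconstruction error from the autoencoder with the latent-space distance to the centroid of normal training embeddings.

```python
# Hedged sketch of an ensemble anomaly score: combine (z-normalised) reconstruction
# error and latent-space distance to the training centroid. Weights are illustrative.
import numpy as np

def ensemble_score(x, x_hat, z, z_train, w=(0.5, 0.5)):
    recon = np.linalg.norm(x - x_hat, axis=1)                  # per-sample reconstruction error
    latent = np.linalg.norm(z - z_train.mean(axis=0), axis=1)  # distance to normal centroid

    def zscore(s):
        return (s - s.mean()) / (s.std() + 1e-8)

    return w[0] * zscore(recon) + w[1] * zscore(latent)        # higher = more anomalous
```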

26 pages, 14905 KB  
Article
Data–Knowledge Collaborative Learning Framework for Cellular Traffic Forecasting via Enhanced Correlation Modeling
by Keyi An, Qiangjun Li, Kaiqi Chen, Min Deng, Yafei Liu, Senzhang Wang and Kaiyuan Lei
ISPRS Int. J. Geo-Inf. 2026, 15(1), 43; https://doi.org/10.3390/ijgi15010043 - 16 Jan 2026
Viewed by 287
Abstract
Forecasting the spatio-temporal evolutions of cellular traffic is crucial for urban management. However, achieving accurate forecasting is challenging due to “complex correlation modeling” and “model-blindness” issues. Specifically, cellular traffic is generated within complex urban systems characterized by an intricate structure and human mobility. Existing approaches, often based on proximity or attributes, struggle to learn the latent correlation matrix governing traffic evolution, which limits forecasting accuracy. Furthermore, while substantial knowledge about urban systems can supplement the modeling of correlations, existing methods for integrating this knowledge—typically via loss functions or embeddings—overlook the synergistic collaboration between data and knowledge, resulting in weak model robustness. To address these challenges, we develop a data–knowledge collaborative learning framework termed the knowledge-empowered spatio-temporal neural network (KESTNN). This framework first extracts knowledge triplets representing urban structures to construct a knowledge graph. Representation learning is then conducted to learn the correlation matrix. Throughout this process, data and knowledge are integrated collaboratively via backpropagation, contrasting with the forward feature injection methods typical of existing approaches. This mechanism ensures that data and knowledge directly guide the dynamic updating of model parameters through backpropagation, rather than merely serving as a static feature prompt, thereby fundamentally alleviating the “model-blindness” issue. Finally, the optimized matrix is embedded into a forecasting module. Experiments on the Milan dataset demonstrate that the KESTNN exhibits excellent forecast performance, reducing RMSE by up to 23.91%, 16.73%, and 10.40% for 3-, 6-, and 9-step forecasts, respectively, compared to the best baseline. Full article
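The abstract states that the correlation matrix is learned via representation learning and that knowledge guides parameter updates through backpropagation rather than forward feature injection. The sketch below illustrates only that general pattern, as an assumption rather than the KESTNN design: the matrix is a trainable parameter initialised from the knowledge-graph adjacency and updated by gradients from the forecasting loss; the names and normalisation are illustrative.

```python
# Hedged sketch: a correlation matrix parameterised as a learnable tensor, initialised
# from a knowledge-graph adjacency and updated by backpropagation together with the
# forecasting module (names illustrative, not the paper's).
import torch
import torch.nn as nn

class LearnableCorrelation(nn.Module):
    def __init__(self, kg_adjacency):                    # (n_cells, n_cells) prior from the KG
        super().__init__()
        self.logits = nn.Parameter(kg_adjacency.clone().float())

    def forward(self):
        # row-normalised, non-negative correlation matrix used by the forecaster;
        # gradients from the forecasting loss flow back into this knowledge prior
        return torch.softmax(self.logits, dim=-1)
```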

19 pages, 6052 KB  
Article
SGMT-IDS: A Dual-Branch Semi-Supervised Intrusion Detection Model Based on Graphs and Transformers
by Yifei Wu and Liang Wan
Electronics 2026, 15(2), 348; https://doi.org/10.3390/electronics15020348 - 13 Jan 2026
Viewed by 255
Abstract
Network intrusion behaviors exhibit high concealment and diversity, making intrusion detection methods based on single-behavior modeling unable to accurately characterize such activities. To overcome this limitation, we propose SGMT-IDS, a dual-branch semi-supervised intrusion detection model based on Graph Neural Networks (GNNs) and Transformers. By constructing two views of network attacks, namely structural and behavioral semantics, the model performs collaborative analysis of intrusion behaviors from both perspectives. The model adopts a dual-branch architecture. The SGT branch captures the structural embeddings of network intrusion behaviors, and the GML-Transformer branch extracts the semantic information of intrusion behaviors. In addition, we introduce a two-stage training strategy that optimizes the model through pseudo-labeling and contrastive learning, enabling accurate intrusion detection with only a small amount of labeled data. We conduct experiments on the NF-Bot-IoT-V2, NF-ToN-IoT-V2, and NF-CSE-CIC-IDS2018-V2 datasets. The experimental results demonstrate that SGMT-IDS achieves superior performance across multiple evaluation metrics. Full article
(This article belongs to the Section Computer Science & Engineering)
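The two-stage strategy above relies on pseudo-labeling with only a small amount of labeled data. As a hedged illustration of the standard ingredient (not the paper's exact rule), the sketch below keeps only unlabeled flows whose predicted class probability exceeds a confidence threshold and assigns them their predicted label; the threshold value is an assumption.

```python
# Hedged sketch of confidence-thresholded pseudo-labelling for a semi-supervised stage.
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_x, threshold=0.95):
    probs = torch.softmax(model(unlabeled_x), dim=-1)
    conf, labels = probs.max(dim=-1)
    keep = conf >= threshold                    # retain only high-confidence predictions
    return unlabeled_x[keep], labels[keep]
```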

21 pages, 4559 KB  
Article
Language-Guided Spatio-Temporal Context Learning for Next POI Recommendation
by Chunyang Liu and Chuxiao Fu
ISPRS Int. J. Geo-Inf. 2026, 15(1), 28; https://doi.org/10.3390/ijgi15010028 - 6 Jan 2026
Viewed by 219
Abstract
With the proliferation of mobile internet and location-based services, location-based social networks (LBSNs) have accumulated extensive user check-in data, driving the advancement of next Point-of-Interest (POI) recommendation systems. Although existing approaches can model sequential dependencies and spatio-temporal patterns, they often fail to fully capture users’ dynamic preferences under varying spatio-temporal contexts and lack effective integration of fine-grained semantic information. To address these limitations, this paper proposes Language-Guided Spatio-Temporal Context Learning for Next POI Recommendation (LSCNP). It employs a pre-trained BERT model to encode multi-dimensional spatio-temporal context—including geographic coordinates, visiting hours, and surrounding POI categories—into structured textual sequences for semantic understanding; constructs dual-graph structures to model spatial constraints and user transition patterns; and introduces a contrastive learning module to align spatio-temporal context with POI features, enhancing the discriminability of representations. A Transformer-based sequential encoder is adopted to capture long-range dependencies, while a neural matrix factorization decoder generates final recommendations. Experiments on three real-world LBSN datasets demonstrate that LSCNP consistently outperforms state-of-the-art baselines. Ablation studies and hyperparameter analyses further validate the contribution of each component to the overall performance. Full article
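The contrastive module above aligns spatio-temporal context with POI features. A common form of such an alignment objective, shown below as a sketch rather than the paper's loss, is a symmetric InfoNCE over matched context/POI embedding pairs within a batch; the temperature tau is an assumption.

```python
# Hedged sketch of a symmetric InfoNCE alignment between context and POI embeddings.
import torch
import torch.nn.functional as F

def align_loss(ctx, poi, tau=0.1):
    # ctx, poi: (batch, dim); row i of each is a matched context/POI pair
    ctx = F.normalize(ctx, dim=-1)
    poi = F.normalize(poi, dim=-1)
    logits = ctx @ poi.t() / tau                       # (batch, batch) similarity matrix
    target = torch.arange(ctx.size(0), device=ctx.device)
    return 0.5 * (F.cross_entropy(logits, target) +
                  F.cross_entropy(logits.t(), target))
```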

22 pages, 2594 KB  
Article
SG-MuRCL: Smoothed Graph-Enhanced Multi-Instance Contrastive Learning for Robust Whole-Slide Image Classification
by Bo Yi Lin, Seyed Sahand Mohammadi Ziabari, Yousuf Nasser Al Husaini and Ali Mohammed Mansoor Alsahag
Information 2026, 17(1), 37; https://doi.org/10.3390/info17010037 - 3 Jan 2026
Viewed by 303
Abstract
Multiple-Instance Learning (MIL) is a standard paradigm for classifying gigapixel Whole-Slide Images (WSIs). However, prominent models such as Attention-Based MIL (ABMIL) treat image patches as independent instances, ignoring their inherent spatial context. More advanced frameworks like MuRCL employ reinforcement learning for instance selection but do not explicitly enforce spatial coherence, often resulting in noisy localizations. Although Graph Neural Networks (GNNs), attention smoothing, and reinforcement learning (RL) are each powerful, state-of-the-art strategies for addressing these issues individually, their integration remains a significant challenge. This paper introduces SG-MuRCL, a framework that enhances MuRCL by first employing a GNN to explicitly model spatial relationships, departing from ABMIL’s independence assumption and, second, incorporating an attention-smoothing operator to regularize the MIL aggregator, aiming to improve robustness by generating more coherent and clinically meaningful heatmaps. Empirical evaluation yielded an important finding: while the baseline MuRCL trained successfully, the integrated SG-MuRCL consistently collapsed into a trivial solution. This outcome shows that the theoretical synergy between GNNs, attention smoothing, and RL does not trivially translate into practice. The contribution of this work is therefore not a high-performing model, but a concrete demonstration of scalability and stability challenges that arise when unifying these advanced paradigms. Full article
(This article belongs to the Special Issue Artificial Intelligence for Signal, Image and Video Processing)
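The "attention-smoothing operator" is described only as regularising the MIL aggregator toward spatially coherent heatmaps. One simple operator consistent with that description, sketched below as an assumption, blends each patch's attention weight with the mean weight of its neighbours on the patch graph and renormalises; the blend factor lam is illustrative.

```python
# Hedged sketch of attention smoothing over a patch graph for MIL aggregation.
import torch

def smooth_attention(attn, adj, lam=0.5):
    # attn: (n,) non-negative MIL attention weights; adj: (n, n) {0,1} float adjacency
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
    neighbour_mean = (adj @ attn.unsqueeze(-1)) / deg       # average weight over neighbours
    smoothed = (1 - lam) * attn + lam * neighbour_mean.squeeze(-1)
    return smoothed / smoothed.sum()                        # keep the weights normalised
```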

46 pages, 852 KB  
Systematic Review
The Intelligent Evolution of Radar Signal Deinterleaving: A Systematic Review from Foundational Algorithms to Cognitive AI Frontiers
by Zhijie Qu, Jinquan Zhang, Yuewei Zhou and Lina Ni
Sensors 2026, 26(1), 248; https://doi.org/10.3390/s26010248 - 31 Dec 2025
Viewed by 625
Abstract
The escalating complexity, density, and agility of the modern electromagnetic environment (CME) pose unprecedented challenges to radar signal deinterleaving, a cornerstone of electronic intelligence. While traditional methods face significant performance bottlenecks, the advent of artificial intelligence, particularly deep learning, has catalyzed a paradigm shift. This review provides a systematic, comprehensive, and forward-looking analysis of the radar signal deinterleaving landscape, critically bridging foundational techniques with the cognitive frontiers. Previous reviews often focused on specific technical branches or predated the deep learning revolution. In contrast, our work offers a holistic synthesis. It explicitly links the evolution of algorithms to the persistent challenges of the CME. We first establish a unified mathematical framework and systematically evaluate classical approaches, such as PRI-based search and clustering algorithms, elucidating their contributions and inherent limitations. The core of our review then pivots to the deep learning-driven era, meticulously dissecting the application paradigms, innovations, and performance of mainstream architectures, including Recurrent Neural Networks (RNNs), Transformers, Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs). Furthermore, we venture into emerging frontiers, exploring the transformative potential of self-supervised learning, meta-learning, multi-station fusion, and the integration of Large Language Models (LLMs) for enhanced semantic reasoning. A critical assessment of the current dataset landscape is also provided, highlighting the crucial need for standardized benchmarks. Finally, this paper culminates in a comprehensive comparative analysis, identifying key open challenges such as open-set recognition, model interpretability, and real-time deployment. We conclude by offering in-depth insights and a roadmap for future research, aimed at steering the field towards end-to-end intelligent and autonomous deinterleaving systems. This review is intended to serve as a definitive reference and insightful guide for researchers, catalyzing future innovation in intelligent radar signal processing. Full article

33 pages, 3147 KB  
Review
Perception–Production of Second-Language Mandarin Tones Based on Interpretable Computational Methods: A Review
by Yujiao Huang, Zhaohong Xu, Xianming Bei and Huakun Huang
Mathematics 2026, 14(1), 145; https://doi.org/10.3390/math14010145 - 30 Dec 2025
Viewed by 438
Abstract
We survey recent advances in second-language (L2) Mandarin lexical tones research and show how an interpretable computational approach can deliver parameter-aligned feedback across perception–production (P ↔ P). We synthesize four strands: (A) conventional evaluations and tasks (identification, same–different, imitation/read-aloud) that reveal robust tone-pair asymmetries and early P ↔ P decoupling; (B) physiological and behavioral instrumentation (e.g., EEG, eye-tracking) that clarifies cue weighting and time course; (C) audio-only speech analysis, from classic F0 tracking and MFCC–prosody fusion to CNN/RNN/CTC and self-supervised pipelines; and (D) interpretable learning, including attention and relational models (e.g., graph neural networks, GNNs) opened with explainable AI (XAI). Across strands, evidence converges on tones as time-evolving F0 trajectories, so movement, turning-point timing, and local F0 range are more diagnostic than height alone, and the contrast between Tone 2 (rising) and Tone 3 (dipping/low) remains the persistent difficulty; learners with tonal vs. non-tonal language backgrounds weight these cues differently. Guided by this synthesis, we outline a tool-oriented framework that pairs perception and production on the same items, jointly predicts tone labels and parameter targets, and uses XAI to generate local attributions and counterfactual edits, making feedback classroom-ready. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)

28 pages, 7491 KB  
Article
Graph-Propagated Multi-Scale Hashing with Contrastive Learning for Unsupervised Cross-Modal Retrieval
by Yan Zhao and Guohua Shi
Appl. Sci. 2026, 16(1), 389; https://doi.org/10.3390/app16010389 - 30 Dec 2025
Viewed by 185
Abstract
This paper introduces Graph-Propagated Multi-Scale Hashing with Contrastive Learning (GPMCL), a novel unsupervised cross-modal hashing framework designed to address the semantic deficiency in large-scale unlabeled multimodal data. GPMCL first constructs an initial similarity matrix via cross-modal graph propagation, effectively capturing potential inter-modal relationships. A multi-scale enhancement strategy is then employed to integrate both local and global similarities, resulting in a more informative and robust similarity representation. To adaptively distinguish sample relationships, a Gaussian Mixture Model (GMM) is utilized to determine dynamic thresholds. Additionally, contrastive learning is incorporated in the feature space to enhance intra-class compactness and inter-class separability. Extensive experiments conducted on three public benchmark datasets demonstrate that GPMCL consistently outperforms existing state-of-the-art unsupervised cross-modal hashing methods in terms of retrieval performance. These results validate the effectiveness and generalization capability of the proposed method, highlighting its potential for practical cross-modal retrieval applications. Full article
(This article belongs to the Special Issue New Advances in Information Retrieval)
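Two of the steps above, cross-modal graph propagation of the similarity matrix and GMM-based dynamic thresholding, can be illustrated compactly. The sketch below is a hedged approximation: it propagates a similarity matrix one step, fuses it with the original, fits a two-component Gaussian mixture to the entries, and thresholds at the midpoint between the component means. The fusion weight alpha and the midpoint rule are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: one step of similarity propagation plus a GMM-based dynamic threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

def propagate_and_threshold(S, alpha=0.5):
    # S: (n, n) initial cross-modal cosine-similarity matrix
    S2 = S @ S / S.shape[0]                      # second-order (propagated) similarity
    fused = alpha * S + (1 - alpha) * S2         # fuse local and propagated views
    gmm = GaussianMixture(n_components=2, random_state=0).fit(fused.reshape(-1, 1))
    threshold = gmm.means_.ravel().mean()        # boundary between the two similarity modes
    positive = fused > threshold                 # adaptive positive-pair mask
    return fused, positive
```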

29 pages, 2471 KB  
Article
MISA-GMC: An Enhanced Multimodal Sentiment Analysis Framework with Gated Fusion and Momentum Contrastive Modality Relationship Modeling
by Zheng Du, Yapeng Wang, Xu Yang, Sio-Kei Im and Zhiwen Wang
Mathematics 2026, 14(1), 115; https://doi.org/10.3390/math14010115 - 28 Dec 2025
Viewed by 464
Abstract
Multimodal sentiment analysis jointly exploits textual, acoustic, and visual signals to recognize human emotions more accurately than unimodal models. However, real-world data often contain noisy or partially missing modalities, and naive fusion may allow unreliable signals to degrade overall performance. To address this, we propose an enhanced framework named MISA-GMC, a lightweight extension of the widely used MISA backbone that explicitly accounts for modality reliability. The core idea is to adaptively reweight modalities at the sample level while regularizing cross-modal representations during training. Specifically, a reliability-aware gated fusion module down-weights unreliable modalities, and two auxiliary training-time regularizers (momentum contrastive learning and a lightweight correlation graph) help stabilize and refine multimodal representations without adding inference-time overhead. Experiments on three benchmark datasets—CMU-MOSI, CMU-MOSEI, and CH-SIMS—demonstrate the effectiveness of MISA-GMC. For instance, on CMU-MOSI, the proposed model improves 7-class accuracy from 43.29 to 45.92, reduces the mean absolute error (MAE) from 0.785 to 0.712, and increases the Pearson correlation coefficient (Corr) from 0.764 to 0.795. This indicates more accurate fine-grained sentiment prediction and better sentiment-intensity estimation. On CMU-MOSEI and CH-SIMS, MISA-GMC also achieves consistent gains over MISA and strong baselines such as LMF, ALMT, and MMIM across both classification and regression metrics. Ablation studies and missing-modality experiments further verify the contribution of each component and the robustness of MISA-GMC under partial-modality settings. Full article
(This article belongs to the Special Issue Applications of Machine Learning and Pattern Recognition)
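Momentum contrastive learning, named as one of the two auxiliary regularizers above, conventionally maintains a key encoder as an exponential moving average of the query encoder (as in MoCo). The sketch below shows that update; the coefficient m = 0.999 is an assumption, and the paper's exact variant may differ.

```python
# Hedged sketch of the MoCo-style momentum-encoder update.
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # key encoder parameters follow an exponential moving average of the query encoder
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1.0 - m)
```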

40 pages, 5707 KB  
Review
Graph Representation Learning for Battery Energy Systems in Few-Shot Scenarios: Methods, Challenges and Outlook
by Xinyue Zhang and Shunli Wang
Batteries 2026, 12(1), 11; https://doi.org/10.3390/batteries12010011 - 26 Dec 2025
Viewed by 389
Abstract
Graph representation learning (GRL) has emerged as a unifying paradigm for modeling the relational and heterogeneous nature of battery energy storage systems (BESS), yet a systematic synthesis focused on data-scarce (few-shot) battery scenarios is still lacking. Graph representation learning offers a natural way to describe the structure and interaction of battery cells, modules and packs. At the same time, battery applications often suffer from very limited labeled data, especially for new chemistries, extreme operating conditions and second-life use. This review analyzes how graph representation learning can be combined with few-shot learning to support key battery management tasks under such data-scarce conditions. We first introduce the basic ideas of graph representation learning, including models based on neighborhood aggregation, contrastive learning, autoencoders and transfer learning, and discuss typical data, model and algorithm challenges in few-shot scenarios. We then connect these methods to battery state estimation problems, covering state of charge, state of health, remaining useful life and capacity. Particular attention is given to approaches that use graph neural models, meta-learning, semi-supervised and self-supervised learning, Bayesian deep networks, and federated learning to extract transferable features from early-cycle data, partial charge–discharge curves and large unlabeled field datasets. Reported studies show that, with only a small fraction of labeled samples or a few initial cycles, these methods can achieve state and life prediction errors that are comparable to or better than conventional models trained on full datasets, while also improving robustness and, in some cases, providing uncertainty estimates. Based on this evidence, we summarize the main technical routes for few-shot battery scenarios and identify open problems in data preparation, cross-domain generalization, uncertainty quantification and deployment on real battery management systems. The review concludes with a research outlook, highlighting the need for pack-level graph models, physics-guided and probabilistic learning, and unified benchmarks to advance reliable graph-based few-shot methods for next-generation intelligent battery management. Full article
(This article belongs to the Section Battery Modelling, Simulation, Management and Application)

26 pages, 1143 KB  
Article
Debiasing Session-Based Recommendation for the Digital Economy: Propensity-Aware Training and Temporal Contrast on Graph Transformers
by Yongjian Wang, Junru Si, Xuhua Qiu and Kunjie Zhu
Electronics 2026, 15(1), 84; https://doi.org/10.3390/electronics15010084 - 24 Dec 2025
Viewed by 434
Abstract
Session-based recommender systems (SBRs) are critically impaired by exposure bias in observational training logs, causing models to overfit to logging policies rather than true user preferences. This bias distorts offline evaluation and harms generalization, particularly for long-tail items. To address this, we propose the Propensity- and Temporal-consistency Enhanced Graph Transformer (PTE-GT), a principled framework that enhances a recent interval-aware graph transformer backbone with two synergistic training-time modules. This Graph Neural Network-based architecture is adept at modeling the complex, graph-structured nature of session data, capturing intricate item transitions that sequential models might miss. First, we introduce a propensity-aware (PA) optimization objective based on the self-normalized inverse propensity scoring (SNIPS) estimator. This module leverages logs containing randomized exposure or logged behavior-policy propensities to learn an unbiased risk estimate, correcting for the biased data distribution. Second, we design a lightweight, view-free temporal consistency (TC) contrastive regularizer that enforces alignment between session prefixes and suffixes, improving representation robustness without computationally expensive graph augmentations, which are often a bottleneck for graph-based contrastive methods. We conduct comprehensive evaluations on three public session-based benchmarks—KuaiRand, the OTTO e-commerce challenge dataset (OTTO), and the YOOCHOOSE-1/64 split (YOOCHOOSE)—and additionally on the publicly available Open Bandit Dataset (OBD) containing logged bandit propensities. Our results demonstrate that PTE-GT significantly outperforms strong baselines. Critically, on datasets with randomized exposure or logged propensities, our unbiased evaluation protocol, using SNIPS-weighted metrics, reveals a substantial performance leap that is masked by standard, biased metrics. Our method also shows marked improvements in model calibration and long-tail item recommendation. Full article
(This article belongs to the Special Issue Advances in Deep Learning for Graph Neural Networks)
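The propensity-aware objective above builds on the self-normalized inverse propensity scoring (SNIPS) estimator, which has a standard form: importance weights are the ratio of the target policy's action probability to the logging propensity, and the weighted sum of rewards is divided by the sum of the weights rather than by the sample count. The sketch below shows that estimator; the clipping constant is an assumption, and how the paper turns it into a training loss is not reproduced here.

```python
# Hedged sketch of the self-normalised inverse propensity scoring (SNIPS) estimator.
import numpy as np

def snips(rewards, target_probs, logging_propensities):
    w = target_probs / np.clip(logging_propensities, 1e-6, None)  # importance weights
    return np.sum(w * rewards) / np.sum(w)                        # self-normalised estimate
```

Compared with plain IPS (which divides by the number of samples), the self-normalisation trades a small bias for a large variance reduction, which is what makes the estimator usable as a training and evaluation signal.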

44 pages, 29351 KB  
Article
Bayesian-Inspired Dynamic-Lag Causal Graphs and Role-Aware Transformers for Landslide Displacement Forecasting
by Fan Zhang, Yuanfa Ji, Xiaoming Liu, Siyuan Liu, Zhang Lu, Xiyan Sun, Shuai Ren and Xizi Jia
Entropy 2026, 28(1), 7; https://doi.org/10.3390/e28010007 - 20 Dec 2025
Viewed by 350
Abstract
Increasingly frequent intense rainfall is increasing landslide occurrence and risk. In southern China in particular, steep slopes and thin residual soils produce frequent landslide events with pronounced spatial heterogeneity. Therefore, displacement prediction methods that function across sites and deformation regimes in similar settings are essential for early warning. Most existing approaches adopt a multistage pipeline that decomposes, predicts, and recombines, often leading to complex architectures with weak cross-domain transfer and limited adaptability. To address these limitations, we present CRAFormer, a causal role-aware Transformer guided by a dynamic-lag Bayesian network-style causal graph learned from historical observations. In our system, the discovered directed acyclic graph (DAG) partitions drivers into five causal roles and induces role-specific, non-anticipative masks for lightweight branch encoders, while a context-aware Top-2 gate sparsely fuses the branch outputs, yielding sample-wise attributions. To safely exploit exogenous rainfall forecasts, next-day rainfall is entered exclusively through an ICS tail with a leakage-free block mask, a non-negative readout, and a rainfall monotonicity regularizer. In this study, we curate two long-term GNSS datasets from Guangxi (LaMenTun and BaYiTun) that capture slow creep and step-like motions during extreme rainfall. Under identical inputs and a unified protocol, CRAFormer reduces the MAE and RMSE by 59–79% across stations relative to the strongest baseline, and it lowers magnitude errors near turning points and step events, demonstrating robust performance for two contrasting landslides within a shared regional setting. Ablations confirm the contributions of the DBN-style causal masks, the leakage-free ICS tail, and the monotonicity prior. These results highlight a practical path from causal discovery to forecast-compatible neural predictors for rainfall-induced landslides. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
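The "context-aware Top-2 gate" that sparsely fuses the branch outputs is described only at that level. A standard sparse-gating construction, sketched below as an assumption rather than the paper's implementation, keeps the two highest gate scores per sample, softmaxes them, and zeroes out the remaining branches.

```python
# Hedged sketch of a Top-2 sparse gate over branch outputs (names assumed).
import torch

def top2_fuse(gate_logits, branch_outputs):
    # gate_logits: (batch, n_branches); branch_outputs: (batch, n_branches, dim)
    top_val, top_idx = gate_logits.topk(2, dim=-1)
    weights = torch.zeros_like(gate_logits).scatter_(
        -1, top_idx, torch.softmax(top_val, dim=-1))          # non-zero only for the top-2
    return (weights.unsqueeze(-1) * branch_outputs).sum(dim=1)  # sparse per-sample fusion
```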

23 pages, 6967 KB  
Article
Semantics- and Physics-Guided Generative Network for Radar HRRP Generalized Zero-Shot Recognition
by Jiaqi Zhou, Tao Zhang, Siyuan Mu, Yuze Gao, Feiming Wei and Wenxian Yu
Remote Sens. 2026, 18(1), 4; https://doi.org/10.3390/rs18010004 - 19 Dec 2025
Viewed by 380
Abstract
High-resolution range profile (HRRP) target recognition has garnered significant attention in radar automatic target recognition (RATR) research for its rich structural information and low computational cost. With the rapid advancements in deep learning, methods for HRRP target recognition that leverage deep neural networks have emerged as the dominant approaches. Nevertheless, these traditional closed-set recognition methods require labeled data for every class during training, whereas in reality seen and unseen classes coexist. It is therefore necessary to explore methods that can identify both seen and unseen classes simultaneously. To this end, we propose a semantics- and physics-guided generative network (SPGGN) for HRRP generalized zero-shot recognition; it combines a constructed knowledge graph with attribute vectors to comprehensively represent semantics and reconstructs strong scattering points to introduce physical constraints. Specifically, to improve robustness, we reconstruct the strong scattering points from deep HRRP features, where class-aware contrastive learning in the middle layer mitigates the influence of target-aspect variations. In the classification stage, discriminative features are produced through attention-based feature fusion to capture multi-faceted information, while a balancing loss reduces the bias towards seen classes. Experiments on two measured aircraft HRRP datasets validate the superior recognition performance of our method. Full article
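The "class-aware contrastive learning" applied in the middle layer is not spelled out; a common class-aware form is a supervised contrastive loss in which embeddings sharing a class label act as positives for one another. The sketch below shows that loss; the temperature and masking details are assumptions and may differ from the paper's formulation.

```python
# Hedged sketch of a class-aware (supervised) contrastive loss.
import torch
import torch.nn.functional as F

def class_aware_contrastive(z, labels, tau=0.1):
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                           # (n, n) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask  # same-class pairs
    # log-probability of each pair against all non-self pairs of the anchor
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float('-inf')),
                                     dim=1, keepdim=True)
    # average over each anchor's positives, then over anchors
    loss = -(log_prob * pos.float()).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss.mean()
```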
