Search Results (710)

Search Parameters:
Keywords = label graph

25 pages, 2963 KB  
Article
LawLLM-DS: A Two-Stage LoRA Framework for Multi-Label Legal Judgment Prediction with Structured Label Dependencies
by Pengcheng Zhao, Chengcheng Han and Kun Han
Symmetry 2026, 18(1), 150; https://doi.org/10.3390/sym18010150 - 13 Jan 2026
Abstract
Legal judgment prediction (LJP) increasingly relies on large language models whose full fine-tuning is memory-intensive and susceptible to catastrophic forgetting. We present LawLLM-DS, a two-stage Low-Rank Adaptation (LoRA) framework that first performs legal knowledge pre-tuning with an aggressive learning rate and subsequently refines judgment relations with conservative updates, using dedicated LoRA adapters, 4-bit quantization, and targeted modification of seven Transformer projection matrices to keep only 0.21% of parameters trainable. From a structural perspective, the twenty annotated legal elements form a symmetric label co-occurrence graph that exhibits both cluster-level regularities and asymmetric sparsity patterns, and LawLLM-DS implicitly captures these graph-informed dependencies while remaining compatible with downstream GNN-based representations. Experiments on 5096 manually annotated divorce cases show that LawLLM-DS lifts macro F1 to 0.8893 and achieves an accuracy of 0.8786, outperforming single-stage LoRA and BERT baselines under the same data regime. Ablation studies further verify the contributions of stage-wise learning rates, adapter placement, and low-rank settings. These findings demonstrate that curriculum-style, parameter-efficient adaptation provides a practical path toward lightweight yet structure-aware LJP systems for judicial decision support.
(This article belongs to the Special Issue Symmetry and Asymmetry Study in Graph Theory)
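As a concrete (hypothetical) illustration of the parameter-efficient adaptation this abstract describes, the LoRA idea can be sketched from scratch: freeze a pretrained weight and train only a low-rank correction. The hidden size, rank, and scaling below are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Minimal from-scratch sketch of the Low-Rank Adaptation (LoRA) idea: freeze a
# pretrained weight W and train only a rank-r correction B @ A. Sizes, rank,
# and scaling are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
d, r, alpha = 1024, 8, 16               # hidden size, LoRA rank, scaling factor

W = rng.standard_normal((d, d))         # frozen pretrained projection matrix
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection; zero init keeps
                                        # the adapted model identical to W at start

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, never materialized explicitly.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

trainable = A.size + B.size
total = W.size + trainable
# ~1.5% here; a full LLM with many frozen layers reaches much smaller
# fractions, such as the 0.21% reported in the abstract.
print(f"trainable fraction: {trainable / total:.2%}")
```

In a two-stage scheme like the one described, only `A` and `B` would be updated, first with an aggressive learning rate and then with conservative refinement.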

19 pages, 6052 KB  
Article
SGMT-IDS: A Dual-Branch Semi-Supervised Intrusion Detection Model Based on Graphs and Transformers
by Yifei Wu and Liang Wan
Electronics 2026, 15(2), 348; https://doi.org/10.3390/electronics15020348 - 13 Jan 2026
Abstract
Network intrusion behaviors exhibit high concealment and diversity, making intrusion detection methods based on single-behavior modeling unable to accurately characterize such activities. To overcome this limitation, we propose SGMT-IDS, a dual-branch semi-supervised intrusion detection model based on Graph Neural Networks (GNNs) and Transformers. By constructing two views of network attacks, namely structural and behavioral semantics, the model performs collaborative analysis of intrusion behaviors from both perspectives. The model adopts a dual-branch architecture. The SGT branch captures the structural embeddings of network intrusion behaviors, and the GML-Transformer branch extracts the semantic information of intrusion behaviors. In addition, we introduce a two-stage training strategy that optimizes the model through pseudo-labeling and contrastive learning, enabling accurate intrusion detection with only a small amount of labeled data. We conduct experiments on the NF-Bot-IoT-V2, NF-ToN-IoT-V2, and NF-CSE-CIC-IDS2018-V2 datasets. The experimental results demonstrate that SGMT-IDS achieves superior performance across multiple evaluation metrics.
(This article belongs to the Section Computer Science & Engineering)

19 pages, 1048 KB  
Article
Differentiated Information Mining: Semi-Supervised Graph Learning with Independent Patterns
by Kai Liu and Long Wang
Mathematics 2026, 14(2), 279; https://doi.org/10.3390/math14020279 - 12 Jan 2026
Abstract
Graph pseudo-labeling is an effective semi-supervised learning (SSL) approach to improve graph neural networks (GNNs) by leveraging unlabeled data. However, its success heavily depends on the reliability of pseudo-labels, which can often result in confirmation bias and training instability. To address these challenges, we propose a dual-layer consistency semi-supervised framework (DiPat), which integrates an internal differentiating pattern consistency mechanism and an external multimodal knowledge verification mechanism. In the internal layer, DiPat extracts multiple differentiating patterns from a single information source and enforces their consistency to improve the reliability of intrinsic decisions. During the supervised training phase, the model learns to extract and separate these patterns. In the semi-supervised learning phase, the model progressively selects highly consistent samples and ranks pseudo-labels based on the minimum margin principle, mitigating the overconfidence problem common in confidence-based or ensemble-based methods. In the external layer, DiPat also integrates large multimodal language models (MLLMs) as auxiliary information sources. These models provide latent textual knowledge to cross-validate internal decisions and introduce a responsibility scoring mechanism to filter out inconsistent or unreliable external judgments. Extensive experiments on multiple benchmark datasets show that DiPat demonstrates superior robustness and generalization in low-label settings, consistently outperforming strong baseline methods.
(This article belongs to the Section E1: Mathematics and Computer Science)
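The minimum-margin ranking this abstract mentions can be sketched in a few lines: keep the unlabeled samples whose top-2 class probabilities are furthest apart. The probabilities, keep ratio, and function name below are hypothetical, not DiPat's actual code.

```python
import numpy as np

# Hypothetical sketch of minimum-margin pseudo-label selection: rank unlabeled
# samples by the gap between their two largest predicted probabilities and keep
# only the most confident fraction.
def select_pseudo_labels(probs: np.ndarray, keep_ratio: float = 0.5):
    """probs: (n_samples, n_classes) softmax outputs for unlabeled samples."""
    top2 = np.sort(probs, axis=1)[:, -2:]       # two largest probabilities per row
    margin = top2[:, 1] - top2[:, 0]            # minimum-margin criterion
    order = np.argsort(-margin)                 # largest margin first
    keep = order[: int(len(probs) * keep_ratio)]
    return keep, probs[keep].argmax(axis=1)     # kept indices and their pseudo-labels

probs = np.array([[0.90, 0.05, 0.05],   # confident -> kept
                  [0.40, 0.35, 0.25],   # ambiguous -> dropped
                  [0.10, 0.80, 0.10],   # confident -> kept
                  [0.34, 0.33, 0.33]])  # near-uniform -> dropped
idx, labels = select_pseudo_labels(probs, keep_ratio=0.5)
print(idx, labels)  # [0 2] [0 1]
```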

24 pages, 1916 KB  
Article
ServiceGraph-FM: A Graph-Based Model with Temporal Relational Diffusion for Root-Cause Analysis in Large-Scale Payment Service Systems
by Zhuoqi Zeng and Mengjie Zhou
Mathematics 2026, 14(2), 236; https://doi.org/10.3390/math14020236 - 8 Jan 2026
Abstract
Root-cause analysis (RCA) in large-scale microservice-based payment systems is challenging due to complex failure propagation along service dependencies, limited availability of labeled incident data, and heterogeneous service topologies across deployments. We propose ServiceGraph-FM, a pretrained graph-based model for RCA, where “foundation” denotes a self-supervised graph encoder pretrained on large-scale production cluster traces and then adapted to downstream diagnosis. ServiceGraph-FM introduces three components: (1) masked graph autoencoding pretraining to learn transferable service-dependency embeddings for cross-topology generalization; (2) a temporal relational diffusion module that models anomaly propagation as graph diffusion on dynamic service graphs (i.e., Laplacian-governed information flow with learnable edge propagation strengths); and (3) a causal attention mechanism that leverages multi-hop path signals to better separate likely causes from correlated downstream effects. Experiments on the Alibaba Cluster Trace and synthetic PayPal-style topologies show that ServiceGraph-FM outperforms state-of-the-art baselines, improving Top-1 accuracy by 23.7% and Top-3 accuracy by 18.4% on average, and reducing mean time to detection by 31.2%. In zero-shot deployment on unseen architectures, the pretrained model retains 78.3% of its fully fine-tuned performance, indicating strong transferability for practical incident management.
(This article belongs to the Section E1: Mathematics and Computer Science)
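The "Laplacian-governed information flow" this abstract refers to can be illustrated on a toy graph: an anomaly score injected at one service diffuses along dependency edges. The 3-service chain, step size, and iteration count are invented for the sketch; the paper's module additionally learns edge propagation strengths.

```python
import numpy as np

# Toy illustration of anomaly propagation as Laplacian graph diffusion.
# The 3-service chain (gateway -> payments -> database) is made up.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # undirected service-dependency chain
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian

x = np.array([1.0, 0.0, 0.0])            # anomaly score injected at the gateway
tau = 0.1                                # diffusion step size
for _ in range(50):
    x = x - tau * L @ x                  # Euler steps of dx/dt = -L x

print(x)  # score spreads toward uniform; total anomaly mass is conserved
```

Because the Laplacian's rows sum to zero, total anomaly mass is conserved while the distribution relaxes toward uniform, which is what makes the residual pattern informative about the injection point.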

23 pages, 10516 KB  
Article
SSGTN: Spectral–Spatial Graph Transformer Network for Hyperspectral Image Classification
by Haotian Shi, Zihang Luo, Yiyang Ma, Guanquan Zhu and Xin Dai
Remote Sens. 2026, 18(2), 199; https://doi.org/10.3390/rs18020199 - 7 Jan 2026
Abstract
Hyperspectral image (HSI) classification is fundamental to a wide range of remote sensing applications, such as precision agriculture, environmental monitoring, and urban planning, because HSIs provide rich spectral signatures that enable the discrimination of subtle material differences. Deep learning approaches, including Convolutional Neural Networks (CNNs), Graph Convolutional Networks (GCNs), and Transformers, have achieved strong performance in learning spatial–spectral representations. However, these models often face difficulties in jointly modeling long-range dependencies, fine-grained local structures, and non-Euclidean spatial relationships, particularly when labeled training data are scarce. This paper proposes a Spectral–Spatial Graph Transformer Network (SSGTN), a dual-branch architecture that integrates superpixel-based graph modeling with Transformer-based global reasoning. SSGTN consists of four key components, namely (1) an LDA-SLIC superpixel graph construction module that preserves discriminative spectral–spatial structures while reducing computational complexity, (2) a lightweight spectral denoising module based on 1×1 convolutions and batch normalization to suppress redundant and noisy bands, (3) a Spectral–Spatial Shift Module (SSSM) that enables efficient multi-scale feature fusion through channel-wise and spatial-wise shift operations, and (4) a dual-branch GCN-Transformer block that jointly models local graph topology and global spectral–spatial dependencies. Extensive experiments on three public HSI datasets (Indian Pines, WHU-Hi-LongKou, and Houston2018) under limited supervision (1% training samples) demonstrate that SSGTN consistently outperforms state-of-the-art CNN-, Transformer-, Mamba-, and GCN-based methods in overall accuracy, average accuracy, and the κ coefficient. The proposed framework provides an effective baseline for HSI classification under limited supervision and highlights the benefits of integrating graph-based structural priors with global contextual modeling.

41 pages, 25791 KB  
Article
TGDHTL: Hyperspectral Image Classification via Transformer–Graph Convolutional Network–Diffusion with Hybrid Domain Adaptation
by Zarrin Mahdavipour, Nashwan Alromema, Abdolraheem Khader, Ghulam Farooque, Ali Ahmed and Mohamed A. Damos
Remote Sens. 2026, 18(2), 189; https://doi.org/10.3390/rs18020189 - 6 Jan 2026
Abstract
Hyperspectral image (HSI) classification is pivotal for remote sensing applications, including environmental monitoring, precision agriculture, and urban land-use analysis. However, its accuracy is often limited by scarce labeled data, class imbalance, and domain discrepancies between standard RGB and HSI imagery. Although recent deep learning approaches, such as 3D convolutional neural networks (3D-CNNs), transformers, and generative adversarial networks (GANs), show promise, they struggle with spectral fidelity, computational efficiency, and cross-domain adaptation in label-scarce scenarios. To address these challenges, we propose the Transformer–Graph Convolutional Network–Diffusion with Hybrid Domain Adaptation (TGDHTL) framework. This framework integrates domain-adaptive alignment of RGB and HSI data, efficient synthetic data generation, and multi-scale spectral–spatial modeling. Specifically, a lightweight transformer, guided by Maximum Mean Discrepancy (MMD) loss, aligns feature distributions across domains. A class-conditional diffusion model generates high-quality samples for underrepresented classes in only 15 inference steps, reducing labeled data needs by approximately 25% and computational costs by up to 80% compared to traditional 1000-step diffusion models. Additionally, a Multi-Scale Stripe Attention (MSSA) mechanism, combined with a Graph Convolutional Network (GCN), enhances pixel-level spatial coherence. Evaluated on six benchmark datasets including HJ-1A and WHU-OHS, TGDHTL consistently achieves high overall accuracy (e.g., 97.89% on University of Pavia) with just 11.9 GFLOPs, surpassing state-of-the-art methods. This framework provides a scalable, data-efficient solution for HSI classification under domain shifts and resource constraints.
(This article belongs to the Section Remote Sensing Image Processing)
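The MMD loss that guides the transformer in this abstract has a compact closed form. A small sketch with an RBF kernel shows the behavior it rewards: near-zero for samples from the same distribution, large for shifted ones. The kernel choice, bandwidth, and random "features" are our assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative biased estimate of squared Maximum Mean Discrepancy (MMD)
# between two feature samples, using an RBF kernel with a fixed bandwidth.
def rbf_mmd2(X, Y, gamma=0.1):
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.standard_normal((100, 8))              # e.g. RGB-branch features
tgt_same = rng.standard_normal((100, 8))         # same distribution: small MMD
tgt_shift = rng.standard_normal((100, 8)) + 3.0  # shifted distribution: large MMD
print(rbf_mmd2(src, tgt_same), rbf_mmd2(src, tgt_shift))
```

Minimizing this quantity over the transformer's parameters pulls the two feature distributions together, which is the alignment effect described above.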

21 pages, 2747 KB  
Article
OPICE: Ontology-Guided Pseudo-Label Generation and Inter-Modal Consistency Enhancement for Self-Supervised Multi-Modal Entity Alignment
by Yingdi Wang, Ziyu Guo, Yongheng Mu, Xuewei Li, Lixu Shao, Guangxu Mei and Feng Li
Electronics 2026, 15(2), 254; https://doi.org/10.3390/electronics15020254 - 6 Jan 2026
Abstract
Multi-modal entity alignment (MMEA) identifies identical real-world entities across two multi-modal knowledge graphs. Most existing methods heavily rely on costly manually labeled seed alignments; in response, self-supervised MMEA has emerged to reduce this dependency. However, current self-supervised approaches suffer from two key issues: (1) low-quality pseudo-labels (sparse and noisy) weakening self-supervised signals; and (2) inter-modal semantic inconsistencies (structure, text, vision) hindering unified entity representations. To resolve these issues, we propose OPICE, an Ontology-guided Pseudo-label Generation and Inter-modal Consistency Enhancement framework for self-supervised MMEA. It adopts a robust pseudo-label generation strategy to produce more initial alignments with less noise, and it uses an inter-modal consistency enhancement module to narrow inter-modal semantic gaps for unified representations. Experiments on FB–DB15K and FB–YAGO15K show that OPICE achieves state-of-the-art performance, improving Hit@1 by 6.8% on average over the strongest self-supervised baseline and being competitive with most supervised baselines under standard reported settings.

21 pages, 7802 KB  
Article
A Structure-Based Deep Learning Framework for Correcting Marine Natural Products’ Misannotations Attributed to Host–Microbe Symbiosis
by Xiaohe Tian, Chuanyu Lyu, Yiran Zhou, Liangren Zhang, Aili Fan and Zhenming Liu
Mar. Drugs 2026, 24(1), 20; https://doi.org/10.3390/md24010020 - 1 Jan 2026
Abstract
Marine natural products (MNPs) are a diverse group of bioactive compounds with varied chemical structures, but their biological origins are often misannotated due to complex host–microbe symbiosis. Propagated through public databases, such errors hinder biosynthetic studies and AI-driven drug discovery. Here, we develop a structure-based workflow for origin classification and misannotation correction in marine datasets. Using CMNPD and NPAtlas compounds, we integrate a two-step cleaning strategy that detects label inconsistencies and filters structural outliers with a microbial-pretrained graph neural network. The optimized model achieves a balanced accuracy of 85.56% and identifies 3996 compounds whose predicted microbial origins contradict their Animalia labels. These putative symbiotic metabolites cluster within known high-risk taxa, and interpretability analysis reveals biologically coherent structural patterns. This framework provides a scalable quality-control approach for natural product databases and supports more accurate biosynthetic gene cluster (BGC) tracing, host selection, and AI-driven marine natural product discovery.
(This article belongs to the Special Issue Chemoinformatics for Marine Drug Discovery)

23 pages, 14883 KB  
Article
A Structure-Invariant Transformer for Cross-Regional Enterprise Delisting Risk Identification
by Kang Li and Xinyang Li
Sustainability 2026, 18(1), 397; https://doi.org/10.3390/su18010397 - 31 Dec 2025
Abstract
Cross-regional enterprise financial distress can undermine long-term corporate viability, weaken regional industrial resilience, and amplify systemic risk, making robust early-warning tools essential for sustainable financial governance. This study investigates the problem of cross-regional enterprise delisting-related distress identification under heterogeneous economic structures and highly imbalanced risk samples. We propose a cross-domain learning framework that aims to deliver stable, interpretable, and transferable risk signals across regions without requiring access to labeled data from the target domain. Using a multi-source empirical dataset covering Beijing, Shanghai, Jiangsu, and Zhejiang, we conduct leave-one-domain-out evaluations that simulate real-world regulatory deployment. The results demonstrate consistent improvements over representative sequential and graph-based baselines, indicating stronger cross-regional generalization and more reliable identification of borderline and noisy cases. By linking cross-domain stability with uncertainty-aware risk screening, this work contributes a practical and economically meaningful solution for sustainable corporate oversight, offering actionable value for policy-oriented financial supervision and regional economic sustainability.

25 pages, 10798 KB  
Article
BERTSC: A Multi-Modal Fusion Framework for Stablecoin Phishing Detection Based on Graph Convolutional Networks and Soft Prompt Encoding
by Weixin Xie, Qihao Chen, Kexin Zhu, Chen Feng and Zhide Chen
Electronics 2026, 15(1), 179; https://doi.org/10.3390/electronics15010179 - 30 Dec 2025
Abstract
As stablecoins become increasingly prevalent in financial crimes, their usage for illicit activities has reached a scale of USD 51.3 billion. Detecting phishing activities within stablecoin transactions has emerged as a critical challenge in blockchain security. Currently, existing detection methods predominantly target mainstream cryptocurrencies like Ethereum and lack specialized models tailored to the unique transaction patterns of stablecoin networks. This paper introduces a deep learning framework, BERTSC, based on multi-modal fusion. The model integrates three core modules, namely graph convolutional networks (GCNs), BERT semantic encoders, and soft prompt encoders, to identify malicious accounts. The GCN constructs directed multi-graph representations of account interactions, incorporating multi-dimensional edge features; the BERT encoder transforms discrete transaction attributes into semantically rich continuous vector representations; the soft prompt encoder maps account interaction features into learnable prompt vectors. An innovative three-way gated dynamic fusion mechanism optimally combines the information from these sources. The fused features are then classified to predict phishing account labels, facilitating the detection of phishing scams in stablecoin transaction datasets. Experimental results on large-scale stablecoin datasets demonstrate that BERTSC outperforms baseline models, achieving improvements of 4.96%, 3.60%, and 4.23% in Precision, Recall, and F1-score, respectively. Ablation studies validate the effectiveness of each module and confirm the necessity and superiority of the three-way gating fusion mechanism. This research offers a novel technical approach for phishing detection within blockchain stablecoin ecosystems.

33 pages, 3147 KB  
Review
Perception–Production of Second-Language Mandarin Tones Based on Interpretable Computational Methods: A Review
by Yujiao Huang, Zhaohong Xu, Xianming Bei and Huakun Huang
Mathematics 2026, 14(1), 145; https://doi.org/10.3390/math14010145 - 30 Dec 2025
Abstract
We survey recent advances in second-language (L2) Mandarin lexical tone research and show how an interpretable computational approach can deliver parameter-aligned feedback across perception–production (P ↔ P). We synthesize four strands: (A) conventional evaluations and tasks (identification, same–different, imitation/read-aloud) that reveal robust tone-pair asymmetries and early P ↔ P decoupling; (B) physiological and behavioral instrumentation (e.g., EEG, eye-tracking) that clarifies cue weighting and time course; (C) audio-only speech analysis, from classic F0 tracking and MFCC–prosody fusion to CNN/RNN/CTC and self-supervised pipelines; and (D) interpretable learning, including attention and relational models (e.g., graph neural networks, GNNs) opened up with explainable AI (XAI). Across strands, evidence converges on tones as time-evolving F0 trajectories, so movement, turning-point timing, and local F0 range are more diagnostic than height alone, and the contrast between Tone 2 (rising) and Tone 3 (dipping/low) remains the persistent difficulty; learners with tonal vs. non-tonal language backgrounds weight these cues differently. Guided by this synthesis, we outline a tool-oriented framework that pairs perception and production on the same items, jointly predicts tone labels and parameter targets, and uses XAI to generate local attributions and counterfactual edits, making feedback classroom-ready.
(This article belongs to the Section E1: Mathematics and Computer Science)

24 pages, 29209 KB  
Article
WSI-GT: Pseudo-Label Guided Graph Transformer for Whole-Slide Histology
by Zhongao Sun, Alexander Khvostikov, Andrey Krylov, Ilya Mikhailov and Pavel Malkov
Mach. Learn. Knowl. Extr. 2026, 8(1), 8; https://doi.org/10.3390/make8010008 - 29 Dec 2025
Abstract
Whole-slide histology images (WSIs) can exceed 100 k × 100 k pixels, making direct pixel-level segmentation infeasible and requiring patch-level classification as a practical alternative for downstream WSI segmentation. However, most approaches either treat patches independently, ignoring spatial and biological context, or rely on deep graph models prone to oversmoothing and loss of local tissue detail. We present WSI-GT (Pseudo-Label Guided Graph Transformer), a simple yet effective architecture that addresses these challenges and enables accurate WSI-level tissue segmentation. WSI-GT combines a lightweight local graph convolution block for neighborhood feature aggregation with a pseudo-label guided attention mechanism that preserves intra-class variability and mitigates oversmoothing. To cope with sparse annotations, we introduce an area-weighted sampling strategy that balances class representation while maintaining tissue topology. WSI-GT achieves a Macro F1 of 0.95 on PATH-DT-MSU WSS2v2, improving by up to 3 percentage points over patch-based CNNs and by about 2 points over strong graph baselines. It further generalizes well to the Placenta benchmark and standard graph node classification datasets, highlighting both clinical relevance and broader applicability. These results position WSI-GT as a practical and scalable solution for graph-based learning on extremely large images and for generating clinically meaningful WSI segmentations.
(This article belongs to the Special Issue Deep Learning in Image Analysis and Pattern Recognition, 2nd Edition)
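One way to picture the class-balancing side of an area-weighted sampling strategy like the one mentioned above: draw patches with probability inversely proportional to their class's total area. The 90/10 imbalance and labels below are invented, and real WSI code would also preserve tissue topology, which this sketch omits.

```python
import numpy as np

# Hypothetical sketch of inverse-area-weighted patch sampling: rare tissue
# classes get proportionally higher sampling probability.
rng = np.random.default_rng(0)
patch_class = np.array([0] * 90 + [1] * 10)   # dominant vs. rare tissue type

area = np.bincount(patch_class)               # total patch count (area) per class
weight = 1.0 / area[patch_class]              # inverse-area weight per patch
weight /= weight.sum()                        # normalize to a distribution

sample = rng.choice(patch_class.size, size=1000, p=weight, replace=True)
counts = np.bincount(patch_class[sample])
print(counts)  # roughly balanced despite the 90/10 class imbalance
```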

18 pages, 2688 KB  
Article
Rolling Bearing Fault Diagnosis Based on Multi-Source Domain Joint Structure Preservation Transfer with Autoencoder
by Qinglei Jiang, Tielin Shi, Xiuqun Hou, Biqi Miao, Zhaoguang Zhang, Yukun Jin, Zhiwen Wang and Hongdi Zhou
Sensors 2026, 26(1), 222; https://doi.org/10.3390/s26010222 - 29 Dec 2025
Abstract
Domain adaptation methods have been extensively studied for rolling bearing fault diagnosis under various conditions. However, some existing methods only consider the one-way embedding of the original space into a low-dimensional subspace without backward validation, which leads to inaccurate embeddings of data and poor diagnostic performance. In this paper, a rolling bearing fault diagnosis method based on multi-source domain joint structure preservation transfer with autoencoder (MJSPTA) is proposed. Firstly, similar source domains are screened by inter-domain metrics; then, the high-dimensional data of the source and target domains are projected into a shared subspace with separate projection matrices during the encoding stage. Finally, the decoding stage reconstructs the low-dimensional data back to the original high-dimensional space to minimize the reconstruction error. In the shared subspace, the difference between source and target domains is reduced through distribution matching and sample weighting. Meanwhile, graph embedding theory is introduced to maximally preserve the local manifold structure of the samples during domain adaptation. Next, label propagation is used to obtain the predicted labels, and a voting mechanism ultimately determines the fault type. The effectiveness and robustness of the method are verified through a series of diagnostic tests.
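The label-propagation step mentioned at the end of this abstract has a classic iterative form: multiply the label matrix by a row-stochastic similarity matrix while clamping the labeled rows. The 4-sample graph and two fault classes below are invented for illustration, not the paper's bearing data.

```python
import numpy as np

# Toy label propagation: iterate F <- S F and clamp the known labels each step.
W = np.array([[0, 1, 1, 0],          # symmetric similarity graph over 4 samples
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = W / W.sum(axis=1, keepdims=True) # row-stochastic propagation matrix

Y = np.zeros((4, 2))                 # one-hot label matrix for two fault classes
Y[0] = [1, 0]                        # sample 0 has a known class-0 label
Y[3] = [0, 1]                        # sample 3 has a known class-1 label
labeled = [0, 3]

F = Y.copy()
for _ in range(100):
    F = S @ F                        # propagate label mass along the graph
    F[labeled] = Y[labeled]          # clamp the known labels

pred = F.argmax(axis=1)
print(pred)  # [0 0 0 1]: the unlabeled samples inherit the nearer class
```

In a multi-source setting such as MJSPTA's, one such prediction per source domain would then feed the voting mechanism.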

40 pages, 5707 KB  
Review
Graph Representation Learning for Battery Energy Systems in Few-Shot Scenarios: Methods, Challenges and Outlook
by Xinyue Zhang and Shunli Wang
Batteries 2026, 12(1), 11; https://doi.org/10.3390/batteries12010011 - 26 Dec 2025
Abstract
Graph representation learning (GRL) has emerged as a unifying paradigm for modeling the relational and heterogeneous nature of battery energy storage systems (BESS), yet a systematic synthesis focused on data-scarce (few-shot) battery scenarios is still lacking. Graph representation learning offers a natural way to describe the structure and interaction of battery cells, modules and packs. At the same time, battery applications often suffer from very limited labeled data, especially for new chemistries, extreme operating conditions and second-life use. This review analyzes how graph representation learning can be combined with few-shot learning to support key battery management tasks under such data-scarce conditions. We first introduce the basic ideas of graph representation learning, including models based on neighborhood aggregation, contrastive learning, autoencoders and transfer learning, and discuss typical data, model and algorithm challenges in few-shot scenarios. We then connect these methods to battery state estimation problems, covering state of charge, state of health, remaining useful life and capacity. Particular attention is given to approaches that use graph neural models, meta-learning, semi-supervised and self-supervised learning, Bayesian deep networks, and federated learning to extract transferable features from early-cycle data, partial charge–discharge curves and large unlabeled field datasets. Reported studies show that, with only a small fraction of labeled samples or a few initial cycles, these methods can achieve state and life prediction errors that are comparable to or better than conventional models trained on full datasets, while also improving robustness and, in some cases, providing uncertainty estimates. Based on this evidence, we summarize the main technical routes for few-shot battery scenarios and identify open problems in data preparation, cross-domain generalization, uncertainty quantification and deployment on real battery management systems. The review concludes with a research outlook, highlighting the need for pack-level graph models, physics-guided and probabilistic learning, and unified benchmarks to advance reliable graph-based few-shot methods for next-generation intelligent battery management.
(This article belongs to the Section Battery Modelling, Simulation, Management and Application)

20 pages, 1101 KB  
Article
scANMF: Prior Knowledge and Graph-Regularized NMF for Accurate Cell Type Annotation in scRNA-seq
by Weilai Chi, Ying Zheng, Huaying Fang and Shi Shi
Int. J. Mol. Sci. 2026, 27(1), 125; https://doi.org/10.3390/ijms27010125 - 22 Dec 2025
Abstract
Single-cell RNA sequencing (scRNA-seq) provides a high-resolution view of cellular heterogeneity, yet accurate cell-type annotation remains challenging due to data sparsity, technical noise, and variability across tissues, platforms, and species. Many existing annotation tools depend on a single form of prior knowledge, such as marker genes or reference profiles, which can limit performance when these resources are incomplete or inconsistent. Here, we present scANMF, a prior- and graph-regularized non-negative matrix factorization framework that integrates marker-gene information, partial label supervision, and the local manifold structure into a unified annotation model. scANMF factorizes the expression matrix into interpretable gene–factor and cell–factor representations, enabling accurate annotation in settings with limited or noisy prior information. Across multiple real scRNA-seq collections, scANMF achieved high annotation accuracy in within-dataset, cross-platform, and cross-species evaluations. The method remained stable under varying levels of label sparsity and marker-gene noise and showed broad robustness to hyperparameter choices. Ablation analyses indicated that marker priors, label supervision, and graph regularization contribute complementary information to the overall performance. These results support scANMF as a practical and robust framework for single-cell annotation, particularly in applications where high-quality prior knowledge is restricted.
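The graph-regularized factorization at the core of this abstract can be sketched with the standard graph-regularized NMF multiplicative updates. The random data, cell graph, and λ below are illustrative, and the sketch deliberately omits scANMF's marker-prior and label-supervision terms.

```python
import numpy as np

# Sketch of graph-regularized NMF: minimize ||X - U V^T||^2 + lam * tr(U^T L U)
# with multiplicative updates, so U stays smooth over a cell-cell graph.
rng = np.random.default_rng(1)
n, m, k, lam = 30, 20, 4, 0.1       # cells, genes, factors, graph weight
X = rng.random((n, m))              # expression matrix (cells x genes)

A = (rng.random((n, n)) < 0.1).astype(float)   # toy cell-cell similarity graph
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
D = np.diag(A.sum(axis=1))
L = D - A                           # graph Laplacian over cells

U = rng.random((n, k))              # cell-factor representation
V = rng.random((m, k))              # gene-factor representation
eps = 1e-9                          # avoid division by zero

def objective():
    # Reconstruction error plus smoothness of U over the cell graph.
    return np.linalg.norm(X - U @ V.T) ** 2 + lam * np.trace(U.T @ L @ U)

before = objective()
for _ in range(100):
    U *= (X @ V + lam * A @ U) / (U @ (V.T @ V) + lam * D @ U + eps)
    V *= (X.T @ U) / (V @ (U.T @ U) + eps)
after = objective()
print(before, "->", after)          # the objective decreases across updates
```

The multiplicative form keeps both factors non-negative by construction, which is what makes the resulting gene–factor and cell–factor matrices interpretable.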
