Entropy-Aware Graph Neural Networks: Theory, Methods, and Applications

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: 30 June 2026

Special Issue Editors

School of Computing, National University of Singapore, Singapore 117417, Singapore
Interests: graph neural networks; structural entropy; out-of-distribution detection; anomaly detection; financial time series; social networks
School of Cyberspace Science and Technology, Beijing Jiaotong University, Beijing 100044, China
Interests: hypergraphs; graph neural networks; network security anomaly detection; attack tracing

Special Issue Information

Dear Colleagues,

Graph neural networks (GNNs) have emerged as a fundamental framework for learning from graph-structured data, demonstrating remarkable success across domains such as social network analysis, recommender systems, bioinformatics, and financial modeling. Despite their empirical success, the theoretical understanding of GNNs, particularly from the perspective of information theory, remains limited. As GNN architectures become increasingly deep, dynamic, and heterogeneous, issues related to information loss, over-smoothing, and representation degradation call for a principled, entropy-based perspective.

Entropy and information-theoretic principles provide a natural lens for analyzing the expressivity, generalization, and robustness of GNNs. From the viewpoint of Fisher information, mutual information, and information bottlenecks, entropy-aware frameworks can help explain and improve the propagation, compression, and preservation of structural information in networks. Moreover, integrating information geometry and Riemannian representations with GNNs offers new theoretical and algorithmic insights into stability, optimization, and representation learning.
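
To make the stakes concrete: repeated neighborhood averaging, the basic GNN propagation step, is itself an entropy-raising operation on node features. The following minimal NumPy sketch (a toy illustration, not part of this call) propagates one-hot features on a 4-node path graph with self-loops and tracks the Shannon entropy of each node's resulting class distribution; as depth grows, every row approaches the same stationary distribution and the entropy saturates, which is exactly the over-smoothing and representation degradation described above.

    import numpy as np

    # Toy illustration (not from the call): mean aggregation with self-loops
    # on a 4-node path graph. Rows of P^k are probability distributions over
    # the initial one-hot classes; their Shannon entropy rises toward the
    # stationary value, i.e., node representations homogenize (over-smoothing).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    A_hat = A + np.eye(4)                         # add self-loops (GCN convention)
    P = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-stochastic D^-1 (A + I)

    def row_entropy(M, eps=1e-12):
        M = np.clip(M, eps, 1.0)
        return -(M * np.log(M)).sum(axis=1)

    X = np.eye(4)                                 # one-hot node features
    for k in [0, 1, 5, 20]:
        H = row_entropy(np.linalg.matrix_power(P, k) @ X)
        print(f"k={k:2d}  mean row entropy = {H.mean():.3f} nats")

Entropy-aware methods of the kind solicited here aim to quantify, detect, or counteract precisely this sort of information collapse.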

This Special Issue aims to advance the understanding of entropy-aware graph neural networks by bridging information theory and graph representation learning. We invite original research and review articles that (1) provide information-theoretic analysis of GNN mechanisms and architectures, (2) propose new entropy- or information-driven GNN methods, or (3) explore applications of entropy-aware graph learning in scientific, industrial, and social domains.

Dr. Junran Wu
Prof. Dr. Friedhelm Schwenker
Dr. Nan Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph neural networks
  • information theory in graph learning
  • entropy and information bottleneck
  • Fisher information and Riemannian geometry
  • mutual information estimation on graphs
  • information-theoretic regularization
  • over-smoothing and entropy loss
  • representation learning and graph compression
  • entropy-based optimization in deep GNNs
  • applications in finance, biology, social networks, etc.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

18 pages, 3126 KB  
Article
SS-AdaMoE: Spatio-Spectral Adaptive Mixture of Experts with Global Structural Priors for Graph Node Classification
by Xilin Kang, Tianyue Yu, Letao Wang, Yutong Guo and Fengjun Zhang
Entropy 2026, 28(3), 355; https://doi.org/10.3390/e28030355 - 21 Mar 2026
Abstract
Graph Neural Networks (GNNs) have emerged as the standard for learning representations from graph-structured data. While traditional architectures relying on message-passing mechanisms excel in homophilic settings, they essentially function as fixed low-pass filters. However, this smoothing operation limits their ability to generalize to heterophilic graphs, where connected nodes often exhibit dissimilar labels and high-frequency signals are crucial for discrimination. Furthermore, existing Mixture-of-Experts (MoE) methods for graphs often suffer from local-view routing, failing to capture global structural context during expert selection. To address these challenges, this paper proposes SS-AdaMoE, a novel Spatio-Spectral Adaptive Mixture of Experts framework designed for robust node classification across diverse graph patterns. Specifically, a Dual-Domain Expert System is constructed, integrating heterogeneous spatial aggregators with learnable spectral filters based on Bernstein polynomials. This allows the model to adaptively capture arbitrary frequency responses, including high-pass and band-pass signals, which are overlooked by standard GNNs. To resolve the locality bias, a Hierarchical Global-Prior Gating Network augmented by a Linear Graph Transformer is introduced, ensuring that expert selection is guided by both local node features and global topological awareness. Extensive experiments are conducted on five benchmark datasets spanning both homophilic and heterophilic networks. The results demonstrate that SS-AdaMoE consistently outperforms baselines, achieving accuracy improvements of up to 2.65% on Chameleon and 1.41% on Roman-empire over the strongest MoE baseline, while surpassing traditional GCN architectures by margins exceeding 28% on heterophilic datasets such as Texas. These findings validate that the synergy of learnable spectral priors and global gating effectively bridges the gap between spatial aggregation and spectral filtering.
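
As context for the abstract's "learnable spectral filters based on Bernstein polynomials": such filters are typically built from the Bernstein basis over the normalized-Laplacian spectrum [0, 2], as in BernNet-style designs. The sketch below is an illustration under that assumption; the coefficient vectors are hand-picked rather than learned, and the paper's exact parameterization may differ. It shows how one coefficient vector yields a low-pass response while its reversal yields a high-pass response.

    import numpy as np
    from scipy.special import comb

    # Hedged sketch of a Bernstein-basis spectral filter (assumption: the
    # paper follows the standard construction; `theta` here is hand-picked,
    # not learned as in the actual model).
    def bernstein_response(theta, lam, K):
        # g(lambda) = sum_k theta_k * C(K,k) * (lam/2)^k * (1 - lam/2)^(K-k),
        # with lambda in [0, 2] for the symmetric normalized Laplacian.
        t = lam / 2.0
        return sum(theta[k] * comb(K, k) * t**k * (1 - t)**(K - k)
                   for k in range(K + 1))

    K = 4
    low_pass  = np.array([1.0, 0.8, 0.4, 0.1, 0.0])  # passes small eigenvalues
    high_pass = low_pass[::-1]                       # passes large eigenvalues
    lam = np.linspace(0, 2, 5)
    print("low-pass :", np.round(bernstein_response(low_pass,  lam, K), 3))
    print("high-pass:", np.round(bernstein_response(high_pass, lam, K), 3))

Because the basis polynomials are non-negative on [0, 2], non-negative coefficients guarantee a non-negative and therefore interpretable frequency response, one reason this basis is attractive for adaptive spectral filtering.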

17 pages, 639 KB  
Article
Characterizing the Evolution of Inter-Actor Networks in the South China Sea Arbitration via Entropy-Driven Graph Representation Learning from Massive Media Event Data
by Menglan Ma, Hong Yu and Peng Fang
Entropy 2026, 28(3), 347; https://doi.org/10.3390/e28030347 - 19 Mar 2026
Abstract
On 12 July 2016, the ruling on the South China Sea Arbitration was announced and rapidly drew worldwide attention, turning the event into a major international hotspot. Quantifying the dynamics of such hotspot events and understanding the evolution of media-based inter-actor networks during major shocks are of substantial research interest. Viewing these interactions as dynamic networks, we analyze the time-varying actor interaction structure surrounding the arbitration using the Global Database of Events, Language, and Tone (GDELT), a large-scale media-based event database with global coverage since 1979. We extract nearly 30,000 events related to the arbitration from 5 July to 25 July 2016, constructing daily cooperation and conflict networks to quantify structural changes via network size and degree-entropy dynamics. To further reveal actor-level structural roles, we learn node embeddings on each daily network via an entropy-driven graph representation learning scheme and perform embedding-based clustering with automatically selected cluster numbers, visualized via t-SNE. The results show that key dates in the event window are associated with pronounced structural shifts in the networks, including changes in participation breadth, degree-distribution heterogeneity, and clearer differentiation and reconfiguration of actor roles, with distinct patterns between cooperation and conflict networks. These findings demonstrate the potential of massive media event data for characterizing structural responses and actor-role evolution in event-driven inter-actor networks.
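
As a concrete reading of the "degree-entropy dynamics" mentioned above, the sketch below computes the Shannon entropy of a network's degree distribution; the actor codes and edge lists are invented toy data, and the paper's exact estimator is an assumption. A day on which all actors interact symmetrically (uniform degrees) yields zero entropy, while a hub-dominated day yields a higher value, so day-over-day changes in this quantity track degree-distribution heterogeneity.

    import numpy as np
    from collections import Counter

    # Minimal sketch of a degree-entropy measure for one daily network.
    # Toy edge lists below are invented; the paper's estimator may differ.
    def degree_entropy(edges):
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        # Frequency of each degree value across nodes -> P(degree = k).
        counts = np.array(list(Counter(deg.values()).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())  # Shannon entropy in nats

    day1 = [("CHN", "PHL"), ("CHN", "USA"), ("PHL", "USA")]          # triangle
    day2 = [("CHN", "PHL"), ("CHN", "USA"),
            ("CHN", "JPN"), ("CHN", "VNM")]                          # star
    print(f"day1 H = {degree_entropy(day1):.3f}")  # uniform degrees -> 0.000
    print(f"day2 H = {degree_entropy(day2):.3f}")  # hub vs leaves -> higher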