Symmetry and Its Applications in Deep Learning and Artificial Intelligence Methods

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: 31 December 2026

Special Issue Editor

Dr. Sibo Qiao
School of Software, Tiangong University, Tianjin, China
Interests: deep learning; image analysis; edge intelligence

Special Issue Information

Dear Colleagues,

The study of symmetry plays a crucial role in advancing both theoretical and applied research in deep learning and artificial intelligence (AI). Symmetry concepts help enhance the efficiency and interpretability of AI models, enabling them to generalize better, reduce computational costs, and understand complex patterns more effectively. This Special Issue aims to explore innovative methods and applications where symmetry principles are integrated into deep learning and AI frameworks. We invite contributions that focus on the use of symmetry in various domains such as image recognition, natural language processing, reinforcement learning, and multi-agent systems. Papers should present novel algorithms, architectures, or case studies demonstrating how symmetry can improve model performance, robustness, or adaptability. We also encourage submissions discussing symmetry-based optimization techniques, symmetry-invariant architectures, and the role of symmetry in unsupervised and semi-supervised learning approaches. By bringing together diverse perspectives and cutting-edge research, this Special Issue will advance our understanding of how symmetry can be leveraged to create more powerful, scalable, and interpretable AI systems.

Dr. Sibo Qiao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • symmetry in deep learning
  • AI model optimization
  • symmetry-invariant architectures
  • symmetry and pattern recognition
  • reinforcement learning and symmetry
  • symmetry-based algorithms
  • neural network generalization
  • symmetry in multi-agent systems
  • deep learning efficiency
  • symmetry in unsupervised learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

16 pages, 3040 KB  
Article
Rank-Aware Conditional Synthesis: Feasible Quantum Generative Modeling on Matrix Product State Manifolds
by Dongkyu Lee, Won-Gyeong Lee, Hyunjun Hong and Ohbyung Kwon
Symmetry 2026, 18(4), 605; https://doi.org/10.3390/sym18040605 - 2 Apr 2026
Abstract
Matrix Product States (MPSs) have become an indispensable symmetry-based representation for simulating quantum systems on near-term hardware by constraining entanglement entropy through a fixed bond dimension χ. This study identifies a critical "rank explosion" phenomenon that destabilizes this low-rank manifold during conditional quantum diffusion processes. We empirically demonstrate that the introduction of conditional guidance (essential for semantic control) injects global correlations that drive the effective Schmidt rank to increase fourfold (from χ = 4 to 16), saturating the simulation limits and necessitating quantum circuits with approximately 1.8 × 10³ Controlled-NOT (CNOT) gates. Such circuit depths fundamentally exceed the operational coherence budgets of Noisy Intermediate-Scale Quantum (NISQ) devices. To mitigate this structural instability, we propose Rank-Aware Conditional Synthesis (RACS), a sampling framework that keeps the latent trajectory within a prescribed MPS manifold through step-wise manifold projection and time-shift error correction. Experimental results on real-world semantic data reveal that RACS reduces reconstruction error (Mean Squared Error, MSE) by 30.8% and enhances latent trajectory smoothness by 36.8% compared to conventional post hoc truncation. At a fixed hardware-efficient rank of χ = 4, RACS achieves a +4.8% fidelity gain and exhibits superior robustness against depolarizing noise. By resolving the tension between conditional expressivity and entanglement constraints, RACS provides a principled, hardware-aware methodology for high-fidelity quantum generative modeling. Full article
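The step-wise manifold projection the abstract describes rests on a standard primitive: truncating a state's Schmidt decomposition to a fixed bond dimension χ. The minimal sketch below illustrates that primitive for a single bipartition via an SVD; it is an illustration of the general technique only, not the RACS algorithm, and the function name, the χ value, and the random state are all assumptions for the example.

```python
import numpy as np

def truncate_to_rank(state, dims, chi):
    """Project a bipartite state vector onto the manifold of Schmidt rank
    <= chi by keeping the chi largest singular values, then renormalizing.
    This is the per-bond operation underlying fixed-chi MPS truncation."""
    m = state.reshape(dims)                        # bipartition: left x right
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    s_trunc = s[:chi]
    s_trunc = s_trunc / np.linalg.norm(s_trunc)    # restore unit norm
    return (u[:, :chi] * s_trunc) @ vh[:chi, :]

# Illustrative 6-qubit state, split 3 qubits / 3 qubits (8 x 8).
rng = np.random.default_rng(0)
psi = rng.normal(size=64)
psi /= np.linalg.norm(psi)
approx = truncate_to_rank(psi, (8, 8), chi=4)
print(np.linalg.matrix_rank(approx))  # effective Schmidt rank <= 4
```

A generic random state saturates the bipartition's full rank (8 here), so the projection is lossy; the paper's point is that conditional guidance pushes states far from the χ = 4 manifold, making where and how this truncation is applied along the diffusion trajectory matter.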

26 pages, 1071 KB  
Article
FC-SBAAT: A Few-Shot Image Classification Approach Based on Feature Collaboration and Sparse Bias-Aware Attention in Transformers
by Min Wang, Chengyu Yang, Lin Sha, Jiaqi Li and Shikai Tang
Symmetry 2026, 18(1), 95; https://doi.org/10.3390/sym18010095 - 5 Jan 2026
Abstract
Few-shot classification aims to generalize from very limited samples, providing an effective solution for data-scarce scenarios. From a symmetry viewpoint, an ideal few-shot classifier should be invariant to class permutations and treat support and query features in a balanced manner, preserving intra-class cohesion while enlarging inter-class separation in the embedding space. However, existing methods often violate this symmetry because prototypes are estimated from few noisy samples, which induces asymmetric representations and task-dependent biases under complex inter-class relations. To address this, we propose FC-SBAAT (Feature Collaboration and Sparse Bias-Aware Attention in Transformers), a framework that explicitly leverages symmetry in feature collaboration and prototype construction. First, we enhance symmetric interactions between support and query samples in both attention and contrastive subspaces and adaptively fuse these complementary representations via learned weights. Second, we refine prototypes by symmetrically aggregating intra-class features with learned importance weights, improving prototype quality while maintaining intra-class symmetry and increasing inter-class discrepancy. For matching, we introduce a Sparse Bias-Aware Attention Transformer that corrects asymmetric task bias through bias-aware attention with low computational overhead. Extensive experiments show that FC-SBAAT achieves 55.71% and 73.87% accuracy on 1-shot and 5-shot tasks on MiniImageNet and 70.37% and 83.86% on CUB, outperforming prior methods. Full article
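The prototype construction the abstract refines can be grounded in the standard prototypical-network baseline: each class prototype is a (possibly importance-weighted) mean of its support embeddings, and queries are matched to the nearest prototype. The sketch below shows that baseline only; FC-SBAAT's learned weights, feature collaboration, and sparse bias-aware attention are not reproduced here, and all names and the toy 2-D embeddings are assumptions for the example.

```python
import numpy as np

def weighted_prototypes(support, labels, weights=None):
    """Per-class prototypes as (optionally weighted) means of support
    embeddings. Uniform weights give the plain prototypical-network
    prototype; learned importance weights would be passed in instead."""
    classes = np.unique(labels)
    if weights is None:
        weights = np.ones(len(labels))
    protos = []
    for c in classes:
        idx = labels == c
        w = weights[idx] / weights[idx].sum()      # normalize per class
        protos.append((w[:, None] * support[idx]).sum(axis=0))
    return classes, np.stack(protos)

def classify(query, protos):
    """Nearest-prototype assignment by Euclidean distance."""
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot episode with 2-D embeddings.
support = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = np.array([0, 0, 1, 1])
classes, protos = weighted_prototypes(support, labels)
print(classify(np.array([[0.1, 0.1], [5.1, 4.9]]), protos))  # [0 1]
```

Note the class-permutation symmetry the abstract appeals to: relabeling the classes permutes the prototype rows but leaves every distance, and hence every assignment, unchanged.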
