The Future of LLM Architectures

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 August 2026

Special Issue Editors


Guest Editor
College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
Interests: large language models; sentiment analysis; quantum cognition; sarcasm detection; affective computing; natural language processing

Guest Editor
Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China
Interests: affective computing; time series mining

Guest Editor
School of Cyber Security, Tianjin University, Tianjin 300072, China
Interests: software testing; model checking; software verification

Special Issue Information

Dear Colleagues,

This Special Issue, entitled “The Future of LLM Architectures”, aims to gather cutting-edge research and visionary perspectives on the rapidly evolving landscape of large language models (LLMs). As LLMs continue to redefine the boundaries of artificial intelligence, this Special Issue will explore the full spectrum of theoretical, methodological, and applied advances shaping the next generation of intelligent systems.

Recent years have witnessed significant growth in the scale, capability, and impact of LLMs. Innovations such as multi-modal architectures, agent-based reasoning frameworks, System I and System II task modeling, and value alignment techniques are transforming how machines perceive, reason, and interact with the world. The integration of advanced fine-tuning strategies, reinforcement learning, and human feedback has turned LLMs from static knowledge repositories into dynamic decision-makers capable of complex inference, adaptation, and creativity.

However, new challenges have arisen: How can we ensure that LLMs exhibit robust reasoning, fairness, and transparency across diverse applications? What are the best practices for evaluating capabilities such as logical reasoning, ethical value alignment, and multi-modal understanding? How can agent-based and modular architectures unlock new levels of intelligence and collaboration? This Special Issue provides a platform for the latest breakthroughs, emerging paradigms, and innovative discussions in the field.

We welcome original research articles, surveys, and position papers on topics including, but not limited to, the following:

  • System I vs. System II task modeling in LLMs;
  • Scalable architectures for advanced reasoning and planning;
  • Fine-tuning and reinforcement learning with human or multi-agent feedback;
  • Multi-modal large models: vision, language, audio, and beyond;
  • Value alignment, ethical and cultural considerations in LLMs;
  • Benchmarking, evaluation, and emergent capabilities analysis;
  • Agent-based LLM frameworks and compositional intelligence;
  • Robustness, interpretability, and trustworthiness of next-generation LLMs;
  • New training methods for efficiency, safety, and continual learning;
  • Application case studies in science, engineering, business, and education.

We invite researchers to contribute and share insights that will shape the future trajectory of LLM architectures and their transformative impact on society.

Dr. Yazhou Zhang
Dr. Xiang Li
Dr. Yao Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Large Language Models (LLMs)
  • System I and System II reasoning
  • multi-modal LLMs
  • value alignment
  • reinforcement learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (1 paper)

Research

39 pages, 10297 KB  
Article
On Memorization and Generalization in Compact Transformers
by Aki Härmä, Ali Al-Saeedi, Anton Changalidis, Dumitru Verşebeniuc, Marcin Pietrasik and Anna Wilbik
Electronics 2026, 15(9), 1847; https://doi.org/10.3390/electronics15091847 - 27 Apr 2026
Abstract
Large language models (LLMs) seem to demonstrate human-like understanding and generalization of language content. These abilities arise from the models' capacity to memorize and generalize the training content. In this paper, we review the recent literature and theories on the mechanisms in self-attention neural networks. We also report three computational experiments showing that memorization capacity in compact transformers can be empirically linked to architectural parameters, that structured domain knowledge can be retained in small decoder-only models, and that in-context abstraction requires sufficient architectural depth. These findings suggest that current models are larger than necessary for many specific applications, especially in on-edge use cases. A better understanding of application requirements and architecture details can be expected to help in building new LLM architectures that can be efficiently implemented on dedicated on-edge circuits.
(This article belongs to the Special Issue The Future of LLM Architectures)
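To make the abstract's point about architectural parameters concrete, the following is a minimal, hypothetical sketch (not the paper's actual experimental code) of how a compact decoder-only transformer's parameter count scales with depth and width. The function name and the simplified counting formula are illustrative assumptions; biases, layer norms, and positional parameters are ignored.

```python
def decoder_params(vocab, d_model, n_layers, n_heads, d_ff=None, tied_embeddings=True):
    """Rough parameter count for a compact decoder-only transformer.

    Ignores biases, layer norms, and positional embeddings. Note that
    n_heads does not change the count: the head dimensions partition d_model.
    """
    if d_ff is None:
        d_ff = 4 * d_model              # common feed-forward expansion factor
    attn = 4 * d_model * d_model        # Q, K, V, and output projections
    ffn = 2 * d_model * d_ff            # the two feed-forward matrices
    per_layer = attn + ffn
    embed = vocab * d_model             # token embedding table
    head = 0 if tied_embeddings else vocab * d_model  # untied output head
    return embed + head + n_layers * per_layer

# A compact configuration versus a deeper one at the same width:
small = decoder_params(vocab=8192, d_model=256, n_layers=4, n_heads=4)
deep = decoder_params(vocab=8192, d_model=256, n_layers=12, n_heads=4)
print(small, deep)  # depth dominates once the embedding table is amortized
```

Under these assumptions, tripling the depth roughly doubles the total parameter count at this width, which is the kind of architecture-to-capacity relationship the abstract links to memorization in compact models.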