The Future of LLM Architectures

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 February 2026

Special Issue Editors


Guest Editor
College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
Interests: large language models; sentiment analysis; quantum cognition; sarcasm detection; affective computing; natural language processing

Guest Editor
Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China
Interests: affective computing; time series mining

Guest Editor
School of Cyber Security, Tianjin University, Tianjin 300072, China
Interests: software testing; model checking; software verification

Special Issue Information

Dear Colleagues,

This Special Issue, entitled “The Future of LLM Architectures”, aims to gather cutting-edge research and visionary perspectives on the rapidly evolving landscape of large language models (LLMs). As LLMs continue to redefine the boundaries of artificial intelligence, this Special Issue will explore the full spectrum of theoretical, methodological, and applied advances shaping the next generation of intelligent systems.

Recent years have witnessed significant advances in the scale, capability, and impact of LLMs. Innovations such as multi-modal architectures, agent-based reasoning frameworks, System I and System II task modeling, and value alignment techniques are transforming how machines perceive, reason, and interact with the world. The integration of advanced fine-tuning strategies, reinforcement learning, and human feedback has turned LLMs from static knowledge repositories into dynamic decision-makers capable of complex inference, adaptation, and creativity.

However, new challenges have arisen: How can we ensure that LLMs exhibit robust reasoning, fairness, and transparency across diverse applications? What are the best practices for evaluating capabilities such as logical reasoning, ethical value alignment, and multi-modal understanding? How can agent-based and modular architectures unlock new levels of intelligence and collaboration? This Special Issue provides a platform for the latest breakthroughs, emerging paradigms, and innovative discussions in the field.

We welcome original research articles, surveys, and position papers on topics including, but not limited to, the following:

  • System I vs. System II task modeling in LLMs;
  • Scalable architectures for advanced reasoning and planning;
  • Fine-tuning and reinforcement learning with human or multi-agent feedback;
  • Multi-modal large models: vision, language, audio, and beyond;
  • Value alignment, ethical and cultural considerations in LLMs;
  • Benchmarking, evaluation, and emergent capabilities analysis;
  • Agent-based LLM frameworks and compositional intelligence;
  • Robustness, interpretability, and trustworthiness of next-generation LLMs;
  • New training methods for efficiency, safety, and continual learning;
  • Application case studies in science, engineering, business, and education.

We invite researchers to contribute and share insights that will shape the future trajectory of LLM architectures and their transformative impact on society.

Dr. Yazhou Zhang
Dr. Xiang Li
Dr. Yao Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • large language models (LLMs)
  • System I and System II reasoning
  • multi-modal LLMs
  • value alignment
  • reinforcement learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers

This special issue is now open for submission.