Complexity of AI

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Complexity".

Deadline for manuscript submissions: 31 August 2026

Special Issue Editors


Guest Editor
School of Physical & Mathematical Sciences, Nanyang Technological University, Singapore
Interests: science of science; complex systems; network science; critical transitions; early warnings; agent-based modeling; econophysics; temporal networks

Guest Editor
1. Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
2. Department of Physics, Faculty of Science, National University of Singapore, Singapore
Interests: complex systems; dynamical systems; deep learning dynamics; network science; complexity theory

Guest Editor
Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
Interests: quantum systems; biological systems; social systems (social-ecological and social-economic); urban systems; health systems; climatic systems; foundation of complex systems

Special Issue Information

Dear Colleagues,

Modern artificial intelligence (AI) models, in particular artificial neural networks, are typically complex systems consisting of a large number of interacting components. As the capabilities of AI advance at an accelerating pace, there is a growing need for a better understanding of their working principles. Complexity science underlies Giorgio Parisi's work, which won the 2021 Nobel Prize in Physics, and provides some of the key theoretical foundations for the AI contributions of John J. Hopfield and Geoffrey Hinton, which won the 2024 Nobel Prize. The field is therefore receiving increasing attention in the pursuit of an understanding of AI models.

As complex systems, AI models typically exhibit phenomena such as phase transitions, emergence (of intelligence), multiple metastable states, chaos, and self-organization. Studying and analyzing AI models from the perspective of complexity science can pave the way for a deeper understanding of how they process information, carry out logical reasoning, and generate new data such as images or text. Such findings can potentially help us improve the capabilities, safety, and efficiency of AI models by untangling their complexities.

This Special Issue is organized in conjunction with the Focused Session entitled “Complexity of AI” at the Asia-Pacific Summer School and Conference on Networks and Complex Systems (APCNCS) 2026, held in Singapore (https://apcncs2026.github.io/). Authors are invited to contribute to both the Special Issue and the Focused Session. We welcome original research exploring the complexity of AI models from diverse perspectives, including methods, theories, applications, and empirical studies.

Topics of interest in this Special Issue include:

  • Emergence and phase transitions in AI/ML models
  • Statistical mechanics of neural networks
  • Dynamics of neural networks training and inference
  • Power laws in AI/ML models
  • Self-organization in neural networks
  • Energy-based models and analysis
  • Reinforcement learning in multi-agent systems
  • Adaptive and causal ML and AI

You may choose our Joint Special Issue in Complexities.

Dr. Siew Ann Cheong
Dr. Ling Feng
Dr. Lock Yue Chew
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • complexity science
  • artificial intelligence
  • neural networks
  • statistical physics
  • phase transitions
  • dynamical systems
  • self-organization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

24 pages, 525 KB  
Article
Compact and Interpretable Neural Networks Using Lehmer Activation Units
by Masoud Ataei, Sepideh Forouzi and Xiaogang Wang
Entropy 2026, 28(2), 157; https://doi.org/10.3390/e28020157 - 31 Jan 2026
Abstract
We introduce Lehmer Activation Units (LAUs), a class of aggregation-based neural activations derived from the Lehmer transform that unify feature weighting and nonlinearity within a single differentiable operator. Unlike conventional pointwise activations, LAUs operate on collections of features and adapt their aggregation behavior through learnable parameters, yielding intrinsically interpretable representations. We develop both real-valued and complex-valued formulations, with the complex extension enabling phase-sensitive interactions and enhanced expressive capacity. We establish a universal approximation theorem for LAU-based networks, providing formal guarantees of expressive completeness. Empirically, we show that LAUs enable highly compact architectures to achieve strong predictive performance under tightly controlled experimental settings, demonstrating that expressive power can be concentrated within individual neurons rather than architectural depth. These results position LAUs as a principled, interpretable, and efficient alternative to conventional activation functions.
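The abstract describes LAUs as aggregation-based activations built on the Lehmer transform. As a rough illustration of the underlying idea only (this is not the authors' implementation; the function name, weighting scheme, and stabilizing epsilon are assumptions), the classical weighted Lehmer mean that such units generalize can be sketched as:

```python
import numpy as np

def lehmer_aggregate(x, p, w=None, eps=1e-8):
    """Weighted Lehmer mean: sum(w * x**p) / sum(w * x**(p - 1)).

    Illustrative sketch: a single parameter p (learnable in an LAU-style
    unit) interpolates between familiar aggregations of a feature vector,
    giving mean-like behavior at p = 1 and max-like behavior as p grows.
    Inputs are assumed positive; eps guards against a zero denominator.
    """
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    num = np.sum(w * x ** p)
    den = np.sum(w * x ** (p - 1)) + eps
    return num / den
```

For example, on the vector [1, 2, 3], p = 1 recovers the arithmetic mean (2.0), while a large p pushes the output toward the maximum (3.0), showing how one smooth parameter blends feature weighting with nonlinearity.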
(This article belongs to the Special Issue Complexity of AI)